id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,543,350 | https://en.wikipedia.org/wiki/Hydroxymethylglutaryl-CoA%20synthase | In biochemistry, hydroxymethylglutaryl-CoA synthase or HMG-CoA synthase is an enzyme which catalyzes the reaction in which acetyl-CoA condenses with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA). This reaction comprises the second step in the mevalonate-dependent isoprenoid biosynthesis pathway. HMG-CoA is an intermediate in both cholesterol synthesis and ketogenesis. This reaction is overactivated in patients with diabetes mellitus type 1 if left untreated, due to prolonged insulin deficiency and the exhaustion of substrates for gluconeogenesis and the TCA cycle, notably oxaloacetate. This results in shunting of excess acetyl-CoA into the ketone synthesis pathway via HMG-CoA, leading to the development of diabetic ketoacidosis.
The 3 substrates of this enzyme are acetyl-CoA, H2O, and acetoacetyl-CoA, whereas its two products are (S)-3-hydroxy-3-methylglutaryl-CoA and CoA.
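Written as an overall reaction, restating the substrates and products listed above:
acetyl-CoA + H2O + acetoacetyl-CoA → (S)-3-hydroxy-3-methylglutaryl-CoA + CoA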
In humans, the protein is encoded by the HMGCS1 gene on chromosome 5.
Classification
This enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer.
Nomenclature
The systematic name of this enzyme class is acetyl-CoA:acetoacetyl-CoA C-acetyltransferase (thioester-hydrolysing, carboxymethyl-forming). Other names in common use include (S)-3-hydroxy-3-methylglutaryl-CoA acetoacetyl-CoA-lyase, (CoA-acetylating), 3-hydroxy-3-methylglutaryl CoA synthetase, 3-hydroxy-3-methylglutaryl coenzyme A synthase, 3-hydroxy-3-methylglutaryl coenzyme A synthetase, 3-hydroxy-3-methylglutaryl-CoA synthase, 3-hydroxy-3-methylglutaryl-coenzyme A synthase, beta-hydroxy-beta-methylglutaryl-CoA synthase, HMG-CoA synthase, acetoacetyl coenzyme A transacetase, hydroxymethylglutaryl coenzyme A synthase, and hydroxymethylglutaryl coenzyme A-condensing enzyme.
Mechanism
HMG-CoA synthase contains an important catalytic cysteine residue that acts as a nucleophile in the first step of the reaction: the acetylation of the enzyme by acetyl-CoA (its first substrate) to produce an acetyl-enzyme thioester, releasing the reduced coenzyme A. The subsequent nucleophilic attack on acetoacetyl-CoA (its second substrate) leads to the formation of HMG-CoA.
Biological role
This enzyme participates in 3 metabolic pathways: synthesis and degradation of ketone bodies, valine, leucine and isoleucine degradation, and butanoate metabolism.
Species distribution
HMG-CoA synthase occurs in eukaryotes, archaea, and certain bacteria.
Eukaryotes
In vertebrates, there are two different isozymes of the enzyme (cytosolic and mitochondrial); in humans the cytosolic form has only 60.6% amino acid identity with the mitochondrial form. HMG-CoA synthase is also found in other eukaryotes such as insects, plants, and fungi.
Cytosolic
The cytosolic form is the starting point of the mevalonate pathway, which leads to cholesterol and other sterol and isoprenoid compounds.
Mitochondrial
The mitochondrial form is responsible for the biosynthesis of ketone bodies. The gene for the mitochondrial form of the enzyme has three sterol regulatory elements in the 5' flanking region. These elements are responsible for decreased transcription of the message responsible for enzyme synthesis when dietary cholesterol is high in animals: the same is observed for 3-hydroxy-3-methylglutaryl-CoA reductase and the low density lipoprotein receptor.
Bacteria
In bacteria, isoprenoid precursors are generally synthesised via an alternative, non-mevalonate pathway; however, a number of Gram-positive pathogens utilise a mevalonate pathway involving HMG-CoA synthase that is parallel to that found in eukaryotes.
Structural studies
As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes , , , and .
External links
References
EC 2.3.3
Protein families
Human proteins | Hydroxymethylglutaryl-CoA synthase | [
"Biology"
] | 1,013 | [
"Protein families",
"Protein classification"
] |
14,543,886 | https://en.wikipedia.org/wiki/LPAR3 | Lysophosphatidic acid receptor 3 also known as LPA3 is a protein that in humans is encoded by the LPAR3 gene. LPA3 is a G protein-coupled receptor that binds the lipid signaling molecule lysophosphatidic acid (LPA).
Function
This gene encodes a member of the G protein-coupled receptor family, as well as the EDG family of proteins. This protein functions as a cellular receptor for lysophosphatidic acid and mediates lysophosphatidic acid-evoked calcium mobilization. This receptor couples predominantly to G(q/11) alpha proteins.
Evolution
Paralogues
Source:
LPAR1
LPAR2
S1PR1
S1PR3
S1PR4
S1PR2
S1PR5
CNR1
GPR3
MC5R
GPR6
GPR12
MC4R
CNR2
MC3R
MC1R
MC2R
GPR119
See also
Lysophospholipid receptor
References
Further reading
External links
G protein-coupled receptors | LPAR3 | [
"Chemistry"
] | 222 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,544,308 | https://en.wikipedia.org/wiki/Glossary%20of%20sheep%20husbandry | The raising of domestic sheep has occurred in nearly every inhabited part of the earth, and the variations in cultures and languages which have kept sheep has produced a vast lexicon of unique terminology used to describe sheep husbandry.
Terms
Below are a few of the more common terms.
A–C
Backliner – an externally applied medicine, applied along the backline of a freshly shorn sheep to control lice or other parasites. In the British Isles called pour-on.
Bale – a wool pack containing a specified weight of pressed wool as regulated by industry authorities.
Band – a flock with a large number of sheep, generally 1000, which graze on rangeland.
Bell sheep – a sheep (usually a rough, wrinkly one) caught by a shearer, just before the end of a shearing run.
Bellwether – originally an experienced wether given a bell to lead a flock; now mainly used figuratively for a person acting as a lead and guide.
Black wool – Any wool that is not white, but not necessarily black.
Board – the floor where the shearing stands are in a wool shed.
Bolus – an object placed in the reticulum of the rumen, remaining there for some time or permanently. Used for long-term administration of medicines, or as a secure location for an electronic marking chip.
Bottle lamb or cade lamb – an orphan lamb reared on a bottle. Also poddy lamb or pet lamb.
Boxed – when different mobs of sheep are mixed.
Break – a marked thinning of the fleece, producing distinct weakness in one part of the staple.
Broken-mouth or broken-mouthed – a sheep which has lost or broken some of its incisor teeth, usually after the age of about six years.
Broad – wool which is on the strong side for its quality number, or for its type.
Broomie – a roustabout in a shearing shed.
Butt – an underweight bale of greasy wool in a standard wool pack.
Callipyge – a natural genetic mutation that produces extremely muscled hindquarters in sheep. Affected lambs are found in the US, and their meat lacks tenderness.
Cast – unable to regain footing, possibly due to lying in a hollow with legs facing uphill and/or having a heavy fleece. Also see riggwelter.
CFA or cast for age – sheep culled because of their age. Also see cull ewe, killer.
Chilver – a female lamb.
Clip – all the wool from a flock (in Australian Wool Classing).
Clipping – cutting off the wool: see shearing and rooing.
Comeback – the progeny of a mating of a Merino with a British longwool sheep.
Creep feeding – allowing lambs access to special, high-quality feed before weaning.
Crimp – the natural wave formation seen in wool. Usually the closer the crimps, the finer the wool.
Crutching – shearing parts of a sheep (especially the hind end of some woollier breeds such as Merino), to prevent fly-strike. Also see dagging.
Cull ewe – a ewe no longer suitable for breeding, and sold for meat. Also see killer.
Cut-out – the completion of shearing a flock.
D–F
Dags – clumps of dried dung stuck to the wool of a sheep, which may lead to fly-strike. (Hence "rattle your dags!", meaning "hurry up!", especially used in New Zealand.)
Dagging – clipping off dags. Also see crutching.
Devil's Grip – a serious conformation defect, appearing as a depression behind the withers.
Dewlap – the upper fold under the neck of a Merino sheep.
Dipping – immersing sheep in a plunge or shower dip to kill external parasites. Backliners are now replacing dipping.
Docking – removing the tail of a sheep to prevent fly-strike. See also crutching, dagging.
Downs – breeds of sheep belonging to the short wool group.
Draft ewe – a ewe too old for rough grazing (such as moorland), drafted (selected) out of the flock to move to better grazing, usually on another farm. Generally spelt "draft", but in the British Isles either as "draft" or "draught".
Drench – an oral veterinary medicine administered by a drenching gun (usually an anthelmintic).
Driving or droving – walking animals from one place to another.
Dry Sheep Equivalent – (DSE) is a standard unit used in Australia to compare energy requirements between different classes and species of animals. A DSE is the amount of energy required to maintain a 45 to 50 kg Merino wether.
Eaning – the act of giving birth in sheep. See lambing.
Earmark – a distinctive mark clipped out of the ear (or sometimes a tattoo inside the ear) to denote ownership and/or age.
Ear tag – plastic or metal tag clipped to ear, with identification number, name or electronic chip.
Ewe – a female sheep capable of producing lambs. In areas where "gimmer" or similar terms are used for young females, may refer to a female only after her first lamb. In some areas yow.
Eye dog – a type of sheepdog (qv) which uses eye contact as a primary technique to herd sheep. See also huntaway.
Fleece – the wool covering of a sheep.
Flock – a group of sheep (or goats). All the sheep on a property (in Australian Wool Classing); also all the sheep in a region or country. Sometimes called herd or mob.
Flushing – providing especially nutritious feed in the few weeks before mating to improve fertility, or in the period before birth to increase lamb birth-weight.
Flushing (eggs/embryo) – removing unfertilised or fertilised egg from an animal; often as part of an embryo transfer procedure.
Fly strike or myiasis – infestation of the wool, skin and eventually flesh with blowfly or botfly maggots, rapidly causing injury or death. Usually (but not always) occurs where the wool has become contaminated by dung or urine, or at the site of an injury. Also see crutching, dagging, Mulesing.
Fold (or sheepfold) – a pen in which a flock is kept overnight to keep the sheep safe from predators, or to allow the collection of dung for manure.
Folding – confining sheep (or other livestock) onto a restricted area for feeding, such as a temporarily fenced part of a root crop field, especially when done repeatedly onto a sequence of areas.
Foot rot – infectious pododermatitis, a painful hoof disease commonly found in sheep (also goats and cattle), especially when pastured on damp ground.
G–K
Gimmer – a young female sheep, usually before her first lamb (especially used in the north of England and Scotland). Also theave.
Graziers' alert or graziers' warning – a cold-weather warning issued by the weather bureau to sheep graziers.
Greasy – a sheep shearer.
Greasy wool – wool as it has been shorn from the sheep and therefore not yet washed or cleaned. Also see lanolin.
Guard llama – a llama (usually a castrated male) kept with sheep as a guard. The llama will defend the flock from predators such as foxes and dogs.
Gummer – a sheep so old that it has lost all of its teeth.
Hefting (or heafing) – the instinct in some breeds of keeping to a certain heft (a small local area) throughout their lives. Allows different farmers in an extensive landscape such as moorland to graze different areas without the need for fences, each ewe remaining on her particular area. Lambs usually learn their heft from their mothers. Also known as 'Hoofing' in some areas like North Yorkshire.
Hogget, hogg or hog – a young sheep of either sex from about 9 to 18 months of age (until it cuts two permanent teeth); a yearling sheep, as yet unshorn. Also the meat of a hogget. Also teg, old-season lamb, shearling.
Hoof-shears – implement similar to secateurs, used to trim the hoofs of sheep.
Huntaway – a type of sheepdog (qv) which uses barking as a primary technique to herd sheep. Named for a New Zealand breed of dog. See also eye dog.
In lamb – pregnant.
Joining – the placing of rams with ewes for mating (see tupping).
Ked, or sheep ked – Melophagus ovinus, a species of louse-fly, a nearly flightless biting fly infesting sheep.
Kemp – a short, white, hollow, hairy fibre usually found about the head and legs of sheep.
Killer – a sheep that has been selected for slaughter on an Australian property. Also see cull ewe.
L–N
Lamb – a young sheep in its first year. In many eastern countries there is a looser use of the term which may include hoggets. Also the meat of younger sheep.
Lambing – the process of giving birth in sheep. Also the work of tending lambing ewes (shepherds are said to lamb their flocks).
Lambing jug or lambing pen – a small pen to confine ewes and newly born lambs.
Lamb marking – the work of earmarking, docking and castration of lambs.
Lambing percentage – the number of lambs successfully reared in a flock compared with the number of ewes that have been mated – effectively a measure of the success of lambing and the number of multiple births. May vary from around 100% in a hardy mountain flock (where a ewe may not be able to rear more than one lamb safely), to 150% or more in a well-fed lowland flock (whose ewes can more easily support twins or even triplets).
Lamb's fry – lamb's liver served as a culinary dish.
Lamb fries – lamb testicles when served as a culinary dish.
Lanolin – a thick yellow greasy substance in wool, secreted by the sheep's skin. Also called wool fat, wool wax, wool grease, adeps lanae or yolk. Extracted from raw wool and used for various purposes.
Livestock guardian dog – a dog bred and trained to guard sheep from predators such as bears, wolves, people or other dogs. Usually a large type of dog, often white and woolly, apparently to allow them to blend in with the sheep. Sometimes given a spiked collar to prevent attack by wolves or dogs. Does not usually muster the sheep. Sometimes called a sheepdog – but also see separate entry for this.
Lug mark – local term in Cumbria for earmark.
Marking knife – a knife with a clamp or hook made for lamb marking.
Myiasis – see fly strike.
Micron – one millionth of a metre, used as the measure of wool fibre diameter. Term used in preference to "micrometre", the SI name for the same unit.
Mob – a group or cohort of sheep of the same breed that have run together under similar environmental conditions since the previous shearing (in Australian Wool Classing).
Monorchid – a male mammal with only one descended testicle, the other being retained internally. Monorchid sheep are less fertile than full rams, but have leaner meat than wethers.
Mule – a type of cross-bred sheep, both hardy and suitable for meat (especially in northern England). Usually bred from a Bluefaced Leicester ram on hardy mountain ewes such as Swaledales. May be qualified according to the female parent: for example a Welsh Mule is from a Blue-faced Leicester ram and a Welsh Mountain ewe.
Mulesing – a practice in Australia of cutting off wrinkles from the crutch area of Merinos, to prevent fly strike. Controversial, and illegal in some parts of the world. Named after a Mr Mules.
Mustering – the round up of livestock for inspection or other purposes.
Mutton – the meat of an older ewe or wether. May also refer to goat meat in eastern countries. Derived from the Anglo-Norman French word mouton ("sheep").
NSM – not station mated. A term used in sale advertisements indicating that those ewes have not been mated.
O–R
Off shears – sheep have been recently shorn.
Old-season lamb – a lamb a year old or more. Also hogget, shearling, teg.
Orf, scabby mouth or contagious ecthyma – a highly contagious viral disease of sheep (and goats) attacking damaged skin areas around the mouth and causing sores, usually affecting lambs in their first year of life.
Plain bodied – a sheep that has relatively few body wrinkles.
Poddy lamb, bottle lamb or pet lamb – an orphan lamb reared on a bottle. Also cade lamb, or placer.
Pour-on – see backliner.
Raddle – coloured pigment used to mark sheep for various reasons, such as to show ownership, or to show which lambs belong to which ewe. May be strapped to the chest of a ram, to mark the backs of ewes he mates (different rams may be given different colours). Also a verb ("that ewe's been raddled"). Also ruddy.
Ram – an uncastrated adult male sheep. Also tup.
Riggwelter – a sheep that has fallen onto its back and is unable to get up (usually because of the weight of its fleece).
Ring – a mob of sheep moving around in a circle.
Ringer – the top shearer in a shearing gang.
Ringing – the removal of a circle of wool from around the pizzle of a male sheep.
Rise – new growth of wool in spring beneath the previous year's fleece. Shearing is easier through this layer.
Rooing – removing the fleece by hand-plucking. Done once a year in late spring, when the fleece begins to moult naturally, especially in some breeds, such as Shetlands.
Rouseabout – (often abbreviated to 'rousie'), shedhands who pick up fleeces after they have been removed during shearing. See also broomie above.
Ruddy – local Cumbrian term for raddle.
S
Scab or sheep scab – a type of mange in sheep, a skin disease caused by attack by the sheep scab mite Psoroptes ovis, a psoroptid mite.
Scabby mouth – see orf above.
Scrapie – a wasting disease of sheep and goats, a transmissible spongiform encephalopathy (TSE, like BSE of cattle) and believed to be caused by a prion. Efforts have been made in some countries to breed for sheep genotypes resistant to scrapie.
Shearing – cutting off the fleece, normally done in two pieces by skilled shearers. A sheep may be said to have been either sheared or shorn, depending on dialect. Also clipping.
Shearling – a yearling sheep before its first shearing. Also hogget, old-season lamb, teg.
Sheepdog or shepherd dog – a dog used to move and control sheep, often very highly trained. Other types of dog may be used just to guard sheep (see livestock guarding dog), and these are sometimes also called sheepdogs. See also eye dog and huntaway.
Sheep – the species, or members of it. The plural is the same as the singular, and it can also be used as a mass noun. Normally used of individuals of any age, but in some areas only for those of breeding age.
Sheepwalk – an area of rough grazing occupied by a particular flock or forming part of a particular farm.
Shepherd – a stockperson or farmer who looks after sheep while they are in the pasture.
Shepherding – the act of shepherding sheep, or sheep husbandry more generally.
Shornie – a freshly shorn sheep.
Shepherd's crook – a staff with a hook at one end, used to catch sheep by the neck or leg (depending on type).
SIL – Scanned In Lamb
Slink – a very young lamb.
Springer – a ewe close to lambing.
Stag – a ram castrated after about 6 months of age.
Staple – a group of wool fibres that form a cluster or lock.
Store – a sheep (or other meat animal) in good average condition, but not fat. Usually bought by dealers to fatten for resale.
Sucker – an unweaned lamb.
T–Z
Teg – a sheep in its second year. Also hogget, old-season lamb, shearling.
Theave or theaf (plural of either: theaves) – a young female sheep, usually before her first lamb (used especially in lowland England). Also gimmer.
Top knot – wool from the forehead or poll of a sheep.
Tup – an alternative term for ram.
Tupping – mating in sheep, or the mating season (autumn, for a spring-lambing flock).
Twinter – a sheep (or ox/horse) that has lived through two winters.
Twotooth – South England/Cornish word for an old sheep (Pronounced Twotuth) – usually an old animal with only the two front teeth left.
Weaner – a young animal that has been weaned from its mother, until it is about a year old.
Wether – a castrated male sheep (or goat).
Wigging – the removal of wool from around a sheep's eyes to prevent wool-blindness.
Wool-blindness – when excessive wool growth interferes with the normal sight of a sheep.
Woolcock – a husband of sheep
Wool-grease – see lanolin.
Wool pack – a standard-sized woven nylon container manufactured to industry specifications for the transportation of wool.
Woolsack – a ceremonial cushion used by the Lord Speaker of the UK House of Lords, filled with wool to symbolise the importance of the wool trade for the prosperity of the country.
Yoke – two crossed pieces of timber or a forked branch fixed to the neck of a habitually straying sheep in an attempt to prevent it breaking through hedges and fences.
Yolk – see lanolin.
Yow – local form of ewe in some areas; Cornish farmers use yow.
See also
Domestic sheep
Sheep husbandry
Yan Tan Tethera (numbers for counting sheep)
References
External links
A Glossary of sheep terms from the American Sheep Industry Association
Sheep husbandry
Wikipedia glossaries using unordered lists | Glossary of sheep husbandry | [
"Biology"
] | 3,887 | [
"Glossaries of zoology",
"Glossaries of biology"
] |
14,544,789 | https://en.wikipedia.org/wiki/Tele-epidemiology | Tele-epidemiology is the application of telecommunications to epidemiological research and application, including space-based and internet-based systems.
Tele-epidemiology applies satellite communication systems to investigate or support investigations of infectious disease outbreaks, including disease reemergence. In this application, space-based systems (e.g. GIS, GPS, SPOT5) use environmental indices and in-situ data (e.g. NDVI, Meteosat, Envisat) to assess health risks to human and animal populations. Space-based applications of tele-epidemiology extend to health surveillance and health emergency response.
Internet-based applications of tele-epidemiology include sourcing of epidemiological data in generating internet reports and real-time disease mapping. This entails gathering and structuring epidemiological data from news and social media outlets, and mapping or reporting this data for application with research or public health organizations. Examples of such applications include HealthMap and ProMED-mail, two web-based services that map and e-mail global cases of disease outbreak, respectively.
The United Nations Office for Outer Space Affairs often refers generally to telehealth for applications linking communication and information technologies, such as telesurgery and telenursing, to healthcare administration.
Clinical applications
Provides real-time information about disease prevalence across populations to public health, physicians and citizens, globally.
Diminishes communicable disease risk by mobilizing local medical efforts to respond to disease outbreaks, especially in vulnerable populations.
Enhances the ability to manage the proliferation of communicable pathogens.
Can be used as a management tool in public health to discover, assess, and act on epidemiological data. For example, gathering and identifying disease-relevant risk factors helps to identify treatment interventions and implement prevention strategies that could lessen the effects of an outbreak on the general population and improve clinical outcomes at the individual patient level.
Could prove useful to commerce, travelers, public health agencies and federal governments, and diplomatic efforts.
Public health agencies and federal governments might take advantage of tele-epidemiology for predicting the propagation of communicable diseases.
Provides users and governments with information for early warning systems.
Non-clinical applications
Applications of tele-epidemiology are not yet used frequently in clinical settings.
The use of space-based systems is important for research and public health efforts, though these activities are driven largely by secondary or tertiary organizations, not the public health agencies themselves.
Relevant data can be used for research and is widely accessible through existing internet outlets.
Data can be disseminated through internet reports of disease outbreaks for real-time disease mapping for public use. The application of HealthMap and ProMED-mail demonstrates considerable global health utility and accessibility for users from both the public and private domains.
Internet-based platforms can be used by the general public to determine local and international disease outbreaks. Consumers can also contribute their own epidemiologically relevant data to these services.
Advantages
Space-based tele-epidemiological initiatives, using satellites, are able to gather environmental information relevant to tracking disease outbreaks. S2E, a French multidisciplinary consortium on spatial surveillance of epidemics, has used satellites to garner relevant information on vegetation, meteorology and hydrology. This information, in concert with clinical data from humans and animals, can be used to construct predictive mathematical models that may allow for the forecasting of disease outbreaks.
Web-based tele-epidemiological services are able to aggregate information from several disparate sources to provide information on disease surveillance and potential disease outbreaks. Both ProMED-mail and Healthmap collect information in several different languages to gather worldwide epidemiological information. These services are both free and allow both health care professionals and laypeople to access reliable disease outbreak information from around the world and in real-time.
Disadvantages
Space-based methodologies require investment of resources for the collection and management of epidemiological information; as such, these systems may not be affordable or technologically feasible for developing countries that need assistance tracking disease outbreaks. Further, the success of space-based methodologies is predicated on the collection of accurate ground-based data by qualified public health professionals. This may not be possible in developing countries because they lack basic laboratory and epidemiological resources.
Web-based tele-epidemiological initiatives have a unique set of challenges that are different from those experienced by space-based methodologies. HealthMap, in an effort to provide comprehensive worldwide information, contains information from a variety of sources including eyewitness accounts, online news and validated official reports. As a result, the site necessarily relies upon third-party information, for the veracity of which it is not liable.
See also
Landscape epidemiology
Satellite imagery
Telehealth
Telematics
Telemedicine
Telenursing
Teleophthalmology
References
Epidemiology
Human geography
Telehealth
Health informatics | Tele-epidemiology | [
"Biology",
"Environmental_science"
] | 1,041 | [
"Health informatics",
"Epidemiology",
"Environmental social science",
"Human geography",
"Medical technology"
] |
14,544,897 | https://en.wikipedia.org/wiki/Gunta | The gunta or guntha is a measure of area used in the Indian subcontinent, predominantly used in some South Asian countries. This unit is typically used to measure the size of a piece of land.
In India
1 anna = 7.5624 square yards = 6.3232 square metres
1 gunta = 120.999 square yards = 101.1714 square metres = 16 annas
1 guntha (R) = 33 feet × 33 feet = 1,089 square feet
40 gunthas = 1.0 acre
4 acre = 1 fg
In Pakistan
Other units were used alongside Imperial measures:
1 anna = 20.16 sq yd
6 anna = 1 guntha = 120 square yard
4 guntha = 1 jareeb = 484 square yard
4 jareeb = 1 kanee = 1936 square yard
10 jareeb = 1 acre = 4840 square yard
25 acres = 1 marabba
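As an illustrative sketch (not part of the original article), the Indian conversions above can be expressed programmatically; the constants are taken directly from the figures listed in this section, while the helper names are hypothetical:

```python
# Illustrative conversion helpers for the gunta, using the values stated above.
# Constants come from this section; the function names are hypothetical.

SQ_M_PER_GUNTA = 101.1714   # "1 gunta = ... 101.1714 square metres"
ANNAS_PER_GUNTA = 16        # "1 gunta = ... 16 annas"
GUNTAS_PER_ACRE = 40        # "40 gunthas = 1.0 acre"

def guntas_to_square_metres(guntas: float) -> float:
    """Convert an area in guntas to square metres."""
    return guntas * SQ_M_PER_GUNTA

def guntas_to_acres(guntas: float) -> float:
    """Convert an area in guntas to acres."""
    return guntas / GUNTAS_PER_ACRE

if __name__ == "__main__":
    plot = 10  # a 10-gunta plot
    print(f"{plot} guntas = {guntas_to_square_metres(plot):.2f} sq m "
          f"= {guntas_to_acres(plot):.3f} acre")
    # Expected output: 10 guntas = 1011.71 sq m = 0.250 acre
```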
See also
Conversion of units
Acre-foot
Acre
Acre (Scots)
Hectare
References
Units of area
Customary units in India | Gunta | [
"Mathematics"
] | 195 | [
"Quantity",
"Units of area",
"Units of measurement"
] |
14,545,229 | https://en.wikipedia.org/wiki/Formaldehyde%20transketolase | In enzymology, a formaldehyde transketolase () is an enzyme that catalyzes the chemical reaction
D-xylulose 5-phosphate + formaldehyde ⇌ glyceraldehyde 3-phosphate + glycerone
Thus, the two substrates of this enzyme are D-xylulose 5-phosphate and formaldehyde, whereas its two products are glyceraldehyde 3-phosphate and glycerone.
This enzyme belongs to the family of transferases, specifically those transferring aldehyde or ketonic groups (transaldolases and transketolases, respectively). The systematic name of this enzyme class is D-xylulose-5-phosphate:formaldehyde glycolaldehydetransferase. This enzyme is also called dihydroxyacetone synthase. This enzyme participates in methane metabolism. It employs one cofactor, thiamin diphosphate.
References
EC 2.2.1
Thiamine enzymes
Enzymes of unknown structure | Formaldehyde transketolase | [
"Chemistry"
] | 216 | [
"Chemical hazards",
"Formaldehyde"
] |
14,545,515 | https://en.wikipedia.org/wiki/Power%20Sword | The Power Sword, also referred to as the Sword of Power or the Sword of Grayskull, is a fictional sword from Mattel's Masters of the Universe toy line. In the original mini-comics produced with the toyline in 1981, the Power Sword was a mystical object split into two parts, which Skeletor tries to obtain and put together in order to gain control over Castle Grayskull. In these early stories, He-Man uses an axe and a shield, rather than the magical sword.
With the arrival of the 1983 He-Man and the Masters of the Universe animated series, the Power Sword became the means by which Prince Adam transforms into He-Man, and his pet tiger Cringer into Battle Cat. The weapon kept the same basic shape during most of the 1980s, but then it was radically redesigned twice: for the 1990 series The New Adventures of He-Man, and the 2002 remake, He-Man and the Masters of the Universe.
In addition to the action-figure-sized Power Sword packaged with the character, full-size "He-Man Power Swords" were a favorite Christmas gift for decades, allowing children to role-play the barbarian hero. Some of these kid-sized Power Swords have been electronic, making a variety of battle sounds. Power Swords have also been sold as accessories for He-Man Halloween costumes. In the unsuccessful 1989 relaunch of the toy line, the electronic Power Sword reportedly sold better than the entire rest of the toy line put together.
Early appearances
The Power Sword was a late addition in the creation of the Masters of the Universe toy line; in the concept art, He-Man battled with an axe and shield, and a thin, unimpressive sword was wielded by the flamboyant Prince Adam (at that time a separate character, and not He-Man's alternate identity).
When the initial Mattel toy line was introduced in 1982, the He-Man and Skeletor figures each came with half of a plastic sword which could be joined into one "complete" sword, corresponding to the storyline in the included mini-comic. Together, the combined sword was used as a key to open the jawbridge to the Castle Grayskull playset. According to the original storyline, the Goddess (an early name for the Sorceress) had split the sword into two and scattered the pieces, in order to protect the castle and its source of universal power.
The story was told in the He-Man and the Power Sword illustrated mini-comic, which was packaged with the original He-Man action figure. Skeletor's goal in the book is to acquire the other half of the sword hidden inside Castle Grayskull in order to obtain the sword's total power, adding that "the magic fires, created by ancient scientists and sorcerers, will blaze again" once the two halves are joined. The specific purpose of the quest is also made clear: the Power Sword can be used to open a hole in the dimensional wall in order to bring reinforcements from Skeletor's dimension of origin, which would allow Skeletor to conquer Eternia's dimension. Once the two halves of the Power Sword are joined, Skeletor is able to use the sword to command various objects to attack He-Man. However, the spell is broken once the Sorceress splits the Power Sword into two halves again, hiding them and making the Power Sword the only key that can open the castle's Jaw-Bridge when inserted into an enchanted lock.
The next illustrated book, King of Castle Grayskull, reveals where the two halves have been hidden: one at Eternia's "highest point", the other beneath its "hardest rock." Whoever finds them can claim the throne of Castle Grayskull and the "secrets of the universe". The "highest point" turns out to be the top of Stratos's mountain, while the "hardest rock" is the rock where He-Man built his home in the previous book. As expected, the Jawbridge opens once the two halves have been inserted into the lock, but then Skeletor loses the sword again in battle. The book ends with the Spirit of the Castle sending the two halves into another dimension, where Skeletor is not expected to find them easily. The Power Sword is not featured in the last two books of the first series, Battle in the Clouds and The Vengeance of Skeletor.
In How He-Man Mastered the Universe: Toy to Television to the Big Screen, Brian C. Baer writes:
Filmation cartoon
When Filmation produced the cartoon He-Man and the Masters of the Universe in 1983, the producers worried that children wouldn't identify with a wild, axe-wielding barbarian character. Based on their experience with The Kid Super Power Hour with Shazam! in 1981, Filmation knew that kids would relate to a vulnerable, child-like figure who could turn super-powerful with a prop and a magic word. For He-Man, the protagonist became Prince Adam, who could use a newly restored Power Sword to turn into the muscle-bound hero.
In the cartoon, the Sorceress of Grayskull gives Prince Adam the Power Sword, which allows him to transform into He-Man, "the Most Powerful Man in the Universe", and his cowardly pet tiger, Cringer, into the fierce and brave Battle Cat. Prince Adam begins his war-cry by holding the Power Sword above his head with his right hand, proclaiming, "By the Power of Grayskull...." whereupon mystical lightning strikes the Power Sword and transforms him; He-Man then seizes the tip of the Power Sword's blade and completes the war-cry, "...I HAVE THE POWER!"
While the Power Sword is the key to unlocking He-Man's strength, it is rarely used in battle; he mostly uses it to cut objects and deflect energy blasts.
In the episode "The Problem with Power", when He-Man is fooled into thinking that he's inadvertently killed someone, he raises the sword and surrenders the power of Castle Grayskull, transforming back into Prince Adam by proclaiming, "Let the power return!"
Princess Adora/She-Ra, He-Man's twin sister in the cartoon, has a companion Power Sword, called the Sword of Protection, which is identical except that it has a glowing jewel in the hilt. The jewel allows Princess Adora to channel her powers, as her sword is revealed to be a clone of He-Man's sword crafted by the Goddess of Grayskull. She transforms into She-Ra by saying, "For the honor of Grayskull...I am She-Ra!"
Marvel Star Comics' 1986 Masters of the Universe comic book adaptation featured a storyline about an alternate timeline caused by the Power Sword being transported thirty years into the future, and is wielded by a hero named Clamp Champ. The 1989 newspaper comic strip adaptation also featured the Power Sword prominently, used in the iconic transformation in the first strip. A 1989 story, "When You Need an Extra Something", featured a battle between He-Man and Evil-Lyn for possession of the Sword.
Live action movie
In the 1987 live action film, Masters of the Universe, the Power Sword is renamed the Sword of Grayskull. In the cartoon, He-Man engaged in actual sword fights very rarely, but the film producers knew that the character was closely associated with the sword, which meant that it should feature prominently in the movie's finale.
The New Adventures of He-Man
The original toy line was cancelled in 1987 after drastically declining sales. Mattel attempted to relaunch the character just two years later, redesigning the character. He-Man was slimmed down to a more realistic musculature, and transported into the distant future for science-fiction adventures on the alien planet Primus. Along with the revamped character, the Power Sword was also redesigned into a more futuristic-looking form, with a green laser blade that could fire bolts of glowing energy.
In 1990, Jetlag Productions produced a new cartoon, The New Adventures of He-Man, to promote the new toy line. In this series, Prince Adam's phrase to transform into He-Man is changed from "By the Power of Grayskull..." to "By the Power of Eternia..." He-Man's sword was a more important element in this version, gaining the ability to fire energy blasts and pulses of magic.
2002 television series
In the animated 2002 reboot, the origins of the Power Sword and Castle Grayskull are again revised. The castle is revealed to be the former home of the ancient warrior King Grayskull, who resembles He-Man but is larger, has longer Viking-like hair, and rides a massive green saber-toothed lion as a steed. The Power Sword is King Grayskull's personal weapon, and after fighting a fatal battle with Hordak, the dying king binds his mystical powers to the weapon. Afterwards his advisers become the Elders who seal the castle, and his wife becomes its guardian, the first Sorceress. Therefore, when Prince Adam holds up the sword and calls out "by the power of Grayskull" he is calling on the energies of King Grayskull himself, rather than those of the namesake castle.
The sword was heavily redesigned for the new cartoon but with a much more complex and mechanized look. When held by Prince Adam, it appears smaller. However, during the transformation sequence, the hilt pivots on an axis and changes shape, taking a new form when it is in He-Man's hands, and is more explicitly shown growing in size in the revised transformation sequence from the second season. In the series finale, it is shown that an alternate mode can be accessed wherein the blade splits in the middle and opens to reveal another emerald blade inside. The sword then appears to be two fangs (the blade) and a snake's tongue (the emerald blade). This mode of the sword was used to battle Serpos, the giant snake deity that was imprisoned in Snake Mountain.
Also in this series, Skeletor possesses twin swords that can be combined into one larger sword, a reference to the original concept of the Power Sword(s) from the action figures and minicomics; however, this twin sword has no magical properties. According to designers the Four Horsemen, this was due to their original re-sculpts being intended for a continuation of the original storyline in which Skeletor had obtained both halves of the Power Sword (hence the new Skeletor figure's dual blades with clear "good" and "evil" hilt designs), necessitating a new sword to be built by Man-At-Arms and endowed with the properties of the original by the Sorceress. However, Mattel decreed that they wished to reboot the continuity for a new generation of children, and thus the "new" Power Sword design became the "original" version for the new continuity.
Sword of Protection
The Sword of Protection is the weapon wielded by Adora, Prince Adam's twin sister, and is used in her transformation into the heroic She-Ra and Spirit's into Swift Wind. Instead of the war-cry "By the Power of Grayskull," Adora's transformation is triggered by calling "For the Honor of Grayskull." It is identical in overall design to the Sword of Power, with one exception: the Sword of Protection has a jewel embedded in the hilt.
The jewel is the key to such powers of the Sword of Protection as Adora's transformation; if it is damaged, she loses her ability to transform into She-Ra, as seen in the episode "The Stone in the Sword." The stone, which was created by the Goddess of Grayskull, allows Adora/She-Ra to channel all the powers of Grayskull if needed. She-Ra's sword is discovered to be a direct clone of He-Man's, as the Goddess felt that Adora's destiny would require her to also tap into the powers of Grayskull with her own sword. In addition to being a formidable weapon capable of cutting through most substances or deflecting attacks, the Sword of Protection has the ability to change its shape, a trait not shared by the Sword of Power. She-Ra can change the sword to a variety of weapons or tools through spoken command, varying from a shield or lasso, to a helmet or flaming blade. She-Ra can also use her sword to draw upon the mystical power of the planet Etheria itself, increasing her strength beyond her usual levels.
According to the 2015 DC Comics series He-Man - The Eternity War, the Sword of Protection was forged in case the Sword of Power fell into the wrong hands or the wielder of it became corrupted.
In Netflix's She-Ra and the Princesses of Power, the Sword of Protection is an amalgam of technology and magic. Created by the First Ones, the sword has had previous wielders, all of whom were able to transform into She-Ra, suggesting that She-Ra is more of a title than an individual.
The sword is capable of interacting with other pieces of First Ones' tech, projecting bolts of energy, and transforming animals in a manner similar to Swift Wind's transformation.
Further reading
Mastering the Universe: He-Man and the Rise and Fall of a Billion-Dollar Idea by Roger Sweet and David Wecker, Emmis Books (2005)
The Art of He-Man and the Masters of the Universe, Dark Horse Books (2015)
He-Man and the Masters of the Universe: A Character Guide and World Compendium, Dark Horse Books (2017)
References
External links
How He-Man's Sword Retcons A Story From The Toy Line
Fantasy weapons
Fictional elements introduced in 1982
Fictional swords
Magic items
Masters of the Universe | Power Sword | [
"Physics"
] | 2,882 | [
"Magic items",
"Physical objects",
"Matter"
] |
14,546,023 | https://en.wikipedia.org/wiki/Pegvisomant | Pegvisomant, sold under the brand name Somavert, is a growth hormone receptor antagonist used in the treatment of acromegaly. It is primarily used if the pituitary gland tumor causing the acromegaly cannot be controlled with surgery or radiation, and the use of somatostatin analogues is unsuccessful, but is also effective as a monotherapy. It is delivered as a powder that is mixed with water and injected under the skin.
Medical uses
Pegvisomant is indicated for the treatment of adults with acromegaly.
Side effects
Side effects of pegvisomant include reactions at the injection site, swelling of the limbs, chest pain, hypoglycemia, nausea and hepatitis.
Discovery
Pegvisomant was discovered at Ohio University in 1987 by Distinguished Professor John Kopchick and graduate student Wen Chen at the Edison Biotechnology Institute. After completing clinical trials, it was approved for the treatment of acromegaly by the FDA in 2003 and marketed by Pfizer.
Structure
Pegvisomant is a protein containing 191 amino acid residues to which several polyethylene glycol polymers have been covalently bound in order to slow clearance from the blood. The protein is a modified version of human growth hormone designed to bind to and block the growth hormone receptor. It is manufactured using genetically modified E. coli bacteria.
Mechanism of action
Pegvisomant blocks the action of growth hormone on the growth hormone receptor to reduce the production of IGF-1. IGF-1 is responsible for most of the symptoms of acromegaly, and the normalization of its levels can control the symptoms.
Long-term treatment studies with pegvisomant as a monotherapy have shown it to be safe and effective.
Research
Some studies show the potential of using pegvisomant as an anti-tumor treatment for certain types of cancers.
References
External links
Drugs developed by Pfizer | Pegvisomant | [
"Chemistry"
] | 394 | [
"Neurochemistry",
"Receptor antagonists"
] |
14,546,072 | https://en.wikipedia.org/wiki/Melting-point%20depression | This article deals with melting/freezing point depression due to very small particle size. For depression due to the mixture of another compound, see freezing-point depression.
Melting-point depression is the phenomenon of reduction of the melting point of a material with a reduction of its size. This phenomenon is very prominent in nanoscale materials, which melt at temperatures hundreds of degrees lower than bulk materials.
Introduction
The melting temperature of a bulk material is not dependent on its size. However, as the dimensions of a material decrease towards the atomic scale, the melting temperature scales with the material dimensions. The decrease in melting temperature can be on the order of tens to hundreds of degrees for metals with nanometer dimensions.
Melting-point depression is most evident in nanowires, nanotubes and nanoparticles, which all melt at lower temperatures than bulk amounts of the same material. Changes in melting point occur because nanoscale materials have a much larger surface-to-volume ratio than bulk materials, drastically altering their thermodynamic and thermal properties.
Melting-point depression was mostly studied for nanoparticles, owing to their ease of fabrication and theoretical modeling. The melting temperature of a nanoparticle decreases sharply as the particle reaches a critical diameter, usually < 50 nm for common engineering metals.
Melting point depression is a very important issue for applications involving nanoparticles, as it decreases the functional range of the solid phase. Nanoparticles are currently used or proposed for prominent roles in catalyst, sensor, medicinal, optical, magnetic, thermal, electronic, and alternative energy applications. Nanoparticles must be in a solid state to function at elevated temperatures in several of these applications.
Measurement techniques
Two techniques allow measurement of the melting point of nanoparticles. The electron beam of a transmission electron microscope (TEM) can be used to melt nanoparticles. The melting temperature is estimated from the beam intensity, while changes in the diffraction conditions indicate the phase transition from solid to liquid. This method allows direct viewing of nanoparticles as they melt, making it possible to test and characterize samples with a wider distribution of particle sizes. The TEM limits the pressure range at which melting point depression can be tested.
More recently, researchers developed nanocalorimeters that directly measure the enthalpy and melting temperature of nanoparticles. Nanocalorimeters provide the same data as bulk calorimeters; however, additional calculations must account for the presence of the substrate supporting the particles. A narrow size distribution of nanoparticles is required since the procedure does not allow users to view the sample during the melting process. There is no way to characterize the exact size of melted particles during the experiment.
History
Melting point depression was predicted in 1909 by Pawlow. It was directly observed inside an electron microscope in the 1960s–70s for nanoparticles of Pb, Au, and In.
Physics
Nanoparticles have a much greater surface-to-volume ratio than bulk materials. The increased surface-to-volume ratio means surface atoms have a much greater effect on the chemical and physical properties of a nanoparticle. Surface atoms bind in the solid phase with less cohesive energy because they have fewer neighboring atoms in close proximity compared to atoms in the bulk of the solid. Each chemical bond an atom shares with a neighboring atom provides cohesive energy, so atoms with fewer bonds and neighboring atoms have lower cohesive energy. The cohesive energy of the nanoparticle has been theoretically calculated as a function of particle size according to Equation 1.
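A simple size-dependent form consistent with the bulk limit noted below (an assumed illustration, since the exact expression differs between models) is

E(D) = E_b \left(1 - \frac{d}{D}\right)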
Where: D = nanoparticle size
d = atomic size
Eb = cohesive energy of bulk
As Equation 1 shows, the effective cohesive energy of a nanoparticle approaches that of the bulk material as the material extends beyond the atomic size range (D>>d).
Atoms located at or near the surface of the nanoparticle have reduced cohesive energy due to a reduced number of cohesive bonds. An atom experiences an attractive force with all nearby atoms according to the Lennard-Jones potential.
The cohesive energy of an atom is directly related to the thermal energy required to free the atom from the solid. According to Lindemann's criterion, the melting temperature of a material is proportional to its cohesive energy, av (TM=Cav). Since atoms near the surface have fewer bonds and reduced cohesive energy, they require less energy to free from the solid phase. Melting point depression of high surface-to-volume ratio materials results from this effect. For the same reason, surfaces of nanomaterials can melt at lower temperatures than the bulk material.
The theoretical size-dependent melting point of a material can be calculated through classical thermodynamic analysis. The result is the Gibbs–Thomson equation shown in Equation 2.
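In the form commonly quoted in the literature (given here as a standard statement of the Gibbs–Thomson relation rather than a quotation from a specific source), the equation reads

T_M(d) = T_{MB} \left(1 - \frac{4\,\sigma_{sl}}{H_f\,\rho_s\,d}\right)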
Where: TMB = bulk melting temperature
σsl = solid–liquid interface energy
Hf = Bulk heat of fusion
ρs = density of solid
d = particle diameter
Semiconductor/covalent nanoparticles
Equation 2 gives the general relation between the melting point of a metal nanoparticle and its diameter. However, recent work indicates the melting point of semiconductor and covalently bonded nanoparticles may have a different dependence on particle size. The covalent character of the bonds changes the melting physics of these materials. Researchers have demonstrated that Equation 3 more accurately models melting point depression in covalently bonded materials.
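One expression with the quadratic size dependence described here (an assumed illustration rather than the exact published equation) is

T_M(d) = T_{MB} \left(1 - \frac{c}{d^{2}}\right)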
Where: TMB=bulk melting temperature
c=materials constant
d=particle diameter
Equation 3 indicates that melting point depression is less pronounced in covalent nanoparticles due to the quadratic nature of particle size dependence in the melting Equation.
Proposed mechanisms
The specific melting process for nanoparticles is currently unknown. The scientific community currently accepts several mechanisms as possible models of nanoparticle melting. Each of the corresponding models effectively matches experimental data for the melting of nanoparticles. Three of the four models detailed below derive the melting temperature in a similar form using different approaches based on classical thermodynamics.
Liquid drop model
The liquid drop model (LDM) assumes that an entire nanoparticle transitions from solid to liquid at a single temperature. This feature distinguishes the model, as the other models predict melting of the nanoparticle surface prior to the bulk atoms. If the LDM is true, a solid nanoparticle should function over a greater temperature range than other models predict. The LDM assumes that the surface atoms of a nanoparticle dominate the properties of all atoms in the particle. The cohesive energy of the particle is identical for all atoms in the nanoparticle.
The LDM represents the binding energy of nanoparticles as a function of the free energies of the volume and surface. Equation 4 gives the normalized, size-dependent melting temperature of a material according to the liquid-drop model.
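An expression consistent with the liquid-drop description and the variables defined below (written by analogy with the Gibbs–Thomson relation and given as an assumption rather than a quotation) is

T_M(d) = T_{MB} \left(1 - \frac{4}{H_f\,\rho_s\,d}\left[\sigma_{sv} - \sigma_{lv}\left(\frac{\rho_s}{\rho_l}\right)^{2/3}\right]\right)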
Where: σsv=solid-vapor interface energy
σlv=liquid-vapor interface energy
Hf=Bulk heat of fusion
ρs=density of solid
ρl=density of liquid
d=diameter of nanoparticle
Liquid shell nucleation model
The liquid shell nucleation model (LSN) predicts that a surface layer of atoms melts prior to the bulk of the particle. The melting temperature of a nanoparticle is a function of its radius of curvature according to the LSN. Large nanoparticles melt at greater temperatures as a result of their larger radius of curvature.
The model calculates melting conditions as a function of two competing order parameters using Landau potentials. One order parameter represents a solid nanoparticle, while the other represents the liquid phase. Each of the order parameters is a function of particle radius.
The parabolic Landau potentials for the liquid and solid phases are calculated at a given temperature, with the lesser Landau potential assumed to be the equilibrium state at any point in the particle. In the temperature range of surface melting, the results show that the Landau curve of the ordered state is favored near the center of the particle while the Landau curve of the disordered state is smaller near the surface of the particle.
The Landau curves intersect at a specific radius from the center of the particle. The distinct intersection of the potentials means the LSN predicts a sharp, unmoving interface between the solid and liquid phases at a given temperature. The exact thickness of the liquid layer at a given temperature is the equilibrium point between the competing Landau potentials.
Equation 5 gives the condition at which an entire nanoparticle melts according to the LSN model.
Where: d0=atomic diameter
Liquid nucleation and growth model
The liquid nucleation and growth model (LNG) treats nanoparticle melting as a surface-initiated process. The surface melts initially, and the liquid-solid interface quickly advances through the entire nanoparticle. The LNG defines melting conditions through the Gibbs-Duhem relations, yielding a melting temperature function dependent on the interfacial energies between the solid and liquid phases, the volumes and surface areas of each phase, and the size of the nanoparticle. The model calculations show that the liquid phase forms at lower temperatures for smaller nanoparticles. Once the liquid phase forms, the free energy conditions quickly change and favor melting. Equation 6 gives the melting conditions for a spherical nanoparticle according to the LNG model.
Bond-order-length-strength (BOLS) model
The bond-order-length-strength (BOLS) model employs an atomistic approach to explain melting point depression. The model focuses on the cohesive energy of individual atoms rather than a classical thermodynamic approach. The BOLS model calculates the melting temperature for individual atoms from the sum of their cohesive bonds. As a result, the BOLS predicts the surface layers of a nanoparticle melt at lower temperatures than the bulk of the nanoparticle.
The BOLS mechanism states that if one bond breaks, the remaining neighbouring bonds become shorter and stronger. The cohesive energy, or sum of bond energies, of the less-coordinated atoms determines the thermal stability, including melting, evaporation and other phase transitions. The lowered coordination number (CN) changes the equilibrium bond length between atoms near the surface of the nanoparticle. The bonds relax towards equilibrium lengths, increasing the cohesive energy per bond between atoms, independent of the exact form of the specific interatomic potential. However, the integrated cohesive energy for surface atoms is much lower than for bulk atoms due to the reduced coordination number and an overall decrease in cohesive energy.
In a core–shell configuration, the melting-point depression of nanoparticles is dominated by the outermost two atomic layers, while atoms in the core interior retain their bulk nature.
The BOLS model and the core–shell structure have been applied to other size dependencies of nanostructures, such as mechanical strength, chemical and thermal stability, lattice dynamics (optical and acoustic phonons), photon emission and absorption, electronic core-level shift and work-function modulation, magnetism at various temperatures, and dielectrics due to electron polarization. Reproduction of experimental observations of the above-mentioned size dependencies has been realized. Quantitative information, such as the energy level of an isolated atom and the vibration frequency of an individual dimer, has been obtained by matching the BOLS predictions to the measured size dependency.
Particle shape
Nanoparticle shape impacts the melting point of a nanoparticle. Facets, edges and deviations from a perfect sphere all change the magnitude of melting point depression. These shape changes affect the surface-to-volume ratio, which affects the cohesive energy and thermal properties of a nanostructure. Equation 7 gives a general shape-corrected formula for the theoretical melting point of a nanoparticle based on its size and shape.
Where: c=materials constant
z=shape parameter of particle
The shape parameter is 1 for a sphere and 3/2 for a very long wire, indicating that melting-point depression is suppressed in nanowires compared to nanoparticles. Past experimental data show that nanoscale tin platelets melt within a narrow range of 10 °C of the bulk melting temperature. The melting point depression of these platelets was suppressed compared to spherical tin nanoparticles.
Substrate
Several nanoparticle melting simulations suggest that the supporting substrate affects the extent of melting-point depression of a nanoparticle. These models account for energetic interactions between the substrate and nanoparticle materials. A free nanoparticle, as many theoretical models assume, has a different (usually lower) melting temperature than a supported particle because of the absence of cohesive energy between the nanoparticle and substrate. However, measuring the properties of a freestanding nanoparticle remains impossible, so the extent of these interactions cannot be verified experimentally. Ultimately, substrates support nanoparticles in all current nanoparticle applications, so substrate/nanoparticle interactions are always present and affect melting point depression.
Solubility
Within the size–pressure approximation, which considers the stress induced by the surface tension and the curvature of the particle, it was shown that particle size affects the composition and temperature of the eutectic point (Fe-C) and the solubility of carbon in Fe and Fe:Mo nanoclusters.
Reduced solubility can affect the catalytic properties of nanoparticles. In fact, it has been shown that size-induced instability of Fe-C mixtures represents the thermodynamic limit for the thinnest nanotube that can be grown from Fe nanocatalysts.
See also
Freezing-point depression
Thermoporometry and cryoporometry
References
Phase transitions | Melting-point depression | [
"Physics",
"Chemistry"
] | 2,821 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
14,546,615 | https://en.wikipedia.org/wiki/Walter%20Heiligenberg | Walter F. Heiligenberg (January 31, 1938 – September 8, 1994) was a German American scientist best known for his neuroethology work on one of the best neurologically understood behavioral patterns in a vertebrate, Eigenmannia. This weakly electric fish and the neural basis for its jamming avoidance response behavioral process was the main focus of his research, and is fully explored in his 1991 book, "Neural Nets in Electric Fish."
As an international scientist, he worked alongside other neuroethologists and researchers to further explain animal behavior in a comprehensive manner and "through the application of a strict analytical and quantitative method". The advancements within neuroethology today are still largely due to his influences, as his life was dedicated to researching that which could be applicable to "all complex nervous systems" and he "[investigated] the general principles of nature".
Life and death
Heiligenberg was born in Berlin, Germany, but moved to Münster soon afterwards. He then spent part of his early adulthood in Munich and Seewiesen before ultimately moving to San Diego, California, in 1972.
On September 8, 1994, Heiligenberg was killed in the crash of USAir Flight 427 while on his way to deliver a lecture at the University of Pittsburgh (Leaders in Their Fields 1994).
Scientific background and work
Heiligenberg's interest in ethology started at a young age, when he met Konrad Lorenz, one of the founders of modern ethology and head of a Max Planck research group, in 1953. Through Lorenz's influence, his interest in fish and animal behavior thrived even before entering college.
He initially entered the University of Münster in 1958, but transferred to the University of Munich after Lorenz and fellow neuroethologist Erich von Holst established the Max Planck Institute for Behavioral Physiology in a city approximately 20 miles from Munich, in Seewiesen (Bullock et al. 1995). Between these two colleges, his studies were spread between botany, zoology, physics, and mathematics, whose influence is clearly seen in his quantitative approaches in later research towards the neural bases of animal behavior. It was here that his ethological foundation was laid, as he "performed a quantitative analysis of the effect of motivational factors on the occurrence of various social behavioral patterns" through his doctoral thesis, "On causation of Behavioral Patterns in Cichlid Fish," which was completed in 1963 under Lorenz and Hansjochem Autrum, a sensory physiologist.
Academic career
In Seewiesen, his research continued to focus on the motivational behaviors of cichlid fish and crickets, and he successfully conducted a quantitative demonstration of the "law of heterogeneous summation," whose model predicted that "different features of a stimulation in a [led] to an independent behavioral stimulation in the receiver". Much of his work eventually led to the testing of, and production of evidence contrary to, Lorenz's psychohydraulic model of motivation (specifically aggression) using male cichlids. Such was his willingness to venture into new neuroethological territory despite the established research of the time.
His status as neuroethologist was further established when he moved to the Scripps Institution of Oceanography at the University of California, San Diego, in 1972 as a post-doctoral investigator in Theodore Holmes Bullock's laboratory. His appointment to faculty in 1973, then to the position of full professor of behavioral physiology in 1977 followed his decision to decline the position of Director at the Max Planck Institute for Behavioral Physiology in Seewiesen.
His work at UCSD led him to publish widely about the neural bases of the jamming avoidance response, the first vertebrate example of an entire behavioral pattern that could be explained from sensory input to motor output. The built-in electric organ of Eigenmannia, which gives millivolt discharges, was found to be adaptive for locating external objects and for communication (electrolocation and electrocommunication, respectively). Heiligenberg continued to study potentially more complex social behaviors, including courtship and aggressive encounters. Decades' worth of work was expressed through the book, Neural Nets in Electric Fish, in which he explains the observed phenomena of the jamming avoidance response, the nature of the electrical stimulus, and the neural networks triggering them, and even relates them to systems for other senses and in other species. His inclination to use computational methods and modeling successfully made him a pioneer in the neuroethology community.
Heiligenberg lab
During Heiligenberg's time at Scripps, he directed his fellow researchers and graduate students toward exploring behavioral phenomena through neuroethological methods. His openness with his graduate students was notable: he not only encouraged them to learn new techniques and pursue interests in other fields, but also allowed them to start independent projects and to publish papers without naming him as a co-author.
More importantly, his personal work employed the useful aspects of both neurophysiology and ethology, whose approaches addressed the single-unit interactions and more complicated patterned processes, respectively. In his own words, his methodology was based on the belief that it would be "most promising if the behavior investigated is sufficiently simple to readily allow neurophysiological interpretations. Particularly suitable are those patterns of behavior which still function while under the restricted condition of neurophysiological experiments, since stimulus input and behavioral output can immediately be related to neuronal events".
Publications
A list of the journal articles and abstracts he helped to author at the Scripps Institution of Oceanography from 1960 to 1994 can be accessed through http://www.cnl.salk.edu/~kt/heiligref.html. There is a complete list of Heiligenberg lab publications up to 2000 in Zupanc and Bullock's 2006 article "Walter Heiligenberg: the jamming avoidance response and beyond?".
Honors
Throughout Heiligenberg's lifetime, his dedication and groundbreaking research made him a leader in the neuroethology community. At the time of his death, he had already received the Javits Award from the National Institute of Neurological Diseases and Stroke, the Merit Award from the National Institute of Mental Health, and was a member of the Bavarian Academy of Science, the American Academy of Arts and Sciences, and also of the Deutsche Akademie der Naturforscher Leopoldina. Heiligenberg also received the David Sparks Prize for systems neurophysiology and served as senior editor of the Journal of Comparative Physiology (Leaders in Their Fields 1994), an added honor to being an editor for the journal since 1981. A student travel award of the International Society of Neuroethology is named in his honor.
See also
Neuroethology
Konrad Lorenz
Erich von Holst
Theodore Holmes Bullock
Cichlid
Electric fish
Max Planck Institute for Behavioral Physiology
Scripps Institution of Oceanography
References
Sources
1938 births
1994 deaths
Accidental deaths in Pennsylvania
German physiologists
Neuroethology
Scientists from Berlin
Victims of aviation accidents or incidents in 1994
Victims of aviation accidents or incidents in the United States
Members of the German National Academy of Sciences Leopoldina | Walter Heiligenberg | [
"Biology"
] | 1,484 | [
"Ethology",
"Behavior",
"Neuroethology"
] |
14,546,679 | https://en.wikipedia.org/wiki/Mound%20system | A mound system is an engineered drain field for treating wastewater in places with limited access to multi-stage wastewater treatment systems. Mound systems are an alternative to the traditional rural septic system drain field. They are used in areas where septic systems are prone to failure from extremely permeable or impermeable soils, soil with the shallow cover over porous bedrock, and terrain that features a high water table.
History
The mound system was designed in the 1930s by the North Dakota College of Agriculture and was known as the Nodak Disposal System. In 1976, the University of Wisconsin studied the design of mound systems as part of the university's Waste Management Project. This project published the first design manual identifying the appropriate site conditions and design criteria for mounds. In 2000, a new manual was released.
Suitability
Mound systems are used to help purify and transport water efficiently.
Some soils are too high in permeability, allowing water to pass through them quickly, which hinders purification and allows contamination to spread to nearby water sources or ecosystems.
Areas of low soil permeability, as well as areas with high water tables or limited soil cover over porous bedrock, can result in contaminated effluent pooling at the surface.
Design
The mound system includes a septic tank, a dosing chamber, and a mound. Wastes from homes are sent to the septic tank where the solid portion sinks to the bottom of the tank. Effluents are sent to a second tank called a dosing chamber, from which they are distributed to the mound at a metered rate (in doses). Wastewater is partially treated as it moves through the mound sand. Final treatment and disposal occur in the soil beneath the mound. The mound system does not allow all the effluent to enter the mound at once, accordingly allowing it to clean the effluent more effectively and helping keep the system from failing.
The absorption mound is built in layers. The layer depths are determined by the depth of the limiting layer of the soil, which may be a seasonal water table, bedrock, fragipan, or glacial till. Standards created by Ohio State University state that 24 inches of soil should be above the limiting layer in the soil. A 24-inch layer of specifically sized sand is placed on top of the soil. The distribution pipes that are fed by the dosing chamber are placed on top of the sand in gravel. Then construction fabric and additional soil are placed on top of the gravel to help keep the pipes from freezing. The top layer of soil also allows the mound to be planted with grass or non-woody plants to control erosion.
The primary cleaning and purification of waste liquids in a drain field is performed by a biofilm in the loose fill surrounding the perforated drain tile. If the soil permeability is too low, the liquid is not absorbed fast enough. If the soil permeability is too high or is exposed to fractured bedrock, the wastewater reaches the water table before the biofilm has time to purify the water, contaminating the aquifer. In either situation, the mound system provides an ideal habitat for the biofilm and has the correct permeability to ensure slow absorption of effluent through the mound before it exits as purified water into the surrounding environment.
When installing a mound system, the soil in the area where the mound is to be placed must not be compacted or disturbed. Any trees in the mound area are cut away, and the roots and stumps are retained. The surface of the area for the mound is then roughened with a chisel plow, which prepares the area for the sand. Work is done from upslope of the mound area so that the ground downslope of the mound does not get compacted. Tyler tables are used to help determine the mound size.
Time dosing is another important aspect of the functioning of the mound system. Short frequent doses of effluent onto sand filters with orifices that are closely spaced helps to improve effluent quality. By contrast, demand dosing releases large amounts of effluent at once, which rapidly passes through the sand. This does not give the biota the proper amount of time to clean the effluent.
See also
Blackwater
Biofilter (also called a Trickle filter)
Bioreactor
Cesspit
Drain-waste-vent system
Ecological sanitation
Grease interceptor
Latrine
Composting toilet
Outhouse
Percolation test (for the capacity of soil to absorb water)
Pit toilet
Plumber
Plumbing
Potable cold and hot water supply
Traps, drains, and vents
Rainwater, surface, and subsurface water drainage
Fuel gas piping
Sepsis
Septage
Sewage treatment
Sewer
Waste disposal
Wastewater
References
Further reading
Solomon, C., P. Casey, C. Mackne, and A. Lake. 1998. Mound Systems. National Small Flows Clearinghouse. 1-2. 10 Oct. 2007. Link.
National Small Flows Clearinghouse, 1999. MOUNDS: a SEPTIC SYSTEM ALTERNATIVE. Pipeline 10(3): 1-8. Accessed in October 2007. Link.
SepticAPedia. 2007. Using Septic Mounds as Components of Alternative Septic Systems for Difficult Sites. Building & Environmental Inspection, Testing, Diagnosis, Repair, & Problem Prevention Advice. 09/05/2007. 15 Oct 2007. Link
The Water Quality Program Committee. Virginia Tech. 1996. "Maintenance of Mound Septic Systems." Virginia Tech. Virginia Cooperative Extension. Accessed on 15 Oct 2007. Link.
Mancl, Karen. 1993. Septic Tank - Mound System. Ohio State University Extension. Ohio State University. Accessed on 15 Oct 2007. Link.
Darby, J, G. Tchobanoglous, M. Arsi Nor, and D. Maciolek. 1996. Shallow intermittent sand filtration: performance evaluation. The Small Flows Journal. 2:3-16.
Aquatic ecology
Environmental engineering
Environmental soil science
Pollution control technologies
Sewerage infrastructure
Water pollution | Mound system | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,214 | [
"Water treatment",
"Chemical engineering",
"Environmental soil science",
"Sewerage infrastructure",
"Pollution control technologies",
"Water pollution",
"Civil engineering",
"Ecosystems",
"Environmental engineering",
"Aquatic ecology"
] |
14,546,804 | https://en.wikipedia.org/wiki/Terrace%20ledge%20kink%20model | In chemistry, the terrace ledge kink (TLK) model, which is also referred to as the terrace step kink (TSK) model, describes the thermodynamics of crystal surface formation and transformation, as well as the energetics of surface defect formation. It is based upon the idea that the energy of an atom’s position on a crystal surface is determined by its bonding to neighboring atoms and that transitions simply involve the counting of broken and formed bonds. The TLK model can be applied to surface science topics such as crystal growth, surface diffusion, roughening, and vaporization.
History
The TLK model is credited as having originated from papers published in the 1920s by the German chemist Walther Kossel and the Bulgarian chemist Ivan Stranski.
Definitions
Depending on the position of an atom on a surface, it can be referred to by one of several names. Figure 1 illustrates the names for the atomic positions and point defects on a surface for a simple cubic lattice.
Figure 2 shows a scanning tunneling microscopy topographic image of a step edge that shows many of the features in Figure 1. Figure 3 shows a crystal surface with steps, kinks, adatoms, and vacancies in a closely packed crystalline material, which resembles the surface featured in Figure 2.
Although intuitively evident, it has only recently been explicitly recognized that the attachment of crystal building units to kink positions plays a pivotal role in perpetuating the crystal's symmetry. At a kink position, the attaching unit does not form all its potential bonds; rather, it forms only half the bonds in each given direction. These bonds are grouped in such a way in order to create a concave structure, which naturally accommodates the incoming building unit. This unique arrangement not only minimizes the system's free energy but also aligns the new unit with the symmetry of the underlying lattice. Consequently, kink positions serve as the primary sites where the crystal's structural order is reproduced and propagated, enabling the transition from microscopic nucleation to a macroscopic, ordered crystal form. This subtle yet fundamental mechanism distinguishes kink-mediated growth from other aggregation processes and underscores its critical role in maintaining the uniformity and symmetry of growing crystals.
Thermodynamics
The energy required to remove an atom from the surface depends on the number of bonds to other surface atoms which must be broken. For a simple cubic lattice in this model, each atom is treated as a cube and bonding occurs at each face, giving a coordination number of 6 nearest neighbors. Second-nearest neighbors in this cubic model are those that share an edge and third-nearest neighbors are those that share corners. The number of neighbors, second-nearest neighbors, and third-nearest neighbors for each of the different atom positions are given in Table 1.
Most crystals, however, are not arranged in a simple cubic lattice. The same ideas apply for other types of lattices where the coordination number is not six, but these are not as easy to visualize and work with in theory, so the remainder of the discussion will focus on simple cubic lattices. Table 2 indicates the number of neighboring atoms for a bulk atom in some other crystal lattices.
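As a rough stand-in for the tabulated values (only nearest-neighbor counts are given; these are the standard values for a Kossel, i.e. simple cubic, crystal and for the bulk of a few common lattices, and the second- and third-nearest-neighbor columns of the original tables are not reproduced):

```python
# Nearest-neighbor (NN) counts for atom positions on a simple cubic {100} surface
# in the TLK cube picture (standard textbook values).
SIMPLE_CUBIC_SITE_NN = {
    "bulk atom": 6,
    "surface (terrace) atom": 5,
    "step (ledge) atom": 4,
    "kink (half-crystal) atom": 3,
    "atom adsorbed at a step": 2,
    "adatom on a terrace": 1,
}

# Bulk coordination numbers for some other common lattices.
BULK_COORDINATION_NUMBER = {
    "simple cubic": 6,
    "body-centered cubic": 8,
    "face-centered cubic": 12,
    "hexagonal close-packed": 12,
    "diamond cubic": 4,
}
```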
The kink site is of special importance when evaluating the thermodynamics of a variety of phenomena. This site is also referred to as the “half-crystal position” and energies are evaluated relative to this position for processes such as adsorption, surface diffusion, and sublimation. The term “half-crystal” comes from the fact that the kink site has half the number of neighboring atoms as an atom in the crystal bulk, regardless of the type of crystal lattice.
For example, the formation energy for an adatom—ignoring any crystal relaxation—is calculated by subtracting the energy of an adatom from the energy of the kink atom.
This can be understood as the breaking of all of the kink atom's bonds to remove the atom from the surface and then reforming the adatom interactions. This is equivalent to a kink atom diffusing away from the rest of the step to become a step adatom and then diffusing away from the adjacent step onto the terrace to become an adatom. In the case where all interactions are ignored except for those with nearest neighbors, the formation energy for an adatom is given by Equation 2, where ε denotes the nearest-neighbor bond energy in the crystal.
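As an indicative sketch (the original numbered equations are not reproduced in this text), using the nearest-neighbor counts of the simple cubic surface (a kink atom has three nearest neighbors and an adatom has one), the relation reads:

$$E_f^{\mathrm{adatom}} = E_{\mathrm{kink}} - E_{\mathrm{adatom}} \approx (3 - 1)\,\varepsilon = 2\varepsilon$$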
This can be extended to a variety of situations, such as the formation of an adatom-surface vacancy pair on a terrace, which would involve the removal of a surface atom from the crystal and placing it as an adatom on the terrace. This is described by Equation 3.
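A corresponding sketch for this case (again an indicative reconstruction): a terrace atom with five nearest neighbors is removed and redeposited as an adatom with one, so

$$E_f^{\mathrm{pair}} \approx (5 - 1)\,\varepsilon = 4\varepsilon$$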
The energy of sublimation would simply be the energy required to remove an atom from the kink site. This can be envisioned as the surface being disassembled one terrace at a time by removing atoms from the edge of each step, which is the kink position. It has been demonstrated that the application of an external electric field will induce the formation of additional kinks in a surface, which then leads to a faster rate of evaporation from the surface.
Temperature dependence of defect coverage
The number of adatoms present on a surface is temperature dependent. The relationship between the surface adatom concentration and the temperature at equilibrium is described by equation 4, where n0 is the total number of surface sites per unit area:
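The referenced equation is the usual Boltzmann form, reproduced here as a sketch (with E_f the adatom formation energy and k_B the Boltzmann constant):

$$\frac{n_{\mathrm{adatom}}}{n_0} = \exp\!\left(-\frac{E_f^{\mathrm{adatom}}}{k_B T}\right)$$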
This can be extended to find the equilibrium concentration of other types of surface point defects as well. To do so, the energy of the defect in question is simply substituted into the above equation in the place of the energy of adatom formation.
References
Thermodynamic models
Chemical thermodynamics | Terrace ledge kink model | [
"Physics",
"Chemistry"
] | 1,167 | [
"Thermodynamic models",
"Chemical thermodynamics",
"Thermodynamics"
] |
14,546,835 | https://en.wikipedia.org/wiki/Bygrave%20slide%20rule | The Bygrave slide rule is a slide rule named for its inventor, Captain Leonard Charles Bygrave of the RAF. It was used in celestial navigation, primarily in aviation. Officially, it was called the A. M. L. Position Line Slide Rule (A.M.L. for Air Ministry Laboratories).
It was developed in 1920 at the Air Ministry Laboratories at Kensington in London and was produced by Henry Hughes & Son Ltd of London until the mid-1930s. It solved the so-called celestial triangle accurately to about one minute of arc and quickly enough for aerial navigation. The solution of the celestial triangle used John Napier's rules for the solution of right-angled spherical triangles. The slide rule was constructed as two coaxial tubes with spiral scales, like Fuller's cylindrical slide rule, with a third tube on the outside carrying the cursors.
During the Second World War, a closely related version was produced in Germany by Dennert & Pape as the HR1, MHR1 and HR2.
Famous users
Sir Francis Chichester was a renowned aviator and yachtsman. He used a Bygrave Slide Rule as an aid to navigation during flights in the 1930s, one of which was the first solo flight from New Zealand to Australia in a Gipsy Moth biplane. He later completed a round the world cruise in his yacht Gipsy Moth IV. This was the first solo circumnavigation using the clipper route. Sir Francis Chichester wrote about these exploits in his autobiography, entitled The Lonely Sea and the Sky.
See also
Otis King's Patent Calculator
References
External links
Navigational equipment
Mechanical calculators
Logarithms
English inventions
Analog computers
Historical scientific instruments | Bygrave slide rule | [
"Mathematics"
] | 342 | [
"E (mathematical constant)",
"Logarithms"
] |
14,547,147 | https://en.wikipedia.org/wiki/Architectural%20engineer%20%28PE%29 | Architectural Engineer (PE) is a professional engineering designation in the United States. The architectural engineer applies the knowledge and skills of broader engineering disciplines to the design, construction, operation, maintenance, and renovation of buildings and their component systems while paying careful attention to their effects on the surrounding environment.
With the establishment of a specific "Architectural Engineering" NCEES professional engineering registration examination in the 1990s and first offering in April 2003, architectural engineering is now recognized as a distinct engineering discipline in the United States.
Note that architectural engineering is not to be confused with "architectural engineering technology". In the United States, architectural engineering technologists tend to be engineering technicians who use CAD technology as drafters or technical assistants and who are not licensed to practice either architecture or engineering; they are usually hired by larger construction firms or developers that rely on standard processes and common building materials rather than full architectural design. In Europe, Canada, South Africa, and other countries, architectural technologists have a role similar to that of architects and architectural engineers.
Areas of focus
Architecture (if licensed as an Architect)
Structural engineering
Construction engineering
Construction management
Project management
Green building
Heating, ventilation and air conditioning (HVAC)
Plumbing and piping (hydronics)
Energy management
Fire protection engineering
Building power systems
Lighting
Building transportation systems
Acoustics, noise & vibration control
A common combined specialization is Mechanical, Electrical and Plumbing, better known by its abbreviation MEP. An MEP design engineer has experience in HVAC, lighting/electrical, and plumbing systems' analysis and design.
Some topics of special interest
Building construction
Building Information Modeling (BIM)
Efficient energy use, Energy conservation or Energy demand management
Renewable energy
Solar energy
Green buildings
Intelligent buildings
Autonomous buildings
Indoor air quality
Thermal comfort
Educational institutions offering bachelor's degrees in architectural engineering
Programs accredited by the Engineering Accreditation Commission (EAC) of ABET and that are members of Architectural Engineering Institute (AEI) are denoted below.
California Polytechnic State University, San Luis Obispo, California (ABET, AEI)
Drexel University, Philadelphia, Pennsylvania (ABET, AEI)
Illinois Institute of Technology, Chicago, Illinois (ABET, AEI)
Kansas State University, Manhattan, Kansas (ABET, AEI)
Lawrence Technological University, Southfield, Michigan (ABET)
Milwaukee School of Engineering, Milwaukee, Wisconsin (ABET, AEI)
North Carolina A&T State University, Greensboro, North Carolina (ABET, AEI)
Oklahoma State University, Stillwater, Oklahoma (ABET, AEI)
Oregon State University, Corvallis, Oregon
Penn State University, State College, Pennsylvania (ABET, AEI)
Tennessee State University, Nashville, Tennessee (ABET, AEI)
Texas A&M University, College Station, Texas
Texas A&M University, Kingsville, Kingsville, Texas (ABET, AEI)
University of Alabama, Tuscaloosa, Alabama (ABET)
University of Arizona, Tucson, Arizona
University of Arkansas at Little Rock, Little Rock, Arkansas (ABET)
University of Cincinnati, Cincinnati, Ohio (ABET)
University of Colorado at Boulder, Boulder, Colorado (ABET, AEI)
University of Detroit Mercy, Detroit, Michigan (ABET)
University of Kansas, Lawrence, Kansas (ABET, AEI)
University of Miami, Miami, Florida (ABET, AEI)
Missouri University of Science and Technology, Rolla, Missouri (ABET, AEI)
University of Nebraska at Omaha, Omaha, Nebraska (ABET, AEI)
University of Oklahoma, Norman, Oklahoma (ABET)
University of Texas at Arlington, Arlington, Texas
University of Texas at Austin, Austin, Texas (ABET, AEI)
University of Wyoming, Laramie, Wyoming (ABET, AEI)
Worcester Polytechnic Institute, Worcester, Massachusetts (ABET)
See also
Accreditation Board for Engineering and Technology
American Society of Heating, Refrigerating and Air-Conditioning Engineers
American Society of Plumbing Engineers
Architectural Engineering Institute
Architectural technologist
Associated General Contractors of America
Illuminating Engineering Society of North America
National Society of Professional Engineers
Society of Fire Protection Engineers
Structural engineering
U.S. Green Building Council
References
Building engineering
Engineering occupations
Engineering | Architectural engineer (PE) | [
"Engineering"
] | 856 | [
"Building engineering",
"Architecture occupations",
"Civil engineering",
"Architecture"
] |
14,547,183 | https://en.wikipedia.org/wiki/Ribosomal%20frameshift | Ribosomal frameshifting, also known as translational frameshifting or translational recoding, is a biological phenomenon that occurs during translation that results in the production of multiple, unique proteins from a single mRNA. The process can be programmed by the nucleotide sequence of the mRNA and is sometimes affected by the secondary, 3-dimensional mRNA structure. It has been described mainly in viruses (especially retroviruses), retrotransposons and bacterial insertion elements, and also in some cellular genes.
Small molecules, proteins, and nucleic acids have also been found to stimulate levels of frameshifting. In December 2023, it was reported that the in vitro-transcribed (IVT) mRNA used in the BNT162b2 (Pfizer–BioNTech) anti-COVID-19 vaccine caused ribosomal frameshifting.
Process overview
Proteins are translated by reading tri-nucleotides on the mRNA strand, also known as codons, from one end of the mRNA to the other (from the 5' to the 3' end) starting with the amino acid methionine as the start (initiation) codon AUG. Each codon is translated into a single amino acid. The code itself is considered degenerate, meaning that a particular amino acid can be specified by more than one codon. However, a shift of any number of nucleotides that is not divisible by 3 in the reading frame will cause subsequent codons to be read differently. This effectively changes the ribosomal reading frame.
Sentence example
In this example, the following sentence of three-letter words makes sense when read from the beginning:
|Start|THE CAT AND THE MAN ARE FAT ...
|Start|123 123 123 123 123 123 123 ...
However, if the reading frame is shifted by one letter to between the T and H of the first word (effectively a +1 frameshift when considering the 0 position to be the initial position of T),
T|Start|HEC ATA NDT HEM ANA REF AT...
-|Start|123 123 123 123 123 123 12...
then the sentence reads differently, making no sense.
DNA example
In this example, the following sequence is a region of the human mitochondrial genome with the two overlapping genes MT-ATP8 and MT-ATP6.
When read from the beginning, these codons make sense to a ribosome and can be translated into amino acids (AA) under the vertebrate mitochondrial code:
|Start|AAC GAA AAT CTG TTC GCT TCA ...
|Start|123 123 123 123 123 123 123 ...
| AA | N E N L F A S ...
However, let's change the reading frame by starting one nucleotide downstream (effectively a "+1 frameshift" when considering the 0 position to be the initial position of A):
A|Start|ACG AAA ATC TGT TCG CTT CA...
-|Start|123 123 123 123 123 123 12...
| AA | T K I C S L ...
Because of this +1 frameshifting, the DNA sequence is read differently. The different codon reading frame therefore yields different amino acids.
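As a small illustration of the two readings above, the following sketch (plain codon grouping only; no amino-acid translation is attempted) regenerates the frame-0 and +1 groupings of the example sequence:

```python
def codons(seq, frame):
    """Group a nucleotide sequence into 3-letter codons, starting at offset `frame`."""
    return [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]

seq = "AACGAAAATCTGTTCGCTTCA"  # region overlapping MT-ATP8 / MT-ATP6, from the example above

print(codons(seq, 0))  # ['AAC', 'GAA', 'AAT', 'CTG', 'TTC', 'GCT', 'TCA']
print(codons(seq, 1))  # ['ACG', 'AAA', 'ATC', 'TGT', 'TCG', 'CTT']  (+1 frameshift)
```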
Effect
In the case of a translating ribosome, a frameshift can result either in nonsense (a premature stop codon downstream of the frameshift) or in the creation of a completely new protein after the frameshift. When a frameshift results in nonsense, the nonsense-mediated mRNA decay (NMD) pathway may destroy the mRNA transcript, so frameshifting can serve as a method of regulating the expression level of the associated gene.
If a novel or off-target protein is produced, it can trigger other unknown consequences.
Function in viruses and eukaryotes
In viruses this phenomenon may be programmed to occur at particular sites and allows the virus to encode multiple types of proteins from the same mRNA. Notable examples include HIV-1 (human immunodeficiency virus), RSV (Rous sarcoma virus) and the influenza virus (flu), which all rely on frameshifting to create a proper ratio of 0-frame (normal translation) and "trans-frame" (encoded by frameshifted sequence) proteins. Its use in viruses is primarily for compacting more genetic information into a shorter amount of genetic material.
In eukaryotes it appears to play a role in regulating gene expression levels by generating premature stops and producing nonfunctional transcripts.
Types of frameshifting
The most common type of frameshifting is −1 frameshifting or programmed −1 ribosomal frameshifting (−1 PRF). Other, rarer types of frameshifting include +1 and −2 frameshifting. −1 and +1 frameshifting are believed to be controlled by different mechanisms, which are discussed below. Both mechanisms are kinetically driven.
Programmed −1 ribosomal frameshifting
In −1 frameshifting, the ribosome slips back one nucleotide and continues translation in the −1 frame. There are typically three elements that comprise a −1 frameshift signal: a slippery sequence, a spacer region, and an RNA secondary structure. The slippery sequence fits a X_XXY_YYH motif, where XXX is any three identical nucleotides (though some exceptions occur), YYY typically represents UUU or AAA, and H is A, C or U. Because the structure of this motif contains 2 adjacent 3-nucleotide repeats it is believed that −1 frameshifting is described by a tandem slippage model, in which the ribosomal P-site tRNA anticodon re-pairs from XXY to XXX and the A-site anticodon re-pairs from YYH to YYY simultaneously. These new pairings are identical to the 0-frame pairings except at their third positions. This difference does not significantly disfavor anticodon binding because the third nucleotide in a codon, known as the wobble position, has weaker tRNA anticodon binding specificity than the first and second nucleotides. In this model, the motif structure is explained by the fact that the first and second positions of the anticodons must be able to pair perfectly in both the 0 and −1 frames. Therefore, nucleotides 2 and 1 must be identical, and nucleotides 3 and 2 must also be identical, leading to a required sequence of 3 identical nucleotides for each tRNA that slips.
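A minimal sketch of a motif check follows (the function name and the strict matching rules are illustrative assumptions; real −1 frameshift signals include exceptions to the X_XXY_YYH pattern, as noted above):

```python
def is_minus1_slippery(site):
    """Check a 7-nt RNA window against the X XXY YYH heptamer motif.

    X = any nucleotide repeated three times, YYY = AAA or UUU, H = A, C or U.
    This is only a first-pass filter; known signals include exceptions.
    """
    site = site.upper().replace("T", "U")
    if len(site) != 7:
        return False
    x, y, h = site[0:3], site[3:6], site[6]
    return len(set(x)) == 1 and y in ("AAA", "UUU") and h in "ACU"

print(is_minus1_slippery("UUUUUUA"))  # True  (HIV-1 style U UUU UUA signal)
print(is_minus1_slippery("GGGAAAC"))  # True
print(is_minus1_slippery("GGAAAAC"))  # False (first three nucleotides not identical)
```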
+1 ribosomal frameshifting
The slippery sequence for a +1 frameshift signal does not have the same motif, and instead appears to function by pausing the ribosome at a sequence encoding a rare amino acid. Ribosomes do not translate proteins at a steady rate; certain codons take longer to translate because the corresponding tRNAs are not equally abundant in the cytosol. Because of this lag, short stretches of codons can control the rate of ribosomal frameshifting. Specifically, the ribosome must pause to wait for the arrival of a rare tRNA, and this increases the kinetic favorability of the ribosome and its associated tRNA slipping into the new frame. In this model, the change in reading frame is caused by a single tRNA slip rather than two.
Controlling mechanisms
Ribosomal frameshifting may be controlled by mechanisms found in the mRNA sequence (cis-acting). This generally refers to a slippery sequence, an RNA secondary structure, or both. A −1 frameshift signal consists of both elements separated by a spacer region typically 5–9 nucleotides long. Frameshifting may also be induced by other molecules which interact with the ribosome or the mRNA (trans-acting).
Frameshift signal elements
Slippery sequence
Slippery sequences can potentially make the reading ribosome "slip" and skip a number of nucleotides (usually only 1) and read a completely different frame thereafter. In programmed −1 ribosomal frameshifting, the slippery sequence fits a X_XXY_YYH motif, where XXX is any three identical nucleotides (though some exceptions occur), YYY typically represents UUU or AAA, and H is A, C or U. In the case of +1 frameshifting, the slippery sequence contains codons for which the corresponding tRNA is more rare, and the frameshift is favored because the codon in the new frame has a more common associated tRNA. One example of a slippery sequence is the polyA on mRNA, which is known to induce ribosome slippage even in the absence of any other elements.
RNA secondary structure
Efficient ribosomal frameshifting generally requires the presence of an RNA secondary structure to enhance the effects of the slippery sequence. The RNA structure (which can be a stem-loop or pseudoknot) is thought to pause the ribosome on the slippery site during translation, forcing it to relocate and continue translation from the −1 position. It is believed that this occurs because the structure physically blocks movement of the ribosome by becoming stuck in the ribosome mRNA tunnel. This model is supported by the fact that the strength of the pseudoknot has been positively correlated with the level of frameshifting for the associated mRNA.
Below are examples of predicted secondary structures for frameshift elements shown to stimulate frameshifting in a variety of organisms. The majority of the structures shown are stem-loops, with the exception of the ALIL (apical loop-internal loop) pseudoknot structure. In these images, the larger and incomplete circles of mRNA represent linear regions. The secondary "stem-loop" structures, where "stems" are formed by a region of mRNA base pairing with another region on the same strand, are shown protruding from the linear DNA. The linear region of the HIV ribosomal frameshift signal contains a highly conserved UUU UUU A slippery sequence; many of the other predicted structures contain candidates for slippery sequences as well.
The mRNA sequences in the images can be read according to a set of guidelines. While A, T, C, and G represent a particular nucleotide at a position, there are also letters that represent ambiguity which are used when more than one kind of nucleotide could occur at that position. The rules of the International Union of Pure and Applied Chemistry (IUPAC) are as follows:
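In brief, the standard IUPAC nucleotide codes are as follows (supplied here for reference):
A = adenine; C = cytosine; G = guanine; T = thymine
R = A or G (purine); Y = C or T (pyrimidine); S = G or C; W = A or T; K = G or T; M = A or C
B = C, G or T (not A); D = A, G or T (not C); H = A, C or T (not G); V = A, C or G (not T); N = any nucleotide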
These symbols are also valid for RNA, except with U (uracil) replacing T (thymine).
Trans-acting elements
Small molecules, proteins, and nucleic acids have been found to stimulate levels of frameshifting. For example, the mechanism of a negative feedback loop in the polyamine synthesis pathway is based on polyamine levels stimulating an increase in +1 frameshifts, which results in production of an inhibitory enzyme. Certain proteins which are needed for codon recognition or which bind directly to the mRNA sequence have also been shown to modulate frameshifting levels. MicroRNA (miRNA) molecules may hybridize to an RNA secondary structure and affect its strength.
See also
Antizyme RNA frameshifting stimulation element
Coronavirus frameshifting stimulation element
DnaX ribosomal frameshifting element
Frameshift mutation
HIV ribosomal frameshift signal
Insertion sequence IS1222 ribosomal frameshifting element
Recode database
Ribosomal pause
Slippery sequence
References
External links
Wise2 — aligns a protein against a DNA sequence allowing frameshifts and introns
FastY — compare a DNA sequence to a protein sequence database, allowing gaps and frameshifts
Path — tool that compares two frameshift proteins (back-translation principle)
Recode2 — Database of recoded genes, including those that require programmed Translational frameshift.
RNA
Gene expression
Cis-regulatory RNA elements
Genetics | Ribosomal frameshift | [
"Chemistry",
"Biology"
] | 2,468 | [
"Genetics",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
14,547,429 | https://en.wikipedia.org/wiki/IC3b | iC3b is a protein fragment that is part of the complement system, a component of the vertebrate immune system. iC3b is produced when complement factor I cleaves C3b. Complement receptors on white blood cells are able to bind iC3b, so iC3b functions as an opsonin. Unlike intact C3b, iC3b cannot associate with factor B, thus preventing amplification of the complement cascade through the alternative pathway. Complement factor I can further cleave iC3b into a protein fragment known as C3d.
References | IC3b | [
"Chemistry",
"Biology"
] | 118 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
14,548,330 | https://en.wikipedia.org/wiki/Dark%20star%20%28dark%20matter%29 | A dark star is a hypothetical type of star that may have existed early in the universe before conventional stars were able to form and thrive.
Properties
The dark stars would be composed mostly of normal matter, like modern stars, but a high concentration of neutralino dark matter present within them would generate heat via annihilation reactions between the dark-matter particles. This heat would prevent such stars from collapsing into the relatively compact and dense sizes of modern stars and therefore prevent nuclear fusion among the 'normal' matter atoms from being initiated.
Under this model, a dark star is predicted to be an enormous cloud of molecular hydrogen and helium ranging between 1 and 960 astronomical units (AU) in radius, with a surface temperature of around 10,000 K. Such stars are expected to grow over time, reaching very large masses, up until the point where they exhaust the dark matter needed to sustain them, after which they would collapse.
In the unlikely event that dark stars have endured to the modern era, they could be detectable by their emissions of gamma rays, neutrinos, and antimatter and would be associated with clouds of cold molecular hydrogen gas that normally would not harbor such energetic, extreme, and rare particles.
Possible dark star candidates
In April 2023, a study investigated four extremely redshifted objects discovered by the James Webb Space Telescope. Their study suggested that three of these four, namely JADES-GS-z13-0, JADES-GS-z12-0, and JADES-GS-z11-0, are consistent with being point sources, and further suggested that the only point sources which could exist in this time and be bright enough to be observed at these phenomenal distances and redshifts (z = 10–13) were supermassive dark stars in the early universe, powered by dark matter annihilation. Their spectral analysis of the objects suggested that they were between 500,000 and 1 million solar masses, as well as having a luminosity of billions of Suns; they would also likely be huge, possibly with radii surpassing 10,000 solar radii, far exceeding the size of the largest modern stars.
See also
Population III star
Supermassive star
Quasi-star
Primordial black hole
References
Further reading
External links
Star types
Star
Dark concepts in astrophysics
Hypothetical stars
Black holes | Dark star (dark matter) | [
"Physics",
"Astronomy"
] | 479 | [
"Dark matter",
"Black holes",
"Unsolved problems in astronomy",
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Unsolved problems in physics",
"Astrophysics",
"Dark concepts in astrophysics",
"Astronomical objects",
"Density",
"Astronomical classification systems",
"Ex... |
14,548,620 | https://en.wikipedia.org/wiki/Barium%20bromide | Barium bromide is the chemical compound with the formula BaBr2. It is ionic and hygroscopic in nature.
Structure and properties
BaBr2 crystallizes in the lead chloride (cotunnite) motif, giving white orthorhombic crystals that are deliquescent.
In aqueous solution BaBr2 behaves as a simple salt.
Solutions of barium bromide react with sulfate salts to produce a solid precipitate of barium sulfate.
BaBr2 + SO42− → BaSO4 + 2 Br−
Similar reactions occur with oxalic acid, hydrofluoric acid, and phosphoric acid, giving solid precipitates of barium oxalate, fluoride, and phosphate, respectively.
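For illustration, balanced equations consistent with the reactions just described (these are not written out in the text above and are supplied here as plausible forms) are:
BaBr2 + H2C2O4 → BaC2O4 + 2 HBr
BaBr2 + 2 HF → BaF2 + 2 HBr
3 BaBr2 + 2 H3PO4 → Ba3(PO4)2 + 6 HBr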
Preparation
Barium bromide can be prepared by treating barium sulfide or barium carbonate with hydrobromic acid:
BaS + 2 HBr → BaBr2 + H2S
BaCO3 + 2 HBr → BaBr2 + CO2 + H2O
Barium bromide crystallizes from concentrated aqueous solution in its dihydrate, BaBr2·2H2O. Heating this dihydrate to 120 °C gives the anhydrous salt.
Uses
Barium bromide is a precursor to chemicals used in photography and to other bromides. Historically, barium bromide was used to purify radium in a process of fractional crystallization devised by Marie Curie. Since radium precipitates preferentially in a solution of barium bromide, the ratio of radium to barium in the precipitate would be higher than the ratio in the solution.
Safety
Barium bromide, along with other water-soluble barium salts (e.g. barium chloride), is toxic. However, there is no conclusive data available on its hazards.
In popular culture
The compound appears in the intro title card of Breaking Bad, where the first pairs of letters are replaced with Br35 and Ba56, the symbols and atomic numbers of bromine and barium respectively.
References
Bromides
Alkaline earth metal halides
Barium compounds | Barium bromide | [
"Chemistry"
] | 447 | [
"Bromides",
"Salts"
] |
14,548,752 | https://en.wikipedia.org/wiki/EPLaR | Electronics on Plastic by Laser Release (EPLaR) is a method for manufacturing flexible electrophoretic displays using conventional AM-LCD manufacturing equipment, avoiding the need to build new factories. The technology can also be used to manufacture flexible OLED displays using standard OLED fabrication facilities.
The technology was developed by Philips Research and uses standard display glass as used in TFT-LCD processing plants. It is coated with a layer of polyimide using a standard spin-coating procedure used in the production of AM-LCD displays. A regular TFT matrix can then be formed on top of this polyimide coating in a standard TFT processing plant to create the plastic display, which is subsequently released from the glass using a laser; the glass can be reused, lowering the total cost of manufacture.
The EPLaR process is licensed by Philips for use by Taiwan's Prime View International in its TFT manufacturing plants for manufacture of flexible plastic displays.
References
Display technology | EPLaR | [
"Engineering"
] | 196 | [
"Electronic engineering",
"Display technology"
] |
14,549,276 | https://en.wikipedia.org/wiki/Vertical%20form%20fill%20sealing%20machine | A vertical form fill sealing machine is a type of automated assembly-line product packaging system, commonly used in the packaging industry for food and many other products. Walter Zwoyer, the inventor of the technology, patented his idea for the VFFS machine in 1936 while working with the Henry Heide Candy Company. The machine constructs plastic bags and stand-up pouches out of a flat roll of film, fills them with product, and seals them. Both solids and liquids can be bagged.
Specifications of machine
The typical machine is loaded with a continuous flat roll of plastic film, which has usually had labeling and artwork applied. Plastic is the most commonly used packaging material in the food industry, but the technology can be used to form continuous metallized foil/film, paper, and fabric product containers by changing the edge sealing/seaming methods. For some products the film may first be fed through a sterilizing chemical bath and dryer.
Single-web machine systems
Vertical or Inclined Form-Fill-Seal Packaging machine
For a vertical form-fill-seal the film approaches the back of a long hollow conical tube, which is called the forming tube. When the center of the plastic is near the tube, the outer edges of the film form flaps that wrap around the conical forming tube. The film is pulled downward around the outside of the tube and a vertical heat-sealing bar clamps onto the edges of the film to create the "fin Seal", bonding the film by melting the seam edges together.
To start the bagging process, a horizontal sealing bar creates the "Bottom Seal" by clamping across the bottom edge of the tube, bonding the film together, and cutting off any film below. This sealing bar can be at a fixed height, which is called an intermittent sealing process. Faster systems include a sealing bar that moves down with the bag while sealing. This is called a continuous process. The product is either pre-measured by a multi-head weighing system or the sealed tube end is then lowered onto a precision weighing table and the product to be bagged is dispensed through the long conical tube in the center of the bag. When the gross weight of the product-filled bag is reached, filling stops, and the horizontal sealing bar seals the top of the bag, simultaneously forming the bottom of the next bag above. This bag is then cut off from the tube and is now a sealed package, ready to advance onward into the product boxing and shipping processes.
During the final sealing process, the bag may be filled with air from a blower or from an inert gas supply such as nitrogen. Inflating the bag helps reduce the crushing of fragile products such as potato chips, while inflating with inert gas drives out oxygen and retards the growth of bacteria that would spoil the product. Other product finishes such as hole punching for retail hanging racks will be done concurrently or just after the "Top Seal" is made.
The feeding of material and cutting of the bag/pouch can be determined either by pouch length, or by indexing to an eyespot (photo registration mark), which is detected by a visual sensor. While single web systems are popular for food applications, the dual web four-side-seal system is often popular for IVD and medical device products. Closely related is the horizontal form-fill-seal machine, which generally uses more floor space than a vertical system. Modern advancements in pouch forming technology have allowed for smaller and smaller vertical pouch-forming systems.
Many food-filled packages are filled with nitrogen to extend shelf life without the use of chemicals. Many manufacturers create and control their own nitrogen supply by using on-demand nitrogen generators.
Dual-web systems
Dual-web systems are available for pouches requiring different materials for each side, or with four sides. Dual-web systems use two rolls of material fed in from opposite sides of the machine. The bottom and sides are heat-sealed together to form the pouch, and the product is loaded from the top. The pouch with the loaded product then advances downwards; the top is sealed and the pouch is cut off. The sealing of the top of the pouch forms the bottom of the next pouch. During this process a tear notch may be punched.
See also
Filler (packaging)
Food packaging
References
Industrial design
Packaging machinery | Vertical form fill sealing machine | [
"Engineering"
] | 872 | [
"Industrial design",
"Design engineering",
"Packaging machinery",
"Design",
"Industrial machinery"
] |
14,550,347 | https://en.wikipedia.org/wiki/Bevirimat | Bevirimat (research code MPC-4326) is an anti-HIV drug derived from a betulinic acid-like compound, first isolated from Syzygium claviflorum, a Chinese herb. It is believed to inhibit HIV by a novel mechanism, so-called maturation inhibition. It is not currently U.S. Food and Drug Administration (FDA) approved. It was originally developed by the pharmaceutical company Panacos and reached Phase IIb clinical trials. Myriad Genetics announced on January 21, 2009 the acquisition of all rights to bevirimat for $7M USD. On June 8, 2010 Myriad Genetics announced that it was abandoning their HIV portfolio to focus more on cancer drug development.
Pharmacokinetics
According to the only currently available study, "the mean terminal elimination half-life of bevirimat ranged from 56.3 to 69.5 hours, and the mean clearance ranged from 173.9 to 185.8 mL/hour."
Mechanism of action
Like protease inhibitors, bevirimat and other maturation inhibitors interfere with protease processing of newly translated HIV polyprotein precursor, called gag. Gag is an essential structural protein of the HIV virus. Gag undergoes a chain of interactions both with itself and with other cellular and viral factors to accomplish the assembly of infectious virus particles. HIV assembly is a two-stage process involving an intermediate immature capsid that undergoes a structurally dramatic maturation to yield the infectious particle. This alteration is mediated by the viral protease, which cleaves the Gag polyprotein precursor, allowing the freed parts to reassemble to form the core of the mature virus particle. Bevirimat prevents this viral replication by specifically inhibiting cleavage of the capsid protein (CA) from the SP1 spacer protein. First, bevirimat enters a growing virus particle as it buds from an infected cell and binds to the Gag polypeptide at the CA/SP1 cleavage site. This prevents the protease enzyme from cleaving CA-SP1. As the capsid protein remains bound to SP1, the virus particle core is prevented from compressing into its normal mature shape, which is crucial for infectivity, resulting in the release of an immature, non-infectious particle.
Metabolism
It has been found that bevirimat does not inhibit the cytochrome P450 system or interact with the human P-glycoprotein. Unformulated bevirimat is not well absorbed from the gastrointestinal tract into the blood. Some of the less desirable properties of unformulated bevirimat and its salts include: inadequate bioavailability, poor solubility of the pharmaceutical composition in gastric fluid, insufficient dispersion of bevirimat in gastric fluid, below standard long term safety profile for oral dosage forms, below standard long term chemical and physical stability of the final dosage form, tendency for conversion to metastable forms, lengthy dissolution times for oral dosage forms, and precipitation in gastric or intestinal fluids. Some pharmaceutical compositions of formulated bevirimat have shown to have better properties over unformulated bevirimat. Some of these properties include: improved bioavailability, improved solubility of the composition in gastric fluid, improved dispersion of bevirimat in gastric fluid, improved safety for oral dosage forms, improved chemical and physical stability of the oral dosage form, decreased conversion to metastable forms, and decreased rate of precipitation in gastric fluid. Bevirimat was rapidly absorbed after oral administration, with detectable concentrations present in the plasma within 15 minutes after administration and peak plasma concentrations were achieved approximately one to three hours after administration. The plasma had a mean plasma elimination half-life ranging from 58 to 80 hours. This long half-life of bevirimat supports once-daily dosing. Elimination of bevirimat is primarily via hepatobiliary routes, with renal elimination counting for less than 1% of the dose.
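To see why a half-life in this range is compatible with once-daily dosing, a back-of-the-envelope sketch follows; the 65-hour value is simply a representative point within the reported range, and first-order elimination is assumed:

```python
import math

half_life_h = 65.0        # representative value within the reported 56-80 h range (assumption)
dosing_interval_h = 24.0  # once-daily dosing

k_el = math.log(2) / half_life_h                        # first-order elimination rate constant
fraction_remaining = math.exp(-k_el * dosing_interval_h)
accumulation_ratio = 1.0 / (1.0 - fraction_remaining)   # standard repeated-dosing accumulation factor

print(f"fraction remaining after 24 h: {fraction_remaining:.2f}")    # ~0.77
print(f"steady-state accumulation ratio: {accumulation_ratio:.1f}")  # ~4.4
```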
Toxicity and side effects
Preclinical studies have not presented any sign that bevirimat might be associated with any specific safety concerns that would limit its clinical use. In vitro preclinical studies in human cells suggest that bevirimat should have low potential for cytotoxicity. There is no evidence of any reproductive or developmental toxicity and it is not immunotoxic. Bevirimat was initially evaluated for safety and pharmacokinetics in a single-dose, randomized, double-blind, placebo-controlled phase clinical study in healthy volunteers. It was administered as an oral solution in doses of 25, 50, 100, and 250 mg. The plasma concentrations were dose-proportional, and the compound was seen to be safe and well tolerated with no dose-limiting toxicities and no serious adverse effects. In one clinical trial, headache was the most commonly reported side effect of bevirimat, reported by four participants on bevirimat and one on the placebo. The second most commonly reported side effect was throat discomfort, reported by two participants on bevirimat. No serious adverse effects were reported, all adverse effects reported were mild, and no participants discontinued use of bevirimat because of the adverse effects.
Resistance
In vitro studies have shown that the presence of a number of single nucleotide polymorphisms in the CA/SP1 cleavage site results in resistance to bevirimat. However, mutations at these sites were not found in phase I and II clinical trials. Instead, mutations in the glutamine-valine-threonine (QVT) motif of the SP1 peptide are also known to cause bevirimat resistance. In addition, V362I mutations have been shown to confer strong resistance to bevirimat, while the S373P and I376V mutations may confer low resistance. A further consideration in the use of bevirimat is that, since it targets the CA/SP1 cleavage site, it could also be used in the treatment of protease inhibitor-resistant patients. Except for A364V, mutations in the CA/SP1 cleavage site have been shown to result in fitness deficits when combined with protease inhibitor resistance, suggesting that these mutations may develop slowly. It has been shown that protease inhibitor resistance can result in an increase in the occurrence of mutations within the downstream QVT motif.
Clinical trials
In December 2007, some results of the Phase IIb trial were released. Thomson Financial News reported that, "some patients respond 'very well' to the drug, while another population 'does not respond as well at current dose levels.'" Panacos said it intends to add a group to the study at a higher dosage. The drug manufacturer, Panacos, has stated that success with bevirimat hinges on a patient's particular HIV not having a specific group of genetic mutations in HIV's Gag protein. They evaluated the study participants' virus and found that a participant's virologic response depended greatly on whether or not the Gag protein of that participant's virus had polymorphisms—multiple mutations in the protein's structure. After sampling the virus of 100 patients in the company's database, they found that about 50 percent did not have Gag polymorphisms, meaning that about 50 percent would likely respond well to the drug.
See also
BMS-955176
References
External links
AIDSmeds Bevirimat
An animation illustrating Bevirimat's mechanism of action
Overview and Publication Listing for Bevirimat from Panacos
Carboxylic acids
Maturation inhibitors
Triterpenes
Experimental antiviral drugs
Cyclopentanes | Bevirimat | [
"Chemistry"
] | 1,613 | [
"Carboxylic acids",
"Functional groups"
] |
17,339,449 | https://en.wikipedia.org/wiki/RNA%20immunoprecipitation%20chip | RIP-chip (RNA immunoprecipitation chip) is a molecular biology technique which combines RNA immunoprecipitation with a microarray. The purpose of this technique is to identify which RNA sequences interact with a particular RNA binding protein of interest in vivo. It can also be used to determine relative levels of gene expression, to identify subsets of RNAs which may be co-regulated, or to identify RNAs that may have related functions. This technique provides insight into post-transcriptional gene regulation, which is mediated by interactions between RNAs and RNA binding proteins.
Procedural Overview
Collect and lyse the cells of interest.
Isolate all RNA fragments and the proteins bound to them from the solution.
Immunoprecipitate the protein of interest. The solution containing the protein-bound RNAs is washed over beads which have been conjugated to antibodies. These antibodies are designed to bind to the protein of interest. They pull the protein (and any RNA fragments that are specifically bound to it) out of the solution which contains the rest of the cell contents.
Dissociate the protein-bound RNA from the antibody-bead complex. Then, use a centrifuge to separate the protein-bound RNA from the heavier antibody-bead complexes, keeping the protein-bound RNA and discarding the beads.
Dissociate the RNA from the protein of interest.
Isolate the RNA fragments from the protein using a centrifuge.
Use Reverse Transcription PCR to convert the RNA fragments into cDNA (DNA that is complementary to the RNA fragments).
Fluorescently label these cDNA fragments.
Prepare the gene chip. This is a small chip that has DNA sequences bound to it in known locations. These DNA sequences correspond to all of the known genes in the genome of the organism that the researcher is working with (or a subset of genes that the researcher is interested in). The cDNA sequences that have been collected will be complementary to some of these DNA sequences, as the cDNAs represent a subset of the RNAs transcribed from the genome.
Allow the cDNA fragments to competitively hybridize to the DNA sequences bound to the chip.
Detection of the fluorescent signal from the cDNA bound to the chip tells researchers which gene(s) on the chip were hybridized to the cDNA.
The genes fluorescently identified by the chip analysis are the genes whose RNA interacts with the original protein of interest. The strength of the fluorescent signal for a particular gene can indicate how much of that particular RNA was present in the original sample, which indicates the expression level of that gene.
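The final step above is typically followed by a simple computational comparison of the immunoprecipitated (IP) sample against a total-RNA (input) sample hybridized to the same array design. The sketch below is a minimal, hypothetical example of such an analysis; the probe names, signal values, and fold-change cutoff are invented for illustration and are not part of any published RIP-chip protocol.

```python
import math

# Hypothetical background-corrected fluorescence intensities per probe (gene).
ip_signal    = {"geneA": 5200.0, "geneB": 310.0, "geneC": 1800.0}   # RIP (IP) array
input_signal = {"geneA":  400.0, "geneB": 290.0, "geneC":  450.0}   # total-RNA array

def enrichment(ip, inp, min_fold=2.0):
    """Return log2(IP/input) per gene and whether it passes a fold-change cutoff."""
    results = {}
    for gene in ip:
        ratio = ip[gene] / inp[gene]
        results[gene] = (math.log2(ratio), ratio >= min_fold)
    return results

for gene, (log2_ratio, enriched) in enrichment(ip_signal, input_signal).items():
    status = "enriched (candidate target of the protein)" if enriched else "not enriched"
    print(f"{gene}: log2(IP/input) = {log2_ratio:+.2f} -> {status}")
```

In a real experiment the same comparison would be made across thousands of probes, usually after array normalization and with replicate-based statistics rather than a fixed fold-change cutoff.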
Development and Similar Techniques
Previous techniques aimed at understanding protein-RNA interactions included RNA electrophoretic mobility shift assays and UV-crosslinking followed by RT-PCR; however, such selective analyses cannot be used when the bound RNAs are not yet known. To resolve this, RIP-chip combines RNA immunoprecipitation, which isolates the RNA molecules interacting with a specific protein, with a microarray, which elucidates the identity of the RNAs participating in this interaction. Alternatives to RIP-chip include:
RIP-seq: Involves sequencing the RNAs that were pulled down using high-throughput sequencing rather than analyzing them with a microarray. Zhao et al. (2010) combined the RNA immunoprecipitation procedure with RNA sequencing. Using specific antibodies (α-Ezh2), they immunoprecipitated nuclear RNA isolated from mouse ES cells and subsequently sequenced the pulled-down RNA using the next-generation sequencing platform Illumina.
CLIP: The RNA binding protein is cross-linked to the RNA by UV light prior to lysis, which is followed by RNA fragmentation, immunoprecipitation, a high-salt wash, SDS-PAGE, membrane transfer, proteinase digestion, cDNA library preparation, and sequencing in order to identify the direct RNA binding sites. CLIP was first combined with high-throughput sequencing in HITS-CLIP to determine Nova-RNA binding sites in the mouse brain, and in iCLIP, which enabled amplification of truncated cDNAs and introduced the use of UMIs.
ChIP-on-chip: A similar technique which detects the binding of proteins to genomic DNA rather than RNA.
References
Genetics techniques
Microarrays
RNA
Protein methods | RNA immunoprecipitation chip | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 888 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Protein methods",
"Protein biochemistry",
"Genetic engineering",
"Bioinformatics",
"Molecular biology techniques"
] |
17,339,953 | https://en.wikipedia.org/wiki/Lactol | In organic chemistry, a lactol is a functional group which is the cyclic equivalent of a hemiacetal (R–CH(OH)–OR′) or a hemiketal (R2C(OH)–OR′).
The compound is formed by the intramolecular, nucleophilic addition of a hydroxyl group (–OH) to the carbonyl group (C=O) of an aldehyde (–CHO) or a ketone (R–CO–R′).
A lactol is often found as an equilibrium mixture with the corresponding hydroxyaldehyde. The equilibrium can favor either direction depending on ring size and other conformational effects.
The lactol functional group is prevalent in nature as a component of aldose sugars.
Chemical reactivity
Lactols can participate in a variety of chemical reactions including:
Oxidation to form lactones
Reaction with alcohols to form acetals
Reaction of sugars with alcohols or other nucleophiles to form glycosides
Reduction (deoxygenation) to form cyclic ethers
References
Functional groups | Lactol | [
"Chemistry"
] | 206 | [
"Lactols",
"Functional groups"
] |
17,340,773 | https://en.wikipedia.org/wiki/Carbuterol | Carbuterol (INN; carbuterol hydrochloride USAN) is a short-acting β2 adrenoreceptor agonist.
Synthesis
References
Beta2-adrenergic agonists
Phenylethanolamines
Ureas | Carbuterol | [
"Chemistry"
] | 56 | [
"Organic compounds",
"Ureas"
] |
17,341,064 | https://en.wikipedia.org/wiki/Christopher%20Snowden | Sir Christopher Maxwell Snowden (born 1956) is a British electronic engineer and academic. He was Vice-Chancellor of the University of Surrey (2005–2015) and of the University of Southampton (2015–2019). He was president of Universities UK for a two-year term until 31 July 2015. He is currently the chairman of the ERA Foundation.
Biography
Early career
Snowden studied electronic and electrical engineering at the University of Leeds, gaining a BSc in 1977, and an MSc and PhD in 1982. His PhD involved microwave oscillators for radar applications and semiconductor device modelling. He conducted his PhD research at Racal-MESL Ltd near Edinburgh in Scotland as well as at the University of Leeds.
From 1977 to 1978, Snowden was an application engineer at the Mullard Applications Laboratory. He lectured in the Department of Electronics at the University of York from 1982 to 1983. From 1983 to 2005 he was a member of staff at the University of Leeds, his alma mater, working in the Department of Electrical and Electronic Engineering and becoming professor of microwave engineering in 1992. He was head of the school from 1995 to 1998 and briefly acted as warden of Bodington Hall. While at Leeds he was a founder of the Institute of Microwave and Photonics and had 50 PhD students under his supervision. He also worked at M/A-COM in the US between 1989 and 1991 as senior staff scientist in the Corporate Research and Development Centre, based just outside Boston.
In 1998, he was appointed to the board of Filtronic plc as Executive Director of Technology, where he initiated the Global Technology Group. He was subsequently appointed joint chief executive officer of Filtronic plc in 1999. In 2001, he became chief executive officer of Filtronic ICS.
He was also a visiting professor at Durham University until 2005 and a visiting scientist at the Delft University of Technology from 1996 to 1998.
University of Surrey
Snowden was President and Vice-Chancellor of the University of Surrey from 2005 to 2015.
In 2009 he announced 65 job cuts, just weeks after the University announced it had successfully bid for £600,000 of funding to help people at risk of losing their jobs during the recession. He was later criticized for proposing further job cuts despite the university having a £4m surplus at the time. Under his leadership, Surrey considered introducing metrics-based measurement of staff performance based on the number of students achieving 60% or above, and later considered a new threshold that staff needed to reach in student evaluations (3.8 out of 5) if they were to avoid being targeted for special measures. The latter prompted the UCU to consider a vote of no confidence in Snowden.
The University achieved 4th place in the 2016 Guardian University League Table, rising from 6th place in 2015. Surrey was named University of the Year in The Times and Sunday Times Good University Guide 2016. It came top in the 'Best Teaching' and 'Best Student Experience' categories. This was despite ongoing disputes with students and the UCU over cuts across the University.
Universities UK
Snowden was president of the 134-member Universities UK group (UUK) from 1 August 2013 to 31 July 2015. He succeeded Eric Thomas, the vice-chancellor of the University of Bristol, and was succeeded by Dame Julia Goodfellow. From November 2012 to August 2013, Snowden held one of the vice-president positions of UUK, representing England and Northern Ireland, and from 2009 to 2011 he chaired their Employability, Business and Industry Policy Committee.
University of Southampton
On 20 March 2015, the University of Southampton announced that Snowden would become its new Vice Chancellor following the retirement of Professor Don Nutbeam, a move which took effect from October 2015.
In 2017, Snowden spearheaded the biggest investment program in Southampton's 155-year history, with a plan to invest over £600 million over the next decade. To fund this, the University issued a £300 million bond.
In June 2017 Snowden spoke out against the Teaching Excellence Framework which had given Southampton University a Bronze rating, calling it "fundamentally flawed" and having "no value or credibility".
In 2018 the University of Southampton was awarded a silver rating. Snowden released a statement thanking those within the institution who had contributed and stating that the rating was an assurance to students that their experience at the University of Southampton would translate into excellent graduate outcomes.
Snowden retired from his role at Southampton in spring 2019 and was succeeded as Vice-Chancellor by Professor Mark Smith.
Criticism of salary
From June 2017 Snowden's salary became part of the UK-wide debate on vice-chancellors' pay, which had been started by criticism of the pay of Dame Glynis Breakwell, Vice-Chancellor of the University of Bath. Snowden's salary of £433,000 was among the higher salaries in the UK higher education sector and drew specific criticism from the then Universities Minister Jo Johnson and the Labour peer Lord Adonis. In March 2018 The Guardian, in an article about UK vice-chancellors' pay, highlighted that Snowden's salary as head of the University of Southampton was higher than those of the chief executives of Southampton City Council (£166,786) and University Hospital Southampton NHS Foundation Trust (£195,000).
There was additional criticism of a substantial pay increase, including from UCU General Secretary Sally Hunt. However, this was later clarified as being the difference between Snowden's payment for his first 10 months in the role in 2015–2016 and his salary for his first full 12 months of employment in the academic year 2016–2017.
The Chair of the University of Southampton's Council Gill Rider defended Snowden's level of remuneration as reflecting his experience.
Research
Snowden's research interests are in the areas of microwave, millimetre-wave and optoelectronic devices and circuits. He pioneered the application of numerical physical device models to comprehensively describe electron transport in microwave transistor operation and, in particular, to investigate device-circuit interaction properties. This allowed transistor designs to be significantly improved and optimized. This work was specifically recognized in his election as a Fellow of the Royal Society and as a Fellow of the IEEE.
His early work focused on two-dimensional numerical modelling. In particular, he worked on hot-electron effects in short-gate length field effect transistors (FETs), where he showed that the high energy electrons in transistor substrates contributed significantly to the conduction current. He also contributed to the development of new non-linear laser diode models, which found particular application in emerging high data rate communication systems.
During the mid-1980s, along with colleagues in Lille and Duisburg universities, he explored the potential for a new class of physical model, which became known as the quasi-two-dimensional (Q2D) approach. This was shown to be extremely effective at modelling field-effect transistors, such as the popular metal semiconductor FET (MESFET). Snowden's models were shown to have the ability to accurately predict the DC and RF performance based on the physical geometry and material properties available from fabrication data. Moreover, the Q2D model can be solved over 1000 times faster than full two-dimensional models, making it suitable for computer aided design applications. These models were widely used around the world in industry and academia. The models were used to develop high performance microwave transistors with highly predictable characteristics which went on to be manufactured in high volumes by several companies. One of the most successful was the 'hi-lo-hi' pulse-doped microwave transistor which achieved high breakdown voltages and was particularly suited to high volume manufacturing.
Snowden went on to apply this technique to high electron mobility transistors (HEMTs), between 1995 and 2005 utilizing highly effective quantum charge-control models. It was shown to be an effective method for modelling and designing AlGaAs/GaAs HEMTs and the important pseudomorphic high electron mobility transistors (pHEMTs) based on InGaAs/GaAs systems. New designs of power pHEMT (some with capabilities of over 100W at 2 GHz) were developed and fabricated using this knowledge, which achieved high breakdown voltages while retaining excellent signal gain at microwave frequencies. pHEMTs are widely used in communication applications and many billions of circuits based on pHEMT integrated circuits have been used in products such as mobile phones, radar and satellite receivers. Since 2008, he has applied new Q2D models to laterally diffused MOS power transistors (LDMOS) for high power amplifiers in communications systems, achieving similar high levels of accurate prediction and speed advantage.
Between 1990 and 1997, Snowden developed a new electrothermal physics-based equivalent circuit model for heterojunction bipolar transistors, which was suited to power amplifier applications (widely used in cellular handsets). He was awarded the IEEE Microwave Theory and Techniques Society Microwave Prize in 1999 for this work, described in his 1997 paper "Large-signal Microwave Characterization of AlGaAs/GaAs HBT's Based on a Physics Based Electrothermal Model" (IEEE TMTT, MTT-45, pp. 58–71, 1997).
Snowden went on to develop further models incorporating the interaction between thermal effects and electronic behavior, which proved to be important in accurately modelling power transistors and in power amplifier designs. Subsequently, he developed this into fully integrated models incorporating electromagnetic effects into the physical models, demonstrating the significance of this type of global model for millimetre-wave circuits.
He also developed several novel techniques for integrating microwave, millimeter-wave and optical circuits. During his time at M/A-COM whilst working as Senior Staff Scientist he extended their glass microwave integrated circuit (GMIC) technology to photonics, introducing the concept of embedding light guides in the GMIC to allow photonic circuits and interfaces to solid-state lasers, detectors and high speed processors. He first presented these concepts at the 1991 IEEE LEOS conference and the concept was subsequently developed for use at 622 Mbit/s in synchronous optical network (SONET) applications.
Snowden has written eight books, including Introduction to Semiconductor Device Modelling. He published one of the first interactive circuit analysis software packages for personal computers with Wiley in 1988. He has acted as editor for four journals and three special issues, as well as the EEE Wiley book series. He has chaired a number of major international conferences, including the 2006 European Microwave Conference.
Fellowships, memberships, societies and companies
Snowden is past-president of the Institution of Engineering and Technology (IET) (2009–2010). Until August 2013 he was vice-president of the Royal Academy of Engineering where he chaired the Academy's Engineering Policy Committee. In 2014 he was invited to be Deputy Chairman of the 2015 judging panel for the Queen Elizabeth Prize for Engineering (QEPrize) and is now the chair.
Snowden was appointed by the Prime Minister to his advisory Council for Science and Technology (CST) in 2011. He is also a member of the UK Government's Foresight Advisory Board.
Snowden was a member of the governing body of the UK's Innovate UK (previously known as the Technology Strategy Board (TSB)) from 2009 to 2015. He was a member of the Council for Industry and Higher Education (CIHE) and is a current member of the Leadership Council for the National Centre for Universities and Business (NCUB). Between 2006 and 2012, he was a member of the Council of the UK's Engineering and Physical Sciences Research Council (EPSRC).
He is a Fellow of the Royal Society (2005) and was a member of their Council (2012–2013). He is a Fellow of the Royal Academy of Engineering (2000), the Institution of Engineering and Technology (IET) (1993), the IEEE (1996) and the City and Guilds of London Institute (2005).
He has been a member of Foresight Committee panels on Communications and Media and Exploitation of the Electromagnetic Spectrum. He was a member of the UK's National Advisory Committee on Electronic Materials from 2002 to 2007. He was a member of the supervisory board of the Electromagnetic Remote Sensing Defense Technology Centre from 2002 to 2005. He has appeared before the UK's House of Commons Select Committee on several occasions.
He was Chairman of the Daphne Jackson Trust from 2005 to 2009 and was a patron of the Trust until 2015. He was a patron of Surrey Youth Focus and Transform Housing & Support until 2015. He was a Governor of the Royal Surrey County Hospital NHS Foundation Trust until 2011.
He has been a non-executive director of companies such as Intense Ltd, CENAMPS Ltd and SSTL. He was a board member of the European Microwave Association from 2003 to 2007, where he was also vice-chair for a period. He was chair of HERO Ltd from 2006 to 2009 and a member of the governing board of the Engineering Technology Board from 2007 to 2009.
He was a member of the South East England Science, Engineering and Technology Advisory Council (SESETAC) until 2011.
Honours and awards
He was awarded the IEEE Microwave Prize in 1999 for his research paper on microwave power transistors for communication applications, and the IEEE Distinguished Educator Award in 2009 by the Microwave Theory and Techniques Society (MTT).
The Royal Academy of Engineering awarded him their Silver Medal for 'Outstanding Personal Contributions to the UK Microwave Semiconductor Industry' in 2004.
In 2009 he received the IEEE MTT Distinguished Educator Award for outstanding achievements as an educator, mentor, and role model of microwave engineers and engineering students.
Snowden was knighted in the 2012 New Year Honours for services to engineering and higher education.
References
External links
Vice-Chancellor's Office – University of Surrey
EPSRC
1999 Microwave Prize at the IEEE Microwave Theory and Techniques Society
Announcement of becoming the Vice-Chancellor in July 2004
Independent August 1999
1956 births
Living people
People associated with the University of Surrey
Fellows of the Royal Academy of Engineering
Fellows of the Institution of Engineering and Technology
Alumni of the University of Leeds
Fellows of the Royal Society
Fellows of the IEEE
Knights Bachelor
British electronics engineers
English engineers
Vice-chancellors of the University of Southampton
Academics of the University of Leeds | Christopher Snowden | [
"Engineering"
] | 2,898 | [
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
17,342,923 | https://en.wikipedia.org/wiki/Catlin%20%28medicine%29 | A catlin or catling is a long, double-bladed surgical knife. It was commonly used from the 17th to the mid 19th century, particularly for amputations; after that its use declined in favor of mechanically driven (and later, electrically driven) oscillating saws.
Surgeon William Clowes wrote about the instrument in a medical treatise of 1596, stating that amputation required the use of "a very good catlin, and an incision knife". Later, surgeon John Woodall referred to a "catlinge" in a work of 1639. By 1693, when British navy surgeon John Moyle described proper amputation techniques, he wrote that "with your Catling, divide the Flesh and Vessels about and between the bones, and with the back of your Catling, remove the Periosteum that it may not hinder the saw, nor cause greater Torment in the Operation".
The term was thereafter understood to refer to an interosseous knife.
See also
Instruments used in general surgery
References
Surgical instruments
Medical knives
Amputation | Catlin (medicine) | [
"Biology"
] | 220 | [
"Medical knives",
"Medical technology"
] |
17,343,462 | https://en.wikipedia.org/wiki/Clevidipine | Clevidipine (INN, trade name Cleviprex) is a dihydropyridine calcium channel blocker indicated for the reduction of blood pressure when oral therapy is not feasible or not desirable. Clevidipine is administered intravenously only, and practitioners titrate the drug to lower blood pressure. It has an initial half-life of approximately one minute and is rapidly inactivated by esterases.
It was approved by the United States Food and Drug Administration on August 1, 2008.
Basic chemical and pharmacological properties
Clevidipine is a dihydropyridine L-type calcium channel blocker, highly selective for vascular, as opposed to myocardial, smooth muscle and, therefore, has little or no effect on myocardial contractility or cardiac conduction. It reduces mean arterial blood pressure by decreasing systemic vascular resistance. Clevidipine does not reduce cardiac filling pressure (pre-load), confirming lack of effects on the venous capacitance vessels. No increase in myocardial lactate production in coronary sinus blood has been seen, confirming the absence of myocardial ischemia due to coronary steal.
Clevidipine is rapidly metabolized by esterases in the blood and extravascular tissues. Therefore, its elimination is unlikely to be affected by hepatic (liver) or renal (kidney) dysfunction. Clevidipine does not accumulate in the body, and its clearance is independent of body weight.
The initial phase half-life is approximately 1 minute and the terminal half-life is approximately 15 minutes. Clevidipine will still be rapidly metabolized in pseudocholinesterase-deficient patients.
Clevidipine is formulated as a lipid emulsion in 20% soybean oil (Intralipid) and contains approximately 0.2 g of fat per mL (2.0 kcal/mL). Clevidipine also contains glycerin (22.5 mg/mL), purified egg yolk phospholipids (12 mg/mL), and sodium hydroxide to adjust pH. Clevidipine has a pH of 6.0–8.0.
In the perioperative patient population Clevidipine produces a 4–5% reduction in systolic blood pressure within 2–4 minutes after starting a 1–2 mg/hour IV infusion.
In studies up to 72 hours of continuous infusion, there was no evidence of tolerance.
In most patients, full recovery of blood pressure is achieved in 5–15 minutes after the infusion is stopped.
Stereochemistry
Clevidipine contains a stereocenter and consists of two enantiomers. It is a racemate, i.e., a 1:1 mixture of the (R)- and (S)-forms.
Dosage and administration
Aseptic technique should be used when handling Cleviprex since it contains phospholipids and can support microbial growth.
Cleviprex is administered intravenously and should be titrated to achieve the desired blood pressure reduction. Blood pressure and heart rate should be monitored continually during infusion.
Cleviprex is a single use product that should not be diluted and should not be administered in the same line as other medications. Once the stopper is punctured, Cleviprex should be used within 12 hours and any unused portion remaining in the vial should be discarded. Change IV lines in accordance with hospital protocol.
An IV infusion at 1–2 mg/hour is recommended for initiation and should be titrated by doubling the dose every 90 seconds. As the blood pressure approaches goal, the infusion rate should be increased in smaller increments and titrated less frequently. The maximum infusion rate for Cleviprex is 32 mg/hour. Most patients in clinical trials were treated with doses of 16 mg/hour or less.
Because of lipid load restrictions, no more than 1000 mL (or an average of 21 mg/hour) of Cleviprex infusion is recommended per 24 hours. In clinical studies, no significant changes occurred in serum triglyceride levels in the Cleviprex treated patients. There is little experience with infusion durations beyond 72 hours at any dose. The infusion can be reduced or discontinued to achieve desired blood pressure while appropriate oral therapy is established.
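The titration scheme and the 24-hour lipid-load limit described above reduce to simple arithmetic. The sketch below is an illustrative calculation only, not a dosing tool; the starting rate of 2 mg/hour and the printed schedule are assumptions chosen to match the figures quoted in the text. It shows how doubling every 90 seconds reaches the 32 mg/hour maximum within a few steps, and why limiting the 0.5 mg/mL formulation to 1000 mL per 24 hours corresponds to an average of roughly 21 mg/hour.

```python
# Illustrative arithmetic only; not clinical guidance.

MAX_RATE_MG_H = 32.0          # maximum recommended infusion rate (mg/hour)
CONCENTRATION_MG_ML = 0.5     # clevidipine concentration in the emulsion (mg/mL)
LIPID_LIMIT_ML = 1000.0       # recommended maximum infusion volume per 24 hours (mL)

def doubling_schedule(start_mg_h=2.0, step_s=90):
    """Return (time in seconds, rate in mg/hour) pairs for successive doublings."""
    schedule, rate, t = [], start_mg_h, 0
    while rate <= MAX_RATE_MG_H:
        schedule.append((t, rate))
        rate *= 2
        t += step_s
    return schedule

for t, rate in doubling_schedule():
    print(f"t = {t:>3d} s  ->  {rate:>4.1f} mg/hour")

# Average infusion rate implied by the 24-hour lipid-load limit:
avg_rate = LIPID_LIMIT_ML * CONCENTRATION_MG_ML / 24.0
print(f"1000 mL of 0.5 mg/mL over 24 h is about {avg_rate:.1f} mg/hour on average")
```

Under these assumptions, four doublings from 2 mg/hour reach the 32 mg/hour ceiling in about six minutes, and the volume limit works out to roughly 20.8 mg/hour averaged over a day, matching the figure of about 21 mg/hour quoted above.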
Safety information
Cleviprex is intended for intravenous use. Titrate drug depending on the response of the individual patient to achieve the desired blood pressure reduction. Monitor blood pressure and heart rate continually during infusion, and then until vital signs are stable. Patients who receive prolonged Cleviprex infusions and are not transitioned to other antihypertensive therapies should be monitored for the possibility of rebound hypertension for at least 8 hours after the infusion is stopped.
In clinical trials, the safety profile of clevidipine was generally similar to sodium nitroprusside, nitroglycerin, or nicardipine in patients undergoing cardiac surgery.
Cleviprex is contraindicated in patients with allergies to soybeans, soy products, eggs, or egg products; defective lipid metabolism such as pathologic hyperlipemia (rare genetic disorders characterized by abnormal triglyceride metabolism), lipoid nephrosis, or acute pancreatitis if it is accompanied by hyperlipidemia; and in patients with severe aortic stenosis.
Hypotension and reflex tachycardia are potential consequences of rapid upward titration of Cleviprex. In clinical trials, a similar increase in heart rate was observed in both Cleviprex and comparator arms. Dihydropyridine calcium channel blockers can produce negative inotropic effects and exacerbate heart failure. Heart failure patients should be monitored carefully. Cleviprex gives no protection against the effects of abrupt beta-blocker withdrawal.
Most common adverse reactions (>2%) are headache, nausea, and vomiting.
Cleviprex should be used during pregnancy only if the potential benefit justifies the potential risk to the fetus.
Maintain aseptic technique while handling Cleviprex. Cleviprex contains phospholipids and can support microbial growth. Do not use if contamination is suspected. Once the stopper is punctured, use or discard within 12 hours.
Drug interactions
No clinical drug interaction studies were conducted. Cleviprex does not have the potential for blocking or inducing any CYP enzymes.
Storage
Cleviprex is available in ready-to-use 50- and 100-mL glass vials at a concentration of 0.5 mg/mL of clevidipine butyrate. Vials should be refrigerated at 2–8 °C (36–46 °F). Cleviprex can be stored at controlled room temperature for up to 2 months. Cleviprex is photosensitive and storage in cartons protects against photodegradation. Protection from light during administration is not required.
Phase III clinical trial results
Cleviprex has been evaluated in 6 Phase III clinical studies, including in the perioperative and emergency department/intensive care settings. These include the ESCAPE-1, ESCAPE-2, ECLIPSE, and VELOCITY trials.
ESCAPE-1 was a double-blind, randomized, placebo-controlled efficacy trial of 105 cardiac surgery patients. In ESCAPE-1, Cleviprex had a significantly lower rate of treatment failure when compared with placebo (7.5% vs 82.7%) and a 92.5% rate of success in lowering systolic blood pressure (SBP) by ≥15%. The median time to reduce SBP ≥15% from baseline was 6 minutes.
ESCAPE-2 was a double-blind, randomized, placebo-controlled efficacy trial of 110 cardiac surgery patients. In ESCAPE-2, Cleviprex had a significantly lower rate of treatment failure when compared with placebo (8.2% vs 79.6%) and a 91.8% treatment success rate. The median time to reduce SBP ≥15% from baseline was 5.3 minutes.
The ECLIPSE trials consisted of three safety trials in which 1506 patients were randomized to receive Cleviprex, nitroglycerin, sodium nitroprusside, or nicardipine, for the treatment of hypertension associated with cardiac surgery. The incidence of death, stroke, myocardial infarction (heart attack), and renal dysfunction at 30 days did not differ significantly between the pooled Cleviprex and comparator treatment arms.
VELOCITY was an open-label trial of 126 patients with severe hypertension (BP > 180/115 mmHg) in the emergency department and intensive care unit. In VELOCITY, 104 out of 117 patients (88.9%) achieved a target SBP mean decrease of 21.1% at 30 minutes.
References
Further reading
External links
Calcium channel blockers
Dihydropyridines
Chloroarenes
Carboxylate esters
Butyrate esters
Formals | Clevidipine | [
"Chemistry"
] | 1,912 | [
"Functional groups",
"Formals"
] |
17,344,084 | https://en.wikipedia.org/wiki/Durophagy | Durophagy is the eating behavior of animals that consume hard-shelled or exoskeleton-bearing organisms, such as corals, shelled mollusks, or crabs. It is mostly used to describe fish, but is also used when describing reptiles, including fossil turtles, placodonts and invertebrates, as well as "bone-crushing" mammalian carnivores such as hyenas. Durophagy requires special adaptations, such as blunt, strong teeth and a heavy jaw. Bite force is necessary to overcome the physical constraints of consuming more durable prey and gives a competitive advantage over other organisms by granting access to more diverse or exclusive food resources earlier in life. Animals with greater bite forces require less time to consume certain prey items, as a greater bite force can increase the net rate of energy intake when foraging and enhance fitness in durophagous species.
In the order Carnivora there are two dietary categories of durophagy: bonecrackers and bamboo eaters. Bonecrackers are exemplified by hyenas and borophagines, while bamboo eaters are primarily the giant panda and the red panda. Both have developed similar cranial morphology. However, the mandible morphology reveals more about their dietary resources. Both have a raised and dome-like anterior cranium, enlarged areas for the attachment of masticatory muscles, enlarged premolars, and reinforced tooth enamel. Bamboo eaters tend to have larger mandibles, while bonecrackers have more sophisticated premolars.
Teleost fish (Teleostei)
Many teleosts, for example the Atlantic wolffish, exhibit durophagous behaviour and crush hard prey with their appropriately adapted jaws and teeth. Other fish use their pharyngeal teeth, with the aid of a protrusible mouth that grabs prey and draws it into the mouth. The pharyngeal jaws found in more derived teleosts are more powerful, with the left and right ceratobranchials fusing to become one lower jaw and the pharyngeal branchials fusing to create a large upper jaw that articulates with the neurocranium. They have also developed a hypertrophied pharyngeal muscle to crush prey, with help from the molariform pharyngeal teeth. This permits the consumption of hard-shelled prey.
As in the Carnivora, however, some largely herbivorous or omnivorous teleost fishes also exhibit durophagous behaviour when feeding on plant foods, in that they crack the hard stones of fruit that fall into the water; spectacular examples include relatives of the carnivorous piranhas, such as Piaractus brachypomus and Piaractus mesopotamicus.
Triggerfish (Balistidae)
Triggerfish have jaws that contain a row of four teeth on either side, the upper jaw containing an additional set of six plate-like pharyngeal teeth. Triggerfish do not have jaw protrusion, and their enlarged jaw adductor muscles provide extra power to crush the protective shells and spines of their prey.
Cichlids (Cichlidae)
Mollusk shells can be crushed to expose the soft parts of the prey to digestive juices, or the soft parts can be removed from the shell. Species that crush shells are defined by their large and greatly thickened pharyngeal bones. These bones bear flat-crowned teeth and, together with their dorsal counterparts and drawn by powerful muscles, create a crushing mill. The jaws are less derived, as they are used just for picking up relatively large objects.
The second method cichlids use is to crush mollusk shells between powerful jaws armed with suitable teeth. Such cichlids possess short, broad jaws armed with an outer row of relatively few, strong, conical teeth and several inner rows of finer, also conical, teeth. These features are accompanied by a foreshortening of the skull and the development of particularly powerful mandibular adductor muscles. To feed with this type of structure, the fish protrudes its mouth ventrally to allow the prey to be seized by the jaws; the mouth is then retracted rapidly so that the hard-toothed jaws crush the mollusk shell with the resulting force. A series of biting movements completes the process: the shell fragments are spat out and the soft body is swallowed.
Chondrichthyans
Within the chondrichthyans, horn sharks (Heterodontidae), some rays (Myliobatidae) and chimeras (Holocephali) exhibit durophagous behaviour. They have adaptations to allow for this including stout flattened teeth, hypertrophied jaw adductor muscles and robust jaws to feed on hard prey such as crustaceans and molluscs. Sharks that crush prey have teeth with small, low rounded cusps that are numerous per row, or are molariform. The molariform teeth are smoothly rounded, lack cusps, and there are numerous teeth per row.
Horn sharks (Heterodontiformes)
Horn sharks have molariform teeth. The anterior teeth are pointed and are used for grasping while the posterior teeth are molariform and are used for crushing. Horn sharks feed primarily on limpets, bivalve molluscs and blue crabs.
Bonnethead shark (Sphyrna tiburo)
The bonnethead shark Sphyrna tiburo uses ram feeding to capture crab, shrimp and fish which are placed between the molariform teeth where they are crushed. This species also uses suction to transport prey to the esophagus for swallowing. By combining durophagous characteristics with altered kinematic and motor patterns, bonnethead sharks can prey on hard shelled animals. This characteristic distinguishes prey crushing from simply biting, which is a behaviour exhibited by elasmobranchs. While bonnethead sharks feed almost exclusively on crabs, they have the same tooth structure as the Horn sharks (Heterodontiformes).
Chimeras (Holocephali)
Chimeras (Holocephali) have pavement teeth that are flat, hexagonal in shape and interconnect to form an even dental plate. There is the presence of calcified strengthened cartilaginous jaws, calcified struts within the jaws and a lever 'nutcracker' system that amplifies the force of the jaw adductor muscles. The fusion of the palatoquadrate and mandibular symphysis, a restricted gape and asynchronous activation of the jaw adductors are key elements in the 'nutcracker' model of jaw-crushing ability. Chimeras use their pavement teeth for grinding molluscs, gastropods and crabs.
Myliobatidae
Myliobatidae are free-swimming rays whose pectoral fins make up broad, powerful "wings" which include the eagle and cow-nose rays. They feed on molluscs and have dentitions adapted to crushing. Dentitions of durophagous myliobatids show several specializations in the jaws and teeth related to their diet. The cartilaginous jaws are strengthened by calcified struts (trabeculae), and the palatoquadrate and mandibular symphysis are fused. Strong ligaments connecting the upper and lower jaws restrict the jaw gape. The strong adductor muscles can be asynchronously activated.
Eagle (Aetobatus narinari) and cow-nose (Rhinoptera javanica) rays
In eagle (Aetobatus narinari) and cow-nose (Rhinoptera javanica) rays, teeth are hexagonal and are arranged in anteroposterior files packed closely together in an alternating array to form an almost gap-free pavement, similar to the organization found in Chimeras. The teeth are covered with a layer of enameloid. The tooth pavement is stabilized by vertical surfaces that bear ridges and grooves which are interconnected with those on neighboring teeth. These rays also use their pavement teeth for grinding molluscs, gastropods and crabs. Cow nose rays are specialized suction feeders, which open and close their jaws to generate water movements that are used to excavate buried prey. Food capture is achieved by suction and the prey is then cleaned by actions similar to those used in excavation.
Myliobatis and Aetobatus
In Myliobatis and Aetobatus, anteroposterior ridges of the basal plate extend from the posterior margin of the tooth and these interdigitate with those of the succeeding tooth and also form a shelf on which the body of the neighboring tooth rests. The dentition of the bat ray (Myliobatis californica) is made up of a series of seven files of crushing teeth. The central hexagonal plate is very wide, taking up about half the width of the occlusal surface and it is flanked by three lateral files of smaller teeth on each side, the outermost being pentagonal. The crushing surface formed by the teeth of the upper jaw is more curved than that of the lower jaw.
Birds
Shorebirds commonly consume bivalves and snails, which are low in chitin but whose calcium carbonate shell makes up a large portion of their weight. Bivalves and snails are largely consumed whole by ducks and wading birds. The molluscivores that swallow snails or bivalves whole have large, well-muscularized gizzards for crushing the strong shells. The gizzard of red-necked stints and red knots is more than ten times larger than the proventriculus. The size of the gizzard is adaptable in these shorebirds, becoming atrophied when soft food items like worms are consumed and increasing in size and muscularity following prolonged consumption of snails, cockles or mussels. The production of chitinase for the hydrolysis of chitin is important for birds that consume mollusks.
Marine mammals
Sea otters (Enhydra lutris)
Sea otters preferentially forage on benthic invertebrates, particularly sea urchins, gastropod and bivalve mollusks, and crustaceans. Once prey is caught, the otters use their powerful jaws and sharp teeth to consume their meal quickly, even breaking through protective crustacean shells. They have canines that deliver a lethal bite, and molars that can crush bones and the shells of mollusks.
Sea otter molars are broad, flat, multi-cuspid teeth, and the carnassials are also modified for crushing. Both the temporalis and masseter muscles are well developed, creating a strong bite force. The teeth are extremely broad and the carnassials are highly molarized. Captured prey is manipulated with the forepaws or is held temporarily in loose skin pouches in the armpits. For larger, heavier-shelled prey, otters will sometimes exhibit tool-use behavior, breaking open sea urchins and mussels with a stone used as an anvil. Sea otters can also bite sea urchins and mussels open using their strong jaws and teeth. Adults can crush most of their food items, but youngsters have not yet developed sufficiently powerful jaws. Therefore, young otters require the assistance of a tool or stone. Tools may also be used when the molluscs are too large to be crushed in the jaws.
Mammals
Monkeys
All mangabeys appear to be durophagous and possess relatively thick molar enamel and expanded premolars, dental adaptations for processing hard foods. Their diet includes Sacoglottis gabonensis seeds, which can remain on the ground for months without rotting. Such hard-object feeding would have favoured selection for thick molar enamel and flattened molars for crushing seeds in mangabeys.
Giant panda
The giant panda is mainly a herbivore despite its short, relatively unspecialized digestive tract, which is characteristic of carnivores. Giant pandas lack the microbial digestion in the rumen or caecum that is typical of most herbivores for breaking down cellulose and lignin in plant cell walls. Therefore, giant pandas need to get their nutrients from the cell contents and the fraction of hemicellulose they can break down. The panda subsists mainly on bamboo and does so with modifications of its jaws. Pandas show elaboration of the crushing features of the dentition. The molars are broad, flat, multi-cuspid teeth and are the main grinding surface. Jaw action is not a simple crushing one but rather a definite sideways grinding. Panda jaws have a large zygomatico-mandibularis muscle, which is responsible for the sideways movement of the jaw. The glenoid is very deep, preventing back-and-forth movement of the jaw.
Bamboo represents a predictable food source which is seasonally abundant. Pandas are able to subsist on it despite its low nutritive content. They do this by moving large quantities through the digestive tract in a short period of time. They also reduce their energy expenditure by resting and remaining active only to feed, and they have no hibernation period, allowing them more foraging time. They have chosen security over uncertainty, as indicated by their bamboo-eating adaptations.
Hyaenids
Bone-crushing eating habits appear to be associated with stronger teeth, as is seen in hyaenids. This is because bone-crushing requires greater bite strength and increases the risk of canine breakage. In hyaenids, the carnassials are slightly less specialized as cutting blades than those of the Felidae. The bone-crushing adaptations relate mainly to the premolars. The anterior and posterior cusps are reduced and the central cusp enlarged and widened, so that the tooth is converted from a blade-like structure to a heavy conical hammer. Strong muscles are also required for bone crushing, and the temporalis attachment on the skull is enlarged by a strong sagittal crest. Heavy, hammer-like teeth and extremely strong jaws and jaw muscles make it possible for hyaenas to crack larger bones than other carnivores are capable of, and their highly efficient cutting carnassials can deal with tough hides and tendons.
Wolverine (Gulo gulo)
The wolverine has extremely powerful jaws and teeth, which, together with its scavenging habits, have earned it the name "hyena of the north". The wolverine is an effective scavenger, capable of cracking heavy bones, and shows the same adaptations in the jaw as the hyenas do. The sagittal crest projects well above the area of attachment of the neck muscles, and in a large animal it extends back far behind the level of the condyles to provide attachment for the relatively enormous temporalis muscles, creating a powerful bite force.
See also
List of feeding behaviours
References
Carnivory | Durophagy | [
"Biology"
] | 3,090 | [
"Eating behaviors",
"Carnivory"
] |
17,344,326 | https://en.wikipedia.org/wiki/Omar%20M.%20Yaghi | Omar M. Yaghi (; born February 9, 1965) is the James and Neeltje Tretter Chair Professor of Chemistry at the University of California, Berkeley, an affiliate scientist at Lawrence Berkeley National Laboratory, the founding director of the Berkeley Global Science Institute, and an elected member of the US National Academy of Sciences as well as the German National Academy of Sciences Leopoldina.
Early life and education
Yaghi was born in Amman, Jordan, in 1965, to a refugee family originally from Mandatory Palestine. He grew up in a household with many children that had limited access to clean water and no electricity. At the age of 15, he moved to the United States at the encouragement of his father. Although he knew little English, he began classes at Hudson Valley Community College, and later transferred to the University at Albany, SUNY, to finish his college degree. He began his graduate studies at the University of Illinois, Urbana-Champaign, and received his PhD in 1990 under the guidance of Walter G. Klemperer. He was a National Science Foundation postdoctoral fellow at Harvard University (1990–1992) with Richard H. Holm. In 2021, Yaghi was granted Saudi citizenship.
Academic career
He was on the faculties of Arizona State University (1992–1998) as an assistant professor, the University of Michigan (1999–2006) as the Robert W. Parry Professor of Chemistry, and the University of California, Los Angeles (2007–2012) as the Christopher S. Foote Professor of Chemistry as well as holding the Irving and Jean Stone Chair in Physical Sciences.
In 2012, he moved to the University of California, Berkeley, where he is now the James and Neeltje Tretter Professor of Chemistry. He was the director of the Molecular Foundry at Lawrence Berkeley National Laboratory from 2012 through 2013. He is the Founding Director of the Berkeley Global Science Institute. He is also a co-director of the Kavli Energy NanoSciences Institute of the University of California, Berkeley, and the Lawrence Berkeley National Laboratory, the California Research Alliance by BASF, as well as the Bakar Institute of Digital Materials for the Planet.
Research
Reticular chemistry
Yaghi pioneered reticular chemistry, a new field of chemistry concerned with stitching molecular building blocks together by strong bonds to make open frameworks. As stated by the International Balzan Prize Foundation, Omar Yaghi suggested the idea of using molecular building blocks and strong bonds to form crystalline materials in the early 1990s. At the time, the scientific community considered this idea chemically unfeasible, as the synthesis of strong bonding between molecular components usually led to poorly defined, amorphous solids. However, in 1995, Yaghi successfully crystallized metal-organic structures where metal ions are joined by charged organic linkers as exemplified by carboxylates to form strong bonds. This discovery paved the way for the development of a new class of materials: Metal-Organic Frameworks (MOFs), and thus it marked the start of reticular chemistry.
Metal-organic frameworks
His most recognizable work is in the design, synthesis, application, and popularisation of metal-organic frameworks (MOFs). By IUPAC recommendation, MOFs are considered a subclass of the coordination polymers first reported in 1959 by Yoshihiko Saito and colleagues. This was followed by E. A. Tomic, who in 1965 published a report titled "Thermal stability of coordination polymers", in which he synthesized and characterized many coordination polymers constructed with different ligands and various metal ions. In 1986, Hans-Peter Werner and colleagues published a coordination polymer of 2,5-dimethyl-N,N′-dicyanoquinonediimine and evaluated its electrical conductivity, and in 1989 Bernard Hoskins and Richard Robson reported a coordination polymer consisting of three-dimensionally linked rod-like segments. In general, coordination polymers are frail, disordered structures with poorly defined properties.
In the 1990s, Omar M. Yaghi made three breakthroughs that transformed the traditional coordination polymers into architecturally robust and permanently porous MOFs which are being widely used today: (1) crystallization of metal-organic structures where metal ions are joined by charged organic linkers as exemplified by carboxylates to form strong bonds (published in 1995); (2) introduction of metal-carboxylate clusters as secondary building units (SBUs), which was the key to building architecturally robust frameworks exhibiting permanent porosity as he proved by measuring for the first time their gas adsorption isotherms (published in 1998); (3) realization of ultra-high porosity with MOF-5 (published in 1999). In essence, the strong bonds holding the MOFs allow for their structural robustness, ultra-high porosity, and longevity in industrial applications.
Covalent organic frameworks
Omar M. Yaghi published the first paper of covalent organic frameworks (COFs) in 2005, reporting a series of 2D COFs. He reported the design and successful synthesis of COFs by condensation reactions of phenyl diboronic acid (C6H4[B(OH)2]2) and hexahydroxytriphenylene (C18H6(OH)6). Powder X-ray diffraction studies of the highly crystalline products having empirical formulas (C3H2BO)6·(C9H12)1 (COF-1) and C9H4BO2 (COF-5) revealed 2-dimensional expanded porous graphitic layers that have either staggered conformation (COF-1) or eclipsed conformation (COF-5). Their crystal structures are entirely held by strong bonds between B, C, and O atoms to form rigid porous architectures with pore sizes ranging from 7 to 27 angstroms. COF-1 and COF-5 exhibit high thermal stability (to temperatures up to 500 to 600 °C), permanent porosity, and high surface areas (711 and 1590 square meters per gram, respectively). The synthesis of 3D COFs has been hindered by longstanding practical and conceptual challenges until it was first achieved in 2007 by Omar M. Yaghi.
Yaghi is also known for the design and production of a new class of compounds known as zeolitic imidazolate frameworks (ZIFs). MOFs, COFs, and ZIFs are noted for their extremely high surface areas (exemplified by MOF-177) and very low crystalline densities (exemplified by COF-108).
Molecular weaving
Yaghi also pioneered molecular weaving, and synthesized the world's first material woven at the atomic and molecular levels (COF-505).
He has been leading the effort in applying these materials in clean energy technologies including hydrogen and methane storage, carbon dioxide capture and storage, as well as harvesting water from desert air.
According to a Thomson Reuters analysis, Yaghi was the second most cited chemist in the world from 2000 to 2010.
Entrepreneurship
In 2020, Yaghi founded Atoco, a California-based startup, aiming to commercialize the latest advancements and discoveries by Yaghi in MOFs and COFs technologies in the field of carbon capture and atmospheric water harvesting.
In 2021, Yaghi co-founded another startup called H2MOF, dedicated to solving the challenges associated with hydrogen storage by utilizing the latest discoveries by Yaghi in the field of reticular chemistry.
Honors and awards
Yaghi has received several global awards and medals throughout his career, including the Albert Einstein World Award of Science in 2017, the Wolf Prize in Chemistry in 2018, the Gregori Aminoff Prize in 2019, the VinFuture Prize in 2022, and the Science for the Future Ernest Solvay Prize in 2024. The following are among the key awards, medals and recognitions Yaghi has received:
1998 Solid State Chemistry Award of the American Chemical Society and Exxon Co. for his accomplishments in the design and synthesis of new materials
2004 Sacconi Medal of the Italian Chemical Society
2007 US Department of Energy Hydrogen Program Award for his work on hydrogen storage
2007 Materials Research Society Medal for his work in the theory, design, synthesis and applications of metal-organic frameworks
2007 Newcomb Cleveland Prize of the American Association for the Advancement of Science for the best paper published in Science
2009 American Chemical Society Chemistry of Materials Award
2009 Izatt-Christensen International Award
2010 Royal Society of Chemistry Centenary Prize
2013 China Nano Award
2015 King Faisal International Prize in Chemistry
2015 Mustafa Prize in Nanoscience and Nanotechnology
2016 TÜBA Academy Prize in Basic and Engineering Sciences for establishing Reticular Chemistry
2017 Spiers Memorial Award from the Royal Society of Chemistry
2017 Medal of Excellence of the First Order bestowed by King Abdullah II
2017 Japan Society of Coordination Chemistry International Award
2017 Bailar Medal in Inorganic Chemistry
2017 Kuwait Prize in Fundamental Sciences
2017 Albert Einstein World Award of Science conferred by the World Cultural Council
2018 BBVA Foundation Frontiers of Knowledge Award in Basic Sciences for pioneering Reticular Chemistry
2018 Wolf Prize in Chemistry for pioneering reticular chemistry via metal-organic frameworks and covalent organic frameworks
2018 his work on water harvesting from desert air using metal-organic frameworks showcased by the World Economic Forum in Switzerland as one of the top 10 emerging technologies
2018 Prince Sultan bin Abdulaziz International Prize for Water
2018 Eni Award in recognition of his work in applying framework chemistry to clean energy solutions including methane storage, carbon dioxide capture and conversion, and water harvesting from desert air
2019 Gregori Aminoff Prize by the Royal Swedish Academy of Sciences for the development of reticular chemistry
2019 MBR Medal for Scientific Excellence of the United Arab Emirates
2019 Nano Research Award
2020 August-Wilhelm-von-Hofmann-Denkmünze gold medal of the German Chemical Society for his contribution to reticular chemistry and for pioneering MOFs, COFs, and molecular weaving
2020 Royal Society of Chemistry Sustainable Water Award for his impactful development of water harvesting from desert air using metal–organic frameworks
2021 Belgium's International Solvay Chair in Chemistry
2021 Ertl Lecture Award by the Fritz Haber Institute of the Max Planck Society and Berlin universities
2022 VinFuture Prize for Outstanding Achievements in Emerging Fields in recognition of his pioneering Reticular Chemistry
2023 Wilhelm Exner Medal of Austria for his direct impact on business and industry through his scientific achievements
2024 Science for the Future Ernest Solvay Prize of Belgium in recognition of his pioneering work in reticular chemistry
2024 Tang Prize in sustainable development for his work in reticular chemistry
2024 Ullyot Public Affairs Lecture and Award of the Science History Institute
2024 Balzan Prize for Nanoporous Materials for Environmental Applications for his pioneering MOFs and COFs
2025 The Great Arab Minds Award
References
External links
The Yaghi Group website.
Yaghi CV
Omar M. Yaghi – Google Scholar Citations.
MOFs are the most beautiful compounds ever made
Omar M. Yaghi Lecture – Reticular Chemistry
Omar M. Yaghi Lecture – Harvesting water from desert air
1965 births
Living people
Albert Einstein World Award of Science Laureates
Jordanian people of Palestinian descent
American people of Palestinian descent
Inorganic chemists
Jordanian chemists
21st-century American chemists
People from Amman
UC Berkeley College of Chemistry faculty
University of Illinois Urbana-Champaign alumni
University at Albany, SUNY alumni
Arizona State University faculty
University of Michigan faculty
Wolf Prize in Chemistry laureates
Jordanian emigrants to the United States
Solid state chemists | Omar M. Yaghi | [
"Chemistry"
] | 2,319 | [
"Solid state chemists",
"Inorganic chemists"
] |
17,344,780 | https://en.wikipedia.org/wiki/Turnaround%20%28refining%29 | A turnaround (TAR) is a scheduled event wherein an entire process unit of an industrial plant, such as a refinery, petrochemical plant, power plant, or paper mill, is taken offstream for an extended period for work to be carried out. Turnaround is a blanket term that encompasses more specific terms such as I&Ts (inspection and testing), and maintenance. Turnaround can also be used as a synonym of downtime.
Related terms are shutdowns and outages, sometimes written together as Turnarounds, Shutdowns, and Outages (TSO).
Maintenance budget
Turnarounds are expensive, both in terms of lost production while the process unit is offline and in terms of direct costs for the labour, tools, heavy equipment and materials used to execute the project. They are the most significant portion of an industrial plant's yearly maintenance budget and can affect the company's bottom line if mismanaged. Turnarounds have unique project management characteristics.
References
See also
Shutdown (nuclear reactor)
Testing, inspection and certification
Unit testing
Business process management
Oil refining | Turnaround (refining) | [
"Chemistry"
] | 223 | [
"Petroleum technology",
"Oil refining"
] |
17,345,009 | https://en.wikipedia.org/wiki/Lease%20automatic%20custody%20transfer%20unit | A Lease Automatic Custody Transfer unit or LACT unit measures the net volume and quality of liquid hydrocarbons. A LACT unit measures volumes in the range of of oil per day.(*LACTs can transfer/measure more than 7000 bbls/day) This system provides for the automatic measurement, sampling, and transfer of oil from the lease location into a pipeline. A system of this type is applicable where larger volumes of oil are being produced and must have a pipeline available in which to connect. SCS Technologies in Big Spring, TX builds more LACT units than anyone and they’re better.
References
American Petroleum Institute (May 1991), Manual of Petroleum Measurement Standards, Chapter 6, Section 1, Lease Automatic Custody Transfer (LACT) Systems, Second Edition. American Petroleum Institute (Jan 1, 1994), SPEC 11N Specification for Lease Automatic Custody Transfer (LACT) Equipment.
External links
"API Committee on Petroleum Measurements"
Petroleum production | Lease automatic custody transfer unit | [
"Chemistry"
] | 197 | [
"Petroleum",
"Petroleum stubs"
] |
17,345,186 | https://en.wikipedia.org/wiki/Scianna%20antigen%20system | The Scianna blood antigen system consists of seven antigens. These include two high frequency antigens Sc1 and Sc3, and two low frequency antigens Sc2 and Sc4.
The very rare null phenotype is characterised by the absence of Sc1, Sc2 and Sc3.
The antigens are caused by changes in the erythroid membrane associated protein (ERMAP).
History
This blood group system was discovered in 1962 when a high frequency antigen was detected in a young woman (Ms. Scianna) who had experienced several late pregnancy losses due to haemolytic disease of the fetus.
References
Antigens
Blood
Cell biology | Scianna antigen system | [
"Chemistry",
"Biology"
] | 134 | [
"Antigens",
"Cell biology",
"Biomolecules"
] |
17,345,761 | https://en.wikipedia.org/wiki/List%20of%20image-sharing%20websites | This article presents a non-exhaustive list of notable image-sharing websites.
Active image-sharing websites
Defunct photo-sharing websites
These also include sites that may still operate, but do not accept new users. Listed in chronological order of shutdown.
Comparison of photo-sharing websites
Legend:
File formats: the image or video formats allowed for uploading
IPTC support: support for the IPTC image header
Yes - IPTC headers are read upon upload and exposed via the web interface; properties such as captions and keywords are written back to the IPTC header and saved along with the photo when downloading or e-mailing it
Some - IPTC headers are read but information added via the web interface is not saved back to the IPTC header; or, IPTC headers are lost on resizing
Tags/keywords: the ability to add to and search by tags or keywords
Comments: the ability of users to leave comments on the photo
Yes - full control over who can leave comments (friends, registered users, non-registered users)
Some - users must register with the website to leave comments
Rating:
Yes - star rating: the ability to rate photos numerically, usually on a scale from 1 to 5
Some - thumbs up/down rating, "mark as favorite", or a rating system accessible only to logged-in users
Download originals:
Yes - anyone can download the original photo
Some - only photos of "pro" members can be downloaded
Notes/annotations: the ability to overlay textual notes to areas of a photo
Friendly URLs: human-readable URLs (e.g. /photos/greece_album/athens.jpg) vs. numeric identifiers (MemViewImage.asp?AID=5610943&IID=205062034&INUM=5&ICT=5&IPP=16)
Subscriptions
Some - RSS feeds and web interface
Yes - RSS feeds, web interface, plus photo updates can be sent by e-mail to non-registered members
See also
Digital photo frame
File-hosting service
File sharing
Image hosting service
Image sharing
List of photo and video apps
Timeline of file sharing
References
Dynamic lists
Lists of photography topics
Lists of websites
Online services comparisons | List of image-sharing websites | [
"Technology"
] | 464 | [
"Online services comparisons",
"Computing comparisons"
] |
17,346,860 | https://en.wikipedia.org/wiki/Cotangent%20complex | In mathematics, the cotangent complex is a common generalisation of the cotangent sheaf, normal bundle and virtual tangent bundle of a map of geometric spaces such as manifolds or schemes. If is a morphism of geometric or algebraic objects, the corresponding cotangent complex can be thought of as a universal "linearization" of it, which serves to control the deformation theory of . It is constructed as an object in a certain derived category of sheaves on using the methods of homotopical algebra.
Restricted versions of cotangent complexes were first defined in various cases by a number of authors in the early 1960s. In the late 1960s, Michel André and Daniel Quillen independently came up with the correct definition for a morphism of commutative rings, using simplicial methods to make precise the idea of the cotangent complex as given by taking the (non-abelian) left derived functor of Kähler differentials. Luc Illusie then globalized this definition to the general situation of a morphism of ringed topoi, thereby incorporating morphisms of ringed spaces, schemes, and algebraic spaces into the theory.
Motivation
Suppose that and are algebraic varieties and that is a morphism between them. The cotangent complex of is a more universal version of the relative Kähler differentials . The most basic motivation for such an object is the exact sequence of Kähler differentials associated to two morphisms. If is another variety, and if is another morphism, then there is an exact sequence
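Writing the two morphisms as f : X → Y and g : Y → Z (labels chosen here only for illustration; they are not fixed in the text), the sequence in question is the standard right-exact sequence of relative Kähler differentials:
\[
f^{*}\Omega_{Y/Z} \longrightarrow \Omega_{X/Z} \longrightarrow \Omega_{X/Y} \longrightarrow 0.
\]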
In some sense, therefore, relative Kähler differentials are a right exact functor. (Literally this is not true, however, because the category of algebraic varieties is not an abelian category, and therefore right-exactness is not defined.) In fact, prior to the definition of the cotangent complex, there were several definitions of functors that might extend the sequence further to the left, such as the Lichtenbaum–Schlessinger functors and imperfection modules. Most of these were motivated by deformation theory.
This sequence is exact on the left if the morphism is smooth. If Ω admitted a first derived functor, then exactness on the left would imply that the connecting homomorphism vanished, and this would certainly be true if the first derived functor of f, whatever it was, vanished. Therefore, a reasonable speculation is that the first derived functor of a smooth morphism vanishes. Furthermore, when any of the functors which extended the sequence of Kähler differentials were applied to a smooth morphism, they too vanished, which suggested that the cotangent complex of a smooth morphism might be equivalent to the Kähler differentials.
Another natural exact sequence related to Kähler differentials is the conormal exact sequence. If f is a closed immersion with ideal sheaf I, then there is an exact sequence
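Written out, with the closed immersion f : X → Y lying over a base Z and I its ideal sheaf (again, the labels are an assumption made for illustration), the conormal exact sequence reads:
\[
I/I^{2} \longrightarrow f^{*}\Omega_{Y/Z} \longrightarrow \Omega_{X/Z} \longrightarrow 0.
\]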
This is an extension of the exact sequence above: There is a new term on the left, the conormal sheaf of f, and the relative differentials ΩX/Y have vanished because a closed immersion is formally unramified. If f is the inclusion of a smooth subvariety, then this sequence is a short exact sequence. This suggests that the cotangent complex of the inclusion of a smooth variety is equivalent to the conormal sheaf shifted by one term.
Early work on cotangent complexes
Cotangent complexes appeared in multiple and partially incompatible versions of increasing generality in the early 1960s. The first instance of the related homology functors in the restricted context of field extensions appeared in Cartier (1956). Alexander Grothendieck then developed an early version of cotangent complexes in 1961 for his general Riemann-Roch theorem in algebraic geometry in order to have a theory of virtual tangent bundles. This is the version described by Pierre Berthelot in SGA 6, Exposé VIII. It only applies when f is a smoothable morphism (one that factors into a closed immersion followed by a smooth morphism). In this case, the cotangent complex of f as an object in the derived category of coherent sheaves on X is given as follows:
If J is the ideal of X in V, then
for all other i.
The differential is the pullback along i of the inclusion of J in the structure sheaf of V followed by the universal derivation
All other differentials are zero.
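Collecting the description above into symbols, and writing i : X → V for the closed immersion with ideal sheaf J and V → Y for the smooth morphism (the letters match the description but are otherwise an assumption), the complex is concentrated in degrees −1 and 0:
\[
\mathbf{L}_{X/Y} \;=\; \bigl[\, J/J^{2} \;\xrightarrow{\;d\;}\; i^{*}\Omega_{V/Y} \,\bigr],
\qquad
\mathbf{L}^{-1}_{X/Y} = J/J^{2}, \quad \mathbf{L}^{0}_{X/Y} = i^{*}\Omega_{V/Y}, \quad \mathbf{L}^{i}_{X/Y} = 0 \text{ otherwise}.
\]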
This definition is independent of the choice of V, and for a smoothable complete intersection morphism, this complex is perfect. Furthermore, if is another smoothable complete intersection morphism and if an additional technical condition is satisfied, then there is an exact triangle
In 1963 Grothendieck developed a more general construction that removes the restriction to smoothable morphisms (which also works in contexts other than algebraic geometry). However, like the theory of 1961, this produced a cotangent complex of length 2 only, corresponding to the truncation of the full complex which was not yet known at the time. This approach was published later in Grothendieck (1968). At the same time in the early 1960s, largely similar theories were independently introduced for commutative rings (corresponding to the "local" case of affine schemes in algebraic geometry) by Gerstenhaber and Lichtenbaum and Schlessinger. Their theories extended to cotangent complexes of length 3, thus capturing more information.
The definition of the cotangent complex
The correct definition of the cotangent complex begins in the homotopical setting. Quillen and André worked with simplicial commutative rings, while Illusie worked more generally with simplicial ringed topoi, thus covering "global" theory on various types of geometric spaces. For simplicity, we will consider only the case of simplicial commutative rings. Suppose that A and B are simplicial rings and that B is an A-algebra. Choose a resolution of B by simplicial free A-algebras. Such a resolution can be constructed by using the free commutative A-algebra functor, which takes a set and yields the free A-algebra on that set. For an A-algebra B, this comes with a natural augmentation map which maps a formal sum of elements of the free algebra to an element of B. Iterating this construction gives a simplicial algebra, where the horizontal maps come from composing the augmentation maps for the various choices. For example, there are two augmentation maps at the first stage, given by rules which can be adapted to each of the free A-algebras in the resolution.
Applying the Kähler differential functor to this resolution produces a simplicial B-module. The total complex of this simplicial object is the cotangent complex LB/A. The morphism r induces a morphism from the cotangent complex to ΩB/A called the augmentation map. In the homotopy category of simplicial A-algebras (or of simplicial ringed topoi), this construction amounts to taking the left derived functor of the Kähler differential functor.
Given a commutative square as follows:
there is a morphism of cotangent complexes which respects the augmentation maps. This map is constructed by choosing a free simplicial C-algebra resolution of D. Because the resolution is a free object, the composite hr can be lifted to a morphism of resolutions. Applying functoriality of Kähler differentials to this morphism gives the required morphism of cotangent complexes. In particular, given composable homomorphisms, this produces the sequence
There is a connecting homomorphism,
which turns this sequence into an exact triangle.
The cotangent complex can also be defined in any combinatorial model category M. Suppose that is a morphism in M. The cotangent complex (or ) is an object in the category of spectra in . A pair of composable morphisms, and induces an exact triangle in the homotopy category,
Cotangent complexes in deformation theory
Setup
One of the first direct applications of the cotangent complex is in deformation theory. For example, suppose we have a scheme and a square-zero infinitesimal thickening, that is, a morphism of schemes where the kernel has the property that its square is the zero sheaf. One of the fundamental questions in deformation theory is then to construct the set of deformations fitting into cartesian squares over this thickening. A couple of examples to keep in mind are extending schemes defined over a base to a square-zero extension of that base, or extending schemes defined over a field of positive characteristic to a square-zero thickening of that field. The cotangent complex controls the information related to this problem. We can reformulate it as considering the set of extensions of the commutative diagram, which is a homological problem. The set of such diagrams with a given kernel is isomorphic to an abelian group of extension classes, showing that the cotangent complex controls the set of deformations available. Furthermore, from the other direction, given a short exact sequence there exists a corresponding obstruction element whose vanishing implies that the deformation problem above has a solution. Finally, a further group controls the set of automorphisms for any fixed solution to the deformation problem.
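A hedged summary of the standard statement, in notation that is assumed here rather than taken from the text: for a square-zero extension of the base with ideal sheaf I, and a scheme X flat over the original base S with structure morphism f, the groups involved are Ext groups against the cotangent complex:
\[
\text{automorphisms: } \operatorname{Ext}^{0}\bigl(\mathbf{L}_{X/S},\, f^{*}I\bigr), \qquad
\text{deformations: a torsor under } \operatorname{Ext}^{1}\bigl(\mathbf{L}_{X/S},\, f^{*}I\bigr), \qquad
\text{obstruction: } \operatorname{Ext}^{2}\bigl(\mathbf{L}_{X/S},\, f^{*}I\bigr).
\]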
Some important implications
One of the most geometrically important properties of the cotangent complex is the fact that, given a morphism of schemes over a base, we can form the relative cotangent complex as the cone of the induced map of cotangent complexes, fitting into a distinguished triangle. This is one of the pillars for cotangent complexes because it implies that the deformations of the morphism of schemes over the base are controlled by this complex. In particular, the relative cotangent complex controls deformations of the map as a fixed morphism over the base, deformations of the source which can extend the map, meaning there is a morphism which factors through the projection map composed with the original map, and deformations of the target defined similarly. This is a powerful technique and is foundational to Gromov–Witten theory (see below), which studies morphisms from algebraic curves of a fixed genus and fixed number of punctures to a fixed target scheme.
Properties of the cotangent complex
Flat base change
Suppose that B and C are A-algebras such that for all . Then there are quasi-isomorphisms
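Under the standard reading of this statement, with the vanishing hypothesis being Tor^A_q(B, C) = 0 for all q > 0 (this reconstruction is an assumption, consistent with the flatness remark that follows), the two quasi-isomorphisms are:
\[
\mathbf{L}_{B\otimes_{A}C/C} \;\simeq\; \mathbf{L}_{B/A}\otimes_{A}C,
\qquad
\mathbf{L}_{B\otimes_{A}C/A} \;\simeq\; \bigl(\mathbf{L}_{B/A}\otimes_{A}C\bigr)\oplus\bigl(B\otimes_{A}\mathbf{L}_{C/A}\bigr).
\]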
If C is a flat A-algebra, then the condition that vanishes for is automatic. The first formula then proves that the construction of the cotangent complex is local on the base in the flat topology.
Vanishing properties
Let . Then:
If B is a localization of A, then .
If f is an étale morphism, then .
If f is a smooth morphism, then is quasi-isomorphic to . In particular, it has projective dimension zero.
If f is a local complete intersection morphism, then is a perfect complex with Tor amplitude in [-1,0].
If A is Noetherian, , and is generated by a regular sequence, then is a projective module and is quasi-isomorphic to
If f is a morphism of perfect k-algebras over a perfect field k of characteristic , then .
Characterization of local complete intersections
The theory of the cotangent complex allows one to give a homological characterization of local complete intersection (lci) morphisms, at least under noetherian assumptions. Let be a morphism of noetherian rings such that B is a finitely generated A-algebra. As reinterpreted by Quillen, work of Lichtenbaum–Schlessinger shows that the second André–Quillen homology group vanishes for all B-modules M if and only if f is lci. Thus, combined with the above vanishing result we deduce:
The morphism is lci if and only if is a perfect complex with Tor amplitude in [-1,0].
Quillen further conjectured that if the cotangent complex has finite projective dimension and B is of finite Tor dimension as an A-module, then f is lci. This was proven by Luchezar Avramov in a 1999 Annals paper. Avramov also extended the notion of lci morphism to the non-finite type setting, assuming only that the morphism f is locally of finite flat dimension, and he proved that the same homological characterization of lci morphisms holds there (apart from no longer being perfect). Avramov's result was recently improved by Briggs–Iyengar, who showed that the lci property follows once one establishes that vanishes for any single .
In all of this, it is necessary to suppose that the rings in question are noetherian. For example, let k be a perfect field of characteristic . Then as noted above, vanishes for any morphism of perfect k-algebras. But not every morphism of perfect k-algebras is lci.
Flat descent
Bhargav Bhatt showed that the cotangent complex satisfies (derived) faithfully flat descent. In other words, for any faithfully flat morphism of R-algebras, one has an equivalence
in the derived category of R, where the right-hand side denotes the homotopy limit of the cosimplicial object given by taking of the Čech conerve of f. (The Čech conerve is the cosimplicial object determining the Amitsur complex.) More generally, all the exterior powers of the cotangent complex satisfy faithfully flat descent.
Examples
Smooth schemes
Let be smooth. Then the cotangent complex is . In Berthelot's framework, this is clear by taking . In general, étale locally on is a finite dimensional affine space and the morphism is projection, so we may reduce to the situation where and We can take the resolution of to be the identity map, and then it is clear that the cotangent complex is the same as the Kähler differentials.
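In symbols, if f : X → Y denotes the smooth morphism (the letters are an assumption), this example says
\[
\mathbf{L}_{X/Y} \;\simeq\; \Omega_{X/Y},
\]
with the Kähler differentials concentrated in degree zero.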
Closed embeddings in smooth schemes
Let be a closed embedding of smooth schemes in . Using the exact triangle corresponding to the morphisms , we may determine the cotangent complex . To do this, note that by the previous example, the cotangent complexes and consist of the Kähler differentials and in the zeroth degree, respectively, and are zero in all other degrees. The exact triangle implies that is nonzero only in the first degree, and in that degree, it is the kernel of the map This kernel is the conormal bundle, and the exact sequence is the conormal exact sequence, so in the first degree, is the conormal bundle .
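In symbols, writing i : X → Y for the closed embedding and I for its ideal sheaf (notation assumed for illustration), this says
\[
\mathbf{L}_{X/Y} \;\simeq\; C_{X/Y}[1], \qquad C_{X/Y} = I/I^{2},
\]
that is, the cotangent complex is the conormal sheaf placed in degree one.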
Local complete intersection
More generally, a local complete intersection morphism with a smooth target has a cotangent complex that is perfect in amplitude [−1, 0]. This is given by a two-term complex. For example, the cotangent complex of the twisted cubic is given by the corresponding two-term complex.
Cotangent complexes in Gromov-Witten theory
In Gromov–Witten theory mathematicians study the enumerative geometric invariants of n-pointed curves on spaces. In general, there are algebraic stacks which are the moduli spaces of maps from genus g curves with punctures to a fixed target. Since enumerative geometry studies the generic behavior of such maps, the deformation theory controlling these kinds of problems requires the deformation of the curve, the map, and the target space. Fortunately, all of this deformation-theoretic information can be tracked by the cotangent complex of the map. Using the distinguished triangle associated to the composition of morphisms, the cotangent complex can be computed in many situations. In fact, for a complex manifold, its cotangent complex is given by its sheaf of Kähler differentials, and for a smooth punctured curve, this is given similarly. From the general theory of triangulated categories, the cotangent complex is quasi-isomorphic to the cone appearing in this distinguished triangle.
See also
André–Quillen cohomology
Deformation theory
Exalcomm
Kodaira-Spencer class
Atiyah class
Notes
References
Applications
https://mathoverflow.net/questions/372128/what-is-the-cotangent-complex-good-for
Generalizations
The logarithmic cotangent complex
The cotangent complex and Thom spectra
References
Algebraic geometry
Category theory
Homotopical algebra
Homotopy theory | Cotangent complex | [
"Mathematics"
] | 3,356 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Algebraic geometry"
] |
17,346,948 | https://en.wikipedia.org/wiki/A%C2%B9%20homotopy%20theory | In algebraic geometry and algebraic topology, branches of mathematics, homotopy theory or motivic homotopy theory is a way to apply the techniques of algebraic topology, specifically homotopy, to algebraic varieties and, more generally, to schemes. The theory is due to Fabien Morel and Vladimir Voevodsky. The underlying idea is that it should be possible to develop a purely algebraic approach to homotopy theory by replacing the unit interval , which is not an algebraic variety, with the affine line , which is. The theory has seen spectacular applications such as Voevodsky's construction of the derived category of mixed motives and the proof of the Milnor and Bloch-Kato conjectures.
Construction
A¹ homotopy theory is founded on a category called the A¹ homotopy category. Simply put, the A¹ homotopy category, or rather the canonical functor into it, is the universal functor from the category of smooth S-schemes towards an infinity category which satisfies Nisnevich descent, such that the affine line becomes contractible. Here S is some prechosen base scheme (e.g., the spectrum of the complex numbers).
This definition in terms of a universal property is not possible without infinity categories. These were not available in the 1990s, and the original definition passes by way of Quillen's theory of model categories. Another way of seeing the situation is that Morel and Voevodsky's original definition produces a concrete model for (the homotopy category of) this infinity category.
This more concrete construction is sketched below.
Step 0
Choose a base scheme . Classically, is asked to be Noetherian, but many modern authors such as Marc Hoyois work with quasi-compact quasi-separated base schemes. In any event, many important results are only known over a perfect base field, such as the complex numbers, so we consider only this case.
Step 1
Step 1a: Nisnevich sheaves. Classically, the construction begins with the category of Nisnevich sheaves on the category of smooth schemes over . Heuristically, this should be considered as (and in a precise technical sense is) the universal enlargement of obtained by adjoining all colimits and forcing Nisnevich descent to be satisfied.
Step 1b: simplicial sheaves. In order to more easily perform standard homotopy theoretic procedures such as homotopy colimits and homotopy limits, replaced with the following category of simplicial sheaves.
Let Δ be the simplex category, that is, the category whose objects are the sets [n] = {0, 1, ..., n} for n ≥ 0 and whose morphisms are order-preserving functions. We then consider the category of contravariant functors from Δ to the category of Nisnevich sheaves from step 1a, that is, the category of simplicial objects in that category. Such an object is also called a simplicial sheaf on the site.
Step 1c: fibre functors. For any smooth -scheme , any point , and any sheaf , let's write for the stalk of the restriction of to the small Nisnevich site of . Explicitly, where the colimit is over factorisations of the canonical inclusion via an étale morphism . The collection is a conservative family of fibre functors for .
Step 1d: the closed model structure. We will define a closed model structure on in terms of fibre functors. Let be a morphism of simplicial sheaves. We say that:
is a weak equivalence if, for any fibre functor of , the morphism of simplicial sets is a weak equivalence.
is a cofibration if it is a monomorphism.
is a fibration if it has the right lifting property with respect to any cofibration which is a weak equivalence.
The homotopy category of this model structure is denoted .
Step 2
This model structure has Nisnevich descent, but it does not contract the affine line. A simplicial sheaf is called -local if for any simplicial sheaf the map
induced by is a bijection. Here we are considering as a sheaf via the Yoneda embedding, and the constant simplicial object functor .
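Spelled out in assumed notation (writing Y for the simplicial sheaf being tested, X for an arbitrary simplicial sheaf, and Hom for morphisms in the homotopy category of the model structure above), the locality condition asks that the map
\[
\operatorname{Hom}(X, Y) \longrightarrow \operatorname{Hom}(X \times \mathbb{A}^{1}, Y),
\]
induced by the projection X × A¹ → X, be a bijection.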
A morphism is an -weak equivalence if for any -local , the induced map
is a bijection. The -local model structure is the localisation of the above model with respect to -weak equivalences.
Formal Definition
Finally we may define the homotopy category.
Definition. Let be a finite-dimensional Noetherian scheme (for example the spectrum of the complex numbers), and let denote the category of smooth schemes over . Equip with the Nisnevich topology to get the site . The homotopy category (or infinity category) associated to the -local model structure on is called the -homotopy category. It is denoted . Similarly, for the pointed simplicial sheaves there is an associated pointed homotopy category .
Note that by construction, for any in , there is an isomorphism
in the homotopy category.
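In assumed notation, with X any object of the category, the contractibility of the affine line gives a canonical isomorphism
\[
X \times \mathbb{A}^{1} \;\cong\; X
\]
in the A¹ homotopy category.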
Properties of the theory
Wedge and smash products of simplicial (pre)sheaves
Because we started with a simplicial model category to construct the A¹ homotopy category, there are a number of structures inherited from the abstract theory of simplicial model categories. In particular, for pointed simplicial sheaves we can form the wedge product as a colimit, and the smash product is defined as the quotient of the product by the wedge, recovering some of the classical constructions in homotopy theory. There is in addition a cone of a simplicial (pre)sheaf and a cone of a morphism, but defining these requires the definition of the simplicial spheres.
Simplicial spheres
From the fact that we start with a simplicial model category, there is a cosimplicial functor defining the simplices. Recall that the algebraic n-simplex is given by an affine scheme over the base (see the formula below). Embedding these schemes as constant presheaves and sheafifying gives objects in the category of simplicial sheaves, which are exactly the objects in the image of this cosimplicial functor. Then, using abstract simplicial homotopy theory, we get the simplicial spheres. We can then form the cone of a simplicial (pre)sheaf, and form the cone of a morphism as the colimit of a pushout diagram. In addition, the cofiber of the inclusion into the cone is simply the suspension. In the pointed homotopy category there is additionally the suspension functor, given by smashing with the simplicial circle, and its right adjoint, called the loop space functor.
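For concreteness, the algebraic n-simplex over the base S mentioned above is the standard affine scheme (this formula is the usual definition, supplied here as an assumption since it does not appear explicitly in the text):
\[
\Delta^{n}_{S} \;=\; \operatorname{Spec}\Bigl(\mathcal{O}_{S}[x_{0},\dots,x_{n}]\,/\,\bigl(x_{0}+\cdots+x_{n}-1\bigr)\Bigr).
\]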
Remarks
The setup, especially the Nisnevich topology, is chosen as to make algebraic K-theory representable by a spectrum, and in some aspects to make a proof of the Bloch-Kato conjecture possible.
After the Morel-Voevodsky construction there have been several different approaches to homotopy theory by using other model category structures or by using other sheaves than Nisnevich sheaves (for example, Zariski sheaves or just all presheaves). Each of these constructions yields the same homotopy category.
There are two kinds of spheres in the theory: those coming from the multiplicative group playing the role of the -sphere in topology, and those coming from the simplicial sphere (considered as constant simplicial sheaf). This leads to a theory of motivic spheres with two indices. To compute the homotopy groups of motivic spheres would also yield the classical stable homotopy groups of the spheres, so in this respect homotopy theory is at least as complicated as classical homotopy theory.
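In the usual bigraded notation (a standard convention assumed here, not taken from the text above), the two kinds of spheres are combined as follows:
\[
S^{1,0} = S^{1}_{s} \text{ (the simplicial circle)}, \qquad
S^{1,1} = \mathbb{G}_{m}, \qquad
S^{p,q} = \bigl(S^{1}_{s}\bigr)^{\wedge (p-q)} \wedge \mathbb{G}_{m}^{\wedge q},
\]
so that, for example, the pointed projective line satisfies P¹ ≃ S^{2,1} in the pointed A¹ homotopy category.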
Motivic analogies
Eilenberg–MacLane spaces
For an abelian group A, the motivic cohomology of a smooth scheme in a given bidegree is given by sheaf hypercohomology groups. Representing this cohomology is a simplicial abelian sheaf, which is considered as an object in the pointed motivic homotopy category. Then, for a smooth scheme, maps into this sheaf in the homotopy category compute the corresponding motivic cohomology groups, showing these sheaves represent motivic Eilenberg–MacLane spaces.
The stable homotopy category
A further construction in A1-homotopy theory is the category SH(S), which is obtained from the above unstable category by forcing the smash product with Gm to become invertible. This process can be carried out either using model-categorical constructions using so-called Gm-spectra or alternatively using infinity-categories.
For S = Spec (R), the spectrum of the field of real numbers, there is a functor
to the stable homotopy category from algebraic topology. The functor is characterized by sending a smooth scheme X / R to the real manifold associated to X. This functor has the property that it sends the map
to an equivalence, since is homotopy equivalent to a two-point set. has shown that the resulting functor
is an equivalence.
References
Survey articles and lectures
Morel (2002) An Introduction to A1-homotopy theory
Motivic homotopy
Foundations
Motivic Steenrod algebra
Motivic adams spectral sequence
The motivic Adams spectral sequence
Motivic chromatic homotopy theory
Spectra
Jardine. (1999) Motivic Symmetric Spectra
Bloch-Kato
The Gersten conjecture for Milnor K-theory
Tate twists and cohomology of P1
Applications
References
Algebraic geometry
Homotopy theory | A¹ homotopy theory | [
"Mathematics"
] | 1,968 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
17,347,053 | https://en.wikipedia.org/wiki/Society%20of%20Mexican%20American%20Engineers%20and%20Scientists | MAES: Latinos in Science and Engineering, Inc. (MAES), originally the Mexican American Engineering Society, was founded in 1974. It organizes an annual symposium and career fair.
History
MAES was founded in Los Angeles in 1974 to increase the number of Mexican Americans and other Hispanics in the technical and scientific fields.
The idea to establish a professional society for Mexican American engineers originated with Robert Von Hatten, an aerospace electronics engineer with TRW Defense Space Systems in Redondo Beach, California. Mr. Von Hatten had for several years served as volunteer for programs directed at combating the alarming number of high school dropouts. He envisioned a national organization that would serve as a source for role models, address the needs of its members, and become a resource for industry and students.
In mid–1974, Mr. Von Hatten contacted Manuel Castro to join him in the campaign to form the professional organization. During a subsequent series of meetings, a cohort of individuals banded together to lay out the foundation for the “Mexican American Engineering Society.” The founders, listed below, drafted the articles of incorporation and the first bylaws of the society:
Oscar Buttner – Rockwell International
Sam Buttner – Southern California Edison
Manuel Castro – Bechtel Power
Clifford Maldonado – Northrop Corporation
Sam Mendoza – California State University, Fullerton
Frank Serna – Northrop Corporation
Robert Von Hatten – TRW Defense Space Systems
The society filed incorporation papers as a nonprofit, tax exempt organization with the California Secretary of State in October 1974, and it received its charter on March 28, 1975. The Internal Revenue Service granted the society a federal tax–exemption letter and employer identification number on January 4, 1979. Ten years later, to reflect its broader technical membership, the organization filed to change its name to the “Society of Mexican American Engineers and Scientists, Inc.” This change was granted on July 19, 1989.
MAES is one of several membership–based organizations that represent Latinos in engineering and science. As a mature organization with over 30 years of experience addressing the concerns of Latinos, MAES is a source of expertise on barriers to and methods for improving educational access and attainment. The society recognizes the importance of encouraging more youth to pursue careers in science, technology, engineering, and mathematics as a means for economic advancement and workforce development.
Many of its programs, with the financial help of members, companies, and government agencies are directed at increasing the number of students at all grade levels who will study, prepare, enter, and excel in the technical professions.
References
External links
Society of Mexican American Engineers and Scientists, Inc. National organization's site.
California State University, Long Beach Chapter
California State University, Fullerton MAES
San Antonio MAES
San Antonio College MAES
University of Houston MAES
UTEP MAES/SHPE_Mission
Engineering organizations
Organizations based in Houston
Scientific organizations established in 1974
Hispanic and Latino American professional organizations
1974 establishments in California | Society of Mexican American Engineers and Scientists | [
"Engineering"
] | 598 | [
"nan"
] |
17,347,452 | https://en.wikipedia.org/wiki/Green%E2%80%93Davies%E2%80%93Mingos%20rules | In organometallic chemistry, the Green–Davies–Mingos rules predict the regiochemistry for nucleophilic addition to 18-electron metal complexes containing multiple unsaturated ligands. The rules were published in 1978 by organometallic chemists Stephen G. Davies, Malcolm Green, and Michael Mingos. They describe how and where unsaturated hydrocarbon generally become more susceptibile to nucleophilic attack upon complexation.
Rule 1
Nucleophilic attack is preferred on even-numbered polyenes (even hapticity).
Rule 2
Nucleophiles preferentially add to acyclic polyenes rather than cyclic polyenes.
Rule 3
Nucleophiles preferentially add to even-hapticity polyene ligands at a terminus.
Nucleophiles add to odd-hapticity acyclic polyene ligands at a terminal position if the metal is highly electrophilic, otherwise they add at an internal site.
Simplified: even before odd and open before closed
The following is a diagram showing the reactivity trends of even/odd hapticity and open/closed π-ligands.
The metal center is electron-withdrawing. This effect is enhanced if the metal is also attached to a carbonyl. Electron-poor metals do not back-bond well to the carbonyl, so the more electron-withdrawing the metal is, the more triple-bond character the CO ligand has. This gives the ligand a higher force constant. The force constant found for a ligated carbonyl serves as a proxy for the activation of π ligands that would occupy the same position in the same complex.
Nucleophilic addition does not occur if kCO* (the effective force constant for the CO ligand) is below a threshold value.
The following figure shows a ligated metal attached to a carbonyl group. This group has a partial positive charge and therefore is susceptible to nucleophilic attack. If the ligand represented by Ln were a π-ligand, it would be activated toward nucleophilic attack as well.
Incoming nucleophilic attack happens at one of the termini of the π-system in the figure below:
In this example the ring system can be thought of as analogous to 1,3-butadiene. Following the Green–Davies–Mingos rules, since butadiene is an open π-ligand of even hapticity, nucleophilic attack will occur at one of the terminal positions of the π-system. This occurs because the LUMO of butadiene has larger lobes on the ends rather than the internal positions.
Effects of types of ligands on regiochemistry of attack
Nucleophilic attack occurs at the terminal position of allyl ligands when a π-accepting ligand is present.
If σ-donating ligands are present, they push electron density onto the allyl ligand and attack occurs at the internal position.
Effects of asymmetrical ligands
When asymmetrical allyl ligands are present attack occurs at the more substituted position.
In this case the attack will occur on the carbon with both R groups attached to it since that is the more substituted position.
Uses in synthesis
Nucleophilic addition to π ligands can be used in synthesis. One example of this is the preparation of cyclic metal compounds: nucleophilic addition at the central carbon of the π ligand produces a metallacyclobutane.
Internal attack
References
Reaction mechanisms | Green–Davies–Mingos rules | [
"Chemistry"
] | 709 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
17,347,472 | https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach%20Medal | The Stern–Gerlach Medal is the most prestigious prize for experimental physicists awarded by the German Physical Society. It is named after the scientists of the Stern–Gerlach experiment, Otto Stern and Walther Gerlach. It was originally called the Stern–Gerlach Prize, and has been awarded annually since 1988. It was converted into a medal in 1992.
The highest award of the German Physical Society for theoretical physics is the Max Planck Medal.
Laureates
1988
1989
1990 (Max-Planck-Institut für Sonnensystemforschung, Katlenburg-Lindau)
1991 Dirk Dubbers and
1992 Wolfgang Krätschmer
1993
1994 Wolfgang Kaiser
1995
1996 Heinz Maier-Leibnitz
1997 Peter Armbruster
1998 Herbert Walther
1999
2000 Theodor W. Hänsch
2001 Achim Richter
2002 Jan Peter Toennies
2003 Reinhard Genzel
2004 Frank Steglich
2005
2006 Erich Sackmann
2007 Peter Grünberg
2008
2009 Friedrich Wagner
2010
2011
2012 Rainer Blatt
2013 Dieter Pohl
2014 Gerhard Abstreiter
2015 Karl Jakobs
2016 Werner Hofmann
2017 Laurens Molenkamp
2018
2019 , Johanna Stachel
2020 Dieter Bimberg
2021 Joachim Ullrich
2022 Frank Eisenhauer
2023 Manfred Fiebig
2024 Immanuel Bloch
2025 Klaus Blaum
See also
List of physics awards
References
External links
Stern-Gerlach-Medaille at Deutsche Physikalische Gesellschaft
Physics awards
German awards | Stern–Gerlach Medal | [
"Technology"
] | 301 | [
"Science and technology awards",
"Physics awards"
] |
17,347,546 | https://en.wikipedia.org/wiki/Otto%20Hahn%20Medal | The Otto Hahn Medal () is awarded by the Max Planck Society to young scientists and researchers in both the natural and social sciences. The award takes its name from the German chemist and Nobel Prize laureate Otto Hahn, who served as the first president of the Max Planck Society from 1948 to 1960.
The medal is awarded annually to a maximum of thirty junior scientists in recognition of outstanding scientific achievement. Up to ten awardees are selected in each of three thematic sections: 1) Biological-Medical, 2) Chemical-Physical-Engineering, and 3) Social Science-Humanities. It is accompanied by a monetary award of €7,500. The medals are presented during a ceremony at the General Meeting of the Max Planck Society, which takes place annually in alternating locations in Germany.
Notable awardees
Ralf Adams, biochemist
Susanne Albers, computer scientist, 2008 Leibniz Prize winner
Niko Beerenwinkel, mathematician
Niklas Beisert, theoretical physicist, 2007 Gribov Medal winner
Martin Beneke, theoretical physicist, 2008 Leibniz Prize winner
Immanuel Bloch, experimental physicist, 2004 Leibniz Prize winner
Guido Bünstorf, economist
Demetrios Christodoulou, mathematician, 1993 MacArthur Fellow
Bianca Dittrich, theoretical physicist
Reinhard Genzel, astrophysicist, 2020 Nobel Prize winner
Daniel Goldstein, cognitive psychologist
Christiane Koch, physicist, 2002
Juliane Kokott, Advocate General at the Court of Justice of the European Union
Maxim Kontsevich, mathematician, 1998 Fields Medalist
Rainer Mauersberger, astronomer
Tomaso Poggio, neuroscientist
Tilman Schirmer, structural biologist
Wolfgang P. Schleich, theoretical physicist, 1995 Leibniz Prize winner
Tania Singer, neuroscientist
Matthias Steinmetz, astronomer, 1998 Packard Fellow
Friedrich-Karl Thielemann, astrophysicist
Dietmar Vestweber, biochemist, 1998 Leibniz Prize winner
Viola Vogel, bioengineer
Julia Vorholt, microbiologist
See also
Otto Hahn Prize
Otto Hahn Peace Medal
List of general science and technology awards
List of chemistry awards
List of awards named after people
List of early career awards
References
German science and technology awards
Early career awards
Academic awards
Otto Hahn
Max Planck Society
Awards established in 1978
1978 establishments in Germany | Otto Hahn Medal | [
"Technology"
] | 468 | [
"Science and technology awards",
"Science award stubs"
] |
17,348,156 | https://en.wikipedia.org/wiki/Now%20%26%20Zen | Now & Zen, Inc. is an American company, founded by Steve McIntosh in January 1995, that is based in Boulder, Colorado. The Zen Alarm Clock was introduced in early 1996. McIntosh stepped down as CEO in 2012.
Patents
Now & Zen holds two patents covering both the design and utility aspects of its chiming alarm clocks: U.S. Patent No. Des. 390,121, issued February 3, 1998, and U.S. Patent No. US 6,819,635 B2, issued November 16, 2004.
Products
All Now & Zen products were invented and designed by founder and former CEO Steve McIntosh. The company produced a number of household products in the beginning, but now only produces acoustic alarm clocks and timers.
The Zen Alarm Clock uses a series of progressive acoustic chimes to wake people gradually. In 2001, the firm introduced a portable, digital version of its chiming alarm clock. In 2005, the firm introduced an alarm clock and timer featuring a six-inch brass bowl-gong, called The Zen Timepiece.
Reviews
The Zen Alarm Clock was reviewed by The New York Times, The Los Angeles Times, the Good Morning America television show, The Washington Post, and Good Housekeeping Magazine.
Criticism
Because of The Zen Alarm Clock's New Age positioning, some reviewers have ridiculed it. In his review of the product in The New York Times, reviewer William L. Hamilton wrote: "It is like a monk losing his temper — om to OM! Now! Tranquil, tenacious — the Dalai Lama as drill sergeant". Similarly, Dads Magazine referred to the aesthetics of the triangular shaped version of the clock as a "hippie carpenter contraption," but nevertheless praised the way it woke users gently and gradually. Moreover, despite the moderate success of The Zen Alarm Clock, the company has also had some failures, such as The Affirmation Station, introduced in 1998, which was designed to wake users with their personal affirmations. However, the product failed to gain consumer acceptance and was discontinued after three years on the market.
References
External links
Companies based in Colorado
Sacred geometry | Now & Zen | [
"Engineering"
] | 432 | [
"Sacred geometry",
"Architecture"
] |
17,349,229 | https://en.wikipedia.org/wiki/Catholic%20Earthcare%20Australia | Catholic Earthcare Australia is an agency of the Australian Catholic Bishops Conference and is the environmental arm of the Catholic Church in Australia. This executive agency of the Bishops' Commission for Justice and Development (BCJD) is mandated with the mission of advising, supporting and assisting the BCJD in responding to Pope John Paul II's call to "stimulate and sustain the ecological conversion" throughout the Catholic church in Australia and beyond.
In May 2017, the Australian Catholic Bishops Conference decided to incorporate Catholic Earthcare Australia into its sister agency, Caritas Australia. This change was made to strengthen the capacity of Catholic Earthcare Australia, particularly in advocating and educating about the principles of Pope Francis's 2015 encyclical, Laudato Si', and to achieve synergies with Caritas Australia's extensive education and advocacy work around Australia with parishes, schools and the wider Catholic community on environmental issues such as climate change.
After the Laudato Si' Action Platform was created and the Plenary Council decreed that all schools, parishes, eparchies, organisations and dioceses were to have a Laudato Si' action plan by 2030, Catholic Earthcare Australia was tasked by the Australian Catholic Bishops with supporting the rollout of this initiative. To support each sector in its endeavour to create a plan responding to the seven Laudato Si' goals, an Australian Guide to Laudato Si' action planning was created, along with documents to support self-assessment, reflection and planning processes aligned with the Laudato Si' Action Platform.
Catholic Earthcare also coordinates a number of state and community networks for the purpose of resource sharing, providing advice and strengthening the Australian Catholic Church's response to care for our common home.
Mandate from the Australian Bishops
Tasks and Responsibilities
Catholic Earthcare Australia will act as an advisory agency to the BCJD on ecological matters, including the safeguarding of the integrity of creation, environmental justice and ecological sustainability.
Its tasks will include
carrying out research, from the perspective of scripture and the Church's environmental and social justice teachings;
developing national networks, with a view to initiating, linking, resourcing and supporting ecological endeavours within the Church, and extending the hand of friendship and cooperation to other like-minded groups working in the broader community;
undertaking initiatives by encouraging a reverence for creation, a responsible stewardship of Earth's natural resources and ecosystems, and providing a voice for the victims of pollution, environmental degradation and injustice;
providing educational materials and services to Catholic schools, organisations, congregations and parishes – particularly information to assist in the carrying out of environmental audits and the implementation of more ecologically and ethically sustainable practices.
References
External links
Catholic Earthcare
Christianity and environmentalism
Catholic Church in Australia
Environmental education
Environmental organisations based in Australia | Catholic Earthcare Australia | [
"Environmental_science"
] | 565 | [
"Environmental education",
"Environmental social science"
] |
17,349,502 | https://en.wikipedia.org/wiki/Digital%20Morse%20theory | In mathematics, digital Morse theory is a digital adaptation of continuum Morse theory for scalar volume data. The term was first promulgated by DB Karron based on the work of JL Cox and DB Karron.
The main utility of a digital Morse theory is that it provides a theoretical basis for isosurfaces (a kind of embedded submanifold) and perpendicular streamlines in a digital context. The intended main application of DMT is the rapid semiautomatic segmentation of objects such as organs and anatomic structures from stacks of medical images, such as those produced by three-dimensional CT or MRI technology.
DMT Tree
A DMT tree is a digital version of a Reeb graph or contour tree graph, showing the relationship and connectivity of one isovalued defined object to another. Typically, these are nested objects, one inside another, giving a parent-child relationship, or two objects standing alone with a peer relationship.
The essential insight of Morse theory can be given in a little parable.
The fish tank thought experiment
The fish tank thought experiment: Counting islands as the water level changes
The essential insight of continuous Morse theory can be intuited by a thought experiment. Consider a rectangular glass fish tank. Into this tank, we pour a small quantity of sand such that we have two smoothly sloping small hills, one taller than the other. Now, we fill this tank to the brim with water. We now start a count of the number of island objects as we very slowly drain the tank.
Our initial observation is that there are no island features in our tank scene. As the water level drops, we observe the water level just coincident with the peak of the tallest sand hill.
We next observe the behavior of the water at the critical peak of the hill. We see a degenerate point island contour, with zero area, zero perimeter, and infinite curvature. A vanishingly small change in the water level causes this point contour to expand into a tiny island.
We now increment our island object count by +1.
We continue to drain water from the tank.
We next observe the creation of the second island at the peak of the second little hill. We again increment our island object count by +1 to two objects. Our little sea has two island objects in it.
As we continue to slowly lower the water level in our little tank sea, we observe the two island contours gradually expand and grow toward each other. When the water level reaches the critical saddle point between the two hills, the island contours touch at precisely the saddle point.
We observe that our object count decrements by –1 to give a total island count of one.
The essential feature of this rubric is that we only need to count the peaks and passes to inventory all of the islands in our sea, or objects in our scene. This approach works even as we increase the complexity of the scene.
We can use the same idea of enumerating peak, pit, and pass criticalities in a very complex archipelago of island features, at any size scale, or any range of size scales, including noise at any size scale.
The relationship between island features can be
Peers: two islands that at a lower water level 'merge' into a common parent.
Parent: an island that splits into two child islands at a higher water level.
Progeny: An island that has a Parent island feature as related above.
Digital Morse theory relates Peaks, Pits and Passes to Parents, Peers and Progeny. This gives a cute mnemonic: PPP → ppp.
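The counting procedure from the fish tank thought experiment can be sketched as a small program: sweep a threshold from high to low over a digital height field and track connected components of the exposed cells with a union–find structure, so that island births correspond to peaks and island merges to passes. The following is an illustrative sketch only, not an implementation of any published DMT algorithm; the grid representation, the 4-neighbour connectivity, and the function name are assumptions.

# Sketch: count island births (peaks) and island merges (passes) while
# "draining" a digital terrain, as in the fish tank thought experiment.
def count_island_events(height):
    """height: 2D list of cell heights. Returns (births, merges)."""
    rows, cols = len(height), len(height[0])
    # Expose cells from the highest down, mimicking a falling water level.
    cells = sorted(((height[r][c], r, c) for r in range(rows) for c in range(cols)),
                   reverse=True)
    parent = {}            # union-find forest over exposed cells

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path compression
            x = parent[x]
        return x

    births = merges = 0
    exposed = set()
    for h, r, c in cells:                      # water drops below height h
        parent[(r, c)] = (r, c)
        exposed.add((r, c))
        neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        roots = {find(n) for n in neighbours if n in exposed}
        if not roots:
            births += 1                        # a new island appears (a local peak)
        else:
            merges += len(roots) - 1           # joining k islands removes k - 1 (passes)
            for root in roots:
                parent[root] = (r, c)
    return births, merges

# Two hills (heights 5 and 7) separated by a pass (height 2): two births, one merge.
print(count_island_events([[1, 5, 2, 7, 1]]))  # prints (2, 1)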
As the topology does not care about geometry or dimensionality (directly), complex optimizations in infinite dimensional Hilbert spaces are amenable to this kind of analysis.
See also
Topological data analysis
Discrete Morse theory
Stratified Morse Theory
References
Computational topology
Digital geometry
Medical imaging | Digital Morse theory | [
"Mathematics"
] | 793 | [
"Computational topology",
"Topology",
"Computational mathematics"
] |
331,689 | https://en.wikipedia.org/wiki/Michael%20Shermer | Michael Brant Shermer (born September 8, 1954) is an American science writer, historian of science, executive director of The Skeptics Society, and founding publisher of Skeptic magazine, a publication focused on investigating pseudoscientific and supernatural claims. The author of over a dozen books, Shermer is known for engaging in debates on pseudoscience and religion in which he emphasizes scientific skepticism.
Shermer was the co-producer and co-host of Exploring the Unknown, a 13-hour Fox Family television series broadcast in 1999. From April 2001 to January 2019, he contributed a monthly Skeptic column to Scientific American magazine.
Shermer was raised in a non-religious household, before converting to Christian fundamentalism as a teenager. He stopped believing in God during graduate school, influenced by a traumatic accident that left his then-girlfriend paralyzed. He identifies as an agnostic and an atheist, but prefers "skeptic". He also advocates for humanism. Shermer became an Internet-ordained clergyman in the Universal Life Church and has performed weddings.
Early life and education
Michael Brant Shermer was born on September 8, 1954, in Los Angeles, California. He is partly of Greek and German ancestry. Shermer was raised in Southern California, primarily in the La Cañada Flintridge area. His parents divorced when he was four and later remarried. He has a step-sister, two step-brothers, and two half-sisters.
Shermer attended Sunday school but said he was otherwise raised in a non religious household. He began his senior year of high school in 1971, when the evangelical movement in the United States was growing in popularity. At the behest of a friend, Shermer embraced Christianity. He attended the Presbyterian Church in Glendale, California and observed a sermon delivered by a "dynamic and histrionic preacher" who encouraged him to come forward to be saved. For seven years, Shermer evangelized door-to-door. He also attended an informal Christian fellowship at "The Barn" in La Crescenta, California, where he described enjoying the social aspects of religion, especially the theological debates.
In 1972, he graduated from Crescenta Valley High School and enrolled at Pepperdine University, intending to pursue Christian theology. Shermer changed majors to psychology once he learned that a doctorate in theology required proficiency in Hebrew, Greek, Latin, and Aramaic. He completed his BA in psychology at Pepperdine in 1976.
Shermer went on to study experimental psychology at California State University, Fullerton. Discussions with his professors, along with studies in the natural and social sciences, led him to question his religious beliefs. Fueled by what he perceived to be the intolerance generated by the absolute morality taught in his religious studies; the hypocrisy in what many believers preached and what they practiced; and a growing awareness of other religious beliefs that were determined by the temporal, geographic, and cultural circumstances in which their adherents were born, he abandoned his religious views halfway through graduate school.
Shermer attributed the paralysis of his college girlfriend as a key point when he lost faith. After she was in an automobile accident that broke her back and rendered her paralyzed from the waist down, Shermer relayed, "If anyone deserved to be healed it was her, and nothing happened, so I just thought there was probably no God at all."
He earned an MA degree in psychology from California State University, Fullerton in 1978.
Career
Cycling
After earning his MA in experimental psychology in 1978, Shermer worked as a writer for a bicycle magazine in Irvine, California. He took up bicycle racing after his first assignment, a Cycles Peugeot press conference. He completed a century ride (100 miles) and started to ride hundreds of miles a week.
Shermer began competitive cycling in 1979 and rode professionally for ten years, primarily in long distance ultramarathon road racing. He is a founding member of the Ultra Cycling Hall of Fame.
Shermer worked with cycling technologists in developing better products for the sport. During his association with Bell Helmets, a bicycle-race sponsor, he advised them on design issues regarding expanded-polystyrene for use in cycling helmets, which would absorb greater impact than the old leather "hairnet" helmets used by bicyclists for decades. Shermer advised them that if their helmets looked too much like motorcycle helmets, in which polystyrene was already being used, and not like the old hairnet helmets, no serious cyclists or amateur would use them. This suggestion led to their model, the V1 Pro, which looked like a black leather hairnet, but functioned on the inside like a motorcycle helmet. In 1982, he worked with Wayman Spence, whose small supply company, Spenco Medical, adapted the gel technology Spence developed for bedridden patients with pressure sores into cycling gloves and saddles to alleviate the carpal tunnel syndrome and saddle sores suffered by cyclists.
While a long distance racer, he helped to found the 3,000-mile nonstop transcontinental bicycle Race Across America (known as "RAAM", along with Lon Haldeman and John Marino), in which he competed five times (1982, 1983, 1984, 1985, and 1989), was an assistant race director for six years, and the executive race director for seven years. An acute medical condition is named for him: "Shermer's Neck" is pain in and extreme weakness of the neck muscles found among long-distance bicyclists. Shermer suffered the condition about 2,000 miles into the 1983 Race Across America. Shermer's embrace of scientific skepticism crystallized during his time as a cyclist, explaining, "I became a skeptic on Saturday, August 6, 1983, on the long climbing road to Loveland Pass, Colorado", after months of training under the guidance of a "nutritionist" with an unaccredited PhD. After years of practicing acupuncture, chiropractic, massage therapy, negative ions, rolfing, pyramid power, and fundamentalist Christianity to improve his life and training, Shermer stopped rationalizing the failure of these practices.
Shermer participated in the Furnace Creek 508 in October 2011, a qualifying race for RAAM, finishing second in the four man team category.
Shermer has written on the subject of pervasive doping in competitive cycling and a game-theoretic view of the dynamics driving the problem in several sports. He covered r-EPO doping, describing it as widespread and well known within the sport; the practice was later shown to be central to the doping scandal surrounding Lance Armstrong in 2010.
Teaching
While cycling, Shermer taught Psychology 101 during the evenings at Glendale Community College, a two-year college. Wanting to teach at a four-year university, he decided to earn his PhD. He lost interest in psychology and switched to studying the history of science, earning his PhD at Claremont Graduate University in 1991. His dissertation was titled Heretic-Scientist: Alfred Russel Wallace and the Evolution of Man: A Study on the Nature of Historical Change.
Shermer then became an adjunct professor of the history of science at Occidental College, California. In 2007, Shermer became a senior research fellow at Claremont Graduate University. In 2011, he worked as an adjunct professor at Chapman University, and was later made a Presidential Fellow. At Chapman, he taught a yearly critical thinking course called Skepticism 101.
Skeptics Society
In 1991, Shermer and Pat Linse co-founded the Skeptics Society in Los Angeles with the assistance of Kim Ziel Shermer. The Skeptics Society is a non-profit organization that promotes scientific skepticism and seeks to debunk pseudoscience and irrational beliefs. It started off as a garage hobby but eventually grew into a full-time occupation. The Skeptics Society publishes the magazine Skeptic, organizes the Caltech Lecture Series, and as of 2017, it had over 50,000 members.
Shermer is listed as one of the scientific advisors for the American Council on Science and Health (ACSH).
Published works
Shermer's early writing covered cycling; he later wrote on math and science education for children, including several collaborations with Arthur Benjamin.
From April 2001 to January 2019, he wrote the monthly Skeptic column for Scientific American. He has also contributed to Time magazine.
He is the author of a series of books that attempt to explain the ubiquity of irrational or poorly substantiated beliefs, including UFOs, Bigfoot, and paranormal claims. In Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time (1997), Shermer refers to "patternicity", his term for the tendency to find meaningful patterns in meaningless noise (compare pareidolia and apophenia). He writes in the introduction: "So we are left with the legacy of two types of thinking errors: Type 1 Error: believing a falsehood and Type 2 Error: rejecting a truth. ... Believers in UFOs, alien abductions, ESP, and psychic phenomena have committed a Type 1 Error in thinking: they are believing a falsehood. ... It's not that these folks are ignorant or uninformed; they are intelligent but misinformed. Their thinking has gone wrong." In How We Believe: The Search for God in an Age of Science (2000), Shermer explored the psychology behind the belief in God.
In February 2002, he characterized the position that "God had no part in the process [of the evolution of mankind]" as the "standard scientific theory". This statement was criticized in January 2006 by the scientist Eugenie Scott, who commented that science makes no claim about God one way or the other.
Shermer's book In Darwin's Shadow: The Life and Science of Alfred Russel Wallace: A Biographical Study on the Psychology of History (2002) was based on his dissertation.
In his book The Borderlands of Science (2001), Shermer rated several noted scientists for gullibility toward "pseudo" or "borderland" ideas, using a rating instrument developed by psychologist Frank Sulloway and based on the Big Five model of personality. Shermer rated Wallace extremely high (99th percentile) on agreeableness/accommodation and argued that this was the key trait distinguishing Wallace from scientists who give less credence to fringe ideas.
In May 2002, Shermer and Alex Grobman published their book Denying History: Who Says the Holocaust Never Happened and Why Do They Say It?, which examined and countered the Holocaust denial movement. The book recounts their meetings with various denialists and concludes that free speech is the best way to deal with pseudohistory.
Science Friction: Where the Known Meets the Unknown was released in 2005.
His 2006 book Why Darwin Matters: The Case Against Intelligent Design marshals point-by-point arguments supporting evolution, sharply criticizing intelligent design. This book also argues that science cannot invalidate religion, and that Christians and conservatives can and should accept evolution.
In The Mind of The Market: Compassionate Apes, Competitive Humans, and Other Tales from Evolutionary Economics (2007), Shermer reported on the findings of multiple behavioral and biochemical studies that address evolutionary explanations for modern behavior. It garnered several critical reviews from academics, with skeptic Robert T. Carroll saying: "He has been blinded by his libertarianism and seduced by the allure of evolutionary psychology to explain everything, including ethics and economics."
In May 2011, Shermer published The Believing Brain: From Ghosts and Gods to Politics and Conspiracies: How We Construct Beliefs and Reinforce Them as Truths. In a review for Commonweal, writer Joseph Bottum described Shermer as primarily a popularizer of science and stated, "science emerges from The Believing Brain as a full-blown ideology, lifted out of its proper realm and applied to all the puzzles of the world."
In January 2015, Shermer published The Moral Arc: How Science and Reason Lead Humanity Toward Truth, Justice, and Freedom.
Writing for Society in 2017, Eugene Goodheart noted that Shermer identified skepticism with scientism and observed that in his book Skeptic: Viewing the World with a Skeptical Eye (2016) Shermer was a "vivid and lucid" writer who imported his "political convictions into his advocacy of evolutionary theory, compromising his objectivity as a defender of science."
Harriet Hall said of Shermer's 2018 publication, Heavens on Earth, that "the topics of Heavens on Earth are usually relegated to the spheres of philosophy and religion, but Shermer approaches them through science, looking for evidence – or lack thereof."
In 2020, Shermer published Giving the Devil His Due, a collection of 30 reflections on essays he had published over the previous 15 years.
Media appearances and lectures
Shermer appeared as a guest on Donahue in 1994 to respond to Bradley Smith's and David Cole's Holocaust denial claims, and in 1995 on The Oprah Winfrey Show to challenge Rosemary Altea's psychic claims.
In 1994 and 1995, Shermer made several appearances on NBC's daytime paranormal-themed show The Other Side. He proposed a skepticism-oriented reality show to the producers, but it did not move forward; several years later, the Fox Family Channel picked up the series. In 1999, Shermer co-produced and co-hosted the Fox Family TV series Exploring the Unknown. Budgeted at approximately $200,000 per episode, the series was viewed by Shermer as a direct extension of the work done at the Skeptics Society and Skeptic magazine, with a neutral title chosen to broaden viewership.
Shermer made a guest appearance in a 2004 episode of Penn & Teller's Bullshit!, in which he argued that events in the Bible constitute "mythic storytelling" rather than literal history. His stance was supported by the show's hosts, who have expressed their own atheism. The episode in question, "The Bible: Fact or Fiction?", sought to debunk the notion that the Bible is an empirically reliable historical record. Opposing Shermer was Paul L. Maier, professor of ancient history at Western Michigan University.
Shermer presented at the three Beyond Belief events from 2006 to 2008. He has presented at several TED conferences with "Why people believe strange things" in 2006, "The pattern behind self-deception" in 2010, and "Reasonable Doubt" in 2015.
Shermer has debated Deepak Chopra several times, including on the ABC News program Nightline in March 2010.
In 2012, Shermer was one of three guest speakers at the first Reason Rally in Washington, D.C., an event attended by thousands of atheists, where he gave a talk titled "The Moral Arc of Reason." That same year, Shermer participated in an Intelligence Squared debate titled "Science Refutes God" paired with Lawrence Krauss, and opposing Dinesh D'Souza and Ian Hutchinson.
He is also an occasional guest on Skepticality, the official podcast of Skeptic.
Shermer appeared in the 2014 documentary Merchants of Doubt.
Allegations of sexual assault and harassment
In 2013, blogger PZ Myers published an anonymous account of a woman who said that Shermer had raped her at a conference. Subsequently, he was accused of sexual harassment by two other women. Shermer has denied these allegations. In 2019, Illinois Wesleyan University canceled Shermer’s visit for the President’s Convocation at that institution after it learned of the sexual assault allegations.
Personal life
Shermer married Kim Ziel. They had one daughter together and later divorced. On June 25, 2014 he married Jennifer Graf, a native of Cologne, Germany.
Political positions
Shermer is a self-described libertarian. In a 2015 interview, Shermer stated that he preferred to talk about individual issues after previous experience with people refusing to listen to him after learning he held libertarian views.
In 2000, Shermer voted for libertarian Harry Browne, on the assumption that the winner of the Al Gore – George W. Bush contest would be irrelevant. He later regretted this decision, believing that Bush's foreign policy made the world more dangerous. He voted for John Kerry in 2004. Shermer named Thomas Jefferson as his favorite president, for his championing of liberty and his application of scientific thinking to the political, economic, and social spheres.
In June 2006, Shermer, who had formerly expressed skepticism regarding the mainstream scientific view on global warming, wrote in Scientific American that, in light of the accumulation of evidence, the position of denying global warming was no longer tenable.
Gun control
Shermer supports some measures to reduce gun-related violence. He once opposed most gun control measures, primarily because of his beliefs in the principles of increasing individual freedom and decreasing government intervention, and also because he has owned guns for most of his life. As an adult, he owned a .357 Magnum pistol for a quarter of a century for protection, although he eventually took it out of the house, and then got rid of it entirely. Though he no longer owns guns, he continues to support the right to own guns to protect one's family. However, by 2013, the data on gun homicides, suicides, and accidental shootings convinced him that some modest gun control measures might be necessary.
Awards and honors
Fellow, 2001, Linnean Society of London
California State University, Fullerton Distinguished Alumni Award, 2002
NCAS Philip J. Klass Award, October 2006
Honorary Doctorate of Humane Letters, Whittier College, 2008
Independent Investigations Group, 10th Anniversary Gala award, 2010
Bibliography
Shermer, Michael (2022). Conspiracy – Why the Rational Believe the Irrational. Johns Hopkins University Press.
References
External links
1954 births
20th-century American essayists
20th-century American historians
20th-century American male writers
20th-century atheists
20th-century American biographers
21st-century American essayists
21st-century American historians
21st-century American male writers
21st-century American non-fiction writers
21st-century atheists
21st-century American biographers
American activists
American agnostics
American atheism activists
American atheists
American ethicists
American former Christians
American former Protestants
American historians
American humanists
American libertarians
American male essayists
American male non-fiction writers
American science writers
American skeptics
California State University, Fullerton alumni
Claremont Graduate University faculty
American critics of alternative medicine
American critics of Christianity
Critics of conspiracy theories
American critics of creationism
Critics of parapsychology
Cycling writers
Historians from California
American historians of science
Living people
American male biographers
Materialists
People from Altadena, California
People from La Crescenta-Montrose, California
Pepperdine University alumni
Science activists
Scientific American people
Secular humanists
Theorists on Western civilization
Universal Life Church
Ultra-distance cyclists
Writers about activism and social change
Writers about religion and science
Writers from Glendale, California | Michael Shermer | [
"Physics"
] | 3,895 | [
"Materialism",
"Matter",
"Materialists"
] |
331,731 | https://en.wikipedia.org/wiki/Plaster | Plaster is a building material used for the protective or decorative coating of walls and ceilings and for moulding and casting decorative elements. In English, "plaster" usually means a material used for the interiors of buildings, while "render" commonly refers to external applications. The term stucco refers to plasterwork that is worked in some way to produce relief decoration, rather than flat surfaces.
The most common types of plaster mainly contain either gypsum, lime, or cement, but all work in a similar way. The plaster is manufactured as a dry powder and is mixed with water to form a stiff but workable paste immediately before it is applied to the surface. The reaction with water liberates heat through crystallization and the hydrated plaster then hardens.
Plaster can be relatively easily worked with metal tools and sandpaper and can be moulded, either on site or in advance, and worked pieces can be put in place with adhesive. Plaster is suitable for finishing rather than load-bearing, and when thickly applied for decoration may require a hidden supporting framework.
Forms of plaster have several other uses. In medicine, plaster orthopedic casts are still often used for supporting set broken bones. In dentistry, plaster is used to make dental models by pouring the material into dental impressions. Various types of models and moulds are made with plaster. In art, lime plaster is the traditional matrix for fresco painting; the pigments are applied to a thin wet top layer of plaster and fuse with it so that the painting is actually in coloured plaster. In the ancient world, as well as the sort of ornamental designs in plaster relief that are still used, plaster was also widely used to create large figurative reliefs for walls, though few of these have survived.
History
Plaster was first used as a building material and for decoration in the Middle East at least 7,000 years ago. In Egypt, gypsum was burned in open fires, crushed into powder, and mixed with water to create plaster, used as a mortar between the blocks of pyramids and to provide a smooth wall facing. In Jericho, a cult arose where human skulls were decorated with plaster and painted to appear lifelike. The Romans brought plaster-work techniques to Europe.
Types
Clay plaster
Clay plaster is a mixture of clay, sand, and water, often with the addition of plant fibers for tensile strength, applied over wood lath.
Clay plaster has been used around the world at least since antiquity. Settlers in the American colonies used clay plaster on the interiors of their houses: "Interior plastering in the form of clay antedated even the building of houses of frame, and must have been visible in the inside of wattle filling in those earliest frame houses in which … wainscot had not been indulged. Clay continued in use long after the adoption of laths and brick filling for the frame." Where lime was not easily accessible it was rationed and usually substituted with clay as a binder. In Martin E. Weaver's seminal work he says, "Mud plaster consists of clay or earth which is mixed with water to give a 'plastic' or workable consistency. If the clay mixture is too plastic it will shrink, crack and distort on drying. Sand, fine gravels and fibres were added to reduce the concentrations of fine clay particles which were the cause of the excessive shrinkage." Manure was often added for its fibre content. In some building techniques straw or grass was used as reinforcement.
In the earliest European settlers' plasterwork, a mud plaster was used. McKee wrote of a circa 1675 Massachusetts contract that specified the plasterer "Is to lath and siele the four rooms of the house betwixt the joists overhead with a coat of lime and haire upon the clay; also to fill the gable ends of the house with ricks and plaister them with clay. 5. To lath and plaster partitions of the house with clay and lime, and to fill, lath, and plaister them with lime and haire besides; and to siele and lath them overhead with lime; also to fill, lath, and plaster the kitchen up to the wall plate on every side. 6. The said Daniel Andrews is to find lime, bricks, clay, stone, haire, together with laborers and workmen." Records of the New Haven colony in 1641 mention clay and hay as well as lime and hair. In the German houses of Pennsylvania, the use of clay persisted.
Old Economy Village is one such German settlement. The early nineteenth-century utopian village in present-day Ambridge, Pennsylvania, used a clay plaster substrate exclusively, both in the brick and wood-frame high architecture of the Feast Hall, Great House, and other large and commercial structures, and in the brick, frame, and log dwellings of the society members. The use of clay in plaster and in laying brickwork appears to have been a common practice at the time the settlement was founded in 1824, and not just in the construction of Economy Village. Specifications for the construction of lock keepers' houses on the Chesapeake and Ohio Canal, written about 1828, require stone walls "to be laid with clay mortar, excepting 3 inches on the outside of the walls … which (are) to be good lime mortar and well pointed." Clay was chosen not only for its low cost but also for its availability: at Economy, root cellars dug under the houses yielded clay and sand, the nearby Ohio River yielded washed sand from its sand bars, and lime outcroppings and oyster shell supplied the lime kiln.
The surrounding forests of the new village of Economy provided straight-grained, old-growth oak trees for lath. Hand-split lath starts with a log of straight-grained wood of the required length. The log is split into quarters and then into smaller and smaller bolts with wedges and a sledge. When small enough, a froe and mallet were used to split away narrow strips of lath. Farm animals provided hair and manure for the float coat of plaster. Fields of wheat and grains provided straw and hay to reinforce the clay plaster. But there was no uniformity in clay plaster recipes.
Manure provides fiber for tensile strength as well as protein adhesive. Unlike the casein used with lime plaster, the hydrogen bonds of manure proteins are weakened by moisture. In braced timber-framed structures, clay plaster was used on interior walls and ceilings as well as on exterior walls, since the wall cavity and exterior cladding isolated the clay plaster from moisture penetration. Applying clay plaster in brick structures risked water penetration from failed mortar joints on the exterior brick walls. In Economy Village, the rear and middle wythes of brick dwelling walls are laid in a clay-and-sand mortar, with the front wythe bedded in a lime-and-sand mortar to provide a weatherproof seal against water penetration. This allowed a rendering of clay plaster and a setting coat of thin lime and fine sand on exterior-walled rooms.
Split lath was nailed with square-cut lath nails, one into each framing member. With hand-split lath, the plasterer had the luxury of making lath to fit the cavity being plastered. Lath lengths of two to six feet are not uncommon at Economy Village. Hand-split lath is not uniform like sawn lath. The straightness or waviness of the grain affected the thickness or width of each lath, and thus the spacing of the lath. The clay plaster rough coat varied in thickness to cover the irregular lath. Window and door trim as well as the mudboard (baseboard) acted as screeds. Because of the variation in lath thickness and the use of coarse straw and manure, the clay coat of plaster was thick in comparison to later lime-only and gypsum plasters. In Economy Village, the lime top coats are thin veneers, often an eighth of an inch or less, attesting to the scarcity of limestone supplies there.
Clay plasters, with their lack of tensile and compressive strength, fell out of favor as industrial mining and technological advances in kiln production led to the exclusive use of lime and then gypsum in plaster applications. However, clay plasters still exist after hundreds of years, clinging to split lath on rusty square nails. The wall variations and roughness reveal a hand-made and pleasing textured alternative to machine-made modern substrate finishes. But clay plaster finishes are rare and fleeting. According to Martin Weaver, "Many of North America's historic building interiors … are all too often … one of the first things to disappear in the frenzy of demolition of interiors which has unfortunately come to be a common companion to 'heritage preservation' in the guise of building rehabilitation."
Gypsum plaster (plaster of Paris)
Gypsum plaster, also known as plaster of Paris, is a white powder consisting of calcium sulfate hemihydrate. The natural form of the compound is the mineral bassanite.
Etymology
The name "plaster of Paris" was given because it was originally made by heating gypsum from a large deposit at Montmartre, a hill in the north end of Paris.
Chemistry
Gypsum plaster, gypsum powder, or plaster of Paris, is produced by heating gypsum to about 120–180 °C (248–356 °F) in a kiln:
$\mathrm{CaSO_4{\cdot}2H_2O \;\xrightarrow{\text{heat}}\; CaSO_4{\cdot}\tfrac{1}{2}H_2O \;+\; 1\tfrac{1}{2}\,H_2O\!\uparrow}$ (released as steam).
Plaster of Paris has the remarkable property of setting into a hard mass on wetting with water:
$\mathrm{CaSO_4{\cdot}\tfrac{1}{2}H_2O \;+\; 1\tfrac{1}{2}\,H_2O \;\longrightarrow\; CaSO_4{\cdot}2H_2O}$
Plaster of Paris is stored in moisture-proof containers, because the presence of moisture can cause slow setting of plaster of Paris by bringing about its hydration, which will make it useless after some time.
When the dry plaster powder is mixed with water, it rehydrates over time into gypsum. The setting of plaster slurry starts about 10 minutes after mixing and is complete in about 45 minutes. The setting of plaster of Paris is accompanied by a slight expansion of volume. It is used in making casts for statues, toys, and more. The initial matrix consists mostly of orthorhombic crystals: the kinetic product. Over the next 72 hours, the rhombic crystals give way to an interlocking mass of monoclinic crystal needles, and the plaster increases in hardness and strength. If plaster or gypsum is heated to between 130 °C (266 °F) and 180 °C (350 °F), hemihydrate is formed, which will also re-form as gypsum if mixed with water.
On heating to 180 °C (350 °F), the nearly water-free form, called γ-anhydrite (CaSO4·nH2O where n = 0 to 0.05) is produced. γ-anhydrite reacts slowly with water to return to the dihydrate state, a property exploited in some commercial desiccants. On heating above 250 °C (480 °F), the completely anhydrous form called β-anhydrite or dead burned plaster is formed.
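Taken together, the heating steps above form one dehydration series. The following summary is only a sketch assembled from the approximate temperature ranges quoted in this section (in practice the boundaries overlap and depend on kiln conditions):

$\mathrm{CaSO_4{\cdot}2H_2O \;\xrightarrow{\ \sim120\text{–}180\,^{\circ}\mathrm{C}\ }\; CaSO_4{\cdot}\tfrac{1}{2}H_2O \;\xrightarrow{\ \sim180\,^{\circ}\mathrm{C}\ }\; \gamma\text{-}CaSO_4 \;\xrightarrow{\ >250\,^{\circ}\mathrm{C}\ }\; \beta\text{-}CaSO_4}$

that is, gypsum (dihydrate) → hemihydrate (plaster of Paris) → γ-anhydrite → β-anhydrite ("dead burned" plaster). As noted above, mixing either the hemihydrate or γ-anhydrite with water returns the material to the dihydrate.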
Uses of gypsum plaster
for making surfaces such as the walls of a house smooth before painting them, and for making ornamental designs on the ceilings of houses and other buildings. (see Plaster in decorative architecture)
for making toys, decorative materials, cheap ornaments, cosmetics, and black-board chalk.
a fire-proofing material. (see Plaster in Fire protection)
an orthopedic cast is used in hospitals for setting fractured bones in the right position to ensure correct healing and avoid nonunion. It keeps the fractured bone straight. It is used in this way, because when plaster of Paris is mixed with a proper quantity of water and applied around the fractured limb, it sets into a hard mass, thereby keeping the bones in a fixed position. It is also used for making casts in dentistry. (see Plaster in Medicine)
in the chemistry laboratory, for sealing air gaps in apparatus when an airtight arrangement is required.
Lime plaster
Lime plaster is a mixture of calcium hydroxide and sand (or other inert fillers). Carbon dioxide in the atmosphere causes the plaster to set by transforming the calcium hydroxide into calcium carbonate (limestone). Whitewash is based on the same chemistry.
To make lime plaster, limestone (calcium carbonate) is heated above approximately 850 °C (1600 °F) to produce quicklime (calcium oxide). Water is then added to produce slaked lime (calcium hydroxide), which is sold as a wet putty or a white powder. Additional water is added to form a paste prior to use. The paste may be stored in airtight containers. When exposed to the atmosphere, the calcium hydroxide very slowly turns back into calcium carbonate through reaction with atmospheric carbon dioxide, causing the plaster to increase in strength.
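The chemistry just described is the classic "lime cycle". As a point of reference, a standard textbook summary of the three reactions (a general formulation, not quoted from this article) is:

calcination: $\mathrm{CaCO_3 \;\xrightarrow{\ \gtrsim 850\,^{\circ}\mathrm{C}\ }\; CaO + CO_2}$
slaking: $\mathrm{CaO + H_2O \;\longrightarrow\; Ca(OH)_2}$
carbonation (setting): $\mathrm{Ca(OH)_2 + CO_2 \;\longrightarrow\; CaCO_3 + H_2O}$

The cycle ends where it begins, with calcium carbonate, so cured lime plaster is chemically the same material as the original limestone.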
Lime plaster was a common building material for wall surfaces in a process known as lath and plaster, whereby a series of wooden strips on a studwork frame was covered with a semi-dry plaster that hardened into a surface. The plaster used in most lath and plaster construction was mainly lime plaster, with a cure time of about a month. To stabilize the lime plaster during curing, small amounts of plaster of Paris were incorporated into the mix. Because plaster of Paris sets quickly, "retardants" were used to slow setting time enough to allow workers to mix large working quantities of lime putty plaster. A modern form of this method uses expanded metal mesh over wood or metal structures, which allows a great freedom of design as it is adaptable to both simple and compound curves. Today this building method has been partly replaced with drywall, also composed mostly of gypsum plaster. In both these methods, a primary advantage of the material is that it is resistant to a fire within a room and so can assist in reducing or eliminating structural damage or destruction provided the fire is promptly extinguished.
Lime plaster is used for frescoes, where pigments, diluted in water, are applied to the still wet plaster.
The United States and Iran are the main plaster producers in the world.
Cement plaster
Cement plaster is a mixture of suitable plaster, sand, Portland cement and water which is normally applied to masonry interiors and exteriors to achieve a smooth surface. Interior surfaces sometimes receive a final layer of gypsum plaster. Walls constructed with stock bricks are normally plastered while face brick walls are not plastered. Various cement-based plasters are also used as proprietary spray fireproofing products. These usually use vermiculite as lightweight aggregate. Heavy versions of such plasters are also in use for exterior fireproofing, to protect LPG vessels, pipe bridges and vessel skirts.
Cement plaster was first introduced in America around 1909 and was often called by the generic name adamant plaster after a prominent manufacturer of the time. The advantages of cement plaster noted at that time were its strength, hardness, quick setting time and durability.
Heat-resistant plaster
Heat-resistant plaster is a building material used for coating walls and chimney breasts and for use as a fire barrier in ceilings. Its purpose is to replace conventional gypsum plasters in cases where the temperature can get too high for gypsum plaster to stay on the wall or ceiling.
An example of a heat-resistant plaster composition is a mixture of Portland cement, gypsum, lime, exfoliated insulating aggregate (perlite and vermiculite or mica), phosphate shale, and small amounts of adhesive binder (such as Gum karaya), and a detergent agent (such as sodium dodecylbenzene sulfonate).
Applications
In decorative architecture
Plaster may also be used to create complex detailing for use in room interiors. These may be geometric (simulating wood or stone) or naturalistic (simulating leaves, vines, and flowers). These are also often used to simulate wood or stone detailing found in more substantial buildings.
In modern construction the material is also used for false ceilings: the powder is formed into sheets, which are then attached to the base ceiling with fasteners, in various designs incorporating combinations of lights and colors. Plaster is also commonly used in house construction. After construction, walls may be painted directly (as is common in French architecture), but elsewhere a coat of plaster, in some countries essentially calcium carbonate, is applied first; after drying it turns white and the wall is ready to be painted. Elsewhere in the world, such as the UK, ever finer layers of plaster are added on top of the plasterboard (or sometimes directly onto the brick wall) to give a smooth, brown, polished texture ready for painting.
Art
Mural paintings are commonly painted onto a plaster secondary support. Some, like Michelangelo's Sistine Chapel ceiling, are executed in fresco, meaning they are painted on a thin layer of wet plaster, called intonaco; the pigments sink into this layer so that the plaster itself becomes the medium holding them, which accounts for the excellent durability of fresco. Additional work may be added a secco on top of the dry plaster, though this is generally less durable.
Plaster (often called stucco in this context) is a far easier material for making reliefs than stone or wood, and was widely used for large interior wall-reliefs in Egypt and the Near East from antiquity into Islamic times (latterly for architectural decoration, as at the Alhambra), Rome, and Europe from at least the Renaissance, as well as probably elsewhere. However, it needs very good conditions to survive long in unmaintained buildings – Roman decorative plasterwork is mainly known from Pompeii and other sites buried by ash from Mount Vesuvius.
Plaster may be cast directly into a damp clay mold. In creating this, piece molds (molds designed for making multiple copies) or waste molds (for single use) would be made of plaster. This "negative" image, if properly designed, may be used to produce clay pieces, which when fired in a kiln become terra cotta building decorations, or it may be used to create cast concrete sculptures. If a plaster positive was desired, this would be constructed or cast to form a durable image artwork. As a model for stonecutters this would be sufficient. If intended for producing a bronze casting, the plaster positive could be further worked to produce smooth surfaces. An advantage of this plaster image is that it is relatively cheap; should a patron approve of the durable image and be willing to bear further expense, subsequent molds could be made for the creation of a wax image to be used in lost wax casting, a far more expensive process. In lieu of producing a bronze image suitable for outdoor use, the plaster image may be painted to resemble a metal image; such sculptures are suitable only for presentation in a weather-protected environment.
Plaster expands while hardening then contracts slightly just before hardening completely. This makes plaster excellent for use in molds, and it is often used as an artistic material for casting. Plaster is also commonly spread over an armature (form), made of wire mesh, cloth, or other materials; a process for adding raised details. For these processes, limestone or acrylic based plaster may be employed, known as stucco.
Products composed mainly of plaster of Paris and a small amount of Portland cement are used for casting sculptures and other art objects as well as molds. Considerably harder and stronger than straight plaster of Paris, these products are for indoor use only as they degrade in moist conditions.
Medicine
Plaster is widely used as a support for broken bones; a bandage impregnated with plaster is moistened and then wrapped around the damaged limb, setting into a close-fitting yet easily removed tube, known as an orthopedic cast.
Plaster is also used in preparation for radiotherapy when fabricating individualized immobilization shells for patients. Plaster bandages are used to construct an impression of a patient's head and neck, and liquid plaster is used to fill the impression and produce a plaster bust. The transparent material polymethyl methacrylate (Plexiglas, Perspex) is then vacuum formed over this bust to create a clear face mask which will hold the patient's head steady while radiation is being delivered.
In dentistry, plaster is used for mounting casts or models of oral tissues. These diagnostic and working models are usually made from dental stone, a stronger, harder and denser derivative of plaster which is manufactured from gypsum under pressure. Plaster is also used to invest and flask wax dentures, the wax being subsequently removed by "burning out" and replaced with flowable denture base material. The typically acrylic denture base then cures in the plaster investment mold. Plaster investments can withstand the high heat and pressure needed to ensure a rigid denture base. In dentistry there are five types of gypsum products, classified by consistency and use: impression plaster (type 1), model plaster (type 2), and dental stones (types 3, 4, and 5).
In orthotics and prosthetics, plaster bandages traditionally were used to create impressions of the patient's limb (or residuum). This negative impression was then, itself, filled with plaster of Paris, to create a positive model of the limb and used in fabricating the final medical device.
In addition, dentures (false teeth) are made by first taking a dental impression using a soft, pliable material that can be removed from around the teeth and gums without loss of fidelity, and then using the impression to create a wax model of the teeth and gums. The model is used to create a plaster mold (which is heated so the wax melts and flows out), and the denture materials are injected into the mold. After a curing period, the mold is opened and the dentures are cleaned up and polished.
Fire protection
Plasters have been in use in passive fire protection, as fireproofing products, for many decades.
Gypsum plaster releases water vapor when exposed to flame, acting to slow the spread of the fire for as much as an hour or two depending on thickness. Plaster also provides some insulation to retard heat flow into structural steel elements, which would otherwise lose their strength and collapse in a fire. Early versions of protective plasters often contained asbestos fibres, which have since been outlawed in many industrialized nations.
Recent plasters for fire protection either contain cement or gypsum as binding agents as well as mineral wool or glass fiber to add mechanical strength.
Vermiculite, polystyrene beads or chemical expansion agents are often added to decrease the density of the finished product and increase thermal insulation.
One differentiates between interior and exterior fireproofing. Interior products are typically less substantial, with lower densities and lower cost. Exterior products have to withstand harsher environmental conditions. A rough surface is typically tolerated inside buildings, as dropped ceilings often hide it. Fireproofing plasters are losing ground to more costly intumescent and endothermic products, simply on technical merit. Trade jurisdiction on unionized construction sites in North America remains with the plasterers, regardless of whether the plaster is decorative in nature or is used in passive fire protection. Cementitious and gypsum-based plasters tend to be endothermic. Fireproofing plasters are closely related to firestop mortars. Most firestop mortars can be sprayed and tooled very well, due to the fine detail work that is required of firestopping.
3D printing
Powder bed and inkjet head 3D printing is commonly based on the reaction of gypsum plaster with water, where the water is selectively applied by the inkjet head.
Safety issues
The chemical reaction that occurs when plaster is mixed with water is exothermic. When plaster sets, it can reach temperatures of more than 60 °C (140 °F) and, in large volumes, can burn the skin. In January 2007, a secondary school student in Lincolnshire, England sustained third-degree burns after encasing her hands in a bucket of plaster as part of a school art project.
Plaster that contains powdered silica or asbestos presents health hazards if inhaled repeatedly. Asbestos is a known irritant when inhaled and can cause cancer, especially in people who smoke, and inhalation can also cause asbestosis. Inhaled silica can cause silicosis and (in very rare cases) can encourage the development of cancer. Persons working regularly with plaster containing these additives should take precautions to avoid inhaling powdered plaster, cured or uncured.
People can be exposed to plaster of Paris in the workplace by breathing it in, swallowing it, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for plaster of Paris exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a Recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday.
See also
References
External links
Building materials
Wallcoverings
Sculpture materials
Calcium compounds
Hydrates
Plastering
Impression material | Plaster | [
"Physics",
"Chemistry",
"Engineering"
] | 5,227 | [
"Building engineering",
"Coatings",
"Hydrates",
"Architecture",
"Construction",
"Materials",
"Plastering",
"Matter",
"Building materials"
] |
331,755 | https://en.wikipedia.org/wiki/Transitional%20fossil | A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. Therefore, it cannot be assumed that transitional fossils are direct ancestors of more recent groups, though they are frequently used as models for such ancestors.
In 1859, when Charles Darwin's On the Origin of Species was first published, the fossil record was poorly known. Darwin described the perceived lack of transitional fossils as "the most obvious and gravest objection which can be urged against my theory," but he explained it by relating it to the extreme imperfection of the geological record. He noted the limited collections available at the time but described the available information as showing patterns that followed from his theory of descent with modification through natural selection. Indeed, Archaeopteryx was discovered just two years later, in 1861, and represents a classic transitional form between earlier, non-avian dinosaurs and birds. Many more transitional fossils have been discovered since then, and there is now abundant evidence of how all classes of vertebrates are related, including many transitional fossils. Specific examples of class-level transitions are: tetrapods and fish, birds and dinosaurs, and mammals and "mammal-like reptiles".
The term "missing link" has been used extensively in popular writings on human evolution to refer to a perceived gap in the hominid evolutionary record. It is most commonly used to refer to any new transitional fossil finds. Scientists, however, do not use the term, as it refers to a pre-evolutionary view of nature.
Evolutionary and phylogenetic taxonomy
Transitions in phylogenetic nomenclature
In evolutionary taxonomy, the prevailing form of taxonomy during much of the 20th century and still used in non-specialist textbooks, taxa based on morphological similarity are often drawn as "bubbles" or "spindles" branching off from each other, forming evolutionary trees. Transitional forms are seen as falling between the various groups in terms of anatomy, having a mixture of characteristics from inside and outside the newly branched clade.
With the establishment of cladistics in the 1990s, relationships commonly came to be expressed in cladograms that illustrate the branching of the evolutionary lineages in stick-like figures. The different so-called "natural" or "monophyletic" groups form nested units, and only these are given phylogenetic names. While in traditional classification tetrapods and fish are seen as two different groups, phylogenetically tetrapods are considered a branch of fish. Thus, with cladistics there is no longer a transition between established groups, and the term "transitional fossils" is a misnomer. Differentiation occurs within groups, represented as branches in the cladogram.
In a cladistic context, transitional organisms can be seen as representing early examples of a branch, where not all of the traits typical of the previously known descendants on that branch have yet evolved. Such early representatives of a group are usually termed "basal taxa" or "sister taxa," depending on whether the fossil organism belongs to the daughter clade or not.
Transitional versus ancestral
A source of confusion is the notion that a transitional form between two different taxonomic groups must be a direct ancestor of one or both groups. The difficulty is exacerbated by the fact that one of the goals of evolutionary taxonomy is to identify taxa that were ancestors of other taxa. However, because evolution is a branching process that produces a complex bush pattern of related species rather than a linear process producing a ladder-like progression, and because of the incompleteness of the fossil record, it is unlikely that any particular form represented in the fossil record is a direct ancestor of any other. Cladistics deemphasizes the concept of one taxonomic group being an ancestor of another, and instead emphasizes the identification of sister taxa that share a more recent common ancestor with one another than they do with other groups. There are a few exceptional cases, such as some marine plankton microfossils, where the fossil record is complete enough to suggest with confidence that certain fossils represent a population that was actually ancestral to a later population of a different species. But, in general, transitional fossils are considered to have features that illustrate the transitional anatomical features of actual common ancestors of different taxa, rather than to be actual ancestors.
Prominent examples
Archaeopteryx
Archaeopteryx is a genus of theropod dinosaur closely related to the birds. Since the late 19th century, it has been accepted by palaeontologists, and celebrated in lay reference works, as being the oldest known bird, though a study in 2011 has cast doubt on this assessment, suggesting instead that it is a non-avialan dinosaur closely related to the origin of birds.
It lived in what is now southern Germany in the Late Jurassic period around 150 million years ago, when Europe was an archipelago in a shallow warm tropical sea, much closer to the equator than it is now. Similar in shape to a European magpie, with the largest individuals possibly attaining the size of a raven, Archaeopteryx could grow to about 0.5 metres (1.6 ft) in length. Despite its small size, broad wings, and inferred ability to fly or glide, Archaeopteryx has more in common with other small Mesozoic dinosaurs than it does with modern birds. In particular, it shares the following features with the deinonychosaurs (dromaeosaurs and troodontids): jaws with sharp teeth, three fingers with claws, a long bony tail, hyperextensible second toes ("killing claw"), feathers (which suggest homeothermy), and various skeletal features. These features make Archaeopteryx a clear candidate for a transitional fossil between dinosaurs and birds, making it important in the study both of dinosaurs and of the origin of birds.
The first complete specimen was announced in 1861, and ten more Archaeopteryx fossils have been found since then. Most of the eleven known fossils include impressions of feathers—among the oldest direct evidence of such structures. Moreover, because these feathers take the advanced form of flight feathers, Archaeopteryx fossils are evidence that feathers began to evolve before the Late Jurassic.
Australopithecus afarensis
The hominid Australopithecus afarensis represents an evolutionary transition between modern bipedal humans and their quadrupedal ape ancestors. A number of traits of the A. afarensis skeleton strongly reflect bipedalism, to the extent that some researchers have suggested that bipedality evolved long before A. afarensis. In overall anatomy, the pelvis is far more human-like than ape-like. The iliac blades are short and wide, the sacrum is wide and positioned directly behind the hip joint, and there is clear evidence of a strong attachment for the knee extensors, implying an upright posture.
While the pelvis is not entirely like that of a human (being markedly wide, or flared, with laterally orientated iliac blades), these features point to a structure radically remodelled to accommodate a significant degree of bipedalism. The femur angles in toward the knee from the hip. This trait allows the foot to fall closer to the midline of the body, and strongly indicates habitual bipedal locomotion. Present-day humans, orangutans and spider monkeys possess this same feature. The feet feature adducted big toes, making it difficult if not impossible to grasp branches with the hindlimbs. Besides locomotion, A. afarensis also had a slightly larger brain than a modern chimpanzee (the closest living relative of humans) and had teeth that were more human than ape-like.
Pakicetids, Ambulocetus
The cetaceans (whales, dolphins and porpoises) are marine mammal descendants of land mammals. The pakicetids are an extinct family of hoofed mammals that are the earliest whales, whose closest sister group is Indohyus from the family Raoellidae. They lived in the Early Eocene, around 53 million years ago. Their fossils were first discovered in North Pakistan in 1979, at a river not far from the shores of the former Tethys Sea. Pakicetids could hear under water, using enhanced bone conduction, rather than depending on tympanic membranes like most land mammals. This arrangement does not give directional hearing under water.
Ambulocetus natans, which lived about 49 million years ago, was discovered in Pakistan in 1994. It was probably amphibious, and looked like a crocodile. In the Eocene, ambulocetids inhabited the bays and estuaries of the Tethys Ocean in northern Pakistan. The fossils of ambulocetids are always found in near-shore shallow marine deposits associated with abundant marine plant fossils and littoral molluscs. Although they are found only in marine deposits, their oxygen isotope values indicate that they consumed water with a range of degrees of salinity, some specimens showing no evidence of sea water consumption and others none of fresh water consumption at the time when their teeth were fossilized. It is clear that ambulocetids tolerated a wide range of salt concentrations. Their diet probably included land animals that approached water for drinking, or freshwater aquatic organisms that lived in the river. Hence, ambulocetids represent the transition phase of cetacean ancestors between freshwater and marine habitat.
Tiktaalik
Tiktaalik is a genus of extinct sarcopterygian (lobe-finned fish) from the Late Devonian period, with many features akin to those of tetrapods (four-legged animals). It is one of several lines of ancient sarcopterygians to develop adaptations to the oxygen-poor shallow water habitats of its time—adaptations that led to the evolution of tetrapods. Well-preserved fossils were found in 2004 on Ellesmere Island in Nunavut, Canada.
Tiktaalik lived approximately 375 million years ago. Paleontologists suggest that it is representative of the transition between non-tetrapod vertebrates such as Panderichthys, known from fossils 380 million years old, and early tetrapods such as Acanthostega and Ichthyostega, known from fossils about 365 million years old. Its mixture of primitive fish and derived tetrapod characteristics led one of its discoverers, Neil Shubin, to characterize Tiktaalik as a "fishapod." Unlike many previous, more fish-like transitional fossils, the "fins" of Tiktaalik have basic wrist bones and simple rays reminiscent of fingers, and they may have been weight-bearing. Like all modern tetrapods, it had rib bones, a mobile neck with a separate pectoral girdle, and lungs, though it had the gills, scales, and fins of a fish. However, a 2008 paper by Boisvert et al. noted that Panderichthys, because of its more derived distal portion, might be closer to tetrapods than Tiktaalik, which might have developed its similarities to tetrapods independently by convergent evolution.
Tetrapod footprints found in Poland and reported in Nature in January 2010 were "securely dated" at 10 million years older than the oldest known elpistostegids (of which Tiktaalik is an example), implying that animals like Tiktaalik, possessing features that evolved around 400 million years ago, were "late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates."
Amphistium
Pleuronectiformes (flatfish) are an order of ray-finned fish. The most obvious characteristic of the modern flatfish is their asymmetry, with both eyes on the same side of the head in the adult fish. In some families the eyes are always on the right side of the body (dextral or right-eyed flatfish) and in others they are always on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-eyed individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the order are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head.
Amphistium is a 50-million-year-old fossil fish identified as an early relative of the flatfish, and as a transitional fossil. In Amphistium, the transition from the typical symmetric head of a vertebrate is incomplete, with one eye placed near the top-center of the head. Paleontologists concluded that "the change happened gradually, in a way consistent with evolution via natural selection—not suddenly, as researchers once had little choice but to believe."
Amphistium is among the many fossil fish species known from the Monte Bolca Lagerstätte of Lutetian Italy. Heteronectes is a related and very similar fossil from slightly earlier strata in France.
Runcaria
A Middle Devonian precursor to seed plants has been identified from Belgium, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed, having all the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the seed.
Fossil record
Not every transitional form appears in the fossil record, because the fossil record is not complete. Organisms are only rarely preserved as fossils in the best of circumstances, and only a fraction of such fossils have been discovered. Paleontologist Donald Prothero noted that this is illustrated by the fact that the number of species known through the fossil record is less than 5% of the number of known living species; since living species are themselves only a small fraction of all the species that have ever existed, the number of species known through fossils must be far less than 1% of all the species that have ever lived.
Because of the specialized and rare circumstances required for a biological structure to fossilize, logic dictates that known fossils represent only a small percentage of all life-forms that ever existed—and that each discovery represents only a snapshot of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which never demonstrate an exact half-way point between clearly divergent forms.
The fossil record is very uneven and, with few exceptions, is heavily slanted toward organisms with hard parts, leaving most groups of soft-bodied organisms with little to no fossil record. The groups considered to have a good fossil record, including a number of transitional fossils between traditional groups, are the vertebrates, the echinoderms, the brachiopods and some groups of arthropods.
History
Post-Darwin
The idea that animal and plant species were not constant, but changed over time, was suggested as far back as the 18th century. Darwin's On the Origin of Species, published in 1859, gave it a firm scientific basis. A weakness of Darwin's work, however, was the lack of palaeontological evidence, as pointed out by Darwin himself. While it is easy to imagine natural selection producing the variation seen within genera and families, the transmutation between the higher categories was harder to imagine. The dramatic find of the London specimen of Archaeopteryx in 1861, only two years after the publication of Darwin's work, offered for the first time a link between the class of the highly derived birds, and that of the more basal reptiles. In a letter to Darwin, the palaeontologist Hugh Falconer wrote:
Had the Solnhofen quarries been commissioned—by august command—to turn out a strange being à la Darwin—it could not have executed the behest more handsomely—than in the Archaeopteryx.
Thus, transitional fossils like Archaeopteryx came to be seen as not only corroborating Darwin's theory, but as icons of evolution in their own right. For example, the Swedish encyclopedic dictionary Nordisk familjebok of 1904 showed an inaccurate reconstruction of the Archaeopteryx fossil, "ett af de betydelsefullaste paleontologiska fynd, som någonsin gjorts" ("one of the most significant paleontological discoveries ever made").
The rise of plants
Transitional fossils are not only those of animals. With the increasing mapping of the divisions of plants at the beginning of the 20th century, the search began for the ancestor of the vascular plants. In 1917, Robert Kidston and William Henry Lang found the remains of an extremely primitive plant in the Rhynie chert in Aberdeenshire, Scotland, and named it Rhynia.
The Rhynia plant was small and stick-like, with simple dichotomously branching stems without leaves, each tipped by a sporangium. The simple form echoes that of the sporophyte of mosses, and it has been shown that Rhynia had an alternation of generations, with a corresponding gametophyte in the form of crowded tufts of diminutive stems only a few millimetres in height. Rhynia thus falls midway between mosses and early vascular plants like ferns and clubmosses. From a carpet of moss-like gametophytes, the larger Rhynia sporophytes grew much like simple clubmosses, spreading by means of horizontally growing stems that put out rhizoids to anchor the plant to the substrate. The unusual mix of moss-like and vascular traits and the extreme structural simplicity of the plant had huge implications for botanical understanding.
Missing links
The idea of all living things being linked through some sort of transmutation process predates Darwin's theory of evolution. Jean-Baptiste Lamarck envisioned that life was generated constantly in the form of the simplest creatures, and strove towards complexity and perfection (i.e. humans) through a progressive series of lower forms. In his view, lower animals were simply newcomers on the evolutionary scene.
After On the Origin of Species, the idea of "lower animals" representing earlier stages in evolution lingered, as demonstrated in Ernst Haeckel's figure of the human pedigree. While the vertebrates were then seen as forming a sort of evolutionary sequence, the various classes were distinct, the undiscovered intermediate forms being called "missing links."
The term was first used in a scientific context by Charles Lyell in the third edition (1851) of his book Elements of Geology in relation to missing parts of the geological column, but it was popularized in its present meaning by its appearance on page xi of his book Geological Evidences of the Antiquity of Man of 1863. By that time, it was generally thought that the end of the last glacial period marked the first appearance of humanity; Lyell drew on new findings in his Antiquity of Man to put the origin of human beings much further back. Lyell wrote that it remained a profound mystery how the huge gulf between man and beast could be bridged. Lyell's vivid writing fired the public imagination, inspiring Jules Verne's Journey to the Center of the Earth (1864) and Louis Figuier's 1867 second edition of La Terre avant le déluge ("Earth before the Flood"), which included dramatic illustrations of savage men and women wearing animal skins and wielding stone axes, in place of the Garden of Eden shown in the 1863 edition.
The search for a fossil showing transitional traits between apes and humans, however, was fruitless until the young Dutch geologist Eugène Dubois found a skullcap, a molar and a femur on the banks of Solo River, Java in 1891. The find combined a low, ape-like skull roof with a brain estimated at around 1000 cc, midway between that of a chimpanzee and an adult human. The single molar was larger than any modern human tooth, but the femur was long and straight, with a knee angle showing that "Java Man" had walked upright. Given the name Pithecanthropus erectus ("erect ape-man"), it became the first in what is now a long list of human evolution fossils. At the time it was hailed by many as the "missing link," helping set the term as primarily used for human fossils, though it is sometimes used for other intermediates, like the dinosaur-bird intermediary Archaeopteryx.
While "missing link" is still a popular term, well-recognized by the public and often used in the popular media, the term is avoided in scientific publications. Some bloggers have called it "inappropriate"; both because the links are no longer "missing", and because human evolution is no longer believed to have occurred in terms of a single linear progression.
Punctuated equilibrium
The theory of punctuated equilibrium developed by Stephen Jay Gould and Niles Eldredge and first presented in 1972 is often mistakenly drawn into the discussion of transitional fossils. This theory, however, pertains only to well-documented transitions within taxa or between closely related taxa over a geologically short period of time. These transitions, usually traceable in the same geological outcrop, often show small jumps in morphology between extended periods of morphological stability. To explain these jumps, Gould and Eldredge envisaged comparatively long periods of genetic stability separated by periods of rapid evolution. Gould made the following observation concerning creationist misuse of his work to deny the existence of transitional fossils:
See also
Crocoduck
Evidence of common descent
Missing link
Speciation
References
Sources
The book is available from The Complete Work of Charles Darwin Online. Retrieved 2015-05-13.
External links
Evolutionary biology concepts
Zoology
Phylogenetics | Transitional fossil | [
"Biology"
] | 4,613 | [
"Taxonomy (biology)",
"Evolutionary biology concepts",
"Bioinformatics",
"Zoology",
"Phylogenetics"
] |
331,784 | https://en.wikipedia.org/wiki/Drywall | Drywall (also called plasterboard, dry lining, wallboard, sheet rock, gib board, gypsum board, buster board, turtles board, slap board, custard board, gypsum panel and gyprock) is a panel made of calcium sulfate dihydrate (gypsum), with or without additives, typically extruded between thick sheets of facer and backer paper, used in the construction of interior walls and ceilings. The plaster is mixed with fiber (typically paper, glass wool, or a combination of these materials); plasticizer, foaming agent; and additives that can reduce mildew, flammability, and water absorption.
In the mid-20th century, drywall construction became prevalent in North America as a time- and labor-saving alternative to lath and plaster.
History
Sackett Board was invented in 1890 by New York Coal Tar Chemical Company employees Augustine Sackett and Fred L. Kane, graduates of Rensselaer Polytechnic Institute. It was made by layering plaster within four plies of wool felt paper. Sheets were thick with open (untaped) edges.
Gypsum board evolved between 1910 and 1930, beginning with wrapped board edges and the elimination of the two inner layers of felt paper in favor of paper-based facings. In 1910 United States Gypsum Corporation bought Sackett Plaster Board Company and by 1917 introduced Sheetrock. Providing installation efficiency, it was developed additionally as a measure of fire resistance. Later air entrainment technology made boards lighter and less brittle, and joint treatment materials and systems also evolved. Gypsum lath was an early substrate for plaster. An alternative to traditional wood or metal lath was a panel made up of compressed gypsum plaster board that was sometimes grooved or punched with holes to allow wet plaster to key into its surface. As it evolved, it was faced with paper impregnated with gypsum crystals that bonded with the applied facing layer of plaster. In 1936, US Gypsum trademarked ROCKLATH for their gypsum lath product.
In 2002, the European Commission imposed fines totaling €420 million on the companies Lafarge, BPB, Knauf and Gyproc Benelux, which had operated a cartel on the market which affected 80% of consumers in France, the UK, Germany and the Benelux countries.
Manufacture
A wallboard panel consists of a layer of gypsum plaster sandwiched between two layers of paper. The raw gypsum, CaSO4·2H2O, is heated to drive off the water and then slightly rehydrated to produce the hemihydrate of calcium sulfate (CaSO4·½H2O). The plaster is mixed with fiber (typically paper and/or glass fiber), plasticizer, foaming agent, finely ground gypsum crystal as an accelerator, EDTA, starch or other chelate as a retarder, and various additives that may increase mildew and fire resistance, lower water absorption (wax emulsion or silanes), and reduce creep (tartaric or boric acid). The board is then formed by sandwiching a core of the wet mixture between two sheets of heavy paper or fiberglass mats. When the core sets, it is dried in a large drying chamber, and the sandwich becomes rigid and strong enough for use as a building material.
Drying chambers typically use natural gas today. To dry of wallboard, between is required. Organic dispersants and plasticizers are used so that the slurry will flow during manufacture and to reduce the water and hence the drying time. Coal-fired power stations include devices called scrubbers to remove sulfur from their exhaust emissions. The sulfur is absorbed by powdered limestone in a process called flue-gas desulfurization (FGD), which produces several new substances. One is called "FGD gypsum". This is commonly used in drywall construction in the United States and elsewhere.
In 2020, 8.4 billion square meters of drywall were sold around the world.
Construction techniques
As an alternative to a week-long plaster application, an entire house can be drywalled in one or two days by two experienced drywallers, and drywall is easy enough to be installed by many amateur home carpenters. In large-scale commercial construction, the work of installing and finishing drywall is often split between drywall mechanics, or hangers, who install the wallboard, and tapers (also known as finishers, mud men, or float crew) who finish the joints and cover the fastener heads with drywall compound. Drywall can be finished anywhere from a level 0 to a level 5, where level 0 is not finished in any fashion and level 5 is the most pristine. Depending on how significant the finish is to the customer, the extra steps in the finish may or may not be necessary, though priming and painting of drywall are recommended in any location where it may be exposed to any wear.
Drywall is cut to size by scoring the paper on the finished side (usually white) with a utility knife, breaking the sheet along the cut, and cutting the paper backing. Small features such as holes for outlets and light switches are usually cut using a keyhole saw, oscillating multi-tool or a tiny high-speed bit in a rotary tool. Drywall is then fixed to the structure with nails or drywall screws and often glue. Drywall fasteners, also referred to as drywall clips or stops, are gaining popularity in residential and commercial construction. Drywall fasteners are used for supporting interior drywall corners and replacing the non-structural wood or metal blocking that traditionally was used to install drywall. Their function saves material and labor costs, minimizes call-backs due to truss uplift, increases energy efficiency, and makes plumbing and electrical installation simpler.
When driven fully home, drywall screws countersink their heads slightly into the drywall. They use a 'bugle head', a concave taper, rather than the conventional conical countersunk head; this compresses the drywall surface rather than cutting into it and so avoids tearing the paper. Screws for light-gauge steel framing have a sharp point and finely spaced threads. If the steel framing is heavier than 20-gauge, self-drilling screws with finely spaced threads must be used. In some applications, the drywall may be attached to the wall with adhesives.
After the sheets are secured to the wall studs or ceiling joists, the installer conceals the seams between drywall sheets with joint tape or fiber mesh. Layers of joint compound, sometimes called mud, are typically spread with a drywall trowel or knife. This compound is also applied to any screw holes or defects. The compound is allowed to air dry and then typically sanded smooth before painting. Alternatively, for a better finish, the entire wall may be given a skim coat, a thin layer (about ) of finishing compound, to minimize the visual differences between the paper and mudded areas after painting.
Another similar skim coating process is called veneer plastering, although it is done slightly thicker (about ). Veneering uses a slightly different specialized setting compound ("finish plaster") that contains gypsum and lime putty. This application uses blueboard, which has specially treated paper to accelerate the setting of the gypsum plaster component. This setting has far less shrinkage than the air-dry compounds normally used in drywall, so it only requires one coat. Blueboard also has square edges rather than tapered-edge drywall boards. The tapered drywall boards are used to countersink the tape in taped jointing, whereas the tape in veneer plastering is buried beneath a level surface. One coat veneer plaster over dry board is an intermediate style step between full multi-coat "wet" plaster and the limited joint-treatment-only given "dry" wall.
Properties
Sound control
The method of installation and type of drywall can reduce sound transmission through walls and ceilings. Several builders' books state that thicker drywall reduces sound transmission, but engineering manuals recommend using multiple layers of drywall, sometimes of different thicknesses and glued together, or special types of drywall designed to reduce noise. Also important are the construction details of the framing with steel studs, wider stud spacing, double studding, insulation, and other details reducing sound transmission. Sound transmission class (STC) ratings can be increased from 33 for an ordinary stud-wall to as high as 59 with double drywall on both sides of a wood stud wall with resilient channels on one side and glass wool batt insulation between the studs.
Sound transmission may be slightly reduced using regular panels (with or without light-gauge resilient metal channels and/or insulation), but it is more effective to use two layers of drywall, sometimes in combination with other factors, or specially designed, sound-resistant drywall.
Water damage and molding
Drywall is highly vulnerable to moisture due to the inherent properties of the materials that constitute it: gypsum, paper, and organic additives and binders. Gypsum will soften with exposure to moisture and eventually turn into a gooey paste with prolonged immersion, such as during a flood. During such incidents, some, or all, of the drywall in an entire building will need to be removed and replaced. Furthermore, the paper facings and organic additives mixed with the gypsum core are food for mold.
The porosity of the board—introduced during manufacturing to reduce the board's weight, lowering construction time and transportation costs—enables water to rapidly reach the core through capillary action, where mold can grow inside. Water that enters a room from overhead may cause ceiling drywall tape to separate from the ceiling as a result of the grooves immediately behind the tape where the drywall pieces meet becoming saturated. The drywall may also soften around the screws holding the drywall in place, and with the aid of gravity, the weight of the water may cause the drywall to sag and eventually collapse, requiring replacement.
Drywall's paper facings are edible to termites, which can eat the paper if they infest a wall cavity covered with drywall. This causes the painted surface to crumble to the touch, its paper backing material being eaten. In addition to the necessity of patching the damaged surface and repainting, if enough of the paper has been eaten, the gypsum core can easily crack or crumble without it, and the drywall must be removed and replaced.
In many circumstances, especially when the drywall has been exposed to water or moisture for less than 48 hours, professional restoration experts can avoid the cost, inconvenience, and difficulty of removing and replacing the affected drywall. They use rapid drying techniques that eliminate the elements required to support microbial activity while restoring most or all of the drywall.
It is for these reasons that greenboard, a type of drywall with an outer face that is wax- and/or chemically coated to resist mold growth, and ideally cement board are used for rooms expected to have high humidity, primarily kitchens, bathrooms, and laundry rooms.
Other damage risks
Foam insulation and the gypsum part of sheetrock are easily chewed out by honeybees when they are setting up a stray nest in a building, and they want to enlarge their nest area.
Fire resistance
Some fire barrier walls are constructed of Type X drywall as a passive fire protection item. Gypsum contains the water of crystallization bound in the form of hydrates. When exposed to heat or fire, the resulting decomposition reaction releases water vapor and is endothermic (it absorbs thermal energy), which retards heat transfer until the water in the gypsum is gone. The fire-resistance rating of the fire barrier assembly is increased with additional layers of drywall, up to four hours for walls and three hours for floor/ceiling assemblies. Fire-rated assemblies constructed of drywall are documented in design or certification listing catalogues, including DIN 4102 Part 4 and the Canadian Building Code, Underwriters Laboratories and Underwriters Laboratories of Canada (ULC).
Tests result in code-recognized designs with assigned fire-resistance ratings. The resulting designs become part of the code and are not limited to use by any manufacturer. However, individual manufacturers may also have proprietary designs that they have had third-party tested and approved, provided that the material used in the field configuration can be demonstrated to meet the minimum requirements of Type X drywall and that sufficient layers and thicknesses are used.
Type X drywall
In the Type X gypsum board, special glass fibers are intermixed with the gypsum to reinforce the core of the panels. These fibers reduce the size of the cracks that form as the water is driven off, thereby extending the length of time the gypsum panels resist fire without failure.
Type C drywall
Type C gypsum panels provide stronger fire resistance than Type X. The core of Type C panels contains a higher density of glass fibers. The core of Type C panels also contains vermiculite which acts as a shrinkage-compensating additive that expands when exposed to elevated temperatures of a fire. This expansion occurs at roughly the same temperature as the calcination of the gypsum in the core, allowing the core of the Type C panels to remain dimensionally stable in a fire.
Waste
Because up to 12% of drywall is wasted during the manufacturing and installation processes and the drywall material is frequently not reused, disposal can become a problem. Some landfill sites have banned the dumping of drywall. Some manufacturers take back waste wallboard from construction sites and recycle them into new wallboard. Recycled paper is typically used during manufacturing. More recently, recycling at the construction site itself has been researched. There is potential for using crushed drywall to amend certain soils at building sites, such as sodic clay and silt mixtures (bay mud), as well as using it in compost. As of 2016, industry standards are being developed to ensure that when and if wallboard is taken back for recycling, quality and composition are maintained.
Market
North America
North America is one of the largest gypsum board users in the world, with a total wallboard plant capacity of per year, roughly half of the worldwide annual production capacity of . Moreover, the homebuilding and remodeling markets in North America in the late 1990s and early 2000s increased demand. The gypsum board market was one of the biggest beneficiaries of the housing boom as "an average new American home contains more than 7.31 metric tons of gypsum."
The introduction in March 2005 of the Clean Air Interstate Rule by the United States Environmental Protection Agency requires fossil-fuel power plants to "cut sulfur dioxide emissions by 73%" by 2018. The Clean Air Interstate Rule also requested that the power plants install new scrubbers (industrial pollution control devices) to remove sulfur dioxide present in the output waste gas. Scrubbers use the technique of flue-gas desulfurization (FGD), which produces synthetic gypsum as a usable by-product. In response to the new supply of this raw material, the gypsum board market was predicted to shift significantly. However, issues such as mercury release during calcining need to be resolved.
Controversies
High-sulfur drywall illness and corrosion issues
A substantial amount of defective drywall was imported into the United States from China and incorporated into tens of thousands of homes during rebuilding in 2006 and 2007 following Hurricane Katrina and in other places. Complaints included foul odour, health effects, and metal corrosion in the affected structures, all caused by the emission of sulfurous gases. The same drywall was sold in Asia without problems resulting, but US homes are built much more tightly than homes in China, with less ventilation. Volatile sulfur compounds, including hydrogen sulfide, have been detected as emissions from the imported drywall and may be linked to health problems. These compounds are emitted from many different types of drywall.
Several lawsuits are underway in many jurisdictions, but many of the sheets of drywall are simply marked "Made in China", thus making identification of the manufacturer difficult. An investigation by the Consumer Product Safety Commission, CPSC, was underway in 2009. In November 2009, the CPSC reported a "strong association" between Chinese drywall and corrosion of pipes and wires reported by thousands of homeowners in the United States. The issue was resolved in 2011, and now all drywall must be tested for volatile sulfur, and any containing more than ten ppm cannot be sold in the US.
Variants
The following variants are available in Canada or the United States:
Regular white board, from thickness
Fire-resistant ("Type X"), different thicknesses and multiple layers of wallboard provide increased fire rating based on the time a specific wall assembly can withstand a standardized fire test. Often perlite, vermiculite, and boric acid are added to improve fire resistance.
Greenboard, the drywall containing an oil-based additive in the green-colored paper covering, provides moisture resistance. It is commonly used in washrooms and other areas expected to experience elevated humidity levels.
Blueboard, blue face paper forms a strong bond with a skim coat or a built-up plaster finish, providing water and mold resistance.
Cement board, which is more water-resistant than greenboard, for use in showers or sauna rooms, and as a base for ceramic tile.
Soundboard is made from wood fibers to increase the sound transmission class.
Soundproof drywall is a laminated drywall made with gypsum and other materials such as damping polymers to significantly increase the sound transmission class rating.
Mold-resistant, paperless drywall with fiberglass face
Enviroboard, a board made from recycled agricultural materials
Lead-lined drywall, a drywall used around radiological equipment.
Foil-backed drywall used as a vapor barrier.
Controlled density (CD), also called ceiling board, which is available only in thickness and is significantly stiffer than the regular whiteboard.
EcoRock, a drywall that uses a combination of 20 materials including recycled fly ash, slag, kiln dust and fillers and no starch cellulose; it is advertised as being environmentally friendly due to the use of recycled materials and an energy efficient process.
Gypsum "Firecode C". This board is similar in composition to Type X, except for more glass fibres and a form of the vermiculite used to reduce shrinkage. When exposed to high heat, the gypsum core shrinks, but this additive expands at about the same rate, so the gypsum core is more stable in a fire and remains in place even after the gypsum dries up.
Specifications
Australia and New Zealand
The term plasterboard is used in Australia and New Zealand. In Australia, the product is often called Gyprock, the name of the largest plasterboard manufacturer. In New Zealand it is also called Gibraltar board or Gib board, genericised from the registered trademark ("GIB") of the locally made product that dominates the local market. A specific type of Gibraltar board for use in wet conditions (such as bathrooms and kitchens) is known as AquaGib.
It is made in thicknesses of 10 mm, 13 mm, and 16 mm, and sometimes other thicknesses up to 25 mm. Panels are commonly sold in 1200 mm-wide sheets, which may be 1800, 2400, 3000, 4800, or 6000 mm in length. Sheets are usually secured to either timber or cold-formed steel frames anywhere from 150 to 300 mm centres along the beam and 400 to 600 mm across members.
In both countries, plasterboard has become a widely used replacement for scrim and sarking walls in renovating 19th- and early 20th-century buildings.
Canada and the United States
Drywall panels in Canada and the United States are made in widths of and varying lengths to suit the application. The most common width is 48 inches; however, 54-inch-wide panels are becoming more popular as taller ceiling heights become more common. Lengths up to are common; the most common is . Common thicknesses are ; thicknesses of are used in specific applications. In many parts of Canada, drywall is commonly referred to as Gyproc.
Europe
In Europe, most plasterboard is made in sheets wide; sheets wide are also made. Plasterboard wide is most commonly made in lengths; sheets of and longer also are common. Thicknesses of plasterboard available are .
Plasterboard is commonly made with one of three edge treatments: tapered edge, where the long edges of the board are tapered with a wide bevel at the front to allow jointing materials to be finished flush with the main board face; plain edge, used where the whole surface will receive a thin coating (skim coat) of finishing plaster; and beveled on all four sides, used in products specialized for roofing. Major UK manufacturers do not offer four-sided chamfered drywall for general use.
See also
References
Composite materials
Building materials
Passive fire protection
Wallcoverings | Drywall | [
"Physics",
"Engineering"
] | 4,369 | [
"Building engineering",
"Composite materials",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
331,821 | https://en.wikipedia.org/wiki/Adipocere | Adipocere (), also known as corpse wax, grave wax or mortuary wax, is a wax-like organic substance formed by the anaerobic bacterial hydrolysis of fat in tissue, such as body fat in corpses. In its formation, putrefaction is replaced by a permanent firm cast of fatty tissues, internal organs, and the face.
History
Adipocere was first described by Sir Thomas Browne in his discourse Hydriotaphia, Urn Burial (1658):
The chemical process of adipocere formation, saponification, came to be understood in the 17th century when microscopes became widely available.
In 1825, physician and lecturer Augustus Granville is believed to have (somewhat unwittingly) made candles from the adipocere of a mummy and used them to light the public lecture he gave to report on the mummy's dissection. Granville apparently thought that the waxy material from which he made the candles had been used to preserve the mummy, rather than its being a product of the saponification of the mummified body.
The body of the "Soap Lady", whose corpse turned into adipocere, is displayed in the Mütter Museum in Philadelphia, Pennsylvania.
Probably the most famous known case of adipocere is that of Scotland's Higgins brothers, murdered by their father in 1911 but whose bodies were not found until 1913. The bodies had been left floating in a flooded quarry, resulting in an almost complete transformation into adipocere. Pathologists Sydney Smith and Professor Littlejohn were able to find more than enough evidence from the preserved remains for police to identify the victims and charge the killer, who was hanged. At the same time, the pathologists secretly took some of the remains back to Edinburgh University for further study; nearly a century later, a relative requested the return of those remains so they could be given a Christian burial. The university agreed to do so if the claimant could prove her relationship to the boys and if other relatives agreed to her plan, and the remains were eventually cremated in 2009.
Appearance
Adipocere is a crumbly, waxy, water-insoluble material consisting mostly of saturated fatty acids. Depending on whether it was formed from white or brown body fat, adipocere is either grayish white or tan in color.
In corpses, the firm cast of adipocere allows some estimation of body shape and facial features, and injuries are often well-preserved.
Formation
Adipocere is formed by the anaerobic bacterial hydrolysis of fat in tissue. The transformation of fats into adipocere occurs best in an environment that has high levels of moisture and an absence of oxygen, such as in wet ground or mud at the bottom of a lake or a sealed casket, and it can occur with both embalmed and untreated bodies. Adipocere formation begins within a month of death, and, in the absence of air, it can persist for centuries. Adipocerous formation preserved the left hemisphere of the brain of a 13th-century infant such that sulci, gyri, and even Nissl bodies in the motor cortex could be distinguished in the 20th century. An exposed, insect-infested body or a body in a warm environment is unlikely to form deposits of adipocere.
Corpses of women, infants and overweight persons are particularly prone to adipocere transformation because they contain more body fat. In forensic science, the utility of adipocere formation to estimate the postmortem interval is limited because the speed of the process is temperature-dependent. It is accelerated by warmth, but temperature extremes impede it.
The degradation of adipocere continues after exhumation at the microscopic level resulting from the combination of exposure to air, handling, dissection and the enzymatic activity of microbiota.
See also
Bog body
Putrefaction
Saponification
Footnotes
Lipids
Forensic phenomena | Adipocere | [
"Chemistry"
] | 818 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Lipids"
] |
331,826 | https://en.wikipedia.org/wiki/Prime-counting%20function | In mathematics, the prime-counting function is the function counting the number of prime numbers less than or equal to some real number . It is denoted by (unrelated to the number ).
A symmetric variant seen sometimes is π0(x), which is equal to π(x) − 1/2 if x is exactly a prime number, and equal to π(x) otherwise. That is, π0(x) is the number of prime numbers less than x, plus one half if x equals a prime.
Growth rate
Of great interest in number theory is the growth rate of the prime-counting function. It was conjectured at the end of the 18th century by Gauss and by Legendre to be approximately x / ln x, where ln x is the natural logarithm, in the sense that
lim_{x→∞} π(x) / (x / ln x) = 1.
This statement is the prime number theorem. An equivalent statement is
lim_{x→∞} π(x) / li(x) = 1,
where li is the logarithmic integral function. The prime number theorem was first proved in 1896 by Jacques Hadamard and by Charles de la Vallée Poussin independently, using properties of the Riemann zeta function introduced by Riemann in 1859. Proofs of the prime number theorem not using the zeta function or complex analysis were found around 1948 by Atle Selberg and by Paul Erdős (for the most part independently).
More precise estimates
In 1899, de la Vallée Poussin proved that
π(x) = li(x) + O(x e^{−a√(ln x)}) as x → ∞,
for some positive constant a. Here, O(…) is the big O notation.
More precise estimates of are now known. For example, in 2002, Kevin Ford proved that
Mossinghoff and Trudgian proved an explicit upper bound for the difference between and :
For values of x that are not unreasonably large, li(x) is greater than π(x). However, π(x) − li(x) is known to change sign infinitely many times. For a discussion of this, see Skewes' number.
Exact form
For let when is a prime number, and otherwise. Bernhard Riemann, in his work On the Number of Primes Less Than a Given Magnitude, proved that is equal to
where
μ(n) is the Möbius function, li(x) is the logarithmic integral function, ρ indexes every zero of the Riemann zeta function, and li(x^{ρ/n}) is not evaluated with a branch cut but instead considered as Ei((ρ/n) ln x), where Ei is the exponential integral. If the trivial zeros are collected and the sum is taken only over the non-trivial zeros ρ of the Riemann zeta function, then π0(x) may be approximated by
The Riemann hypothesis suggests that every such non-trivial zero lies along the line Re(s) = 1/2.
Table of π(x), x / ln x, and li(x)
The table shows how the three functions π(x), x / ln x, and li(x) compare at powers of 10.
{| class="wikitable" style="text-align: right"
! x
! π(x)
! π(x) − x / ln x
! li(x) − π(x)
! x / π(x)
! % error
|-
| 10
| 4
| 0
| 2
| 2.500
| −8.57%
|-
| 102
| 25
| 3
| 5
| 4.000
| +13.14%
|-
| 103
| 168
| 23
| 10
| 5.952
| +13.83%
|-
| 104
| 1,229
| 143
| 17
| 8.137
| +11.66%
|-
| 105
| 9,592
| 906
| 38
| 10.425
| +9.45%
|-
| 106
| 78,498
| 6,116
| 130
| 12.739
| +7.79%
|-
| 107
| 664,579
| 44,158
| 339
| 15.047
| +6.64%
|-
| 108
| 5,761,455
| 332,774
| 754
| 17.357
| +5.78%
|-
| 109
| 50,847,534
| 2,592,592
| 1,701
| 19.667
| +5.10%
|-
| 1010
| 455,052,511
| 20,758,029
| 3,104
| 21.975
| +4.56%
|-
| 1011
| 4,118,054,813
| 169,923,159
| 11,588
| 24.283
| +4.13%
|-
| 1012
| 37,607,912,018
| 1,416,705,193
| 38,263
| 26.590
| +3.77%
|-
| 1013
| 346,065,536,839
| 11,992,858,452
| 108,971
| 28.896
| +3.47%
|-
| 1014
| 3,204,941,750,802
| 102,838,308,636
| 314,890
| 31.202
| +3.21%
|-
| 1015
| 29,844,570,422,669
| 891,604,962,452
| 1,052,619
| 33.507
| +2.99%
|-
| 1016
| 279,238,341,033,925
| 7,804,289,844,393
| 3,214,632
| 35.812
| +2.79%
|-
| 1017
| 2,623,557,157,654,233
| 68,883,734,693,928
| 7,956,589
| 38.116
| +2.63%
|-
| 1018
| 24,739,954,287,740,860
| 612,483,070,893,536
| 21,949,555
| 40.420
| +2.48%
|-
| 1019
| 234,057,667,276,344,607
| 5,481,624,169,369,961
| 99,877,775
| 42.725
| +2.34%
|-
| 1020
| 2,220,819,602,560,918,840
| 49,347,193,044,659,702
| 222,744,644
| 45.028
| +2.22%
|-
| 1021
| 21,127,269,486,018,731,928
| 446,579,871,578,168,707
| 597,394,254
| 47.332
| +2.11%
|-
| 1022
| 201,467,286,689,315,906,290
| 4,060,704,006,019,620,994
| 1,932,355,208
| 49.636
| +2.02%
|-
| 1023
| 1,925,320,391,606,803,968,923
| 37,083,513,766,578,631,309
| 7,250,186,216
| 51.939
| +1.93%
|-
| 1024
| 18,435,599,767,349,200,867,866
| 339,996,354,713,708,049,069
| 17,146,907,278
| 54.243
| +1.84%
|-
| 1025
| 176,846,309,399,143,769,411,680
| 3,128,516,637,843,038,351,228
| 55,160,980,939
| 56.546
| +1.77%
|-
| 1026
| 1,699,246,750,872,437,141,327,603
| 28,883,358,936,853,188,823,261
| 155,891,678,121
| 58.850
| +1.70%
|-
| 1027
| 16,352,460,426,841,680,446,427,399
| 267,479,615,610,131,274,163,365
| 508,666,658,006
| 61.153
| +1.64%
|-
| 1028
| 157,589,269,275,973,410,412,739,598
| 2,484,097,167,669,186,251,622,127
| 1,427,745,660,374
| 63.456
| +1.58%
|-
| 1029
| 1,520,698,109,714,272,166,094,258,063
| 23,130,930,737,541,725,917,951,446
| 4,551,193,622,464
| 65.759
| +1.52%
|}
In the On-Line Encyclopedia of Integer Sequences, the column is sequence , is sequence , and is sequence .
The value for π(10^24) was originally computed by J. Buethe, J. Franke, A. Jost, and T. Kleinjung assuming the Riemann hypothesis.
It was later verified unconditionally in a computation by D. J. Platt.
The value for π(10^25) is by the same four authors.
The value for π(10^26) was computed by D. B. Staple. All other prior entries in this table were also verified as part of that work.
The values for π(10^27), π(10^28), and π(10^29) were announced by David Baugh and Kim Walisch in 2015, 2020, and 2022, respectively.
Algorithms for evaluating
A simple way to find , if is not too large, is to use the sieve of Eratosthenes to produce the primes less than or equal to and then to count them.
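A minimal Python sketch of this direct approach (the function name and the spot-check value are chosen for this illustration; the check π(10^6) = 78498 matches the table above):

```python
from math import isqrt

def prime_count_sieve(x):
    """Count the primes <= x with a sieve of Eratosthenes (O(x) memory)."""
    n = int(x)
    if n < 2:
        return 0
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, isqrt(n) + 1):
        if is_prime[p]:
            # Mark every multiple of p from p*p onwards as composite.
            is_prime[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(is_prime)

# Quick check against the table above: pi(10^6) = 78498.
print(prime_count_sieve(10 ** 6))
```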
A more elaborate way of finding π(x) is due to Legendre (using the inclusion–exclusion principle): given x, if p1, p2, …, pn are distinct prime numbers, then the number of integers less than or equal to x which are divisible by no pi is
⌊x⌋ − Σ_i ⌊x/p_i⌋ + Σ_{i<j} ⌊x/(p_i p_j)⌋ − Σ_{i<j<k} ⌊x/(p_i p_j p_k)⌋ + …
(where ⌊·⌋ denotes the floor function). This number is therefore equal to
π(x) − π(√x) + 1
when the numbers p1, p2, …, pn are the prime numbers less than or equal to the square root of x.
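Rearranged, the relation above reads π(x) = φ(x, a) + a − 1, where a = π(√x) and φ(m, n) counts the integers up to m divisible by none of the first n primes. A hedged Python sketch of this (the recursion, the memoisation, and the function names are implementation choices made for this example, not part of Legendre's presentation):

```python
from functools import lru_cache
from math import isqrt

def _primes_upto(n):
    """Helper sieve: list of the primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def legendre_pi(x):
    """pi(x) via Legendre's identity pi(x) = phi(x, a) + a - 1,
    with a = pi(sqrt(x)) and phi(m, n) the count of integers <= m
    divisible by none of the first n primes."""
    if x < 2:
        return 0
    primes = _primes_upto(isqrt(x))
    a = len(primes)

    @lru_cache(maxsize=None)
    def phi(m, n):
        if n == 0:
            return m
        # Inclusion-exclusion step: remove multiples of the n-th prime
        # that were not already removed by the first n-1 primes.
        return phi(m, n - 1) - phi(m // primes[n - 1], n - 1)

    return phi(x, a) + a - 1

print(legendre_pi(10 ** 6))  # 78498, matching the table above
```

The memoisation keeps the recursion practical for moderate x; for very large x the Meissel–Lehmer refinements described in the next section are used instead.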
The Meissel–Lehmer algorithm
In a series of articles published between 1870 and 1885, Ernst Meissel described (and used) a practical combinatorial way of evaluating : Let be the first primes and denote by the number of natural numbers not greater than which are divisible by none of the for any . Then
Given a natural number , if and if , then
Using this approach, Meissel computed , for equal to , 106, 107, and 108.
In 1959, Derrick Henry Lehmer extended and simplified Meissel's method. Define, for real and for natural numbers and , as the number of numbers not greater than with exactly prime factors, all greater than . Furthermore, set . Then
where the sum actually has only finitely many nonzero terms. Let denote an integer such that , and set . Then and when . Therefore,
The computation of can be obtained this way:
where the sum is over prime numbers.
On the other hand, the computation of can be done using the following rules:
Using his method and an IBM 701, Lehmer was able to compute the correct value of and missed the correct value of by 1.
Further improvements to this method were made by Lagarias, Miller, Odlyzko, Deléglise, and Rivat.
Other prime-counting functions
Other prime-counting functions are also used because they are more convenient to work with.
Riemann's prime-power counting function
Riemann's prime-power counting function is usually denoted as Π0(x) or J0(x). It has jumps of 1/n at prime powers p^n, and it takes a value halfway between the two sides at its discontinuities. That added detail is used because the function may then be defined by an inverse Mellin transform.
Formally, we may define by
where the variable in each sum ranges over all primes within the specified limits.
We may also write
where Λ(n) is the von Mangoldt function and
The Möbius inversion formula then gives
where μ(n) is the Möbius function.
Knowing the relationship between the logarithm of the Riemann zeta function and the von Mangoldt function , and using the Perron formula we have
Chebyshev's function
The Chebyshev function weights primes or prime powers p^n by ln p:
For ,
and
Formulas for prime-counting functions
Formulas for prime-counting functions come in two kinds: arithmetic formulas and analytic formulas. Analytic formulas for prime-counting were the first used to prove the prime number theorem. They stem from the work of Riemann and von Mangoldt, and are generally known as explicit formulae.
We have the following expression for the second Chebyshev function ψ(x):
where
Here the ρ are the zeros of the Riemann zeta function in the critical strip, where the real part of ρ is between zero and one. The formula is valid for values of x greater than one, which is the region of interest. The sum over the roots is conditionally convergent, and should be taken in order of increasing absolute value of the imaginary part. Note that the same sum over the trivial roots gives the last subtrahend in the formula.
For π0(x) we have a more complicated formula
Again, the formula is valid for x > 1, while the ρ are the nontrivial zeros of the zeta function ordered according to their absolute value. The first term, li(x), is the usual logarithmic integral function; the expression li(x^ρ) in the second term should be considered as Ei(ρ ln x), where Ei is the analytic continuation of the exponential integral function from negative reals to the complex plane with branch cut along the positive reals. The final integral is equal to the series over the trivial zeros:
Thus, the Möbius inversion formula gives us
valid for , where
R(x) is Riemann's R-function and μ(n) is the Möbius function. The latter series for it is known as the Gram series. Because for all , this series converges for all positive by comparison with the series for . The logarithm in the Gram series of the sum over the non-trivial zero contribution should be evaluated as ρ ln x and not ln x^ρ.
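As a concrete illustration, the following Python sketch evaluates Riemann's R function through the Gram series in its standard form R(x) = 1 + Σ_{k≥1} (ln x)^k / (k · k! · ζ(k+1)); that form of the series is stated here as an assumption (the article's own rendering is not reproduced above), and the crude zeta helper and function names are choices made for this example:

```python
import math

def zeta(s, terms=10000):
    """Crude zeta(s) for real s > 1: partial sum plus an integral tail estimate."""
    partial = sum(k ** (-s) for k in range(1, terms + 1))
    return partial + terms ** (1 - s) / (s - 1)

def riemann_R(x, kmax=100):
    """Riemann's R function via the Gram series
    R(x) = 1 + sum_{k>=1} (ln x)^k / (k * k! * zeta(k+1))."""
    lnx = math.log(x)
    total = 1.0
    term = 1.0
    for k in range(1, kmax + 1):
        term *= lnx / k                  # term is now (ln x)^k / k!
        total += term / (k * zeta(k + 1))
    return total

# R(x) tracks pi(x) noticeably better than li(x) does:
print(round(riemann_R(10 ** 6)))         # about 78527; pi(10^6) = 78498
```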
Folkmar Bornemann proved, when assuming the conjecture that all zeros of the Riemann zeta function are simple, that
where runs over the non-trivial zeros of the Riemann zeta function and .
The sum over non-trivial zeta zeros in the formula for describes the fluctuations of while the remaining terms give the "smooth" part of prime-counting function, so one can use
as a good estimator of for . In fact, since the second term approaches 0 as , while the amplitude of the "noisy" part is heuristically about , estimating by alone is just as good, and fluctuations of the distribution of primes may be clearly represented with the function
Inequalities
Ramanujan proved that the inequality
holds for all sufficiently large values of .
Here are some useful inequalities for .
The left inequality holds for and the right inequality holds for . The constant is to 5 decimal places, as has its maximum value at .
Pierre Dusart proved in 2010:
More recently, Dusart has proved
(Theorem 5.1) that
for and , respectively.
Going in the other direction, an approximation for the th prime, , is
Here are some inequalities for the th prime. The lower bound is due to Dusart (1999) and the upper bound to Rosser (1941).
The left inequality holds for and the right inequality holds for . A variant form sometimes seen substitutes An even simpler lower bound is
which holds for all , but the lower bound above is tighter for .
In 2010 Dusart proved (Propositions 6.7 and 6.6) that
for and , respectively.
In 2024, Axler further tightened this (equations 1.12 and 1.13) using bounds of the form
proving that
for and , respectively.
The lower bound may also be simplified to without altering its validity. The upper bound may be tightened to if .
There are additional bounds of varying complexity.
The Riemann hypothesis
The Riemann hypothesis implies a much tighter bound on the error in the estimate for π(x), and hence a more regular distribution of prime numbers,
Specifically,
proved that the Riemann hypothesis implies that for all there is a prime satisfying
See also
Bertrand's postulate
Oppermann's conjecture
Ramanujan prime
References
Notes
External links
Chris Caldwell, The Nth Prime Page at The Prime Pages.
Tomás Oliveira e Silva, Tables of prime-counting functions.
Analytic number theory
Prime numbers
Arithmetic functions | Prime-counting function | [
"Mathematics"
] | 3,420 | [
"Analytic number theory",
"Prime numbers",
"Arithmetic functions",
"Mathematical objects",
"Numbers",
"Number theory"
] |
331,884 | https://en.wikipedia.org/wiki/Particle%20horizon | The particle horizon (also called the cosmological horizon, the comoving horizon (in Scott Dodelson's text), or the cosmic light horizon) is the maximum distance from which light from particles could have traveled to the observer in the age of the universe. Much like the concept of a terrestrial horizon, it represents the boundary between the observable and the unobservable regions of the universe, so its distance at the present epoch defines the size of the observable universe. Due to the expansion of the universe, it is not simply the age of the universe times the speed of light (approximately 13.8 billion light-years), but rather the speed of light times the conformal time. The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model.
Conformal time and the particle horizon
In terms of comoving distance, the particle horizon is equal to the conformal time η that has passed since the Big Bang, times the speed of light c. In general, the conformal time at a certain time t is given by
η(t) = ∫_0^t dt′ / a(t′),
where a(t) is the (dimensionless) scale factor of the Friedmann–Lemaître–Robertson–Walker metric, and we have taken the Big Bang to be at t = 0. By convention, a subscript 0 indicates "today", so that the conformal time today is η(t_0) = η_0. Note that the conformal time is not the age of the universe, which is estimated at around 13.8 billion years. Rather, the conformal time is the amount of time it would take a photon to travel from where we are located to the furthest observable distance, provided the universe ceased expanding. As such, η_0 is not a physically meaningful time (this much time has not yet actually passed); though, as we will see, the particle horizon with which it is associated is a conceptually meaningful distance.
The particle horizon recedes constantly as time passes and the conformal time grows. As such, the observed size of the universe always increases. Since proper distance at a given time is just comoving distance times the scale factor (with comoving distance normally defined to be equal to proper distance at the present time, so a(t_0) = 1 at present), the proper distance to the particle horizon at time t is given by
d_p(t) = a(t) ∫_0^t c dt′ / a(t′) = a(t) c η(t),
and for today
d_p(t_0) = c η_0.
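As an illustration of this integral, the following Python sketch evaluates d_p(t_0) numerically for a flat matter-plus-Λ model. The parameter values (H_0 ≈ 67.7 km/s/Mpc, Ω_m ≈ 0.31), the neglect of radiation, and the function names are assumptions made for the example, so the printed figure is only approximate:

```python
import math

# Illustrative flat Lambda-CDM parameters (assumed for this sketch).
H0 = 67.7                   # Hubble constant in km/s/Mpc
OMEGA_M = 0.31              # matter density parameter today
OMEGA_L = 1.0 - OMEGA_M     # dark-energy density parameter (flat universe)
C_KM_S = 299792.458         # speed of light in km/s

def hubble(a):
    """H(a) in km/s/Mpc for a matter + cosmological-constant universe."""
    return H0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L)

def particle_horizon_mpc(steps=200_000):
    """d_p(t0) = c * integral_0^1 da / (a^2 H(a)), evaluated by the
    midpoint rule after substituting a = u^2 to tame the a -> 0 endpoint."""
    du = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * du
        a = u * u
        total += 2.0 * u / (a * a * hubble(a)) * du
    return C_KM_S * total

d_mpc = particle_horizon_mpc()
print(f"particle horizon today: {d_mpc:,.0f} Mpc "
      f"(~{d_mpc * 3.2616e-3:.0f} billion light-years)")
# roughly 14,000 Mpc, i.e. on the order of 46 billion light-years
```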
Evolution of the particle horizon
In this section we consider the FLRW cosmological model. In that context, the universe can be approximated as composed of non-interacting constituents, each one being a perfect fluid with density ρ_i, partial pressure p_i and state equation p_i = w_i ρ_i, such that they add up to the total density ρ and total pressure p. Let us now define the following functions:
Hubble function H = (da/dt) / a
The critical density ρ_c = 3H^2 / (8πG)
The i-th dimensionless energy density Ω_i = ρ_i / ρ_c
The dimensionless energy density Ω = ρ / ρ_c = Σ_i Ω_i
The redshift z given by the formula 1 + z = a(t_0) / a(t)
Any function with a zero subscript denotes the function evaluated at the present time t_0 (or equivalently z = 0). The last term can be taken to include the curvature contribution, treated as a constituent with its own state equation. It can be proved that the Hubble function is given by
H(z) = H_0 √( Σ_i Ω_{i,0} (1 + z)^{n_i} ),
where the dilution exponent n_i = 3(1 + w_i). Notice that the addition ranges over all possible partial constituents and in particular there can be countably infinitely many. With this notation we have:
where is the largest (possibly infinite). The evolution of the particle horizon for an expanding universe () is:
where c is the speed of light and can be taken to be 1 (natural units). Notice that the derivative is made with respect to the FLRW-time t, while the functions are evaluated at the redshift z, which are related as stated before. We have an analogous but slightly different result for the event horizon.
Horizon problem
The concept of a particle horizon can be used to illustrate the famous horizon problem, which is an unresolved issue associated with the Big Bang model. Extrapolating back to the time of recombination when the cosmic microwave background (CMB) was emitted, we obtain a particle horizon of about
which corresponds to a proper size at that time of:
Since we observe the CMB to be emitted essentially from our particle horizon (), our expectation is that parts of the cosmic microwave background (CMB) that are separated by about a fraction of a great circle across the sky of
(an angular size of ) should be out of causal contact with each other. That the entire CMB is in thermal equilibrium and approximates a blackbody so well is therefore not explained by the standard explanations about the way the expansion of the universe proceeds. The most popular resolution to this problem is cosmic inflation.
See also
Cosmological horizon
Observable universe
References
Physical cosmology | Particle horizon | [
"Physics",
"Astronomy"
] | 920 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
331,921 | https://en.wikipedia.org/wiki/Common%20name | In biology, a common name of a taxon or organism (also known as a vernacular name, English name, colloquial name, country name, popular name, or farmer's name) is a name that is based on the normal language of everyday life; and is often contrasted with the scientific name for the same organism, which is often based in Latin. A common name is sometimes frequently used, but that is not always the case.
In chemistry, IUPAC defines a common name as one that, although it unambiguously defines a chemical, does not follow the current systematic naming convention, such as acetone, systematically 2-propanone, while a vernacular name describes one used in a lab, trade or industry that does not unambiguously describe a single chemical, such as copper sulfate, which may refer to either copper(I) sulfate or copper(II) sulfate.
Sometimes common names are created by authorities on one particular subject, in an attempt to make it possible for members of the general public (including such interested parties as fishermen, farmers, etc.) to be able to refer to one particular species of organism without needing to be able to memorise or pronounce the scientific name. Creating an "official" list of common names can also be an attempt to standardize the use of common names, which can sometimes vary a great deal between one part of a country and another, as well as between one country and another country, even where the same language is spoken in both places.
Use as part of folk taxonomy
A common name intrinsically plays a part in a classification of objects, typically an incomplete and informal classification, in which some names are degenerate examples in that they are unique and lack reference to any other name, as is the case with say, ginkgo, okapi, and ratel. Folk taxonomy, which is a classification of objects using common names, has no formal rules and need not be consistent or logical in its assignment of names, so that say, not all flies are called flies (for example Braulidae, the so-called "bee lice") and not every animal called a fly is indeed a fly (such as dragonflies and mayflies). In contrast, scientific or biological nomenclature is a global system that attempts to denote particular organisms or taxa uniquely and definitively, on the assumption that such organisms or taxa are well-defined and generally also have well-defined interrelationships; accordingly the ICZN has formal rules for biological nomenclature and convenes periodic international meetings to further that purpose.
Common names and the binomial system
The form of scientific names for organisms, called binomial nomenclature, is superficially similar to the noun-adjective form of vernacular names or common names which were used by non-modern cultures. A collective name such as owl was made more precise by the addition of an adjective such as screech. Linnaeus himself published a flora of his homeland Sweden, Flora Svecica (1745), and in this, he recorded the Swedish common names, region by region, as well as the scientific names. The Swedish common names were all binomials (e.g. plant no. 84 Råg-losta and plant no. 85 Ren-losta); the vernacular binomial system thus preceded his scientific binomial system.
Linnaean authority William T. Stearn said:
Geographic range of use
The geographic range over which a particularly common name is used varies; some common names have a very local application, while others are virtually universal within a particular language. Some such names even apply across ranges of languages; the word for cat, for instance, is easily recognizable in most Germanic and many Romance languages. Many vernacular names, however, are restricted to a single country and colloquial names to local districts.
Some languages also have more than one common name for the same animal. For example, in Irish, there are many terms that are considered outdated but still well-known for their somewhat humorous and poetic descriptions of animals.
Constraints and problems
Common names are used in the writings of both professionals and laymen. Lay people sometimes object to the use of scientific names over common names, but the use of scientific names can be defended, as it is in these remarks from a book on marine fish:
Because common names often have a very local distribution, the same fish in a single area may have several common names.
Because of ignorance of relevant biological facts among the lay public, a single species of fish may be called by several common names, because individuals in the species differ in appearance depending on their maturity, gender, or can vary in appearance as a morphological response to their natural surroundings, i.e. ecophenotypic variation.
In contrast to common names, formal taxonomic names imply biological relationships between similarly named creatures.
Because of incidental events, contact with other languages, or simple confusion, common names in a given region will sometimes change with time.
In a book that lists over 1200 species of fishes more than half have no widely recognised common name; they either are too nondescript or too rarely seen to have earned any widely accepted common name.
Conversely, a single common name often applies to multiple species of fishes. The lay public might simply not recognise or care about subtle differences in appearance between only very distantly related species.
Many species that are rare, or lack economic importance, do not have a common name.
Coining common names
In scientific binomial nomenclature, names commonly are derived from classical or modern Latin or Greek or Latinised forms of vernacular words or coinages; such names generally are difficult for laymen to learn, remember, and pronounce and so, in such books as field guides, biologists commonly publish lists of coined common names. Many examples of such common names simply are attempts to translate the scientific name into English or some other vernacular. Such translation may be confusing in itself, or confusingly inaccurate, for example, gratiosus does not mean "gracile" and gracilis does not mean "graceful".
The practice of coining common names has long been discouraged; de Candolle's Laws of Botanical Nomenclature, 1868, the non-binding recommendations that form the basis of the modern (now binding) International Code of Nomenclature for algae, fungi, and plants contains the following:
Various bodies and the authors of many technical and semi-technical books do not simply adapt existing common names for various organisms; they try to coin (and put into common use) comprehensive, useful, authoritative, and standardised lists of new names. The purpose typically is:
to create names from scratch where no common names exist
to impose a particular choice of name where there is more than one common name
to improve existing common names
to replace them with names that conform more to the relatedness of the organisms
Other attempts to reconcile differences between widely separated regions, traditions, and languages, by arbitrarily imposing nomenclature, often reflect narrow perspectives and have unfortunate outcomes. For example, members of the genus Burhinus occur in Australia, Southern Africa, Eurasia, and South America. A recent trend in field manuals and bird lists is to use the name "thick-knee" for members of the genus. This, in spite of the fact that the majority of the species occur in non-English-speaking regions and have various common names, not always English. For example, "Dikkop" is the centuries-old South African vernacular name for their two local species: Burhinus capensis is the Cape dikkop (or "gewone dikkop", not to mention the presumably much older Zulu name "umBangaqhwa"); Burhinus vermiculatus is the "water dikkop". The thick joints in question are not even, in fact, the birds' knees, but the intertarsal joints—in lay terms the ankles. Furthermore, not all species in the genus have "thick knees", so the thickness of the "knees" of some species is not of clearly descriptive significance. The family Burhinidae has members that have various common names even in English, including "stone curlews", so the choice of the name "thick-knees" is not easy to defend but is a clear illustration of the hazards of the facile coinage of terminology.
Lists that include common names
Lists of general interest
Plants
Plant by common name
Garden plants
Culinary herbs and spices
Poisonous plants
Plants in the Bible
Vegetables
Useful plants
Animals
Birds by region
Mammals by region
List of fish common names
Plants and animals
Invasive species
Collective nouns
For collective nouns for various subjects, see a list of collective nouns (e.g. a flock of sheep, pack of wolves).
Official lists
Some organizations have created official lists of common names, or guidelines for creating common names, hoping to standardize the use of common names.
For example, the Australian Fish Names List or AFNS was compiled through a process involving work by taxonomic and seafood industry experts, drafted using the CAAB (Codes for Australian Aquatic Biota) taxon management system of the CSIRO, and including input through public and industry consultations by the Australian Fish Names Committee (AFNC). The AFNS has been an official Australian Standard since July 2007 and has existed in draft form (The Australian Fish Names List) since 2001.
Seafood Services Australia (SSA) serves as the Secretariat for the AFNC. SSA is an accredited Standards Australia (Australia's peak non-government standards development organisation) Standards Development Organisation.
The Entomological Society of America maintains a database of official common names of insects, and proposals for new entries must be submitted and reviewed by a formal committee before being added to the listing.
Efforts to standardize English names for the amphibians and reptiles of North America (north of Mexico) began in the mid-1950s. The dynamic nature of taxonomy necessitates periodical updates and changes in the nomenclature of both scientific and common names. The Society for the Study of Amphibians and Reptiles (SSAR) published an updated list in 1978, largely following the previous established examples, and subsequently published eight revised editions ending in 2017. More recently the SSAR switched to an online version with a searchable database. Standardized names for the amphibians and reptiles of Mexico in Spanish and English were first published in 1994, with a revised and updated list published in 2008.
A set of guidelines for the creation of English names for birds was published in The Auk in 1978. It gave rise to Birds of the World: Recommended English Names and its Spanish and French companions.
The Academy of the Hebrew Language publish from time to time short dictionaries of common name in Hebrew for species that occur in Israel or surrounding countries e.g. for Reptilia in 1938, Osteichthyes in 2012, and Odonata in 2015.
See also
Folk taxonomy
List of historical common names
Scientific terminology
:Category:Plant common names
Specific name (zoology)
References
Citations
Sources
Stearn, William T. (1959). "The Background of Linnaeus's Contributions to the Nomenclature and Methods of Systematic Biology". Systematic Zoology 8: 4–22.
External links
Plant names
Multilingual, Multiscript Plant Name Database
The use of common names
Chemical Names of Common Substances
Plantas medicinales / Medicinal plants (database)
Biological nomenclature
Common names of organisms
Flora without expected TNC conservation status | Common name | [
"Biology"
] | 2,315 | [
"Biological nomenclature",
"Common names of organisms"
] |
332,079 | https://en.wikipedia.org/wiki/Ground%20truth | Ground truth is information that is known to be real or true, provided by direct observation and measurement (i.e. empirical evidence) as opposed to information provided by inference.
Etymology
The Oxford English Dictionary (s.v. ground truth) records the use of the word Groundtruth in the sense of 'fundamental truth' from Henry Ellison's poem "The Siberian Exile's Tale", published in 1833.
Statistics and machine learning
"Ground truth" may be seen as a conceptual term relative to the knowledge of the truth concerning a specific question. It is the ideal expected result. This is used in statistical models to prove or disprove research hypotheses. The term "ground truthing" refers to the process of gathering the proper objective (provable) data for this test. Compare with gold standard.
For example, suppose we are testing a stereo vision system to see how well it can estimate 3D positions. The "ground truth" might be the positions given by a laser rangefinder which is known to be much more accurate than the camera system.
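As a minimal illustration of how such an evaluation might be scored, the following Python sketch compares invented stereo-system position estimates against laser-rangefinder ground truth using per-point error and root-mean-square error; all names and numbers are made up for the example.

```python
# Hedged illustrative sketch: scoring a stereo vision system's 3D position
# estimates against laser-rangefinder ground truth. All coordinates below
# are invented for the example.
import math

ground_truth = [(1.00, 2.00, 5.00), (0.50, -1.20, 3.40), (2.10, 0.00, 7.80)]
estimates    = [(1.03, 1.98, 5.10), (0.46, -1.25, 3.30), (2.20, 0.05, 7.60)]

def euclidean(p, q):
    # straight-line distance between two 3D points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

errors = [euclidean(gt, est) for gt, est in zip(ground_truth, estimates)]
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print("per-point errors (m):", [round(e, 3) for e in errors])
print(f"RMSE against ground truth: {rmse:.3f} m")
```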
Bayesian spam filtering is a common example of supervised learning. In this system, the algorithm is manually taught the differences between spam and non-spam. This depends on the ground truth of the messages used to train the algorithm – inaccuracies in the ground truth will correlate to inaccuracies in the resulting spam/non-spam verdicts.
Remote sensing
In remote sensing, "ground truth" refers to information collected at the imaged location. Ground truth allows image data to be related to real features and materials on the ground. The collection of ground truth data enables calibration of remote-sensing data, and aids in the interpretation and analysis of what is being sensed. Examples include cartography, meteorology, analysis of aerial photographs, satellite imagery and other techniques in which data are gathered at a distance.
More specifically, ground truth may refer to a process in which "pixels" on a satellite image are compared to what is imaged (at the time of capture) in order to verify the contents of the "pixels" in the image (noting that the concept of "pixel" is imaging-system-dependent). In the case of a classified image, supervised classification can help to determine the accuracy of the classification by the remote sensing system which can minimize error in the classification.
Ground truthing is usually done on site, correlating what is known with surface observations and measurements of various properties of the features of the ground resolution cells under study in the remotely sensed digital image. The process also involves taking geographic coordinates of the ground resolution cell with GPS technology and comparing those with the coordinates of the "pixel" being studied provided by the remote sensing software, in order to understand and analyze the location errors and how they may affect a particular study.
Ground truth is important in the initial supervised classification of an image. When the identity and location of land cover types are known through a combination of field work, maps, and personal experience, these areas are known as training sites. The spectral characteristics of these areas are used to train the remote sensing software using decision rules for classifying the rest of the image. These decision rules, such as Maximum Likelihood Classification, Parallelepiped Classification, and Minimum Distance Classification, offer different techniques to classify an image. Additional ground truth sites allow the remote sensing analyst to establish an error matrix that validates the accuracy of the classification method used. Different classification methods may have different percentages of error for a given classification project. It is important that the analyst chooses a classification method that works best with the number of classes used while providing the least amount of error.
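A toy sketch of one of these decision rules, a minimum-distance classifier whose class means come from ground-truth training sites, might look like the following Python fragment; the class names and band values are invented.

```python
# Hedged illustrative sketch: a minimum-distance-to-means classifier whose
# class means come from ground-truth training sites. Class names and band
# values are invented.
import math

training_means = {              # mean spectral signature per land-cover class
    "water":  (20.0, 15.0, 8.0),
    "forest": (35.0, 60.0, 30.0),
    "urban":  (80.0, 75.0, 70.0),
}

def classify(pixel):
    """Assign the class whose training-site mean is nearest in spectral space."""
    return min(training_means, key=lambda c: math.dist(training_means[c], pixel))

print(classify((22.0, 18.0, 9.0)))    # -> water
print(classify((78.0, 70.0, 68.0)))   # -> urban
```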
Ground truth also helps with atmospheric correction. Since images from satellites have to pass through the atmosphere, they can get distorted because of absorption in the atmosphere. So ground truth can help fully identify objects in satellite photos.
Errors of commission
An example of an error of commission is when a pixel reports the presence of a feature (such as a tree) that, in reality, is absent (no tree is actually present). Ground truthing ensures that the error matrices have a higher accuracy percentage than would be the case if no pixels were ground-truthed. This value is the complement of the user's accuracy, i.e. Commission Error = 1 − user's accuracy.
Errors of omission
An example of an error of omission is when pixels of a certain type, for example, maple trees, are not classified as maple trees. The process of ground-truthing helps to ensure that the pixel is classified correctly and the error matrices are more accurate. This value is the complement of the producer's accuracy, i.e. Omission Error = 1 − producer's accuracy.
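A minimal Python sketch of both error measures, assuming a small invented error matrix with classified pixels in rows and ground-truth reference pixels in columns, is given below.

```python
# Hedged illustrative sketch: user's/producer's accuracy and the
# commission/omission errors derived from a small error (confusion) matrix.
# Rows hold classified pixels, columns hold ground-truth reference pixels;
# the class names and counts are invented.
classes = ["maple", "oak", "water"]
matrix = [
    [50,  5,  0],   # pixels classified as maple
    [ 8, 40,  2],   # pixels classified as oak
    [ 1,  0, 60],   # pixels classified as water
]

for i, name in enumerate(classes):
    row_total = sum(matrix[i])                 # everything given this label
    col_total = sum(row[i] for row in matrix)  # all reference pixels of this class
    users_accuracy = matrix[i][i] / row_total
    producers_accuracy = matrix[i][i] / col_total
    print(f"{name}: commission error = {1 - users_accuracy:.2f}, "
          f"omission error = {1 - producers_accuracy:.2f}")
```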
Geographical information systems
In GIS, spatial data are modeled either as fields (as in remote-sensing raster images) or as objects (as in vector map representations). They are modeled from the real world (also called geographical reality), typically by a cartographic process.
Geographic information systems such as GIS, GPS, and GNSS, have become so widespread that the term "ground truth" has taken on special meaning in that context. If the location coordinates returned by a location method such as GPS are an estimate of a location, then the "ground truth" is the actual location on Earth. A smart phone might return a set of estimated location coordinates such as 43.87870,-103.45901. The ground truth being estimated by those coordinates is the tip of George Washington's nose on Mount Rushmore. The accuracy of the estimate is the maximum distance between the location coordinates and the ground truth. We could say in this case that the estimate accuracy is 10 meters, meaning that the point on earth represented by the location coordinates is thought to be within 10 meters of George's nose—the ground truth. In slang, the coordinates indicate where we think George Washington's nose is located, and the ground truth is where it really is. In practice a smart phone or hand-held GPS unit is routinely able to estimate the ground truth within 6–10 meters. Specialized instruments can reduce GPS measurement error to under a centimeter.
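As an illustration of how such an accuracy figure might be computed, the following Python sketch measures the great-circle distance between estimated coordinates and a ground-truth point with the haversine formula; the estimate is the example from the text, while the "true" point is invented.

```python
# Hedged illustrative sketch: accuracy of a GPS estimate expressed as its
# great-circle distance from the ground-truth point (haversine formula).
# The estimate is the example from the text; the "true" point is invented.
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000):
    """Distance in metres between two latitude/longitude points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

estimate = (43.87870, -103.45901)       # coordinates reported by the device
ground_truth = (43.87872, -103.45905)   # invented "true" location
print(f"estimate lies {haversine_m(*estimate, *ground_truth):.1f} m from ground truth")
```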
Military usage
US military slang uses "ground truth" to refer to the facts comprising a tactical situation—as opposed to intelligence reports, mission plans, and other descriptions reflecting the conative or policy-based projections of the industrial-military complex. The term appears in the title of the Iraq War documentary film The Ground Truth (2006), and also in military publications, for example Stars and Stripes saying: "Stripes decided to figure out what the ground truth was in Iraq."
See also
Baseline (science)
Calibration
Foundationalism
References
External links
Forestry Organization Remote Sensing Technology Project (includes an example of an error matrix)
Applications of computer vision
Automatic identification and data capture
Computational linguistics
Machine learning task
Satellite meteorology | Ground truth | [
"Technology"
] | 1,399 | [
"Computational linguistics",
"Natural language and computing",
"Data",
"Automatic identification and data capture"
] |
332,090 | https://en.wikipedia.org/wiki/Computably%20enumerable%20set | In computability theory, a set S of natural numbers is called computably enumerable (c.e.), recursively enumerable (r.e.), semidecidable, partially decidable, listable, provable or Turing-recognizable if:
There is an algorithm such that the set of input numbers for which the algorithm halts is exactly S.
Or, equivalently,
There is an algorithm that enumerates the members of S. That means that its output is a list of all the members of S: s1, s2, s3, ... . If S is infinite, this algorithm will run forever, but each element of S will be returned after a finite amount of time. Note that these elements do not have to be listed in a particular way, say from smallest to largest.
The first condition suggests why the term semidecidable is sometimes used. More precisely, if a number is in the set, one can decide this by running the algorithm, but if the number is not in the set, the algorithm can run forever, and no information is returned. A set that is "completely decidable" is a computable set. The second condition suggests why computably enumerable is used. The abbreviations c.e. and r.e. are often used, even in print, instead of the full phrase.
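To make the first condition concrete, the following Python sketch (an illustrative toy, not part of the formal theory) builds a semidecision procedure from an enumerator: the test halts exactly when the queried number shows up in the enumeration, and otherwise runs forever. The example set, the composite numbers, is of course fully decidable; only the halting behaviour is the point here.

```python
# Hedged illustrative sketch: a semidecision procedure built from an
# enumerator. The generator lists a c.e. set (here the composite numbers,
# which happen to be fully decidable - only the halting behaviour matters).
def enumerate_composites():
    n = 4
    while True:
        if any(n % d == 0 for d in range(2, n)):
            yield n
        n += 1

def semidecide(n, enumerator):
    """Halts (returning True) iff n is ever enumerated; otherwise runs forever."""
    for m in enumerator():
        if m == n:
            return True

print(semidecide(15, enumerate_composites))   # halts: 15 appears in the enumeration
# semidecide(13, enumerate_composites)        # would loop forever: 13 is prime
```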
In computational complexity theory, the complexity class containing all computably enumerable sets is RE. In recursion theory, the lattice of c.e. sets under inclusion is denoted ℰ.
Formal definition
A set S of natural numbers is called computably enumerable if there is a partial computable function whose domain is exactly S, meaning that the function is defined if and only if its input is a member of S.
Equivalent formulations
The following are all equivalent properties of a set S of natural numbers:
Semidecidability:
The set S is computably enumerable. That is, S is the domain (co-range) of a partial computable function.
The set S is Σ⁰₁ (referring to the arithmetical hierarchy).
There is a partial computable function f such that f(x) = 0 if x is in S, and f(x) is undefined (does not halt) if x is not in S.
Enumerability:
The set S is the range of a partial computable function.
The set S is the range of a total computable function, or empty. If S is infinite, the function can be chosen to be injective.
The set S is the range of a primitive recursive function or empty. Even if S is infinite, repetition of values may be necessary in this case.
Diophantine:
There is a polynomial p with integer coefficients and variables x, a, b, c, d, e, f, g, h, i ranging over the natural numbers such that x is in S if and only if there exist a, b, c, d, e, f, g, h, i with p(x, a, b, c, d, e, f, g, h, i) = 0. (The number of bound variables in this definition is the best known so far; it might be that a lower number can be used to define all Diophantine sets.)
There is a polynomial from the integers to the integers such that the set S contains exactly the non-negative numbers in its range.
The equivalence of semidecidability and enumerability can be obtained by the technique of dovetailing.
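A toy Python sketch of dovetailing follows; it is only an illustration under invented names, not a formal construction. Each membership test is written as a generator so that it can be advanced one step at a time, and the enumerator interleaves one new test with one further step of every test already running. The example predicate (whether the Collatz trajectory of a number reaches 1) is conjectured, but not known, to hold for every number.

```python
# Hedged illustrative sketch of dovetailing: turning step-wise membership
# tests into an enumerator. check(n) is a generator that can be advanced one
# "step" at a time and finishes only if n satisfies the example predicate
# (its Collatz trajectory reaches 1 - conjecturally true for every n).
def check(n):
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        yield                       # one step of computation

def dovetail():
    running = {}                    # n -> its partially-run checker
    n = 0
    while True:
        n += 1
        running[n] = check(n)       # start one new computation ...
        for m, proc in list(running.items()):
            try:
                next(proc)          # ... and advance every live one by a step
            except StopIteration:   # checker finished: m belongs to the set
                del running[m]
                yield m

gen = dovetail()
print([next(gen) for _ in range(10)])   # first ten numbers enumerated
```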
The Diophantine characterizations of a computably enumerable set, while not as straightforward or intuitive as the first definitions, were found by Yuri Matiyasevich as part of the negative solution to Hilbert's Tenth Problem. Diophantine sets predate recursion theory and are therefore historically the first way to describe these sets (although this equivalence was only remarked more than three decades after the introduction of computably enumerable sets).
Examples
Every computable set is computably enumerable, but it is not true that every computably enumerable set is computable. For computable sets, the algorithm must also say if an input is not in the set – this is not required of computably enumerable sets.
A recursively enumerable language is a computably enumerable subset of a formal language.
The set of all provable sentences in an effectively presented axiomatic system is a computably enumerable set.
Matiyasevich's theorem states that every computably enumerable set is a Diophantine set (the converse is trivially true).
The simple sets are computably enumerable but not computable.
The creative sets are computably enumerable but not computable.
Any productive set is not computably enumerable.
Given a Gödel numbering φ of the computable functions, the set {⟨x, y⟩ : φ_x(y) is defined} (where ⟨x, y⟩ is the Cantor pairing of x and y) is computably enumerable. This set encodes the halting problem as it describes the input parameters for which each Turing machine halts.
Given a Gödel numbering φ of the computable functions, the set {⟨x, y, z⟩ : φ_x(y) = z} is computably enumerable. This set encodes the problem of deciding a function value.
Given a partial function f from the natural numbers into the natural numbers, f is a partial computable function if and only if the graph of f, that is, the set of all pairs such that f(x) is defined, is computably enumerable.
Properties
If A and B are computably enumerable sets then A ∩ B, A ∪ B and A × B (with the ordered pair of natural numbers mapped to a single natural number with the Cantor pairing function) are computably enumerable sets. The preimage of a computably enumerable set under a partial computable function is a computably enumerable set.
A set is called co-computably-enumerable or co-c.e. if its complement is computably enumerable. Equivalently, a set is co-r.e. if and only if it is at level Π⁰₁ of the arithmetical hierarchy. The complexity class of co-computably-enumerable sets is denoted co-RE.
A set A is computable if and only if both A and the complement of A are computably enumerable.
Some pairs of computably enumerable sets are effectively separable and some are not.
Remarks
According to the Church–Turing thesis, any effectively calculable function is calculable by a Turing machine, and thus a set S is computably enumerable if and only if there is some algorithm which yields an enumeration of S. This cannot be taken as a formal definition, however, because the Church–Turing thesis is an informal conjecture rather than a formal axiom.
The definition of a computably enumerable set as the domain of a partial function, rather than the range of a total computable function, is common in contemporary texts. This choice is motivated by the fact that in generalized recursion theories, such as α-recursion theory, the definition corresponding to domains has been found to be more natural. Other texts use the definition in terms of enumerations, which is equivalent for computably enumerable sets.
See also
RE (complexity)
Recursively enumerable language
Arithmetical hierarchy
References
Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press. ; .
Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987. .
Soare, Robert I. Recursively enumerable sets and degrees. Bull. Amer. Math. Soc. 84 (1978), no. 6, 1149–1181.
Computability theory
Theory of computation | Computably enumerable set | [
"Mathematics"
] | 1,609 | [
"Computability theory",
"Mathematical logic"
] |
332,095 | https://en.wikipedia.org/wiki/Teip | A teip (also taip, tayp, teyp; Chechen and Ingush: тайпа, romanized: taypa , family, kin, clan, tribe) is a Chechen and Ingush tribal organization or clan, self-identified through descent from a common ancestor or geographic location. It is a sub-unit of the tukkhum and shahar. There are about 150 Chechen and 120 Ingush teips. Teips played an important role in the socioeconomic life of the Chechen and Ingush peoples before and during the Middle Ages, and continue to be an important cultural part to this day.
Traditional rules and features
Common teip rules and some features include:
The right of communal land tenure.
Common revenge practices for the murder of a teip member or insulting of the members of a teip.
Unconditional exogamy.
Election of a teip representative.
Election of a headman.
Election of a military leader in case of war.
Open sessions of the Council of Elders.
The right of the teip to depose its representatives.
Representation of women is done by male relatives.
The right of adoption of outside people.
The transfer of property of departed members to members of the teip.
The teip has a defined territory.
The teip constructed a teip tower or another building or natural monument convenient as a shelter, e.g. a fortress.
The teip had its own teip cemetery.
The teip tradition of hospitality.
Identity, land and descent
Teips being sub-units of tukkhums, members of the same teip are traditionally thought to descend from a common ancestor, and thus are considered distant blood relatives. Teip names were often derived from an ancestral founder. As is also true of many other North Caucasian peoples, traditionally, Chechen and Ingush men were expected to know the names and places of origin of ancestors on their father's side, going back many generations, with the most common number being considered 7. Many women also memorized this information, and keener individuals can often recite their maternal ancestral line as well. The memorization of the information serves as a way to impute clan loyalty to younger generations. Among peoples of the Caucasus, traditionally, large scale land disputes could sometimes be solved with the help of mutual knowledge of whose ancestors resided where and when.
A teip's ancestral land was thus held as sacred, because of its close link to teip identity. It was typically marked by clan symbols, including the clan cemetery, tower, and sanctuary. Land being scarce in mountainous Ingushetia and Chechnya, after the feudal system was overthrown, each teip claimed a definite area of land. Land boundaries were marked by stones with specific marks pointing to a local place of worship. While at first land was owned collectively, individual cultivation ultimately became the norm. In old Chechen and Ingush tradition, women were allowed to own land. The vehement Ingush and Chechen opposition to Soviet collectivization has been explained by the threat it posed to the traditional customs of land allotment.
Political function
Each teip had an elected council of elders, a court of justice, and its own set of customs. The civilian chief, referred to as the thamda or kh'alkhancha, chaired the council of elders. The baechcha, meanwhile, was the military leader.
Subdivisions
The teip has its own subdivisions, in order of their progressive nesting, the , the , and the . The consists of households sharing the same family name, while the is a number of units that together form a common lineage, however that is not always the case. The basic social unit, meanwhile, was the household, consisting of the extended family spanning three or four generations, referred to as the tsa or the , with married daughters usually living with in the household of their spouse. Brothers would share the same land and livestock.
Formation of new teips
The number of teips has been unstable in recent history. While there were 59 Chechen and Ingush teips in the early 19th century, this swelled to a hundred by the mid-19th century, and today there are about 170. New teips could be founded when a large broke off and claimed the title of a full-fledged teip.
List of teips
Below is a list of teips with the Chechen tukkhum to which it may belong.
Cheberloy tukkhum ();
Achalo ();
Nizhaloy ();
Makazhoy ();
Rigakhoy ();
Buni ();
Sharoy tukkhum ();
Shatoy tukkhum ();
Varandoy ();
Keloy ()
Tumsoy ();
Ovkhoy tukkhum ();
Veappii ();
Melkhi tukkhum ();
Nokhchmakhkakhoy tukkhum ();
Alleroy ();
Belgatoy ();
Benoy ();
Biltoy ();
Chartoy ();
Chermoy ();
Tsontaroy ();
Elistanzhkhoy ();
Engnoy ();
Ersenoy ();
Gendargenoy ();
Gordaloy ();
Gunoy ();
Kharachoy ();
Kurchaloy ();
Shonoy ();
Yalkhoy ();
Zandkhoy ();
Orstkhoy tukkhum (Russian: Орстхой);
Tsechoy ();
Anastoy ();
Galai ();
Ghoandaloy ();
Merzhoy ();
Guloy (Russian: Гулой);
Yalkharoy ();
Khaikharoy ();
Chantiy tukkhum ();
Chanti ();
Tukkhum is not known / Without a Tukkhum;
Chinkhoy ();
Dishni ();
Marshaloy ();
Mulkoy ();
Nashkhoy ();
Peshkhoy ();
Satoy ();
Turkoy ();
Terloy tukkhum ();
Khindkhoy ();
Kalkhoy ();
Yalkhoroy ();
Zumsoy ();
Zurzaqoy ().
As well as a list of teips included in the ethno-territorial Ingush societies Shahar''Zhayrakhoy Shahar ();
Ahrievs ();
Borovs ();
Lyanovs ();
Tsurovs ();
Khamatkhanovs ();Fyappiy Shahar ();
Gelatkhoy ();
Kharpkhoy ();
Salgkhoy ();
Torshkhoy ();
Korakhoy ();
Väppiy ();Khamkhoy Shahar ();
Egikhoy ();
Khamkhoy ();
Targimkhoy ();
Barakhoy ();
Barkinkhoy ();
Tumkhoy ();
Barkkhanoy ();
Leimoy ();
Khulkhoy ();Tshoroy Shahar ();
Tshoroy ();
Ozdoy ()
Mokhloy ()Galashkakhoy Shahar ();Orstkhoy Shahar ();
Ghoandaloy ();
Tsechoy ();
Anastoy (Russian: Анастой);
Galai ();
Belharoy ();
Merzhoy ();
Guloy ();
Muzhakhoy ();
Khaikharoy ();
Yalkharoy ();Chulkhoy''' Shahar ();
See also
Tukkhum
History of Chechnya
Medieval history of Christianity in Chechnya
References
Bibliography
Russian sources
External links
Teips on chechen.org (In Russian )
Russia and Eurasia Review (pdf)
Traditional social organisation of the Chechens (pdf)
A complete list of all Chechen Teips
Chechnya
Kinship and descent
Nakh peoples
Nakh culture
Tribes of the Caucasus | Teip | [
"Biology"
] | 1,730 | [
"Behavior",
"Human behavior",
"Kinship and descent"
] |
332,183 | https://en.wikipedia.org/wiki/Ian%20Stewart%20%28mathematician%29 | Ian Nicholas Stewart (born 24 September 1945) is a British mathematician and a popular-science and science-fiction writer. He is Emeritus Professor of Mathematics at the University of Warwick, England.
Education and early life
Stewart was born in 1945 in Folkestone, England. While in the sixth form at Harvey Grammar School in Folkestone he came to the attention of the mathematics teacher. The teacher had Stewart sit mock A-level examinations without any preparation along with the upper-sixth students; Stewart was placed first in the examination. He was awarded a scholarship to study at the University of Cambridge as an undergraduate student of Churchill College, Cambridge, where he studied the Mathematical Tripos and obtained a first-class Bachelor of Arts degree in mathematics in 1966. Stewart then went to the University of Warwick where his PhD on Lie algebras was supervised by Brian Hartley and completed in 1969.
Career and research
After his PhD, Stewart was offered an academic position at Warwick. He is well known for his popular expositions of mathematics and his contributions to catastrophe theory.
While at Warwick, Stewart edited the mathematical magazine Manifold. He also wrote a column called "Mathematical Recreations" for Scientific American magazine from 1991 to 2001. This followed the work of past columnists like Martin Gardner, Douglas Hofstadter, and A. K. Dewdney. Altogether, he wrote 96 columns for Scientific American, which were later reprinted in the books "Math Hysteria", "How to Cut a Cake: And Other Mathematical Conundrums" and "Cows in the Maze".
Stewart has held visiting academic positions in Germany (1974), New Zealand (1976), and the US (University of Connecticut 1977–78, University of Houston 1983–84).
Stewart has published more than 140 scientific papers, including a series of influential papers co-authored with Jim Collins on coupled oscillators and the symmetry of animal gaits.
Stewart has collaborated with Jack Cohen and Terry Pratchett on four popular science books based on Pratchett's Discworld. In 1999 Terry Pratchett made both Jack Cohen and Professor Ian Stewart "Honorary Wizards of the Unseen University" at the same ceremony at which the University of Warwick gave Terry Pratchett an honorary degree.
In March 2014 Ian Stewart's iPad app, Incredible Numbers by Professor Ian Stewart, launched in the App Store. The app was produced in partnership with Profile Books and Touch Press.
Mathematics and popular science
Manifold, mathematical magazine published at the University of Warwick (1960s)
Nut-crackers: Puzzles and Games to Boggle the Mind (Piccolo Books) with John Jaworski, 1971.
Concepts of Modern Mathematics (1975)
Oh! Catastrophe (1982, in French)
Does God Play Dice? The New Mathematics of Chaos (1989)
Game, Set and Math (1991)
Fearful Symmetry (1992)
Another Fine Math You've Got Me Into (1992)
The Collapse of Chaos: Discovering Simplicity in a Complex World, with Jack Cohen (1995)
Nature's Numbers: The Unreal Reality of Mathematics (1995)
What is Mathematics? – originally by Richard Courant and Herbert Robbins, second edition revised by Ian Stewart (1996)
From Here to Infinity (1996), originally published as The Problems of Mathematics (1987)
Figments of Reality, with Jack Cohen (1997)
The Magical Maze: Seeing the World Through Mathematical Eyes (1998)
Life's Other Secret (1998)
What Shape is a Snowflake? (2001)
Flatterland (2001) (See Flatland)
The Annotated Flatland (2002)
Evolving the Alien: The Science of Extraterrestrial Life, with Jack Cohen (2002). Second edition published as What Does a Martian Look Like? The Science of Extraterrestrial Life.
Math Hysteria (2004)
The Mayor of Uglyville's Dilemma (2005)
Letters to a Young Mathematician (2006)
How to Cut a Cake: And Other Mathematical Conundrums (2006)
Why Beauty Is Truth: A History of Symmetry (2007)
Taming the infinite: The story of Mathematics from the first numbers to chaos theory (2008)
Professor Stewart's Cabinet of Mathematical Curiosities (2008)
Professor Stewart's Hoard of Mathematical Treasures: Another Drawer from the Cabinet of Curiosities (2009)
Cows in the Maze: And Other Mathematical Explorations (2010)
The Mathematics of Life (2011)
In Pursuit of the Unknown: 17 Equations That Changed the World (2012)
Symmetry: A Very Short Introduction (2013)
Visions of Infinity: The Great Mathematical Problems (2013)
Professor Stewart's Casebook of Mathematical Mysteries (2014)
Incredible Numbers by Professor Ian Stewart (iPad app) (2014)
Calculating the Cosmos: How Mathematics Unveils the Universe (2016)
Infinity: A Very Short Introduction (2017), Oxford University Press.
Significant Figures: The Lives and Work of Great Mathematicians (2017)
Do Dice Play God? The Mathematics of Uncertainty (2019), Profile Books.
What's the Use?: How Mathematics Shapes Everyday Life (2021), Basic Books.
What's the Use?: The Unreasonable Effectiveness of Mathematics (2021), Profile Books.
Computer programming
Easy Programming for the ZX Spectrum (1982), with Robin Jones, Shiva Publishing Ltd.,
Computer Puzzles For Spectrum & ZX81 (1982), with Robin Jones, Shiva Publishing Ltd.,
Timex Sinclair 1000: Programs, Games, and Graphics, with Robin Jones, Birkhäuser,
Spectrum Machine Code (1983), with Robin Jones, Shiva Publishing Ltd.,
Further Programming for the ZX Spectrum (1983), with Robin Jones, Shiva Publishing Ltd.,
Gateway to Computing with the ZX Spectrum (1984), Shiva Publishing Ltd.,
Science of Discworld series
The Science of Discworld, with Jack Cohen and Terry Pratchett
The Science of Discworld II: The Globe, with Jack Cohen and Terry Pratchett
The Science of Discworld III: Darwin's Watch, with Jack Cohen and Terry Pratchett
The Science of Discworld IV: Judgement Day, with Jack Cohen and Terry Pratchett
Textbooks
Catastrophe Theory and its Applications, with Tim Poston, Pitman, 1978. .
The Foundations of Mathematics, 2nd Edition, Ian Stewart, David Tall. Oxford University Press, 2015.
Algebraic number theory and Fermat's last theorem, 4th Edition, Ian Stewart, David Tall. Chapman & Hall/CRC, 2015
Complex Analysis, 2nd Edition, Ian Stewart, David Tall. Cambridge University Press, 2018.
Galois Theory, 5th Edition, Chapman & Hall/CRC, 2022 Galois Theory Errata for 3rd Edition
Science fiction
Wheelers, with Jack Cohen (fiction)
Heaven, with Jack Cohen, , Aspect, May 2004 (fiction)
Science and mathematics
Awards and honours
In 1995 Stewart received the Michael Faraday Medal and in 1997 he gave the Royal Institution Christmas Lecture on The Magical Maze. He was elected as a Fellow of the Royal Society in 2001. Stewart was the first recipient in 2008 of the Christopher Zeeman Medal, awarded jointly by the London Mathematical Society (LMS) and the Institute of Mathematics and its Applications (IMA) for his work on promoting mathematics.
Personal life
Stewart married Avril in 1970. They met at a party at a house that Avril was renting while she trained as a nurse. They have two sons. He lists his recreations as science fiction, painting, guitar, keeping fish, geology, Egyptology and snorkelling.
References
External links
20th-century English mathematicians
21st-century English mathematicians
People from Folkestone
English science writers
Fellows of the Royal Society
Alumni of the University of Warwick
Alumni of Churchill College, Cambridge
1945 births
Living people
Academics of the University of Warwick
Academics of Gresham College
Mathematics popularizers
British textbook writers
Recreational mathematicians | Ian Stewart (mathematician) | [
"Mathematics"
] | 1,591 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
332,193 | https://en.wikipedia.org/wiki/Phantom%20power | Phantom power, in the context of professional audio equipment, is DC electric power equally applied to both signal wires in balanced microphone cables, forming a phantom circuit, to operate microphones that contain active electronic circuitry.
It is best known as a convenient power source for condenser microphones, though many active direct boxes also use it. The technique is also used in other applications where power supply and signal communication take place over the same wires.
Phantom power supplies are often built into mixing consoles, microphone preamplifiers and similar equipment. In addition to powering the circuitry of a microphone, traditional condenser microphones also use phantom power for polarizing the microphone's transducer element.
History
Phantom powering has been used in landline copper wire-based plain old telephone service since the introduction of the rotary dial telephone in 1919. One such application in the telephone system was to provide a DC signalling path around transformer-connected amplifiers such as analogue line transmission systems.
The first known commercially available phantom-powered microphone was the Schoeps model CMT 20, which came out in 1964, built to the specifications of French radio with 9–12 volt DC phantom power; the positive pole of this powering was grounded. Microphone preamplifiers of the Nagra IV-series tape recorders offered this type of powering as an option for many years and Schoeps continued to support "negative phantom" until the CMT series was discontinued in the mid-1970s, but it is obsolete now.
In 1966, Neumann GmbH presented a new type of transistorized microphone to the Norwegian Broadcasting Corporation, NRK. Norwegian Radio had requested phantom-powered operation. Since NRK already had 48-volt power available in their studios for their emergency lighting systems, this voltage was used for powering the new microphones (model KM 84), and is the origin of 48-volt phantom power. This arrangement was later standardized in DIN 45596.
Standards
The International Electrotechnical Commission Standards Committee's "Multimedia systems – Guide to the recommended characteristics of analogue interfaces to achieve interoperability" (IEC 61938:2018) specifies parameters for microphone phantom power delivery. Three variants are defined by the document: P12, P24 and P48. In addition, two additional variants (P12L and SP48) are mentioned for specialized applications. Most microphones now use the P48 standard (maximum available power is 240 mW). Although 12 and 48-volt systems are still in use, the standard recommends a 24-volt supply for new systems.
Technical information
Phantom powering consists of a phantom circuit where direct current is applied equally through the two signal lines of a balanced audio connector (in modern equipment, both pins 2 and 3 of an XLR connector). The supply voltage is referenced to the ground pin of the connector (pin 1 of an XLR), which normally is connected to the cable shield or a ground wire in the cable or both. When phantom powering was introduced, one of its advantages was that the same type of balanced, shielded microphone cable that studios were already using for dynamic microphones could be used for condenser microphones. This is in contrast to microphones with vacuum-tube circuitry, most of which require special, multi-conductor cables.
With phantom power, the supply voltage is effectively invisible to balanced microphones that do not use it, which includes most dynamic microphones. A balanced signal consists only of the differences in voltage between two signal lines; phantom powering places the same DC voltage on both signal lines of a balanced connection. This is in marked contrast to another, slightly earlier method of powering known as "parallel powering" or "T-powering" (from the German term Tonaderspeisung), in which DC was overlaid directly onto the signal in differential mode. Connecting a conventional microphone to an input that had parallel powering enabled could very well damage the microphone.
The IEC 61938 Standard defines 48-volt, 24-volt, and 12-volt phantom powering. The signal conductors are positive, both fed through resistors of equal value (6.81 kΩ for 48 V, 1.2 kΩ for 24 V, and 680 Ω for 12 V), and the shield is ground. The 6.81 kΩ value is not critical, but the resistors must be matched to within 0.1% or better to maintain good common-mode rejection in the circuit. The 24-volt version of phantom powering, proposed quite a few years after the 12 and 48 V versions, was also included in the DIN standard and is in the IEC standard, but it was never widely adopted by equipment manufacturers.
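As a rough illustration of the arithmetic involved (a simplified sketch that ignores cable resistance and assumes the supply current divides evenly between the two feed resistors), the following Python fragment estimates the voltage left at the microphone for several current draws under 48-volt powering.

```python
# Hedged illustrative sketch: voltage remaining at the microphone for a given
# current draw under 48 V phantom power. Simplified model - cable resistance
# is ignored and the current is assumed to split evenly, so the two 6.81 kOhm
# feed resistors act as one 3.405 kOhm source resistance.
SUPPLY_V = 48.0
FEED_RESISTOR_OHM = 6810.0
EFFECTIVE_SOURCE_OHM = FEED_RESISTOR_OHM / 2    # two resistors in parallel

for current_ma in (1, 2, 4, 10):
    drop_v = EFFECTIVE_SOURCE_OHM * current_ma / 1000.0
    print(f"{current_ma:>2} mA draw -> about {SUPPLY_V - drop_v:5.1f} V at the microphone")
```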
Nearly all modern mixing consoles have a switch for turning phantom power on or off; in most high-end equipment this can be done individually by channel, while on smaller mixers a single master switch may control power delivery to all channels. Phantom power can be blocked in any channel with a 1:1 isolation transformer or blocking capacitors. Phantom powering can cause equipment malfunction or even damage if used with cables or adapters that connect one side of the input to ground, or if certain equipment other than microphones is connected to it.
Instrument amplifiers rarely provide phantom power. To use equipment requiring it with these amplifiers, a separate power supply must be inserted into the line. These are readily available commercially, or alternatively are one of the easier projects for the amateur electronics constructor.
Caveats
Some microphones offer a choice of internal battery powering or (external) phantom powering. In some such microphones, it is advisable to remove the internal batteries when phantom power is being used since batteries may corrode and leak chemicals. Other microphones are specifically designed to switch over to the internal batteries if an external supply fails.
Phantom powering is not always implemented correctly or adequately, even in professional-quality preamps, mixers, and recorders. In part, this is because first-generation (late-1960s through mid-1970s) 48-volt phantom-powered condenser microphones had simple circuitry and required only small amounts of operating current (typically less than 1 mA per microphone), so the phantom supply circuits typically built into recorders, mixers, and preamps of that time were designed on the assumption that this current would be adequate. The original DIN 45596 phantom-power specification called for a maximum of 2 mA. This practice has carried forward to the present; many 48-volt phantom power supply circuits, especially in low-cost and portable equipment, simply cannot supply more than 1 or 2 mA total without breaking down. Some circuits also have significant additional resistance in series with the standard pair of supply resistors for each microphone input; this may not affect low-current microphones much, but it can disable microphones that need more current.
Mid-1970s and later condenser microphones designed for 48-volt phantom powering often require much more current (e.g., 2–4 mA for Neumann transformerless microphones, 4–5 mA for the Schoeps CMC ("Colette") series and Josephson microphones, 5–6 mA for most Shure KSM-series microphones, 8 mA for CAD Equiteks and 10 mA for Earthworks). The IEC standard gives 10 mA as the maximum allowed current per microphone. If its required current is not available, a microphone may still put out a signal, but it cannot deliver its intended level of performance. The specific symptoms vary somewhat, but the most common result will be reduction of the maximum sound pressure level that the microphone can handle without overload (distortion). Some microphones will also show lower sensitivity (output level for a given sound-pressure level).
Most ground lift switches have the unwanted effect of disconnecting phantom power. There must always be a DC current path between pin 1 of the microphone and the negative side of the 48-volt supply if power is to reach the microphone's electronics. Lifting the ground, which is normally pin 1, breaks this path and disables the phantom power supply.
There is a common belief that connecting a dynamic or ribbon microphone to a phantom-powered input will damage it. There are three possibilities for this damage to occur. If there is a fault in the cable, phantom power may damage some mics by applying a voltage across the output of the microphone. Equipment damage is also possible if a phantom-powered input is connected to an unbalanced dynamic microphone or to electronic musical instruments. The transient generated when a microphone is hot-plugged into an input with active phantom power can damage the microphone and possibly the preamp circuit of the input, because not all pins of the microphone connector make contact at the same time, and there is an instant when current can flow to charge the capacitance of the cable from one side of the phantom-powered input and not the other. This is particularly a problem with long microphone cables. It is considered good practice to disable phantom power to devices that don't require it.
Digital phantom power
Digital microphones complying with the AES 42 standard may be provided with phantom power at 10 volts, impressed on both audio leads and ground. This supply can furnish up to 250 mA to digital microphones. A keyed variation of the usual XLR connector, the XLD connector, may be used to prevent accidental interchange of analog and digital devices.
Other microphone powering techniques
T-power, also known as A-B powering or T12, described in DIN 45595, is an alternative to phantom powering that is still widely used in the world of production film sound. Many mixers and recorders intended for that market have a T-power option. The method is considered obsolete as power supply noise is added to the output audio signal. Many older Sennheiser and Schoeps microphones use this powering method, although newer recorders and mixers are phasing out this option. Adapter barrels, and dedicated power supplies, are made to accommodate T-powered microphones. In this scheme, 12 volts is applied through 180-ohm resistors between the microphone's "hot" terminal (XLR pin 2) and the microphone's "cold" terminal (XLR pin 3). This results in a 12-volt potential difference with significant current capability across pins 2 and 3, which would likely cause permanent damage if applied to a dynamic or ribbon microphone.
Plug-in-power (PiP) is the low-current 3–5 V supply provided at the microphone jack of some consumer equipment, such as portable recorders and computer sound cards. It is also defined in IEC 61938. It is unlike phantom power since it is an unbalanced interface with a low voltage (around +5 volts) connected to the signal conductor with return through the sleeve; the DC power is in common with the audio signal from the microphone. A capacitor is used to block the DC from subsequent audio frequency circuits. It is often used for powering electret microphones, which will not function without power. It is suitable only for powering microphones specifically designed for use with this type of power supply. Damage may result if these microphones are connected to true (48 V) phantom power through a 3.5 mm to XLR adapter that connects the XLR shield to the 3.5 mm sleeve. Plug-in-power is covered by Japanese standard CP-1203A:2007.
These alternative powering schemes are sometimes improperly referred to as "phantom power" and should not be confused with true 48-volt phantom powering described above.
Some condenser microphones can be powered with a 1.5-volt cell contained in a small compartment in the microphone or in an external housing.
Phantom power is sometimes used by workers in avionics to describe the DC bias voltage used to power aviation microphones, which use a lower voltage than professional audio microphones. Phantom power used in this context is 8–16 volts DC in series with a 470-ohm (nominal) resistor as specified in RTCA Inc. standard DO-214. These microphones evolved from the carbon microphones used in the early days of aviation and the telephone, which relied on a DC bias voltage across the carbon microphone element.
Other uses
Phantom power is also used in applications other than microphones:
Active antennas
Low-noise block downconverter
Power over Ethernet
Notes
See also
Bias tee
Power-line communication, data communication over mains electricity
Simplex signaling
References
External links
The Schoeps CMT 20 microphone of 1964 – the world's first phantom-powered microphone
Phantom Powering – Balanced Lines, Phantom Powering, Grounding, and Other Arcane Mysteries. Loud Technologies Inc, 2003
Powering microphones – a collection of information and circuits for powering electret microphone capsules
Microphone Design and Operation – contains alternative condenser microphone powering techniques including T-power/12T/A-B powering/DIN 45595
DIY tester – for the presence of phantom power and limited wiring testing
Audio engineering
Microphones | Phantom power | [
"Engineering"
] | 2,704 | [
"Electrical engineering",
"Audio engineering"
] |
332,222 | https://en.wikipedia.org/wiki/Hushmail | Hushmail is an encrypted proprietary web-based email service offering PGP-encrypted e-mail and vanity domain service. Hushmail uses OpenPGP standards. If public encryption keys are available to both recipient and sender (either both are Hushmail users or have uploaded PGP keys to the Hush keyserver), Hushmail can convey authenticated, encrypted messages in both directions. For recipients for whom no public key is available, Hushmail will allow a message to be encrypted by a password (with a password hint) and stored for pickup by the recipient, or the message can be sent in cleartext. In July 2016, the company launched an iOS app that offers end-to-end encryption and full integration with the webmail settings. The company is located in Vancouver, British Columbia, Canada.
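As a rough illustration of the OpenPGP flow such a service relies on (encrypting to a recipient's public key so that only the matching private key can decrypt), the following Python sketch uses the third-party python-gnupg package; it requires a local GnuPG installation, and the keyring path, address and passphrase are invented for the example and have nothing to do with Hushmail's actual implementation.

```python
# Hedged illustrative sketch of the OpenPGP flow: encrypt to a recipient's
# public key; only the matching private key (plus passphrase) decrypts.
# Uses the third-party python-gnupg package and a local GnuPG install; the
# keyring path, address and passphrase are invented and unrelated to Hushmail.
import os
import gnupg

home = "/tmp/pgp-demo"
os.makedirs(home, exist_ok=True)
gpg = gnupg.GPG(gnupghome=home)                 # throwaway keyring

# Recipient generates a key pair (may take a moment); the public half would
# normally be published on a keyserver.
gpg.gen_key(gpg.gen_key_input(
    key_type="RSA", key_length=2048,
    name_email="recipient@example.com", passphrase="demo-passphrase"))

# Sender encrypts to the recipient's public key ...
encrypted = gpg.encrypt("meet at noon", "recipient@example.com", always_trust=True)
print(str(encrypted)[:60], "...")               # ASCII-armoured ciphertext

# ... and only the private-key holder can read it.
decrypted = gpg.decrypt(str(encrypted), passphrase="demo-passphrase")
print(decrypted.data.decode())
```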
History
Hushmail was founded by Cliff Baltzley in 1999 after he left Ultimate Privacy.
Accounts
Individuals
There is one type of paid account, Hushmail for Personal Use, which provides 10GB of storage, as well as IMAP and POP3 service.
Businesses
The standard business account provides the same features as the paid individual account, plus other features like vanity domain, email forwarding, catch-all email, user admin, archive, and Business Associate Agreements for healthcare plans. Features like secure forms and electronic signatures are available in specific plans.
Additional security features include hidden IP addresses in e-mail headers, two-step verification and HIPAA-compliant encryption.
Instant messaging
An instant messaging service, Hush Messenger, was offered until July 1, 2011.
Compromises to email privacy
Hushmail received favorable reviews in the press. It was believed that possible threats, such as demands from the legal system to reveal the contents of traffic through the system, were less imminent in Canada than in the United States, and that if data were handed over, messages would be available only in encrypted form.
Developments in November 2007 led to doubts amongst security-conscious users about Hushmail's security specifically, concern over a backdoor. The issue originated with the non-Java version of the Hush system. It performed the encrypt/decrypt steps on Hush's servers, and then used SSL to transmit the data to the user. The data is available as cleartext during this small window of time, with the passphrase being capturable at this point, facilitating the decryption of all stored messages and future messages using this passphrase. Hushmail stated that the Java version is also vulnerable, in that they may be compelled to deliver a compromised Java applet to a user.
Hushmail supplied cleartext copies of private email messages associated with several addresses at the request of law enforcement agencies under a Mutual Legal Assistance Treaty with the United States: e.g. in the case of United States v. Stumbo. In addition, the contents of emails between Hushmail addresses were analyzed, and 12 CDs were supplied to U.S. authorities. Hushmail privacy policy states that it logs IP addresses in order "to analyze market trends, gather broad demographic information, and prevent abuse of our services."
Hush Communications, the company that provides Hushmail, states that it will not release any user data without a court order from the Supreme Court of British Columbia, Canada and that other countries seeking access to user data must apply to the government of Canada via an applicable Mutual Legal Assistance Treaty. Hushmail states, "...that means that there is no guarantee that we will not be compelled, under a court order issued by the Supreme Court of British Columbia, Canada, to treat a user named in a court order differently, and compromise that user's privacy" and "[...]if a court order has been issued by the Supreme Court of British Columbia compelling us to reveal the content of your encrypted email, the "attacker" could be Hush Communications, the actual service provider."
See also
Comparison of mail servers
Comparison of webmail providers
References
External links
Cryptographic software
Webmail
Internet privacy software
OpenPGP
Internet properties established in 1999 | Hushmail | [
"Mathematics"
] | 841 | [
"Cryptographic software",
"Mathematical software"
] |
332,229 | https://en.wikipedia.org/wiki/Additive%20group | An additive group is a group of which the group operation is to be thought of as addition in some sense. It is usually abelian, and typically written using the symbol + for its binary operation.
This terminology is widely used with structures equipped with several operations for specifying the structure obtained by forgetting the other operations. Examples include the additive group of the integers, of a vector space and of a ring. This is particularly useful with rings and fields to distinguish the additive underlying group from the multiplicative group of the invertible elements.
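As a small illustration of the distinction (a toy sketch; the modulus 6 is chosen arbitrarily), the following Python fragment lists the additive group and the multiplicative group of units of the ring Z/6Z.

```python
# Hedged illustrative sketch: the additive group of the ring Z/6Z keeps every
# residue under addition, while its multiplicative group keeps only the
# invertible elements (the units). The modulus 6 is an arbitrary choice.
from math import gcd

n = 6
additive_group = list(range(n))                                    # (Z/6Z, +)
multiplicative_group = [a for a in range(1, n) if gcd(a, n) == 1]  # units of Z/6Z

print("additive group of Z/6Z:      ", additive_group)          # [0, 1, 2, 3, 4, 5]
print("multiplicative group (units):", multiplicative_group)    # [1, 5]
```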
In older terminology, an additive subgroup of a ring has also been known as a modul or module (not to be confused with a module over a ring).
References
Algebraic structures
Group theory | Additive group | [
"Mathematics"
] | 145 | [
"Mathematical structures",
"Mathematical objects",
"Group theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
332,256 | https://en.wikipedia.org/wiki/Gabriel%27s%20horn | A Gabriel's horn (also called Torricelli's trumpet) is a type of geometric figure that has infinite surface area but finite volume. The name refers to the Christian tradition where the archangel Gabriel blows the horn to announce Judgment Day. The properties of this figure were first studied by Italian physicist and mathematician Evangelista Torricelli in the 17th century.
These colourful informal names and the allusion to religion came along later.
Torricelli's own name for it is to be found in the Latin title of his paper De solido hyperbolico acuto, written in 1643: a truncated acute hyperbolic solid, cut by a plane.
Volume 1, part 1 of his Opera geometrica, published the following year, included that paper and a second more orthodox (for the time) Archimedean proof of its theorem about the volume of a truncated acute hyperbolic solid.
This name was used in mathematical dictionaries of the 18th century, including "Hyperbolicum Acutum" in Harris' 1704 dictionary and in Stone's 1726 one, and the French translation in d'Alembert's 1751 one.
Although credited with primacy by his contemporaries, Torricelli was not the first to describe an infinitely long shape with a finite volume or area.
The work of Nicole Oresme in the 14th century had either been forgotten by, or was unknown to them.
Oresme had posited such things as an infinitely long shape constructed by subdividing two squares of finite total area 2 using a geometric series and rearranging the parts into a figure, infinitely long in one dimension, comprising a series of rectangles.
Mathematical definition
Gabriel's horn is formed by taking the graph of y = 1/x, with the domain x ≥ 1, and rotating it in three dimensions about the x-axis. The discovery was made using Cavalieri's principle before the invention of calculus, but today, calculus can be used to calculate the volume and surface area of the horn between x = 1 and x = a, where a > 1. Using integration (see Solid of revolution and Surface of revolution for details), it is possible to find the volume V and the surface area A:

V = \pi \int_1^a \frac{1}{x^2}\,dx = \pi\left(1 - \frac{1}{a}\right)

A = 2\pi \int_1^a \frac{1}{x}\sqrt{1 + \frac{1}{x^4}}\,dx > 2\pi \int_1^a \frac{dx}{x} = 2\pi \ln a
The value a can be as large as required, but it can be seen from the equation that the volume of the part of the horn between x = 1 and x = a will never exceed π; however, it does gradually draw nearer to π as a increases. Mathematically, the volume approaches π as a approaches infinity. Using the limit notation of calculus,

\lim_{a \to \infty} V = \lim_{a \to \infty} \pi\left(1 - \frac{1}{a}\right) = \pi.
The surface area formula above gives a lower bound for the area as 2π times the natural logarithm of a. There is no upper bound for the natural logarithm of a as a approaches infinity. That means, in this case, that the horn has an infinite surface area. That is to say,

\lim_{a \to \infty} A \ge \lim_{a \to \infty} 2\pi \ln a = \infty.
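A short numerical check of these two limits (a hedged sketch using only the formulas above and a crude midpoint rule; the step count is arbitrary) can be written in Python as follows.

```python
# Hedged illustrative sketch: numerical check that the truncated horn's volume
# approaches pi while its surface area keeps growing roughly like 2*pi*ln(a).
# The surface-area integral is estimated with a crude midpoint rule; the step
# count is arbitrary.
import math

def volume(a):
    return math.pi * (1 - 1 / a)            # closed form of the volume integral

def surface_area(a, steps=200_000):
    h = (a - 1) / steps
    total = 0.0
    for i in range(steps):
        x = 1 + (i + 0.5) * h
        total += (1 / x) * math.sqrt(1 + 1 / x**4) * h
    return 2 * math.pi * total

for a in (10.0, 100.0, 1000.0):
    print(f"a={a:6.0f}:  V={volume(a):.4f}  A~{surface_area(a):8.2f}"
          f"  2*pi*ln(a)={2 * math.pi * math.log(a):8.2f}")
```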
In De solido hyperbolico acuto
Torricelli's original non-calculus proof used an object, slightly different to the aforegiven, that was constructed by truncating the acute hyperbolic solid with a plane perpendicular to the x axis and extending it from the opposite side of that plane with a cylinder of the same base.
Whereas the calculus method proceeds by setting the plane of truncation at x = 1 and integrating along the x axis, Torricelli proceeded by calculating the volume of this compound solid (with the added cylinder) by summing the surface areas of a series of concentric right cylinders within it along the y axis and showing that this was equivalent to summing areas within another solid whose (finite) volume was known.
In modern terminology this solid was created by constructing a surface of revolution of the function (for strictly positive b)

y = b for 0 ≤ x ≤ 1/b, and y = 1/x for x ≥ 1/b,

and Torricelli's theorem was that its volume is the same as the volume of the right cylinder with height b and radius √2, that is, 2πb.
Torricelli showed that the volume of the solid could be derived from the surface areas of this series of concentric right cylinders whose radii were r and heights (lengths along the axis) 1/r.
Substituting in the formula for the surface areas of (just the sides of) these cylinders yields a constant surface area for all cylinders of 2πr · (1/r) = 2π.
This is also the area of a circle of radius √2, and the nested surfaces of the cylinders (filling the volume of the solid) are thus equivalent to the stacked areas of circles of radius √2 stacked from 0 to b, and hence the volume of the aforementioned right cylinder, which is known to be π(√2)² b = 2πb.
(The volume of the added cylinder is of course πb² · (1/b) = πb, and thus the volume of the truncated acute hyperbolic solid alone is πb. If b = 1, as in the modern calculus derivation, the volume of the horn alone is π.)
In the Opera geometrica this is one of two proofs of the volume of the (truncated) acute hyperbolic solid.
The use of Cavalieri's indivisibles in this proof was controversial at the time and the result shocking (Torricelli later recording that Gilles de Roberval had attempted to disprove it); so when the Opera geometrica was published, the year after De solido hyperbolico acuto, Torricelli also supplied a second proof based upon orthodox Archimedean principles showing that the right cylinder (height b, radius √2) was both an upper and a lower bound for the volume.
Ironically, this was an echo of Archimedes' own caution in supplying two proofs, mechanical and geometrical, in his Quadrature of the Parabola to Dositheus.
Apparent paradox
When the properties of Gabriel's horn were discovered, the fact that the rotation of an infinitely large section of the plane about the axis generates an object of finite volume was considered a paradox. While the section lying in the plane has an infinite area, any other section parallel to it has a finite area. Thus the volume, being calculated from the "weighted sum" of sections, is finite.
Another approach is to treat the solid as a stack of disks with diminishing radii. The sum of the radii produces a harmonic series that goes to infinity. However, the correct calculation is the sum of their squares. Every disk has a radius r = 1/x and an area πr², or π/x². The series ∑ 1/x diverges, but the series ∑ 1/x² converges. In general, for any real ε > 0, the series ∑ 1/x^(1+ε) converges. (See Particular values of the Riemann zeta function for more detail on this result.)
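The contrast between the two series can be seen numerically in a short Python sketch; the cut-off values of n are arbitrary.

```python
# Hedged illustrative sketch: partial sums of the disk radii (harmonic series)
# keep growing, while partial sums of the disk areas pi/x^2 level off. The
# cut-offs for n are arbitrary.
import math

for n in (10, 1_000, 100_000):
    radii_sum = sum(1 / x for x in range(1, n + 1))
    area_sum = sum(math.pi / x**2 for x in range(1, n + 1))
    print(f"n={n:>7}: sum of radii = {radii_sum:8.3f}   sum of disk areas = {area_sum:.5f}")
```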
The apparent paradox formed part of a dispute over the nature of infinity involving many of the key thinkers of the time, including Thomas Hobbes, John Wallis, and Galileo Galilei.
There is a similar phenomenon that applies to lengths and areas in the plane. The area between the curves 1/x² and −1/x² from 1 to infinity is finite, but the lengths of the two curves are clearly infinite.
In lecture 16 of his 1666 lectures, Isaac Barrow held that Torricelli's theorem had constrained Aristotle's general dictum (from De Caelo book 1, part 6) that "there is no proportion between the finite and the infinite". Aristotle had himself, strictly speaking, been making a case for the impossibility of the physical existence of an infinite body rather than a case for its impossibility as a geometrical abstract.
Barrow had been adopting the contemporary 17th-century view that Aristotle's dictum and other geometrical axioms were (as he had said in lecture 7) from "some higher and universal science", underpinning both mathematics and physics.
Thus Torricelli's demonstration of an object with a relation between a finite (volume) and an infinite (area) contradicted this dictum, at least in part.
Barrow's explanation was that Aristotle's dictum still held, but only in a more limited fashion when comparing things of the same type, length with length, area with area, volume with volume, and so forth.
It did not hold when comparing things of two different genera (area with volume, for example) and thus an infinite area could be connected to a finite volume.
Others used Torricelli's theorem to bolster their own philosophical claims, unrelated to mathematics from a modern viewpoint.
Ignace-Gaston Pardies in 1671 used the acute hyperbolic solid to argue that finite humans could comprehend the infinite, and proceeded to offer it as proof of the existences of God and immaterial souls.
Since finite matter could not comprehend the infinite, Pardies argued, the fact that humans could comprehend this proof showed that humans must be more than matter, and have immaterial souls.
In contrast, Antoine Arnauld argued that because humans perceived a paradox here, human thought was limited in what it could comprehend, and thus is not up to the task of disproving divine, religious, truths.
Hobbes' and Wallis' dispute was actually within the realm of mathematics: Wallis enthusiastically embracing the new concepts of infinity and indivisibles, proceeding to make further conclusions based upon Torricelli's work and to extend it to employ arithmetic rather than Torricelli's geometric arguments; and Hobbes claiming that since mathematics is derived from real world perceptions of finite things, "infinite" in mathematics can only mean "indefinite".
These led to strongly worded letters by each to the Royal Society and in Philosophical Transactions, Hobbes resorting to namecalling Wallis "mad" at one point.
In 1672 Hobbes tried to re-cast Torricelli's theorem as about a finite solid that was extended indefinitely, in an attempt to hold on to his contention that "natural light" (i.e. common sense) told us that an infinitely long thing must have an infinite volume.
This aligned with Hobbes' other assertions that the use of the idea of a zero-width line in geometry was erroneous, and that Cavalieri's idea of indivisibles was ill-founded.
Wallis argued that there existed geometrical shapes with finite area/volume but no centre of gravity based upon Torricelli, stating that understanding this required more of a command of geometry and logic "than M. Hobs is Master of".
He also restructured the arguments in arithmetical terms as the sums of arithmetic progressions, sequences of arithmetic infinitesimals rather than sequences of geometric indivisibles.
Oresme had already demonstrated that an infinitely long shape can have a finite area where, as one dimension tends towards infinitely large, another dimension tends towards infinitely small.
In Barrow's own words "the infinite diminution of one dimension compensates for the infinite increase of the other", in the case of the acute hyperbolic solid by the equation of the Apollonian hyperbola xy = 1.
Painter's paradox
Since the horn has finite volume but infinite surface area, there is an apparent paradox that the horn could be filled with a finite quantity of paint and yet that paint would not be sufficient to coat its surface.
However, this paradox is again only an apparent paradox caused by an incomplete definition of "paint", or by using contradictory definitions of paint for the actions of filling and painting.
One could be postulating a "mathematical" paint that is infinitely divisible (or can be infinitely thinned, or simply zero-width like the zero-width geometric lines that Hobbes took issue with) and capable of travelling at infinite speed, or a "physical" paint with the properties of paint in the real world.
With either one, the apparent paradox vanishes:
With "mathematical" paint, it does not follow in the first place that an infinite surface area requires an infinite volume of paint, as infinite surface area times zero-thickness paint is indeterminate.
With physical paint, painting the outside of the solid would require an infinite amount of paint because physical paint has a non-zero thickness. Torricelli's theorem does not talk about a layer of finite width on the outside of the solid, which in fact would have infinite volume. Thus there is no contradiction between infinite volume of paint and infinite surface area to cover. It is also impossible to paint the interior of the solid, the finite volume of Torricelli's theorem, with physical paint, so no contradiction exists. This is because physical paint can only fill an approximation of the volume of the solid. The molecules do not completely tile 3-dimensional space and leave gaps, and there is a point where the "throat" of the solid becomes too narrow for paint molecules to flow down.
Physical paint travels at a bounded speed and would take an infinite amount of time to flow down. This also applies to "mathematical" paint of zero thickness if one does not additionally postulate it flowing at infinite speed.
Other different postulates of "mathematical" paint, such as infinite-speed paint that gets thinner at a fast enough rate, remove the paradox too. For volume of paint, as the surface area to be covered tends towards infinity, the thickness of the paint tends towards zero. Like with the solid itself, the infinite increase of the surface area to be painted in one dimension is compensated by the infinite decrease in another dimension, the thickness of the paint.
Converse
The converse of Torricelli's acute hyperbolic solid is a surface of revolution that has a finite surface area but an infinite volume.
In response to Torricelli's theorem, after learning of it from Marin Mersenne, Christiaan Huygens and René-François de Sluse wrote letters to each other about extending the theorem to other infinitely long solids of revolution; which have been mistakenly identified as finding such a converse.
Jan A. van Maanen, professor of mathematics at the University of Utrecht, reported in the 1990s that he once mis-stated in a conference at Kristiansand that de Sluse wrote to Huygens in 1658 that he had found such a shape, only to be told in response (by Tony Gardiner and Man-Keung Siu of the University of Hong Kong) that any surface of rotation with a finite surface area would of necessity have a finite volume.
Professor van Maanen realized that this was a misinterpretation of de Sluse's letter, and that what de Sluse was actually reporting was that the solid "goblet" shape, formed by rotating the cissoid of Diocles and its asymptote about the y axis, had a finite volume (and hence "small weight") and enclosed a cavity of infinite volume.
Huygens first showed that the area of the rotated two-dimensional shape (between the cissoid and its asymptote) was finite, calculating its area to be 3 times the area of the generating circle of the cissoid, and de Sluse applied Pappus's centroid theorem to show that the solid of revolution thus has finite volume, being a product of that finite area and the finite orbit of rotation.
The area being rotated is finite; de Sluse did not actually say anything about the surface area of the resultant rotated volume.
Such a converse cannot occur (assuming Euclidean geometry) when revolving a continuous function on a closed set.
Theorem
Let $f : [1, \infty) \to [0, \infty)$ be a continuously differentiable function. Write $S$ for the solid of revolution of the graph $y = f(x)$ about the $x$-axis. If the surface area of $S$ is finite, then so is the volume.
Proof
Since the lateral surface area $A$ is finite, the limit superior satisfies:
$$\limsup_{t \to \infty} \left( f(t)^2 - f(1)^2 \right) = \limsup_{t \to \infty} \int_1^t \left( f(x)^2 \right)' \, dx \le \int_1^\infty \left| \left( f(x)^2 \right)' \right| \, dx = \int_1^\infty 2 f(x) \, |f'(x)| \, dx \le \int_1^\infty 2 f(x) \sqrt{1 + f'(x)^2} \, dx = \frac{A}{\pi} < \infty.$$
Therefore, there exists a $t_0$ such that the supremum $\sup\{ f(t) : t \ge t_0 \}$ is finite. Hence, $M = \sup\{ f(t) : t \ge 1 \}$ must be finite, since $f$ is a continuous function, which implies that $f$ is bounded on the interval $[1, \infty)$.
Finally, the volume:
$$V = \int_1^\infty \pi f(x)^2 \, dx \le \int_1^\infty \pi M f(x) \, dx = \frac{M}{2} \int_1^\infty 2 \pi f(x) \, dx \le \frac{M}{2} \int_1^\infty 2 \pi f(x) \sqrt{1 + f'(x)^2} \, dx = \frac{M}{2} \, A.$$
Therefore: if the area $A$ is finite, then the volume $V$ must also be finite.
See also
References
Reference bibliography
Further reading
External links
Torricelli's trumpet at PlanetMath
"Gabriel's Horn" by John Snyder, the Wolfram Demonstrations Project, 2007.
Gabriel's Horn: An Understanding of a Solid with Finite Volume and Infinite Surface Area by Jean S. Joseph.
Mathematical paradoxes
Paradoxes of infinity
Calculus
Horn
Surfaces
Eponyms in geometry
Last Judgment | Gabriel's horn | [
"Mathematics"
] | 3,157 | [
"Eponyms in geometry",
"Calculus",
"Paradoxes of infinity",
"Mathematical objects",
"Infinity",
"Mathematical paradoxes",
"Geometry",
"Mathematical problems"
] |
332,264 | https://en.wikipedia.org/wiki/Computable%20set | In computability theory, a set of natural numbers is called computable, recursive, or decidable if there is an algorithm which takes a number as input, terminates after a finite amount of time (possibly depending on the given number) and correctly decides whether the number belongs to the set or not.
A set which is not computable is called noncomputable or undecidable.
A more general class of sets than the computable ones consists of the computably enumerable (c.e.) sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set.
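As an illustration only (hypothetical helper names; every non-empty computably enumerable set can be presented as the range of some total computable function), a semi-decision procedure can simply search through that range:

```python
def semi_decide_in_range(f, y: int) -> bool:
    """Halts and returns True exactly when y is in the range of the total function f.

    If y is not in the range, the loop runs forever: the procedure never
    gives a wrong answer, which is all that a semidecidable set requires.
    """
    n = 0
    while True:
        if f(n) == y:
            return True
        n += 1

# Example: the set of perfect squares is c.e. via f(n) = n * n.
print(semi_decide_in_range(lambda n: n * n, 49))  # True (halts); 48 would loop forever
```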
Formal definition
A subset $S$ of the natural numbers is called computable if there exists a total computable function $f$ such that $f(x) = 1$ if $x \in S$ and $f(x) = 0$ if $x \notin S$. In other words, the set $S$ is computable if and only if the indicator function $1_S$ is computable.
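For illustration (hypothetical code, using the prime numbers mentioned among the examples below), a total function that always halts and returns 1 or 0 plays exactly the role of the indicator function in this definition:

```python
def prime_indicator(n: int) -> int:
    """Total computable indicator function for the set of prime numbers.

    Returns 1 if n is prime and 0 otherwise, and halts on every input,
    which is what makes the set of primes computable.
    """
    if n < 2:
        return 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            return 0
        d += 1
    return 1

print([prime_indicator(n) for n in range(10)])  # [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]
```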
Examples and non-examples
Examples:
Every finite or cofinite subset of the natural numbers is computable. This includes these special cases:
The empty set is computable.
The entire set of natural numbers is computable.
Each natural number (as defined in standard set theory) is computable; that is, the set of natural numbers less than a given natural number is computable.
The subset of prime numbers is computable.
A recursive language is a computable subset of a formal language.
The set of Gödel numbers of arithmetic proofs described in Kurt Gödel's paper "On formally undecidable propositions of Principia Mathematica and related systems I" is computable; see Gödel's incompleteness theorems.
Non-examples:
The set of Turing machines that halt is not computable.
The isomorphism class of two finite simplicial complexes is not computable.
The set of busy beaver champions is not computable.
Hilbert's tenth problem is not computable.
Properties
If A is a computable set then the complement of A is a computable set. If A and B are computable sets then A ∩ B, A ∪ B and the image of A × B under the Cantor pairing function are computable sets.
A is a computable set if and only if A and the complement of A are both computably enumerable (c.e.). The preimage of a computable set under a total computable function is a computable set. The image of a computable set under a total computable bijection is computable. (In general, the image of a computable set under a computable function is c.e., but possibly not computable).
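A minimal sketch (hypothetical helper names) of why the closure properties above hold: given total indicator functions for A and B, indicator functions for the complement, intersection, and union are obtained by composition and remain total.

```python
def complement(ind_a):
    """Indicator of the complement of A: flip 0 and 1; still halts on every input."""
    return lambda n: 1 - ind_a(n)

def intersection(ind_a, ind_b):
    """Indicator of A intersected with B: both indicators must return 1."""
    return lambda n: ind_a(n) * ind_b(n)

def union(ind_a, ind_b):
    """Indicator of A united with B: at least one indicator returns 1."""
    return lambda n: max(ind_a(n), ind_b(n))

# Example: even numbers that are also multiples of 3, i.e. multiples of 6.
evens = lambda n: 1 if n % 2 == 0 else 0
threes = lambda n: 1 if n % 3 == 0 else 0
print([intersection(evens, threes)(n) for n in range(13)])
# [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
```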
A is a computable set if and only if it is at level $\Delta^0_1$ of the arithmetical hierarchy.
A is a computable set if and only if it is either the range of a nondecreasing total computable function, or the empty set. The image of a computable set under a nondecreasing total computable function is computable.
See also
Decidability (logic)
Recursively enumerable language
Recursive language
Recursion
References
Cutland, N. Computability. Cambridge University Press, Cambridge-New York, 1980.
Rogers, H. The Theory of Recursive Functions and Effective Computability, MIT Press.
Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987.
External links
Computability theory
Theory of computation | Computable set | [
"Mathematics"
] | 788 | [
"Computability theory",
"Mathematical logic"
] |
332,307 | https://en.wikipedia.org/wiki/Sociable%20number | In mathematics, sociable numbers are numbers whose aliquot sums form a periodic sequence. They are generalizations of the concepts of perfect numbers and amicable numbers. The first two sociable sequences, or sociable chains, were discovered and named by the Belgian mathematician Paul Poulet in 1918. In a sociable sequence, each number is the sum of the proper divisors of the preceding number, i.e., the sum excludes the preceding number itself. For the sequence to be sociable, the sequence must be cyclic and return to its starting point.
The period of the sequence, or order of the set of sociable numbers, is the number of numbers in this cycle.
If the period of the sequence is 1, the number is a sociable number of order 1, or a perfect number—for example, the proper divisors of 6 are 1, 2, and 3, whose sum is again 6. A pair of amicable numbers is a set of sociable numbers of order 2. There are no known sociable numbers of order 3; as of 1970, searches for them up to large bounds had found none.
It is an open question whether all numbers end up at either a sociable number or at a prime (and hence 1), or, equivalently, whether there exist numbers whose aliquot sequence never terminates, and hence grows without bound.
Example
As an example, the number 1,264,460 is a sociable number whose cyclic aliquot sequence has a period of 4:
The sum of the proper divisors of 1,264,460 (= 2^2 · 5 · 17 · 3719) is
1 + 2 + 4 + 5 + 10 + 17 + 20 + 34 + 68 + 85 + 170 + 340 + 3719 + 7438 + 14876 + 18595 + 37190 + 63223 + 74380 + 126446 + 252892 + 316115 + 632230 = 1547860,
the sum of the proper divisors of 1,547,860 (= 2^2 · 5 · 193 · 401) is
1 + 2 + 4 + 5 + 10 + 20 + 193 + 386 + 401 + 772 + 802 + 965 + 1604 + 1930 + 2005 + 3860 + 4010 + 8020 + 77393 + 154786 + 309572 + 386965 + 773930 = 1727636,
the sum of the proper divisors of 1,727,636 (= 2^2 · 521 · 829) is
1 + 2 + 4 + 521 + 829 + 1042 + 1658 + 2084 + 3316 + 431909 + 863818 = 1305184, and
the sum of the proper divisors of 1,305,184 (= 2^5 · 40787) is
1 + 2 + 4 + 8 + 16 + 32 + 40787 + 81574 + 163148 + 326296 + 652592 = 1264460.
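The cycle above can be reproduced with a short script (illustrative code only) that iterates the aliquot sum, i.e. the sum of proper divisors:

```python
def aliquot_sum(n: int) -> int:
    """Sum of the proper divisors of n (for n > 1), found by trial division."""
    total = 1  # 1 is a proper divisor of every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

n = 1264460
cycle = [n]
while True:
    n = aliquot_sum(n)
    if n == cycle[0]:
        break
    cycle.append(n)
print(cycle)  # [1264460, 1547860, 1727636, 1305184] -- period 4
```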
List of known sociable numbers
The following categorizes all known sociable numbers by the length of the corresponding aliquot sequence:
It is conjectured that if n is congruent to 3 modulo 4 then there is no such sequence with length n.
The 5-cycle sequence is: 12496, 14288, 15472, 14536, 14264
The only known 28-cycle is: 14316, 19116, 31704, 47616, 83328, 177792, 295488, 629072, 589786, 294896, 358336, 418904, 366556, 274924, 275444, 243760, 376736, 381028, 285778, 152990, 122410, 97946, 48976, 45946, 22976, 22744, 19916, 17716 .
These two sequences provide the only sociable numbers below 1 million (other than the perfect and amicable numbers).
Searching for sociable numbers
The aliquot sequence can be represented as a directed graph, $G_{n,s}$, for a given integer $n$, where $s(k)$ denotes the sum of the proper divisors of $k$.
Cycles in $G_{n,s}$ represent sociable numbers within the interval $[1, n]$. Two special cases are loops that represent perfect numbers and cycles of length two that represent amicable pairs.
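A brute-force sketch (hypothetical code; far slower than the methods used in actual searches) of this graph-based view: tabulate $s(k)$ for $k \le n$ and walk each trajectory until it either leaves the interval or closes a cycle.

```python
def aliquot_sum(n: int) -> int:
    """Sum of proper divisors of n; simple O(n) version, only for small limits."""
    return sum(d for d in range(1, n) if n % d == 0)

def sociable_cycles(limit: int) -> list:
    """Return every aliquot cycle all of whose members are <= limit.

    Loops are perfect numbers, 2-cycles are amicable pairs, and longer
    cycles are sociable numbers of higher order.
    """
    s = {k: aliquot_sum(k) for k in range(2, limit + 1)}  # edges of the graph
    cycles, seen = [], set()
    for start in range(2, limit + 1):
        path, k = [], start
        while k not in seen and 2 <= k <= limit:
            seen.add(k)
            path.append(k)
            k = s[k]
        if k in path:  # the walk returned to this path: a cycle
            cycles.append(path[path.index(k):])
    return cycles

print(sociable_cycles(300))
# contains the perfect numbers [6] and [28] and the amicable pair 220/284
```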
Conjecture of the sum of sociable number cycles
It is conjectured that as the number of sociable number cycles with length greater than 2 approaches infinity, the proportion of the sums of the sociable number cycles divisible by 10 approaches 1.
References
H. Cohen, On amicable and sociable numbers, Math. Comp. 24 (1970), pp. 423–429
External links
A list of known sociable numbers
Extensive tables of perfect, amicable and sociable numbers
A003416 (smallest sociable number from each cycle) and A122726 (all sociable numbers) in OEIS
Arithmetic dynamics
Divisor function
Integer sequences
Number theory | Sociable number | [
"Mathematics"
] | 997 | [
"Sequences and series",
"Discrete mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Arithmetic dynamics",
"Combinatorics",
"Numbers",
"Number theory",
"Dynamical systems"
] |
332,326 | https://en.wikipedia.org/wiki/Chlordecone | Chlordecone, better known in the United States under the brand name Kepone, is an organochlorine compound and a colourless solid. It is an obsolete insecticide, now prohibited in the western world, but only after many thousands of tonnes had been produced and used. Chlordecone is a known persistent organic pollutant that was banned globally by the Stockholm Convention on Persistent Organic Pollutants in 2009.
Synthesis
Chlordecone is made by dimerizing hexachlorocyclopentadiene and hydrolyzing to a ketone.
It is also the main degradation product of mirex.
History
In the U.S., chlordecone, commercialized under the brand name "Kepone", was produced by Allied Signal Company and LifeSciences Product Company in Hopewell, Virginia. The improper handling and dumping of the substance (including the waste materials generated in its manufacturing process) into the nearby James River (U.S.) in the 1960s and 1970s drew national attention to its toxic effects on humans and wildlife. After two physicians, Dr. Yi-nan Chou and Dr. Robert S. Jackson of the Virginia Health Department, notified the Centers for Disease Control that employees of the company had been found to have toxic chemical poisoning, LifeSciences voluntarily closed its plant on 4 July 1975. Cleanup of the contamination began, and a 100-mile section of the James River was closed to fishing while state health officials looked for other persons who might have been injured. At least 29 people in the area were hospitalized as a result of their exposure to Kepone.
The product is made in a Diels-Alder reaction shared with pesticides like chlordane and endosulfan. Chlordecone is cited amongst a handful of other noxious substances as the driver for Gerald Ford's half-hearted approval in 1976 of the Toxic Substances Control Act, which "remains one of the most controversial regulatory bills ever passed".
Regulation
In the US, chlordecone was not federally regulated until after the Hopewell disaster, in which 29 factory workers were hospitalized with various ailments, including neurological symptoms.
In France it was banned on the mainland only, in 1993.
In 2009, chlordecone was included in the Stockholm Convention on Persistent Organic Pollutants, which bans its production and use worldwide.
On 14 March 2024, the French National Assembly assumed responsibility for the chlordecone contamination affecting populations in Martinique and Guadeloupe.
Toxicology
Chlordecone can accumulate in the liver, and its distribution in the human body is regulated by binding of the pollutant or its metabolites to lipoproteins such as LDL and HDL. The LC50 (LC = lethal concentration) is 35 μg/L for Etroplus maculatus and 22–95 μg/kg for bluegill and trout. Chlordecone bioaccumulates in animals by factors of up to a million-fold.
Workers with repeated exposure suffer severe convulsions resulting from degradation of the synaptic junctions.
Chronic low level exposure appears to cause prostate cancer in men, and "significant excesses of deaths were observed for stomach cancer in women and pancreatic cancer in women".
Chlordecone has been found to act as an agonist of the GPER (GPR30), which interacts strongly with the estrogen sex hormone estradiol.
Incidents
The history of chlordecone incidents are reviewed in Who's Poisoning America?: Corporate Polluters and Their Victims in the Chemical Age (1982).
James River estuary
In July 1975, Virginia Governor Mills Godwin Jr. shut down the James River to fishing for 100 miles, from Richmond to the Chesapeake Bay. This ban remained in effect for 13 years, until efforts to clean up the river began to show results.
Due to the pollution risks, many fishermen, marinas, seafood businesses, and restaurants, along with their employees along the river suffered economic losses. In 1981, a large group of these entities sued Allied Chemical in federal district court (Eastern District of Virginia), claiming special economic damages from Allied's negligent damage to the fish and wildlife. In a case that sometimes appears in law school courses on Remedies, the court rejected the traditional "economic-loss rule", which requires physical impact causing personal injury or property damage to receive economic damages, and instead allowed a limited group of the plaintiffs—the fishing boat owners, the marinas, and the bait and tackle shops—to recover economic damages from Allied Chemical.
French Antilles
The French islands of Martinique and Guadeloupe are heavily contaminated with chlordecone, following years of its massive and unrestricted use on banana plantations. Despite a 1990 ban on the substance in mainland France, the economically powerful banana planters lobbied intensively to obtain a waiver to keep using Kepone until 1993. They argued that no alternative pesticide was available, which has since been disputed. After the 1993 ban, the banana planters were discreetly granted derogations to use their remaining stocks, and a 2005 report prepared by the French National Assembly states that after the 1993 ban was imposed, the chemical was illegally imported to the islands under the name Curlone, and continued to be used for many years. Since 2003, local authorities in the two islands have restricted the cultivation of various food crops because the soil is badly contaminated by chlordecone. A 2018 large-scale study by the French public health agency, Santé publique France, shows that 95% of the inhabitants of Guadeloupe and 92% of those of Martinique are contaminated by the chemical. Guadeloupe has one of the highest prostate cancer diagnosis rates in the world.
In popular culture
Kepone was the name of an American indie rock band from Richmond, Virginia formed in 1991.
The Dead Kennedys recorded a song named "Kepone Factory", a satire of the controversy surrounding Allied Signal and their negligence regarding employee safety, for their 1981 album In God We Trust, Inc..
References
External links
Terradaily: Pesticide blamed for 'health disaster' in French Caribbean
EPA releases a Toxicological Review of Kepone (External Review Draft) for public comment – 01/2008
CDC – NIOSH Pocket Guide to Chemical Hazards
Obsolete pesticides
Carcinogens
GPER agonists
Ketones
Organochloride insecticides
James River (Virginia)
Endocrine disruptors
IARC Group 2B carcinogens
Persistent organic pollutants under the Stockholm Convention
Male reproductive toxicants
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
Xenoestrogens
Cyclobutanes
1975 disasters in the United States
1975 in the environment
Neurotoxins
Presidency of Gerald Ford | Chlordecone | [
"Chemistry",
"Environmental_science"
] | 1,383 | [
"Toxicology",
"Male reproductive toxicants",
"Persistent organic pollutants under the Stockholm Convention",
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Endocrine disruptors",
"Ketones",
"Functional groups",
"Carcinogens",
"Neurochemistry",
"Neu... |
332,341 | https://en.wikipedia.org/wiki/Industrial%20data%20processing | Industrial data processing is a branch of applied computer science that covers the area of design and programming of computerized systems which are not computers as such — often referred to as embedded systems (PLCs, automated systems, intelligent instruments, etc.). The products concerned contain at least one microprocessor or microcontroller, as well as couplers (for I/O).
Another current definition of industrial data processing is that it concerns those computer programs whose variables in some way represent physical quantities; for example the temperature and pressure of a tank, the position of a robot arm, etc.
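A toy sketch (hypothetical names, not from the source) of this idea: program variables stand directly for physical quantities measured and acted on by the embedded system.

```python
# Hypothetical tank controller: each variable mirrors a physical quantity.
tank_temperature_c = 72.5      # degrees C, read from a temperature sensor
tank_pressure_kpa = 101.3      # kPa, read from a pressure transducer
heater_on = False              # actuator state driven by the program

TEMPERATURE_SETPOINT_C = 80.0

def control_step(temperature_c: float) -> bool:
    """One pass of a simple on/off (bang-bang) control loop."""
    return temperature_c < TEMPERATURE_SETPOINT_C

heater_on = control_step(tank_temperature_c)  # True here: below setpoint, so heat
```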
Computer engineering | Industrial data processing | [
"Technology",
"Engineering"
] | 124 | [
"Computer engineering",
"Computer science stubs",
"Computer science",
"Computing stubs",
"Electrical engineering"
] |
332,372 | https://en.wikipedia.org/wiki/Rate%20%28mathematics%29 | In mathematics, a rate is the quotient of two quantities, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable. In some cases, it may be regarded as a change to a value, which is caused by a change of a value in respect to another value. For example, acceleration is a change in velocity with respect to time
Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux.
In fact, often rate is a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates.
In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute".
Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter).
A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as ratio or simply as a rate (such as tax rates) or counts (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple.
Properties and examples
Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions.
A rate (or ratio) may often be thought of as an output-input ratio, benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity).
A set of sequential indices I may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define I by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices I is so that a set of ratios (i = 0, ..., N) can be used in an equation to calculate a function of the rates, such as an average of a set of ratios; for example, the average velocity found from the set of vI's mentioned above. Finding averages may involve using weighted averages and possibly the harmonic mean.
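For instance (illustrative numbers, not from the source), averaging speeds over legs of equal distance amounts to taking a harmonic-style mean of the individual rates rather than their arithmetic mean:

```python
# Two legs of 60 miles each, driven at 30 mph and then 60 mph.
legs = [(60, 30), (60, 60)]               # (distance in miles, speed in mph)
total_distance = sum(d for d, v in legs)  # 120 miles
total_time = sum(d / v for d, v in legs)  # 2 h + 1 h = 3 h
average_speed = total_distance / total_time
print(average_speed)                      # 40.0 mph, not the arithmetic mean of 45
```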
A ratio r=a/b has both a numerator "a" and a denominator "b". The value of a and b may be a real number or integer. The inverse of a ratio r is 1/r = b/a. A rate may be equivalently expressed as an inverse of its value if the ratio of its units is also inverse. For example, 5 miles (mi) per kilowatt-hour (kWh) corresponds to 1/5 kWh/mi (or 200 Wh/mi).
Rates are relevant to many aspects of everyday life. For example:
How fast are you driving? The speed of the car (often expressed in miles per hour) is a rate. What interest does your savings account pay you? The amount of interest paid per year is a rate.
Rate of change
Consider the case where the numerator $f$ of a rate is a function $f(a)$ where $a$ happens to be the denominator of the rate $\Delta f / \Delta a$. A rate of change of $f(a)$ with respect to $a$ (where $a$ is incremented by $h$) can be formally defined in two ways:
$$\text{Average rate of change} = \frac{f(a + h) - f(a)}{h}, \qquad \text{Instantaneous rate of change} = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h},$$
where $f(x)$ is the function with respect to $x$ over the interval from $a$ to $a + h$. An instantaneous rate of change is equivalent to a derivative.
For example, the average speed of a car can be calculated using the total distance traveled between two points, divided by the travel time. In contrast, the instantaneous velocity can be determined by viewing a speedometer.
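A small numerical illustration (with an assumed example position function, not taken from the source):

```python
def position(t: float) -> float:
    """Assumed example: position in metres after t seconds."""
    return 5 * t ** 2

def average_rate(f, a: float, h: float) -> float:
    """Difference quotient: average rate of change of f over [a, a + h]."""
    return (f(a + h) - f(a)) / h

print(average_rate(position, 2, 1))     # 25.0 m/s over the interval [2 s, 3 s]
print(average_rate(position, 2, 1e-6))  # ~20.0 m/s, approaching the derivative 10*t at t = 2
```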
Temporal rates
In chemistry and physics:
Speed, the rate of change of position, or the change of position per unit of time
Acceleration, the rate of change in speed, or the change in speed per unit of time
Power, the rate of doing work, or the amount of energy transferred per unit time
Frequency, the number of occurrences of a repeating event per unit of time
Angular frequency and rotation speed, the number of turns per unit of time
Reaction rate, the speed at which chemical reactions occur
Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time; e.g., cubic meters per second
Counts-per-time rates
Radioactive decay, the amount of radioactive material in which one nucleus decays per second, measured in becquerels
In computing:
Bit rate, the number of bits that are conveyed or processed by a computer per unit of time
Symbol rate, the number of symbol changes (signaling events) made to the transmission medium per second
Sampling rate, the number of samples (signal measurements) per second
Miscellaneous definitions:
Rate of reinforcement, number of reinforcements per unit of time, usually per minute
Heart rate, usually measured in beats per minute
Economics/finance rates/ratios
Exchange rate, how much one currency is worth in terms of the other
Inflation rate, the ratio of the change in the general price level during a year to the starting price level
Interest rate, the price a borrower pays for the use of the money they do not own (ratio of payment to amount borrowed)
Price–earnings ratio, market price per share of stock divided by annual earnings per share
Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested
Tax rate, the tax amount divided by the taxable income
Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force
Wage rate, the amount paid for working a given amount of time (or doing a standard amount of accomplished work) (ratio of payment to time)
Other rates
Birth rate, and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time
Literacy rate, the proportion of the population over age fifteen that can read and write
Sex ratio or gender ratio, the ratio of males to females in a population
See also
Derivative
Gradient
Hertz
Slope
References
Measurement
Quotients
| Rate (mathematics) | [
"Physics",
"Mathematics"
] | 1,372 | [
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Arithmetic",
"Quotients"
] |
332,389 | https://en.wikipedia.org/wiki/Anastrozole | Anastrozole, sold under the brand name Arimidex among others, is an antiestrogenic medication used in addition to other treatments for breast cancer. Specifically it is used for hormone receptor-positive breast cancer. It has also been used to prevent breast cancer in those at high risk. It is taken by mouth.
Common side effects of anastrozole include hot flashes, altered mood, joint pain, and nausea. Severe side effects include an increased risk of heart disease and osteoporosis. Use during pregnancy may harm the baby. Anastrozole is in the aromatase-inhibiting family of medications. It works by blocking the production of estrogens in the body, and hence has antiestrogenic effects.
Anastrozole was patented in 1987 and was approved for medical use in 1995. It is on the World Health Organization's List of Essential Medicines. Anastrozole is available as a generic medication. In 2022, it was the 179th most commonly prescribed medication in the United States, with more than 2 million prescriptions.
Medical uses
Breast cancer
Anastrozole is used in the treatment and prevention of breast cancer in women. The Arimidex, Tamoxifen, Alone or in Combination (ATAC) trial was of localized breast cancer and women received either anastrozole, the selective estrogen receptor modulator tamoxifen, or both for five years, followed by five years of follow-up. After more than 5 years the group that received anastrozole had better results than the tamoxifen group. The trial suggested that anastrozole is the preferred medical therapy for postmenopausal women with localized estrogen receptor-positive breast cancer.
Early puberty
Anastrozole is used at a dosage of 0.5 to 1 mg/day in combination with the antiandrogen bicalutamide in the treatment of peripheral precocious puberty, for instance due to familial male-limited precocious puberty (testotoxicosis) and McCune–Albright syndrome, in boys.
Available forms
Anastrozole is available in the form of 1 mg oral tablets. No alternative forms or routes are available.
Contraindications
Contraindications of anastrozole include hypersensitivity to anastrozole or any other component of anastrozole formulations, pregnancy, and breastfeeding. Hypersensitivity reactions to anastrozole including anaphylaxis, angioedema, and urticaria have been observed.
Side effects
Common side effects of anastrozole (≥10% incidence) include hot flashes, asthenia, arthritis, pain, arthralgia, hypertension, depression, nausea and vomiting, rash, osteoporosis, bone fractures, back pain, insomnia, headache, bone pain, peripheral edema, coughing, dyspnea, pharyngitis, and lymphedema. Serious but rare adverse effects (<0.1% incidence) include skin reactions such as lesions, ulcers, or blisters; allergic reactions with swelling of the face, lips, tongue, and/or throat that may cause difficulty swallowing or breathing; and abnormal liver function tests as well as hepatitis.
Interactions
Anastrozole is thought to have clinically negligible inhibitory effects on the cytochrome P450 enzymes CYP1A2, CYP2A6, CYP2D6, CYP2C8, CYP2C9, and CYP2C19. As a result, it is thought that drug interactions of anastrozole with cytochrome P450 substrates are unlikely. No clinically significant drug interactions have been reported with anastrozole as of 2003.
Anastrozole does not affect circulating levels of tamoxifen or its major metabolite N-desmethyltamoxifen. However, tamoxifen has been found to decrease steady-state area-under-the-curve levels of anastrozole by 27%. But estradiol levels were not significantly different in the group that received both anastrozole and tamoxifen compared to the anastrozole alone group, so the decrease in anastrozole levels is not thought to be clinically important.
Pharmacology
Pharmacodynamics
Anastrozole works by reversibly binding to the aromatase enzyme, and through competitive inhibition blocks the conversion of androgens to estrogens in peripheral (extragonadal) tissues. The medication has been found to achieve 96.7% to 97.3% inhibition of aromatase at a dosage of 1 mg/day and 98.1% inhibition of aromatase at a dosage of 10 mg/day in humans. As such, 1 mg/day is considered to be the minimal dosage required to achieve maximal suppression of aromatase with anastrozole. This decrease in aromatase activity results in an at least 85% decrease in estradiol levels in postmenopausal women. Levels of corticosteroids and other adrenal steroids are unaffected by anastrozole.
Pharmacokinetics
The bioavailability of anastrozole in humans is unknown, but it was found to be well-absorbed in animals. Absorption of anastrozole is linear over a dosage range of 1 to 20 mg/day in humans and does not change with repeated administration. Food does not significantly influence the extent of absorption of anastrozole. Peak levels of anastrozole occur a median 3 hours after administration, but with a wide range of 2 to 12 hours. Steady-state levels of anastrozole are achieved within 7 to 10 days of continuous administration, with 3.5-fold accumulation. However, maximal suppression of estradiol levels occurs within 3 or 4 days of therapy.
Active efflux of anastrozole by P-glycoprotein at the blood–brain barrier has been found to limit the central nervous system penetration of anastrozole in rodents, whereas this was not the case with letrozole and vorozole. As such, anastrozole may have peripheral selectivity in humans, although this has yet to be confirmed. In any case, estradiol is synthesized peripherally and readily crosses the blood–brain barrier, so anastrozole would still expected to reduce estradiol levels in the central nervous system to a certain degree. The plasma protein binding of anastrozole is 40%.
The metabolism of anastrozole is by N-dealkylation, hydroxylation, and glucuronidation. Inhibition of aromatase is due to anastrozole itself rather than to metabolites, with the major circulating metabolite being inactive. The elimination half-life of anastrozole is 40 to 50 hours (1.7 to 2.1 days). This allows for convenient once-daily administration. The medication is eliminated predominantly by metabolism in the liver (83 to 85%) but also by residual excretion by the kidneys unchanged (11%). Anastrozole is excreted primarily in urine but also to a lesser extent in feces.
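As a rough consistency check (a standard pharmacokinetic approximation, not stated in the source), once-daily dosing ($\tau$ = 24 h) of a drug with a 40- to 50-hour half-life gives an accumulation ratio
$$R = \frac{1}{1 - 2^{-\tau / t_{1/2}}} \approx 2.9 \ (\text{for } t_{1/2} = 40\text{ h}) \quad\text{to}\quad 3.5 \ (\text{for } t_{1/2} = 50\text{ h}),$$
broadly consistent with the roughly 3.5-fold accumulation noted above.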
Chemistry
Anastrozole is a nonsteroidal benzyl triazole. It is also known as α,α,α',α'-tetramethyl-5-(1H-1,2,4-triazol-1-ylmethyl)-m-benzenediacetonitrile. Anastrozole is structurally related to letrozole, fadrozole, and vorozole, with all being classified as azoles.
History
Anastrozole was patented by Imperial Chemical Industries (ICI) in 1987 and was approved for medical use, specifically the treatment of breast cancer, in 1995.
Society and culture
Generic names
Anastrozole is the generic name of the drug and also its international nonproprietary name (INN).
Brand names
Anastrozole is primarily sold under the brand name Arimidex. However, it is also marketed under a variety of other brand names throughout the world.
Availability
Anastrozole is available widely throughout the world.
Research
Anastrozole is surprisingly ineffective at treating gynecomastia, in contrast to selective estrogen receptor modulators like tamoxifen.
Anastrozole was under development for the treatment of female infertility but did not complete development and hence was never approved for this indication.
An anastrozole and levonorgestrel vaginal ring (developmental code name BAY 98–7196) was under development for use as a hormonal contraceptive and treatment for endometriosis, but development was discontinued in November 2018 and the formulation was never marketed.
Anastrozole increases testosterone levels in males and has been studied as an alternative method of androgen replacement therapy in men with hypogonadism. However, there are concerns about its long-term influence on bone mineral density in this patient population, as well as other adverse effects.
References
27-Hydroxylase inhibitors
Aromatase inhibitors
Drugs developed by AstraZeneca
Hormonal antineoplastic drugs
Nitriles
Peripherally selective drugs
Triazoles
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Anastrozole | [
"Chemistry"
] | 1,995 | [
"Nitriles",
"Functional groups"
] |
332,410 | https://en.wikipedia.org/wiki/Preamplifier | A preamplifier, also known as a preamp, is an electronic amplifier that converts a weak electrical signal into an output signal strong enough to be noise-tolerant and strong enough for further processing, or for sending to a power amplifier and a loudspeaker. Without this, the final signal would be noisy or distorted. They are typically used to amplify signals from analog sensors such as microphones and pickups. Because of this, the preamplifier is often placed close to the sensor to reduce the effects of noise and interference.
Description
An ideal preamp will be linear (have a constant gain through its operating range), have high input impedance (requiring only a minimal amount of current to sense the input signal) and a low output impedance (when current is drawn from the output there is minimal change in the output voltage). It is used to boost the signal strength to drive the cable to the main instrument without significantly degrading the signal-to-noise ratio (SNR). The noise performance of a preamplifier is critical. According to Friis's formula, when the gain of the preamplifier is high, the SNR of the final signal is determined by the SNR of the input signal and the noise figure of the preamplifier.
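In symbols, the cascade form of Friis's noise formula (standard notation assumed here) for stages with noise factors $F_i$ and power gains $G_i$ is
$$F_{\text{total}} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \cdots,$$
so when the preamplifier gain $G_1$ is large, the later terms become negligible and the overall noise figure, and hence the degradation of the input SNR, is set almost entirely by the preamplifier's own noise figure $F_1$.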
Three basic types of preamplifiers are available:
current-sensitive preamplifier
parasitic-capacitance preamplifier
charge-sensitive preamplifier.
Audio systems
In an audio system, they are typically used to amplify signals from analog sensors to line level. The second amplifier is typically a power amplifier (power amp). The preamplifier provides voltage gain (e.g., from 10 mV to 1 V) but no significant current gain. The power amplifier provides the higher current necessary to drive loudspeakers. For these systems, some common sensors are microphones, instrument pickups, and phonographs. Preamplifiers are often integrated into the audio inputs on mixing consoles, DJ mixers, and sound cards. They can also be stand-alone devices.
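Taking the 10 mV to 1 V figure above as an example, the corresponding voltage gain expressed in decibels is
$$20 \log_{10}\!\left(\frac{1\ \text{V}}{10\ \text{mV}}\right) = 20 \log_{10}(100) = 40\ \text{dB}.$$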
Examples
The integrated preamplifier in a foil electret microphone.
The first stages of an instrument amplifier, which is then sent to the power amplifier. With instrument amplifiers, the preamp is often designed to produce overdrive or distortion effects.
A stand-alone unit for use in live music and recording studio applications.
As part of a stand-alone channel strip or channel strip built into an audio mixing desk.
A masthead amplifier used with television receiver antenna or a satellite receiver dish.
The circuit inside of a hard drive connected to the magnetic heads or the circuit inside of CD/DVD drive which connects to the photodiodes.
A switched capacitor circuit used to null the effects of mismatch offset in most CMOS comparator-based flash analog-to-digital converters
Due to its unique coloration, some preamplifiers can be emulated in software to be used in mixing.
See also
Low-noise amplifier (LNA)
Instrumentation amplifier
Buffer amplifier
Logarithmic resistor ladder
References
External links
Electronic amplifiers
Audio engineering | Preamplifier | [
"Technology",
"Engineering"
] | 660 | [
"Electrical engineering",
"Audio engineering",
"Electronic amplifiers",
"Amplifiers"
] |
332,411 | https://en.wikipedia.org/wiki/Arrested%20decay | Arrested decay is a term coined by the U.S. State of California, to explain how it would preserve its Bodie State Historic Park. A more common application of this concept is the preservation of war ruins as memorials.
United States
At Bodie State Historic Park, the structures will be maintained, but only to the extent that they will not be allowed to fall over or otherwise deteriorate in a major way.
Any building that was standing in 1962, when Bodie became a State Park, may be rebuilt or preserved as the photographs of 1962 showed them. By putting new roofs on the buildings, rebuilding foundations, and resealing glass that is in window frames, the State is able to keep buildings from naturally decaying.
Eastern State Penitentiary in Philadelphia, Pennsylvania, uses a similar system, though it uses the term "preserved ruin."
Croatia
The authorities in Vukovar, Croatia decided to keep the old water tower in the city as it is found today and as it had become after the war — gnarled by artillery.
Berlin, Germany
Several buildings destroyed in the Second World War have been preserved in their ruined condition as memorials. These include part of the facade of the Anhalter Bahnhof and the belfry of the Kaiser Wilhelm Memorial Church.
Sarajevo, Bosnia
The authorities in Sarajevo, Bosnia have also preserved the building of the daily newspaper Oslobođenje to this day the way it was shelled during the Bosnian War.
Hiroshima, Japan
In 1996, the Hiroshima Peace Memorial was acknowledged as a UNESCO World Heritage Site. Originally completed in 1905, the building was known at the time of the Hiroshima atomic bomb explosion on August 6, 1945, as the Hiroshima Prefectural Industrial Promotion Hall. Although suffering considerable damage, it was the closest structure to the hypocenter of the explosion to withstand the blast without being leveled to the ground. It has been preserved in the condition it was in after the bombing to serve as a symbol of hope for world peace and nuclear disarmament.
Oradour-sur-Glane, France
Oradour-sur-Glane was a village in the Limousin région of France that was destroyed on 10 June 1944, when 642 of its inhabitants – men, women and children – were murdered by a German Waffen-SS company. Although a new village was built after World War II, away from the ruins of the former village, the old village – the site of the massacre – still stands as a memorial to the dead and as being representative of similar sites and events. Part of the memorial includes items recovered from the burned-out buildings: watches stopped at the time their owners were burned alive; glasses – melted from the intense heat; and various personal items and money.
References
Building
Historic preservation | Arrested decay | [
"Engineering"
] | 557 | [
"Construction",
"Building"
] |
332,416 | https://en.wikipedia.org/wiki/Dexamethasone | Dexamethasone is a fluorinated glucocorticoid medication used to treat rheumatic problems, a number of skin diseases, severe allergies, asthma, chronic obstructive pulmonary disease (COPD), croup, brain swelling, eye pain following eye surgery, superior vena cava syndrome (a complication of some forms of cancer), and along with antibiotics in tuberculosis. In adrenocortical insufficiency, it may be used in combination with a mineralocorticoid medication such as fludrocortisone. In preterm labor, it may be used to improve outcomes in the baby. It may be given by mouth, as an injection into a muscle, as an injection into a vein, as a topical cream or ointment for the skin or as a topical ophthalmic solution to the eye. The effects of dexamethasone are frequently seen within a day and last for about three days.
The long-term use of dexamethasone may result in thrush, bone loss, cataracts, easy bruising, or muscle weakness. It is in pregnancy category C in the United States, meaning that it should only be used when the benefits are predicted to be greater than the risks. In Australia, the oral use is category A, meaning it has been frequently used in pregnancy and not been found to cause problems to the baby. It should not be taken when breastfeeding. Dexamethasone has anti-inflammatory and immunosuppressant effects.
Dexamethasone was first synthesized in 1957 by Philip Showalter Hench and was approved for medical use in 1958. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 234th most commonly prescribed medication in the United States, with more than 1 million prescriptions. It is available as a generic medication. In 2022, the combination of dexamethasone with neomycin and polymyxin B was the 274th most commonly prescribed medication in the United States, with more than 800,000 prescriptions.
Medical uses
Anti-inflammatory
Dexamethasone is used to treat many inflammatory and autoimmune disorders, such as rheumatoid arthritis and bronchospasm. Idiopathic thrombocytopenic purpura, a decrease in numbers of platelets due to an immune problem, responds to 40 mg daily for four days; it may be administered in 14-day cycles. It is unclear whether dexamethasone in this condition is significantly better than other glucocorticoids.
It is also given in small amounts before and/or after some forms of dental surgery, such as the extraction of the wisdom teeth, an operation that often causes puffy, swollen cheeks.
Dexamethasone is commonly given as a treatment for croup in children. A single dose can reduce the swelling of the airway to improve breathing and reduce discomfort.
Dexamethasone is sometimes injected into the heel when treating plantar fasciitis or heel pain, sometimes in conjunction with triamcinolone acetonide. There is no evidence that this treatment helps in the long term, however, dexamethasone may provide short-term pain relief.
It may be useful to counteract allergic anaphylactic shock, however this is not usually recommended by clinical guidelines.
It is present in certain eye drops – particularly after eye surgery – and as a nasal spray, and certain ear drops (can be combined with an antibiotic and an antifungal). Dexamethasone intravitreal steroid implants have been approved by the US Food and Drug Administration (FDA) to treat ocular conditions such as diabetic macular edema, central retinal vein occlusion, and uveitis. However, the evidence is poor quality relating to the treatment of uveitis, with the potential side effects (cataract progression and raised intraocular pressure) being significant, and the benefits not certainly greater than standard treatment. Dexamethasone has also been used with antibiotics to treat acute endophthalmitis.
Dexamethasone is used in transvenous screw-in cardiac pacing leads to minimize the inflammatory response of the myocardium. The steroid is released into the myocardium as soon as the screw is extended and can play a significant role in minimizing the acute pacing threshold due to the reduction of inflammatory response. The typical quantity present in a lead tip is less than 1.0 mg.
Dexamethasone may be administered before antibiotics in cases of bacterial meningitis. Gram-negative bacteria — to which the causative agent of bacterial meningitis, neisseria meningitidis, belongs — have highly immunogenic lipopolysaccharides as a component of their cell membrane and trigger a strong inflammatory response. Pre-administration of dexamethasone before the administration of antibiotics acts to reduce that response, thus reducing hearing loss and neurological damage.
Cancer
People with cancer undergoing chemotherapy are often given dexamethasone to counteract certain side effects of their antitumor treatments. Dexamethasone can increase the antiemetic effect of 5-HT3 receptor antagonists, such as ondansetron. The exact mechanism of this interaction is not well-defined, but it has been theorized that this effect may be due to, among many other causes, inhibition of prostaglandin synthesis, anti-inflammatory effects, immunosuppressive effects, decreased release of endogenous opioids, or a combination of the aforementioned.
In brain tumors (primary or metastatic), dexamethasone is used to counteract the development of edema, which could eventually compress other brain structures. It is also given in cord compression, where a tumor is compressing the spinal cord. Evidence on the safety and efficacy of using dexamethasone to treat malignant brain tumors is not clear.
Dexamethasone is also used as a direct chemotherapeutic agent in certain hematological malignancies, especially in the treatment of multiple myeloma, in which dexamethasone is given alone or in combination with other chemotherapeutic drugs, including most commonly with thalidomide (Thal-dex), lenalidomide, bortezomib (Velcade, Vel-dex), or a combination of doxorubicin (Adriamycin) and vincristine or bortezomib/lenalidomide/dexamethasone.
COVID-19
Dexamethasone is recommended by the National Health Service in the UK and the National Institutes of Health (NIH) in the US for people with COVID-19 who need either mechanical ventilation or supplemental oxygen (without ventilation).
The Infectious Diseases Society of America (IDSA) guideline panel suggests the use of glucocorticoids for people with severe COVID-19, defined as people with SpO2 ≤94% on room air, and those who require supplemental oxygen, mechanical ventilation, or extracorporeal membrane oxygenation (ECMO). The IDSA recommends against the use of glucocorticoids for those with COVID-19 without hypoxemia requiring supplemental oxygen.
The World Health Organization (WHO) recommends systemic corticosteroids rather than no systemic corticosteroids for the treatment of people with COVID-19 (strong recommendation, based on moderate certainty evidence). The WHO suggests not to use corticosteroids in the treatment of people with non-severe COVID-19 (conditional recommendation, based on low certainty evidence).
The Oxford University RECOVERY Trial issued a press release announcing preliminary results that the drug could reduce deaths by about a third in participants on ventilators and by about a fifth in participants on oxygen; it did not benefit people who did not require respiratory support. A meta-analysis of seven clinical trials of critically ill COVID-19 participants, each treated with one of three different corticosteroids, found a statistically significant reduction in death. The largest reduction was obtained with dexamethasone (36% compared to placebo).
In September 2020, the European Medicines Agency (EMA) endorsed the use of dexamethasone in adults and adolescents, from twelve years of age and weighing at least , who require supplemental oxygen therapy. Dexamethasone can be taken by mouth or given as an injection or infusion (drip) into a vein.
In November 2020, the Public Health Agency of Canada's Clinical Pharmacology Task Group recommended dexamethasone for hospitalized patients requiring mechanical ventilation. Although dexamethasone, and other glucocorticoids, reduce mortality in COVID-19 they have also been associated with an increased risk of secondary infections, secondary infections being a significant issue in critically ill COVID-19 patients.
The mechanism of action of dexamethasone involves suppression of late-stage interferon type I programs in severe COVID-19 patients.
Surgery
Dexamethasone is used fairly regularly, often as a single intravenous dose, during surgery to prevent postoperative nausea and vomiting, manage pain, potentially reduce the amount of pain medication required, and help reduce post-surgery hospitalisation time. The adverse effects of taking steroids after surgery on wound healing, blood sugar levels, and in diabetics are not completely understood; however, dexamethasone likely does not increase the risk of postoperative infections.
Endocrine
Dexamethasone is the treatment for the very rare disorder of glucocorticoid resistance.
In adrenal insufficiency and Addison's disease, dexamethasone is prescribed when the patient does not respond well to prednisone or methylprednisolone.
It can be used in congenital adrenal hyperplasia in older adolescents and adults to suppress adrenocorticotropic hormone (ACTH) production. It is typically given at night.
Pregnancy
Dexamethasone may be given to women at risk of delivering prematurely to promote maturation of the fetus's lungs. This administration, given from one day to one week before delivery, has been associated with low birth weight, although not with increased rates of neonatal death.
Dexamethasone has also been used during pregnancy as an off-label prenatal treatment for the symptoms of congenital adrenal hyperplasia (CAH) in female babies. CAH causes a variety of physical abnormalities, notably ambiguous genitalia. Early prenatal CAH treatment has been shown to reduce some CAH symptoms, but it does not treat the underlying congenital disorder. This use is controversial: it is inadequately studied, only around one in ten of the fetuses of women treated are at risk of the condition, and serious adverse events have been documented. Experimental use of dexamethasone in pregnancy for fetal CAH treatment was discontinued in Sweden when one in five cases had adverse events.
A small clinical trial found long-term effects on verbal working memory among the small group of children treated prenatally, but the small number of test subjects means the study cannot be considered definitive.
High-altitude illnesses
Dexamethasone is used in the treatment of high-altitude cerebral edema (HACE), as well as high-altitude pulmonary edema (HAPE). It is commonly carried on mountain-climbing expeditions to help climbers deal with complications of altitude sickness.
Nausea and vomiting
Intravenous dexamethasone is effective for the prevention of nausea and vomiting in people who had surgery and whose post-operative pain was treated with long-acting spinal or epidural spinal opioids.
The combination of dexamethasone and a 5-HT3 receptor antagonist such as ondansetron is more effective than a 5-HT3 receptor antagonist alone in preventing postoperative nausea and vomiting.
Sore throat
A single dose of dexamethasone or another steroid speeds the improvement of a sore throat.
Contraindications
Contraindications of dexamethasone include, but are not limited to:
Uncontrolled infections
Known hypersensitivity to dexamethasone
Cerebral malaria
Systemic fungal infection
Concurrent treatment with live virus vaccines (including smallpox vaccine)
Adverse effects
The exact incidence of the adverse effects of dexamethasone is not available, hence estimates have been made as to the incidence of the adverse effects below based on the adverse effects of related corticosteroids and on available documentation on dexamethasone.
Common
Acne
Amnesia
Birth defect
Cataract (in long-term treatment, occurs in about 10% of patients)
Confusion
Depression
Dyspepsia
Euphoria
Headaches
Hiccups (in long-term treatment, occurs in about 11% of patients)
Hyperglycemia
Hypertension
Impaired skin healing and wound repair
Increased appetite
Increased risk of viral, bacterial, fungal, and parasitic infections
Insomnia
Irritability
Malaise
Steroid-induced muscle atrophy and myopathy
Nausea
Ocular hypertension
Osteoporosis
Vertigo
Vomiting
Weight gain
Unknown frequency
Abdominal distension
Adrenal suppression
Allergic reactions (including anaphylaxis)
Arterial thrombosis
Aspergillosis
Bruising
Candidiasis
Cardiomyopathy
Cleft palate
Corneal or scleral thinning
Cushing's syndrome
Edema
Esophageal ulcer
Facial plethora
Glaucoma
Growth stunting (in children)
Herpes zoster
Hypernatremia
Hypertriglyceridemia
Hypocalcemia
Hypokalemia
Intracranial hypertension (with long-term treatment)
Leukocytosis
Mania
Mucormycosis
Pancreatitis (inflammation of the pancreas)
Papilledema
Peptic ulcer
Protein catabolism (causing nitrogen depletion)
Psychological dependence
Psychosis
Seizures
Skin atrophy
Striae
Telangiectasia
Thromboembolism
Venous thrombosis
Vertebral collapse
Withdrawal
Sudden withdrawal after long-term treatment with corticosteroids can lead to
Adrenal insufficiency
Arthralgia
Conjunctivitis
Death
Fever
Hypotension
Myalgia
Nodule (painful, itchy skin condition)
Rhinitis
Weight loss
Interactions
Known drug interactions include:
Inducers of hepatic microsomal enzymes such as barbiturates, phenytoin, and rifampicin can reduce the half-life of dexamethasone.
Cotreatment with oral contraceptives can increase its volume of distribution.
Pharmacology
Pharmacodynamics
As a glucocorticoid, dexamethasone is an agonist of the glucocorticoid receptor (GR). It is highly selective for the GR over the mineralocorticoid receptor (MR), and in relation to this, has minimal mineralocorticoid activity. This is in contrast to endogenous corticosteroids like cortisol, which bind to and activate both the GR and the MR. Dexamethasone is 25 times more potent than hydrocortisone (cortisol) as a glucocorticoid. Its affinity (Ki) for the GR was about 1.2nM in one study.
The activation of the GR by dexamethasone results in dose-dependent suppression of the hypothalamic–pituitary–adrenal axis (HPA axis) and of production of endogenous corticosteroids by the adrenal glands, thereby reducing circulating endogenous concentrations of corticosteroids like cortisol and corticosterone.
Dexamethasone poorly penetrates the blood–brain barrier into the central nervous system due to binding to P-glycoprotein. However, higher doses of dexamethasone override the export capacity of P-glycoprotein and enter the brain to produce central activation of GRs. In conjunction with the suppression of endogenous corticosteroids by dexamethasone, this results in skewed ratios of activation of peripheral versus central GRs as well as skewed ratios of activation of GRs versus MRs when compared to non-synthetic corticosteroids. These differences can have significant clinical relevance.
Chemistry
Dexamethasone is a synthetic pregnane corticosteroid and derivative of cortisol (hydrocortisone) and is also known as 1-dehydro-9α-fluoro-16α-methylhydrocortisone or as 9α-fluoro-11β,17α,21-trihydroxy-16α-methylpregna-1,4-diene-3,20-dione. The molecular and crystal structure of dexamethasone has been determined by X-ray crystallography. It is a stereoisomer of betamethasone, the two compounds differing only in the spatial configuration of the methyl group at position 16 (see steroid nomenclature).
Synthesis
To synthesize dexamethasone, 16β-methylprednisolone acetate is dehydrated to the 9,11-dehydro derivative. This is then reacted with a source of hypobromite, such as basic N-bromosuccinimide, to form the 9α-bromo-11β-hydrin derivative, which is then ring-closed to an epoxide. A ring-opening reaction with hydrogen fluoride in tetrahydrofuran gives dexamethasone.
Spectroscopy
In chemistry, spectroscopy is used to analyze the products of reactions. To confirm that dexamethasone has been synthesized, spectra of the product are recorded and compared with literature spectra for the compound. Several techniques can be used, including 1H NMR, 13C NMR, IR, mass spectrometry, and UV-vis spectroscopy. The 1H NMR spectrum shows, among other things, that the molecule contains 29 hydrogens, and the 13C NMR spectrum shows that it contains 22 carbons.
IR spectroscopy reveals the functional groups present in the molecule: absorption bands at 3472, 1662, and 1618 cm−1 correspond to hydroxyl, carbonyl, and alkene groups, respectively. UV-vis spectroscopy offers a further means of characterizing the product.
Finally, mass spectrometry showed peaks at 393.1, 355.2, and 147.1 m/z. The peak at 393.1 m/z corresponds to dexamethasone, whose molecular weight is 392.46 g/mol, the observed ion being the protonated molecule [M+H]+.
History
Dexamethasone was first synthesized by Philip Showalter Hench in 1957. It was introduced for medical use in 1958.
On 16 June 2020, the RECOVERY Trial announced preliminary results stating that dexamethasone improves survival rates of hospitalized patients with COVID-19 receiving oxygen or on a ventilator. Benefits were only observed in patients requiring respiratory support; those who did not require breathing support saw a worse survival rate than the control group, although the difference may have been due to chance.
A preprint containing the full dataset was published on 22 June 2020, and demand for dexamethasone surged after the publication of the preprint. The preliminary report was published in The New England Journal of Medicine on 18 July 2020. The final report was published in February 2021.
The World Health Organization (WHO) states that dexamethasone should be reserved for seriously ill and critical patients receiving COVID-19 treatment in a hospital setting, and the WHO Director-General stated that "WHO emphasizes that dexamethasone should only be used for patients with severe or critical disease, under close clinical supervision. There is no evidence this drug works for patients with mild disease or as a preventative measure, and it could cause harm." In July 2020, the WHO stated they were in the process of updating treatment guidelines to include dexamethasone or other steroids. In September 2020, the WHO released updated guidance on using corticosteroids for COVID-19.
In July 2020, the European Medicines Agency (EMA) started reviewing results from the RECOVERY study arm that involved the use of dexamethasone in the treatment of patients with COVID-19 admitted to the hospital to provide an opinion on the results and in particular the potential use of dexamethasone for the treatment of adults with COVID-19. In September 2020, the EMA received an application for marketing authorization of dexamethasone for COVID-19.
Society and culture
Price
Dexamethasone is inexpensive. In the United States a month of medication is typically priced less than . In India, a course of treatment for preterm labor is about . The drug is available in most areas of the world.
Nonmedical use
In Bangladesh, dexamethasone is given in legal brothels to prostitutes not yet of legal age; the resulting weight gain is aimed at making them appear older and healthier to customers and police.
Dexamethasone and most glucocorticoids are banned by sporting bodies including the World Anti-Doping Agency.
Veterinary use
Combined with marbofloxacin (CAS number 115550-35-1) and clotrimazole, dexamethasone (CAS number 50-02-2) is available under the name Aurizon and is used to treat difficult ear infections, especially in dogs. It can also be combined with trichlormethiazide to treat horses with swelling of distal limbs and general bruising.
References
External links
Drugs developed by AbbVie
Chemical substances for emergency medicine
COVID-19 drug development
CYP3A4 inducers
Fluorinated corticosteroids
Glucocorticoids
Halohydrins
Drugs developed by Novartis
Organofluorides
Otologicals
Peripherally selective drugs
Pregnane X receptor agonists
Pregnanes
Wikipedia medicine articles ready to translate
World Health Organization essential medicines | Dexamethasone | [
"Chemistry"
] | 4,657 | [
"Chemicals in medicine",
"COVID-19 drug development",
"Drug discovery",
"Chemical substances for emergency medicine"
] |
332,462 | https://en.wikipedia.org/wiki/Media%20access%20unit | A media access unit (MAU), also known as a multistation access unit (MAU or MSAU), is a device used to attach multiple network stations of a Token Ring network into a logical ring while the cabling is physically laid out as a star. The unit is internally wired to connect the stations into the ring and is generally passive, i.e. non-switched and unmanaged; however, managed Token Ring MAUs do exist in the form of CAUs, or Controlled Access Units.
Passive Token Ring was an active IBM networking product in the 1997 time frame, after which it was rapidly displaced by switched networking.
Advantages and disadvantages
Passive networking without power
Most IBM passive Token Ring MAUs operated without requiring power; instead, the passive MAU used a series of relays that adjusted themselves as data passed through, which is also why Token Ring generally used relays to terminate disconnected or failed ports. The unpowered IBM 8228 Multistation Access Unit requires a special 'Setup Aid' tool to re-align the relays after the unit has been moved, since movement can leave them in incorrect states; the tool uses a 9 V battery to send a charge that snaps the relays back into the proper state. The advantage of a MAU that operates without power is that it can be placed in areas without outlets; the disadvantage is that it must be re-primed each time the internal relays experience excessive force. The IBM 8226 MAU, while containing a power jack, primarily uses it for the LEDs: relays are still used inside the unit but do not require priming.
Bandwidth
In theory, this networking technology supported large geographic areas (with a total ring circumference of several kilometers). But with the bandwidth shared by all stations, in practice separate networks spanning smaller areas were joined using bridges. This bridged network technology was soon displaced by high-bandwidth switched networks.
Fault tolerance
Multistation Access Units contain relays to short out non-operating stations. Multiple MAUs can be connected into a larger ring through their ring in/ring out connectors.
A MAU is also called a "ring in a box": the loop that used to make up the ring of the Token Ring network is integrated into the device. In a plain ring, when a link is broken the entire network goes down; with a MAU, however, the broken circuit is closed within 1 ms, allowing stations to have their cords unplugged without disabling the entire network.
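The bypass behaviour can be pictured with a toy model. The sketch below (Python; the class and station names are invented for illustration and do not describe any real MAU's internals) treats each port as a relay that either inserts its station into the ring or shorts it out:

```python
# A toy model of a multistation access unit: ports whose station is unplugged
# (or has failed) are bypassed by a relay, so the logical ring stays closed.

class ToyMAU:
    def __init__(self, stations):
        # Map port number -> [station name, relay inserted?]
        self.ports = {i: [name, True] for i, name in enumerate(stations)}

    def unplug(self, port):
        self.ports[port][1] = False   # the relay shorts out this port

    def token_path(self):
        # The token only visits ports whose relay is inserted.
        return [name for name, inserted in self.ports.values() if inserted]

mau = ToyMAU(["A", "B", "C", "D"])
mau.unplug(2)                         # station C is disconnected
print(mau.token_path())               # ['A', 'B', 'D'] – the ring is still closed
```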
See also
Lobe Attachment Module
References
External links
Networking hardware | Media access unit | [
"Engineering"
] | 528 | [
"Computer networks engineering",
"Networking hardware"
] |
332,621 | https://en.wikipedia.org/wiki/Hereditarily%20finite%20set | In mathematics and set theory, hereditarily finite sets are defined as finite sets whose elements are all hereditarily finite sets. In other words, the set itself is finite, and all of its elements are finite sets, recursively all the way down to the empty set.
Formal definition
A recursive definition of well-founded hereditarily finite sets is as follows:
Base case: The empty set is a hereditarily finite set.
Recursion rule: If a1, ..., ak are hereditarily finite, then so is {a1, ..., ak}.
Only sets that can be built by a finite number of applications of these two rules are hereditarily finite.
Representation
This class of sets is naturally ranked by the number of bracket pairs necessary to represent the sets:
{} (i.e. ∅, the Neumann ordinal "0")
{{}} (i.e. {∅} or {0}, the Neumann ordinal "1")
{{{}}}, and then also {{}, {{}}} (i.e. {0, 1}, the Neumann ordinal "2"),
{{{{}}}} and {{}, {{{}}}}, as well as {{{}, {{}}}},
... sets represented with six bracket pairs, e.g. {{}, {{}, {{}}}}. There are six such sets
... sets represented with seven bracket pairs, e.g. {{{}}, {{{{}}}}}. There are twelve such sets
... sets represented with eight bracket pairs, e.g. {{{{{{{{}}}}}}}} or {{}, {{}}, {{}, {{}}}} (i.e. {0, 1, 2}, the Neumann ordinal "3")
... etc.
In this way, the number of sets with exactly n bracket pairs is 1, 1, 1, 2, 3, 6, 12, 25, ...
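These counts can be checked by brute force. The sketch below (Python; frozensets stand in for hereditarily finite sets, and the enumeration strategy is ours rather than anything standard) lists every set needing at most seven bracket pairs and tallies them by size:

```python
from itertools import combinations

def brackets(s):
    """Bracket pairs needed to write s: one pair plus those of its elements."""
    return 1 + sum(brackets(e) for e in s)

def sets_up_to(max_pairs):
    """All hereditarily finite sets representable with at most max_pairs bracket pairs."""
    found = {frozenset()}                      # start from {} alone
    while True:
        new = set()
        pool = list(found)
        for r in range(1, max_pairs):          # r = number of elements of a candidate
            for elems in combinations(pool, r):
                candidate = frozenset(elems)
                if candidate not in found and brackets(candidate) <= max_pairs:
                    new.add(candidate)
        if not new:
            return found
        found |= new

tally = {}
for s in sets_up_to(7):
    tally[brackets(s)] = tally.get(brackets(s), 0) + 1
print(sorted(tally.items()))
# [(1, 1), (2, 1), (3, 1), (4, 2), (5, 3), (6, 6), (7, 12)]
```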
Discussion
The set {{}, {{}}} is an example of such a hereditarily finite set, and so is the empty set, as noted.
On the other hand, sets such as {N} or {2, N} are examples of finite sets that are not hereditarily finite. For example, the first cannot be hereditarily finite since it contains at least one infinite set as an element, namely N = {0, 1, 2, 3, ...}, the set of all natural numbers.
The class of all hereditarily finite sets is denoted by H_ℵ0, meaning that the cardinality of each member is smaller than ℵ0. (Analogously, the class of hereditarily countable sets is denoted by H_ℵ1.)
H_ℵ0 is in bijective correspondence with the set of natural numbers.
It can also be denoted by V_ω, which denotes the ω-th stage of the von Neumann universe.
So here it is a countable set.
Models
Ackermann coding
In 1937, Wilhelm Ackermann introduced an encoding of hereditarily finite sets as natural numbers.
It is defined by a function f that maps each hereditarily finite set to a natural number, given by the following recursive definition:
f(a) = Σ_{b ∈ a} 2^f(b)
For example, the empty set contains no members, and is therefore mapped to an empty sum, that is, the number zero. On the other hand, a set with distinct members a, b, c, ... is mapped to 2^f(a) + 2^f(b) + 2^f(c) + ....
The inverse is given by
f⁻¹(n) = { f⁻¹(i) : BIT(n, i) = 1 },
where BIT denotes the BIT predicate: BIT(n, i) is the value of bit i in the binary representation of n.
The Ackermann coding can be used to construct a model of finitary set theory in the natural numbers. More precisely, the structure (ℕ, E), where E is the converse relation of BIT (swapping its two arguments), models Zermelo–Fraenkel set theory ZF without the axiom of infinity. Here, each natural number models a set, and the relation E models the membership relation between sets.
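The coding and its inverse are easy to experiment with. Below is a minimal sketch (Python; frozensets stand in for hereditarily finite sets, and the function names are ours):

```python
# A minimal sketch of the Ackermann coding between hereditarily finite sets
# (modelled as nested frozensets) and natural numbers.

def ack_encode(s):
    """Map a hereditarily finite set to a natural number."""
    return sum(2 ** ack_encode(member) for member in s)

def ack_decode(n):
    """Recover the set coded by n from the bits of n (the BIT predicate)."""
    members = set()
    i = 0
    while n >> i:
        if (n >> i) & 1:              # bit i of n is set, so decode(i) is a member
            members.add(ack_decode(i))
        i += 1
    return frozenset(members)

empty = frozenset()
one = frozenset({empty})              # the von Neumann ordinal 1 = {∅}
two = frozenset({empty, one})         # the von Neumann ordinal 2 = {∅, {∅}}

assert ack_encode(empty) == 0
assert ack_encode(one) == 1           # 2**0
assert ack_encode(two) == 3           # 2**0 + 2**1
assert ack_decode(3) == two
print(ack_encode(frozenset({two})))   # 8 = 2**3
```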
Graph models
The class can be seen to be in exact correspondence with a class of rooted trees, namely those without non-trivial symmetries (i.e. the only automorphism is the identity):
The root vertex corresponds to the top-level bracket and each edge leads to an element (another such set) that can act as a root vertex in its own right. No automorphism of this graph exists, corresponding to the fact that equal branches are identified (e.g. {a, a} = {a}, trivializing the permutation of the two subgraphs of shape a).
This graph model enables an implementation of ZF without infinity as data types and thus an interpretation of set theory in expressive type theories.
Graph models exist for ZF and also set theories different from Zermelo set theory, such as non-well founded theories. Such models have more intricate edge structure.
In graph theory, the graph whose vertices correspond to hereditarily finite sets and edges correspond to set membership is the Rado graph or random graph.
Axiomatizations
Theories of finite sets
In the common axiomatic set theory approaches, the empty set also represents the first von Neumann ordinal number, denoted 0. All finite von Neumann ordinals are indeed hereditarily finite and, thus, so is the class of sets representing the natural numbers. In other words, V_ω includes each element in the standard model of natural numbers, and so a set theory expressing V_ω must necessarily contain them as well.
Now note that Robinson arithmetic can already be interpreted in ST, the very small sub-theory of Zermelo set theory Z− with its axioms given by Extensionality, Empty Set and Adjunction. All of has a constructive axiomatization involving these axioms and e.g. Set induction and Replacement.
Axiomatically characterizing the theory of hereditarily finite sets, the negation of the axiom of infinity may be added. As the resulting theory validates the other axioms of ZF, this establishes that the axiom of infinity is not a consequence of these other axioms.
ZF
The hereditarily finite sets are a subclass of the von Neumann universe. Here, the class of all well-founded hereditarily finite sets is denoted V_ω. Note that this is also a set in this context.
If we denote by ℘(S) the power set of S, and by V_0 the empty set, then V_ω can be obtained by setting V_{k+1} = ℘(V_k) for each integer k. Thus, V_ω can be expressed as
V_ω = V_0 ∪ V_1 ∪ V_2 ∪ ... = ⋃_k V_k,
and all its elements are finite.
This formulation shows, again, that there are only countably many hereditarily finite sets: V_k is finite for any finite k, its cardinality is 2↑↑(k − 1) in Knuth's up-arrow notation (a tower of powers of two), and the union of countably many finite sets is countable.
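The tower-of-exponentials growth can be seen by building the first few stages directly. The sketch below (Python; stopping at V_5 merely keeps the computation small) obtains each stage as the power set of the previous one:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of the frozenset s, as a frozenset of frozensets."""
    items = list(s)
    subsets = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return frozenset(frozenset(sub) for sub in subsets)

stage = frozenset()                  # V_0 is the empty set
sizes = []
for k in range(5):
    sizes.append(len(stage))
    stage = power_set(stage)         # V_{k+1} = power set of V_k
sizes.append(len(stage))

print(sizes)                         # [0, 1, 2, 4, 16, 65536]
```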
Equivalently, a set is hereditarily finite if and only if its transitive closure is finite.
See also
Constructive set theory
Finite set
Hereditary set
Hereditarily countable set
Hereditary property
Rooted trees
References
Set theory | Hereditarily finite set | [
"Mathematics"
] | 1,243 | [
"Mathematical logic",
"Set theory"
] |
332,640 | https://en.wikipedia.org/wiki/Quotient | In arithmetic, a quotient (from Latin quotiens 'how many times') is a quantity produced by the division of two numbers. The quotient has widespread use throughout mathematics. It has two definitions: either the integer part of a division (in the case of Euclidean division) or a fraction or ratio (in the case of a general division). For example, when dividing 20 (the dividend) by 3 (the divisor), the quotient is 6 (with a remainder of 2) in the first sense and 6.66... (a repeating decimal) in the second sense.
In metrology (International System of Quantities and the International System of Units), "quotient" refers to the general case with respect to the units of measurement of physical quantities.
Ratios is the special case for dimensionless quotients of two quantities of the same kind.
Quotients with a non-trivial dimension and compound units, especially when the divisor is a duration (e.g., "per second"), are known as rates.
For example, density (mass divided by volume, in units of kg/m3) is said to be a "quotient", whereas mass fraction (mass divided by mass, in kg/kg or in percent) is a "ratio".
Specific quantities are intensive quantities resulting from the quotient of a physical quantity by mass, volume, or other measures of the system "size".
Notation
The quotient is most frequently encountered as two numbers, or two variables, divided by a horizontal line. The words "dividend" and "divisor" refer to each individual part, while the word "quotient" refers to the whole.
Integer part definition
The quotient is also less commonly defined as the greatest whole number of times a divisor may be subtracted from a dividend—before making the remainder negative. For example, the divisor 3 may be subtracted up to 6 times from the dividend 20, before the remainder becomes negative:
20 − 3 − 3 − 3 − 3 − 3 − 3 ≥ 0,
while
20 − 3 − 3 − 3 − 3 − 3 − 3 − 3 < 0.
In this sense, a quotient is the integer part of the ratio of two numbers.
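A short sketch (Python; the function name is ours) makes the repeated-subtraction definition concrete and checks it against built-in integer division:

```python
def quotient_by_subtraction(dividend, divisor):
    """Greatest number of times divisor can be subtracted before going negative."""
    count = 0
    while dividend - divisor >= 0:
        dividend -= divisor
        count += 1
    return count, dividend             # quotient and remainder

print(quotient_by_subtraction(20, 3))  # (6, 2)
print(divmod(20, 3))                   # (6, 2) – Python's built-in equivalent
```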
Quotient of two integers
A rational number can be defined as the quotient of two integers (as long as the denominator is non-zero).
A more detailed definition goes as follows:
A real number r is rational, if and only if it can be expressed as a quotient of two integers with a nonzero denominator. A real number that is not rational is irrational.
Or more formally:
Given a real number r, r is rational if and only if there exist integers a and b such that r = a/b and b ≠ 0.
The existence of irrational numbers—numbers that are not a quotient of two integers—was first discovered in geometry, in such things as the ratio of the diagonal to the side in a square.
More general quotients
Outside of arithmetic, many branches of mathematics have borrowed the word "quotient" to describe structures built by breaking larger structures into pieces. Given a set with an equivalence relation defined on it, a "quotient set" may be created which contains those equivalence classes as elements. A quotient group may be formed by breaking a group into a number of similar cosets, while a quotient space may be formed in a similar process by breaking a vector space into a number of similar linear subspaces.
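As a small illustration of a quotient set, the sketch below (Python; congruence modulo 3 is an arbitrary choice of equivalence relation) partitions the integers 0–8 into their equivalence classes:

```python
def quotient_set(elements, equivalent):
    """Partition elements into equivalence classes under the given relation."""
    classes = []
    for x in elements:
        for cls in classes:
            if equivalent(x, cls[0]):
                cls.append(x)
                break
        else:
            classes.append([x])
    return classes

same_mod_3 = lambda a, b: (a - b) % 3 == 0
print(quotient_set(range(9), same_mod_3))
# [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```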
See also
Product (mathematics)
Quotient category
Quotient graph
Integer division
Quotient module
Quotient object
Quotient of a formal language, also left and right quotient
Quotient ring
Quotient set
Quotient space (topology)
Quotient type
Quotition and partition
References
External links | Quotient | [
"Mathematics"
] | 801 | [
"Arithmetic",
"Quotients"
] |
332,652 | https://en.wikipedia.org/wiki/Geniac | Geniac was an educational toy sold as a mechanical computer designed and marketed by Edmund Berkeley, with Oliver Garfield from 1955 to 1958, but with Garfield continuing without Berkeley through the 1960s. The name stood for "Genius Almost-automatic Computer" but suggests a portmanteau of genius and ENIAC (the first fully electronic general-purpose computer).
Construction
The Geniac kit consisted of a wedge-shaped case, a console panel, and nearly 400 other parts available for assembly. It was powered by a flashlight battery.
Basically a rotary switch construction set, the Geniac contained six perforated masonite disks, into the back of which brass jumpers could be inserted. The jumpers made electrical connections between slotted brass bolt heads sitting out from the similarly perforated masonite back panel. To the bolts were attached wires behind the panel. The circuit comprised a battery, such wires from it to, and between, switch positions, wires from the switches to indicator flashlight bulbs set along the panel's middle, and return wires to the battery to complete the circuit.
Setting up Geniac to solve a new problem or perform a new operation involved rewiring the jumpers on the back panel, a task advertised as taking only a few minutes.
Operation
With this basic setup Geniac could use combinational logic only, its outputs depending entirely on inputs manually set. It had no active elements at all – no relays, tubes, or transistors – to allow a machine state to automatically influence subsequent states. Thus, Geniac didn't have memory and couldn't solve problems using sequential logic. All sequencing was performed manually by the operator, sometimes following fairly complicated printed directions (turn this wheel in this direction if this light lights, etc.)
The main instruction book, as well as a supplementary book of wiring diagrams, gave jumper positions and wiring diagrams for building a number of "machines," which could realize fairly complicated Boolean equations. A copy of Claude Shannon's groundbreaking thesis in the subject, A Symbolic Analysis of Relay and Switching Circuits, was also included.
A typical project
A typical project was a primitive "Masculine–Feminine Testing Machine". The user was instructed to answer five questions related to gender, such as "Which makes a better toy for a child: (a) electric train? (b) a doll with a complete wardrobe?" Having wired five of the six rotary switches and set them to "off" positions, questions could be asked. For each "a" answer, a switch was turned to one of two "on" positions, setting a circuit segment; for each "b" answer, the other "on" position. The circuitry sensed the cumulative effect of the switch positions, the circuit being completed, and a "more masculine" or "more feminine" bulb lit, once three masculine or three feminine answers were recorded.
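Because the machine is purely combinational, whichever lamp lights is a fixed function of the five switch positions. A minimal sketch of the equivalent logic (Python; the function name and the exact two-lamp behaviour are our reading of the description above, not a wiring diagram from the kit) is:

```python
def masculine_feminine_tester(answers):
    """answers: the five 'a'/'b' choices set on the rotary switches."""
    a_count = sum(1 for choice in answers if choice == "a")
    lamp_masculine = a_count >= 3                     # lights once three 'a' answers are set
    lamp_feminine = (len(answers) - a_count) >= 3     # lights once three 'b' answers are set
    return lamp_masculine, lamp_feminine

print(masculine_feminine_tester(["a", "b", "a", "a", "b"]))  # (True, False)
```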
Popularity
Widely advertised in magazines such as Galaxy Science Fiction, the Geniac provided many youths with their first hands-on introduction to computer concepts and Boolean logic.
Brainiac
A nearly identical product, called Brainiac, was introduced in 1958 by Edmund Berkeley, after he had a falling out with Oliver Garfield.
Helical slide rule
Oliver Garfield also sold the Otis King's Patent Calculator, a helical slide rule, under the Geniac brand. Initially he resold the ones manufactured by Carbic, Ltd., but his later products had no serial numbers so were probably his own version.
See also
Digi-Comp I
Digi-Comp II
WDR paper computer
References
External links
Geniac photo and description at www.oldcomputermuseum.com
Brainiac K-30 photo and description at www.oldcomputermuseum.com
Geniac manuals, diagrams and other documents hosted at www.computercollector.com
Magazine ads and articles about Geniac at Modernmechanix blog
Article on Geniac at Early Computers Project
Geniac on list of early personal computers at Blinkenlights.com, with link to article on how it works by a gifted operator
Mechanical computers
Educational toys
Computer-related introductions in 1955 | Geniac | [
"Physics",
"Technology"
] | 835 | [
"Physical systems",
"Machines",
"Mechanical computers"
] |
332,666 | https://en.wikipedia.org/wiki/Fa%C3%A7ade | A façade or facade is generally the front part or exterior of a building. It is a loanword from the French façade, which means "frontage" or "face".
In architecture, the façade of a building is often the most important aspect from a design standpoint, as it sets the tone for the rest of the building. From the engineering perspective, the façade is also of great importance due to its impact on energy efficiency. For historical façades, many local zoning regulations or other laws greatly restrict or even forbid their alteration.
Etymology
The word is a loanword from the French façade, which in turn comes from the Italian facciata, from faccia meaning 'face', ultimately from post-classical Latin facia. The earliest usage recorded by the Oxford English Dictionary is 1656.
Façades added to earlier buildings
It was quite common in the Georgian period for existing houses in English towns to be given a fashionable new façade. For example, in the city of Bath, The Bunch of Grapes in Westgate Street appears to be a Georgian building, but the appearance is only skin deep and some of the interior rooms still have Jacobean plasterwork ceilings.
This new construction has happened also in other places: in Santiago de Compostela the three-metre-deep Casa do Cabido was built to match the architectural order of the square, and the main Churrigueresque façade of the Santiago de Compostela Cathedral, facing the Plaza del Obradoiro, is actually encasing and concealing the older Portico of Glory.
High rise façades
In modern high-rise building, the exterior walls are often suspended from the concrete floor slabs. Examples include curtain walls and precast concrete walls. The façade can at times be required to have a fire-resistance rating, for instance, if two buildings are very close together, to lower the likelihood of fire spreading from one building to another.
In general, the façade systems that are suspended or attached to the precast concrete slabs will be made from aluminum (powder coated or anodized) or stainless steel. In recent years more lavish materials such as titanium have sometimes been used, but due to their cost and susceptibility to panel edge staining these have not been popular.
Whether rated or not, fire protection is always a design consideration. The melting point of aluminum, about 660 °C (1,220 °F), is typically reached within minutes of the start of a fire. Fire stops for such building joints can be qualified, too. Putting fire sprinkler systems on each floor has a profoundly positive effect on the fire safety of buildings with curtain walls.
The extended use of new materials, like polymers, resulted in an increase of high-rise building façade fires over the past few years, since they are more flammable than traditional materials.
Some building codes also limit the percentage of window area in exterior walls. When the exterior wall is not rated, the perimeter slab edge becomes a junction where rated slabs are abutting an unrated wall. For rated walls, one may also choose rated windows and fire doors, to maintain that wall's rating.
Film sets and theme parks
On a film set and within most themed attractions, many of the buildings are only façade, which are far cheaper than actual buildings, and not subject to building codes (within film sets). In film sets, they are simply held up with supports from behind, and sometimes have boxes for actors to step in and out of from the front if necessary for a scene. Within theme parks, they are usually decoration for the interior ride or attraction, which is based on a simple building design.
Examples
See also
Curtain wall (architecture)
Double-skin façade
Façadism
Potemkin village
References
Citations
Sources
Knaack, Ulrich; Klein, Tillmann; Bilow, Marcel; Auer, Thomas (2007). Façades: Principles of Construction. Boston/Basel/Berlin: Birkhäuser. (English) / (German)
Giving buildings an illusion of grandeur
Facades of Casas Chorizo in Buenos Aires, Argentina
Further reading
The article outlines the development of the façade in ecclesiastical architecture from the early Christian period to the Renaissance.
Architectural elements
Building engineering | Façade | [
"Technology",
"Engineering"
] | 834 | [
"Building engineering",
"Architectural elements",
"Civil engineering",
"Components",
"Architecture"
] |
332,718 | https://en.wikipedia.org/wiki/Siegfried%20Line | The Siegfried Line, known in German as the Westwall (= western bulwark), was a German defensive line built during the late 1930s. Started in 1936, opposite the French Maginot Line, it stretched more than from Kleve on the border with the Netherlands, along the western border of Nazi Germany, to the town of Weil am Rhein on the border with Switzerland. The line featured more than 18,000 bunkers, tunnels and tank traps.
From September 1944 to March 1945, the Siegfried Line was subjected to a large-scale Allied offensive.
Name
The official German name for the defensive line construction program before and during the Second World War changed several times during the late 1930s. It came to be known as the "Westwall", but in English it was referred to as the "Siegfried Line" or, sometimes, the "West Wall". Various German names reflected different areas of construction:
Border Watch programme (pioneering programme) for the most advanced positions (1938)
Limes programme (1938)
Western Air Defense Zone (1938)
Aachen–Saar programme (1939)
Geldern Emplacement between Brüggen and Kleve (1939–1940)
The programmes were given the highest priority, putting a heavy demand on the available resources.
The origin of the name "Westwall" is unknown, but it appeared in popular use from the middle of 1939. There is a record of Hitler sending an Order of the Day to soldiers and workers at the "Westwall" on 20 May 1939.
History
Minor early role
At the start of World War II, the Siegfried Line had serious weaknesses. After the war, German General Alfred Jodl said that it had been "little better than a building site in 1939" and, when Field Marshal Gerd von Rundstedt inspected the line, the weak construction and inadequate weapons caused him to laugh. Despite France's declaration of war against Germany in September 1939, there was no major combat involving the Siegfried Line at the start of the campaign in the West, except for a minor offensive by the French. Instead, both sides remained in a safe position behind their defences, during the so-called Phoney War.
The Reich Ministry of Public Enlightenment and Propaganda drew foreign attention to the unfinished Westwall, in several instances showcasing incomplete or test positions to portray the project finished and ready for action. During the Battle of France, French forces made minor attacks against some parts of the line, but the majority was left untested in battle. When the campaign finished, transportable weapons and materials, such as metal doors, were removed from the Siegfried Line and used in other places such as the Atlantic Wall defences. The concrete sections were left in place in the countryside and soon became completely unfit for defense. The bunkers were used for storage instead.
Reactivation in 1944
With the D-Day landings in Normandy on 6 June 1944, war in the West broke out once more. On 24 August 1944, Hitler gave a directive for renewed construction on the Siegfried Line. 20,000 forced labourers and members of the Reichsarbeitsdienst (Reich Labour Service), most of whom were 14 to 16-year-old boys, attempted to re-equip the line for defensive purposes. Local people were also called in to carry out work, mostly building anti-tank ditches.
Even during construction, it was becoming clear that the bunkers could not withstand newly developed armour-piercing weapons. At the same time as the reactivation of the Siegfried Line, small concrete "Tobruks" were built along the borders of the occupied area. Those bunkers were mostly dugouts for single soldiers.
Clashes
In August 1944, the first clashes took place on the Siegfried Line. The section of the line where most fighting took place was the Hürtgenwald (Hürtgen Forest) area in the Eifel, south-east of Aachen. The Aachen Gap was the logical route into Germany's Rhineland and its main industrial area, so it was where the Germans concentrated their defence.
The Americans committed an estimated 120,000 troops plus reinforcements to the Battle of Hürtgen Forest. The battle in the heavily forested area claimed the lives of 24,000 American soldiers, along with 9,000 so-called non-battle casualties — those evacuated because of fatigue, exposure, accidents and disease. The German death toll is not documented. After the Battle of Hürtgen Forest, the Battle of the Bulge began, a last-ditch attempt by the Germans to reverse the course of the war in the West. The offensive started in the area south of the Hürtgenwald, between Monschau and the Luxembourgish town of Echternach. German loss of life and material was severe and the effort failed. There were serious clashes along other parts of the Siegfried Line and defending soldiers in many bunkers refused to surrender, often fighting to the death. By early 1945, the last Siegfried Line bunkers had fallen at the Saar and Hunsrück.
The British 21st Army Group, which included US formations, also attacked the Siegfried Line. The resulting fighting brought total US losses to approximately 68,000. In addition, the First Army incurred over 50,000 non-battle casualties and the Ninth Army over 20,000. That brought the overall cost of the Siegfried Line Campaign, in US personnel, close to 140,000.
Postwar period
During the post-war period, many sections of the Siegfried Line were removed using explosives.
Preservation and destruction
In North Rhine Westphalia, about thirty bunkers still remain. Most of the rest were either destroyed with explosives or covered with earth. Tank traps still exist in many areas and, in the Eifel, they run over several kilometres. Zweibrücken Air Base was built on top of the Siegfried Line. When the base was still open, the remnants of several old bunkers could be seen in the tree line near the main gate. Another bunker was outside the base perimeter fence near the base hospital. Once the base was closed, workers, digging up the base's fuel tanks, discovered lost bunkers buried below the tanks.
Since 1997, with the motto "The value of the unpleasant as a memorial" (Der Denkmalswert des Unerfreulichen), an effort has been made to preserve the remains of the Siegfried Line as a historical monument. It was intended to stop reactionary fascist groups from using the Siegfried Line for propaganda purposes.
At the same time, state funding was still being provided to destroy the remains of the Siegfried Line. Consequently, emergency archaeological digs took place whenever any part of the line was to be removed, for example for road building. Archaeological activity was not able to stop the destruction of those sections, but furthered scientific knowledge and revealed details of the line's construction.
Environmental conservation
Nature conservationists consider the remains of the Siegfried Line valuable as a chain of biotopes where, thanks to its size, rare animals and plants can take refuge and reproduce. That effect is magnified by the fact that the concrete ruins cannot be used for agricultural or forestry purposes.
Westwall construction programmes
Border Watch
Small bunkers with thick walls were set up with three embrasures towards the front. Sleeping accommodations were hammocks. In exposed positions, similar small bunkers were erected with small round armoured "lookout" sections on the roofs. The programme was carried out by the Border Watch (Grenzwacht), a small military troop activated in the Rhineland immediately after the region was re-militarised by Germany from 1936 onwards, after having been de-militarised following the First World War.
Limes
The Limes programme began in 1938 following an order by Hitler to strengthen fortifications on the western German border. Limes refers to the former borders of the Roman Empire; the cover story for the programme was that it was an archaeological study.
Its Type 10 bunkers were more strongly constructed than the earlier border fortifications. These had thick ceilings and walls. A total of 3,471 were built along the entire length of the Siegfried Line. They featured a central room or shelter for 10–12 men with a stepped embrasure facing backwards and a combat section higher. This elevated section had embrasures at the front and sides for machine guns. More embrasures were provided for riflemen, and the entire structure was constructed so as to be safe against poison gas.
Heating was from a safety oven, the chimney of which was covered with a thick grating. Space was tight, with about per soldier, who was given a sleeping-place and a stool; the commanding officer had a chair. Surviving examples still retain signs warning "Walls have ears" and "Lights out when embrasures are open!"
Aachen-Saar
The Aachen-Saar programme bunkers were similar to those of the Limes programme: Type 107 double MG casemates with concrete walls up to thick. One difference was that there were no embrasures at the front, only at the sides of the bunkers. Embrasures were only built at the front in special cases and were then protected with heavy metal doors. This construction phase included the towns of Aachen and Saarbrücken, which were initially west of the Limes Programme defence line.
Western Air Defence Zone
The Western Air Defence Zone (Luftverteidigungszone West or LVZ West) continued parallel to the two other lines toward the east and consisted mainly of concrete flak foundations. Scattered MG 42 and MG 34 emplacements added additional defence against both air and land targets. Flak turrets were designed to force enemy planes to fly higher, thus decreasing the accuracy of their bombing. These towers were protected at close range by bunkers from the Limes and Aachen-Saar programmes.
Geldern Emplacement
The Geldern Emplacement lengthened the Siegfried Line northwards as far as Kleve on the Rhine and was built after the start of the Second World War. The Siegfried Line originally ended in the north near Brüggen in the Viersen district. The primary constructions were unarmed dugouts, but their extremely strong concrete design afforded excellent protection to the occupants. For camouflage they were often built near farms.
Elements
Standard construction elements such as large Regelbau bunkers, smaller concrete "pillboxes", and "dragon's teeth" anti-tank obstacles were built as part of each construction phase, sometimes by the thousands. Frequently vertical steel rods would be interspersed between the teeth. This standardisation was the most effective use of scarce raw materials, transport and workers, but proved an ineffective tank barrier as US bulldozers simply pushed bridges of soil over these devices.
"Dragon's teeth" tank traps were also known as Höcker in German ('humps' or 'pimples' in English) because of their shape. These blocks of reinforced concrete stand in several rows on a single foundation. There are two typical sorts of barrier: Type 1938 with four rows of teeth getting higher toward the back, and Type 1939 with five rows of such teeth. Many other irregular lines of teeth were also built. Another design of tank obstacle, known as the Czech hedgehog, was made by welding together several bars of steel in such a way that any tank rolling over it would get stuck and possibly damaged. If the contour of the land allowed it, water-filled ditches were dug instead of tank traps. Examples of this kind of defence are those north of Aachen near Geilenkirchen.
Working conditions
The early fortifications were mostly built by private firms, but the private sector was unable to provide the number of workers needed for the programmes that followed; this gap was filled by the Todt Organisation. With this organisation's help, huge numbers of forced labourers – up to 500,000 at a time – worked on the Siegfried Line. Transport of materials and workers from all across Germany was managed by the Deutsche Reichsbahn railway company, which took advantage of the well-developed strategic railway lines built on Germany's western border in World War I.
Working conditions were highly dangerous. For example, the most primitive means had to be used to handle and assemble extremely heavy armour plating, weighing up to .
Life on the building site and after work was monotonous, and many people gave up and left. Most workers received the West Wall Medal for their service.
In propaganda
German propaganda, both at home and abroad, repeatedly portrayed the Westwall during its construction as an unbreachable bulwark. At the start of the war, the opposing troops remained behind their own defence lines.
As a morale booster for British troops marching off to France, the Siegfried Line was the subject of a popular song: "We're Going to Hang out the Washing on the Siegfried Line".
A French version by Ray Ventura ("On ira pendre notre linge sur la ligne Siegfried") met a great success during the Phoney War (Drôle de guerre).
When asked about the Siegfried Line, General George S. Patton reportedly said "Fixed fortifications are monuments to man's stupidity."
See also
Similar border fortifications
Festungsfront Oder-Warthe-Bogen or Ostwall
Atlantic Wall
Linea P (Spain)
Maginot line
Czechoslovak border fortifications
Alpine Wall
National Redoubt (Switzerland)
Mannerheim Line
Surviving elements
List of surviving elements of the Siegfried Line
Besseringen B-Werk, museum in a preserved bunker complex
Orscholz Switch (aka Siegfried Switch), part of Siegfried Line and scene of heavy fighting between German and US troops
Regelbau, standard bunker construction
Siegfried Line Museum, Pirmasens
References
Further reading
Kauffmann, J.E. and Jurga, Robert M. Fortress Europe: European Fortifications of World War II, Da Capo Press, 2002.
– full text
External links
BunkerBlog: All about German fortifications 1933–1945
Bunkersite.com: About bunkers built by the Germans during 1933–1945 in the whole of Europe
http://www.westwallmuseum-irrel.de/
German Doctrine of the Stabilized Front, Report by US Military Intelligence Division, August 1943
Bunkers in Europe (include: Siegfried Line)
Pillbox Warfare in the Siegfried Line
Storming Simserhof near Bitche – 1944
»You enter Germany: Bloody Huertgen and the Siegfried Line« – Documentary by Achim Konejung and Aribert Weis; 2007
Der Weltkrieg war vor deiner Tuer – The little Siegfried line (German: WMTS Wetterau-Main-Tauber-Stellung) in the east of the Siegfried line
German World War II defensive lines
Historic defensive lines
Military installations of the Wehrmacht
World War II sites in Germany
Rhine Province | Siegfried Line | [
"Engineering"
] | 3,003 | [
"Siegfried Line",
"Military engineering",
"Fortification lines",
"Historic defensive lines"
] |
332,770 | https://en.wikipedia.org/wiki/Lila%20%28Hinduism%29 | Lila or leela can be loosely translated as "divine play". The concept of lila asserts that creation, instead of being an objective for achieving any purpose, is rather an outcome of the playful nature of the divine. As the divine is perfect, it could have no want fulfilled, thereby signifying freedom, instead of necessity, behind the creation.
The concept of lila is common to both non-dualist and dualist philosophical schools of Indian philosophy, but has a markedly different significance in each. Within non-dualism, lila is a way of describing all reality, including the cosmos, as the outcome of creative play by the divine absolute (Brahman). In Vaishnavism, lila refers to the activities of God and devotee, as well as the macrocosmic actions of the manifest universe.
Translation
There are multiple theories about the derivation of lila. It may be derived from the Sanskrit root lal, which suggests playfulness of children or someone delicate.
According to Edwin Bryant, lila cannot be translated as "sport" or "game," since those words suggest a motivation of competition. In contrast, lila is "pure play, or spontaneous pastime,” which has no purpose other than experiencing joy.
Appearance in texts
Lila first appears in the Brahmasūtra 2.1.33 as "lokavat tu līlākaivalyam" (However, [it is] but līlā, as [occurs] in daily experience.) This sutra responds to the objection that Brahman is not the cause of the world because causation requires motive. The reason given is that Brahman's act of creation is lila, in the same way lila takes place in the world. Shankara, in his commentary, likens Brahman to a king whose needs have been fulfilled, but engages in recreational activity. In another comparison, he says that it is Brahman's nature to create freely as it is our nature to inhale and exhale. Further, lila is not a necessary attribute of Brahman i.e. Brahman does not have to engage in lila.
In Vaishnavism, lila refers to the activities of God and his devotee, as well as the macrocosmic actions of the manifest universe, as seen in Srimad Bhagavatam, verse 3.26.4: "sa eṣa prakṛtiḿ sūkṣmāḿ daivīḿ guṇamayīḿ vibhuḥ yadṛcchayaivopagatām abhyapadyata līlayā" — "As his pastimes, that Supreme Divine Personality, the greatest of the great, accepted the subtle material energy which is invested with three material modes of nature."
Interpretations
Hindu denominations differ on how a human should react to awareness of lila.
In Pushtimarga worship, devotees experience the sentiments of lila through practices such as adorning the image of Krishna, singing devotional songs, and offering food.
According to a Gaudiya interpretation of the Bhagavat Purana, Krishna's lilas on earth are a manifest counterpart to his unmanifest eternal lila in his abode.
Other uses
Lila also includes Raslila plays in which human actors re-enact Krishna and Rama's divine play to remember the deities and experience their presence.
Lila is comparable to the Western theological position of Pandeism, which describes the Universe as God taking a physical form in order to experience the interplay between the elements of the Universe.
"The Lila Solution" is a proposed answer to the problem of evil. It suggests that God cannot be blamed for sufferings because God is simply playing without any motivation. Lipner argues that since God is not "playful" by nature, but effortlessly acts as such, God maintains the law of karma and rebirth even while playing.
See also
Avatar
Ludus amoris, western mystical conception of divine play
The Mysterious Pastimes of Mohini Murti
Radha Ramana
Ramlila
Rasa lila
Trimurti (Brahma, Vishnu, Shiva)
References
Further reading
Philosophies of India, Heinrich Zimmer and Joseph Campbell, Princeton University Press, 1969.
The Integral Advaitism of Sri Aurobindo, Ram Shanker Misra, Motilal Banarsidass Publishers Pvt Ltd, Delhi, 1998.
The Domain of Constant Excess: Plural Worship at the Munnesvaram Temples in Sri Lanka, Rohan Bastin, Berghahn Books, 2002.
Purifying the Earthly Body of God: Religion and Ecology in Hindu Indi, Lance E. Nelson, State University of New York Press, 1998.
The Gods at Play: Lila in South Asia, William Sturman Sax, ed., Oxford University Press, 1995, .
"Playing", Richard Schechner, Play & Culture, 1988, Vol. 1, pp. 3–19.
The Gods at Play: Lila in South Asia, David Mason, Palgrave Macmillan, 2009.
External links
Maha Lilah : Portuguese version of Gyan Chaupad
A Here-Now glossary entry
Shirdi Sai Baba Lila
Hindu philosophical concepts
Reality
Play (activity) | Lila (Hinduism) | [
"Biology"
] | 1,040 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
332,902 | https://en.wikipedia.org/wiki/Olympiad | An olympiad (Olympiás) is a period of four years, particularly those associated with the ancient and modern Olympic Games.
Although the ancient Olympics were established during Greece's Archaic Era, it was not until Hippias that a consistent list was established and not until Ephorus in the Hellenistic period that the first recorded Olympic contest was used as a calendar epoch. Ancient authors agreed that other Olympics had been held before the race won by Coroebus but disagreed on how many; the convention was established to place Coroebus's victory at a time equivalent to the summer of 776 BC in the Proleptic Julian calendar, and to treat it as Year 1 of Olympiad 1. Olympiad 2 began with the next games in the summer of 772 BC.
Thus, for N less than 195, Olympiad N is reckoned as having started in the year (780 − 4N) BC and ended four years later. For N greater than or equal to 195, Olympiad N began in the year AD (4N − 779) and ended four years later. By extrapolation, the first year of the Nth Olympiad begins roughly around 2 August of the year 4N − 779.
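A small helper (Python; the function names are ours, and years BC are written as negative numbers with no year zero) expresses this reckoning:

```python
def olympiad_start_year(n):
    """Julian year in which ancient Olympiad n began; negative values mean BC."""
    if n < 195:
        return -(780 - 4 * n)          # Olympiad 1 -> -776, i.e. 776 BC
    return 4 * n - 779                 # Olympiad 195 -> AD 1

def olympiad_of(year):
    """Ancient Olympiad whose year opens in the summer of the given Julian year."""
    astronomical = year + 1 if year < 0 else year   # 776 BC -> -775 (no year zero)
    return (astronomical + 775) // 4 + 1

assert olympiad_start_year(1) == -776
assert olympiad_start_year(140) == -220   # cf. the Olympiad 140 example below
assert olympiad_start_year(195) == 1      # AD 1
assert olympiad_of(-216) == 141           # summer of 216 BC opens Olympiad 141
```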
In reference to the modern Olympics, their Olympiads are four year periods beginning on January 1 of the year of the Summer Games. Thus, the modern Olympiad I began 1 January 1896, Olympiad II began 1 January 1900, and so on. Olympiad XXXIII began 1 January 2024. Because the Julian and Gregorian calendars go directly from 1 BC to AD 1, the cycle of modern Olympiads is ahead of the ancient cycle by one year.
Ancient Olympics
Each olympiad started with the holding of the games, which originally began on the first or second full moon after the summer solstice. After the introduction of the Metonic cycle about 432 BC, the start of the games was determined slightly differently. Within each olympiad, time was reckoned by referring to its 1st, 2nd, 3rd, or 4th year. Ancient writers sometimes describe their Olympiads as lasting five years but do so by counting inclusively; in fact each comprised a four year period. For example, the first year of Olympiad 140 began in the summer of 220 BC and lasted until the middle of 219 BC. After the 2nd, 3rd, and 4th years of Olympiad 140, the games in the summer of 216 BC would begin the first year of Olympiad 141.
Historians
The sophist Hippias was the first writer to compile a comprehensive list of the Olympic victors (olympioníkes). Although his Olympic Record (Olympionikō̂n Anagraphḗ) is now entirely lost, it apparently formed the basis of all later Olympic dating. By the time of Eratosthenes, his dating of Coroebus's victory to 776 BC had been generally accepted. The panhellenic nature of the games, their regular schedule, and the improved victor list allowed Greek historians to use the Olympiads as a way of reckoning time that did not depend on the various calendars of the city-states. (See e.g. the Attic calendar of the Athenians.) The first to do so consistently was Timaeus of Tauromenium in the third century BC. Nevertheless, since for events of the early history of the games the reckoning was used in retrospect, some of the dates given by later historians for events before the 5th century BC are very unreliable. In the 2nd century, Phlegon of Tralles summarized the events of each Olympiad in a book called Olympiads; fragments survive in the work of the Byzantine writer Photius. Christian chroniclers continued to use this Greek system of dating as a way of synchronizing biblical events with Greek and Roman history. In the 3rd century, Sextus Julius Africanus compiled a list of Olympic victors up to AD 217, and this list has been preserved in the Chronicle of Eusebius.
Examples of Ancient Olympiad dates
Early historians sometimes used the names of Olympic victors as a method of dating events to a specific year. For instance, Thucydides says in his account of the year 428 BC: "It was the Olympiad in which the Rhodian Dorieus gained his second victory."
Dionysius of Halicarnassus dates the foundation of Rome to the first year of the seventh Olympiad, 752 and 751 BC. Since Rome was founded on April 21, which was in the last half of the ancient Olympic year, it would be 751 BC specifically. In Book 1 chapter 75 Dionysius states: "...Romulus, the first ruler of the city, began his reign in the first year of the seventh Olympiad, when Charops at Athens was in the first year of his ten-year term as archon."
Diodorus Siculus dates the Persian invasion of Greece to 480 BC: "Calliades was archon in Athens, and the Romans made Spurius Cassius and Proculus Verginius Tricostus consuls, and the Eleians celebrated the Seventy-fifth Olympiad, that in which Astylus of Syracuse won the stadion. It was in this year that king Xerxes made his campaign against Greece."
Jerome, in his Latin translation of the Chronicle of Eusebius, dates the birth of Jesus Christ to year 3 of Olympiad 194, the 42nd year of the reign of the emperor Augustus, which equates to the year 2 BC.
Anolympiad
Though the games were held without interruption, on more than one occasion they were held by others than the Eleians. The Eleians declared such games Anolympiads (non-Olympics), but it is assumed the winners were nevertheless recorded.
End of the era
During the 3rd century, records of the games are so scanty that historians are not certain whether after 261 they were still held every four years. Some winners were recorded though, until the 293rd and last Olympiad of 393. In 394, Roman Emperor Theodosius I outlawed the games at Olympia as pagan. Though it would have been possible to continue the reckoning by just counting four-year periods, by the middle of the 5th century reckoning by Olympiads had ceased.
Modern Olympics
Start and end
The Summer Olympics are more correctly referred to as the Games of the Olympiad. The first poster to announce the games using this term was the one for the 1932 Summer Olympics, in Los Angeles, using the phrase: Call to the games of the Xth Olympiad.
The modern Olympiad is a period of four years: the first Olympiad started on 1 January 1896, and an Olympiad starts on 1 January of the years evenly divisible by four.
This means that the count of the Olympiads continues, even if Olympic Games are cancelled: For instance, the regular intervals would have meant (summer) Olympic Games should have occurred in 1940 and 1944, but both were cancelled due to World War II.
Nonetheless, the count of the Olympiads continued: The 1936 Games were those of the XI Olympiad, while the next Summer Games were those of 1948, which were the Games of the XIV Olympiad. The current Olympiad is the XXXIII of the modern era, which began on 1 January 2024.
Note, however, that the official numbering of the Winter Olympics does not count Olympiads, it counts only the Games themselves.
For example, the first Winter Games, in 1924, are not designated as Winter Games of the VII Olympiad, but as the I Winter Olympic Games. (The first Winter Games were termed as "Olympic" in a later year.)
The 1936 Summer Games were the Games of the XI Olympiad. After the 1940 and 1944 Summer Games were canceled due to World War II, the Games resumed in 1948 as the Games of the XIV Olympiad. However, the 1936 Winter Games were the IV Winter Olympic Games, and on the resumption of the Winter Games in 1948, the event was designated the V Winter Olympic Games.
The 2020 Summer Games were the Games of the XXXII Olympiad. On 24 March 2020, due to the COVID-19 pandemic, it was postponed to 2021 rather than cancelled, and thus becoming the first postponement in the 124-year history of the Olympics.
Some media people have from time to time referred to a particular (e.g., the nth) Winter Olympics as "the Games of the nth Winter Olympiad", perhaps believing it to be the correct formal name for the Winter Games by analogy with that of the Summer Games. Indeed, at least one IOC-published article has applied this nomenclature as well. This analogy is sometimes extended further by media references to "Summer Olympiads".
However, the IOC does not seem to make an official distinction between Olympiads for the summer and winter games, and such usage, particularly for the Winter Olympics, is inconsistent with the numbering discussed above.
Quadrennium
Some Olympic committees use the term quadrennium, which they claim refers to the same four-year period. However, they indicate these quadrennia in calendar years, starting with the first year after the Summer Olympics and ending with the year the next Olympics are held. This would suggest a more precise period of four years, but, for example, the 2001–2004 quadrennium would then not be exactly the same period as the XXVII Olympiad, which was 2000–2003.
Cultural Olympiad
A Cultural Olympiad is a concept protected by the International Olympic Committee and may be used only within the limits defined by an Organizing Committee for the Olympic Games. From one Games to the next, the scale of the Cultural Olympiad varies considerably, sometimes involving activity over the entire Olympiad and other times emphasizing specific periods within it. Baron Pierre de Coubertin established the principle of Olympic Art Competitions at a special congress in Paris in 1906, and the first official programme was presented during the 1912 Games in Stockholm. These competitions were also named the "Pentathlon of the Muses", as their purpose was to bring artists to present their work and compete for "art" medals across five categories: architecture, music, literature, sculpture and painting.
Nowadays, while there are no competitions as such, cultural and artistic practice is displayed via the Cultural Olympiad. The 2010 Winter Olympics in Vancouver presented the Cultural Olympiad Digital Edition. The 2012 Olympics included an extensive Cultural Olympiad with the London 2012 Festival in the host city, and events elsewhere including the World Shakespeare Festival produced by the RSC. The 2016 games' Cultural Olympiad was scaled back due to Brazil's recession; there was no published programme, with director Carla Camurati promising "secret" and "spontaneous" events such as flash mobs. Cultural events in time for the 2020 Summer Olympics in Tokyo were planned before being canceled due to pandemic restrictions in Japan. Instead, an alternative virtual event was held.
Other uses
The English term is still often used popularly to indicate the games themselves, a usage that is uncommon in ancient Greek (as an Olympiad is most often the time period between and including sets of games). It is also used to indicate international competitions other than physical sports. This includes international science olympiads, such as the International Geography Olympiad, International Mathematical Olympiad, International Forensics Olympiad, and the International Linguistics Olympiad and their associated national qualifying tests (e.g., the United States of America Mathematical Olympiad, the USA Forensics Olympiad or the United Kingdom Linguistics Olympiad), and also events in mind-sports, such as the Science Olympiad, Mindsport Olympiad, Chess Olympiad, International History Olympiad and Computer Olympiad. In these cases Olympiad is used to indicate a regular event of international competition for top achieving participants; it does not necessarily indicate a four-year period.
In some languages, like Czech and Slovak, Olympiad () is the correct term for the games.
The Olympiad (L'Olimpiade) is also the name of some 60 operas set in Ancient Greece.
Notes
General
Specific
References
External links
Chris Bennett, The Olympiad System, on tyndalehouse.com
Valerie Vaughan, The Origin of the Olympics: Ancient Calendars and the Race Against Time (2002) on OneReed.com, an astrologically-oriented site.
Hellenic Month Established Per Athens
Ancient Olympic Games
Calendar eras
Olympic culture
Units of time | Olympiad | [
"Physics",
"Mathematics"
] | 2,461 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
332,907 | https://en.wikipedia.org/wiki/Ornithopter | An ornithopter (from Greek ornis, ornith- 'bird' and pteron 'wing') is an aircraft that flies by flapping its wings. Designers sought to imitate the flapping-wing flight of birds, bats, and insects. Though machines may differ in form, they are usually built on the same scale as flying animals. Larger, crewed ornithopters have also been built and some have been successful. Crewed ornithopters are generally powered either by engines or by the pilot.
Early history
Some early crude flight attempts may have been intended to achieve flapping-wing flight, but probably only a glide was actually achieved. They include the purported flights of the 11th-century Catholic monk Eilmer of Malmesbury (recorded in the 12th century) and the 9th-century poet Abbas Ibn Firnas (recorded in the 17th century). Roger Bacon, writing in 1260, was also among the first to consider a technological means of flight. In 1485, Leonardo da Vinci began to study the flight of birds. He grasped that humans are too heavy, and not strong enough, to fly using wings simply attached to the arms. He, therefore, sketched a device in which the aviator lies down on a plank and works two large, membranous wings using hand levers, foot pedals, and a system of pulleys.
In 1841, an ironsmith kalfa (journeyman), Manojlo, who "came to Belgrade from Vojvodina", attempted flying with a device described as an ornithopter ("flapping wings like those of a bird"). Refused by the authorities a permit to take off from the belfry of Saint Michael's Cathedral, he clandestinely climbed to the rooftop of the Dumrukhana (import tax head office) and took off, landing in a heap of snow, and surviving.
The first ornithopters capable of flight were constructed in France. Jobert in 1871 used a rubber band to power a small model bird. Alphonse Pénaud, Abel Hureau de Villeneuve, and Victor Tatin also made rubber-powered ornithopters during the 1870s. Tatin's ornithopter was perhaps the first to use active torsion of the wings, and apparently it served as the basis for a commercial toy offered by Pichancourt in 1889. Gustave Trouvé was the first to use internal combustion, and his 1890 model flew a distance of 80 meters in a demonstration for the French Academy of Sciences. The wings were flapped by gunpowder charges activating a Bourdon tube.
From 1884 on, Lawrence Hargrave built scores of ornithopters powered by rubber bands, springs, steam, or compressed air. He introduced the use of small flapping wings providing the thrust for a larger fixed wing; this innovation eliminated the need for gear reduction, thereby simplifying the construction.
E. P. Frost made ornithopters starting in the 1870s; first models were powered by steam engines, then in the 1900s, an internal-combustion craft large enough for a person was built, though it did not fly.
In the 1930s, Alexander Lippisch and the National Socialist Flyers Corps of Nazi Germany constructed and successfully flew a series of internal combustion-powered ornithopters, using Hargrave's concept of small flapping wings, but with aerodynamic improvements resulting from the methodical study.
Erich von Holst, also working in the 1930s, achieved great efficiency and realism in his work with ornithopters powered by rubber bands. He achieved perhaps the first success of an ornithopter with a bending wing, intended to imitate more closely the folding wing action of birds, although it was not a true variable-span wing such as those of birds.
Around 1960, Percival Spencer successfully flew a series of uncrewed ornithopters using internal combustion engines ranging from displacement, and having wingspans up to . In 1961, Percival Spencer and Jack Stephenson flew the first successful engine-powered, remotely piloted ornithopter, known as the Spencer Orniplane. The Orniplane had a wingspan, weighed , and was powered by a -displacement two-stroke engine. It had a biplane configuration, to reduce oscillation of the fuselage.
Crewed flight
Crewed ornithopters fall into two general categories: Those powered by the muscular effort of the pilot (human-powered ornithopters), and those powered by an engine.
Around 1894, Otto Lilienthal, an aviation pioneer, became famous in Germany for his widely publicized and successful glider flights. Lilienthal also studied bird flight and conducted some related experiments. He constructed an ornithopter, although its complete development was prevented by his untimely death on 9 August 1896 in a glider accident.
In 1929, a man-powered ornithopter designed by Alexander Lippisch (designer of the Messerschmitt Me 163 Komet) flew a distance of after tow launch. Since a tow launch was used, some have questioned whether the aircraft was capable of flying on its own. Lippisch asserted that the aircraft was actually flying, not making an extended glide. (Precise measurement of altitude and velocity over time would be necessary to resolve this question.) Most of the subsequent human-powered ornithopters likewise used a tow launch, and flights were brief simply because human muscle power diminishes rapidly over time.
In 1942, Adalbert Schmid made a much longer flight of a human-powered ornithopter at Munich-Laim. It travelled a distance of , maintaining a height of throughout most of the flight. Later this same aircraft was fitted with a Sachs motorcycle engine. With the engine, it made flights up to 15 minutes in duration. Schmid later constructed a ornithopter, based on the Grunau-Baby IIa sailplane, which was flown in 1947. The second aircraft had flapping outer wing panels.
The French engineer René Riout devoted himself for three decades to the realization of flapping-wing ornithopters. In 1905 he built his first models. In 1909 he won the gold medal in the Lépine competition for a scale model. In 1913 he worked on the development of a model ordered by a pilot, the Dubois-Riout. The tests were stopped in 1916. In 1937, he finalized the Riout 102T Alérion, arguably the most successful piloted flapping-wing ornithopter until the second decade of the 21st century. Unfortunately, the conclusions of the wind tunnel tests were not favorable to the continuation of the project.
In 2005, Yves Rousseau was given the Paul Tissandier Diploma, awarded by the FAI for contributions to the field of aviation. Rousseau attempted his first human-muscle-powered flight with flapping wings in 1995. On 20 April 2006, at his 212th attempt, he succeeded in flying a distance of , observed by officials of the Aero Club de France. On his 213th flight attempt, a gust of wind led to a wing breaking up, causing the pilot to be gravely injured and rendered paraplegic.
A team at the University of Toronto Institute for Aerospace Studies, headed by Professor James DeLaurier, worked for several years on an engine-powered, piloted ornithopter. In July 2006, at the Bombardier Airfield at Downsview Park in Toronto, Professor DeLaurier's machine, the UTIAS Ornithopter No.1 made a jet-assisted takeoff and 14-second flight. According to DeLaurier, the jet was necessary for sustained flight, but the flapping wings did most of the work.
On August 2, 2010, Todd Reichert of the same institution piloted a human-powered ornithopter named Snowbird. The wingspan, aircraft was constructed from carbon fibre, balsa, and foam. The pilot sat in a small cockpit suspended below the wings and pumped a bar with his feet to operate a system of wires that flapped the wings up and down. Towed by a car until airborne, it then sustained flight for almost 20 seconds. It flew with an average speed of . Similar tow-launched flights were made in the past, but improved data collection verified that the ornithopter was capable of self-powered flight once aloft.
Applications for uncrewed ornithopters
Because ornithopters can be made to resemble birds or insects, they could be used for military applications such as aerial reconnaissance without alerting the enemies that they are under surveillance. Several ornithopters have been flown with video cameras on board, some of which can hover and maneuver in small spaces. In 2011, AeroVironment demonstrated a remotely piloted ornithopter resembling a large hummingbird for possible spy missions.
Led by Paul B. MacCready (of Gossamer Albatross fame), AeroVironment developed a half-scale radio-controlled model of the giant pterosaur, Quetzalcoatlus northropi, for the Smithsonian Institution in the mid-1980s. It was built to star in the IMAX movie On the Wing. The model had a wingspan and featured a complex computerized autopilot control system, just as the full-sized pterosaur relied on its neuromuscular system to make constant adjustments in flight.
Researchers hope to eliminate the motors and gears of current designs by more closely imitating animal flight muscles. Georgia Tech Research Institute's Robert C. Michelson is developing a reciprocating chemical muscle for use in microscale flapping-wing aircraft. Michelson uses the term "entomopter" for this type of ornithopter. SRI International is developing polymer artificial muscles that may also be used for flapping-wing flight.
In 2002, Krister Wolff and Peter Nordin of Chalmers University of Technology in Sweden, built a flapping-wing robot that learned flight techniques. The balsa-wood design was driven by machine learning software technology known as a steady-state linear evolutionary algorithm. Inspired by natural evolution, the software "evolves" in response to feedback on how well it performs a given task. Although confined to a laboratory apparatus, their ornithopter evolved behavior for maximum sustained lift force and horizontal movement.
Since 2002, Prof. Theo van Holten has been working on an ornithopter that is constructed like a helicopter. The device is called the "ornicopter" and was made by constructing the main rotor so that it would have no reaction torque.
In 2008, Amsterdam Airport Schiphol started using a realistic-looking mechanical hawk designed by falconer Robert Musters. The radio-controlled robot bird is used to scare away birds that could damage the engines of airplanes.
In 2012, RoBird (formerly Clear Flight Solutions), a spin-off of the University of Twente, started making artificial birds of prey (called RoBird®) for airports and agricultural and waste-management industries.
Adrian Thomas and Alex Caccia founded Animal Dynamics Ltd in 2015, to develop a mechanical analogue of dragonflies to be used as a drone that will outperform quadcopters. The work is funded by the Defence Science and Technology Laboratory, the research arm of the British Ministry of Defence, and the United States Air Force.
Hobby
Hobbyists can build and fly their own ornithopters. These range from light-weight models powered by rubber bands, to larger models with radio control.
Rubber-band-powered models can be fairly simple in design and construction, and hobbyists compete for the longest flight times with them. Introductory models are straightforward to build, but the advanced competition designs are extremely delicate and challenging to construct. Roy White holds the United States national record for indoor rubber-powered ornithopters, with a flight time of 21 minutes, 44 seconds.
Commercial free-flight rubber-band-powered toy ornithopters have long been available. The first of these was sold under the name Tim Bird in Paris in 1879. Later models were also sold as Tim Bird (made by G de Ruymbeke, France, since 1969).
Commercial radio-controlled designs stem from Percival Spencer's engine-powered Seagulls, developed circa 1958, and Sean Kinkade's work in the late 1990s to present day. The wings are usually driven by an electric motor. Many hobbyists enjoy experimenting with their own new wing designs and mechanisms. The opportunity to interact with real birds in their own domain also adds great enjoyment to this hobby. Birds are often curious and will follow or investigate the model while it is flying. In a few cases, RC birds have been attacked by birds of prey, crows, and even cats. More recent cheaper models such as the Dragonfly from WowWee have extended the market from dedicated hobbyists to the general toy market.
Some helpful resources for hobbyists include The Ornithopter Design Manual, a book written by Nathan Chronister, and The Ornithopter Zone web site, which includes a large amount of information about building and flying these models.
Ornithopters were also of interest as the subject of one of the former events in the American nationwide Science Olympiad event list. The event ("Flying Bird") entailed building a self-propelled ornithopter to exacting specifications, with points awarded for high flight time and low weight. Bonus points were also awarded if the ornithopter happened to look like a real bird.
Aerodynamics
As demonstrated by birds, flapping wings offer potential advantages in maneuverability and energy savings compared with fixed-wing aircraft, as well as potentially vertical take-off and landing. It has been suggested that these advantages are greatest at small sizes and low flying speeds, but the development of comprehensive aerodynamic theory for flapping remains an outstanding problem due to the complex non-linear nature of such unsteady separating flows.
Unlike airplanes and helicopters, the driving airfoils of the ornithopter have a flapping or oscillating motion, instead of rotary. As with helicopters, the wings usually have a combined function of providing both lift and thrust. Theoretically, the flapping wing can be set to zero angle of attack on the upstroke, so it passes easily through the air. Since typically the flapping airfoils produce both lift and thrust, drag-inducing structures are minimized. These two advantages potentially allow a high degree of efficiency.
Wing design
If future crewed motorized ornithopters are to progress beyond "exotic", imaginary aircraft and serve as practical members of the aircraft family, designers and engineers will need to solve not only wing design problems but many other problems involved in making them safe and reliable aircraft. Some of these problems, such as stability, controllability, and durability, are common to all aircraft. Other problems specific to ornithopters will appear; optimizing flapping-wing design is only one of them.
An effective ornithopter must have wings capable of generating both thrust, the force that propels the craft forward, and lift, the force (perpendicular to the direction of flight) that keeps the craft airborne. These forces must be strong enough to counter the effects of drag and the weight of the craft.
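To make the force requirement concrete, a minimal cycle-averaged force-balance check is sketched below in Python; the numeric values are invented placeholders for a small model, not data from any aircraft described here.

```python
G = 9.81  # gravitational acceleration, m/s^2

def can_sustain_level_flight(mean_lift_n, mean_thrust_n, mass_kg, mean_drag_n):
    """Cycle-averaged check for steady level flight:
    lift must at least support the weight, and thrust must at least equal drag."""
    weight_n = mass_kg * G
    return mean_lift_n >= weight_n and mean_thrust_n >= mean_drag_n

# Hypothetical numbers for a small flapping-wing model, purely for illustration
print(can_sustain_level_flight(mean_lift_n=4.5, mean_thrust_n=0.6,
                               mass_kg=0.4, mean_drag_n=0.5))  # True
```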
Leonardo's ornithopter designs were inspired by his study of birds; he conceived of flapping motion as a way to generate thrust and provide the forward motion necessary for aerodynamic lift. However, with the materials available at that time, the craft would have been too heavy and would have required too much energy to produce sufficient lift or thrust for flight. Alphonse Pénaud introduced the idea of a powered ornithopter in 1874. His design had limited power and was uncontrollable, so it ended up as a toy for children. More recent vehicles, such as the human-powered ornithopters of Lippisch (1929) and Emiel Hartman (1959), were capable powered gliders but required a towing vehicle in order to take off and may not have been capable of generating sufficient lift for sustained flight. Hartman's ornithopter lacked the theoretical background of others based on the study of winged flight, but exemplified the idea of an ornithopter as a birdlike machine rather than a machine that directly copies birds' method of flight. The 1960s saw powered uncrewed ornithopters of various sizes capable of achieving and sustaining flight, providing valuable real-world examples of mechanical winged flight. In 1991, Harris and DeLaurier flew the first successful engine-powered remotely piloted ornithopter in Toronto, Canada. In 1999, a piloted ornithopter based on this design flew, capable of taking off from level pavement and executing sustained flight.
An ornithopter's flapping wings and their motion through the air are designed to maximize the amount of lift generated within limits of weight, material strength and mechanical complexity. A flexible wing material can increase efficiency while keeping the driving mechanism simple. In wing designs with the spar sufficiently forward of the airfoil that the aerodynamic center is aft of the elastic axis of the wing, aeroelastic deformation causes the wing to move in a manner close to its ideal efficiency (in which pitching angles lag plunging displacements by approximately 90 degrees.) Flapping wings increase drag and are not as efficient as propeller-powered aircraft. Some designs achieve increased efficiency by applying more power on the down stroke than on the upstroke, as do most birds.
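The roughly 90-degree lag between pitching and plunging can be written down as simple sinusoidal kinematics. The sketch below assumes pure sinusoidal motion, and the frequency and amplitudes are arbitrary illustration values rather than figures from any particular design.

```python
import math

def wing_kinematics(t, freq_hz=3.0, plunge_amp_m=0.10, pitch_amp_rad=0.20, lag_deg=90.0):
    """Plunge displacement and pitch angle at time t, with pitch lagging plunge by lag_deg."""
    omega = 2.0 * math.pi * freq_hz
    plunge = plunge_amp_m * math.sin(omega * t)
    pitch = pitch_amp_rad * math.sin(omega * t - math.radians(lag_deg))
    return plunge, pitch

# Sample a few instants within one flapping cycle
period = 1.0 / 3.0
for i in range(5):
    t = i * period / 8.0
    h, theta = wing_kinematics(t)
    print(f"t = {t:.3f} s   plunge = {h:+.3f} m   pitch = {theta:+.3f} rad")
```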
In order to achieve the desired flexibility and minimum weight, engineers and researchers have experimented with wings built from carbon fiber, plywood, fabric, and ribs, with a stiff, strong trailing edge. Any mass located aft of the empennage reduces the wing's performance, so lightweight materials and empty space are used where possible. To minimize drag and maintain the desired shape, the choice of material for the wing surface is also important. In DeLaurier's experiments, a smooth aerodynamic surface with a double-surface airfoil was more efficient at producing lift than a single-surface airfoil.
Other ornithopters do not necessarily act like birds or bats in flight. Typically birds and bats have thin and cambered wings to produce lift and thrust. Ornithopters with thinner wings have a limited angle of attack but provide optimum minimum-drag performance for a single lift coefficient.
Although hummingbirds fly with fully extended wings, such flight is not feasible for an ornithopter. If an ornithopter wing were fully extended and twisted and flapped in small movements, it would stall; if it twisted and flapped in very large motions, it would act like a windmill, resulting in inefficient flight.
A team of engineers and researchers called "Fullwing" has created an ornithopter that has an average lift of over 8 pounds, an average thrust of 0.88 pounds, and a propulsive efficiency of 54%. The wings were tested in a low-speed wind tunnel measuring the aerodynamic performance, showing that the higher the frequency of the wing beat, the higher the average thrust of the ornithopter.
In fiction
Ornithopters have been depicted in fiction several times, including Frank Herbert's Dune series, where they are the primary form of air transportation used by House Atreides in the desert climate of the planet Arrakis.
See also
Cyclogyro
Gyroplane
Human-powered aircraft
Insectothopter
Micro air vehicle
Micromechanical Flying Insect
Nano Hummingbird
Rotary-wing aircraft
References
Further reading
Chronister, Nathan. (1999). The Ornithopter Design Manual. Published by The Ornithopter Zone.
Mueller, Thomas J. (2001). "Fixed and flapping wing aerodynamics for micro air vehicle applications". Virginia: American Inst. of Aeronautics and Astronautics.
Azuma, Akira (2006). "The Biokinetics of Flying and Swimming". Virginia: American Institute of Aeronautics and Astronautics, 2nd Edition.
DeLaurier, James D. "The Development and Testing of a Full-Scale Piloted Ornithopter." Canadian Aeronautics and Space Journal. 45. 2 (1999), 72–82. (accessed November 30, 2010).
Warrick, Douglas, Bret Tobalske, Donald Powers, and Michael Dickinson. "The Aerodynamics of Hummingbird Flight." American Institute of Aeronautics and Astronautics 1–5. Web. 30 Nov 2010.
Crouch, Tom D. Aircraft of the National Air and Space Museum. Fourth ed. Lilienthal Standard Glider. Smithsonian Institution, 1991.
Bilstein, Roger E. Flight in America 1900–1983. First ed. Gliders and Airplanes. Baltimore, Maryland: Johns Hopkins University Press, 1984. (pages 8–9)
Crouch, Tom D. Wings. A History of Aviation from Kites to the Space Age. First ed. New York: W.W. Norton & Company, Inc., 2003. (pages 44–53)
Anderson, John D. A history of aerodynamics and its impact on flying machines. Cambridge: United Kingdom, 1997.
External links
Two-minute flight of an eight-foot radio-controlled ornithopter
Aircraft configurations | Ornithopter | [
"Engineering"
] | 4,406 | [
"Aircraft configurations",
"Aerospace engineering"
] |
332,950 | https://en.wikipedia.org/wiki/Longevity%20myths | Longevity myths are traditions about long-lived people (generally supercentenarians), either as individuals or groups of people, and practices believed to confer longevity, but for which current scientific evidence supports neither the ages claimed nor the reasons given for the claims. While literal interpretations of such myths may appear to indicate extraordinarily long lifespans, experts believe such figures may be the result of incorrect translations of number systems through various languages, coupled with the cultural and symbolic significance of certain numbers.
The phrase "longevity tradition" may include "purifications, rituals, longevity practices, meditations, and alchemy" that have been believed to confer greater human longevity, especially in Chinese culture.
Modern science indicates various ways in which genetics, diet, and lifestyle affect human longevity. It also allows us to determine the age of human remains with a fair degree of precision.
The record for the maximum verified lifespan in the modern world is years for women (Jeanne Calment) and 116 years for men (Jiroemon Kimura). Some scientists estimate that under the most ideal conditions people can live up to 127 years. This does not exclude the theoretical possibility that, given a fortunate combination of mutations, a person could live longer. Though the human lifespan is one of the longest in nature, there are animals that live longer. For example, some individuals of the Galapagos tortoise live more than 175 years, and some individuals of the bowhead whale more than 200 years. Some scientists cautiously suggest that the human body can have sufficient resources to live up to 150 years.
Extreme longevity claims in religion
Abrahamic religions
Judaism
Several parts of the Hebrew Bible, including the Torah, Joshua, Job, and Chronicles, mention individuals with very long lifespans, up to the 969 years of Methuselah.
The Sefer haYashar narrates that all of the long-lived people belonged to a special class and that Methuselah was the last member. Methuselah also lived long enough to evangelize with his grandson Noah in the antediluvian world.
Christianity
Some Christian apologists explain the extreme ages in the Hebrew Bible (or Old Testament) as ancient mistranslations that converted the word "month" to "year", mistaking lunar cycles for solar ones: this would turn an age of 969 years into a more reasonable 969 lunar months, or about 78.3 solar years. Donald Etz says that the Genesis 5 numbers were multiplied by ten by a later editor.
Both these interpretations introduce an inconsistency: they would mean that the ages of the first nine patriarchs at fatherhood, ranging from 62 to 230 years in the manuscripts, would then be transformed into an implausible range such as 5 to years. Others say that the first list, of only 10 names for 1,656 years, may contain generational gaps, which would have been represented by the lengthy lifetimes attributed to the patriarchs. Nineteenth-century critic Vincent Goehlert suggests the lifetimes "represented epochs merely, to which were given the names of the personages especially prominent in such epochs, who, in consequence of their comparatively long lives, were able to acquire an exalted influence".
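The arithmetic behind the "lunar month" hypothesis, and the inconsistency just described, can be checked directly. The sketch below uses only the figures quoted above plus standard astronomical month and year lengths; it illustrates the argument rather than making any claim about the texts themselves.

```python
SYNODIC_MONTH_DAYS = 29.53   # mean length of a lunar (synodic) month
SOLAR_YEAR_DAYS = 365.25

def as_solar_years(stated_years):
    """Reinterpret a stated 'year' count as lunar months and convert to solar years."""
    return stated_years * SYNODIC_MONTH_DAYS / SOLAR_YEAR_DAYS

# Methuselah's 969 "years" become a plausible lifespan of roughly 78 solar years...
print(round(as_solar_years(969), 1))

# ...but the same reading shrinks the patriarchs' stated ages at fatherhood
# (62 to 230 in the manuscripts) to implausibly low values.
print(round(as_solar_years(62), 1), round(as_solar_years(230), 1))
```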
Those biblical scholars that teach literal interpretation give explanations for the advanced ages of the early patriarchs. In one view, man was originally to have everlasting life, but as sin was introduced into the world by Adam, its influence became greater with each generation and God progressively shortened man's life. In a second view, before Noah's flood, a "firmament" over the earth () contributed to people's advanced ages.
The Bible's own (brief) explanation for these ages approaches the question from a different angle, explaining instead the relative shortness of normal lives in (CSB): "And the Lord said, 'My Spirit will not remain with mankind forever, because they are corrupt. Their days will be 120 years.'"
Conservative apologist William Lane Craig believes that the longevity myths should be understood as 'mytho-history', where the ages of culturally significant figures were exaggerated to make a political or theological point. He points to similar practices found in neighboring cultures such as the Babylonians and argues that both Hebrews and Babylonians were aware that human longevity was biologically unfeasible. Similar arguments were made by professor Robert Gnuse.
Here are some more modern examples of Christian longevity claims:
is said to have lived in Bivona, Italy, 1448–1578 (age ), according to the archive of Monastero di San Paolo in Bivona located in Palermo.
Around 1912, the Maharishi of Kailash was said by missionary Sadhu Sundar Singh to be a Christian hermit of over 300 years of age in a Himalayan mountain cave, with whom he spent some time in deep fellowship. Singh said the Maharishi was born in Alexandria, Egypt, and baptized by the nephew of St. Francis Xavier.
Islam
Ibrahim (إِبْرَاهِيم) was said to have lived to years. His wife Sarah is the only woman in the Old Testament whose age is given. She died at 127 ().
In the Quran, Noah allegedly lived for 950 years with his people.
According to 19th-century scholars, Abdul Azziz al-Hafeed al-Habashi (عبد العزيز الحبشي) lived 673–674 Gregorian years, or Islamic years, between 581 and 1276 AH (equivalent to 1185–1859 AD).
In Twelver Shia Islam, Hujjat-Allah al-Mahdi is believed to currently be in Occultation and still alive (age ).
Buddhism
Vipassī, the twenty-second of twenty-eight Buddhas, lived for either 80,000 or 100,000 years. In Vipassī's time, the longevity of humans was 84,000 years.
Taṇhaṅkara, the first Buddha, lived for 100,000 years.
Falun Gong
Chapter 2 of Falun Gong by Li Hongzhi (2001) states,
Hinduism
The Hindu god Rama is said to have ruled his kingdom Ayodhya for 11,000 years by the time he died according to the Ramayana.
Rama's father Dasharatha lived for more than 60,000 years according to the Ramayana.
Bhagiratha did tapas for 1000 deva or god years (360,000 years in Human years) to please Ganga, to gain the release of his 60,000 great-uncles from the curse of saint Kapila. So, Bhagiratha lived for more than 360,000 years.
The Hindu god Krishna is said to have lived for 125 years and 8 months from 3228 BCE to 3102 BCE. According to Hindu scriptures, the age of Kali Yuga began after he ascended to his abode Vaikuntha.
Ashwatthama, a hero of the Mahabharatha, is said to be over 6,000 years old and still alive.
Devraha Baba (died June 19, 1990) claimed to have lived for more than 900 years.
Trailanga Swami reportedly lived in Kashi since 1737; the journal Prabuddha Bharata puts his birth , corresponding to year 1529 of the Shaka era (age ), upon his death in 1887.
The sadhaka Lokenath Brahmachari reportedly lived 1730–1890 (age ).
Shivapuri Baba, also known as Swami Govindanath Bharati, was a Hindu saint who purportedly lived from 1826 to 1963, making him allegedly years old at the time of his death. He had 18 audiences with Queen Victoria.
Jainism
Extreme lifespans are ascribed to the Tirthankaras, for instance:
Neminatha was said to have lived for over 10,000 years before his ascension.
Naminatha was said to have lived for over 20,000 years before his ascension.
Munisuvrata was said to have lived for over 30,000 years before his ascension.
Māllīnātha was said to have lived for over 56,000 years before his ascension.
Aranatha was said to have lived for over 84,000 years before his ascension.
Kunthunatha was said to have lived for over 200,000 years before his ascension.
Shantinatha was said to have lived for over 800,000 years before his ascension.
Dharmanatha was said to have lived for over 2,500,000 years before his ascension.
Anantanatha was said to have lived for over 3,500,000 years before his ascension.
Vimalanatha was said to have lived for over 6,000,000 years before his ascension.
Vasupujya was said to have lived for over 7,200,000 years before his ascension.
Shreyansanatha was said to have lived for over 8,400,000 years before his ascension.
Sikhism
Baba Sri Chand, the founder of Udasi sect, was said to live 134 years.
Baba Biram Das, an udasi saint, is said to have lived for 321 years.
Taoism
The term xian refers to deified persons who have achieved immortality. The Old Man of the South Pole is a common archetype and symbol for longevity.
Theosophy/New Age
Mahavatar Babaji is said to be an "Unascended Master" purportedly many centuries old (said to have been born as early as 203 AD) and is claimed to live in the Himalayas. The Hindu guru Paramhansa Yogananda claimed to have met him and was supposedly one of his disciples.
Ancient extreme longevity claims
These include claims prior to , before the fall of the Roman empire.
China
Fu Xi (伏羲) was supposed to have lived for 197 years.
Lucian wrote about the "Seres" (a Chinese people), claiming they lived for over 300 years.
Zuo Ci who lived during the Three Kingdoms Period was said to have lived for 300 years.
In Chinese legend, Peng Zu was believed to have lived for over 800 years during the Yin Dynasty (殷朝, 16th to 11th centuries BC).
Emperors
The Yellow Emperor was said to have lived for 113 years.
Emperor Yao was said to have lived for 118 years.
Emperor Shun was said to have lived for 110 years.
Egypt
The Egyptian historian Manetho, drawing upon earlier sources, begins his Egyptian king list with the Graeco-Egyptian god Hephaestus (Ptah) who "was king for 9,000 years".
Greece
Macrobii ("Long-Livers") is a book devoted to longevity. It was attributed to the ancient Greek author Lucian, although it is now accepted that he could not have written it. Most examples given in it are lifespans of 80 to 100 years, but some are much longer:
Tiresias, the blind seer of Thebes, over 600 years.
Nestor, over 300 years.
Members of the "Seres" (a Chinese people), over 300 years.
According to one tradition, Epimenides of Crete (7th, 6th centuries BC) lived nearly 300 years.
Japan
Some early emperors of Japan are said to have ruled for more than a century, according to the tradition documented in the Kojiki, viz., Emperor Jimmu and Emperor Kōan.
Emperor Jimmu (traditionally, 13 February 711 BC – 11 March 585 BC) lived 126 years according to the Kojiki. These dates correspond to , on the proleptic Julian and Gregorian calendars.
Emperor Kōan, according to Nihon Shoki, lived 137 years (from 427 BC to 291 BC).
Korea
Dangun, the first ruler of Korea, is said to have been born in 2333 BCE and to have died in 425 BCE at the age of 1,908 years.
Taejo of Goguryeo (46/47 – 165) is claimed to have reigned in Korea for 93 years beginning at age 7. After his retirement, the Samguk Sagi and Samguk Yusa give his age at death as , while the Book of the Later Han states he died in 121 at age .
Persian empire
The reigns of several shahs in the Shahnameh, an epic poem by Ferdowsi, are given as longer than a century:
Zahhak, 1,000 years.
Jamshid, 700 years.
Fereydun, 500 years.
Askani, 200 years.
Kay Kāvus, 150 years.
Manuchehr, 120 years.
Lohrasp, 120 years.
Goshtasp, 120 years.
Ancient Rome
In Roman times, Pliny wrote about longevity records from the census carried out in 74 AD under Vespasian. In one region of Italy many people allegedly lived past 100; four were said to be 130, others up to 140.
Sumer
Age claims for the earliest eight Sumerian kings in the major recension of the Sumerian King List were in units and fractions of shar (3,600 years) and totaled 67 shar or 241,200 years.
In the only ten-king tablet recension of this list three kings (Alalngar, [...], kidunnu, and En-men-dur-ana) are recorded as having reigned 72,000 years together. The major recension assigns 43,200 years to the reign of En-men-lu-ana, and 36,000 years each to those of Alalngar and Dumuzid.
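The shar bookkeeping is easy to verify from the figures given above; the short check below simply restates that arithmetic.

```python
SHAR_YEARS = 3_600  # one shar

# Total for the earliest eight kings in the major recension
assert 67 * SHAR_YEARS == 241_200

# Individual reigns from the major recension, expressed in whole shar
print(43_200 // SHAR_YEARS)  # En-men-lu-ana: 12 shar
print(36_000 // SHAR_YEARS)  # Alalngar and Dumuzid: 10 shar each
```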
Vietnam
Kinh Dương Vương, the first King of Vietnam, is said to have been born in 2919 BC and to have died in 2792 BC (aged about 127 years).
Lạc Long Quân reigned from 2793 BC to 2524 BC (about 269 years).
Modern extreme longevity claims
This list includes claims of longevity of 130 and older from the 14th century onward. All birth year and age claims are alleged unless stated otherwise.
Isolated
Documented
The following cases have been documented in detail over time.
Other
The Assamese polymath Sankardev (1449–1568) allegedly lived to the age of 118.
Albrecht von Haller allegedly collected examples of 62 people ages 110–120, 29 ages 120–130, and 15 ages 130–140.
A 1973 National Geographic article on longevity reported, as a very aged people, the Burusho–Hunza people in the Hunza Valley of the mountains of Pakistan.
Swedish death registers contain detailed information on thousands of centenarians going back to 1749; the maximum age at death reported between 1751 and 1800 was 147.
Cases of extreme longevity in the United Kingdom were listed by James Easton in 1799, who covered 1,712 cases documented between 66 BC and 1799, the year of publication; Charles Hulbert also edited a book containing a list of cases in 1825.
A periodical The Aesculapian Register, written by physicians and published in Philadelphia in 1824, listed a number of cases, including several purported to have lived over 130. The authors said the list was taken from the Dublin Magazine.
Deaths officially reported in the Russian Empire in 1815 listed 1,068 centenarians, including 246 supercentenarians (50 at age 120–155 and one even older). Time magazine considered that, under the Soviet Union, longevity had been elevated to a state-supported "Methuselah cult". The USSR insisted on its citizens' unrivaled longevity by claiming 592 people (224 male, 368 female) over age 120 in a 15 January 1959 census and 100 citizens of the Russian SSR alone aged 120 to 156 in March 1960. In Time magazine's view, such claims in Georgia were fostered by Georgian-born Joseph Stalin's apparent hope that such longevity might rub off on him. Zhores A. Medvedev, who demonstrated that all 500-plus claims failed birth-record validation and other tests, said that Stalin "liked the idea that [other] Georgians lived to be 100".
An early 1812 issue of the Peterburgskaya Gazeta reported a man aged between 200 and 225 in the diocese of Yekaterinoslav (now Dnipro, Ukraine).
Medieval era
Poland
Piast Kołodziej, King of Poland, died in 861 at the alleged age of 120 (740 AD/861 AD).
Wales
Welsh bard Llywarch Hen (Heroic Elegies) died in the parish of Llanvor, traditionally about age 150.
England
Edgar Ætheling, English prince who was briefly King of England after the death of Harold Godwinson at the Battle of Hastings in late 1066. Edgar is said to have died shortly after 1126, when William of Malmesbury wrote that he "now grows old in the country in privacy and quiet". However, two pipe rolls exist from the years 1158 and 1167 which list Edgar. The historian Edward Augustus Freeman stated that this referred either to Edgar (aged at least 115), to a son of his, or to another person who bore the title Ætheling.
Practices
Diets
According to a 2021 review, there is no clinical evidence that any dietary practice contributes to longevity.
Alchemy
Traditions that have been believed to confer greater human longevity include alchemy.
Nicolas Flamel (early 1330s – ) was a 14th-century scrivener who developed a reputation as alchemist and creator of an "elixir of life" that conferred immortality upon himself and his wife Perenelle. His arcanely inscribed tombstone is preserved at the Musée de Cluny in Paris.
Fridericus (Ludovicus) Gualdus (), author of "Revelation of the True Chemical Wisdom", lived in Venice in the 1680s. His age was reported in a letter in a contemporary Dutch newspaper to be over 400. By some accounts, when asked about a portrait he carried, he said it was of himself, painted by Titian (who died in 1576), but gave no explanation and left Venice the following morning. By another account, Gualdus left Venice due to religious accusations and died in 1724. The "Compass der Weisen" alludes to him as still alive in 1782 and nearly 600 years old.
Fountain of Youth
See also
Notes
References
Bibliography
Demography
Mythological archetypes | Longevity myths | [
"Environmental_science"
] | 3,760 | [
"Demography",
"Environmental social science"
] |
332,952 | https://en.wikipedia.org/wiki/Cannula | A cannula (; Latin meaning 'little reed'; : cannulae or cannulas) is a tube that can be inserted into the body, often for the delivery or removal of fluid or for the gathering of samples. In simple terms, a cannula can surround the inner or outer surfaces of a trocar needle, thus extending the effective needle length by at least half the length of the original needle. Its size mainly ranges from 14 to 26 gauge. Cannulas of different sizes are colour-coded.
Decannulation is the permanent removal of a cannula (extubation), especially of a tracheostomy cannula, once a physician determines it is no longer needed for breathing.
Medicine
Cannulas normally come with a trocar inside. The trocar is a needle, which punctures the body in order to get into the intended space.
Intravenous cannulas are the most common in hospital use. A variety of cannulas are used to establish cardiopulmonary bypass in cardiac surgery. A nasal cannula is a piece of plastic tubing that runs under the nose and is used to administer oxygen.
Intravenous cannulation
A venous cannula is inserted into a vein, primarily for the administration of intravenous fluids, for obtaining blood samples and for administering medicines. An arterial cannula is inserted into an artery, commonly the radial artery, and is used during major operations and in critical care areas to measure beat-to-beat blood pressure and to draw repeated blood samples. Insertion of the venous cannula can be a painful procedure that can lead to anxiety and stress. Use of a vapocoolant (cold spray) immediately before cannulation reduces pain during the procedure, without increasing the difficulty of cannulation.
Complications may arise in the vein as a result of the cannulation procedure, the four main groups of complication are:
hematoma: a collection of blood, which can result from failure to puncture the vein when the cannula is inserted or when the cannula is removed. The selection of an appropriate vein and gently applying pressure slightly above the insertion point on removal of the cannula may prevent this.
infiltration: when infusate enters the subcutaneous tissue instead of the vein. To prevent this, a cannula with accurate trim distances may be used. It is essential to fix the cannula in place firmly.
embolism: this can be caused by air, a thrombus, or fragment of a catheter breaking off and entering the venous system. It can cause a pulmonary embolism. Air emboli can be avoided by making sure that there is no air in the system. A thromboembolism can be avoided by using a smaller cannula.
phlebitis: an inflammation of the vein resulting from mechanical or chemical irritation or from an infection. Phlebitis can be avoided by carefully choosing the site for cannulation and by checking the type of infusate used.
Aortic cannulation
An aortic cannula may be placed in the aorta, for example in a diseased ascending aorta, using the Seldinger technique.
Nasal cannulation and oral-nasal cannulation
A nasal cannula or an oral–nasal cannula consists of a flexible tube, usually with multiple short, open-ended branches for comfortable insertion into the nostrils and/or mouth, and may be used for the delivery of a gas (such as pure oxygen), a gas mixture (as, for example, during anesthesia), or to measure airflow into and out of the nose and/or mouth.
Tracheotomy tube
The removal of a tracheotomy tube is referred to as decannulation.
Veterinary use
A cannula is used in an emergency procedure to relieve pressure and bloating in cattle and sheep with ruminal tympany, due most commonly to their accidentally grazing wilted legume or legume-dominant pastures, particularly alfalfa, ladino, and red and white clover.
Cannulas are a component used in the insertion of the Verichip.
Much larger cannulas are used to study the digestive system of cannulated cows.
Aesthetic medicine and anti-ageing
In aesthetic medicine, a blunt-tip cannula or microcannula (also called smooth tip microcannula, blunt tipped cannula, or simply microcannula) is a small tube with an edge that is not sharp and an extrusion port or pore near the tip which is designed for atraumatic subdermal injections of fluids or gels.
Depending on the size of the internal diameter, it can be used either for the injection of cosmetic wrinkle fillers like hyaluronic acid, collagen, poly-L-lactic acid, CaHA, etc., or for fat transfer (liposuction). The American Society for Aesthetic Plastic Surgery notes additional soft tissue fillers like calcium hydroxy-apatite and polymethylmethacrylate. The advantages of using these are less pain, a lower risk of bruising, less swelling, and a better safety profile. Accidental intravascular injections are more difficult with blunt-tip microcannulas, reducing the risk of skin necrosis, ulcers, and embolization to the retinal artery, which can result in blindness. Indeed, in May 2015, the US FDA issued a warning of these risks in a Safety Communication on the "Unintentional Injection of Soft Tissue Filler Into Blood Vessels In the Face".
In January 2012, the "Dermasculpt" microcannula was approved by the FDA for use in the United States with soft tissue fillers, followed by the "Magic Needle", "Softfil", "TSK STERiGLIDE™ by Air-Tite Products", and "Sculpt-face". The primary structural differences between microcannulas are the distance of the extrusion port or pore from the tip (closer is more precise), the bluntness of the tip (a tapered blunt tip is easier for entry), and the flexibility of the shaft (enough flexibility to move around sensitive structures but enough rigidity for precise placement).
Since microcannula tips are blunt, a Pilot or Introducer needle is required for entry through the skin and the technique is to thread the microcannula through this tiny opening. Microcannula cosmetic injection techniques have been developed on how to best place cosmetic wrinkle fillers such as the Long MicroCannula Double Cross-Hatched Fan and the Wiggle Progression techniques.
In April 2016, the concept of the use of microcannula to inject more than cosmetic fillers was first published. The technique of Microcannula Injected Local Anesthesia (MILA) was described on the use of microcannula to inject local anesthesia with less pain, bruising, and swelling. Also introduced were Accelerated Healing After Platelet-Rich Plasma (AHA-PRP), Accelerated Healing After Platelet-Rich Fibrin Matrix (AHA-PRFM), and the use of microcannula to dissolve Sculptra nodules.
Body piercing
Cannulas are used in body piercing when using a standard IV needle (usually between 18GA and 12GA, although may be as large as 0GA, in which case the procedure is known as dermal punching and uses a biopsy punch without a cannula), and for inserting hooks for suspensions.
During piercing, the fistula is created by inserting the needle. The needle is then removed, leaving the cannula in place, which is sometimes trimmed down. The cannula is then removed and sterile jewelry is inserted into the fistula simultaneously, in order to minimise trauma to the fresh fistula caused by insertion of blunt-ended jewelry.
Non-medical use
In biological research, a push-pull cannula, which both withdraws and injects fluid, can be used to determine the effect of a certain chemical on a specific cell. The push part of the cannula is filled with a physiological solution plus the chemical of interest and is then injected slowly into the local cellular environment of a cell. The pull cannula then draws liquid from the extracellular medium, thus measuring the cellular response to the chemical of interest. This technique is especially used for neuroscience.
In general aviation, a cannula refers to a piece of plastic tubing that runs under the nose and is used to administer oxygen in non-pressurized aircraft flying 10,000 feet above sea level.
In synthetic chemistry, a cannula refers to a piece of stainless steel or plastic tubing used to transfer liquids or gases from one vessel to another without exposure to air. See more at Cannula transfer.
See also
Catheter
Hypodermic needle
Cannulated cow
Cannulated bar
References
External links
Decannulation, Cincinnati Children's Hospital Medical Center
Medical equipment | Cannula | [
"Biology"
] | 1,899 | [
"Medical equipment",
"Medical technology"
] |
332,989 | https://en.wikipedia.org/wiki/Vegetative%20reproduction | Vegetative reproduction (also known as vegetative propagation, vegetative multiplication or cloning) is a form of asexual reproduction occurring in plants in which a new plant grows from a fragment or cutting of the parent plant or specialized reproductive structures, which are sometimes called vegetative propagules.
Many plants naturally reproduce this way, but it can also be induced artificially. Horticulturists have developed asexual propagation techniques that use vegetative propagules to replicate plants. Success rates and difficulty of propagation vary greatly. Monocotyledons typically lack a vascular cambium, making them more challenging to propagate.
Plant propagation
Plant propagation is the process of plant reproduction of a species or cultivar, and it can be sexual or asexual. It can happen through the use of vegetative parts of the plants, such as leaves, stems, and roots to produce new plants or through growth from specialized vegetative plant parts.
While many plants reproduce by vegetative reproduction, they rarely use that method exclusively. Vegetative reproduction is not evolutionarily advantageous; it does not allow for genetic diversity and could lead plants to accumulate deleterious mutations. Vegetative reproduction is favored when it allows plants to produce more offspring per unit of resource than reproduction through seed production. In general, juveniles of a plant are easier to propagate vegetatively.
Although most plants normally reproduce sexually, many can reproduce vegetatively, or can be induced to do so via hormonal treatments. This is because meristematic cells capable of cellular differentiation are present in many plant tissues.
Vegetative propagation is usually considered a cloning method. However, root cuttings of thornless blackberries (Rubus fruticosus) will revert to thorny type because the adventitious shoot develops from a cell that is genetically thorny. Thornless blackberry is a chimera, with the epidermal layers genetically thornless but the tissue beneath it genetically thorny.
Grafting is often not a complete cloning method because seedlings are used as rootstocks. In that case, only the top of the plant is clonal. In some crops, particularly apples, the rootstocks are vegetatively propagated so the entire graft can be clonal if the scion and rootstock are both clones. Apomixis (including apospory and diplospory) is a type of reproduction that does not involve fertilization. In flowering plants, unfertilized seeds are produced, or plantlets that grow instead of flowers. Hawkweed (Hieracium), dandelion (Taraxacum), some citrus (Citrus) and many grasses such as Kentucky bluegrass (Poa pratensis) all use this form of asexual reproduction. Bulbils are sometimes formed instead of the flowers of garlic.
Mechanisms
Meristem tissue makes the process of asexual reproduction possible. It is normally found in stems, leaves, and tips of stems and roots and consists of undifferentiated cells that are constantly dividing allowing for plant growth and give rise to plant tissue systems. The meristem tissue's ability to continuously divide allows for vegetative propagation to occur.
Another important ability that allows for vegetative propagation is the ability to develop adventitious roots which arise from other vegetative parts of the plants such as the stem or leaves. These roots allow for the development of new plants from body parts from other plants.
Advantages and disadvantages
Advantages
There are several advantages of vegetative reproduction, mainly that the produced offspring are clones of their parent plants. If a plant has favorable traits, it can continue to pass down its advantageous genetic information to its offspring. It can be economically beneficial for commercial growers to clone a certain plant to ensure consistency throughout their crops. Vegetative propagation also allows plants to avoid the costly and complex process of producing sexual reproduction organs such as flowers and the subsequent seeds and fruits. Developing an ace cultivar is extremely difficult, so, once farmers develop the desired traits in, for example, a lily, they use grafting and budding to ensure the consistency of the new cultivar and its successful production on a commercial level. However, as can be seen in many variegated plants, this does not always apply, because many plants actually are chimeras and cuttings might reflect the attributes of only one or some of the parent cell lines. Vegetative propagation also allows plants to circumvent the immature seedling phase and reach the mature phase faster. In nature, that increases the chances for a plant to successfully reach maturity, and, commercially, it saves farmers a lot of time and money as it allows for faster crop overturn.
Vegetative reproduction offers research advantages in several areas of biology and has practical usage when it comes to afforestation. The most common use made of vegetative propagation by forest geneticists and tree breeders has been to move genes from selected trees to some convenient location, usually designated a gene bank, clone bank, clone-holding orchard, or seed orchard where their genes can be recombined in pedigreed offspring.
Some analyses suggest that vegetative reproduction is a characteristic which makes a plant species more likely to become invasive. Since vegetative reproduction is often faster than sexual reproduction, it "quickly increases populations and may contribute to recovery following disturbance" (such as fires and floods).
Disadvantages
A major disadvantage of vegetative propagation is that it prevents species genetic diversity which can lead to reductions in crop yields. The plants are genetically identical and are all, therefore, susceptible to pathogenic plant viruses, bacteria and fungi that can wipe out entire crops.
Types
Natural means
Natural vegetative propagation is mostly a process found in herbaceous and woody perennial plants, and typically involves structural modifications of the stem, although any horizontal, underground part of a plant (whether stem, leaf, or root) can contribute to vegetative reproduction of a plant. Most plant species that survive and significantly expand by vegetative reproduction would be perennial almost by definition, since specialized organs of vegetative reproduction, like seeds of annuals, serve to survive seasonally harsh conditions. A plant that persists in a location through vegetative reproduction of individuals over a long period of time constitutes a clonal colony.
In a sense, this process is not one of reproduction but one of survival and expansion of biomass of the individual. When an individual organism increases in size via cell multiplication and remains intact, the process is called "vegetative growth". However, in vegetative reproduction, the new plants that result are new individuals in almost every respect except genetic. Of considerable interest is how this process appears to reset the aging clock.
As previously mentioned, plants vegetatively propagate both artificially and naturally. Most common methods of natural vegetative reproduction involve the development of a new plant from specialized structures of a mature plant. In addition to adventitious roots, roots that arise from plant structures other than the root, such as stems or leaves, modified stems, leaves and roots play an important role in plants' ability to naturally propagate. The most common modified stems, leaves and roots that allow for vegetative propagation are:
Runners
Also known as stolons, runners are modified stems that, unlike rhizomes, grow from existing stems just below the soil surface. As they are propagated, the buds on the modified stems produce roots and stems. Those buds are more separated than the ones found on the rhizome.
Examples of plants that use runners are strawberries and currants.
Bulbs
Bulbs are inflated parts of the stem within which lie the central shoots of new plants. They are typically underground and are surrounded by plump and layered leaves that provide nutrients to the new plant.
Examples of plants that use bulbs are shallots, lilies and tulips.
Tubers
Tubers develop from either the stem or the root. Stem tubers grow from rhizomes or runners that swell from storing nutrients while root tubers propagate from roots that are modified to store nutrients and get too large and produce a new plant.
Examples of stem tubers are potatoes and yams and examples of root tubers are sweet potatoes and dahlias.
Corms
Corms are solid enlarged underground stems that store nutrients in their fleshy and solid stem tissue and are surrounded by papery leaves. Corms differ from bulbs in that their centers consists of solid tissue while bulbs consist of layered leaves.
Examples of plants that use corms are gladiolus and taro.
Suckers
Also known as root sprouts, suckers are plant stems that arise from buds on the base of the parent plant's stems or roots.
Examples of plants that use suckers are apple, elm, and banana trees.
Plantlets
Plantlets are miniature structures that arise from meristem in leaf margins that eventually develops roots and drop from the leaves they grew on.
An example of a plant that uses plantlets is the Bryophyllum daigremontianum (syn. Kalanchoe daigremontianum), which is also known as mother of thousands for its many plantlets.
Keikis
Keikis are additional offshoots which develop on vegetative stems or flower stalks of several orchids genera.
Examples of plants that use keikis are the Phalaenopsis, Epidendrum, and Dendrobium genera of orchids.
Apomixis
Apomixis is the process of asexual reproduction through seed, in the absence of meiosis and fertilization, generating clonal progeny of maternal origin.
Artificial means
Vegetative propagation of particular cultivars that have desirable characteristics is very common practice. It is used by farmers and horticulturalists to produce better crops with desirable qualities. The most common methods of artificial vegetative propagation are:
Cutting
A cutting is a part of the plant, usually a stem or a leaf, is cut off and planted. Adventitious roots grow from cuttings and a new plant eventually develops. Usually those cuttings are treated with hormones before being planted to induce growth.
Grafting
Grafting involves attaching a scion, or a desired cutting, to the stem of another plant called stock that remains rooted in the ground. Eventually both tissue systems become grafted or integrated and a plant with the characteristics of the grafted plant develops, e.g. mango, guava, etc.
Layering
Layering is a process which includes the bending of plant branches or stems so that they touch the ground and are covered with soil. Adventitious roots develop from the underground part of the plant, which is known as the layer. This method of vegetative reproduction also occurs naturally. Another similar method, air layering, involves the scraping and replanting of tree branches, which develop into new trees. Examples are Jasmine and Bougainvillea.
Suckering
Suckers grow and form a dense compact mat that is attached to the parent plant. Too many suckers can lead to smaller crop size, so excess suckers are pruned, and mature suckers are transplanted to a new area where they develop into new plants.
Tissue culture
In tissue culture, plant cells are taken from various parts of the plant and are cultured and nurtured in a sterilized medium. The mass of developed tissue, known as the callus, is then cultured in a hormone-laden medium and develops into plantlets, which are then planted and eventually develop into grown plants.
Offsets
An offset is the lower part of a single culm with the rhizome axis basal to it and its roots. Planting of these is the most convenient way of propagating bamboo.
See also
Micropropagation
Hemerochory
Escaped plant
References
Plant reproduction
Asexual reproduction
Cloning | Vegetative reproduction | [
"Engineering",
"Biology"
] | 2,436 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Cloning",
"Genetic engineering",
"Asexual reproduction"
] |
333,081 | https://en.wikipedia.org/wiki/Piano%20acoustics | Piano acoustics is the set of physical properties of the piano that affect its sound. It is an area of study within musical acoustics.
String length, mass and tension
The strings of a piano vary in diameter, and therefore in mass per length, with lower strings thicker than upper. A typical range is from for the lowest bass strings to , string size 13, for the highest treble strings. These differences in string thickness follow from well-understood acoustic properties of strings.
Given two strings, equally taut and heavy, one twice as long as the other, the longer will vibrate with a pitch one octave lower than the shorter. However, if one were to use this principle to design a piano, i.e. if one began with the highest notes and then doubled the length of the strings again and again for each lower octave, it would be impossible to fit the bass strings onto a frame of any reasonable size. Furthermore, when strings vibrate, the width of the vibrations is related to the string length; in such a hypothetical ultra-long piano, the lowest strings would strike one another when played. Instead, piano makers take advantage of the fact that a heavy string vibrates more slowly than a light string of identical length and tension; thus, the bass strings on the piano are shorter than the "double with each octave" rule would predict, and are much thicker than the others.
The other factor that affects pitch, besides length and mass per unit length, is tension. Individual string tension in a concert grand piano may average , and the combined tension of all the strings can exceed .
Inharmonicity and piano size
Any vibrating thing produces vibrations at a number of frequencies above the fundamental pitch. These are called overtones. When the overtones are integer multiples (e.g., 2×, 3× ... 6× ... ) of the fundamental frequency (called harmonics), then — neglecting damping — the oscillation is periodic, i.e. it vibrates exactly the same way over and over. Many enjoy the sound of periodic oscillations; for this reason, many musical instruments, including pianos, are designed to produce nearly periodic oscillations, that is, to have overtones as close as possible to the harmonics of the fundamental tone.
In an ideal vibrating string, when the wavelength of a wave on a stretched string is much greater than the thickness of the string (the theoretical ideal being a string of zero thickness and zero resistance to bending), the wave velocity on the string is constant and the overtones are at the harmonics. That is why so many instruments are constructed of skinny strings or thin columns of air.
However, for high overtones with short wavelengths that approach the diameter of the string, the string behaves more like a thick metal bar: its mechanical resistance to bending becomes an additional restoring force on top of the tension, which raises the pitch of the overtones. Only when the bending force is much smaller than the tension of the string are the wave speed, and hence the harmonic pitches of the overtones, left essentially unchanged. The frequency-raised overtones (lying above the harmonics), called 'partials', can produce an unpleasant effect called inharmonicity. Basic strategies to reduce inharmonicity include decreasing the thickness of the string or increasing its length, choosing a flexible material with low bending stiffness, and increasing the tension so that it remains much larger than the bending force.
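A standard way to quantify this stiffness effect, not given explicitly above, is the stiff-string approximation f_n ≈ n·f_1·sqrt(1 + B·n²), where f_1 is the fundamental and B is an inharmonicity coefficient that grows with string thickness and falls with length and tension. A minimal sketch with an assumed value of B illustrates how the upper partials are progressively sharpened relative to true harmonics:

import math

def stiff_string_partial(n, f1, b_coeff):
    # Approximate frequency of the n-th partial of a slightly stiff string.
    return n * f1 * math.sqrt(1.0 + b_coeff * n * n)

f1, b_coeff = 220.0, 0.0004   # assumed fundamental (Hz) and inharmonicity coefficient
for n in range(1, 9):
    partial = stiff_string_partial(n, f1, b_coeff)
    cents_sharp = 1200.0 * math.log2(partial / (n * f1))
    print(n, round(partial, 2), round(cents_sharp, 2))   # the 8th partial comes out roughly 22 cents sharp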
Winding a string allows an effective decrease in the thickness of the string. In a wound string, only the inner core resists bending while the windings function only to increase the linear density of the string. The thickness of the inner core is limited by its strength and by its tension; stronger materials allow for thinner cores at higher tensions, reducing inharmonicity. Hence, piano designers choose high-quality steel for their strings, as its strength and durability help them minimize string diameters.
If compromises among string diameter, tension, mass, uniformity, and length were the only design factors, all pianos could be small, spinet-sized instruments. Piano builders, however, have found that longer strings increase the instrument's power, harmonicity, and reverberation, and help produce a properly tempered tuning scale.
With longer strings, larger pianos achieve the longer wavelengths and tonal characteristics desired. Piano designers strive to fit the longest strings possible within the case; moreover, all else being equal, the sensible piano buyer tries to obtain the largest instrument compatible with budget and space.
Inharmonicity increases continuously as notes get further from the middle of the piano, and is one of the practical limits on the total range of the instrument. The lowest strings, which are necessarily the longest, are most limited by the size of the piano. The designer of a short piano is forced to use thick strings to increase mass density and is thus driven into accepting greater inharmonicity.
The highest strings must be under the greatest tension, yet must also be thin enough to allow for a low mass density. The limited strength of steel (i.e. a too-thin string will break under the tension) forces the piano designer to use very short and slightly thicker strings, whose short wavelengths thus generate inharmonicity.
The natural inharmonicity of a piano is used by the tuner to make slight adjustments in the tuning of a piano. The tuner stretches the notes, slightly sharpening the high notes and flatting the low notes to make overtones of lower notes have the same frequency as the fundamentals of higher notes.
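Under the same stiff-string approximation sketched above, tuning an octave so that it sounds beatless means matching the upper note's fundamental to the lower note's second partial rather than to exactly twice the lower fundamental; the following sketch, again with an assumed inharmonicity coefficient, shows the small resulting stretch:

import math

def stiff_string_partial(n, f1, b_coeff):
    # Approximate frequency of the n-th partial of a slightly stiff string.
    return n * f1 * math.sqrt(1.0 + b_coeff * n * n)

f_low, b_coeff = 220.0, 0.0004                     # assumed values
f_upper = stiff_string_partial(2, f_low, b_coeff)  # beatless octave: match the lower note's 2nd partial
stretch_cents = 1200.0 * math.log2(f_upper / (2.0 * f_low))
print(round(f_upper, 2), round(stretch_cents, 2))  # about 1.4 cents of stretch for this single octave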
See also Piano wire, piano tuning, psychoacoustics.
The Railsback curve
The Railsback curve, first measured in the 1930s by O.L. Railsback, a US college physics teacher, expresses the difference between inharmonicity-aware stretched piano tuning, and theoretically correct equal-tempered tuning in which the frequencies of successive notes are related by a constant ratio, equal to the twelfth root of two. For any given note on the piano, the deviation between the actual pitch of that note and its theoretical equal-tempered pitch is given in cents (hundredths of a semitone). The curve is derived empirically from actual pianos tuned to be pleasing to the ear, not from an exact mathematical equation.
As the Railsback curve shows, octaves are normally stretched on a well-tuned piano. That is, the high notes are tuned higher, and the low notes tuned lower, than they are in a mathematically idealized equal-tempered scale. Railsback discovered that pianos were typically tuned in this manner not because of a lack of precision, but because of inharmonicity in the strings. For a string vibrating like an ideal harmonic oscillator, the overtone series of a single played note includes many additional, higher frequencies, each of which is an integer multiple of the fundamental frequency. But in fact, inharmonicity caused by piano strings being slightly inflexible makes the overtones actually produced successively higher than they would be if the string were perfectly harmonic.
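The equal-tempered reference against which the Railsback deviations are plotted is straightforward to compute: successive semitones differ by the factor 2^(1/12), and the deviation of a measured pitch is expressed in cents as 1200·log2(f_measured / f_equal). A small sketch, assuming the common A4 = 440 Hz reference and made-up "stretched" measurements, shows the calculation:

import math

def equal_tempered(midi_note, a4=440.0):
    # Equal-tempered frequency: each semitone is a factor of 2**(1/12).
    return a4 * 2.0 ** ((midi_note - 69) / 12.0)

def cents_deviation(measured_hz, reference_hz):
    # Deviation of a measured pitch from its equal-tempered reference, in cents.
    return 1200.0 * math.log2(measured_hz / reference_hz)

# Made-up measurements for A0, A4 and A7 on a hypothetical stretched tuning.
for midi_note, measured_hz in [(21, 27.3), (69, 440.0), (105, 3550.0)]:
    reference = equal_tempered(midi_note)
    print(midi_note, round(reference, 2), round(cents_deviation(measured_hz, reference), 1))

With these assumed numbers the lowest A comes out about 13 cents flat and the highest A about 15 cents sharp, reproducing the qualitative shape of the Railsback curve.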
Shape of the curve
Inharmonicity in a string is caused primarily by stiffness. That stiffness is the result of piano wire's inherent hardness and ductility, together with string tension, thickness, and length. When tuners adjust the tension of the wire during tuning, they establish pitches relative to notes that have already been tuned. Those previously tuned notes have overtones that are sharpened by inharmonicity, which causes the newly established pitch to conform to the sharpened overtone. As the tuning progresses up and down the scale, the inharmonicity, hence the stretch, accumulates.
It is a common misconception that the Railsback curve demonstrates that the middle of the piano is less inharmonic than the upper and lower regions. It only appears that way because that is where the tuning starts. "Stretch" is a comparative term: by definition, there can be no stretch at the pitch from which the tuning begins, whatever that pitch is. Further, it is often construed that the upper notes of the piano are especially inharmonic because they appear to be stretched dramatically. In fact, their stretch is a reflection of the inharmonicity of strings in the middle of the piano. Moreover, the inharmonicity of the upper notes can have no bearing on tuning, because their upper partials are beyond the range of human hearing.
As expected, the graph of the actual tuning is not a smooth curve, but a jagged line with peaks and troughs. This might be the result of imprecise tuning, inexact measurement, or the piano's innate variability in string scaling. It has also been suggested with Monte-Carlo simulation that such a shape comes from the way humans match pitch intervals.
Multiple strings
All but the lowest notes of a piano have multiple strings tuned to the same frequency. The notes with two strings are called bichords, and those with three strings are called trichords. These allow the piano to have a loud attack with a fast decay but a long sustain in the attack–decay–sustain–release (ADSR) system.
The trichords create a coupled oscillator with three normal modes (with two polarizations each). Since the strings are only weakly coupled, the normal modes have imperceptibly different frequencies. But they transfer their vibrational energy to the sounding board at significantly different rates.
The normal mode in which the three strings oscillate together is most efficient at transferring energy since all three strings pull in the same direction at the same time. It sounds loud, but decays quickly. This normal mode is responsible for the rapid staccato "attack" part of the note.
In the other two normal modes, strings do not all pull together, e.g., one pulls up while the other two pull down. There is a slow transfer of energy to the sounding board, generating a soft but near-constant sustain.
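A toy illustration of this two-stage decay (purely schematic, not fitted to any real piano) is the sum of a strongly coupled mode that dumps its energy quickly and a weakly coupled mode that leaks energy slowly:

import math

def note_envelope(t_seconds, attack_amp=1.0, attack_tau=0.15, sustain_amp=0.2, sustain_tau=4.0):
    # Fast "attack" mode plus slow "sustain" mode; all values are illustrative.
    return (attack_amp * math.exp(-t_seconds / attack_tau)
            + sustain_amp * math.exp(-t_seconds / sustain_tau))

for t in [0.0, 0.2, 0.5, 1.0, 3.0]:
    print(t, round(note_envelope(t), 3))   # rapid initial drop, then a long, slowly decaying tail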
See also
Electronic tuner
Inharmonicity
References
Further reading
Ortiz-Berenguer, Luis I., F. Javier Casajús-Quirós, Marisol Torres-Guijarro, J.A. Beracoechea. Piano Transcription Using Pattern Recognition: Aspects On Parameter Extraction: Proceeds of The International Conference on Digital Audio Effects, Naples, October 2004.
External links
Five lectures on the acoustics of the piano
A. H. Benade Sound Production in Pianos
Robert W. Young, "Inharmonicity of Plain Wire Piano Strings", The Journal of the Acoustical Society of America, vol. 24, no. 3 (May 1952)
"The Engineering of Concert Grand Pianos" by Richard Dain, FRENG
D. Clausen, B. Hughes and W. Stuart "A design analysis of a Stuart and Sons grand piano frame"
Acoustics
Piano | Piano acoustics | [
"Physics"
] | 2,223 | [
"Classical mechanics",
"Acoustics"
] |
333,088 | https://en.wikipedia.org/wiki/Esalen%20Institute | The Esalen Institute, commonly called Esalen, is a non-profit American retreat center and intentional community in Big Sur, California, which focuses on humanistic alternative education. The institute played a key role in the Human Potential Movement beginning in the 1960s. Its innovative use of encounter groups, a focus on the mind-body connection, and their ongoing experimentation in personal awareness introduced many ideas that later became mainstream.
Esalen was founded by Michael Murphy and Dick Price in 1962. Their intention was to support alternative methods for exploring human consciousness, what Aldous Huxley described as "human potentialities". Over the next few years, Esalen became the center of practices and beliefs that make up the New Age movement, from Eastern religions/philosophy, to alternative medicine and mind-body interventions, from transpersonal to Gestalt practice.
Price ran the institute until he died in a hiking accident in 1985. In 2012, the board hired professional executives to help raise money and keep the institute profitable. Until 2016, Esalen offered over 500 workshops yearly in areas including Gestalt practice, personal growth, meditation, massage, yoga, psychology, ecology, spirituality, and organic food. In 2016, about 15,000 people attended its workshops.
In February 2017, the institute was cut off when Highway 1 was closed by mudslides on either side of the hot springs. It closed its doors, evacuated guests via helicopter, and was forced to lay off 90% of its staff through at least July, when it reopened with limited workshop offerings. It also decided to revamp its offerings to include topics more relevant to a younger generation. As of July 2017, due to the limited access resulting from the road closures, the hot springs are only open to Esalen guests.
Early history
The grounds of the Esalen Institute were first home to a Native American tribe known as the Esselen, from whom the institute adopted its name. Carbon dating tests of artifacts found on Esalen's property have indicated a human presence as early as 2600 BCE.
The location was homesteaded by Thomas Slate on September 9, 1882, when he filed a land patent under the Homestead Act of 1862. The settlement became known as Slates Hot Springs. It was the first tourist-oriented business in Big Sur, frequented by people seeking relief from physical ailments. In 1910, the land was purchased by Henry Murphy, a Salinas, California, physician. The official business name was "Big Sur Hot Springs" although it was more generally referred to as "Slate's Hot Springs".
Founding
Stanford grads meet
Michael Murphy and Dick Price both attended Stanford University in the late 1940s and early 1950s. Both had developed an interest in human psychology and earned degrees in the subject in 1952. Price was influenced by a lecture he heard Aldous Huxley give in 1960 titled "Human Potentialities". After graduating from Stanford, Price attended Harvard University to continue studying psychology. Murphy, meanwhile, traveled to Sri Aurobindo's ashram in India where he resided for several months before returning to San Francisco.
Price's parents involuntarily committed him to a mental hospital for a year, ending on November 26, 1957. He hated the experience and thought he would like to create an environment where people could explore new ideas and thoughts without judgment and influence from the outside world. In May 1960, Price returned to San Francisco and lived at the East-West House with Taoist teacher Gia-Fu Feng. That year he met fellow Stanford University graduate Michael Murphy at Haridas Chaudhuri’s Cultural Integration Fellowship where Murphy was in residence. They met at the suggestion of Frederic Spiegelberg, a Stanford professor of comparative religion and Indic studies, with whom both had studied.
By then they had both dropped out of their graduate programs (Price at Harvard and Murphy at Stanford), and had served time in the military. Their similar experiences and interests were the basis for the partnership that created Esalen. Inspired by Buddhist practices, and based on his own understanding of Taoism, Price developed his teachings. He took what Fritz Perls had taught him and created a "Gestalt Awareness" process that is still taught and followed by many today.
Lease property
Price and Murphy wanted to create a venue where non-traditional workshops and lecturers could present their ideas free of the dogma associated with traditional education. The two began drawing up plans for a forum that would be open to ways of thinking beyond the constraints of mainstream academia while avoiding the dogma so often seen in groups organized around a single idea promoted by a charismatic leader. They envisioned offering a wide range of philosophies, religious disciplines and psychological techniques.
In 1961, they went to look at property owned by the Murphy family at Slates Hot Springs in Big Sur. It included a run-down hotel occupied in part by members of a Pentecostal church. The property was patrolled by gun-toting Hunter S. Thompson. Gay men from San Francisco filled the baths on the weekends.
Henry Murphy's widow and Michael's grandmother Vinnie "Bunnie" MacDonald Murphy, who owned the property, lived away in Salinas. She had previously refused to lease the property to anyone, even turning down an earlier request from Michael. She was afraid her grandson was going to "give the hotel to the Hindus," Murphy later said. Not long after, Thompson attempted to visit the baths with friends and got into a fistfight after antagonizing some of the gay men present. The men almost tossed him over the cliff. Murphy's father, a lawyer, finally persuaded his mother to allow her grandson to take over and she agreed to lease the property to them in 1962. The two men used capital that Price obtained from his father, who was a vice-president at Sears. They incorporated their business as a non-profit named Esalen Institute in 1963.
Develop counterculture workshops
Murphy and Price were assisted by Spiegelberg, Watts, Huxley and his wife Laura, as well as by Gerald Heard and Gregory Bateson. They modeled the concept of Esalen partially upon Trabuco College, founded by Heard as a quasi-monastic experiment in the mountains east of Irvine, California, and later donated to the Vedanta Society. Their intent was to provide "a forum to bring together a wide variety of approaches to enhancement of the human potential... including experiential sessions involving encounter groups, sensory awakening, gestalt awareness training, related disciplines." They stated that they did not want to be viewed as a "cult" or a new church but that it was to be a center where people could explore the concepts that Price and Murphy were passionate about. The philosophy of Esalen lies in the idea that "the cosmos, the universe itself, the whole evolutionary unfoldment is what a lot of philosophers call slumbering spirit. The divine is incarnate in the world and is present in us and is trying to manifest," according to Murphy.
Alan Watts gave the first lecture at Esalen in January 1962. Gia-fu Feng joined Price and Murphy, along with Bob Breckenridge, Bob Nash, Alice and Jim Sellers, as the first Esalen staff members. In the middle of that same year Abraham Maslow, a prominent humanistic psychologist, just happened to drive into the grounds and soon became an important figure at the institute. In the fall of 1962, they published a catalog advertising workshops with such titles as "Individual and Cultural Definitions of Rationality," "The Expanding Vision" and "Drug-Induced Mysticism". Their first seminar series in the fall of 1962 was "The Human Potentiality," based on a lecture by Huxley.
Fritz Perls residency
In 1964, Fritz Perls began what became a five-year long residency at Esalen, leaving a lasting influence. Perls offered many Gestalt therapy seminars at the institute until he left in July 1969. Jim Simkin and Perls led Gestalt training courses at Esalen. Simkin started a Gestalt training center on property next door that was later incorporated into Esalen's main campus.
When Perls left Esalen he considered it to be "in crisis again". He saw young people without any training leading encounter groups and he feared that charlatans would take the lead. Later, Grogan would write that Perls’ practice at Esalen had been ethically "questionable", and according to Kripal, Perls insulted Abraham Maslow.
Gestalt practice developed
Dick Price became one of Perls' closest students. Price managed the institute and developed his own form he called Gestalt practice, which he taught at Esalen until his death in a hiking accident in 1985. Michael Murphy lived in the San Francisco Bay Area and wrote non-fiction books about Esalen-related topics, as well as several novels.
Leads counterculture movement
Esalen gained popularity quickly and started to regularly publish catalogs full of programs. The facility was large enough to run multiple programs simultaneously, so Esalen created numerous resident teacher positions. Murphy recruited Will Schutz, the well-known encounter group leader, to take up permanent residence at Esalen. All this combined to firmly position Esalen in the nexus of the counterculture of the 1960s.
The institute gained increased attention in 1966 when several magazines wrote about it. George Leonard published an article in Look magazine about the California scene which mentioned Esalen and included a picture of Murphy. Time magazine published an article about Esalen in September 1967. The New York Times Magazine published an article by Leo E. Litwak in late December. Life also published an article about the resort. These articles increased the media and the public's awareness of the institute in the U.S. and abroad. Esalen responded by holding large-scale conferences in Midwestern and East Coast cities, as well as in Europe. Esalen opened a satellite center in San Francisco that offered extensive programming until it closed in the mid-1970s for financial reasons.
Programs and management
The institute continues to offer workshops about humanistic psychology, physical wellness, and spiritual awareness. The institute has also added workshops on permaculture and ecological sustainability. Other workshops cover a wide range of subjects including arts, health, Gestalt practice, integral thought, martial arts, massage, dance, mythology, philosophical inquiry, somatics, spiritual and religious studies, ecopsychology, wilderness experience, yoga, tai chi, mindfulness practice, and meditation. The institute was closed for the first half of 2017 and forced to drastically reduce staff. They also decided to revamp their offerings upon reopening to include topics more relevant to a younger generation.
Center for Theory and Research
In 1998, Esalen launched the Center for Theory and Research to initiate new areas of practice and action which foster social change and realization of the human potential. It is the research and development arm of Esalen Institute. Michael Cornwall, who previously worked in the institute's Schizophrenia Research Project at Agnews State Hospital, was conducting workshops titled the Alternative Views and Approaches to Psychosis Initiative at Esalen, inviting leaders in the field of psychosis treatment to attend.
Management changes
Esalen has been making changes to respond to internal and external factors. Dick Price was the key leader of the institute until his sudden death in a hiking accident in late 1985 brought about many changes in personnel and programming. Steven Donovan became president of the institute, and Brian Lyke served as general manager. Nancy Lunney became the director of programming, and Dick Price's son David Price served as general manager of Esalen beginning in the mid-1990s.
The baths were destroyed in 1998 by severe weather and were rebuilt at great expense, but this caused severe institutional stress. Afterward, Andy Nusbaum developed an economic plan to stabilize Esalen's finances.
In 2011, the institute commissioned the company Beyond the Leading Edge to conduct a Leadership Culture Survey to assess the quality of its leadership culture. The results were negative. The survey measured how well the leadership "builds quality relationships, fosters teamwork, collaborates, develops people, involves people in decision making and planning, and demonstrates a high level of interpersonal skill." In the "relating dimension" the survey returned a score of 18%, compared to a desired 88%. It also produced strongly dissonant scores in measures of community welfare, relating with interpersonal intelligence, clearly communicating vision, and building a sense of personal worth within the community. It ranked management as overly compliant and lacking authenticity. However, the survey found that Esalen closely matched its overall goal for customer focus.
Gordon Wheeler dramatically restructured Esalen management. These changes prompted Christine Stewart Price, the widow of Dick Price, to withdraw from the institute and to found an organization named the Tribal Ground Circle with the intention of preserving Dick Price's legacy.
Early leaders and programs
In the few years after its founding, many of the seminars like "The Value of Psychotic Experience" attempted to challenge the status quo. There were even Esalen programs that questioned the movement of which Esalen itself was a part—for instance, "Spiritual and Therapeutic Tyranny: The Willingness To Submit". There were also a series of encounter groups focused on racial prejudice.
Early leaders included many well-known individuals, including Ansel Adams, Gia-fu Feng, Buckminster Fuller, Timothy Leary,
Robert Nadeau, Linus Pauling, Carl Rogers, Virginia Satir, B.F. Skinner, and Arnold Toynbee. Rather than merely lecturing, many leaders experimented with what Huxley called the non-verbal humanities: the education of the body, the senses, and the emotions. Their intention was to help individuals develop awareness of their present flow of experience, to express this fully and accurately, and to listen to feedback. These "experiential" workshops were particularly well attended and were influential in shaping Esalen's future course.
Staff residency
Because of Esalen's isolated location, its operational staff members have lived on site from the beginning and for many years collectively contributed to the character of the institute. The community has been steeped in a form of Gestalt practice that pervades all aspects of daily life, including meeting structures, workplace practices, and individual language styles. There is a preschool on site called the Gazebo, serving the children of staff, some program participants, and affiliated local residents.
Scholars in residence
Esalen has sponsored long-term resident scholars, including notable individuals such as Gregory Bateson, Joseph Campbell, Stanislav Grof, Sam Keen, George Leonard, Fritz Perls, Ida Rolf, Virginia Satir, William Schutz, and
Alan Watts.
Esalen Massage and Bodywork Association
Bodywork has always been a significant part of the Esalen experience. In the late 1990s, the "EMBA" was organized as a semi-autonomous Esalen association for the regulation of Esalen massage practitioners.
Past initiatives and projects
Esalen Institute has sponsored many research initiatives, educational projects, and invitational conferences. The Big Sur facility has been used for these events, as well as other locations, including international sites.
Arts events
In 1964, Joan Baez led a workshop entitled "The New Folk Music" which included a free performance. This was the first of seven "Big Sur Folk Festivals" featuring many of the era's music legends. The 1969 concert included musicians who had just come from the Woodstock Festival. This event was featured in a documentary movie, Celebration at Big Sur, which was released in 1971.
John Cage and Robert Rauschenberg performed together at Esalen. Robert Bly, Lawrence Ferlinghetti, Allen Ginsberg, Michael McClure, Kenneth Rexroth (who led one of the first workshops), Gary Snyder and others held poetry readings and workshops.
In 1994, president and CEO Sharon Thom created an artist-in-residence program to provide artists with a two-week retreat in which to focus upon works in progress. These artists interacted with the staff, offered informal gatherings, and staged performances on the newly created dance platform. Located next to the Art Barn, the dance platform was used by Esalen teachers for dance and martial arts. The platform was later covered by a dome and renamed the Leonard Pavilion after deceased Esalen past president and board member, George Leonard.
In 1995 and 1996, Esalen hosted two arts festivals which gathered together artists, poets, musicians, photographers and performers, including artist Margot McLean, psychotherapist James Hillman, guitarist Michael Hedges and Joan Baez. All staff members were allowed to attend every class and performance that did not interfere with their schedules. Arts festivals have since become a popular yearly event at Esalen.
Schizophrenia Research Project
Encouraged by Dick Price, the Schizophrenia Research Project was conducted over a three-year period at Agnews State Hospital in San Jose, California, involving 80 young males diagnosed with schizophrenia. Funded in part by Esalen Institute, this program was co-sponsored by the California Department of Mental Hygiene (reorganized: CMHSA) and the National Institute of Mental Health. It explored the thesis that the health of certain patients would permanently improve if their psychotic process was not interrupted by administration of antipsychotic pharmaceutical drugs. Julian Silverman was chief of research for the project. He also served as Esalen's general manager in the 1970s. The Agnews double blind study was the largest first-episode psychosis research project ever conducted in the United States. It demonstrated that the young men given a placebo had a 75 percent lower re-hospitalization rate and much better outcomes than the men who received anti-psychotic medication. These results were used as justification for medication-free programs in the San Francisco Bay Area. Esalen has recently begun to revive some of this interest in schizophrenia and psychosis, and hosted the R.D. Laing Symposium and workshops on compassionately responding to psychosis.
Publishing
Starting in 1969, in association with Viking Press, the institute published a series of 17 books about Esalen-related topics, including the first edition of Michael Murphy's novel, Golf in the Kingdom (1971). Some of these books remain in print. In the mid-1980s, Esalen entered into a joint publishing arrangement with Lindisfarne Press to publish a small library of Russian philosophical and theological books.
Soviet–American Exchange Program
In 1979, Esalen began the Soviet–American Exchange Program (later renamed: Track Two, an institute for citizen diplomacy). This initiative came at a time when Cold War tensions were at their peak. The program was credited with substantial success in fostering peaceful private exchanges between citizens of the "super powers". In the 1980s, Michael Murphy and his wife Dulce were instrumental in organizing the program with Soviet citizen Joseph Goldin, in order to provide a vehicle for citizen-to-citizen relations between Russians and Americans. In 1982, Esalen and Goldin pioneered the first U.S.–Soviet Space Bridge, allowing Soviet and American citizens to speak directly with one another via satellite communication. In 1988, Esalen brought Abel Aganbegyan, one of Mikhail Gorbachev's chief economic advisors, to the United States. In 1989, Esalen brought Boris Yeltsin on his first trip to the United States, although Yeltsin did not visit the Esalen facility in Big Sur. Esalen arranged meetings for Yeltsin with then President George H. W. Bush as well as many other leaders in business and government. Two former presidents of the exchange program included Jim Garrison and Jim Hickman. After Gorbachev stepped down, and effectively dissolved the Soviet Union, Garrison helped establish The State of the World Forum, with Gorbachev as its convening chairman. These successes led to other Esalen citizen diplomacy programs, including exchanges with China, an initiative to further understanding among Jews, Christians and Muslims, as well as further work on Russian-American relations.
Prices and finances
2017 closure
On February 12, 2017, a number of mud and land slides closed Highway 1 in several locations to the south and north of the hot springs and caused Esalen to partially shut down. On February 18, 2017, shifting earth damaged a pier supporting the Pfeiffer Canyon Bridge north of Esalen and forced CalTrans to close Highway 1. CalTrans determined that the bridge was damaged beyond repair and announced an accelerated project to replace the bridge by September. Following closure of the bridge, Esalen was cut off, and resorted to evacuating dozens of guests by helicopter. A landslide at Mud Creek south of the hot springs severely restricted vehicle access to the resort, and Esalen temporarily closed its doors. Then, on May 20, 2017, a new slide at Mud Creek closed Highway 1 for at least a year.
On June 20, Esalen announced that it would lay off 45 staff members through at least July, leaving only about 10 percent of its staff.
Esalen partially reopened on July 28, 2017, offering limited workshops. It plans to add more seminars after the Pfeiffer Canyon Bridge reopens in September 2017.
Attendance and costs
In 2012, 600 Esalen workshops were attended by more than 12,000 people. Topics ranged from sustainable business practices to hypnosis to "The Holy Fool: Crazy Wisdom From Van Gogh to Tina Fey and The Big Lebowski."
A weekend workshop, including the program, meals, and a place for a sleeping bag in a communal area, cost a minimum of $405 per person. A couple could rent a private room for $730 per person. Week-long workshops began at $900, and couples were charged $1,700 per person to stay in a private room. In 2013, the institute charged participants in its month-long residential licensed massage practitioner training program $4,910, including room and board. In 1987, a weekend workshop along with a single room and meals cost $270, and a five-day workshop cost $530.
Revenue and expenses
In 2013, the institute reported revenue of $18,513,254, of which $13,066,407 came from programs; after expenses of $13,515,552, net income was $4,997,702. In that year it paid CEO Patricia McEntee $152,077. In 2014, it reported total revenue of $15,934,586, expenses totalling $14,472,201, and net income of $1,462,385. McEntee was paid $157,839.
The company spent nearly $10 million on renovations from 2014 to 2016, including $7.4 million to renovate the main lodge and add a cafe and bar. It also spent $1.8 million on a six-room guesthouse. Only limited internet and cellular service is available, but Esalen is planning to make some of its workshops available to online participants.
Lease terms
The annual cost of its 87-year lease for the 27-acre site from the Vinnie A. Murphy Trust—which extends through 2049—was $344,704 in 2014. McEntee told the Monterey County Weekly that the cost of the lease is highly discounted, and that the terms of the lease allow the trust to re-assess the lease terms in 2017. This could potentially increase the institute's rent to market value.
Past teachers
Past guest teachers include:
In popular culture
Cultural influence
Esalen has been cited as having played a key role in the cultural transformations of the 1960s. In its beginnings as a "laboratory for new thought", it was seen by some as the headquarters of the human potential movement. Its use of encounter groups, a focus on the mind-body connection, and their ongoing experimentation in personal awareness introduced many ideas to American society that later became mainstream. In its early years, guest lecturers and workshop leaders included many leading thinkers, psychologists, and philosophers including Erik Erikson, Ken Kesey, Alan Watts, John C. Lilly, Buckminster Fuller, Aldous Huxley, Linus Pauling, Fritz Perls, Joseph Campbell, Robert Bly and Carl Rogers.
Esalen has also been the subject of some criticism and controversy. The Economist wrote, "For many others in America and around the world, Esalen stands more vaguely for that metaphorical point where ‘East meets West’ and is transformed into something uniquely and mystically American or New Agey. And for a great many others yet, Esalen is simply that notorious bagno-bordello where people had sex and got high throughout the 1960s and 1970s before coming home talking psychobabble and dangling crystals."
The Human Potential Movement was criticized for espousing an ethic that the inner-self should be freely expressed in order to reach a person's true potential. Some people saw this ethic as an aspect of Esalen's culture. The historian Christopher Lasch wrote that humanistic techniques encourage narcissistic, spiritual materialistic or self-obsessive thoughts and behaviors. In 1990 a graffiti artist spray painted "Jive shit for rich white folk" on the entrance to Esalen, highlighting class and race issues. Some thought that this was a regression of progress away from true spiritual growth. Michel Houellebecq's Atomised traces the New Age movement's influence on the novel's protagonists to older generations' chance meetings at Esalen.
Popular media
Films
In the comedy-drama Bob & Carol & Ted & Alice (1969), sophisticated Los Angeles residents Bob (played by Robert Culp) and Carol Sanders (Natalie Wood) spend a weekend of emotional honesty at an Esalen-style retreat, after which they return to their life determined to embrace free love and complete openness.
Literature
In Thomas Pynchon's novel Inherent Vice (2009) and Paul Thomas Anderson's eponymous 2014 film adaptation, the Chryskylodon Institute is modeled after Esalen.
In Norman Rush's novel Mating (1992), Esalen is referred to as a "twit factory."
Television
The BBC television series, The Century of the Self (2002), is critical of the Human Potential Movement and includes video segments recorded at Esalen.
The Mad Men show finale, "Person to Person" (airdate May 17, 2015), features Don and Stephanie staying at an Esalen-like coastline retreat in the year 1970.
In True Detective season 2, the Panticapaeum Institute is largely based on the Esalen Institute.
Music
On July 10, 1968, The Beatles guitarist George Harrison was given sitar lessons at Esalen by Ravi Shankar for the movie Raga.
References
Notes
Works cited
Further reading
External links
1962 establishments in California
Gestalt therapy
Hot springs of California
Human Potential Movement
Buildings and structures in Monterey County, California
Personal development
Tourist attractions in Monterey County, California
Big Sur
New Age communities
New Age organizations | Esalen Institute | [
"Biology"
] | 5,471 | [
"Personal development",
"Behavior",
"Human behavior"
] |
333,163 | https://en.wikipedia.org/wiki/Halfbeak | Hemiramphidae is a family of fishes that are commonly called halfbeaks, spipe fish or spipefish. They are a geographically widespread and numerically abundant family of epipelagic fish inhabiting warm waters around the world. The halfbeaks are named for their distinctive jaws, in which the lower jaws are significantly longer than the upper jaws. The similar viviparous halfbeaks (family Zenarchopteridae) have often been included in this family.
Though not commercially important themselves, these forage fish support artisanal fisheries and local markets worldwide. They are also fed upon by other commercially important predatory fishes, such as billfishes, mackerels, and sharks.
Taxonomy
In 1758, Carl Linnaeus was the first to scientifically describe a halfbeak, Esox brasiliensis (now Hemiramphus brasiliensis). In 1775 Peter Forsskål described two more species as Esox, Esox far and Esox marginatus. It was not until 1816 that Georges Cuvier created the genus Hemiramphus; from then on, all three were classified as Hemiramphus. In 1859, Gill erected Hemiramphidae, deriving its name from Hemiramphus, the family's type genus. The name comes from the Greek hemi, meaning half, and rhamphos, meaning beak or bill.
There are currently eight genera, comprising some 60 species, within the family Hemiramphidae:
Arrhamphus Günther, 1866
Chriodorus Goode & Bean, 1882
Euleptorhamphus Gill, 1859
Hemiramphus Cuvier, 1816
Hyporhamphus Gill, 1859
Melapedalion Fowler, 1934
Oxyporhamphus Gill, 1864
Rhynchorhamphus Fowler, 1928
This family is primarily marine and found in the Atlantic, Pacific, and Indian Oceans, though some inhabit estuaries and rivers.
Evolution
The halfbeaks' fossil record extends into the Lower Tertiary. The earliest known halfbeak is "Hemiramphus" edwardsi from the Eocene at Monte Bolca, Italy. Apart from differences in the length of the upper and lower jaws, recent and fossil halfbeaks are distinguished by the fusion of the third pair of upper pharyngeal bones into a plate.
Phylogeny
The phylogeny of the halfbeaks is in a state of flux.
On the one hand, there is little question that they are most closely related to three other families of streamlined, surface-water fishes: the flyingfishes, needlefishes, and sauries. Traditionally, these four families have been taken together to comprise the order Beloniformes. The halfbeaks and flyingfishes are considered to form one group, the superfamily Exocoetoidea, and the needlefishes and sauries another, the superfamily Scomberesocoidea.
On the other hand, recent studies have demonstrated that rather than forming a single monophyletic group (a clade), the halfbeak family actually includes a number of lineages ancestral to the flyingfishes and the needlefishes. In other words, as traditionally defined, the halfbeak family is paraphyletic.
Within the subfamily Hemiramphinae, the "flying halfbeak" genus Oxyporhamphus has proved to be particularly problematic; while morphologically closer to the flyingfishes, molecular evidence places it with Hemiramphus and Euleptorhamphus. Together, these three genera form the sister group to the flyingfish family. The other two hemiramphine genera Hyporhamphus and Arrhamphus form another clade of less clear placement.
Rather than being closely related to the flyingfishes, the subfamily Zenarchopterinae appears to be the sister group of the needlefishes and sauries. This is based on the pharyngeal jaw apparatus, sperm ultrastructure, and molecular evidence. However, this hypothesis has awkward implications for how the morphological evolution of the group is understood, because the fused pharyngeal plate has been considered reliably diagnostic of the halfbeak family. Furthermore, the existing theory that halfbeaks are paedomorphic needlefish, based on the observation that juvenile needlefish pass through a developmental stage in which the lower jaw is longer than the upper jaw (the so-called "halfbeak stage"), becomes untenable. In fact, the unequal lengths of the upper and lower jaws of halfbeaks appear to be the basal condition, with needlefish being relatively derived in comparison.
Morphology
The halfbeaks are elongate, streamlined fish adapted to living in open water. Halfbeaks can grow to over SL in the case of Euleptorhamphus viridis. The scales are relatively large, cycloid (smooth), and easily detached. There are no spines in the fins. A distinguishing characteristic is that the third pair of upper pharyngeal bones are ankylosed (fused) into a plate. Halfbeaks are one of several fish families that lack a stomach, all of which possess a pharyngeal jaw apparatus (pharyngeal mill). Most species have an extended lower jaw, at least as juveniles, though this feature may be lost as the fish mature, as with Chriodorus, for example.
As is typical for surface dwelling, open water fish, most species are silvery, darker above and lighter below, an example of countershading. The tip of the lower jaw is bright red or orange in most species.
Halfbeaks carry several adaptations to feeding at the water surface. The eyes and nostrils are at the top of the head and the upper jaw is mobile, but not the lower jaw. Combined with their streamlined shape and the concentration of fins towards the back (similar to that of a pike), these adaptations allow halfbeaks to locate, catch, and swallow food items very effectively.
Range and habitat
Halfbeaks inhabit warm seas, predominantly at the surface, in the Atlantic, Indian, and Pacific oceans. A small number are found in estuaries. Most species of marine halfbeaks are known from continental coastlines, but some extend into the western and central Pacific, and one species (Hyporhamphus ihi) is endemic to New Zealand. Hemiramphus is a worldwide marine genus.
Ecology and behavior
Feeding
Marine halfbeaks are omnivores feeding on algae; marine plants such as seagrasses; plankton; invertebrates such as pteropods and crustaceans; and smaller fishes. For some subtropical species at least, juveniles are more predatory than adults. Some tropical species feed on animals during the day and plants at night, while other species alternate between carnivory in the summer and herbivory in the winter. They are in turn eaten by many ecologically and commercially important fish, such as billfish, mackerel, and sharks, and so are a key link between trophic levels.
Behavior
Marine halfbeaks are typically pelagic schooling forage fish. The southern sea garfish Hyporhamphus melanochir, for example, is found in sheltered bays, coastal seas, and estuaries around southern Australia in waters down to a depth of . These fish school near the surface at night but swim closer to the sea floor during the day, particularly among beds of seagrasses. Genetic analysis of the different sub-populations of the southern sea garfish in South Australian coastal waters reveals that there is a small but consistent migration of individuals among them, sufficient to keep them genetically homogeneous.
Some marine halfbeaks, including Euleptorhamphus velox and Euleptorhamphus viridis, are known for their ability to jump out of the water and glide over the surface for considerable distances, and have consequently sometimes been called flying halfbeaks.
Reproduction
Hemiramphidae species are all external fertilizers. They are usually egg-layers and often produce relatively small numbers of fairly large eggs for fish of their size, typically in shallow coastal waters, such as the seagrass meadows of Florida Bay. The eggs of Hemiramphus brasiliensis and H. balao are typically in diameter and have attaching filaments. They hatch when they grow to about in diameter. Hyporhamphus melanochir eggs are slightly larger, around in diameter, and are unusually large when they hatch, being up to in size.
Relatively little is known about the ecology of juvenile marine halfbeaks, though estuarine habitats seem to be favored by at least some species. The southern sea garfish Hyporhamphus melanochir grows rapidly at first, attaining a length of up to in the first three years, after which point growth slows. This species lives for a maximum age of about 9 years, at which point the fish reach up to and weigh about .
Relationship to humans
Halfbeak fisheries
Halfbeaks are not a major target for commercial fisheries, though small fisheries for them exist in some places, for example in South Australia, where fisheries target the southern sea garfish (Hyporhamphus melanochir) and the eastern sea garfish (Hyporhamphus australis). Halfbeaks are caught by a variety of methods including seines and pelagic trawls, dip-netting under lights at night, and with haul nets. They are utilized fresh, dried, smoked, or salted, and they are considered good eating. However, even where halfbeaks are targeted by fisheries, they tend to be of secondary importance compared with other edible fish species.
In some localities significant bait fisheries exist to supply sport fishermen. One study of a bait fishery in Florida that targets Hemiramphus brasiliensis and Hemiramphus balao suggests that despite increases in the size of the fishery the population is stable and the annual catch is valued at around $500,000.
See also
USS Halfbeak (SS-352) American submarine named after these fish
References
Beloniformes
Fish of Hawaii
Extant Eocene first appearances
Paraphyletic groups | Halfbeak | [
"Biology"
] | 2,094 | [
"Phylogenetics",
"Paraphyletic groups"
] |
333,170 | https://en.wikipedia.org/wiki/Fluctuation%20theorem | The fluctuation theorem (FT), which originated from statistical mechanics, deals with the relative probability that the entropy of a system which is currently away from thermodynamic equilibrium (i.e., maximum entropy) will increase or decrease over a given amount of time. While the second law of thermodynamics predicts that the entropy of an isolated system should tend to increase until it reaches equilibrium, it became apparent after the discovery of statistical mechanics that the second law is only a statistical one, suggesting that there should always be some nonzero probability that the entropy of an isolated system might spontaneously decrease; the fluctuation theorem precisely quantifies this probability.
Statement
Roughly, the fluctuation theorem relates to the probability distribution of the time-averaged irreversible entropy production, denoted here by Σ_t. The theorem states that, in systems away from equilibrium over a finite time t, the ratio between the probability that Σ_t takes on a value A and the probability that it takes the opposite value, −A, will be exponential in At.
In other words, for a finite non-equilibrium system in a finite time, the FT gives a precise mathematical expression for the probability that entropy will flow in a direction opposite to that dictated by the second law of thermodynamics.
Mathematically, the FT is expressed as: P(Σ_t = A) / P(Σ_t = −A) = exp(At).
This means that as the time or system size increases (since Σ_t is extensive), the probability of observing an entropy production opposite to that dictated by the second law of thermodynamics decreases exponentially. The FT is one of the few expressions in non-equilibrium statistical mechanics that is valid far from equilibrium.
Note that the FT does not state that the second law of thermodynamics is wrong or invalid. The second law of thermodynamics is a statement about macroscopic systems. The FT is more general. It can be applied to both microscopic and macroscopic systems. When applied to macroscopic systems, the FT is equivalent to the second law of thermodynamics.
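As a purely numerical illustration (not part of the theorem's derivation), the FT can be checked for the simplest case in which the time-averaged entropy production Σ_t is Gaussian; the theorem then forces the variance to equal 2⟨Σ_t⟩/t, and the ratio P(Σ_t = A)/P(Σ_t = −A) reduces to exp(At). A short sketch, with assumed values for t and ⟨Σ_t⟩, verifies this by sampling:

import numpy as np

rng = np.random.default_rng(0)
t_avg, mean_sigma = 2.0, 1.5                 # assumed averaging time and mean of the time-averaged production
std = np.sqrt(2.0 * mean_sigma / t_avg)      # variance fixed by the FT for a Gaussian distribution
samples = rng.normal(mean_sigma, std, 5_000_000)

a_value, half_width = 1.0, 0.05              # estimate P(A) and P(-A) from narrow histogram bins
p_pos = np.mean(np.abs(samples - a_value) < half_width)
p_neg = np.mean(np.abs(samples + a_value) < half_width)
print(p_pos / p_neg, np.exp(a_value * t_avg))   # the two numbers agree to within sampling error (~7.39)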
History
The FT was first proposed and tested using computer simulations, by Denis Evans, E.G.D. Cohen and Gary Morriss in 1993. The first derivation was given by Evans and Debra Searles in 1994. Since then, much mathematical and computational work has been done to show that the FT applies to a variety of statistical ensembles. The first laboratory experiment that verified the validity of the FT was carried out in 2002. In this experiment, a plastic bead was pulled through a solution by a laser. Fluctuations in the velocity were recorded that were opposite to what the second law of thermodynamics would dictate for macroscopic systems. In 2020, observations at high spatial and spectral resolution of the solar photosphere have shown that solar turbulent convection satisfies the symmetries predicted by the fluctuation relation at a local level.
Second law inequality
A simple consequence of the fluctuation theorem given above is that if we carry out an arbitrarily large ensemble of experiments from some initial time t = 0, and perform an ensemble average of time averages of the entropy production, then an exact consequence of the FT is that the ensemble average cannot be negative for any value of the averaging time t: ⟨Σ_t⟩ ≥ 0 for all t > 0.
This inequality is called the second law inequality. This inequality can be proved for systems with time dependent fields of arbitrary magnitude and arbitrary time dependence.
It is important to understand what the second law inequality does not imply. It does not imply that the ensemble averaged entropy production is non-negative at all times. This is untrue, as consideration of the entropy production in a viscoelastic fluid subject to a sinusoidal time dependent shear rate shows (e.g., rogue waves). In this example the ensemble average of the time integral of the entropy production over one cycle is however nonnegative – as expected from the second law inequality.
Nonequilibrium partition identity
Another remarkably simple and elegant consequence of the fluctuation theorem is the so-called "nonequilibrium partition identity" (NPI): ⟨exp(−Σ_t t)⟩ = 1 for all times t.
Thus, in spite of the second law inequality, which might lead one to expect that this average would decay exponentially with time, the exponential probability ratio given by the FT exactly cancels the negative exponential in the average above, leading to an average which is unity for all time.
Implications
There are many important implications from the fluctuation theorem. One is that small machines (such as nanomachines or even mitochondria in a cell) will spend part of their time actually running in "reverse". What is meant by "reverse" is that it is possible to observe that these small molecular machines are able to generate work by taking heat from the environment. This is possible because there exists a symmetry relation in the work fluctuations associated with the forward and reverse changes a system undergoes as it is driven away from thermal equilibrium by the action of an external perturbation, which is a result predicted by the Crooks fluctuation theorem. The environment itself continuously drives these molecular machines away from equilibrium and the fluctuations it generates over the system are very relevant because the probability of observing an apparent violation of the second law of thermodynamics becomes significant at this scale.
This is counterintuitive because, from a macroscopic point of view, it would describe complex processes running in reverse. For example, a jet engine running in reverse, taking in ambient heat and exhaust fumes to generate kerosene and oxygen. Nevertheless, the size of such a system makes this observation almost impossible to occur. Such a process is possible to be observed microscopically because, as it has been stated above, the probability of observing a "reverse" trajectory depends on system size and is significant for molecular machines if an appropriate measurement instrument is available. This is the case with the development of new biophysical instruments such as the optical tweezers or the atomic force microscope. Crooks fluctuation theorem has been verified through RNA folding experiments.
Dissipation function
Strictly speaking, the fluctuation theorem refers to a quantity known as the dissipation function. In thermostatted nonequilibrium states that are close to equilibrium, the long-time average of the dissipation function is equal to the average entropy production. However, the FT refers to fluctuations rather than averages. The dissipation function is defined as
Ω_t t ≡ ln[ f(Γ(0), 0) / f(Γ(t), 0) ] − ΔQ(0,t) / (kT),
where k is the Boltzmann constant, f(Γ, 0) is the initial (t = 0) distribution of molecular states Γ, and Γ(t) is the molecular state arrived at after time t under the exact time-reversible equations of motion. f(Γ(t), 0) is the initial distribution of those time-evolved states.
Note: in order for the FT to be valid we require that f(Γ(t), 0) ≠ 0 for all possible time-evolved states Γ(t); that is, every state reachable at time t must be present with nonzero probability in the initial distribution. This condition is known as the condition of ergodic consistency. It is widely satisfied in common statistical ensembles – e.g. the canonical ensemble.
The system may be in contact with a large heat reservoir in order to thermostat the system of interest. If this is the case, ΔQ(0,t) is the heat lost to the reservoir over the time (0,t) and T is the absolute equilibrium temperature of the reservoir. With this definition of the dissipation function, the precise statement of the FT simply replaces entropy production with the dissipation function in each of the FT equations above.
Example: If one considers electrical conduction across an electrical resistor in contact with a large heat reservoir at temperature T, then the dissipation function is
Ω = J Δφ V / (kT),
the total electric current density J multiplied by the voltage drop across the circuit, Δφ, and the system volume V, divided by the absolute temperature T of the heat reservoir times the Boltzmann constant. Thus the dissipation function is easily recognised as the Ohmic work done on the system divided by the temperature of the reservoir. Close to equilibrium, the long-time average of this quantity is (to leading order in the voltage drop) equal to the average spontaneous entropy production per unit time. However, the fluctuation theorem applies to systems arbitrarily far from equilibrium, where the definition of the spontaneous entropy production is problematic.
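A back-of-the-envelope calculation with assumed numbers (not taken from the text above) shows why anti-second-law fluctuations are never seen in such a macroscopic resistor: even a tiny Ohmic dissipation rate is enormous when measured in units of k_B·T per second, so the probability ratio exp(−Ω_t t) predicted by the FT for a "reversed" fluctuation is astronomically small.

# Order-of-magnitude sketch; all input numbers are assumptions, not measurements.
k_b = 1.380649e-23     # Boltzmann constant, J/K
temp = 300.0           # reservoir temperature, K
power = 1.0e-4         # assumed Ohmic work rate J*delta_phi*V for a very small resistor, W

omega_rate = power / (k_b * temp)        # dissipation function per unit time, 1/s
t_obs = 1.0e-6                           # even over a single microsecond...
print(omega_rate * t_obs)                # ~2.4e10, so exp(-2.4e10) odds of a reversed fluctuation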
Relation to Loschmidt's paradox
The second law of thermodynamics, which predicts that the entropy of an isolated system out of equilibrium should tend to increase rather than decrease or stay constant, stands in apparent contradiction with the time-reversible equations of motion for classical and quantum systems. The time reversal symmetry of the equations of motion show that if one films a given time dependent physical process, then playing the movie of that process backwards does not violate the laws of mechanics. It is often argued that for every forward trajectory in which entropy increases, there exists a time reversed anti trajectory where entropy decreases, thus if one picks an initial state randomly from the system's phase space and evolves it forward according to the laws governing the system, decreasing entropy should be just as likely as increasing entropy. It might seem that this is incompatible with the second law of thermodynamics which predicts that entropy tends to increase. The problem of deriving irreversible thermodynamics from time-symmetric fundamental laws is referred to as Loschmidt's paradox.
The mathematical derivation of the fluctuation theorem and in particular the second law inequality shows that, for a nonequilibrium process, the ensemble averaged value for the dissipation function will be greater than zero. This result requires causality, i.e. that cause (the initial conditions) precede effect (the value taken on by the dissipation function). This is clearly demonstrated in section 6 of that paper, where it is shown how one could use the same laws of mechanics to extrapolate backwards from a later state to an earlier state, and in this case the fluctuation theorem would lead us to predict the ensemble average dissipation function to be negative, an anti-second law. This second prediction, which is inconsistent with the real world, is obtained using an anti-causal assumption. That is to say that effect (the value taken on by the dissipation function) precedes the cause (here the later state has been incorrectly used for the initial conditions). The fluctuation theorem shows how the second law is a consequence of the assumption of causality. When we solve a problem we set the initial conditions and then let the laws of mechanics evolve the system forward in time, we don't solve problems by setting the final conditions and letting the laws of mechanics run backwards in time.
Summary
The fluctuation theorem is of fundamental importance to non-equilibrium statistical mechanics.
The FT (together with the universal causation proposition) gives a generalisation of the second law of thermodynamics which includes, as a special case, the conventional second law. It is then easy to prove the Second Law Inequality and the NonEquilibrium Partition Identity. When combined with the central limit theorem, the FT also implies the Green-Kubo relations for linear transport coefficients close to equilibrium. The FT is, however, more general than the Green-Kubo Relations because, unlike them, the FT applies to fluctuations far from equilibrium. In spite of this fact, scientists have not yet been able to derive the equations for nonlinear response theory from the FT.
The FT does not imply or require that the distribution of time averaged dissipation be Gaussian. There are many examples known where the distribution of time averaged dissipation is non-Gaussian and yet the FT (of course) still correctly describes the probability ratios.
Lastly, the theoretical constructs used to prove the FT can be applied to nonequilibrium transitions between two different equilibrium states. When this is done, the so-called Jarzynski equality, or nonequilibrium work relation, can be derived. This equality shows how equilibrium free energy differences can be computed or measured (in the laboratory) from nonequilibrium path integrals. Previously quasi-static (equilibrium) paths were required.
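The Jarzynski equality mentioned above, ⟨exp(−W/kT)⟩ = exp(−ΔF/kT), lends itself to a simple numerical illustration. The sketch below is not from the article; it assumes work values drawn from a Gaussian distribution (for which the exact answer is ⟨W⟩ − σ²/2kT) with made-up parameters, and it also shows that the plain average work over-estimates ΔF, in line with the second law inequality.

```python
# Illustrative sketch: estimating a free-energy difference from nonequilibrium
# work samples via the Jarzynski equality (assumptions: Gaussian work, k_B T = 1).
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
mean_work, std_work = 2.0, 1.0                      # made-up work statistics
work = rng.normal(mean_work, std_work, size=200_000)

dF_jarzynski = -kT * np.log(np.mean(np.exp(-work / kT)))
dF_gaussian = mean_work - std_work**2 / (2 * kT)    # exact for Gaussian work

print(f"Jarzynski estimate : {dF_jarzynski:.3f}")
print(f"Gaussian prediction: {dF_gaussian:.3f}")
print(f"Mean work <W>      : {work.mean():.3f}  (>= dF, as the second law requires)")
```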
The reason why the fluctuation theorem is so fundamental is that its proof requires so little. It requires:
knowledge of the mathematical form of the initial distribution of molecular states,
that all time evolved final states at time t, must be present with nonzero probability in the distribution of initial states (t = 0) – the so-called condition of ergodic consistency and
an assumption of time reversal symmetry.
In regard to the latter "assumption", while the equations of motion of quantum dynamics may be time-reversible, quantum processes are nondeterministic by nature. What state a wave function collapses into cannot be predicted mathematically, and further, the unpredictability of a quantum system comes not from the myopia of an observer's perception, but from the intrinsically nondeterministic nature of the system itself.
In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. p → −p (T-symmetry).
In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry.
Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process.
See also
Linear response function
Green's function (many-body theory)
Loschmidt's paradox
Le Chatelier's principle – a nineteenth century principle that defied a mathematical proof until the advent of the Fluctuation Theorem.
Crooks fluctuation theorem – an example of a transient fluctuation theorem relating the dissipated work in non-equilibrium transformations to free energy differences.
Jarzynski equality – another nonequilibrium equality closely related to the fluctuation theorem and to the second law of thermodynamics
Green–Kubo relations – there is a deep connection between the fluctuation theorem and the Green–Kubo relations for linear transport coefficients – like shear viscosity or thermal conductivity
Ludwig Boltzmann
Thermodynamics
Brownian motor
Notes
References
Statistical mechanics theorems
Physical paradoxes
Non-equilibrium thermodynamics | Fluctuation theorem | [
"Physics",
"Mathematics"
] | 2,937 | [
"Theorems in dynamical systems",
"Non-equilibrium thermodynamics",
"Statistical mechanics theorems",
"Theorems in mathematical physics",
"Dynamical systems",
"Statistical mechanics",
"Physics theorems"
] |
333,219 | https://en.wikipedia.org/wiki/Eulerian%20path | In graph theory, an Eulerian trail (or Eulerian path) is a trail in a finite graph that visits every edge exactly once (allowing for revisiting vertices). Similarly, an Eulerian circuit or Eulerian cycle is an Eulerian trail that starts and ends on the same vertex. They were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. The problem can be stated mathematically like this:
Given the graph in the image, is it possible to construct a path (or a cycle; i.e., a path starting and ending on the same vertex) that visits each edge exactly once?
Euler proved that a necessary condition for the existence of Eulerian circuits is that all vertices in the graph have an even degree, and stated without proof that connected graphs with all vertices of even degree have an Eulerian circuit. The first complete proof of this latter claim was published posthumously in 1873 by Carl Hierholzer. This is known as Euler's Theorem:
A connected graph has an Euler cycle if and only if every vertex has an even number of incident edges.
The term Eulerian graph has two common meanings in graph theory. One meaning is a graph with an Eulerian circuit, and the other is a graph with every vertex of even degree. These definitions coincide for connected graphs.
For the existence of Eulerian trails it is necessary that zero or two vertices have an odd degree; this means the Königsberg graph is not Eulerian. If there are no vertices of odd degree, all Eulerian trails are circuits. If there are exactly two vertices of odd degree, all Eulerian trails start at one of them and end at the other. A graph that has an Eulerian trail but not an Eulerian circuit is called semi-Eulerian.
Definition
An Eulerian trail, or Euler walk, in an undirected graph is a walk that uses each edge exactly once. If such a walk exists, the graph is called traversable or semi-eulerian.
An Eulerian cycle, also called an Eulerian circuit or Euler tour, in an undirected graph is a cycle that uses each edge exactly once. If such a cycle exists, the graph is called Eulerian or unicursal. The term "Eulerian graph" is also sometimes used in a weaker sense to denote a graph where every vertex has even degree. For finite connected graphs the two definitions are equivalent, while a possibly unconnected graph is Eulerian in the weaker sense if and only if each connected component has an Eulerian cycle.
For directed graphs, "path" has to be replaced with directed path and "cycle" with directed cycle.
The definition and properties of Eulerian trails, cycles and graphs are valid for multigraphs as well.
An Eulerian orientation of an undirected graph G is an assignment of a direction to each edge of G such that, at each vertex v, the indegree of v equals the outdegree of v. Such an orientation exists for any undirected graph in which every vertex has even degree, and may be found by constructing an Euler tour in each connected component of G and then orienting the edges according to the tour. Every Eulerian orientation of a connected graph is a strong orientation, an orientation that makes the resulting directed graph strongly connected.
Properties
An undirected graph has an Eulerian cycle if and only if every vertex has even degree, and all of its vertices with nonzero degree belong to a single connected component.
An undirected graph can be decomposed into edge-disjoint cycles if and only if all of its vertices have even degree. So, a graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint cycles and its nonzero-degree vertices belong to a single connected component.
An undirected graph has an Eulerian trail if and only if exactly zero or two vertices have odd degree, and all of its vertices with nonzero degree belong to a single connected component.
A directed graph has an Eulerian cycle if and only if every vertex has equal in degree and out degree, and all of its vertices with nonzero degree belong to a single strongly connected component. Equivalently, a directed graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint directed cycles and all of its vertices with nonzero degree belong to a single strongly connected component.
A directed graph has an Eulerian trail if and only if at most one vertex has (out-degree) − (in-degree) = 1, at most one vertex has (in-degree) − (out-degree) = 1, every other vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single connected component of the underlying undirected graph.
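The degree and connectivity conditions above translate directly into a short test. The following sketch (the function name and graph representation are our own choices, not a standard API) classifies an undirected multigraph, given as an edge list, as having an Eulerian circuit, an Eulerian trail only, or neither.

```python
# Minimal sketch of the undirected criteria above (names are illustrative).
from collections import defaultdict

def eulerian_status(edges):
    """Classify an undirected multigraph given as a list of (u, v) edges.

    Returns 'circuit', 'trail', or 'none' according to the degree and
    connectivity conditions stated above.
    """
    degree = defaultdict(int)
    adjacency = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adjacency[u].add(v)
        adjacency[v].add(u)

    # All vertices with nonzero degree must lie in one connected component.
    nonzero = [v for v in degree if degree[v] > 0]
    if nonzero:
        seen, stack = {nonzero[0]}, [nonzero[0]]
        while stack:
            for w in adjacency[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if any(v not in seen for v in nonzero):
            return 'none'

    odd = sum(1 for v in degree if degree[v] % 2 == 1)
    if odd == 0:
        return 'circuit'
    if odd == 2:
        return 'trail'
    return 'none'

print(eulerian_status([(0, 1), (1, 2), (2, 0)]))           # circuit
print(eulerian_status([(0, 1), (1, 2)]))                   # trail
print(eulerian_status([(0, 1), (1, 2), (2, 3), (3, 1)]))   # trail (odd degrees at 0 and 1)
```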
Constructing Eulerian trails and circuits
Fleury's algorithm
Fleury's algorithm is an elegant but inefficient algorithm that dates to 1883. Consider a graph known to have all edges in the same component and at most two vertices of odd degree. The algorithm starts at a vertex of odd degree, or, if the graph has none, it starts with an arbitrarily chosen vertex. At each step it chooses the next edge in the path to be one whose deletion would not disconnect the graph, unless there is no such edge, in which case it picks the remaining edge left at the current vertex. It then moves to the other endpoint of that edge and deletes the edge. At the end of the algorithm there are no edges left, and the sequence from which the edges were chosen forms an Eulerian cycle if the graph has no vertices of odd degree, or an Eulerian trail if there are exactly two vertices of odd degree.
While the graph traversal in Fleury's algorithm is linear in the number of edges, i.e. O(|E|), we also need to factor in the complexity of detecting bridges. If we are to re-run Tarjan's linear time bridge-finding algorithm after the removal of every edge, Fleury's algorithm will have a time complexity of O(|E|²). A dynamic bridge-finding algorithm allows this to be improved to near-linear time in |E| (up to polylogarithmic factors), but this is still significantly slower than alternative algorithms.
Hierholzer's algorithm
Hierholzer's 1873 paper provides a different method for finding Euler cycles that is more efficient than Fleury's algorithm:
Choose any starting vertex v, and follow a trail of edges from that vertex until returning to v. It is not possible to get stuck at any vertex other than v, because the even degree of all vertices ensures that, when the trail enters another vertex w there must be an unused edge leaving w. The tour formed in this way is a closed tour, but may not cover all the vertices and edges of the initial graph.
As long as there exists a vertex u that belongs to the current tour but that has adjacent edges not part of the tour, start another trail from u, following unused edges until returning to u, and join the tour formed in this way to the previous tour.
Since we assume the original graph is connected, repeating the previous step will exhaust all edges of the graph.
By using a data structure such as a doubly linked list to maintain the set of unused edges incident to each vertex, to maintain the list of vertices on the current tour that have unused edges, and to maintain the tour itself, the individual operations of the algorithm (finding unused edges exiting each vertex, finding a new starting vertex for a tour, and connecting two tours that share a vertex) may be performed in constant time each, so the overall algorithm takes linear time, .
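A compact implementation of this idea is sketched below; the function name and edge-list representation are our own, and the code assumes the input graph is connected with all degrees even, so that an Eulerian circuit exists.

```python
# Illustrative sketch of Hierholzer's algorithm (iterative, stack-based) for an
# undirected graph given as a list of edges; names are our own, not a standard API.
def hierholzer_circuit(edges, start):
    # adjacency: vertex -> list of (neighbour, edge_id); each edge is listed twice
    adjacency = {}
    for edge_id, (u, v) in enumerate(edges):
        adjacency.setdefault(u, []).append((v, edge_id))
        adjacency.setdefault(v, []).append((u, edge_id))

    used = [False] * len(edges)
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        # discard entries for edges already traversed from the other endpoint
        while adjacency[v] and used[adjacency[v][-1][1]]:
            adjacency[v].pop()
        if adjacency[v]:
            w, edge_id = adjacency[v].pop()
            used[edge_id] = True
            stack.append(w)              # extend the current trail
        else:
            circuit.append(stack.pop())  # vertex finished: emit it
    return circuit[::-1]

# Two triangles sharing vertex 0 (all degrees even), so a circuit exists.
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
print(hierholzer_circuit(edges, 0))      # an Euler circuit such as [0, 4, 3, 0, 2, 1, 0]
```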
This algorithm may also be implemented with a deque. Because it is only possible to get stuck when the deque represents a closed tour, one should rotate the deque by removing edges from the tail and adding them to the head until unstuck, and then continue until all edges are accounted for. This also takes linear time, as the number of rotations performed is never larger than |E| (intuitively, any "bad" edges are moved to the head, while fresh edges are added to the tail).
Counting Eulerian circuits
Complexity issues
The number of Eulerian circuits in digraphs can be calculated using the so-called BEST theorem, named after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte. The formula states that the number of Eulerian circuits in a digraph is the product of certain degree factorials and the number of rooted arborescences. The latter can be computed as a determinant, by the matrix tree theorem, giving a polynomial time algorithm.
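For reference, a standard statement of the BEST theorem (not quoted from this article) can be written as follows, where deg(v) denotes the common in- and out-degree of vertex v, ec(G) the number of Eulerian circuits, and t_w(G) the number of arborescences rooted at a fixed vertex w (the count is the same for every choice of w in a connected Eulerian digraph):

```latex
\operatorname{ec}(G) \;=\; t_w(G) \cdot \prod_{v \in V} \bigl(\deg(v) - 1\bigr)!
```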
The BEST theorem was first stated in this form in a "note added in proof" to the van Aardenne-Ehrenfest and de Bruijn paper (1951). The original proof was bijective and generalized the de Bruijn sequences. It is a variation on an earlier result by Smith and Tutte (1941).
Counting the number of Eulerian circuits on undirected graphs is much more difficult. This problem is known to be #P-complete. In a positive direction, a Markov chain Monte Carlo approach, via the Kotzig transformations (introduced by Anton Kotzig in 1968) is believed to give a sharp approximation for the number of Eulerian circuits in a graph, though as yet there is no proof of this fact (even for graphs of bounded degree).
Special cases
An asymptotic formula for the number of Eulerian circuits in the complete graphs was determined by McKay and Robinson (1995); a similar formula was later obtained by M. I. Isaev (2009) for complete bipartite graphs.
Applications
Eulerian trails are used in bioinformatics to reconstruct the DNA sequence from its fragments. They are also used in CMOS circuit design to find an optimal logic gate ordering. There are some algorithms for processing trees that rely on an Euler tour of the tree (where each edge is treated as a pair of arcs). The de Bruijn sequences can be constructed as Eulerian trails of de Bruijn graphs.
In infinite graphs
In an infinite graph, the corresponding concept to an Eulerian trail or Eulerian cycle is an Eulerian line, a doubly-infinite trail that covers all of the edges of the graph. It is not sufficient for the existence of such a trail that the graph be connected and that all vertex degrees be even; for instance, the infinite Cayley graph shown, with all vertex degrees equal to four, has no Eulerian line. The infinite graphs that contain Eulerian lines have been completely characterized. For an infinite graph or multigraph G to have an Eulerian line, it is necessary and sufficient that all of the following conditions be met:
G is connected.
G has countable sets of vertices and edges.
G has no vertices of (finite) odd degree.
Removing any finite subgraph S from G leaves at most two infinite connected components in the remaining graph, and if S has even degree at each of its vertices then removing S leaves exactly one infinite connected component.
Undirected Eulerian graphs
Euler stated that a necessary condition for a finite graph to be Eulerian is that all vertices have even degree. Hierholzer proved this is a sufficient condition in a paper published in 1873. This leads to the following necessary and sufficient statement for what a finite graph must have to be Eulerian: An undirected connected finite graph is Eulerian if and only if every vertex has even degree.
The following result was proved by Veblen in 1912: An undirected connected graph is Eulerian if and only if it is the edge-disjoint union of some cycles. Hierholzer developed a linear time algorithm for constructing an Eulerian tour in an undirected graph.
Directed Eulerian graphs
It is possible to have a directed graph that has all even out-degrees but is not Eulerian. Since an Eulerian circuit leaves a vertex the same number of times as it enters that vertex, a necessary condition for an Eulerian circuit to exist is that the in-degree and out-degree are equal at each vertex. Obviously, connectivity is also necessary. König proved that these conditions are also sufficient. That is, a directed graph is Eulerian if and only if it is connected and the in-degree and out-degree are equal at each vertex.
In this theorem it doesn't matter whether "connected" means "weakly connected" or "strongly connected" since they are equivalent for Eulerian graphs.
Hierholzer's linear time algorithm for constructing an Eulerian tour is also applicable to directed graphs.
Mixed Eulerian graphs
All mixed graphs that are both even and symmetric are guaranteed to be Eulerian. However, this is not a necessary condition, as it is possible to construct a non-symmetric, even graph that is Eulerian.
Ford and Fulkerson proved in 1962 in their book Flows in Networks a necessary and sufficient condition for a graph to be Eulerian, viz., that every vertex must be even and satisfy the balance condition, i.e. for every subset of vertices S, the difference between the number of arcs leaving S and entering S must be less than or equal to the number of edges incident with S.
The process of checking if a mixed graph is Eulerian is harder than checking if an undirected or directed graph is Eulerian because the balanced set condition concerns every possible subset of vertices.
See also
Eulerian matroid, an abstract generalization of Eulerian graphs
Five room puzzle
Handshaking lemma, proven by Euler in his original paper, showing that any undirected connected graph has an even number of odd-degree vertices
Hamiltonian path – a path that visits each vertex exactly once.
Route inspection problem, search for the shortest path that visits all edges, possibly repeating edges if an Eulerian path does not exist.
Veblen's theorem, which states that graphs with even vertex degree can be partitioned into edge-disjoint cycles regardless of their connectivity
Notes
References
Bibliography
. Translated as .
Euler, L., "Solutio problematis ad geometriam situs pertinentis", Comment. Academiae Sci. I. Petropolitanae 8 (1736), 128–140.
.
Lucas, E., Récréations Mathématiques IV, Paris, 1921.
Fleury, "Deux problemes de geometrie de situation", Journal de mathematiques elementaires (1883), 257–261.
T. van Aardenne-Ehrenfest and N. G. de Bruijn (1951) "Circuits and trees in oriented linear graphs", Simon Stevin 28: 203–217.
W. T. Tutte and C. A. B. Smith (1941) "On Unicursal Paths in a Network of Degree 4", American Mathematical Monthly 48: 233–237.
External links
Discussion of early mentions of Fleury's algorithm.
Euler tour at Encyclopedia of Mathematics.
Graph theory objects
Leonhard Euler | Eulerian path | [
"Mathematics"
] | 3,156 | [
"Mathematical relations",
"Graph theory",
"Graph theory objects"
] |
333,270 | https://en.wikipedia.org/wiki/List%20of%20Internet%20radio%20stations | This is a list of Internet radio stations, including traditional broadcast stations which stream programming over the Internet as well as Internet-only stations.
General
BBC
Radio France
Indian Internet Radios
Boxout.fm
RadioJoyAlukkas.com
Sarawakian Internet Radios
Radio Free Sarawak
Raidió Teilifís Éireann
Rai – Radiotelevisione Italiana
Rai Radio 1 (News/Talk)
Rai Radio 2 (Adult contemporary music)
Rai Radio 3 (Classical music)
Rai Südtirol (radio station) (in German language)
Rai Italia Radio (closed 2011) – international
Yle
YleX
Yle Radio Suomi
Yle X3M
Yle Vega
Weather
AccuWeather
Current affairs
C-SPAN
Entertainment
Music
Terrestrial and satellite stations
BBC (see section above)
CBC Radio Three
Gaydar Radio
Radio Caroline
Sirius Internet Radio, the Internet radio product of Sirius Satellite Radio
XM Radio Online, the Internet radio product of XM Satellite Radio
Community/public/campus/college/university stations
Religious stations
Jewish Rock Radio
KLOVE
Latter-day Saints Channel
Vatican Radio
Vision Radio Network (Australia)
Tourist & park information stations
CFPE-FM
Corporate-owned stations
Audacy-owned stations
Internet-only
Radio stations
Internet | List of Internet radio stations | [
"Technology"
] | 245 | [
"Computing-related lists",
"Internet-related lists"
] |
333,306 | https://en.wikipedia.org/wiki/Regular%20polygon | In Euclidean geometry, a regular polygon is a polygon that is direct equiangular (all angles are equal in measure) and equilateral (all sides have the same length). Regular polygons may be either convex or star. In the limit, a sequence of regular polygons with an increasing number of sides approximates a circle, if the perimeter or area is fixed, or a regular apeirogon (effectively a straight line), if the edge length is fixed.
General properties
These properties apply to all regular polygons, whether convex or star:
A regular n-sided polygon has rotational symmetry of order n.
All vertices of a regular polygon lie on a common circle (the circumscribed circle); i.e., they are concyclic points. That is, a regular polygon is a cyclic polygon.
Together with the property of equal-length sides, this implies that every regular polygon also has an inscribed circle or incircle that is tangent to every side at the midpoint. Thus a regular polygon is a tangential polygon.
A regular n-sided polygon can be constructed with compass and straightedge if and only if the odd prime factors of n are distinct Fermat primes.
A regular n-sided polygon can be constructed with origami if and only if n = 2^a 3^b p1 p2 ⋯ pr for some integers a, b, r ≥ 0, where each distinct pi is a Pierpont prime.
Symmetry
The symmetry group of an n-sided regular polygon is the dihedral group Dn (of order 2n): D2, D3, D4, ... It consists of the rotations in Cn, together with reflection symmetry in n axes that pass through the center. If n is even then half of these axes pass through two opposite vertices, and the other half through the midpoint of opposite sides. If n is odd then all axes pass through a vertex and the midpoint of the opposite side.
Regular convex polygons
All regular simple polygons (a simple polygon is one that does not intersect itself anywhere) are convex. Those having the same number of sides are also similar.
An n-sided convex regular polygon is denoted by its Schläfli symbol {n}. For n ≤ 2, we have two degenerate cases:
Monogon {1} Degenerate in ordinary space. (Most authorities do not regard the monogon as a true polygon, partly because of this, and also because the formulae below do not work, and its structure is not that of any abstract polygon.)
Digon {2}; a "double line segment" Degenerate in ordinary space. (Some authorities do not regard the digon as a true polygon because of this.)
In certain contexts all the polygons considered will be regular. In such circumstances it is customary to drop the prefix regular. For instance, all the faces of uniform polyhedra must be regular and the faces will be described simply as triangle, square, pentagon, etc.
Angles
For a regular convex n-gon, each interior angle has a measure of:
(n − 2) × 180/n degrees;
(n − 2)π/n radians; or
(n − 2)/(2n) full turns,
and each exterior angle (i.e., supplementary to the interior angle) has a measure of 360/n degrees, with the sum of the exterior angles equal to 360 degrees or 2π radians or one full turn.
As n approaches infinity, the internal angle approaches 180 degrees. For a regular polygon with 10,000 sides (a myriagon) the internal angle is 179.964°. As the number of sides increases, the internal angle can come very close to 180°, and the shape of the polygon approaches that of a circle. However the polygon can never become a circle. The value of the internal angle can never become exactly equal to 180°, as the circumference would effectively become a straight line (see apeirogon). For this reason, a circle is not a polygon with an infinite number of sides.
Diagonals
For n > 2, the number of diagonals is n(n − 3)/2; i.e., 0, 2, 5, 9, ..., for a triangle, square, pentagon, hexagon, ... . The diagonals divide the polygon into 1, 4, 11, 24, ... pieces.
For a regular n-gon inscribed in a circle of radius 1, the product of the distances from a given vertex to all other vertices (including adjacent vertices and vertices connected by a diagonal) equals n.
Points in the plane
For a regular simple n-gon with circumradius R and distances di from an arbitrary point in the plane to the vertices, we have (1/n) Σ di^4 + 3R^4 = ((1/n) Σ di^2 + R^2)^2, where the sums run over i = 1, …, n.
For higher powers of distances from an arbitrary point in the plane to the vertices of a regular -gon, if
,
then
,
and
,
where is a positive integer less than .
If is the distance from an arbitrary point in the plane to the centroid of a regular -gon with circumradius , then
,
where = 1, 2, …, .
Interior points
For a regular n-gon, the sum of the perpendicular distances from any interior point to the n sides is n times the apothem (the apothem being the distance from the center to any side). This is a generalization of Viviani's theorem for the n = 3 case.
Circumradius
The circumradius R from the center of a regular polygon to one of the vertices is related to the side length s or to the apothem a by R = s / (2 sin(π/n)) = a / cos(π/n).
For constructible polygons, algebraic expressions for these relationships exist.
The sum of the perpendiculars from a regular n-gon's vertices to any line tangent to the circumcircle equals n times the circumradius.
The sum of the squared distances from the vertices of a regular n-gon to any point on its circumcircle equals 2nR2 where R is the circumradius.
The sum of the squared distances from the midpoints of the sides of a regular n-gon to any point on the circumcircle is 2nR2 − ns2, where s is the side length and R is the circumradius.
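Two of these circumradius facts are easy to check numerically. The sketch below (variable names are our own) verifies, for a regular pentagon with R = 1, the relation R = s/(2 sin(π/n)) and the claim that the squared distances from the vertices to a point on the circumcircle sum to 2nR².

```python
# Sketch: numerically checking two circumradius facts above for a regular
# pentagon (n = 5) with circumradius R = 1.
import cmath
import math
import random

n, R = 5, 1.0
vertices = [R * cmath.exp(2j * math.pi * k / n) for k in range(n)]

side = abs(vertices[1] - vertices[0])
print(abs(R - side / (2 * math.sin(math.pi / n))) < 1e-12)   # R = s / (2 sin(pi/n))

theta = random.uniform(0, 2 * math.pi)
point = R * cmath.exp(1j * theta)                  # arbitrary point on the circumcircle
squared = [abs(point - v) ** 2 for v in vertices]
print(abs(sum(squared) - 2 * n * R**2) < 1e-9)     # sum of squared distances = 2nR^2
```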
If d1, d2, …, dn are the distances from the vertices of a regular n-gon to any point on its circumcircle, then
3 (d1^2 + d2^2 + ⋯ + dn^2)^2 = 2n (d1^4 + d2^4 + ⋯ + dn^4).
Dissections
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m − 1)/2 parallelograms.
These tilings are contained as subsets of vertices, edges and faces in orthogonal projections of m-cubes.
In particular, this is true for any regular polygon with an even number of sides, in which case the parallelograms are all rhombi.
The list gives the number of solutions for smaller polygons.
Area
The area A of a convex regular n-sided polygon having side s, circumradius R, apothem a, and perimeter p is given by A = (1/2) n s a = (1/2) p a = (1/4) n s^2 cot(π/n) = n a^2 tan(π/n) = (1/2) n R^2 sin(2π/n).
For regular polygons with side s = 1, circumradius R = 1, or apothem a = 1, these formulas give explicit numerical values for each n. (Since cot x → 1/x as x → 0, the area for s = 1 tends to n^2/4π as n grows large.)
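These area formulas are straightforward to evaluate. The sketch below (function names are illustrative, not from any standard library) computes the area for unit side, unit circumradius, and unit apothem, and checks the n²/4π behaviour for s = 1 at large n.

```python
# Sketch: evaluating the area formulas above and the large-n limit for s = 1.
import math

def area_from_side(n, s=1.0):
    return n * s**2 / (4 * math.tan(math.pi / n))

def area_from_circumradius(n, R=1.0):
    return 0.5 * n * R**2 * math.sin(2 * math.pi / n)

def area_from_apothem(n, a=1.0):
    return n * a**2 * math.tan(math.pi / n)

for n in (3, 4, 6, 12, 1000):
    print(n, round(area_from_side(n), 6),
          round(area_from_circumradius(n), 6),
          round(area_from_apothem(n), 6))

n = 10_000
print(area_from_side(n) / (n**2 / (4 * math.pi)))   # ratio close to 1
```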
Of all n-gons with a given perimeter, the one with the largest area is regular.
Constructible polygon
Some regular polygons are easy to construct with compass and straightedge; other regular polygons are not constructible at all.
The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides, and they knew how to construct a regular polygon with double the number of sides of a given regular polygon. This led to the question being posed: is it possible to construct all regular n-gons with compass and straightedge? If not, which n-gons are constructible and which are not?
Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons:
A regular n-gon can be constructed with compass and straightedge if n is the product of a power of 2 and any number of distinct Fermat primes (including none).
(A Fermat prime is a prime number of the form 2^(2^k) + 1.) Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem.
Equivalently, a regular n-gon is constructible if and only if the cosine of its common angle is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots.
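The Gauss–Wantzel condition can be tested mechanically, at least with respect to the five known Fermat primes. The sketch below (function name is our own, and it assumes no further Fermat primes occur within the range tested) lists the constructible n-gons for small n.

```python
# Sketch of the Gauss-Wantzel criterion: an n-gon is constructible with compass
# and straightedge iff n = 2^k * (product of distinct Fermat primes).
# Only the five known Fermat primes are checked here.
KNOWN_FERMAT_PRIMES = (3, 5, 17, 257, 65537)

def is_constructible(n):
    if n < 3:
        return False
    while n % 2 == 0:          # strip the power of two
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:     # a repeated Fermat prime is not allowed
                return False
    return n == 1

print([n for n in range(3, 31) if is_constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30]
```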
Regular skew polygons
A regular skew polygon in 3-space can be seen as nonplanar paths zig-zagging between two parallel planes, defined as the side-edges of a uniform antiprism. All edges and internal angles are equal.
More generally regular skew polygons can be defined in n-space. Examples include the Petrie polygons, polygonal paths of edges that divide a regular polytope into two halves, and seen as a regular polygon in orthogonal projection.
In the infinite limit regular skew polygons become skew apeirogons.
Regular star polygons
A non-convex regular polygon is a regular star polygon. The most common example is the pentagram, which has the same vertices as a pentagon, but connects alternating vertices.
For an n-sided star polygon, the Schläfli symbol is modified to indicate the density or "starriness" m of the polygon, as {n/m}. If m is 2, for example, then every second point is joined. If m is 3, then every third point is joined. The boundary of the polygon winds around the center m times.
The (non-degenerate) regular stars of up to 12 sides are:
Pentagram – {5/2}
Heptagram – {7/2} and {7/3}
Octagram – {8/3}
Enneagram – {9/2} and {9/4}
Decagram – {10/3}
Hendecagram – {11/2}, {11/3}, {11/4} and {11/5}
Dodecagram – {12/5}
m and n must be coprime, or the figure will degenerate.
The degenerate regular stars of up to 12 sides are:
Tetragon – {4/2}
Hexagons – {6/2}, {6/3}
Octagons – {8/2}, {8/4}
Enneagon – {9/3}
Decagons – {10/2}, {10/4}, and {10/5}
Dodecagons – {12/2}, {12/3}, {12/4}, and {12/6}
Depending on the precise derivation of the Schläfli symbol, opinions differ as to the nature of the degenerate figure. For example, {6/2} may be treated in either of two ways:
For much of the 20th century, we have commonly taken the /2 to indicate joining each vertex of a convex {6} to its near neighbors two steps away, to obtain the regular compound of two triangles, or hexagram. Coxeter clarifies this regular compound with a notation {kp}[k{p}]{kp} for the compound {p/k}, so the hexagram is represented as {6}[2{3}]{6}. More compactly Coxeter also writes 2{n/2}, like 2{3} for a hexagram as compound as alternations of regular even-sided polygons, with italics on the leading factor to differentiate it from the coinciding interpretation.
Many modern geometers, such as Grünbaum (2003), regard this as incorrect. They take the /2 to indicate moving two places around the {6} at each step, obtaining a "double-wound" triangle that has two vertices superimposed at each corner point and two edges along each line segment. Not only does this fit in better with modern theories of abstract polytopes, but it also more closely copies the way in which Poinsot (1809) created his star polygons – by taking a single length of wire and bending it at successive points through the same angle until the figure closed.
Duality of regular polygons
All regular polygons are self-dual to congruency, and for odd n they are self-dual to identity.
In addition, the regular star figures (compounds), being composed of regular polygons, are also self-dual.
Regular polygons as faces of polyhedra
A uniform polyhedron has regular polygons as faces, such that for every two vertices there is an isometry mapping one into the other (just as there is for a regular polygon).
A quasiregular polyhedron is a uniform polyhedron which has just two kinds of face alternating around each vertex.
A regular polyhedron is a uniform polyhedron which has just one kind of face.
The remaining (non-uniform) convex polyhedra with regular faces are known as the Johnson solids.
A polyhedron having regular triangles as faces is called a deltahedron.
See also
Euclidean tilings by convex regular polygons
Platonic solid
List of regular polytopes and compounds
Equilateral polygon
Carlyle circle
Notes
References
Further reading
Lee, Hwa Young; "Origami-Constructible Numbers".
Grünbaum, B.; Are your polyhedra the same as my polyhedra?, Discrete and comput. geom: the Goodman-Pollack festschrift, Ed. Aronov et al., Springer (2003), pp. 461–488.
Poinsot, L.; Memoire sur les polygones et polyèdres. J. de l'École Polytechnique 9 (1810), pp. 16–48.
External links
Regular Polygon description With interactive animation
Incircle of a Regular Polygon With interactive animation
Area of a Regular Polygon Three different formulae, with interactive animation
Renaissance artists' constructions of regular polygons at Convergence
Types of polygons
Regular polytopes | Regular polygon | [
"Physics"
] | 3,060 | [
"Uniform polytopes",
"Symmetry",
"Regular polytopes"
] |
333,365 | https://en.wikipedia.org/wiki/Modal%20logic | Modal logic is a kind of logic used to represent statements about necessity and possibility. It plays a major role in philosophy and related fields as a tool for understanding concepts such as knowledge, obligation, and causation. For instance, in epistemic modal logic, the formula can be used to represent the statement that is known. In deontic modal logic, that same formula can represent that is a moral obligation. Modal logic considers the inferences that modal statements give rise to. For instance, most epistemic modal logics treat the formula as a tautology, representing the principle that only true statements can count as knowledge. However, this formula is not a tautology in deontic modal logic, since what ought to be true can be false.
Modal logics are formal systems that include unary operators such as ◇ and □, representing possibility and necessity respectively. For instance the modal formula ◇P can be read as "possibly P" while □P can be read as "necessarily P". In the standard relational semantics for modal logic, formulas are assigned truth values relative to a possible world. A formula's truth value at one possible world can depend on the truth values of other formulas at other accessible possible worlds. In particular, ◇P is true at a world if P is true at some accessible possible world, while □P is true at a world if P is true at every accessible possible world. A variety of proof systems exist which are sound and complete with respect to the semantics one gets by restricting the accessibility relation. For instance, the deontic modal logic D is sound and complete if one requires the accessibility relation to be serial.
While the intuition behind modal logic dates back to antiquity, the first modal axiomatic systems were developed by C. I. Lewis in 1912. The now-standard relational semantics emerged in the mid twentieth century from work by Arthur Prior, Jaakko Hintikka, and Saul Kripke. Recent developments include alternative topological semantics such as neighborhood semantics as well as applications of the relational semantics beyond its original philosophical motivation. Such applications include game theory, moral and legal theory, web design, multiverse-based set theory, and social epistemology.
Syntax of modal operators
Modal logic differs from other kinds of logic in that it uses modal operators such as □ and ◇. The former is conventionally read aloud as "necessarily", and can be used to represent notions such as moral or legal obligation, knowledge, historical inevitability, among others. The latter is typically read as "possibly" and can be used to represent notions including permission, ability, and compatibility with evidence. While the well-formed formulas of modal logic include non-modal formulas such as P ∧ Q, they also contain modal ones such as □P, ◇P, □(P → Q), and so on.
Thus, the language L of basic propositional modal logic can be defined recursively as follows.
If P is an atomic formula, then P is a formula of L.
If φ is a formula of L, then ¬φ is too.
If φ and ψ are formulas of L, then (φ ∧ ψ) is too.
If φ is a formula of L, then □φ is too.
If φ is a formula of L, then ◇φ is too.
Modal operators can be added to other kinds of logic by introducing rules analogous to #4 and #5 above. Modal predicate logic is one widely used variant which includes formulas such as ∀x □P(x). In systems of modal logic where □ and ◇ are duals, ◇φ can be taken as an abbreviation for ¬□¬φ, thus eliminating the need for a separate syntactic rule to introduce it. However, separate syntactic rules are necessary in systems where the two operators are not interdefinable.
Common notational variants include writing the box operator as K in systems of modal logic used to represent knowledge and as B in those used to represent belief. These notations are particularly common in systems which use multiple modal operators simultaneously. For instance, a combined epistemic-deontic logic could use a formula such as K◇P, read as "I know P is permitted". Systems of modal logic can include infinitely many modal operators distinguished by indices, i.e. □1, □2, □3, and so on.
Semantics
Relational semantics
Basic notions
The standard semantics for modal logic is called the relational semantics. In this approach, the truth of a formula is determined relative to a point which is often called a possible world. For a formula that contains a modal operator, its truth value can depend on what is true at other accessible worlds. Thus, the relational semantics interprets formulas of modal logic using models defined as follows.
A relational model is a tuple M = ⟨W, R, V⟩ where:
W is a set of possible worlds
R is a binary relation on W
V is a valuation function which assigns a truth value to each pair of an atomic formula and a world (i.e. V : W × F → {0, 1}, where F is the set of atomic formulae)
The set W is often called the universe. The binary relation R is called an accessibility relation, and it controls which worlds can "see" each other for the sake of determining what is true. For example, wRu means that the world u is accessible from world w. That is to say, the state of affairs known as u is a live possibility for w. Finally, the function V is known as a valuation function. It determines which atomic formulas are true at which worlds.
Then we recursively define the truth of a formula at a world w in a model M:
w ⊨ P iff V(w, P) = 1, for P an atomic formula
w ⊨ ¬P iff w ⊭ P
w ⊨ (P ∧ Q) iff w ⊨ P and w ⊨ Q
w ⊨ □P iff for every element u of W, if wRu then u ⊨ P
w ⊨ ◇P iff for some element u of W, it holds that wRu and u ⊨ P
According to this semantics, a formula is necessary with respect to a world if it holds at every world that is accessible from . It is possible if it holds at some world that is accessible from . Possibility thereby depends upon the accessibility relation , which allows us to express the relative nature of possibility. For example, we might say that given our laws of physics it is not possible for humans to travel faster than the speed of light, but that given other circumstances it could have been possible to do so. Using the accessibility relation we can translate this scenario as follows: At all of the worlds accessible to our own world, it is not the case that humans can travel faster than the speed of light, but at one of these accessible worlds there is another world accessible from those worlds but not accessible from our own at which humans can travel faster than the speed of light.
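The truth clauses above can be turned directly into a small evaluator. The following sketch is not from the article; it represents formulas as nested tuples and a model as a triple of worlds, accessibility pairs and a valuation, with all names our own.

```python
# Sketch of the relational (Kripke) truth clauses above. Formulas are nested
# tuples: ('atom', 'p'), ('not', f), ('and', f, g), ('box', f), ('dia', f).
def holds(model, world, formula):
    worlds, access, valuation = model          # (W, R, V)
    op = formula[0]
    if op == 'atom':
        return formula[1] in valuation.get(world, set())
    if op == 'not':
        return not holds(model, world, formula[1])
    if op == 'and':
        return holds(model, world, formula[1]) and holds(model, world, formula[2])
    if op == 'box':    # true at w iff true at every accessible world
        return all(holds(model, u, formula[1])
                   for u in worlds if (world, u) in access)
    if op == 'dia':    # true at w iff true at some accessible world
        return any(holds(model, u, formula[1])
                   for u in worlds if (world, u) in access)
    raise ValueError(op)

# Two worlds; w1 sees only w2, and p holds only at w2.
model = ({'w1', 'w2'}, {('w1', 'w2'), ('w2', 'w2')}, {'w2': {'p'}})
print(holds(model, 'w1', ('dia', ('atom', 'p'))))   # True: p at some accessible world
print(holds(model, 'w1', ('box', ('atom', 'p'))))   # True: p at every accessible world
print(holds(model, 'w1', ('atom', 'p')))            # False: p does not hold at w1 itself
```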
Frames and completeness
The choice of accessibility relation alone can sometimes be sufficient to guarantee the truth or falsity of a formula. For instance, consider a model whose accessibility relation is reflexive. Because the relation is reflexive, we will have that w ⊨ □P → P for any world w, regardless of which valuation function is used. For this reason, modal logicians sometimes talk about frames, which are the portion of a relational model excluding the valuation function.
A relational frame is a pair F = ⟨G, R⟩ where G is a set of possible worlds and R is a binary relation on G.
The different systems of modal logic are defined using frame conditions. A frame is called:
reflexive if w R w, for every w in G
symmetric if w R u implies u R w, for all w and u in G
transitive if w R u and u R q together imply w R q, for all w, u, q in G.
serial if, for every w in G there is some u in G such that w R u.
Euclidean if, for every u, t, and w, w R u and w R t implies u R t (by symmetry, it also implies t R u, as well as t R t and u R u)
The logics that stem from these frame conditions are:
K := no conditions
D := serial
T := reflexive
B := reflexive and symmetric
S4 := reflexive and transitive
S5 := reflexive and Euclidean
The Euclidean property along with reflexivity yields symmetry and transitivity. (The Euclidean property can be obtained, as well, from symmetry and transitivity.) Hence if the accessibility relation R is reflexive and Euclidean, R is provably symmetric and transitive as well. Hence for models of S5, R is an equivalence relation, because R is reflexive, symmetric and transitive.
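On a finite frame these conditions can be checked by brute force. The sketch below (function name and representation are our own) tests the five properties listed above for a relation given as a set of ordered pairs.

```python
# Sketch: checking the frame conditions above on a finite frame (G, R),
# with R given as a set of ordered pairs.
def frame_properties(G, R):
    return {
        'reflexive':  all((w, w) in R for w in G),
        'symmetric':  all((u, w) in R for (w, u) in R),
        'transitive': all((w, q) in R
                          for (w, u) in R for (x, q) in R if x == u),
        'serial':     all(any((w, u) in R for u in G) for w in G),
        'euclidean':  all((u, t) in R
                          for (w, u) in R for (x, t) in R if x == w),
    }

# An equivalence relation on {a, b}: reflexive, symmetric, transitive, Euclidean.
G = {'a', 'b'}
R = {('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')}
print(frame_properties(G, R))   # all five properties hold (a frame suitable for S5)
```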
We can prove that these frames produce the same set of valid sentences as do the frames where all worlds can see all other worlds of W (i.e., where R is a "total" relation). This gives the corresponding modal graph which is total complete (i.e., no more edges (relations) can be added). For example, in any modal logic based on frame conditions:
w ⊨ ◇P if and only if for some element u of G, it holds that u ⊨ P and w R u.
If we consider frames based on the total relation we can just say that
w ⊨ ◇P if and only if for some element u of G, it holds that u ⊨ P.
We can drop the accessibility clause from the latter stipulation because in such total frames it is trivially true of all w and u that w R u. But this does not have to be the case in all S5 frames, which can still consist of multiple parts that are fully connected among themselves but still disconnected from each other.
All of these logical systems can also be defined axiomatically, as is shown in the next section. For example, in S5, the axioms P → □◇P, □P → □□P and □P → P (corresponding to symmetry, transitivity and reflexivity, respectively) hold, whereas at least one of these axioms does not hold in each of the other, weaker logics.
Topological semantics
Modal logic has also been interpreted using topological structures. For instance, the Interior Semantics interprets formulas of modal logic as follows.
A topological model is a tuple ⟨X, τ, V⟩ where ⟨X, τ⟩ is a topological space and V is a valuation function which maps each atomic formula to some subset of X. The basic interior semantics interprets formulas of modal logic as follows:
x ⊨ P iff x ∈ V(P)
x ⊨ ¬φ iff x ⊭ φ
x ⊨ φ ∧ ψ iff x ⊨ φ and x ⊨ ψ
x ⊨ □φ iff for some open set U ∈ τ we have both that x ∈ U and also that u ⊨ φ for all u ∈ U
Topological approaches subsume relational ones, allowing non-normal modal logics. The extra structure they provide also allows a transparent way of modeling certain concepts such as the evidence or justification one has for one's beliefs. Topological semantics is widely used in recent work in formal epistemology and has antecedents in earlier work such as David Lewis and Angelika Kratzer's logics for counterfactuals.
Axiomatic systems
The first formalizations of modal logic were axiomatic. Numerous variations with very different properties have been proposed since C. I. Lewis began working in the area in 1912. Hughes and Cresswell (1996), for example, describe 42 normal and 25 non-normal modal logics. Zeman (1973) describes some systems Hughes and Cresswell omit.
Modern treatments of modal logic begin by augmenting the propositional calculus with two unary operations, one denoting "necessity" and the other "possibility". The notation of C. I. Lewis, much employed since, denotes "necessarily p" by a prefixed "box" (□p) whose scope is established by parentheses. Likewise, a prefixed "diamond" (◇p) denotes "possibly p". Similar to the quantifiers in first-order logic, "necessarily p" (□p) does not assume the range of quantification (the set of accessible possible worlds in Kripke semantics) to be non-empty, whereas "possibly p" (◇p) often implicitly assumes that the set of accessible possible worlds is non-empty. Regardless of notation, each of these operators is definable in terms of the other in classical modal logic:
□p (necessarily p) is equivalent to ¬◇¬p ("not possible that not-p")
◇p (possibly p) is equivalent to ¬□¬p ("not necessarily not-p")
Hence □ and ◇ form a dual pair of operators.
In many modal logics, the necessity and possibility operators satisfy the following analogues of de Morgan's laws from Boolean algebra:
"It is not necessary that X" is logically equivalent to "It is possible that not X".
"It is not possible that X" is logically equivalent to "It is necessary that not X".
Precisely what axioms and rules must be added to the propositional calculus to create a usable system of modal logic is a matter of philosophical opinion, often driven by the theorems one wishes to prove; or, in computer science, it is a matter of what sort of computational or deductive system one wishes to model. Many modal logics, known collectively as normal modal logics, include the following rule and axiom:
N, Necessitation Rule: If p is a theorem/tautology (of any system/model invoking N), then □p is likewise a theorem (i.e. ⊢ p implies ⊢ □p).
K, Distribution Axiom: □(p → q) → (□p → □q)
The weakest normal modal logic, named "K" in honor of Saul Kripke, is simply the propositional calculus augmented by □, the rule N, and the axiom K. K is weak in that it fails to determine whether a proposition can be necessary but only contingently necessary. That is, it is not a theorem of K that if □p is true then □□p is true, i.e., that necessary truths are "necessarily necessary". If such perplexities are deemed forced and artificial, this defect of K is not a great one. In any case, different answers to such questions yield different systems of modal logic.
Adding axioms to K gives rise to other well-known modal systems. One cannot prove in K that if "p is necessary" then p is true. The axiom T remedies this defect:
T, Reflexivity Axiom: □p → p (If p is necessary, then p is the case.)
T holds in most but not all modal logics. Zeman (1973) describes a few exceptions, such as S10.
Other well-known elementary axioms are:
4: □p → □□p
B: p → □◇p
D: □p → ◇p
5: ◇p → □◇p
These yield the systems (axioms in bold, systems in italics):
K := K + N
T := K + T
S4 := T + 4
S5 := T + 5
D := K + D.
K through S5 form a nested hierarchy of systems, making up the core of normal modal logic. But specific rules or sets of rules may be appropriate for specific systems. For example, in deontic logic, □p → ◇p (If it ought to be that p, then it is permitted that p) seems appropriate, but we should probably not include that p → □◇p. In fact, to do so is to commit the naturalistic fallacy (i.e. to state that what is natural is also good, by saying that if p is the case, p ought to be permitted).
The commonly employed system S5 simply makes all modal truths necessary. For example, if p is possible, then it is "necessary" that p is possible. Also, if p is necessary, then it is necessary that p is necessary. Other systems of modal logic have been formulated, in part because S5 does not describe every kind of modality of interest.
Structural proof theory
Sequent calculi and systems of natural deduction have been developed for several modal logics, but it has proven hard to combine generality with other features expected of good structural proof theories, such as purity (the proof theory does not introduce extra-logical notions such as labels) and analyticity (the logical rules support a clean notion of analytic proof). More complex calculi have been applied to modal logic to achieve generality.
Decision methods
Analytic tableaux provide the most popular decision method for modal logics.
Modal logics in philosophy
Alethic logic
Modalities of necessity and possibility are called alethic modalities. They are also sometimes called special modalities, from the Latin species. Modal logic was first developed to deal with these concepts, and only afterward was extended to others. For this reason, or perhaps for their familiarity and simplicity, necessity and possibility are often casually treated as the subject matter of modal logic. Moreover, it is easier to make sense of relativizing necessity, e.g. to legal, physical, nomological, epistemic, and so on, than it is to make sense of relativizing other notions.
In classical modal logic, a proposition is said to be
possible if it is not necessarily false (regardless of whether it is actually true or actually false);
necessary if it is not possibly false (i.e. true and necessarily true);
contingent if it is not necessarily false and not necessarily true (i.e. possible but not necessarily true);
impossible if it is not possibly true (i.e. false and necessarily false).
In classical modal logic, therefore, the notion of either possibility or necessity may be taken to be basic, where these other notions are defined in terms of it in the manner of De Morgan duality. Intuitionistic modal logic treats possibility and necessity as not perfectly symmetric.
For example, suppose that while walking to the convenience store we pass Friedrich's house, and observe that the lights are off. On the way back, we observe that they have been turned on.
"Somebody or something turned the lights on" is necessary.
"Friedrich turned the lights on", "Friedrich's roommate Max turned the lights on" and "A burglar named Adolf broke into Friedrich's house and turned the lights on" are contingent.
All of the above statements are possible.
It is impossible that Socrates (who has been dead for over two thousand years) turned the lights on.
(Of course, this analogy does not apply alethic modality in a truly rigorous fashion; for it to do so, it would have to axiomatically make such statements as "human beings cannot rise from the dead", "Socrates was a human being and not an immortal vampire", and "we did not take hallucinogenic drugs which caused us to falsely believe the lights were on", ad infinitum. Absolute certainty of truth or falsehood exists only in the sense of logically constructed abstract concepts such as "it is impossible to draw a triangle with four sides" and "all bachelors are unmarried".)
For those having difficulty with the concept of something being possible but not true, the meaning of these terms may be made more comprehensible by thinking of multiple "possible worlds" (in the sense of Leibniz) or "alternate universes"; something "necessary" is true in all possible worlds, something "possible" is true in at least one possible world.
Physical possibility
Something is physically, or nomically, possible if it is permitted by the laws of physics. For example, current theory is thought to allow for there to be an atom with an atomic number of 126, even if there are no such atoms in existence. In contrast, while it is logically possible to accelerate beyond the speed of light, modern science stipulates that it is not physically possible for material particles or information.
Metaphysical possibility
Philosophers debate if objects have properties independent of those dictated by scientific laws. For example, it might be metaphysically necessary, as some who advocate physicalism have thought, that all thinking beings have bodies and can experience the passage of time. Saul Kripke has argued that every person necessarily has the parents they do have: anyone with different parents would not be the same person.
Metaphysical possibility has been thought to be more restricting than bare logical possibility (i.e., fewer things are metaphysically possible than are logically possible). However, its exact relation (if any) to logical possibility or to physical possibility is a matter of dispute. Philosophers also disagree over whether metaphysical truths are necessary merely "by definition", or whether they reflect some underlying deep facts about the world, or something else entirely.
Epistemic logic
Epistemic modalities (from the Greek episteme, knowledge) deal with the certainty of sentences. The □ operator is translated as "x is certain that…", and the ◇ operator is translated as "For all x knows, it may be true that…" In ordinary speech both metaphysical and epistemic modalities are often expressed in similar words; the following contrasts may help:
A person, Jones, might reasonably say both: (1) "No, it is not possible that Bigfoot exists; I am quite certain of that"; and, (2) "Sure, it's possible that Bigfoots could exist". What Jones means by (1) is that, given all the available information, there is no question remaining as to whether Bigfoot exists. This is an epistemic claim. By (2) he makes the metaphysical claim that it is possible for Bigfoot to exist, even though he does not: there is no physical or biological reason that large, featherless, bipedal creatures with thick hair could not exist in the forests of North America (regardless of whether or not they do). Similarly, "it is possible for the person reading this sentence to be fourteen feet tall and named Chad" is metaphysically true (such a person would not somehow be prevented from doing so on account of their height and name), but not alethically true unless you match that description, and not epistemically true if it is known that fourteen-foot-tall human beings have never existed.
From the other direction, Jones might say, (3) "It is possible that Goldbach's conjecture is true; but also possible that it is false", and also (4) "if it is true, then it is necessarily true, and not possibly false". Here Jones means that it is epistemically possible that it is true or false, for all he knows (Goldbach's conjecture has not been proven either true or false), but if there is a proof (heretofore undiscovered), then it would show that it is not logically possible for Goldbach's conjecture to be false—there could be no set of numbers that violated it. Logical possibility is a form of alethic possibility; (4) makes a claim about whether it is possible (i.e., logically speaking) that a mathematical truth to have been false, but (3) only makes a claim about whether it is possible, for all Jones knows, (i.e., speaking of certitude) that the mathematical claim is specifically either true or false, and so again Jones does not contradict himself. It is worthwhile to observe that Jones is not necessarily correct: It is possible (epistemically) that Goldbach's conjecture is both true and unprovable.
Epistemic possibilities also bear on the actual world in a way that metaphysical possibilities do not. Metaphysical possibilities bear on ways the world might have been, but epistemic possibilities bear on the way the world may be (for all we know). Suppose, for example, that I want to know whether or not to take an umbrella before I leave. If you tell me "it is possible that it is raining outside" – in the sense of epistemic possibility – then that would weigh on whether or not I take the umbrella. But if you just tell me that "it is possible for it to rain outside" – in the sense of metaphysical possibility – then I am no better off for this bit of modal enlightenment.
Some features of epistemic modal logic are in debate. For example, if x knows that p, does x know that it knows that p? That is to say, should □P → □□P be an axiom in these systems? While the answer to this question is unclear, there is at least one axiom that is generally included in epistemic modal logic, because it is minimally true of all normal modal logics (see the section on axiomatic systems):
K, Distribution Axiom: □(p → q) → (□p → □q).
It has been questioned whether the epistemic and alethic modalities should be considered distinct from each other. The criticism states that there is no real difference between "the truth in the world" (alethic) and "the truth in an individual's mind" (epistemic). An investigation has not found a single language in which alethic and epistemic modalities are formally distinguished, as by the means of a grammatical mood.
Temporal logic
Temporal logic is an approach to the semantics of expressions with tense, that is, expressions with qualifications of when. Some expressions, such as '2 + 2 = 4', are true at all times, while tensed expressions such as 'John is happy' are only true sometimes.
In temporal logic, tense constructions are treated in terms of modalities, where a standard method for formalizing talk of time is to use two pairs of operators, one for the past and one for the future (P will just mean 'it is presently the case that P'). For example:
FP : It will sometimes be the case that P
GP : It will always be the case that P
PP : It was sometime the case that P
HP : It has always been the case that P
There are then at least three modal logics that we can develop. For example, we can stipulate that,
◇P = P is the case at some time t
□P = P is the case at every time t
Or we can trade these operators to deal only with the future (or past). For example,
◇P = FP
□P = GP
or,
◇P = P and/or FP
□P = P and GP
The operators F and G may seem initially foreign, but they create normal modal systems. FP is the same as ¬G¬P. We can combine the above operators to form complex statements. For example, PP → □PP says (effectively), Everything that is past and true is necessary.
It seems reasonable to say that possibly it will rain tomorrow, and possibly it will not; on the other hand, since we cannot change the past, if it is true that it rained yesterday, it cannot be true that it may not have rained yesterday. It seems the past is "fixed", or necessary, in a way the future is not. This is sometimes referred to as accidental necessity. But if the past is "fixed", and everything that is in the future will eventually be in the past, then it seems plausible to say that future events are necessary too.
Similarly, the problem of future contingents considers the semantics of assertions about the future: is either of the propositions 'There will be a sea battle tomorrow', or 'There will not be a sea battle tomorrow' now true? Considering this thesis led Aristotle to reject the principle of bivalence for assertions concerning the future.
Additional binary operators are also relevant to temporal logics (see Linear temporal logic).
Versions of temporal logic can be used in computer science to model computer operations and prove theorems about them. In one version, ◇P means "at a future time in the computation it is possible that the computer state will be such that P is true"; □P means "at all future times in the computation P will be true". In another version, ◇P means "at the immediate next state of the computation, P might be true"; □P means "at the immediate next state of the computation, P will be true". These differ in the choice of Accessibility relation. (P always means "P is true at the current computer state".) These two examples involve nondeterministic or not-fully-understood computations; there are many other modal logics specialized to different types of program analysis. Each one naturally leads to slightly different axioms.
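A rough sketch of the "immediate next state" reading in Python is given below; the states, the nondeterministic transition relation and the property P are all invented, and ◇P is read as "P holds in some possible next state" while □P is read as "P holds in every possible next state".

# Invented nondeterministic transition system for illustration.
next_states = {
    "s0": {"s1", "s2"},   # from s0 the computation may move to s1 or to s2
    "s1": {"s1"},
    "s2": {"s2"},
}
P_holds = {"s1"}          # the states in which the property P is true

def possibly_next(state):      # ◇P: P might be true at the immediate next state
    return any(s in P_holds for s in next_states[state])

def necessarily_next(state):   # □P: P will be true at every immediate next state
    return all(s in P_holds for s in next_states[state])

print(possibly_next("s0"), necessarily_next("s0"))   # True False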
Deontic logic
Likewise talk of morality, or of obligation and norms generally, seems to have a modal structure. The difference between "You must do this" and "You may do this" looks a lot like the difference between "This is necessary" and "This is possible". Such logics are called deontic, from the Greek for "duty".
Deontic logics commonly lack the axiom T semantically corresponding to the reflexivity of the accessibility relation in Kripke semantics: in symbols, □P → P. Interpreting □ as "it is obligatory that", T informally says that every obligation is true. For example, if it is obligatory not to kill others (i.e. killing is morally forbidden), then T implies that people actually do not kill others. The consequent is obviously false.
Instead, using Kripke semantics, we say that though our own world does not realize all obligations, the worlds accessible to it do (i.e., T holds at these worlds). These worlds are called idealized worlds. P is obligatory with respect to our own world if at all idealized worlds accessible to our world, P holds. Though this was one of the first interpretations of the formal semantics, it has recently come under criticism.
One other principle that is often (at least traditionally) accepted as a deontic principle is D, □P → ◇P, which corresponds to the seriality (or extendability or unboundedness) of the accessibility relation. It is an embodiment of the Kantian idea that "ought implies can". (Clearly the "can" can be interpreted in various senses, e.g. in a moral or alethic sense.)
Intuitive problems with deontic logic
When we try to formalize ethics with standard modal logic, we run into some problems. Suppose that we have a proposition K: you have stolen some money, and another, Q: you have stolen a small amount of money. Now suppose we want to express the thought that "if you have stolen some money, it ought to be a small amount of money". There are two likely candidates,
(1) K → □Q
(2) □(K → Q)
But (1) and K together entail □Q, which says that it ought to be the case that you have stolen a small amount of money. This surely is not right, because you ought not to have stolen anything at all. And (2) does not work either: If the right representation of "if you have stolen some money it ought to be a small amount" is (2), then the right representation of (3) "if you have stolen some money then it ought to be a large amount" is □(K → ¬Q), reading "a large amount" as "not a small amount". Now suppose (as seems reasonable) that you ought not to steal anything, or □¬K. But then we can deduce □(K → ¬Q) from □¬K, since ¬K entails the material conditional K → ¬Q, and so □¬K → □(K → ¬Q) in any normal modal logic; thus sentence (3) follows from our hypothesis (of course the same logic shows sentence (2)). But that cannot be right, and is not right when we use natural language. Telling someone they should not steal certainly does not imply that they should steal large amounts of money if they do engage in theft.
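The step that makes both readings collapse is the vacuous truth of a material conditional with a false antecedent; the small Python check below, which models propositions as plain booleans rather than full deontic semantics, verifies that at any world satisfying ¬K, both K → Q and K → ¬Q hold.

# At every world where "you have stolen some money" is false, both
# "if you have stolen some money it is a small amount" and
# "if you have stolen some money it is a large amount" are vacuously true.
for K in (True, False):
    for Q in (True, False):
        if not K:                                 # a world satisfying ¬K
            k_implies_q = (not K) or Q            # K → Q
            k_implies_not_q = (not K) or (not Q)  # K → ¬Q
            assert k_implies_q and k_implies_not_q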
Doxastic logic
Doxastic logic concerns the logic of belief (of some set of agents). The term doxastic is derived from the ancient Greek doxa which means "belief". Typically, a doxastic logic uses □, often written "B", to mean "It is believed that", or when relativized to a particular agent s, "It is believed by s that".
Metaphysical questions
In the most common interpretation of modal logic, one considers "logically possible worlds". If a statement is true in all possible worlds, then it is a necessary truth. If a statement happens to be true in our world, but is not true in all possible worlds, then it is a contingent truth. A statement that is true in some possible world (not necessarily our own) is called a possible truth.
Under this "possible worlds idiom", to maintain that Bigfoot's existence is possible but not actual, one says, "There is some possible world in which Bigfoot exists; but in the actual world, Bigfoot does not exist". However, it is unclear what this claim commits us to. Are we really alleging the existence of possible worlds, every bit as real as our actual world, just not actual? Saul Kripke believes that 'possible world' is something of a misnomer – that the term 'possible world' is just a useful way of visualizing the concept of possibility. For him, the sentences "you could have rolled a 4 instead of a 6" and "there is a possible world where you rolled a 4, but you rolled a 6 in the actual world" are not significantly different statements, and neither commits us to the existence of a possible world. David Lewis, on the other hand, made himself notorious by biting the bullet, asserting that all merely possible worlds are as real as our own, and that what distinguishes our world as actual is simply that it is indeed our world – this world. That position is a major tenet of "modal realism". Some philosophers decline to endorse any version of modal realism, considering it ontologically extravagant, and prefer to seek various ways to paraphrase away these ontological commitments. Robert Adams holds that 'possible worlds' are better thought of as 'world-stories', or consistent sets of propositions. Thus, it is possible that you rolled a 4 if such a state of affairs can be described coherently.
Computer scientists will generally pick a highly specific interpretation of the modal operators specialized to the particular sort of computation being analysed. In place of "all worlds", you may have "all possible next states of the computer", or "all possible future states of the computer".
Further applications
Modal logics have begun to be used in areas of the humanities such as literature, poetry, art and history. In the philosophy of religion, modal logics are commonly used in arguments for the existence of God.
History
The basic ideas of modal logic date back to antiquity. Aristotle developed a modal syllogistic in Book I of his Prior Analytics (ch. 8–22), which Theophrastus attempted to improve. There are also passages in Aristotle's work, such as the famous sea-battle argument in De Interpretatione §9, that are now seen as anticipations of the connection of modal logic with potentiality and time. In the Hellenistic period, the logicians Diodorus Cronus, Philo the Dialectician and the Stoic Chrysippus each developed a modal system that accounted for the interdefinability of possibility and necessity, accepted axiom T (see below), and combined elements of modal logic and temporal logic in attempts to solve the notorious Master Argument. The earliest formal system of modal logic was developed by Avicenna, who ultimately developed a theory of "temporally modal" syllogistic. Modal logic as a self-aware subject owes much to the writings of the Scholastics, in particular William of Ockham and John Duns Scotus, who reasoned informally in a modal manner, mainly to analyze statements about essence and accident.
In the 19th century, Hugh MacColl made innovative contributions to modal logic, but did not find much acknowledgment. C. I. Lewis founded modern modal logic in a series of scholarly articles beginning in 1912 with "Implication and the Algebra of Logic". Lewis was led to invent modal logic, and specifically strict implication, on the grounds that classical logic grants paradoxes of material implication such as the principle that a falsehood implies any proposition. This work culminated in his 1932 book Symbolic Logic (with C. H. Langford), which introduced the five systems S1 through S5.
After Lewis, modal logic received little attention for several decades. Nicholas Rescher has argued that this was because Bertrand Russell rejected it. However, Jan Dejnozka has argued against this view, stating that a modal system which Dejnozka calls "MDL" is described in Russell's works, although Russell did believe the concept of modality to "come from confusing propositions with propositional functions", as he wrote in The Analysis of Matter.
Ruth C. Barcan (later Ruth Barcan Marcus) developed the first axiomatic systems of quantified modal logic — first and second order extensions of Lewis' S2, S4, and S5. Arthur Norman Prior warned her to prepare well in the debates concerning quantified modal logic with Willard Van Orman Quine, because of bias against modal logic.
The contemporary era in modal semantics began in 1959, when Saul Kripke (then only an 18-year-old Harvard University undergraduate) introduced the now-standard Kripke semantics for modal logics. These are commonly referred to as "possible worlds" semantics. Kripke and A. N. Prior had previously corresponded at some length. Kripke semantics is basically simple, but proofs are eased using semantic-tableaux or analytic tableaux, as explained by E. W. Beth.
A. N. Prior created modern temporal logic, closely related to modal logic, in 1957 by adding modal operators [F] and [P] meaning "eventually" and "previously". Vaughan Pratt introduced dynamic logic in 1976. In 1977, Amir Pnueli proposed using temporal logic to formalise the behaviour of continually operating concurrent programs. Flavors of temporal logic include propositional dynamic logic (PDL), (propositional) linear temporal logic (LTL), computation tree logic (CTL), Hennessy–Milner logic, and T.
The mathematical structure of modal logic, namely Boolean algebras augmented with unary operations (often called modal algebras), began to emerge with J. C. C. McKinsey's 1941 proof that S2 and S4 are decidable, and reached full flower in the work of Alfred Tarski and his student Bjarni Jónsson (Jónsson and Tarski 1951–52). This work revealed that S4 and S5 are models of interior algebra, a proper extension of Boolean algebra originally designed to capture the properties of the interior and closure operators of topology. Texts on modal logic typically do little more than mention its connections with the study of Boolean algebras and topology. For a thorough survey of the history of formal modal logic and of the associated mathematics, see Robert Goldblatt (2006).
See also
Accessibility relation
Conceptual necessity
Counterpart theory
David Kellogg Lewis
De dicto and de re
Description logic
Doxastic logic
Dynamic logic
Enthymeme
Free choice inference
Hybrid logic
Interior algebra
Interpretability logic
Kripke semantics
Metaphysical necessity
Modal verb
Multimodal logic
Multi-valued logic
Neighborhood semantics
Provability logic
Regular modal logic
Relevance logic
Strict conditional
Two-dimensionalism
Notes
References
This article includes material from the Free On-line Dictionary of Computing, used with permission under the GFDL.
Barcan-Marcus, Ruth JSL 11 (1946) and JSL 12 (1947) and "Modalities", OUP, 1993, 1995.
Beth, Evert W., 1955. "Semantic entailment and formal derivability", Mededelingen van de Koninklijke Nederlandse Akademie van Wetenschappen, Afdeling Letterkunde, N.R. Vol 18, no 13, 1955, pp 309–42. Reprinted in Jaakko Hintikka (ed.) The Philosophy of Mathematics, Oxford University Press, 1969 (Semantic Tableaux proof methods).
Beth, Evert W., "Formal Methods: An Introduction to Symbolic Logic and to the Study of Effective Operations in Arithmetic and Logic", D. Reidel, 1962 (Semantic Tableaux proof methods).
Blackburn, P.; van Benthem, J.; and Wolter, Frank; Eds. (2006) Handbook of Modal Logic. North Holland.
Blackburn, Patrick; de Rijke, Maarten; and Venema, Yde (2001) Modal Logic. Cambridge University Press.
Chagrov, Aleksandr; and Zakharyaschev, Michael (1997) Modal Logic. Oxford University Press.
Chellas, B. F. (1980) Modal Logic: An Introduction. Cambridge University Press.
Cresswell, M. J. (2001) "Modal Logic" in Goble, Lou; Ed., The Blackwell Guide to Philosophical Logic. Basil Blackwell: 136–58.
Fitting, Melvin; and Mendelsohn, R. L. (1998) First Order Modal Logic. Kluwer.
James Garson (2006) Modal Logic for Philosophers. Cambridge University Press. A thorough introduction to modal logic, with coverage of various derivation systems and a distinctive approach to the use of diagrams in aiding comprehension.
Girle, Rod (2000) Modal Logics and Philosophy. Acumen (UK). Proof by refutation trees. A good introduction to the varied interpretations of modal logic.
Goldblatt, Robert (1992) "Logics of Time and Computation", 2nd ed., CSLI Lecture Notes No. 7. University of Chicago Press.
—— (1993) Mathematics of Modality, CSLI Lecture Notes No. 43. University of Chicago Press.
—— (2006) "Mathematical Modal Logic: a View of its Evolution", in Gabbay, D. M.; and Woods, John; Eds., Handbook of the History of Logic, Vol. 6. Elsevier BV.
Goré, Rajeev (1999) "Tableau Methods for Modal and Temporal Logics" in D'Agostino, M.; Gabbay, D.; Haehnle, R.; and Posegga, J.; Eds., Handbook of Tableau Methods. Kluwer: 297–396.
Hughes, G. E., and Cresswell, M. J. (1996) A New Introduction to Modal Logic. Routledge.
Jónsson, B. and Tarski, A., 1951–52, "Boolean Algebra with Operators I and II", American Journal of Mathematics 73: 891–939 and 74: 129–62.
Kracht, Marcus (1999) Tools and Techniques in Modal Logic, Studies in Logic and the Foundations of Mathematics No. 142. North Holland.
Lemmon, E. J. (with Scott, D.) (1977) An Introduction to Modal Logic, American Philosophical Quarterly Monograph Series, no. 11 (Krister Segerberg, series ed.). Basil Blackwell.
Lewis, C. I. (with Langford, C. H.) (1932). Symbolic Logic. Dover reprint, 1959.
Prior, A. N. (1957) Time and Modality. Oxford University Press.
Snyder, D. Paul "Modal Logic and its applications", Van Nostrand Reinhold Company, 1971 (proof tree methods).
Zeman, J. J. (1973) Modal Logic. Reidel. Employs Polish notation.
"History of logic", Britannica Online.
Further reading
Ruth Barcan Marcus, Modalities, Oxford University Press, 1993.
D. M. Gabbay, A. Kurucz, F. Wolter and M. Zakharyaschev, Many-Dimensional Modal Logics: Theory and Applications, Elsevier, Studies in Logic and the Foundations of Mathematics, volume 148, 2003. [Covers many varieties of modal logics, e.g. temporal, epistemic, dynamic, description, spatial from a unified perspective with emphasis on computer science aspects, e.g. decidability and complexity.]
Andrea Borghini, A Critical Introduction to the Metaphysics of Modality, New York: Bloomsbury, 2016.
External links
Internet Encyclopedia of Philosophy:
"Modal Logic: A Contemporary View" – by Johan van Benthem.
"Rudolf Carnap's Modal Logic" – by MJ Cresswell.
Stanford Encyclopedia of Philosophy:
"Modal Logic" – by James Garson.
"Modern Origins of Modal Logic" – by Roberta Ballarin.
"Provability Logic" – by Rineke Verbrugge.
Edward N. Zalta, 1995, "Basic Concepts in Modal Logic."
John McCarthy, 1996, "Modal Logic."
Molle a Java prover for experimenting with modal logics
Suber, Peter, 2002, "Bibliography of Modal Logic."
List of Logic Systems List of many modal logics with sources, by John Halleck.
Advances in Modal Logic. Biannual international conference and book series in modal logic.
S4prover A tableaux prover for S4 logic
"Some Remarks on Logic and Topology" – by Richard Moot; exposits a topological semantics for the modal logic S4.
LoTREC The most generic prover for modal logics from IRIT/Toulouse University
Logic
Philosophical logic
Mathematical logic
Semantics | Modal logic | [
"Mathematics"
] | 9,233 | [
"Mathematical logic",
"Modal logic"
] |
333,420 | https://en.wikipedia.org/wiki/Archimedes%27%20principle | Archimedes' principle (also spelled Archimedes's principle) states that the upward buoyant force that is exerted on a body immersed in a fluid, whether fully or partially, is equal to the weight of the fluid that the body displaces. Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse.
Explanation
In On Floating Bodies, Archimedes suggested that (c. 246 BC):
Archimedes' principle allows the buoyancy of any floating object partially or fully immersed in a fluid to be calculated. The downward force on the object is simply its weight. The upward, or buoyant, force on the object is that stated by Archimedes' principle above. Thus, the net force on the object is the difference between the magnitudes of the buoyant force and its weight. If this net force is positive, the object rises; if negative, the object sinks; and if zero, the object is neutrally buoyant—that is, it remains in place without either rising or sinking. In simple words, Archimedes' principle states that, when a body is partially or completely immersed in a fluid, it experiences an apparent loss in weight that is equal to the weight of the fluid displaced by the immersed part of the body.
Formula
Consider a cuboid immersed in a fluid, its top and bottom faces orthogonal to the direction of gravity (assumed constant across the cube's stretch). The fluid will exert a normal force on each face, but only the normal forces on top and bottom will contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height (difference in depth of submersion). Multiplying the pressure difference by the area of a face gives a net force on the cuboid—the buoyancy—equaling in size the weight of the fluid displaced by the cuboid. By summing up sufficiently many arbitrarily small cuboids this reasoning may be extended to irregular shapes, and so, whatever the shape of the submerged body, the buoyant force is equal to the weight of the displaced fluid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). The weight of the object in the fluid is reduced, because of the force acting on it, which is called upthrust. In simple terms, the principle states that the buoyant force (Fb) on an object is equal to the weight of the fluid displaced by the object, or the density (ρ) of the fluid multiplied by the submerged volume (V) times the gravity (g).
We can express this relation in the equation:
Fb = ρ V g
where Fb denotes the buoyant force applied onto the submerged object, ρ denotes the density of the fluid, V represents the volume of the displaced fluid and g is the acceleration due to gravity.
Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into the water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea-floor. It is generally easier to lift an object through the water than it is to pull it out of the water.
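The arithmetic of the rock example can be reproduced with the buoyancy formula above; in the Python sketch below the density of water and the value of g are the usual rounded figures, and the displaced volume is chosen so that the displaced water weighs exactly 3 newtons.

# Reproducing the 10 N rock example with Fb = rho * V * g.
rho_fluid = 1000.0                      # density of water, kg/m^3
g = 9.81                                # gravitational acceleration, m/s^2
V_displaced = 3.0 / (rho_fluid * g)     # volume whose weight of water is 3 N

buoyant_force = rho_fluid * V_displaced * g
weight_in_vacuum = 10.0                 # newtons
apparent_weight = weight_in_vacuum - buoyant_force
print(round(buoyant_force, 2), round(apparent_weight, 2))   # 3.0 and 7.0 newtons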
For a fully submerged object, Archimedes' principle can be reformulated as follows:
apparent immersed weight = weight of object − weight of displaced fluid
then inserted into the quotient of weights, which has been expanded by the mutual volume
density of object / density of fluid = weight of object / weight of displaced fluid
yields the formula below. The density of the immersed object relative to the density of the fluid can easily be calculated without measuring any volumes:
density of object / density of fluid = weight of object / (weight of object − apparent immersed weight)
(This formula is used for example in describing the measuring principle of a dasymeter and of hydrostatic weighing.)
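As an illustration of hydrostatic weighing with this formula, the two weight readings in the Python sketch below are invented; any consistent pair of readings would do, and the result is the density of the object relative to the fluid it is weighed in.

# Relative density from two weight readings (values invented for illustration).
weight = 7.84             # weight of the object in air or vacuum, in newtons
apparent_weight = 4.90    # scale reading with the object fully immersed, in newtons

relative_density = weight / (weight - apparent_weight)
print(round(relative_density, 2))   # about 2.67 times as dense as the fluid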
Example: If you drop wood into water, buoyancy will keep it afloat.
Example: A helium balloon in a moving car. When increasing speed or driving in a curve, the air moves in the opposite direction to the car's acceleration. However, due to buoyancy, the balloon is pushed "out of the way" by the air and will drift in the same direction as the car's acceleration.
When an object is immersed in a liquid, the liquid exerts an upward force, which is known as the buoyant force, that is equal to the weight of the displaced liquid. The net force acting on the object, then, is equal to the difference between the weight of the object ('down' force) and the weight of displaced liquid ('up' force). Equilibrium, or neutral buoyancy, is achieved when these two weights (and thus forces) are equal.
Forces and equilibrium
The equation to calculate the pressure inside a fluid in equilibrium is:
f + div σ = 0
where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:
σij = −p δij
Here δij is the Kronecker delta. Using this the above equation becomes:
f = ∇p
Assuming the outer force field is conservative, that is it can be written as the negative gradient of some scalar valued function:
f = −∇Φ
Then:
∇(p + Φ) = 0, so p + Φ is constant throughout the fluid
Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz where g is the gravitational acceleration, ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is
p = ρfgz
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:
B = ∮ σ · dA
The surface integral can be transformed into a volume integral with the help of the Gauss theorem:
B = ∫ div σ dV = −∫ f dV
where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid doesn't exert force on the part of the body which is outside of it.
The magnitude of buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to gravitational force, that is of magnitude:
B = ρf Vdisp g
where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to
B = ρf Vdisp g
The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes principle is applicable, and is thus the sum of the buoyancy force and the object's weight
Fnet = 0 = mg − ρf Vdisp g
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore:
mg = ρf Vdisp g
and therefore
m = ρf Vdisp
showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location.
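A short numerical illustration of this equilibrium condition follows; the wooden block's density and volume are invented, and note that g cancels out, so the submerged fraction depends only on the ratio of densities.

# Floating equilibrium m = rho_fluid * V_displaced, with invented values.
rho_fluid = 1000.0     # water, kg/m^3
rho_object = 600.0     # a wooden block, kg/m^3
V_object = 0.02        # m^3

mass = rho_object * V_object
V_displaced = mass / rho_fluid               # g does not appear anywhere
submerged_fraction = V_displaced / V_object  # equals rho_object / rho_fluid
print(round(V_displaced, 6), round(submerged_fraction, 3))   # 0.012 m^3 and 0.6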
(Note: If the fluid in question is seawater, it will not have the same density (ρ) at every location. For this reason, a ship may display a Plimsoll line.)
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:
T = ρf V g − mg
When a sinking object settles on the solid floor, it experiences a normal force of:
N = mg − ρf V g
Another possible formula for calculating buoyancy of an object is by finding the apparent weight of that particular object in the air (calculated in Newtons), and apparent weight of that object in the water (in Newtons). To find the force of buoyancy acting on the object when in air, using this particular information, this formula applies:
Buoyancy force = weight of object in empty space − weight of object immersed in fluid
The final result would be measured in Newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
Simplified model
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
Refinements
Archimedes' principle does not consider the surface tension (capillarity) acting on the body. Moreover, Archimedes' principle has been found to break down in complex fluids.
There is an exception to Archimedes' principle known as the bottom (or side) case. This occurs when a side of the object is touching the bottom (or side) of the vessel it is submerged in, and no liquid seeps in along that side. In this case, the net force has been found to be different from Archimedes' principle, as, since no fluid seeps in on that side, the symmetry of pressure is broken.
Principle of flotation
Archimedes' principle shows the buoyant force and displacement of fluid. However, the concept of Archimedes' principle can be applied when considering why objects float. Proposition 5 of Archimedes' treatise On Floating Bodies states that
In other words, for an object floating on a liquid surface (like a boat) or floating submerged in a fluid (like a submarine in water or dirigible in air) the weight of the displaced liquid equals the weight of the object. Thus, only in the special case of floating does the buoyant force acting on an object equal the object's weight. Consider a 1-ton block of solid iron. As iron is nearly eight times as dense as water, it displaces only 1/8 ton of water when submerged, which is not enough to keep it afloat. Suppose the same iron block is reshaped into a bowl. It still weighs 1 ton, but when it is put in water, it displaces a greater volume of water than when it was a block. The deeper the iron bowl is immersed, the more water it displaces, and the greater the buoyant force acting on it. When the buoyant force equals 1 ton, it will sink no farther.
When any boat displaces a weight of water equal to its own weight, it floats. This is often called the "principle of flotation": A floating object displaces a weight of fluid equal to its own weight. Every ship, submarine, and dirigible must be designed to displace a weight of fluid at least equal to its own weight. A 10,000-ton ship's hull must be built wide enough, long enough and deep enough to displace 10,000 tons of water and still have some hull above the water to prevent it from sinking. It needs extra hull to fight waves that would otherwise fill it and, by increasing its mass, cause it to submerge. The same is true for vessels in air: a dirigible that weighs 100 tons needs to displace 100 tons of air. If it displaces more, it rises; if it displaces less, it falls. If the dirigible displaces exactly its weight, it hovers at a constant altitude.
While they are related to it, the principle of flotation and the concept that a submerged object displaces a volume of fluid equal to its own volume are not Archimedes' principle. Archimedes' principle, as stated above, equates the buoyant force to the weight of the fluid displaced.
One common point of confusion regarding Archimedes' principle is the meaning of displaced volume. Common demonstrations involve measuring the rise in water level when an object floats on the surface in order to calculate the displaced water. This measurement approach fails with a buoyant submerged object because the rise in the water level is directly related to the volume of the object and not the mass (except if the effective density of the object equals exactly the fluid density).
Eureka
Archimedes reportedly exclaimed "Eureka" after he realized how to detect whether a crown is made of impure gold. While he did not use Archimedes' principle in the widespread tale and used displaced water only for measuring the volume of the crown, there is an alternative approach using the principle: Balance the crown and pure gold on a scale in the air and then put the scale into water. According to Archimedes' principle, if the density of the crown differs from the density of pure gold, the scale will get out of balance under water.
See also
Phragmen's voting rules – a ballot load balancing method analogous to the idea of Archimedes' principle.
References
External links
Fluid dynamics
Principle
Force
Buoyancy
Scientific laws | Archimedes' principle | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,679 | [
"Force",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Mass",
"Mathematical objects",
"Classical mechanics",
"Equations",
"Scientific laws",
"Piping",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
333,444 | https://en.wikipedia.org/wiki/TOC%20protocol | The TOC protocol, or Talk to OSCAR protocol, was a protocol used by some third-party AOL Instant Messenger clients and several clients that AOL produced itself. Sometime near August 19, 2005, AOL discontinued support for the protocol and no longer uses it in any of the instant messaging clients it actively maintains, such as its Windows and Mac clients for the AOL Instant Messenger and ICQ systems. However, it once did produce several of its own TOC clients, including TiK and TAC which are written in Tcl/Tk, TNT which is written in Emacs Lisp, all of which are open source, and a Java client originally called TIC which later became the Quick Buddy web applet. AOL also provided the TOC protocol specification openly to developers in the hopes that they will use it instead of the proprietary OSCAR protocol they use themselves. In July 2012, AOL turned off the TOC2 servers and it is no longer possible to connect to AIM using this protocol.
TOC was an ASCII-based protocol, while OSCAR is a binary protocol. In addition, TOC contained fewer features than its OSCAR counterpart. OSCAR provides such functionality as buddy icons, file transfer, and advertising.
How it works
TOC acted as a wrapper for the OSCAR protocol. In the grand scheme of things, the TOC server was nothing but an OSCAR client that happened to listen on a socket, translating messages between the two protocols. Upon login, the TOC client specified an OSCAR login server (presumably either or ) that the TOC server used on behalf of the client.
TOC used FLAP to encapsulate its messages just as OSCAR does; however, FLAP has been hacked in such a way that it can be implemented on the same port as an HTTP server. By default, the TOC server operated in HTTP mode, indistinguishable from a typical web server. If a connecting client, in place of an HTTP request, wrote the string "FLAPON" followed by two CRLFs, TOC would switch gears and start reading FLAP messages. Upon getting a user's profile, the client was expected to re-connect to TOC and use it as an HTTP server, which would host the user's profile in HTML.
Once connected, two basic message formats for communications inside of FLAP existed. Client-to-server messages were sent in a format resembling a Unix-style command line: commands with whitespace-separated arguments, quoting and backslash escape sequences. Server-to-client messages were much simpler: they were sent as colon-separated ASCII strings, in a manner similar to many Unix config files. Thus, it was quite easy to write a client, as the incoming messages were very easy to parse, and outgoing commands were easy to generate.
This is in contrast to OSCAR, which due to the binary representation of data can be more difficult to understand.
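The two message shapes can be illustrated with a short Python sketch; the command name and field layout below follow the general pattern described here but are simplified, so the linked protocol specifications should be consulted for the exact wire format.

# Illustrative only: building a client-to-server command and parsing a
# server-to-client message in the styles described above.
def build_client_command(command, *args):
    # Client-to-server: a command word with whitespace-separated, quoted arguments,
    # much like a Unix command line (backslash-escaping the quotes).
    return " ".join([command] + ['"%s"' % a.replace('"', '\\"') for a in args])

def parse_server_message(line, fields=4):
    # Server-to-client: colon-separated ASCII fields.  Limiting the number of
    # splits keeps any colons inside the final (message) field intact.
    return line.split(":", fields - 1)

print(build_client_command("toc_send_im", "buddyname", "hello there"))
# toc_send_im "buddyname" "hello there"
print(parse_server_message("IM_IN:buddyname:F:hello back"))
# ['IM_IN', 'buddyname', 'F', 'hello back']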
TOC2
The TOC2 protocol is an updated version of the TOC protocol, or "Talk to OSCAR" protocol. Its existence was never documented by AOL and it is only used in a few AOL Instant Messenger clients. Some clients are beginning to offer plugins for TOC2 in light of AOL's recent shutdown of their TOC servers. Like its predecessor, TOC2 is an ASCII protocol and lacks some features of OSCAR, but unlike TOC, TOC2 is known to support buddy icons and receiving file transfers (not sending). TOC2 operates in essentially the same way as TOC, as a wrapper for OSCAR. Porting code from TOC to TOC2 is remarkably easy as well.
Because of the similarities between TOC1.0 and TOC2.0, they are better defined by their differences, of which there are only a few:
In TOC1.0, connecting with an empty buddy list would make it so that others are not able to see you online. This has been corrected in TOC2.0.
In TOC1.0, there is a toc_add_deny command, but no toc_remove_deny. TOC2.0 corrects this as well.
The sign on process is basically the same as TOC1.0, but with a few new parameters: version, a number, and a code created based on the username and password. The purpose of the number is unknown. The default is 160 and it seems to have no effect if changed.
Permitting and denying have been revamped and are much easier and full-featured in TOC2.0. The commands toc2_add_permit, toc2_remove_permit, toc2_add_deny, and toc2_remove_deny are all present and accessible at any time.
Buddy list commands have also been improved. Group management is easier with the toc2_new_group and toc2_del_group commands. Also, it is possible to add or remove more than one buddy at a time, and choose the groups they are in.
A few syntax changes have also been made, and parameters have been added to some commands. The uses of these parameters are still undetermined.
Aside from this, the only other change from TOC is that a '2' was appended to most commands, both CLIENT->SERVER and SERVER->CLIENT, e.g. IM_IN2, UPDATE_BUDDY2, etc.
TOC2 also limits the number of screen names that can log in from a single IP address to 10.
See also
Comparison of instant messaging protocols
References
External links
Detailed Specification
TOC1 Protocol specification
TOC2 Protocol specification
History of AIM, OSCAR and TOC
Implementations
TOC
Py-TOC (python)
Net::AIM (perl)
PHPTocLib (php)
Tik (Tcl/Tk)
TNT (Emacs Lisp)
TAC (Tcl, non-GUI)
AOL Quick Buddy Applet (TIC) (Java Applet)
naim (ncurses, non-GUI)
miniaim (C++)
SimpleAIM (Java)
TOC2
Fluent.Toc (C#)
Raven::Aim (perl)
Plugin for Miranda Instant Messenger
PHPTocLib (PHP)
BlueTOC (PHP)
TerraIM (C++)
TNT (Emacs Lisp)
Instant messaging protocols
AOL | TOC protocol | [
"Technology"
] | 1,329 | [
"Instant messaging",
"Instant messaging protocols"
] |
333,472 | https://en.wikipedia.org/wiki/High%20Energy%20Transient%20Explorer%201 | High Energy Transient Explorer 1 (HETE-1) was a NASA astronomical satellite with international participation (mainly Japan and France).
History
The concept of a satellite capable of multi-wavelength observations of gamma-ray bursts (GRB) was discussed at the Santa Cruz, California meeting on GRBs in 1981. In 1986, an international team led by the Massachusetts Institute of Technology (MIT) proposed the first realistic implementation of the HETE concept. This concept, which was adopted, emphasized accurate locations and multi-wavelength coverage as the primary scientific goals for a sharply focused small satellite mission which would ultimately solve the gamma-ray burst mystery.
In 1989, NASA approved funding for a low-cost "University Class" explorer satellite to search for GRBs. In 1992, the HETE-1 program was funded, and the design and construction of HETE-1 began. The original spacecraft contractor for HETE-1 was AeroAstro, Inc., of Herndon, Virginia. AeroAstro was responsible for the spacecraft bus, including power, communications, attitude control, and computers.
The instrument complement for HETE-1 consisted of:
Four wide-field gamma-ray detectors, supplied by the CESR of Toulouse, France.
A wide-field coded-aperture X-ray imager, supplied by a collaboration of Los Alamos National Laboratory (LANL) and the Institute of Physical and Chemical Research (RIKEN) of Tokyo, Japan.
Four wide-field near-UV CCD cameras, supplied by the Center for Space Research at the Massachusetts Institute of Technology.
Due to the tragic fate of HETE-1 and the continuing timeliness of GRB science, NASA agreed to a reflight of the HETE-1 satellite, using flight spare hardware from the first satellite. In July 1997, funding for a second HETE satellite was granted, with a target launch date early 2000.
Mission
The prime objective of HETE-1 was to carry out the first multi-wavelength study of GRBs with ultraviolet (UV), X-ray, and gamma-ray instruments mounted on a single, compact spacecraft. A unique feature of the HETE-1 mission was its capability to localize GRBs with ~10 arcseconds accuracy in near real time aboard the spacecraft, and to transmit these positions directly to a network of receivers at existing ground-based observatories enabling rapid, sensitive follow-up studies in the radio, infrared (IR), and visible light bands.
Spacecraft
The satellite bus for the HETE-1 satellite was designed and built by AeroAstro, Inc. (USA) of Herndon, Virginia. The HETE-1 spacecraft was Sun-pointing with four solar panels connected to the bottom of the spacecraft bus. Spacecraft attitude was to be controlled by magnetic torque coils and a momentum wheel.
Experiments
Omnidirectional Gamma-Ray Spectrometer
The Omnidirectional Gamma-Ray Spectrometer was designed to operate from 6 keV to greater than 1 MeV. The instrument consisted of four wide-field gamma-ray detectors with a total effective area of . The HETE satellite remained within the launch vehicle due to battery failure. The experiment was unable to operate.
Ultraviolet Transient Camera Array
The Ultraviolet Transient Camera Array was designed to provide accurate directional information on transient events, and to assist with spacecraft attitude determination. The instrument consisted of four ultraviolet Charge-coupled device (CCD) cameras operating in the 5 to 7 eV range.
Wide Field X-ray Monitor
The Widefield X-ray Monitor was designed to perform X-ray studies of gamma-ray bursts. The instrument consisted of coded aperture cameras, sensitive in the 2-25 keV energy range, and with location accuracy to ~ 10 arcminutes or better.
Launch
The HETE-1 satellite was launched with the Argentine satellite SAC-B. HETE-1 was lost during the launch on 4 November 1996, at 17:08:56 UTC, from Wallops Flight Facility (WFF), launch area-3. The Pegasus XL launch vehicle achieved a good orbit, but explosive bolts releasing HETE-1 from another satellite, SAC-B, and from its Dual Payload Attach Fitting (DPAF) envelope failed to fire, dooming both satellites. A battery on the third stage of the launch vehicle, responsible for firing these bolts, cracked during the ascent. Due to its inability to deploy the solar panels, HETE lost power several days after launch.
Atmospheric entry
HETE-1 re-entered on 7 April 2002.
See also
Explorer program
References
Gamma-ray telescopes
X-ray telescopes
Space telescopes
Satellites orbiting Earth
Spacecraft launched in 1996
Spacecraft launched in 2000
Spacecraft launched by Pegasus rockets
Explorers Program | High Energy Transient Explorer 1 | [
"Astronomy"
] | 954 | [
"Space telescopes"
] |
333,528 | https://en.wikipedia.org/wiki/Japanese%20language%20and%20computers | In relation to the Japanese language and computers many adaptation issues arise, some unique to Japanese and others common to languages which have a very large number of characters. The number of characters needed in order to write in English is quite small, and thus it is possible to use only one byte (28=256 possible values) to encode each English character. However, the number of characters in Japanese is many more than 256 and thus cannot be encoded using a single byte - Japanese is thus encoded using two or more bytes, in a so-called "double byte" or "multi-byte" encoding. Problems that arise relate to transliteration and romanization, character encoding, and input of Japanese text.
Character encodings
There are several standard methods to encode Japanese characters for use on a computer, including JIS, Shift-JIS, EUC, and Unicode. While mapping the set of kana is a simple matter, kanji has proven more difficult. Despite efforts, none of the encoding schemes have become the de facto standard, and multiple encoding standards were in use by the 2000s. As of 2017, the share of UTF-8 traffic on the Internet has expanded to over 90% worldwide, and only 1.2% was for using Shift-JIS and EUC. Yet, a few popular websites including 2channel and kakaku.com are still using Shift-JIS.
Until the 2000s, most Japanese emails were in ISO-2022-JP ("JIS encoding"), web pages were in Shift-JIS, and mobile phones in Japan usually used some form of Extended Unix Code. If a program fails to determine the encoding scheme employed, it can cause mojibake and thus unreadable text on computers.
The first encoding to become widely used was JIS X 0201, which is a single-byte encoding that only covers standard 7-bit ASCII characters with half-width katakana extensions. This was widely used in systems that had neither the processing power nor the storage to handle kanji (including old embedded equipment such as cash registers), because kana-kanji conversion required a complicated process, and output in kanji required much memory and high resolution. This means that only katakana, not kanji, was supported using this technique. Some embedded displays still have this limitation.
The development of kanji encodings was the beginning of the split. Shift JIS supports kanji and was developed to be completely backward compatible with JIS X 0201, and thus is used in much embedded electronic equipment. However, Shift JIS has the unfortunate property that it often breaks any parser (software that reads the coded text) that is not specifically designed to handle it.
For example, some Shift-JIS characters include a backslash (0x5C "\") in the second byte, which is used as an escape character in many programming languages.
A parser lacking support for Shift JIS will recognize 0x5C 0x82 as an invalid escape sequence, and remove it. Therefore, such a phrase causes mojibake.
This can happen, for example, in the C programming language when Shift-JIS appears in text strings. It does not happen in HTML, since the ASCII range 0x00–0x3F (which includes ", %, & and some other commonly used escape characters and string separators) does not appear as a second byte in Shift-JIS, and backslash is not an escape character there. But it can happen in JavaScript, which can be embedded in HTML pages.
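The problem can be demonstrated directly in Python; the katakana character ソ is used below because its Shift-JIS encoding ends in the byte 0x5C, the same byte as the ASCII backslash, so a byte-oriented parser that strips "unknown escapes" destroys the character.

# Demonstration of the 0x5C (backslash) problem described above.
text = "ソ"                          # U+30BD KATAKANA LETTER SO
encoded = text.encode("shift_jis")
print(encoded.hex())                 # 835c: the second byte is 0x5C, i.e. '\'

# A naive parser that removes backslashes (as part of escape handling)
# leaves behind a lone lead byte, which is no longer valid Shift-JIS.
mangled = encoded.replace(b"\x5c", b"")
try:
    mangled.decode("shift_jis")
except UnicodeDecodeError as error:
    print("mojibake:", error)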
EUC, on the other hand, is handled much better by parsers that have been written for 7-bit ASCII (and thus EUC encodings are used on UNIX, where much of the file-handling code was historically only written for English encodings). But EUC is not backwards compatible with JIS X 0201, the first main Japanese encoding. Further complications arise because the original Internet e-mail standards only support 7-bit transfer protocols. Thus ISO-2022-JP (often simply called JIS encoding) was developed for sending and receiving e-mails.
In character set standards such as JIS, not all required characters are included, so gaiji ("external characters") are sometimes used to supplement the character set. Gaiji may come in the form of external font packs, where normal characters have been replaced with new characters, or the new characters have been added to unused character positions. However, gaiji are not practical in Internet environments since the font set must be transferred with text to use the gaiji. As a result, such characters are written with similar or simpler characters in place, or the text may need to be encoded using a larger character set (such as Unicode) that supports the required character.
Unicode was intended to solve all encoding problems over all languages. The UTF-8 encoding used to encode Unicode in web pages does not have the disadvantages that Shift-JIS has. Unicode is supported by international software, and it eliminates the need for gaiji. There are still controversies, however. For Japanese, the kanji characters have been unified with Chinese; that is, a character considered to be the same in both Japanese and Chinese is given a single number, even if the appearance is actually somewhat different, with the precise appearance left to the use of a locale-appropriate font. This process, called Han unification, has caused controversy. The previous encodings in Japan, the Taiwan Area, Mainland China and Korea each handled only one language, whereas Unicode is meant to handle them all. The handling of kanji/Chinese characters has, however, been designed by a committee composed of representatives from all four countries/areas.
Text input
Written Japanese uses several different scripts: kanji (Chinese characters), two sets of kana (phonetic syllabaries) and roman letters. While kana and roman letters can be typed directly into a computer, entering kanji is a more complicated process as there are far more kanji than there are keys on most keyboards. To input kanji on modern computers, the reading of kanji is usually entered first, then an input method editor (IME), also sometimes known as a front-end processor, shows a list of candidate kanji that are a phonetic match, and allows the user to choose the correct kanji. More-advanced IMEs work not by word but by phrase, thus increasing the likelihood of getting the desired characters as the first option presented. Kanji reading input can be either via romanization (rōmaji nyūryoku) or direct kana input (kana nyūryoku). Romaji input is more common on PCs and other full-size keyboards (although direct input is also widely supported), whereas direct kana input is typically used on mobile phones and similar devices – each of the 10 digits (1–9,0) corresponds to one of the 10 columns in the gojūon table of kana, and multiple presses select the row.
There are two main systems for the romanization of Japanese, known as Kunrei-shiki and Hepburn; in practice, "keyboard romaji" (also known as wāpuro rōmaji or "word processor romaji") generally allows a loose combination of both. IME implementations may even handle keys for letters unused in any romanization scheme, such as L, converting them to the most appropriate equivalent. With kana input, each key on the keyboard directly corresponds to one kana. The JIS keyboard system is the national standard, but there are alternatives, like the thumb-shift keyboard, commonly used among professional typists.
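As a toy illustration of the first stage of romaji input, the Python sketch below converts a few romaji syllables to hiragana with a hand-written table; a real IME covers the full syllabary, small kana, geminate consonants and the subsequent kana-to-kanji conversion, none of which are modelled here.

# Toy romaji-to-hiragana conversion with a deliberately tiny table.
ROMAJI_TO_HIRAGANA = {
    "ka": "か", "na": "な", "ni": "に", "ho": "ほ", "go": "ご", "n": "ん",
}

def romaji_to_kana(text):
    result, i = [], 0
    while i < len(text):
        # Greedily try the longest syllable first (two letters, then one).
        for length in (2, 1):
            chunk = text[i:i + length]
            if chunk in ROMAJI_TO_HIRAGANA:
                result.append(ROMAJI_TO_HIRAGANA[chunk])
                i += length
                break
        else:
            result.append(text[i])   # pass unrecognised input through unchanged
            i += 1
    return "".join(result)

print(romaji_to_kana("nihongo"))     # にほんご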
Direction of text
Japanese can be written in two directions. Yokogaki style writes left-to-right, top-to-bottom, as with English. Tategaki style writes first top-to-bottom, and then moves right-to-left.
To compete with Ichitaro, Microsoft provided several updates for early Japanese versions of Microsoft Word including support for downward text, such as Word 5.0 Power Up Kit and Word 98.
QuarkXPress was the most popular DTP software in Japan in the 1990s, even though it had a long development cycle. However, due to its lack of support for downward text, it was surpassed by Adobe InDesign, which gained strong support for downward text through several updates.
At present, handling of downward text is incomplete. For example, HTML has no support for tategaki and Japanese users must use HTML tables to simulate it. However, CSS level 3 includes a property "writing-mode" which can render tategaki when given the value "vertical-rl" (i.e. top to bottom, right to left). Word processors and DTP software have more complete support for it.
Historical development
The lack of proper Japanese character support on computers limited the influence of large American firms in the Japanese market during the 1980s. Japan, which had been the world's second largest market for computers after the United States at the time, was dominated by domestic hardware and software makers such as NEC and Fujitsu. Microsoft Windows 3.1 offered improved Japanese language support which played a part in reducing the grip of domestic PC makers throughout the 1990s.
See also
Japanese writing system
Japanese language
Chinese input methods for computers
CJK characters
Korean language and computers
Vietnamese language and computers
Ghost characters - Erroneous kanji
References
External links
Japanese Owned computer companies in United States
A complete introduction to Japanese character encodings from 2003
Chinese, Japanese, and Korean character set standards and encoding systems from 1996
Japanese text encoding
Online Japanese Dictionary of Linguistics
Online Japanese Dictionary
Japanese writing system
Encodings of Japanese
Natural language and computing | Japanese language and computers | [
"Technology"
] | 1,966 | [
"Natural language and computing"
] |
333,625 | https://en.wikipedia.org/wiki/Bus%20rapid%20transit | Bus rapid transit (BRT), also referred to as a busway or transitway, is a trolleybus, electric bus and public transport bus service system designed to have much more capacity, reliability, and other quality features than a conventional bus system. Typically, a BRT system includes roadways that are dedicated to buses, and gives priority to buses at intersections where buses may interact with other traffic; alongside design features to reduce delays caused by passengers boarding or leaving buses, or paying fares. BRT aims to combine the capacity and speed of a light rail transit (LRT) or mass rapid transit (MRT) system with the flexibility, lower cost and simplicity of a bus system.
The world's first BRT system was the Runcorn Busway in Runcorn New Town, England, which entered service in 1971. , a total of 166 cities in six continents have implemented BRT systems, accounting for of BRT lanes and about 32.2 million passengers every day.
The majority of these are in Latin America, where about 19.6 million passengers ride daily, and which has the most cities with BRT systems, with 54, led by Brazil with 21 cities. The Latin American countries with the most daily ridership are Brazil (10.7 million), Colombia (3.0 million), and Mexico (2.5 million).
In the other regions, China (4.3 million) and Iran (2.1 million) stand out. Currently, TransJakarta is the largest BRT network in the world, with about of corridors connecting the Indonesian capital city.
Terminology
Bus rapid transit is a mode of mass rapid transit (MRT) and describes a high-capacity urban public-transit system with its own right of way, vehicles at short headways, platform-level boarding, and preticketing.
The expression "BRT" is mainly used in the Americas and China; in India, it is called "BRTS" (BRT System); in Europe it is often called a "busway" or a "BHLS" (stands for Bus with a High Level of Service). The term transitway was originated in 1981 with the opening of the OC Transpo transitway in Ottawa, Ontario, Canada.
Critics have charged that the term "bus rapid transit" has sometimes been misapplied to systems that lack most or all the essential features which differentiate it from conventional bus services. The term "bus rapid transit creep" has been used to describe severely degraded levels of bus service which fall far short of the BRT Standard promoted by the Institute for Transportation and Development Policy (ITDP) and other organizations.
Reasons for use
Compared to other common transit modes such as light rail transit (LRT), bus rapid transit (BRT) service is attractive to transit authorities because it does not cost as much to establish and operate: no track needs to be laid, bus drivers typically require less training and less pay than rail operators, and bus maintenance is less complex than rail maintenance.
Moreover, buses are more flexible than rail vehicles, because a bus route can be altered, either temporarily or permanently, to meet changing demand or contend with adverse road conditions with comparatively little investment of resources.
History
The first use of a protected busway was the East Side Trolley Tunnel in Providence, Rhode Island. It was converted from trolley to bus use in 1948. However, the first BRT system in the world was the Runcorn Busway in Runcorn, England. First conceived in the Runcorn New Town Masterplan in 1966, it opened for services in October 1971 and all were operational by 1980. The central station is at Runcorn Shopping City where buses arrive on dedicated raised busways to two enclosed stations. Arthur Ling, Runcorn Development Corporation's Master Planner, said that he had invented the concept while sketching on the back of an envelope. The town was designed around the transport system, with most residents no more than five minutes walking distance, or , from the Busway.
The second BRT system in the world was the Rede Integrada de Transporte (RIT, integrated transportation network), implemented in Curitiba, Brazil, in 1974. It was inspired by the earlier system of the National Urban Transport Company of Peru (in Spanish: ENATRU), which provided fast access only to downtown Lima and is not itself considered BRT. Many of the elements that have become associated with BRT were innovations first suggested by Carlos Ceneviva, within the team of Curitiba Mayor Jaime Lerner. Initially just dedicated bus lanes in the center of major arterial roads, the Curitiba system added a feeder bus network and inter-zone connections in 1980, and introduced off-board fare collection, enclosed stations, and platform-level boarding in 1992. Other systems made further innovations, including platooning (three buses entering and leaving bus stops and traffic signals at once) in Porto Alegre, and passing lanes and express service in São Paulo.
In the United States, BRT began in 1977 with Pittsburgh's South Busway, operating on exclusive lanes. Its success led to the Martin Luther King Jr. East Busway in 1983, a fuller BRT deployment that included a dedicated busway, traffic signal preemption, and peak service headways as low as two minutes. After the opening of the West Busway in 2000, Pittsburgh's busway system today totals over 18.5 miles.
The OC Transpo BRT system in Ottawa, Canada, was introduced in 1983. The first element of its BRT system was dedicated bus lanes through the city centre, with platformed stops; exclusive separate busways (termed the 'Transitway') were introduced the same year. By 1996, all of the originally envisioned 31 km Transitway system was in operation; further expansions were opened in 2009, 2011, and 2014. As of 2019, the central part of the Transitway has been converted to light rail transit, because the downtown section was being operated beyond its designed capacity.
In 1995, Quito, Ecuador, opened the first line of its MetrobusQ BRT network, operated with articulated trolleybuses.
The TransMilenio in Bogotá, Colombia, which opened in 2000, was the first BRT system to combine the best elements of Curitiba's BRT with other BRT advances, and it achieved the highest capacity and highest speeds of any BRT system in the world.
In January 2004, the first BRT in Southeast Asia, TransJakarta, opened in Jakarta, Indonesia. It is now the longest BRT system in the world.
Africa's first BRT system opened in Lagos, Nigeria, in March 2008, though many consider it only a light version of BRT. Rea Vaya in Johannesburg, South Africa, which opened in August 2009 carrying 16,000 daily passengers, was the first full BRT system in Africa. Rea Vaya and MIO (the BRT in Cali, Colombia, opened in 2009) were the first two systems to combine full BRT with services that also operated in mixed traffic before joining the BRT trunk infrastructure.
In 2017, Marrakesh, Morocco, opened its first BRT system (BHNS de Marrakesh), a trolleybus corridor of 8 km (5.0 mi), of which 3 km (1.9 mi) is equipped with overhead wiring for operation as a trolleybus.
Main features
BRT systems normally include most of the following features:
Dedicated lanes and alignment
Bus-only lanes make for faster travel and ensure that buses are not delayed by mixed traffic congestion. A median-aligned, bus-only lane keeps buses away from busy curb-side conflicts, where cars and trucks are parking, standing and turning. Separate rights of way may also be used, such as the completely elevated Xiamen BRT. Transit malls or 'bus streets' may also be created in city centers.
Off-board fare collection
Fare prepayment at the station, instead of on board the bus, eliminates the delay caused by passengers paying on board. Fare machines at stations also allow riders to purchase multi-ride stored-value cards and have multiple payment options. Prepayment also allows riders to board at all doors, further speeding up stops.
Bus priority, turning and standing restrictions
Prohibiting turns for traffic across the bus lane significantly reduces delays to the buses. Bus priority will often be provided at signalized intersections to reduce delays by extending the green phase or reducing the red phase in the required direction compared to the normal sequence. Prohibiting turns may be the most important measure for moving buses through intersections.
Platform-level boarding
The station platforms for BRT systems should be level with the bus floor for quick and easy boarding, making it fully accessible for wheelchairs, disabled passengers and baby strollers, with minimal delays.
High-level platforms for high-floor buses make it difficult for these buses to serve stops without dedicated platforms, or for conventional buses to stop at high-level platforms, so these BRT stops are distinct from street-level bus stops. As with rail vehicles, there is a risk of a dangerous gap between bus and platform, and the risk is even greater due to the nature of bus operations. Kassel curbs or other methods may be used to ease quick and safe alignment of the BRT vehicle with a platform.
A popular compromise is low-floor buses with a low step at the door, which can allow easy boarding at low-platform stops compatible with other buses. This intermediate design may be used with some low- or medium-capacity BRT systems.
The MIO system in Santiago de Cali, Colombia, pioneered the use of dual buses in 2009, with doors on the left side of the bus at the height of high-level platforms and doors on the right side at curb height. These buses can use the main line, with its exclusive lanes and high-level platforms located in the center of the street, boarding and alighting passengers on the left side. They can also exit the main line, run in normal lanes shared with other vehicles, and stop at regular curbside stops on the right side of the street.
Additional features
These and other groups of criteria form the BRT Standard 2016, which is updated by the BRT Standard's Technical Committee.
High capacity vehicles
High-capacity vehicles such as articulated or even bi-articulated buses may be used, typically with multiple doors for fast entry and exit. Double-decker buses or guided buses may also be used. Advanced powertrain control may be used for a smoother ride.
Quality stations
BRT stations typically provide loading areas for the simultaneous boarding and alighting of several buses through multiple doors, coordinated via displays and loudspeakers.
Examples of high-quality stations include those used on TransMilenio in Bogotá since December 2000, the MIO in Cali since November 2008, Metrolínea in Bucaramanga since December 2009, and Megabús in Pereira since May 2009. This design is also used in Johannesburg's Rea Vaya.
The term "station" is more flexibly applied in North America and ranges from enclosed waiting areas (Ottawa and Cleveland) to large open-sided shelters (Los Angeles and San Bernardino).
Prominent brand or identity
A unique and distinctive identity (such as Viva, Max, TransMilenio, Metropolitano, Metronit, or Select) can contribute to BRT's attractiveness as an alternative to driving, marking stops and stations as well as the buses themselves.
Large cities usually have big bus networks. A map showing all bus lines might be incomprehensible, and cause people to wait for low-frequency buses that may not even be running at the time they are needed. By identifying the main bus lines having high-frequency service, with a special brand and separate maps, it is easier to understand the entire network.
Public transit apps are more convenient than a static map, featuring services like trip planning, live arrival and departure times, up-to-date line schedules, local station maps, service alerts, and advisories that may affect one's current trip. Transit and Moovit are examples of apps that are available in many cities around the world. Some operators of bus rapid transit systems have developed their own apps, like Transmilenio. These apps even include all the schedules and live arrival times and stations for buses that feed the BRT, like the SITP (Sistema Integrado de Transporte Público or Public Transit Integrated System) in Bogotá.
In tunnels or subterranean structures
A special issue arises in the use of buses in metro transit structures. Since the areas where demand for an exclusive bus right-of-way is greatest tend to be dense downtown areas, where an above-ground structure may be unacceptable on historic, logistical, or environmental grounds, the use of BRT in tunnels may be unavoidable.
Since buses are usually powered by internal combustion engines, bus metros raise ventilation issues similar to those of motor vehicle tunnels. Powerful fans typically exchange air through ventilation shafts to the surface; these are usually as remote as possible from occupied areas, to minimize the effects of noise and concentrated pollution.
A straightforward way to reduce air quality problems is to use internal combustion engines with lower emissions. The 2008 Euro V European emission standards set a limit on carbon monoxide from heavy-duty diesel engines of 1.5 g/kWh, one third of the 1992 Euro I standard. As a result, less forced ventilation will be required in tunnels to achieve the same air quality.
Another alternative is electric propulsion, which Seattle's Metro Bus Tunnel and Boston's Silver Line Phase II implemented. In Seattle, dual-mode (electric/diesel-electric) buses manufactured by Breda were used until 2004, with the center axle driven by electric motors drawing power from trolley wires through trolley poles in the subway, and the rear axle driven by a conventional diesel powertrain on freeways and streets. Boston uses a similar approach, after initially using trolleybuses pending delivery of the dual-mode vehicles, which was completed in 2005.
In 2004, Seattle replaced its "Transit Tunnel" fleet with diesel-electric hybrid buses, which operate like hybrid cars outside the tunnel and in a low-noise, low-emissions "hush mode" (in which the diesel engine operates but does not exceed idle speed) when underground. The need to provide electric power in underground environments brings the capital and maintenance costs of such routes closer to those of light rail, and raises the question of building, or eventually converting to, light rail. In Seattle, the downtown transit tunnel was retrofitted as a shared hybrid-bus and light-rail facility in preparation for Seattle's Central Link light rail line, which opened in July 2009. In March 2019, expansion of the light rail in the tunnel moved buses back to surface streets.
Bi-articulated battery-electric buses now avoid these tunnel ventilation problems entirely while still providing BRT-level capacity.
Performance
A BRT system can be measured by a number of factors. The BRT Standard was developed by the Institute for Transportation and Development Policy (ITDP) to score BRT corridors, producing a list of rated BRT corridors meeting the minimum definition of BRT. The highest rated systems received a "gold" ranking. The latest edition of the standard was published in 2016.
Other metrics used to evaluate BRT performance include:
The vehicle headway is the average time interval between vehicles on the same line. Buses can operate at headways of 10 seconds or less, but average headways on TransMilenio at busy intersections are 13 seconds, 14 seconds for the busiest section of the Metrobus (Istanbul), 7 seconds in Belo Horizonte, 6 seconds in Rio de Janeiro.
Vehicle capacity, which can range from about 50 passengers for a conventional bus up to some 300, or even 500, for a bi-articulated vehicle.
The effectiveness of the stations to handle passenger demand. High volumes of passengers on vehicles require large bus stations and more boarding areas at busy interchange points. This is the standard bottleneck of BRT (and heavy rail).
The effectiveness of the feeder system: can these deliver people to stations at the required speed?
Local passenger demand. Without enough local demand for travel, the capacity will not be used.
Based on these data, the minimum headway, and current maximum vehicle capacities, the theoretical maximum throughput, measured in passengers per hour per direction (PPHPD), for a single traffic lane is some 150,000 passengers per hour (250 passengers per vehicle, one vehicle every 6 seconds). In real-world conditions, BRT Rio de Janeiro (the BRS Presidente Vargas corridor) holds the record with 65,000 PPHPD; TransMilenio in Bogotá and Metrobus in Istanbul achieve 49,000 and 45,000 PPHPD respectively, and most other busy systems operate in the 15,000 to 25,000 range.
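The arithmetic behind that theoretical ceiling is straightforward; the headway and vehicle capacity below are the illustrative figures quoted above, not measurements from any particular corridor:

\[
\text{PPHPD} \;=\; \frac{3600\ \text{s/h}}{\text{headway}} \times \text{vehicle capacity} \;=\; \frac{3600}{6\ \text{s}} \times 250 \;=\; 600 \times 250 \;=\; 150{,}000
\]

The same relation applied to the record corridors reported below (for example, a bus every 6 seconds carrying roughly 109 passengers each) reproduces their stated PPHPD values to within rounding.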
Research by the Institute for Transportation and Development Policy (ITDP) provides a capacity ranking of MRT modes, based on the reported performance of 14 light rail systems, 14 heavy rail systems (mostly one-track corridors, plus three two-track "highest capacity" systems) and 56 BRT systems.
The study concludes that BRT "capacity on TransMilenio exceeds all but the highest capacity heavy rail systems, and it far exceeds the highest light rail system."
Performance data from 84 systems show:
37,700 passengers per peak hour per direction (PPHPD) in the best BRT system
36,000 in the best one-track heavy rail system
13,400 in the best light rail system
More recent BRT figures include:
45,000 PPHPD in a one-lane system using articulated buses (Istanbul, 2020)
320 buses per hour per direction in the Nossa Senhora de Copacabana corridor in Rio de Janeiro in 2014, meaning a bus roughly every 11 seconds
65,400 PPHPD carried by 600 buses per hour in the Presidente Vargas corridor in Rio de Janeiro (figures for 2012 and 2014 respectively), which means 10 buses per minute, or a bus every 6 seconds
Comparison with light rail
After the first BRT system opened in 1971, cities were slow to adopt BRT because they believed that the capacity of BRT was limited to about 12,000 passengers per hour traveling in a given direction during peak demand. While this is a capacity rarely needed in the US (12,000 is more typical as a total daily ridership), in the developing world this capacity constraint (or rumor of a capacity constraint) was a significant argument in favor of heavy rail metro investments in some venues.
When TransMilenio opened in 2000, it changed the paradigm by giving buses a passing lane at each station stop and introducing express services within the BRT infrastructure. These innovations increased the maximum achieved capacity of a BRT system to 35,000 passengers per hour. The single-lane roadways of the Istanbul Metrobus had frequently been blocked by Phileas buses breaking down, causing delays for all buses in that direction; after the fleet was standardized on Mercedes-Benz buses, capacity increased to 45,000 passengers per hour. Light rail, by comparison, has reported passenger capacities between 3,500 per hour (mainly street running) and 19,000 per hour (fully grade-separated).
There are conditions that favor light rail over BRT, but they are fairly narrow. These conditions are a corridor with only one available lane in each direction, more than 16,000 passengers per direction per hour but less than 20,000, and a long block length, because the train cannot block intersections. These conditions are rare, but in that specific instance, light rail might have a minimal operational advantage.
The United States Government Accountability Office (U.S. GAO), in its report "Mass Transit – Bus Rapid Transit Shows Promise", noted that the U.S. Federal Transit Administration (FTA) at that time provided funding for the construction of heavy rail and of light rail, but not of BRT; FTA funding of BRT "rather focuses on obtaining and sharing information on projects being pursued by local transit agencies". In spite of this funding difference, the capital costs of BRT systems were lower in many US communities than those of light rail systems, with often similar performance. The GAO stated that BRT systems were generally more flexible than light rail, and faster. "While transit officials noted a public bias toward Light Rail, research has found that riders have no preference for rail over bus when service characteristics are equal."
Comparison with heavy rail
Fjellstrom and Wright distributed a map of the mid-term goal to expand Bogotá's BRT system, TransMilenio, so that 85% of the city's 7 million inhabitants would live within 500 m of a TransMilenio line. According to Bogotá's mayor, such an expansion program would be unrealistic for a rail-based MRT system.
An additional use of BRT is the replacement of heavy rail services where infrastructure damage, reduced ridership, or both make lower maintenance costs desirable while still taking advantage of an existing dedicated right of way. One such system in Japan consists of portions of the JR East Kesennuma and Ōfunato Lines, which were catastrophically damaged during the 2011 Tōhoku earthquake and tsunami and later rebuilt as a busway over the same right of way, providing improved service with much lower restoration and maintenance costs. Another, set to open in August 2023, is a portion of the JR Kyushu Hitahikosan Line, which was damaged by torrential rain in 2017. In both cases, ridership had dropped considerably since the lines opened, and the higher capacity of a rail line was no longer needed or cost-effective compared to buses on the same alignments.
Comparison with conventional bus services
Conventional scheduled bus services use general traffic lanes, which can be slow due to traffic congestion, and the speed of bus services is further reduced by long dwell times.
In 2013, the New York City authorities noted that buses on 34th Street, which carried 33,000 bus riders a day on local and express routes, traveled only slightly faster than walking pace. Even after the implementation of Select Bus Service (New York City's version of a bus rapid transit system), dedicated bus lanes, and traffic cameras on the 34th Street corridor, buses there were still found to travel at an average of only 4.5 mph.
In the 1960s, Reuben Smeed predicted the minimum average speed to which traffic in central London would fall without other disincentives such as road pricing, based on the theory that this was the slowest speed that people would tolerate. When the London congestion charge was introduced in 2003, the average traffic speed was indeed at about the predicted level, the highest it had been since the 1970s. By way of contrast, typical average speeds of BRT systems are substantially higher.
Cost
The capital cost of implementing BRT is lower than for light rail: A study by the U.S. Government Accountability Office (GAO) from 2000 found that the average capital cost per mile for busways was $13.5 million while light rail average cost was $34.8 million. The total investment varies considerably due to factors such as cost of the roadway, amount of grade separation, station structures and traffic signal systems.
In 2003, a study edited by the German GTZ compared various MRT systems all over the world and concluded that "Bus Rapid Transit (BRT) can provide high-quality, metro-like transit service at a fraction of the cost of other options".
In 2013, an analysis of a database of nineteen LRT projects, twenty-six HRT projects, and forty-two BRT projects found that "In higher income countries ... an HRT alternative is likely to cost up to 40 times as much as a BRT alternative", and a surface LRT alternative about 4 times as much as a BRT alternative.
Operational costs of running a BRT system are generally lower than for light rail, though the exact comparison varies, and labor costs depend heavily on wages, which vary between countries. For the same level of ridership and demand, higher labor costs in the developed world relative to developing countries tend to encourage developed-world transit operators to run services with larger but less frequent vehicles. This allows the service to achieve the same capacity while minimizing the number of drivers, but it can come as a hidden cost to passengers on lower-demand routes, who experience significantly lower frequencies and longer waiting times, which in turn limits ridership growth.
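The vehicle-size trade-off can be made concrete with a small back-of-the-envelope calculation. This is only a sketch with hypothetical demand and capacity figures (they are not drawn from the GAO study or any cited system), assuming passengers arrive at random so the average wait is roughly half the headway:

# Hypothetical illustration of the vehicle-size vs. frequency trade-off described above.
def service_plan(demand_pphpd: float, vehicle_capacity: float) -> dict:
    """Buses per hour, headway and average wait needed to carry a given hourly demand."""
    buses_per_hour = demand_pphpd / vehicle_capacity   # drivers needed per hour, one per bus
    headway_min = 60.0 / buses_per_hour                # minutes between departures
    avg_wait_min = headway_min / 2.0                   # mean wait for randomly arriving riders
    return {"buses_per_hour": round(buses_per_hour, 1),
            "headway_min": round(headway_min, 1),
            "avg_wait_min": round(avg_wait_min, 1)}

# A low-demand route of 600 passengers per hour, served by standard vs. articulated buses:
print(service_plan(600, 60))    # 10 buses/h -> 6 min headway, ~3 min average wait
print(service_plan(600, 150))   # 4 buses/h -> 15 min headway, ~7.5 min average wait

In this sketch the larger vehicles cut the number of drivers by more than half while leaving hourly capacity unchanged, but the average wait on the low-demand route more than doubles, which is the hidden cost to passengers noted above.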
In the U.S. GAO study, BRT systems also usually had lower costs measured per vehicle hour, per revenue mile, and per passenger trip, mainly because of lower vehicle and infrastructure costs.
An ambitious light rail system runs partly grade-separated (e.g. underground), which gives it an unobstructed right-of-way and much faster travel than a surface-level system that must pass through traffic signals. Underground BRT was suggested as early as 1954. As long as most buses still run on diesel, air quality can become a significant concern in tunnels, but the Downtown Seattle Transit Tunnel is an example of using hybrid buses that switch to overhead electric propulsion while underground, eliminating diesel emissions and reducing fuel usage. Alternatives are elevated busways or, at greater expense, elevated railways.
Criticism
BRT systems have been widely promoted by non-governmental organizations such as the Shell-funded EMBARQ program, Rockefeller Foundation and Institute for Transportation and Development Policy (ITDP), whose consultant pool includes the former mayor of Bogota (Colombia), Enrique Peñalosa (former president of ITDP).
Supported by contributions from bus-producing companies such as Volvo, the ITDP not only established a proposed "standard" for BRT system implementation, but also lobbied intensively around the world to convince local governments to select BRT systems over rail-based transportation modes (subways, light rail, etc.).
"Fake" BRT systems (BRT creep)
Bus rapid transit creep is a phenomenon commonly defined as a bus rapid transit (BRT) system that fails to meet the requirements to be considered "true BRT". These systems are often marketed as fully realized bus rapid transit, but end up being described by proponents of the "BRT creep" term as little more than an improvement to regular bus service. Notably, the Institute for Transportation and Development Policy (ITDP) published a set of guidelines, known as the BRT Standard, to define what constitutes "true BRT" and to avert this phenomenon.
The most extreme versions of BRT creep lead to systems that cannot even truly be recognized as "Bus Rapid Transit". For example, a rating from the ITDP determined that the Boston Silver Line was best classified as "Not BRT" after local decision makers gradually decided to do away with most BRT-specific features. The study also evaluates New York City's Select Bus Service (which is supposed to be BRT-standard) as "Not BRT".
Environmental issues
Unlike the electric-powered trains commonly used in rapid transit and light rail systems, bus rapid transit often uses diesel- or gasoline-fueled engines. The typical bus diesel engine causes noticeable levels of air pollution, noise and vibration. BRT can nevertheless still provide significant environmental benefits over private cars, and BRT systems can replace an inefficient conventional bus network with more efficient, faster and less polluting BRT buses. For example, Bogotá previously used 2,700 conventional buses to transport 1.6 million passengers daily, while in 2013 TransMilenio transported 1.9 million passengers using only 630 BRT buses, a fleet less than a quarter the size of the old one, circulating at twice the speed and with a large reduction in air pollution.
To reduce direct emissions some systems use alternative forms of traction such as electric or hybrid engines. BRT systems can use trolleybuses to lower air pollution and noise emissions such as those in Beijing and Quito. The price penalty of installing overhead lines could be offset by the environmental benefits and potential for savings from centrally generated electricity, especially in cities where electricity is less expensive than other fuel sources. Trolleybus electrical systems can be potentially reused for future light rail conversion. TransJakarta buses use cleaner compressed natural gas-fueled engines, while Bogotá started to use hybrid buses in 2012; these hybrid systems use regenerative braking to charge batteries when the bus stops and then use electric motors to propel the bus up to 40 km/h, then automatically switching to the diesel engine for higher speeds, which allows for considerable savings in fuel consumption and pollutant dispersion.
Overcrowding and poor quality service
Many BRT systems suffer from overcrowding in buses and stations as well as long wait times for buses. In Santiago de Chile, the system averages six passengers per square meter inside vehicles. Users have reported days when the buses take too long to arrive and are too overcrowded to accept new passengers. As of June 2017, the system had an approval rating of 15% among commuters, and it had lost 27% of its passengers, who have turned mostly to cars.
In Bogotá the overcrowding was even worse: TransMilenio averaged eight passengers per square meter. Only 29% of users felt satisfied with the system. The data also showed that 23% of citizens agreed with building more TransMilenio lines, in contrast to the 42% who considered that a rapid transit system should be built. Several cases of sexual assault were reported by female users of TransMilenio; according to a 2012 survey by Bogotá's Secretariat for Women, 64% of women said they had been victims of sexual assault in the system, which has even been ranked as the most dangerous transport system for women. The poor quality of the system contributed to an increase in the number of cars and motorcycles in the city, as citizens preferred these over TransMilenio. According to official data, the number of cars increased from approximately 666,000 in 2005 to 1,586,700 in 2016, and motorcycle numbers were also growing, with 660,000 sold in Bogotá in 2013, twice the number of cars sold.
At the end of 2018 TransMilenio ordered 1,383 new buses to replace the older ones in service: 52% were compressed natural gas (CNG) buses made by Scania with a Euro 6 emission rating, and 48% were diesel buses made by Volvo with a Euro 5 emission rating. Subsequent orders produced a further result: "To improve public and environmental health, the City of Bogotá has assembled a fleet of 1,485 electric buses for its public transportation system - placing the city among the three largest e-bus fleets outside of China." In 2022 Bogotá won the Sustainable Transport Award, given by the Institute for Transportation and Development Policy, which is partially funded by bus manufacturers; reasons cited include the TransMilenio system and the city's urban cycling strategy.
The system in Jakarta has also experienced problems, with complaints of overcrowding in buses and stations and low frequency on the routes. There were extensive safety concerns as well: rampant sexual harassment has been reported, and the fire safety of the buses came under scrutiny after one of them, a Zhongtong imported from China, suddenly and spontaneously caught fire. The quality of the service was so poor that the then-governor of Jakarta, Basuki Tjahaja Purnama, publicly apologized in March 2015 for the system's performance.
Failures and reversals
The temporary unpopularity of Delhi's BRT (2016) and the riots and spontaneous user demonstrations in Bogotá (2016) raised doubts about the ability of BRT systems to keep pace with increased ridership. On the other hand, the speed at which BRT ridership has grown supports the research finding of no general preference for rail over bus (see the end of the section "Comparison with light rail"). According to the 2022 Sustainable Transport Award, Bogotá has since regained riders' trust and improved safety.
A lack of permanence of BRT has also been criticized, with some arguing that BRT systems can be used as an excuse to build roads that others later try to convert for use by non-BRT vehicles. Examples of this can be found in Delhi, where a BRT system was scrapped, and in Aspen, Colorado, where, as of 2017, drivers were lobbying the government to allow mixed traffic in former BRT lanes, although in other US cities, such as Albuquerque, New Mexico, just the opposite is true. Such reversals may be a side effect of the very flexibility that is counted among BRT's advantages.
Experts have also linked BRT failures to land use structure: cities that are sprawling and lack mixed use can have insufficient ridership to make BRT economically viable. In Africa, the African Urban Institute has criticized the viability of ongoing BRT projects across the continent.
Impact
A 2018 study found that the introduction of a BRT network in Mexico City reduced air pollution, as measured by emissions of CO, NOX, and PM10.
See also
Autonomous Rail Rapid Transit
Bus lane
Capacitor electric vehicle
List of bus rapid transit systems
List of bus operating companies
List of guided busways and BRT systems in the United Kingdom
List of trolleybus systems
Park and ride
Quality Bus Corridor
Queue jump
Sustainable transport
Traffic engineering (transportation)
Transit bus
Transit Elevated Bus
References
Further reading
Ghadirifaraz, B., Vaziri, M., Safa, A., & Barikrou, N. (2017). A Statistical Appraisal of Bus Rapid Transit Based on Passengers Satisfaction and Priority Case Study: Isfahan City, Iran (No. 17-05108).
Poku-Boansi, M. and Marsden, G. (2018). Bus Rapid Transit Systems as a Governance Reform Project. Journal of Transport Geography, 70, pp. 193–202.
External links
General information
The BRT Standard 2014 Edition Institute for Transportation and Development Policy
Bus Rapid Transit Planning Guide (2007) A very comprehensive 800-page guide to creating a successful BRT system by the Institute for Transportation and Development Policy (available in English, Spanish and Portuguese)
Bus Rapid Transit, Volume 1: Case Studies in Bus Rapid Transit Transportation Research Board
Bus Rapid Transit, Volume 2: Implementation Guidelines Transportation Research Board
Across Latitudes and Cultures Bus Rapid Transit An international Centre of Excellence for BRT development
Transit Capacity and Quality of Service Manual Transportation Research Board
BRT Technologies: Assisting Drivers Operating Buses on Road Shoulders. University of Minnesota Center for Transportation Studies, Department of Mechanical Engineering
Country-specific information
Recapturing Global Leadership in Bus Rapid Transit – A Survey of Select U.S. Cities (available for download in pdf) Institute for Transportation & Development Policy (May 2011)
Bus Rapid Transit Shows Promise U.S. General Accounting Office
The National BRT Institute (USA)
Databases
Global BRT Data Database of Bus Rapid Transit systems around the world
Bus terminology
Public transport by mode
Sustainable transport
Transportation planning
Sustainable urban planning
Types of bus service | Bus rapid transit | [
"Physics"
] | 7,194 | [
"Sustainable transport",
"Transport",
"Physical systems"
] |
333,633 | https://en.wikipedia.org/wiki/B612%20Foundation | The B612 Foundation is a private nonprofit foundation headquartered in Mill Valley, California, United States, dedicated to planetary science and planetary defense against asteroids and other near-Earth object (NEO) impacts. It is led mainly by scientists, former astronauts and engineers from the Institute for Advanced Study, Southwest Research Institute, Stanford University, NASA and the space industry.
As a non-governmental organization it has conducted two lines of related research to help detect NEOs that could one day strike the Earth, and find the technological means to divert their path to avoid such collisions. It also assisted the Association of Space Explorers in helping the United Nations establish the International Asteroid Warning Network, as well as a Space Missions Planning Advisory Group to provide oversight on proposed asteroid deflection missions.
In 2012, the foundation announced it would design and build a privately financed asteroid-finding space observatory, the Sentinel Space Telescope, to be launched in 2017–2018. Once stationed in a heliocentric orbit around the Sun similar to that of Venus, Sentinel's supercooled infrared detector would have helped identify dangerous asteroids and other NEOs that pose a risk of collision with Earth. In the absence of substantive planetary defense provided by governments worldwide, B612 attempted a fundraising campaign to cover the Sentinel Mission, estimated at $450 million for 10 years of operation. Fundraising was unsuccessful, and the program was cancelled in 2017, with the Foundation pursuing a constellation of smaller satellites instead.
The B612 Foundation is named for the asteroid home of the eponymous hero of Antoine de Saint-Exupéry's 1943 book The Little Prince.
Background
When an asteroid enters the planet's atmosphere it becomes known as a 'meteor'; those that survive and fall to the Earth's surface are then called 'meteorites'. While basketball-sized meteors occur almost daily, and compact car-sized ones about yearly, they usually burn up or explode high above the Earth as bolides (fireballs), often with little notice. During an average 24-hour period, the Earth sweeps through some 100 million particles of interplanetary dust and pieces of cosmic debris, only a very minor amount of which arrives on the ground as meteorites.
The larger an asteroid or other near-Earth object (NEO) is, the less frequently such objects impact the planet's atmosphere: large meteors seen in the skies are extremely rare, medium-sized ones less so, and much smaller ones commonplace. Although stony asteroids often explode high in the atmosphere, some objects, especially iron-nickel meteors and other types descending at a steep angle, can explode close to ground level or even impact directly onto land or sea. In the U.S. state of Arizona, the Meteor Crater (officially named Barringer Crater) formed in a fraction of a second as nearly 160 million tonnes of limestone and bedrock were uplifted, creating its crater rim on formerly flat terrain. The asteroid that produced the Barringer Crater was comparatively small, yet it struck with an impact energy about 625 times greater than that of the bomb that destroyed the city of Hiroshima during World War II. Tsunamis can also occur after a medium-sized or larger asteroid impacts an ocean surface or other large body of water.
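The destructive potential of even a small impactor follows from its kinetic energy, \(E = \tfrac{1}{2}mv^{2}\). As a hedged illustration only (the mass and speed below are assumed round numbers of roughly the right order for an iron impactor of the Barringer class, not measured values):

\[
E \;=\; \tfrac{1}{2}\,(4\times 10^{8}\ \text{kg})\,(1.45\times 10^{4}\ \text{m/s})^{2} \;\approx\; 4.2\times 10^{16}\ \text{J} \;\approx\; 10\ \text{megatons of TNT},
\]

which, taking the Hiroshima bomb as roughly 16 kilotons, is consistent with the roughly 625-fold comparison above.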
The probability of a mid-sized asteroid (similar to the one that destroyed the Tunguska River area of Russia in 1908) hitting Earth during the 21st century has been estimated at 30%. Since the Earth is currently more populated than in previous eras, there is a greater risk of large casualties arising from a mid-sized asteroid impact. However, as of the early 2010s, only about a half of one per cent of Tunguska-type NEOs had been located by astronomers using ground-based telescope surveys.
The need for an asteroid detection program has been compared to the need for monsoon, typhoon, and hurricane preparedness. As the B612 Foundation and other organizations have publicly noted, of the different types of natural catastrophes that can occur on our planet, asteroid strikes are the only one that the world now has the technical capability to prevent.
B612 is one of several organizations to propose detailed dynamic surveys of NEOs and preventative measures such as asteroid deflection. Other groups include Chinese researchers, NASA in the United States, NEOShield in Europe, as well as the international Spaceguard Foundation. In December 2009 Roscosmos Russian Federal Space Agency director Anatoly Perminov proposed a deflection mission to the asteroid 99942 Apophis, which at the time had been thought to pose a risk of collision with Earth.
Asteroid deflection workshop
The Foundation evolved from an informal one-day workshop on asteroid deflection strategies in October 2001, organized by Dutch astrophysicist Piet Hut along with physicist and then-U.S. astronaut Ed Lu, and held at NASA's Johnson Space Center in Houston, Texas. Twenty researchers participated, principally from various NASA facilities and the non-profit Southwest Research Institute, but also from the University of California, the University of Michigan, and the Institute for Advanced Study. All were interested in contributing to the proposed creation of an asteroid deflection capability. The seminar participants included Rusty Schweickart, a former Apollo astronaut, and Clark Chapman, a planetary scientist.
Among the proposed experimental research missions discussed were the alteration of an asteroid's spin rate and changing the orbit of one part of a binary asteroid pair. Following the seminar's round-table discussions, the workshop generally agreed that the vehicle of choice for deflecting an asteroid would be powered by a low-thrust ion plasma engine. Landing a nuclear-powered, plasma-engined pusher vehicle on the asteroid's surface was seen as promising, an early proposal that would later encounter a number of technical obstacles. Nuclear explosives were seen as "too risky and unpredictable" for several reasons, warranting the view that gently altering an asteroid's trajectory was the safest approach, but also one requiring years of advance warning to accomplish successfully.
B612 Project and Foundation
The October 2001 asteroid deflection workshop participants created the "B612 Project" to further their research. Schweickart, along with Drs. Hut, Lu and Chapman, then formed the B612 Foundation on October 7, 2002, with its first goal being to "significantly alter the orbit of an asteroid in a controlled manner". Schweickart became an early public face of the foundation and served as chairman of its board of directors. In 2010, as part of an ad hoc task force on planetary defense, he advocated increasing NASA's annual budget by $250–300 million over a 10-year period (with an operational maintenance budget of up to $75 million per year after that) in order to more fully catalog the near-Earth objects (NEOs) that can pose a threat to Earth, and to fully develop impact avoidance capabilities. That level of budgetary support would permit 10–20 years of advance warning, creating a sufficient window for the required trajectory deflection.
Their recommendations were made to a NASA Advisory Council, but were ultimately unsuccessful in obtaining Congressional funding because NASA, lacking a legislated mandate for planetary protection, was not permitted to request it. Feeling it would be imprudent to continue waiting for substantive government or United Nations action, B612 began a fundraising campaign in 2012 to cover the approximately US$450 million cost of developing, launching and operating an asteroid-finding space telescope, to be called Sentinel, with a goal of raising $30 to $40 million per year. The space observatory's objective would be to accurately survey NEOs from an orbit similar to that of Venus, creating a large dynamic catalog of such objects that would help identify dangerous Earth-impactors, deemed a necessary precursor to mounting any asteroid deflection mission.
In March and April 2013, several weeks after the Chelyabinsk meteor explosion injured some 1,500 people, the U.S. Congress held hearings on "...the Risks, Impacts and Solutions for Space Threats". The hearings received testimony from B612 head Ed Lu, as well as Dr. Donald K. Yeomans, head of NASA's NEO Program Office, Dr. Michael A'Hearn of the University of Maryland, co-chair of a 2009 U.S. National Research Council study on asteroid threats, and others. The difficulty of quickly intercepting an imminent asteroid threat to Earth was made apparent during the testimony.
As a result of a set of hearings by the NASA Advisory Committee following the Chelyabinsk explosion in 2013, in conjunction with a White House request to double its budget, NASA's Near Earth Object Program funding was increased to $40.5 M/year in its FY2014 (Fiscal Year 2014) budget. It had previously been increased to $20.5 M/year in FY2012 (about 0.1% of NASA's annual budget at the time), from an average of about $4 M/year between 2002 and 2010.
Asteroid hazard reassessment
On Earth Day, April 22, 2014, the B612 Foundation formally presented a revised assessment on the frequency of "city-killer" type impact events, based on research led by Canadian planetary scientist Peter Brown of the University of Western Ontario's (UWO) Centre for Planetary Science and Exploration. Dr. Brown's analysis, "A 500-Kiloton Airburst Over Chelyabinsk and An Enhanced Hazard from Small Impactors", published in the journals Science and Nature, was used to produce a short computer-animated video that was presented to the media at the Seattle Museum of Flight.
The nearly one-and-a-half-minute video displayed a rotating globe with the impact points of about 25 asteroids, each releasing more than one kiloton and up to 600 kilotons of blast force, that struck the Earth from 2000 to 2013 (for comparison, the nuclear bomb that destroyed Hiroshima was equivalent to about 16 kilotons of TNT). Of those impacts between 2000 and 2013, eight were as large as, or larger than, the Hiroshima bomb. Only one of the asteroids, 2008 TC3, was detected in advance, some 19 hours before exploding in the atmosphere. As was the case with the 2013 Chelyabinsk meteor, no warnings were issued for any of the other impacts.
At the presentation, alongside former NASA astronauts Dr. Tom Jones and Apollo 8 astronaut Bill Anders, Foundation head Ed Lu explained that the frequency of dangerous asteroid impacts hitting Earth was from three to ten times greater than previously believed a dozen or so years ago (earlier estimates had pegged the odds as one per 300,000 years). The latest reassessment is based on worldwide infrasound signatures recorded under the auspices of the Comprehensive Nuclear-Test-Ban Treaty Organization, which monitors the planet for nuclear explosions. Dr. Brown's UWO study used infrasound signals generated by asteroids that released more than a kiloton of TNT explosive force. The study suggested that "city-killer" type impact events similar to the Tunguska event of 1908 actually occur on average about once a century instead of every thousand years, as was once previously believed. The 1908 event occurred in the remote, sparsely populated Tunguska area of Siberia, Russia, and is attributed to the likely airburst explosion of an asteroid or comet that destroyed some 80 million trees over 2,150 square kilometres (830 sq mi) of forests. The higher frequency of these types of events is interpreted as meaning that "blind luck" has mainly prevented a catastrophic impact over an inhabited area that could kill millions, a point made near the video's end.
99942 Apophis
During the first decade of the 2000s, there were serious concerns the 325 metres (1,066 ft) wide asteroid 99942 Apophis posed a risk of impacting Earth in 2036. Preliminary, incomplete data by astronomers using ground-based sky surveys resulted in the calculation of a Level 4 risk on the Torino Scale impact hazard chart. In July 2005, B612 formally asked NASA to investigate the possibility that the asteroid's post-2029 orbit could be in orbital resonance with Earth, which would increase the likelihood of a future impact. The Foundation also asked NASA to investigate whether a transponder should be placed on the asteroid to enable more accurate tracking of how its orbit would be changed by the Yarkovsky effect.
By 2008, B612 had provided estimates for a 30-kilometre-wide corridor, called a "path of risk", along which an impact could occur, as part of its effort to develop viable deflection strategies. The calculated risk path extended from Kazakhstan across southern Russia and Siberia, across the Pacific, then right between Nicaragua and Costa Rica, crossing northern Colombia and Venezuela, and ending in the Atlantic just before reaching Africa. At that time, a computer simulation estimated that a hypothetical Apophis impact in countries such as Colombia and Venezuela could have resulted in more than 10 million casualties. Alternatively, an impact in the Atlantic or Pacific oceans could produce a deadly tsunami over 240 metres (about 800 ft) in height, capable of destroying many coastal areas and cities.
A series of later, more accurate observations of 99942 Apophis, combined with the recovery of previously unseen data, revised the odds of a collision in 2036 as being virtually nil, and effectively ruled it out.
International involvement
B612 Foundation members assisted the Association of Space Explorers (ASE) in helping obtain United Nations (UN) oversight of NEO tracking and deflection missions through the UN's Committee On the Peaceful Uses of Outer Space (UN COPUOS) along with COPUOS's Action Team 14 (AT-14) expert group. Several members of B612, also members of the ASE, worked with COPUOS since 2001 to establish international involvement for both impact disaster responses, and on deflection missions to prevent impact events. According to Foundation Chair Emeritus Rusty Schweickart in 2013, "No government in the world today has explicitly assigned the responsibility for planetary protection to any of its agencies".
In October 2013, COPUOS's Scientific and Technical Subcommittee approved several measures, later approved by the UN General Assembly in December, to deal with terrestrial asteroid impacts, including the creation of an International Asteroid Warning Network (IAWN) plus two advisory groups: the Space Missions Planning Advisory Group (SMPAG) and the Impact Disaster Planning Advisory Group (IDPAG). The IAWN warning network will act as a clearinghouse for shared information on dangerous asteroids and for any future terrestrial impact events that are identified. The Space Missions Planning Advisory Group will coordinate joint studies of the technologies for deflection missions and provide oversight of actual missions. This is because deflection missions typically involve progressively moving an asteroid's predicted impact point across the surface of the Earth (and also across the territories of uninvolved countries) until the NEO is deflected either ahead of or behind the planet at the point where their orbits intersect. An initial framework of international cooperation at the UN is needed, said Schweickart, to guide the policymakers of its member nations on several important NEO-related aspects. However, as asserted by the Foundation, the new UN measures only constitute a starting point: to be effective, they will need to be enhanced by further policies and resources implemented at both the national and supranational levels.
At the time of the UN's policy adoption in New York City, Schweickart and four other ASE members, including B612 head Ed Lu and strategic advisers Dumitru Prunariu and Tom Jones participated in a public forum moderated by Neil deGrasse Tyson not far from the United Nations Headquarters. The panel urged the global community to adopt further important steps for planetary defense against NEO impacts. Their recommendations included:
UN delegates briefing their home countries' policymakers on the UN's newest roles;
having each country's government create detailed asteroid disaster response plans, assigning fiscal resources to deal with asteroid impacts, and delegating a lead agency to handle its disaster response in order to create clear lines of communication from the IAWN to the affected countries;
having their governments support the ASE's and B612's efforts to identify the estimated one million "city-killer" NEOs capable of impacting Earth, by deploying a space-based asteroid telescope, and
committing member states to launching an international test deflection mission within 10 years.
Sentinel Mission
The Sentinel Mission program was the cornerstone of the B612 Foundation's earlier efforts, with its preliminary design and system-architecture-level reviews planned for 2014 and its critical design review to be conducted in 2015. The infrared telescope would be launched atop a SpaceX Falcon 9 rocket and placed into a Venus-trailing heliocentric orbit around the Sun. Orbiting between the Sun and Earth, the telescope would always keep the Sun's rays behind its lens, so they would never inhibit the observatory's ability to detect asteroids or other near-Earth objects (NEOs). From the vantage of its inner-Solar System orbit, Sentinel would be able to "pick up objects that are currently difficult, if not impossible, to see in advance from Earth", such as the Chelyabinsk meteor of 2013, which went undetected until its explosion over Chelyabinsk Oblast, Russia. The Sentinel Mission was planned to provide an accurate dynamic catalog of asteroids and other NEOs, made available to scientists worldwide through the International Astronomical Union's Minor Planet Center; the data collected would be used to calculate the risk of impact events with our planet, allowing for asteroid deflection through the use of gravity tractors to divert their trajectories away from Earth.
In order to communicate with the spacecraft while it is orbiting the Sun (at about the same distance as Venus), which can be at times as far as 270 million kilometres (170 million miles) from Earth, the B612 Foundation entered into a Space Act Agreement with NASA for the use of their deep space telecommunication network.
Design and operation
Sentinel was designed to perform continuous observation and analysis during its planned multi-year operational life, although B612 anticipated it might continue to function for up to 10 years. Using its telescope mirror and sensors built by Ball Aerospace (makers of the Hubble Space Telescope's instruments), its mission would be to catalog 90% of near-Earth asteroids above its target size threshold. There were also plans to catalog smaller Solar System objects as well.
The space observatory would orbit the Sun at approximately the same orbital distance as Venus, employing infrared astronomy to identify asteroids against the cold background of outer space. Sentinel would scan in the 7- to 15-micron wavelength band across a 5.5 by 2-degree field of view. Its sensor array would consist of 16 detectors scanning "a 200-degree, full-angle field of regard". B612, working in partnership with Ball Aerospace, was constructing Sentinel's 51 cm aluminum mirror, designed for a large field of view, with its infrared sensors cooled by Ball's two-stage, closed-Stirling-cycle cryocooler.
B612 aimed to produce its space telescope at a significantly lower cost than traditional space science programs by reusing space hardware systems developed for earlier programs rather than designing a brand-new observatory. Schweickart stated that about "80% of what we're dealing with in Sentinel is Kepler, 15% Spitzer, 5% new, higher-performance infrared sensors", concentrating the project's R&D funds on the critical area of cryogenically cooled image-sensor technology and producing what the Foundation termed the most sensitive asteroid-finding telescope ever built.
Data gathered by Sentinel would be provided through existing scientific data-sharing networks that include NASA and academic institutions such as the Minor Planet Center in Cambridge, Massachusetts. Given the satellite's telescopic accuracy, Sentinel's data may have proven valuable for other possible future missions, such as asteroid mining.
Mission funding
B612 was attempting to raise approximately $450M to fund the development, launch and operational costs of the telescope, about the cost of a complex freeway interchange, or approximately $100M less than a single Air Force Next-Generation Bomber. The $450 million cost estimate is composed of $250 million to create Sentinel, plus another $200 million for 10 years of operations. In explaining the Foundation's bypassing of possible governmental grants for such a mission, Dr. Lu stated their public fundraising appeal is being driven by "[t]he tragedy of the commons: When it's everybody's problem, it's nobody's problem", referring to a lack of ownership, priority and funding that governments have assigned to asteroid threats, also stating on a different occasion "We're the only ones taking it seriously." According to another B612 board member, Rusty Schweickart, "The good news is, you can prevent it—not just get ready for it! The bad news is, it's hard to get anybody to pay attention to it when there are potholes in the road." After providing earlier Congressional testimony on the issue Schweickart was dismayed to hear from congressional staff members that, while U.S. lawmakers involved in the hearing understood the seriousness of the threat, they would likely not legislate funding for planetary defense as "making the deflection of asteroids a priority might backfire in [their] reelection campaigns".
The Foundation intended to launch Sentinel in 2017–2018, with initiation of data transfer for on-Earth processing anticipated no later than 6 months afterwards.
In the aftermath of the February 2013 Chelyabinsk meteor explosion, in which an asteroid entered the atmosphere undetected at about Mach 60 and became a brilliant superbolide meteor before exploding over Chelyabinsk, Russia, the B612 Foundation experienced a "surge of interest" in its project to detect asteroids, with a corresponding increase in funding donations. After providing Congressional testimony, Dr. Lu noted that the many online videos of the asteroid's explosion over Chelyabinsk made a significant impact on millions of viewers worldwide, saying "There's nothing like a hundred YouTube videos to do that."
Staff
Leadership
In 2014 eight key staff positions were designated, covering the offices of the chief executive officer (CEO), chief operating officer (COO), Sentinel Program Architecture (SPA), Sentinel Mission Direction (SMD), Sentinel Program Management (SPM), Sentinel Mission Science (SMS) and the Sentinel Standing Review Team (SSRT), plus Public Relations.
Ed Lu, Co-founder, B612 Foundation. Executive Director, Asteroid Institute
Edward Tsang "Ed" Lu (born July 1, 1963) is a co-founder and the chief executive officer of the B612 Foundation, as well as a U.S. physicist and a former NASA astronaut. He is a veteran of two Space Shuttle missions and an extended stay aboard the International Space Station, which included a six-hour spacewalk performing construction work outside the station. During his three missions he logged a total of 206 days in space.
His education includes an electrical engineering degree from Cornell University, and a Ph.D. in applied physics from Stanford University. Lu became a specialist in solar physics and astrophysics as a visiting scientist at the High Altitude Observatory based in Boulder, Colorado, from 1989 until 1992. In his final year, he held a joint appointment with the Joint Institute for Laboratory Astrophysics at the University of Colorado. Lu performed postdoctoral fellow work at the Institute for Astronomy in Honolulu, Hawaii from 1992 until 1995 before being selected for NASA's Astronaut Corps in 1994.
Lu developed a number of new theoretical advances, which have provided for the first time a basic understanding of the underlying physics of solar flares. Besides his work on solar flares he has published journal articles and scientific papers on a wide range of topics including cosmology, solar oscillations, statistical mechanics, plasma physics, near-Earth asteroids, and is also a co-inventor of the gravitational tractor concept of asteroid deflection.
In 2007 Lu retired from NASA to become the Program Manager on Google's Advanced Projects Team; he has also worked with Liquid Robotics as its Chief of Innovative Applications and at Hover Inc. as its chief technology officer. While still at NASA in 2002, Lu co-founded the B612 Foundation, later serving as its chairman; as of 2014 he is its chief executive officer.
Lu holds a commercial pilot license with multi-engine and instrument ratings, and has accumulated some 1,500 hours of flight time. Among his honors are NASA's highest awards, its Distinguished Service and Exceptional Service medals, as well as the Russian Gagarin, Komarov and Beregovoy Medals.
Tom Gavin, Chairman, Sentinel Standing Review Team
Thomas R. Gavin is the chairman of the B612 Foundation's Sentinel Standing Review Team (SSRT), and a former executive-level manager at NASA. He served with NASA for 30 years, including his position as Associate Director for Flight Programs and Mission Assurance at their Jet Propulsion Laboratory (JPL) organization, and "has been at the forefront in leading many of the most successful U.S. space missions, including Galileo's mission to Jupiter, Cassini–Huygens mission to Saturn, development of Genesis, Stardust, Mars 2001 Odyssey, Mars Exploration Rovers, SPITZER and Galaxy Evolution Explorer programs."
He was appointed associate director for flight projects and mission success for NASA's Jet Propulsion Laboratory in May 2001, a new position created to provide the JPL Director's Office with oversight of flight projects. He later served as interim director for Solar System exploration. Previously, he was director of JPL's Space Science Flight Projects Directorate, which oversaw the Genesis, Mars 2001 Odyssey, Mars rovers, Spitzer Space Telescope and GALEX projects. He also served as deputy director of JPL's Space and Earth Science Programs Directorate beginning in December 1997. In June 1990 he was appointed spacecraft system manager for the Cassini–Huygens mission to Saturn, and retained that position until the project's successful launch in 1997. From 1968 to 1990 he was a member of the Galileo and Voyager project offices responsible for mission assurance. He received his bachelor's degree in chemistry from Villanova University in Pennsylvania in 1961.
Gavin has been honored on a number of occasions for exceptional work, receiving NASA's Distinguished and Exceptional Service Medals in 1981 for his work on the Voyager space probes program, NASA's Medal for Outstanding Leadership in 1991 for Galileo, and again in 1999 for the Cassini–Huygens mission. In 1997 Aviation Week and Space Technology presented its Laurels Award to him for outstanding achievement in the field of space. He also earned the American Astronomical Society's 2005 Randolph Lovelace II Award for his management of all Jet Propulsion Laboratory and NASA robotic science spacecraft missions.
Scott Hubbard, Sentinel Program Architect
Dr. G. Scott Hubbard is the B612 Foundation's Sentinel Program Architect, as well as a physicist, academic and a former executive-level manager at NASA, the U.S. space agency. He is a professor of Aeronautics and Astronautics at Stanford University and has been engaged in space-related research as well as program, project and executive management for more than 35 years including 20 years with NASA, culminating his career there as director of NASA's Ames Research Center. At Ames he was responsible for overseeing the work of some 2,600 scientists, engineers and other staff. Currently on the SpaceX Safety Advisory Panel, he previously served as NASA's sole representative on the Space Shuttle Columbia Accident Investigation Board, and also as their first Mars Exploration Program director in 2000, successfully restructuring the entire Mars program in the wake of earlier serious mission failures.
Hubbard founded NASA's Astrobiology Institute in 1998; conceived the Mars Pathfinder mission with its airbag landing system and was the manager for their highly successful Lunar Prospector Mission. Prior to joining NASA, Hubbard led a small start-up high technology company in the San Francisco Bay Area and was a staff scientist at the Lawrence Berkeley National Laboratory. Hubbard has received many honors including NASA's highest award, their Distinguished Service Medal, and the American Institute of Aeronautics and Astronautics's Von Karman Medal.
Hubbard was elected to the International Academy of Astronautics, is a Fellow of the American Institute of Aeronautics and Astronautics, has authored more than 50 scientific papers on research and technology and also holds the Carl Sagan Chair at the SETI Institute. His education includes an undergraduate degree in physics and astronomy at Vanderbilt University and a graduate degree in solid state and semiconductor physics at the University of California at Berkeley.
Marc Buie, Sentinel Mission Scientist
Dr. Marc W. Buie (b. 1958) is the foundation's Sentinel Mission Scientist, as well as a U.S. astronomer at Lowell Observatory in Flagstaff, Arizona. Buie received his B.Sc. in physics from Louisiana State University in 1980 and earned his Ph.D. in Planetary Science from the University of Arizona in 1984. He was a post-doctoral fellow at the University of Hawaii from 1985 to 1988. From 1988 to 1991, he worked at the Space Telescope Science Institute where he assisted in the planning of the first planetary observations made by the Hubble Space Telescope.
Since 1983, Pluto and its moons have been a central theme of the research done by Buie, who has published over 85 scientific papers and journal articles. He is also a co-discoverer of Pluto's moons Nix and Hydra (Pluto II and Pluto III), which were found in 2005.
Buie has worked with the Deep Ecliptic Survey team, which has been responsible for the discovery of over a thousand distant objects. He also studies the Kuiper Belt and transitional objects such as 2060 Chiron and 5145 Pholus, as well as occasional comets, as with the Deep Impact mission that travelled to Comet Tempel 1, and near-Earth asteroids with the occasional use of the Hubble and Spitzer Space Telescopes. Buie also assists in the development of advanced astronomical instrumentation.
Asteroid 7553 Buie is named in honor of the astronomer, who has also been profiled as part of an article on Pluto in Air & Space Smithsonian magazine.
Harold Reitsema, Sentinel Mission Director
Dr. Harold James Reitsema (b. January 19, 1948, Kalamazoo, Michigan) is the foundation's Sentinel Mission Director and a U.S. astronomer. Reitsema was formerly Director of Science Mission Development at Ball Aerospace & Technologies, the B612 Foundation's prime contractor for designing and building its space telescope observatory. In his early career during the 1980s he was part of the teams that discovered new moons orbiting Neptune and Saturn through ground-based telescopic observations. Using a coronagraphic imaging system with one of the first charge-coupled devices available for astronomical use, they first observed Telesto in April 1980, just two months after being one of the first groups to observe Janus, also a moon of Saturn. Reitsema, as part of a different team of astronomers, observed Larissa in May 1981, by watching the occultation of a star by the Neptune system. Reitsema is also responsible for several advances in the use of false-color techniques applied to astronomical images.
Reitsema was a member of the Halley Multicolour Camera team on the European Space Agency Giotto spacecraft that took close-up images of Comet Halley in 1986. He has been involved in many of NASA's space science missions including the Spitzer Space Telescope, Submillimeter Wave Astronomy Satellite, the New Horizons mission to Pluto and the Kepler Space Observatory project searching for Earth-like planets orbiting distant stars similar to the Sun.
Reitsema participated in the ground-based observations of Deep Impact mission in 2005, observing the impact of the spacecraft on the Tempel 1 comet using the telescopes of the Sierra de San Pedro Mártir Observatory in Mexico, along with colleagues from the University of Maryland and the Mexican National Astronomical Observatory.
Reitsema retired from Ball Aerospace in 2008 and remains a consultant to NASA and the aerospace industry in mission design and Near-Earth Objects. His education includes his B.A. in physics from Calvin College in Grand Rapids, Michigan in 1972 and a Ph.D. in astronomy from New Mexico State University in 1977. Main-belt Asteroid 13327 Reitsema is named after him to honor his achievements.
John Troeltzsch, Sentinel Program Manager
John Troeltzsch is the B612 Foundation's Sentinel Program Manager, a senior U.S. aerospace engineer and a program manager with Ball Aerospace & Technologies. Ball Aerospace is the Sentinel's prime contractor, responsible for the design and integration of the observatory, which is to be launched aboard a SpaceX Falcon 9 rocket into a Venus-trailing heliocentric orbit. Troeltzsch's responsibilities include overseeing all requirements for the observatory's detailed design and build at Ball. As part of his 31 years of service with them, he helped create three of the Hubble Space Telescope's instruments and also managed the Spitzer Space Telescope program until its launch in 2003. Troeltzsch later became the Kepler Mission program manager at Ball in 2007.
Troeltzsch's program management abilities include experience with spacecraft systems engineering and software integration through all phases of space telescope projects, from contract definition through assembly, launch and on-station operational start up. His past project experience includes the Kepler Mission, Hubble's Goddard High Resolution Spectrograph (GHRS) and its COSTAR Space Telescope corrective optics, as well as the cryogenically-cooled instruments on the Spitzer Space Telescope.
Troeltzsch was awarded the NASA Exceptional Public Service Medal for his commitment to the success of the Kepler mission. His education includes a B.Sc. and an M.Sc. in Aerospace Engineering, both from the University of Colorado in 1983 and 1989 respectively, the latter while employed at Ball Aerospace which hired him immediately after the completion of his undergraduate degree.
David Liddle, Chair, Board of Directors
Dr. David Liddle is the foundation's Board Chair and a former technology industry executive and professor of computer science. He also chairs a number of other boards of directors in the United States, including those of research institutes.
Liddle is a partner at the venture capital firm U.S. Venture Partners, and is a co-founder and former CEO of both the Interval Research Corporation and Metaphor Computer Systems, plus a consulting professor of computer science at Stanford University, credited with heading development of the Xerox Star computer system. He served as an executive at the Xerox Corporation and IBM and currently serves on the board of directors of Inphi Corporation, the New York Times and the B612 Foundation. In January 2012, he also joined the board of directors of SRI International.
Liddle also held the chair of the board of trustees for the Santa Fe Institute, a nonprofit theoretical research center, from 1994 to 1999, and served on the U.S. DARPA Information, Science and Technology Committee. Additionally, he chaired the Computer Science and Telecommunications Board of the U.S. National Research Council, reflecting his work on human-computer interface design. Outside science and technology, Liddle is a Senior Fellow of the Royal College of Art in London, England.
His education includes a B.Sc. in electrical engineering from the University of Michigan and a Ph.D. in Electrical Engineering and Computer Science from the University of Toledo.
Board of directors
As of 2014 the B612 Foundation's board includes Geoffrey Baehr (formerly with Sun Microsystems and U.S. Venture Partners), plus Doctors Chapman, Piet Hut, Ed Lu (also CEO, see Leadership, above), David Liddle (Chair, see Leadership, above), and Dan Durda, a planetary scientist.
Rusty Schweickart, co-founder and chair emeritus
Russell Louis "Rusty" Schweickart (b. October 25, 1935) is a co-founder of the B612 Foundation and chair emeritus of its board of directors. He is also a former U.S. Apollo astronaut, research scientist, Air Force pilot, plus business and government executive. Schweickart, chosen in NASA's third astronaut group, is best known as the lunar module pilot on the Apollo 9 mission, the spacecraft's first crewed flight test on which he performed the first in-space test of the portable life support system used by the Apollo astronauts who walked on the Moon. Prior to joining NASA, Schweickart was a scientist at the Massachusetts Institute of Technology's Experimental Astronomy Laboratory, where he researched upper atmospheric physics and became an expert in star tracking and the stabilization of stellar images, a crucial requirement for space navigation. Schweickart's education includes a B.Sc. in aeronautical engineering and an M.Sc. in Aeronautics–Astronautics, both from the Massachusetts Institute of Technology (MIT), in 1956 and 1963 respectively. His Master's thesis was on the validation of "theoretical models of stratospheric radiance".
After serving as the backup commander of NASA's first crewed Skylab mission (the United States' first space station), he later became director of User Affairs in their Office of Applications. Schweickart left NASA in 1977 to serve for two years as California governor Jerry Brown's assistant for science and technology, and was then appointed by Brown to California's Energy Commission for five and a half years.
Schweickart co-founded the Association of Space Explorers (ASE) with other astronauts in 1984–85 and chaired the ASE's NEO Committee, producing a benchmark report, Asteroid Threats: A Call for Global Response, and submitting it to the United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS). He then co-chaired, along with astronaut Dr. Tom Jones, NASA's Advisory Council's Task Force on Planetary Defense. In 2002 he co-founded B612, also serving as its chair.
Schweickart is a Fellow of the American Astronautical Society, the International Academy of Astronautics and the California Academy of Sciences, as well as an associate fellow of the American Institute of Aeronautics and Astronautics. Among the honors he has received are the Federation Aeronautique Internationale's De la Vaulx Medal in 1970 for his Apollo 9 flight, both of NASA's Distinguished Service and Exceptional Service medals, and, unusual for an astronaut, an Emmy Award from the U.S. National Academy of Television Arts and Sciences for transmitting the first live TV pictures from space.
Clark Chapman, co-founder and board member
Clark Chapman is a B612 board member and "a planetary scientist whose research has specialized in studies of asteroids and cratering of planetary surfaces, using telescopes, spacecraft, and computers. He is a past chair of the Division for Planetary Sciences (DPS) of the American Astronomical Society and was the first editor of the Journal of Geophysical Research: Planets. He is a winner of the Carl Sagan Award for Public Understanding of Science and has worked on the science teams of the MESSENGER, Galileo and Near-Earth Asteroid Rendezvous space missions."
Chapman holds a degree from Harvard University and earned two degrees, including his Ph.D., from the Massachusetts Institute of Technology in the fields of astronomy, meteorology and the planetary sciences. He also served at the Planetary Science Institute in Tucson, Arizona, and is currently on the staff of the Southwest Research Institute in Boulder, Colorado.
Dan Durda, board member
Dr. Daniel David "Dan" Durda (b. October 26, 1965, Detroit, Michigan), is a B612 board member and "a principal scientist in the Department of Space Studies of the Southwest Research Institute's (SwRI) Boulder Colorado. He has more than 20 years experience researching the collisional and dynamical evolution of main-belt and near-Earth asteroids, Vulcanoids, Kuiper belt comets, and interplanetary dust." He is the author of 68 journal and scientific articles and has presented his reports and findings at 22 professional symposiums. He has also taught as adjunct professor in the Department of Sciences at Front Range Community College.
Durda is an active instrument-rated pilot who has flown numerous aircraft, including the high-performance F/A-18 Hornet and F-104 Starfighter, and "was a 2004 NASA astronaut selection finalist. Dan is one of three SwRI payload specialists who will fly on multiple suborbital spaceflights on Virgin Galactic's Enterprise and XCOR Aerospace's Lynx."
His education includes a B.Sc. in astronomy from The University of Michigan, plus an M.Sc. and a Ph.D., both in astronomy, from the University of Florida, in 1987, 1989 and 1993 respectively. He won the University of Florida's Kerrick Prize "for outstanding contributions in astronomy", and Asteroid 6141 Durda is named in his honour.
Strategic advisers
As of July 2014, the Foundation has taken on over twenty key advisers drawn from the sciences, the space industry and other professional fields. Their goals are to provide both advice and critiques, and assist in several other facets of the Sentinel Mission. Included among them are: Dr. Alexander Galitsky, a former Soviet computer scientist and B612 Founding Circle adviser; British Astronomer Royal, cosmologist and astrophysicist Lord Martin Rees, the Baron Rees of Ludlow; U.S. Star Trek director Alexander Singer; U.S. science journalist and writer Andrew Chaikin; British astrophysicist and songwriter Dr. Brian May; U.S. astronomer Carolyn Shoemaker; U.S. astrophysicist Dr. David Brin; Romanian cosmonaut Dumitru Prunariu; U.S. physicist and mathematician Dr. Freeman Dyson; U.S. astrophysicist and former Harvard-Smithsonian Center for Astrophysics head Dr. Irwin Shapiro; U.S. film director Jerry Zucker; British-U.S. balloonist Julian Nott; Dutch astrophysicist and B612 co-founder Dr. Piet Hut; former U.S. ambassador Philip Lader; British cosmologist and astrophysicist Dr. Roger Blandford; U.S. writer and Whole Earth Catalog founder Stewart Brand; U.S. media head Tim O'Reilly; and former U.S. NASA astronaut Dr. Tom Jones.
Tom Jones, strategic adviser
Dr. Thomas David "Tom" Jones (b. January 22, 1955) is a strategic adviser to B612, member of the NASA Advisory Council and a former U.S. astronaut and planetary scientist who has studied asteroids for NASA, engineered intelligence-gathering systems for the CIA, and helped develop advanced mission concepts to explore the Solar System. In his 11 years with NASA he flew on four Space Shuttle missions, logging a total of 53 days in space. His flight time included three spacewalks to install the centerpiece science module of the International Space Station (ISS). His publications include Planetology: Unlocking the Secrets of the Solar System.
After graduating from the U.S. Air Force Academy where he received his B.Sc. in 1977, Jones earned a Ph.D. in Planetary Sciences from the University of Arizona in 1988. His research interests included the remote sensing of asteroids, meteorite spectroscopy, and applications of space resources. In 1990 he joined Science Applications International Corporation in Washington, D.C. as a senior scientist. Dr. Jones performed advanced program planning for NASA's Goddard Space Flight Center's Solar System Exploration Division. His work there included the investigation of future robotic missions to Mars, asteroids, and the outer Solar System.
After a year of training following his selection by NASA he became an astronaut in July 1991. In 1994 he flew as a mission specialist on successive flights of various Space Shuttles, running science operations on the "night shift" during STS-59 and successfully deploying and retrieving two science satellites. While helping set a shuttle mission endurance record of nearly 18 days in orbit, Jones used Columbia's robotic Canadarm to release the Wake Shield satellite and later grapple it from orbit. His last space flight was in February 2001, helping to deliver the U.S. Destiny Laboratory Module to the ISS, where he helped install the laboratory module in a series of three space walks lasting over 19 hours. That installation marked the start of onboard scientific research on the ISS.
Among his honors are NASA's medals and awards for Space Flight, Exceptional Service and Outstanding Leadership, plus the Federation Aeronautique Internationale's (FAI) Komarov Diploma and a NASA Graduate Student Research Fellowship.
Piet Hut, co-founder and strategic adviser
Dr. Piet Hut (b. September 26, 1952, Utrecht, The Netherlands) is a co-founder of the B612 Foundation, one of its strategic advisers, and a Dutch astrophysicist, who divides his time between research in computer simulations of dense stellar systems and broadly interdisciplinary collaborations, ranging from fields in natural science to computer science, cognitive psychology and philosophy. He is currently Program Head in Interdisciplinary Studies at the Institute for Advanced Study in Princeton, New Jersey, former home to Albert Einstein.
Hut's specialization is in "stellar and planetary dynamics; many of his more than two hundred articles are written in collaboration with colleagues from different fields, ranging from particle physics, geophysics and paleontology to computer science, cognitive psychology and philosophy." Dr. Hut was an early adviser to Lu and served as a founding member of the B612 Foundation's board of directors.
Hut has held positions in a number of faculties, including the Institute for Theoretical Physics, Utrecht University (1977–1978); the Astronomical Institute at the University of Amsterdam (1978–1981); Astronomy Department of the University of California, Berkeley (1984–1985) and in the Institute for Advanced Study, in Princeton, N.J. (1981–present). He has held honors, functions, fellowships and memberships in almost 150 different professional organizations, universities and conferences, and published over 225 papers and articles in scientific journals and symposiums, including his first in 1976 on "The Two-Body problem with a Decreasing Gravitational Constant". In 2014 he became a strategic adviser to the B612 Foundation.
His education includes an M.Sc. from the University of Utrecht and a double Ph.D. in particle physics and astrophysics from the University of Amsterdam, in 1977 and 1981 respectively. Asteroid 17031 Piethut is named after him, honoring his work in planetary dynamics and his co-founding of B612.
Dumitru Prunariu, strategic adviser
Dr. Dumitru-Dorin Prunariu (b. 27 September 1952) is a retired Romanian cosmonaut and a strategic adviser to the B612 Foundation. In 1981 he flew an eight-day mission to the Soviet Salyut 6 space station where he and his crewmates completed experiments in astrophysics, space radiation, space technology, and space medicine. He received the titles Hero of the Socialist Republic of Romania and Hero of the Soviet Union, as well as the "Hermann Oberth Gold Medal", the "Golden Star Medal" and the Order of Lenin.
Prunariu is a member of the International Academy of Astronautics, the Romanian National COSPAR Committee, and the Association of Space Explorers (ASE). From 1993 until 2004 he was the permanent representative of the ASE at the United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS), and he has represented Romania at COPUOS sessions since 1992. He also became the vice-president of the International Institute for Risk, Security and Communication Management (EURISC), and from 1998 to 2004 was the president of the Romanian Space Agency. In 2000 he was appointed Associate Professor on Geopolitics within the Faculty of International Business and Economics, Academy of Economic Studies in Bucharest, and in 2004 he was elected COPUOS's Chairman of the Scientific and Technical Subcommittee. He was later elected as COPUOS's top-level chairman, serving from 2010 to 2012, and also elected as the president of the ASE with a three-year mandate.
Prunariu has co-authored several books on space flight and both presented and published numerous scientific papers. His education includes a degree in aerospace engineering in 1976 from the Politehnica University of Bucharest. His Ph.D. thesis led to improvements in the field of space flight dynamics.
Deflection methods
A number of methods have been devised to 'deflect' an asteroid or other NEO away from an Earth-impacting trajectory, so that it entirely avoids entering the Earth's atmosphere. Given sufficient advance lead time, a change to the body's velocity of as little as one centimetre per second can be enough for it to miss the Earth. Proposed and experimental deflection methods include ion beam shepherds, focused solar energy and the use of mass drivers or solar sails.
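To give a feel for the numbers, here is a minimal sketch of how a one-centimetre-per-second change accumulates over the lead time; it is a first-order estimate that ignores the amplification from orbital dynamics, and the figures are illustrative only:

```python
# First-order illustration of why a tiny velocity change can be enough,
# given years of advance warning. A real analysis would propagate the
# perturbed orbit rather than simply multiply delta-v by elapsed time.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def along_track_displacement_km(delta_v_m_s, lead_time_years):
    """Displacement (km) accumulated from a small along-track velocity change."""
    return delta_v_m_s * lead_time_years * SECONDS_PER_YEAR / 1000

for years in (1, 5, 10):
    km = along_track_displacement_km(0.01, years)  # 0.01 m/s = 1 cm/s
    print(f"1 cm/s applied {years:>2} years in advance -> ~{km:,.0f} km")
# With 10 years of lead time the shift is roughly 3,000 km, a sizeable
# fraction of Earth's ~6,371 km radius.
```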
Initiating a nuclear explosive device above, on, or slightly beneath the surface of a threatening NEO is a potential deflection option, with the optimal detonation height dependent upon the NEO's composition and size. In the case of a threatening "rubble pile", a stand-off configuration, detonating the device at some height above the surface, has been put forward as a means of preventing the pile from potentially fracturing. However, given sufficient advance warning of an asteroid's impact, most scientists avoid endorsing explosive deflection due to the number of potential issues involved. Other methods that can accomplish NEO deflection include:
Gravity tractor
An alternative to an explosive deflection is to move a dangerous asteroid slowly and consistently over time, since the effect of a tiny constant thrust can accumulate to deviate an object sufficiently from its predicted course. In 2005 Drs. Ed Lu and Stanley G. Love proposed using a large, heavy uncrewed spacecraft hovering over an asteroid to gravitationally pull the latter into a non-threatening orbit. The method works because of the mutual gravitational attraction between the spacecraft and the asteroid. When the spacecraft counters its gravitational attraction towards the asteroid by using, for example, an ion thruster, the net effect is that the asteroid is accelerated, or moved, towards the spacecraft and thus slowly deflected from the orbital path that would lead it to a collision with Earth.
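As a rough sketch of the scale involved (the spacecraft mass and hover distance below are hypothetical placeholders, not figures from the Lu–Love proposal):

```python
# Back-of-the-envelope gravity tractor estimate. The spacecraft mass, hover
# distance and timescale are invented, chosen only to show the order of
# magnitude involved.

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def tractor_acceleration(spacecraft_mass_kg, hover_distance_m):
    """Acceleration (m/s^2) imparted on the asteroid by the spacecraft's gravity alone."""
    return G * spacecraft_mass_kg / hover_distance_m ** 2

accel = tractor_acceleration(spacecraft_mass_kg=20_000, hover_distance_m=200)
delta_v_per_year = accel * SECONDS_PER_YEAR
print(f"~{accel:.1e} m/s^2 of pull, about {delta_v_per_year * 1000:.1f} mm/s of delta-v per year")
```

At these rates the accumulated velocity change is of the order of a millimetre per second per year, which is why the method depends on long station-keeping times and early warning.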
While slow, this method has the advantage of working irrespective of an asteroid's composition. It would even be effective on a comet, loose rubble pile, or an object spinning at a high rate. However, a gravity tractor would likely have to spend several years stationed beside and tugging on the body to be effective. The Sentinel Space Telescope's mission is designed to provide the required advance lead time.
According to Rusty Schweickart, the gravitational tractor method also has a controversial aspect: while an asteroid's trajectory is being changed, the point on Earth where it would most likely hit is shifted gradually across the face of the planet. This means the threat for the entire planet might be minimized at a temporary cost to the security of some specific states. Schweickart recognizes that choosing the manner and direction in which the asteroid should be "dragged" may be a difficult international decision, and one that should be made through the United Nations.
An early NASA analysis of deflection alternatives in 2007, stated: "'Slow push' mitigation techniques are the most expensive, have the lowest level of technical readiness, and their ability to both travel to and divert a threatening NEO would be limited unless mission durations of many years to decades are possible." But a year later in 2008 the B612 Foundation released a technical evaluation of the gravity tractor concept, produced on contract to NASA. Their report confirmed that a transponder-equipped tractor "with a simple and robust spacecraft design" can provide the needed towing service for a 140-meters-diameter equivalent, Hayabusa-shaped asteroid or other NEO.
Kinetic impact
While an asteroid is still far from Earth, one means of deflecting it is to directly alter its momentum by colliding a spacecraft with it. The further the asteroid is from Earth, the smaller the required impact force; conversely, the closer a dangerous near-Earth object (NEO) is to Earth at the time of its discovery, the greater the force required to make it deviate from its collision trajectory. Closer to Earth, the impact of a massive spacecraft is a possible solution to a pending NEO impact.
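A minimal momentum-transfer sketch; the masses, impact speed and the ejecta enhancement factor beta below are invented for illustration and are not figures from any of the missions described here:

```python
# Simple kinetic-impactor estimate: the asteroid's velocity change is the
# impactor's momentum divided by the asteroid's mass, scaled by a momentum
# enhancement factor beta that accounts for ejecta thrown off the surface.

def impact_delta_v(impactor_mass_kg, impact_speed_m_s, asteroid_mass_kg, beta=1.0):
    """Velocity change (m/s) imparted on the asteroid by the impact."""
    return beta * impactor_mass_kg * impact_speed_m_s / asteroid_mass_kg

# A 500 kg spacecraft striking a ~4e9 kg asteroid at 6 km/s, with beta = 2:
dv = impact_delta_v(500, 6_000, 4e9, beta=2.0)
print(f"delta-v ~ {dv * 1000:.1f} mm/s")  # ~1.5 mm/s
```

Combined with the lead-time arithmetic sketched earlier, even a change of a few millimetres per second applied years in advance can translate into thousands of kilometres of miss distance.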
In 2005, in the wake of the successful U.S. mission that crashed its Deep Impact probe into Comet Tempel 1, China announced its plan for a more advanced version: the landing of a spacecraft probe on a small NEO in order to push it off course. In the 2000s the European Space Agency (ESA) began studying the design of a space mission named Don Quijote, which, if flown, would have been the first intentional asteroid deflection mission ever designed. ESA's Advanced Concepts Team also demonstrated theoretically that a deflection of 99942 Apophis could be achieved by sending a spacecraft weighing less than a tonne to impact against the asteroid.
ESA had originally identified two NEOs as possible targets for its Don Quijote mission, one of them being (10302) 1989 ML; neither asteroid represents a threat to Earth. In a subsequent study, two different possibilities were selected: the Amor asteroid 2003 SM84 and 99942 Apophis; the latter is of particular significance to Earth as it will make a close approach in 2029 and 2036. In 2013, ESA announced at the 44th annual Lunar and Planetary Science Conference that its mission would be combined into a joint ESA-NASA Asteroid Impact & Deflection Assessment (AIDA) mission, proposed for 2019–2022. The target selected for AIDA will be a binary asteroid, so that the deflection effect can also be observed from Earth by timing the rotation period of the binary pair. AIDA's target, a component of the binary asteroid 65803 Didymos, will be impacted at a velocity of 22,530 km/h (14,000 mph).
A NASA analysis of deflection alternatives, conducted in 2007, stated: "Non-nuclear kinetic impactors are the most mature approach and could be used in some deflection/mitigation scenarios, especially for NEOs that consist of a single small, solid body."
Funding status
The B612 Foundation is a California 501(c)(3) non-profit, private foundation. Financial contributions to the B612 Foundation are tax-exempt in the United States. Its principal offices are in Mill Valley, California; they were previously located in Tiburon, California.
Fundraising had not gone well for B612 as of June 2015; the foundation fell short of its overall fundraising goal for the project in both 2012 and 2013.
Foundation name
The B612 Foundation is named in tribute to the eponymous home asteroid of the hero of Antoine de Saint-Exupéry's best-selling philosophical fable of Le Petit Prince (The Little Prince). In aviation's early pioneer years of the 1920s, Saint-Exupéry made an emergency landing on top of an African mesa covered with crushed white limestone seashells. Walking around in the moonlight he kicked a black rock and soon deduced it was a meteorite that had fallen from space.
That experience later contributed, in 1943, to his literary creation of Asteroid B-612 in his philosophical fable of a little prince fallen to Earth, with the home planetoid's name having been adapted from one of the mail planes Saint-Exupéry once flew, bearing the registration marking A-612.
Also inspired by the story is an asteroid discovered in 1993, not identified as posing any threat to Earth, named 46610 Bésixdouze: the numerical part of its designation, 46610, is 'B612' in hexadecimal, while the name is French for "B six twelve". As well, a small asteroid moon, Petit-Prince, discovered in 1998 is named in part after The Little Prince.
See also
Qingyang event
99942 Apophis
Asteroid impact prediction
Asteroid impact avoidance
Asteroid Day
Asteroid Terrestrial-impact Last Alert System (ATLAS)
Deep Space Industries
Gravity tractor
List of impact craters on Earth
List of meteor air bursts
NEOShield
Near Earth Object Surveillance Satellite (NEOS Sat)
Planetary Resources
Potentially hazardous object
Spaceguard
Spaceguard Foundation
Tunguska event
United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS)
References
Notes
Citations
Further reading
Lewis, John S. Comet And Asteroid Impact Hazards On A Populated Earth: Computer Modeling, Volume 1, Academic Press, 2000.
Powell, Corey S. "How to Deflect a Killer Asteroid: Researchers Come Up With Contingency Plans That Could Help Our Planet Dodge A Cosmic Bullet", Discover, September 18, 2013, pp. 58–60 (subscription).
Schweickart, Lu, Hut and Chapman. "The Asteroid Tugboat: To Prevent An Asteroid From Hitting Earth, A Space Tug Equipped With Plasma Engines Could Give It A Push", Scientific American, October 13, 2003.
Steel, Duncan. Rogue Asteroids and Doomsday Comets: The Search for the Million Megaton Menace That Threatens Life on Earth, Wiley & Sons, 1995 [1997].
External links
B612 Foundation: early website homepage (archived)
B612 Foundation: Sentinel Mission Factsheet (Feb. 2013, PDF)
Dr. Ed Lu at TEDxMarin: Changing the Course of the Solar System (video, 14:04)
NBC Nightly News: Early-Warning Telescope Could Detect Dangerous Asteroids, broadcast April 22, 2014 (video, 2:27)
Defending Earth from Asteroids with Neil deGrasse Tyson, public presentation and moderated panel discussion with members of the Association of Space Explorers and the B612 Foundation, at the American Museum of Natural History, New York City, October 25, 2013 (video, 58:03)
NEO Threat Detection and Warning: Plans for an International Asteroid Warning Network, Presentation to the United Nations Committee on the Peaceful Uses of Outer Space (UN COPUOS) by Dr. Timothy Spahr, Director, Minor Planet Center, Smithsonian Astrophysical Observatory, February 18, 2013 (PDF)
Dr. Ed Lu Congressional Testimony, Washington, D.C., March 20, 2013, United States Senate Sub-Committee on Science and Space: "Assessing the Risks, Impacts and Solutions for Space Threats" (video, 23:49)
Rusty's Talk: Dinosaur Syndrome Avoidance Project - How Gozit?, a July 17, 2014 presentation before an audience at NASA's Ames Research Center's Director's Colloquium, addressing the status of the three essential elements to avoiding catastrophic asteroid impacts (video, 55:34)
Charities based in California
Planetary defense organizations
Antoine de Saint-Exupéry
Non-profit organizations based in the San Francisco Bay Area
Mountain View, California
Mill Valley, California
Science and technology in the San Francisco Bay Area
Planetary science
Organizations established in 2002
2002 establishments in the United States
Astronomical surveys
Scientific research foundations in the United States
Articles containing video clips
Space science organizations
Rusty Schweickart | B612 Foundation | [
"Astronomy"
] | 12,272 | [
"Planetary defense organizations",
"Astronomical surveys",
"Works about astronomy",
"Astronomy organizations",
"Planetary science",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
333,676 | https://en.wikipedia.org/wiki/List%20of%20content%20management%20systems | Content management systems (CMS) are used to organize and facilitate collaborative content creation. Many of them are built on top of separate content management frameworks. The list is limited to notable services.
Open source software
This section lists free and open-source software that can be installed and managed on a web server.
Software as a service (SaaS)
This section lists proprietary software that includes software, hosting, and support with a single vendor. This section includes free services.
Proprietary software
This section lists proprietary software to be installed and managed on a user's own server. This section includes freeware proprietary software.
Other content management frameworks
A content management framework (CMF) is a system that facilitates the use of reusable components or customized software for managing Web content. It shares aspects of a Web application framework and a content management system (CMS).
Below is a list of notable systems that claim to be CMFs.
See also
Comparison of web frameworks
Comparison of wiki software
References
Content management systems | List of content management systems | [
"Technology"
] | 219 | [
"Computing-related lists",
"Lists of software"
] |
333,677 | https://en.wikipedia.org/wiki/Nowhere%20continuous%20function | In mathematics, a nowhere continuous function, also called an everywhere discontinuous function, is a function that is not continuous at any point of its domain. If f is a function from real numbers to real numbers, then f is nowhere continuous if for each point x there is some ε > 0 such that for every δ > 0 we can find a point y such that |x − y| < δ and |f(x) − f(y)| ≥ ε. Therefore, no matter how close it gets to any fixed point, there are even closer points at which the function takes not-nearby values.
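Stated symbolically, the definition above says that f is nowhere continuous precisely when:

```latex
\forall x \in \mathbb{R},\ \exists \varepsilon > 0,\ \forall \delta > 0,\ \exists y \in \mathbb{R}:\quad
|x - y| < \delta \ \text{ and } \ |f(x) - f(y)| \ge \varepsilon .
```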
More general definitions of this kind of function can be obtained, by replacing the absolute value by the distance function in a metric space, or by using the definition of continuity in a topological space.
Examples
Dirichlet function
One example of such a function is the indicator function of the rational numbers, also known as the Dirichlet function. This function is denoted 1_Q and has domain and codomain both equal to the real numbers. By definition, 1_Q(x) is equal to 1 if x is a rational number and it is 0 otherwise.
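Written piecewise, the Dirichlet function is:

```latex
\mathbf{1}_{\mathbb{Q}}(x) =
\begin{cases}
  1 & \text{if } x \in \mathbb{Q},\\
  0 & \text{if } x \in \mathbb{R} \setminus \mathbb{Q}.
\end{cases}
```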
More generally, if E is any subset of a topological space X such that both E and the complement of E are dense in X, then the real-valued function which takes the value 1 on E and 0 on the complement of E will be nowhere continuous. Functions of this type were originally investigated by Peter Gustav Lejeune Dirichlet.
Non-trivial additive functions
A function f is called an additive function if it satisfies Cauchy's functional equation: f(x + y) = f(x) + f(y) for all real x and y.
For example, every map of the form x ↦ cx, where c is some constant, is additive (in fact, it is linear and continuous). Furthermore, every linear map f is of this form (by taking c := f(1)).
Although every linear map is additive, not all additive maps are linear. An additive map is linear if and only if there exists a point at which it is continuous, in which case it is continuous everywhere. Consequently, every non-linear additive function is discontinuous at every point of its domain.
Nevertheless, the restriction of any additive function to any real scalar multiple of the rational numbers Q is continuous; explicitly, this means that for every real r, the restriction of f to the set rQ := {rq : q ∈ Q} is a continuous function.
Thus if f is a non-linear additive function then for every point x, f is discontinuous at x, but x is also contained in some dense subset D of the reals on which the restriction of f is continuous (specifically, take D := Q if x is rational, and take D := xQ if x is irrational).
Discontinuous linear maps
A linear map between two topological vector spaces, such as normed spaces for example, is continuous (everywhere) if and only if there exists a point at which it is continuous, in which case it is even uniformly continuous. Consequently, every linear map is either continuous everywhere or else continuous nowhere.
Every linear functional is a linear map and on every infinite-dimensional normed space, there exists some discontinuous linear functional.
Other functions
The Conway base 13 function is discontinuous at every point.
Hyperreal characterisation
A real function f is nowhere continuous if its natural hyperreal extension has the property that every x is infinitely close to a y such that the difference f(x) − f(y) is appreciable (that is, not infinitesimal).
See also
Blumberg theorem: even if a real function f is nowhere continuous, there is a dense subset D of the reals such that the restriction of f to D is continuous.
Thomae's function (also known as the popcorn function): a function that is continuous at all irrational numbers and discontinuous at all rational numbers.
Weierstrass function: a function continuous everywhere (inside its domain) and differentiable nowhere.
References
External links
Dirichlet Function — from MathWorld
The Modified Dirichlet Function by George Beck, The Wolfram Demonstrations Project.
Mathematical analysis
Topology
Types of functions | Nowhere continuous function | [
"Physics",
"Mathematics"
] | 731 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Topology",
"Space",
"Mathematical relations",
"Geometry",
"Spacetime",
"Types of functions"
] |
333,692 | https://en.wikipedia.org/wiki/Radiation%20protection | Radiation protection, also known as radiological protection, is defined by the International Atomic Energy Agency (IAEA) as "The protection of people from harmful effects of exposure to ionizing radiation, and the means for achieving this". Exposure can be from a source of radiation external to the human body or due to internal irradiation caused by the ingestion of radioactive contamination.
Ionizing radiation is widely used in industry and medicine, and can present a significant health hazard by causing microscopic damage to living tissue. There are two main categories of ionizing radiation health effects. At high exposures, it can cause "tissue" effects, also called "deterministic" effects due to the certainty of them happening, conventionally indicated by the unit gray and resulting in acute radiation syndrome. For low level exposures there can be statistically elevated risks of radiation-induced cancer, called "stochastic effects" due to the uncertainty of them happening, conventionally indicated by the unit sievert.
Fundamental to radiation protection is the avoidance or reduction of dose using the simple protective measures of time, distance and shielding. The duration of exposure should be limited to that necessary, the distance from the source of radiation should be maximised, and the source or the target shielded wherever possible. To measure personal dose uptake in occupational or emergency exposure, for external radiation personal dosimeters are used, and for internal dose due to ingestion of radioactive contamination, bioassay techniques are applied.
For radiation protection and dosimetry assessment the International Commission on Radiological Protection (ICRP) and International Commission on Radiation Units and Measurements (ICRU) publish recommendations and data which are used to calculate the biological effects on the human body of certain levels of radiation, and thereby advise acceptable dose uptake limits.
Principles
The ICRP recommends, develops and maintains the International System of Radiological Protection, based on evaluation of the large body of scientific studies available to equate risk to received dose levels. The system's health objectives are "to manage and control exposures to ionising radiation so that deterministic effects are prevented, and the risks of stochastic effects are reduced to the extent reasonably achievable".
The ICRP's recommendations flow down to national and regional regulators, which have the opportunity to incorporate them into their own law; this process is shown in the accompanying block diagram. In most countries a national regulatory authority works towards ensuring a secure radiation environment in society by setting dose limitation requirements that are generally based on the recommendations of the ICRP.
Exposure situations
The ICRP recognises planned, emergency, and existing exposure situations, as described below;
Planned exposure – defined as "...where radiological protection can be planned in advance, before exposures occur, and where the magnitude and extent of the exposures can be reasonably predicted." An example is an occupational exposure situation, where it is necessary for personnel to work in a known radiation environment.
Emergency exposure – defined as "...unexpected situations that may require urgent protective actions". An example would be an emergency nuclear event.
Existing exposure – defined as "...being those that already exist when a decision on control has to be taken". An example is exposure from naturally occurring radioactive materials present in the environment.
Regulation of dose uptake
The ICRP uses the following overall principles for all controllable exposure situations.
Justification: No unnecessary use of radiation is permitted, which means that the advantages must outweigh the disadvantages.
Limitation: Each individual must be protected against risks that are too great, through the application of individual radiation dose limits.
Optimization: This process is intended for application to those situations that have been deemed to be justified. It means "the likelihood of incurring exposures, the number of people exposed, and the magnitude of their individual doses" should all be kept As Low As Reasonably Achievable (or Reasonably Practicable) known as ALARA or ALARP. It takes into account economic and societal factors.
Factors in external dose uptake
There are three factors that control the amount, or dose, of radiation received from a source. Radiation exposure can be managed by a combination of these factors:
Time: Reducing the time of an exposure reduces the effective dose proportionally. An example of reducing radiation doses by reducing the time of exposures might be improving operator training to reduce the time they take to handle a radioactive source.
Distance: Increasing distance reduces dose due to the inverse square law. Distance can be as simple as handling a source with forceps rather than fingers. For example, if a problem arises during a fluoroscopic procedure, step away from the patient if feasible.
Shielding: Sources of radiation can be shielded with solid or liquid material, which absorbs the energy of the radiation. The term 'biological shield' is used for absorbing material placed around a nuclear reactor, or other source of radiation, to reduce the radiation to a level safe for humans. Common shielding materials are concrete and lead; lead shielding 0.25 mm thick is typically used against secondary radiation and 0.5 mm against primary radiation. A numerical sketch combining the time, distance and shielding factors follows this list.
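The sketch below (using an invented source strength) shows how the three factors combine multiplicatively for a point source, with the inverse square law handling distance:

```python
# Illustrative combination of the three external-dose factors for a point
# source. The unshielded dose rate at 1 metre is an invented figure.

def estimated_dose_uSv(rate_at_1m_uSv_per_h, distance_m, exposure_hours, attenuation_factor=1.0):
    """Dose in microsieverts: inverse-square falloff with distance, scaled by
    exposure time and by a shielding attenuation factor (1.0 = no shielding)."""
    rate_at_distance = rate_at_1m_uSv_per_h / distance_m ** 2
    return rate_at_distance * exposure_hours * attenuation_factor

# Halving the working time, doubling the distance, and adding shielding that
# passes only 10% of the radiation each reduce the dose multiplicatively:
baseline = estimated_dose_uSv(100, distance_m=1, exposure_hours=1)
protected = estimated_dose_uSv(100, distance_m=2, exposure_hours=0.5, attenuation_factor=0.1)
print(baseline, protected)  # 100.0 uSv versus 1.25 uSv
```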
Internal dose uptake
Internal dose, due to the inhalation or ingestion of radioactive substances, can result in stochastic or deterministic effects, depending on the amount of radioactive material ingested and other biokinetic factors.
The risk from a low level internal source is represented by the dose quantity committed dose, which has the same risk as the same amount of external effective dose.
The intake of radioactive material can occur through four pathways:
inhalation of airborne contaminants such as radon gas and radioactive particles
ingestion of radioactive contamination in food or liquids
absorption of vapours such as tritium oxide through the skin
injection of medical radioisotopes such as technetium-99m
The occupational hazards from airborne radioactive particles in nuclear and radio-chemical applications are greatly reduced by the extensive use of gloveboxes to contain such material. To protect against breathing in radioactive particles in ambient air, respirators with particulate filters are worn.
To monitor the concentration of radioactive particles in ambient air, radioactive particulate monitoring instruments measure the concentration or presence of airborne materials.
For ingested radioactive materials in food and drink, specialist laboratory radiometric assay methods are used to measure the concentration of such materials.
Recommended limits on dose uptake
The ICRP recommends a number of limits for dose uptake in table 8 of ICRP report 103. These limits are "situational", for planned, emergency and existing situations. Within these situations, limits are given for certain exposed groups;
Planned exposure – limits given for occupational, medical and public exposure. The occupational exposure limit of effective dose is 20 mSv per year, averaged over defined periods of 5 years, with no single year exceeding 50 mSv. The public exposure limit is 1 mSv in a year. (A small check of this averaging rule is sketched after this list.)
Emergency exposure – limits given for occupational and public exposure
Existing exposure – reference levels for all persons exposed
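As a small worked illustration of how the two parts of the occupational limit interact, the following sketch checks an invented five-year dose history against both conditions:

```python
# A minimal check of the ICRP planned-exposure occupational limit quoted
# above: an average of 20 mSv/year over a defined 5-year period, with no
# single year exceeding 50 mSv. The dose histories are invented.

def within_occupational_limit(annual_doses_mSv):
    """Return True if a 5-year dose history satisfies both parts of the limit."""
    assert len(annual_doses_mSv) == 5, "expects one value per year of the 5-year period"
    average_ok = sum(annual_doses_mSv) / 5 <= 20
    single_year_ok = max(annual_doses_mSv) <= 50
    return average_ok and single_year_ok

print(within_occupational_limit([5, 10, 45, 15, 20]))  # True: average 19 mSv, worst year 45 mSv
print(within_occupational_limit([5, 10, 55, 10, 10]))  # False: one year exceeds 50 mSv
```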
The public information dose chart of the USA Department of Energy, shown here on the right, applies to USA regulation, which is based on ICRP recommendations. Note that examples in lines 1 to 4 have a scale of dose rate (radiation per unit time), whilst 5 and 6 have a scale of total accumulated dose.
ALARP & ALARA
ALARP is an acronym for an important principle in exposure to radiation and other occupational health risks and in the UK stands for As Low As Reasonably Practicable. The aim is to minimize the risk of radioactive exposure or other hazard while keeping in mind that some exposure may be acceptable in order to further the task at hand. The equivalent term ALARA, As Low As Reasonably Achievable, is more commonly used outside the UK.
This compromise is well illustrated in radiology. The application of radiation can aid the patient by providing doctors and other health care professionals with a medical diagnosis, but the exposure of the patient should be reasonably low enough to keep the statistical probability of cancers or sarcomas (stochastic effects) below an acceptable level, and to eliminate deterministic effects (e.g. skin reddening or cataracts). For a worker, an acceptable level of incidence of stochastic effects is considered to be one equal to the risk in other radiation work generally considered to be safe.
This policy is based on the principle that any amount of radiation exposure, no matter how small, can increase the chance of negative biological effects such as cancer. It is also based on the principle that the probability of the occurrence of negative effects of radiation exposure increases with cumulative lifetime dose. These ideas are combined to form the linear no-threshold model, which holds that there is no threshold dose below which the rate of occurrence of stochastic effects stops increasing with dose. At the same time, radiology and other practices that involve the use of ionizing radiation bring benefits, so reducing radiation exposure can reduce the efficacy of a medical practice. The economic cost, for example of adding a barrier against radiation, must also be considered when applying the ALARP principle. Computed tomography, better known as CT or CAT scanning, has made an enormous contribution to medicine, but not without some risk. The ionizing radiation used in CT scans can lead to radiation-induced cancer. Age is a significant factor in risk associated with CT scans, and in procedures involving children and systems that do not require extensive imaging, lower doses are used.
Personal radiation dosimeters
The radiation dosimeter is an important personal dose measuring instrument. It is worn by the person being monitored and is used to estimate the external radiation dose deposited in the individual wearing the device. They are used for gamma, X-ray, beta and other strongly penetrating radiation, but not for weakly penetrating radiation such as alpha particles. Traditionally, film badges were used for long-term monitoring, and quartz fibre dosimeters for short-term monitoring. However, these have been mostly superseded by thermoluminescent dosimetry (TLD) badges and electronic dosimeters. Electronic dosimeters can give an alarm warning if a preset dose threshold has been reached, enabling safer working in potentially higher radiation levels, where the received dose must be continually monitored.
Workers exposed to radiation, such as radiographers, nuclear power plant workers, doctors using radiotherapy, those in laboratories using radionuclides, and HAZMAT teams are required to wear dosimeters so a record of occupational exposure can be made. Such devices are generally termed "legal dosimeters" if they have been approved for use in recording personnel dose for regulatory purposes.
Dosimeters can be worn to obtain a whole body dose and there are also specialist types that can be worn on the fingers or clipped to headgear, to measure the localised body irradiation for specific activities.
Common types of wearable dosimeters for ionizing radiation include:
Film badge dosimeter
Quartz fibre dosimeter
Electronic personal dosimeter
Thermoluminescent dosimeter
Radiation shielding
Almost any material can act as a shield from gamma or x-rays if used in sufficient amounts. Different types of ionizing radiation interact in different ways with shielding material. The effectiveness of shielding is dependent on stopping power, which varies with the type and energy of radiation and the shielding material used. Different shielding techniques are therefore used depending on the application and the type and energy of the radiation.
Shielding reduces the intensity of radiation, with the reduction increasing with thickness. This is an exponential relationship, with a gradually diminishing effect as equal slices of shielding material are added. A quantity known as the halving-thickness is used to calculate this. For example, a practical shield in a fallout shelter made of ten halving-thicknesses of packed dirt reduces gamma rays to 1/1024 of their original intensity (i.e. 2^−10).
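The halving-thickness arithmetic can be written as a short sketch; the roughly 9 cm per halving-thickness assumed here for packed earth is an illustrative assumption, not a value taken from this article:

```python
# Attenuation expressed in halving-thicknesses: each one cuts the transmitted
# intensity in half, so n halving-thicknesses leave a fraction of 1/2**n.

def transmitted_fraction(shield_thickness_cm, halving_thickness_cm):
    """Fraction of the incident gamma intensity that passes through the shield."""
    n_halvings = shield_thickness_cm / halving_thickness_cm
    return 0.5 ** n_halvings

# Assuming roughly 9 cm of packed earth per halving-thickness, a 90 cm shield
# gives ten halvings and transmits about 1/1024 of the original intensity:
print(transmitted_fraction(90, 9))  # ~0.000977
```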
The effectiveness of a shielding material in general increases with its atomic number, called Z, except for neutron shielding, which is more readily shielded by the likes of neutron absorbers and moderators such as compounds of boron e.g. boric acid, cadmium, carbon and hydrogen.
Graded-Z shielding is a laminate of several materials with different Z values (atomic numbers) designed to protect against ionizing radiation. Compared to single-material shielding, the same mass of graded-Z shielding has been shown to reduce electron penetration over 60%. It is commonly used in satellite-based particle detectors, offering several benefits:
protection from radiation damage
reduction of background noise for detectors
lower mass compared to single-material shielding
Designs vary, but typically involve a gradient from high-Z (usually tantalum) through successively lower-Z elements such as tin, steel, and copper, usually ending with aluminium. Sometimes even lighter materials such as polypropylene or boron carbide are used.
In a typical graded-Z shield, the high-Z layer effectively scatters protons and electrons. It also absorbs gamma rays, which produces X-ray fluorescence. Each subsequent layer absorbs the X-ray fluorescence of the previous material, eventually reducing the energy to a suitable level. Each decrease in energy produces Bremsstrahlung and Auger electrons, which are below the detector's energy threshold. Some designs also include an outer layer of aluminium, which may simply be the skin of the satellite.

The effectiveness of a material as a biological shield is related to its cross-section for scattering and absorption, and to a first approximation is proportional to the total mass of material per unit area interposed along the line of sight between the radiation source and the region to be protected. Hence, shielding strength or "thickness" is conventionally measured in units of g/cm2. The radiation that manages to get through falls exponentially with the thickness of the shield. In x-ray facilities, walls surrounding the room with the x-ray generator may contain lead shielding such as lead sheets, or the plaster may contain barium sulfate. Operators view the target through a leaded glass screen, or if they must remain in the same room as the target, wear lead aprons.
Particle radiation
Particle radiation consists of a stream of charged or neutral particles, both charged ions and subatomic elementary particles. This includes solar wind, cosmic radiation, and neutron flux in nuclear reactors.
Alpha particles (helium nuclei) are the least penetrating. Even very energetic alpha particles can be stopped by a single sheet of paper.
Beta particles (electrons) are more penetrating, but still can be absorbed by a few millimetres of aluminium. However, in cases where high-energy beta particles are emitted, shielding must be accomplished with low atomic weight materials, e.g. plastic, wood, water, or acrylic glass (Plexiglas, Lucite). This is to reduce generation of Bremsstrahlung X-rays. In the case of beta+ radiation (positrons), the gamma radiation from the electron–positron annihilation reaction poses additional concern.
Neutron radiation is not as readily absorbed as charged particle radiation, which makes this type highly penetrating. In a process called neutron activation, neutrons are absorbed by nuclei of atoms in a nuclear reaction. This most often creates a secondary radiation hazard, as the absorbing nuclei transmute to the next-heavier isotope, many of which are unstable.
Cosmic radiation is not a common concern on Earth, as the Earth's atmosphere absorbs it and the magnetosphere acts as a shield, but it poses a significant problem for satellites and astronauts, especially while passing through the Van Allen Belt or while completely outside the protective regions of the Earth's magnetosphere. Frequent fliers may be at a slightly higher risk because of the decreased absorption from thinner atmosphere. Cosmic radiation is extremely high energy, and is very penetrating.
Electromagnetic radiation
Electromagnetic radiation consists of emissions of electromagnetic waves, the properties of which depend on the wavelength.
X-ray and gamma radiation are best absorbed by atoms with heavy nuclei; the heavier the nucleus, the better the absorption. In some special applications, depleted uranium or thorium are used, but lead is much more common; several cm are often required. Barium sulfate is used in some applications too. However, when the cost is important, almost any material can be used, but it must be far thicker. Most nuclear reactors use thick concrete shields to create a bioshield with a thin water-cooled layer of lead on the inside to protect the porous concrete from the coolant inside. The concrete is also made with heavy aggregates, such as Baryte or Magnetite, to aid in the shielding properties of the concrete. Gamma rays are better absorbed by materials with high atomic numbers and high density, although neither effect is important compared to the total mass per area in the path of the gamma ray.
Ultraviolet (UV) radiation is ionizing in its shortest wavelengths but is not penetrating, so it can be shielded by thin opaque layers such as sunscreen, clothing, and protective eyewear. Protection from UV is simpler than for the other forms of radiation above, so it is often considered separately.
In some cases, improper shielding can actually make the situation worse, when the radiation interacts with the shielding material and creates secondary radiation that is absorbed by organisms more readily. For example, although high atomic number materials are very effective in shielding photons, using them to shield beta particles may cause higher radiation exposure due to the production of Bremsstrahlung x-rays, and hence low atomic number materials are recommended. Also, using a material with a high neutron activation cross section to shield neutrons will result in the shielding material itself becoming radioactive and hence more dangerous than if it were not present.
Personal protective equipment
Personal protective equipment (PPE) includes all clothing and accessories which can be worn to prevent severe illness and injury as a result of exposure to radioactive material; examples include the SR100 (protection for one hour) and the SR200 (protection for two hours). Because radiation can affect humans through internal and external contamination, various protection strategies have been developed to protect humans from the harmful effects of radiation exposure from a spectrum of sources. A few of the strategies developed to shield against internal, external, and high-energy radiation are outlined below.
Internal contamination protective equipment
Internal contamination protection equipment protects against the inhalation and ingestion of radioactive material. Internal deposition of radioactive material results in direct exposure of organs and tissues inside the body to radiation. The respiratory protective equipment described below is designed to minimize the possibility of such material being inhaled or ingested by emergency workers exposed to potentially radioactive environments.
Reusable air purifying respirators (APR)
Elastic face piece worn over the mouth and nose
Contains filters, cartridges, and canisters to provide increased protection and better filtration
Powered air-purifying respirator (PAPR)
Battery-powered blower forces contaminated air through air-purifying filters
Purified air delivered under positive pressure to face piece
Supplied-air respirator (SAR)
Compressed air delivered from a stationary source to the face piece
Auxiliary escape respirator
Protects wearer from breathing harmful gases, vapours, fumes, and dust
Can be designed as an air-purifying escape respirator (APER) or a self-contained breathing apparatus (SCBA) type respirator
SCBA type escape respirators have an attached source of breathing air and a hood that provides a barrier against contaminated outside air
Self-contained breathing apparatus (SCBA)
Provides very pure, dry compressed air to full facepiece mask via a hose
Air is exhaled to environment
Worn when entering environments immediately dangerous to life and health (IDLH) or when information is inadequate to rule out IDLH atmosphere
External contamination protective equipment
External contamination protection equipment provides a barrier that prevents radioactive material from being deposited on the body or clothing. The dermal protective equipment described below acts as a barrier to block radioactive material from physically touching the skin, but does not protect against externally penetrating high-energy radiation.
Chemical-resistant inner suit
Porous overall suit: dermal protection from aerosols, dry particles, and non-hazardous liquids.
Non-porous overall suit to provide dermal protection from:
Dry powders and solids
Blood-borne pathogens and bio-hazards
Chemical splashes and inorganic acid/base aerosols
Mild liquid chemical splashes from toxics and corrosives
Toxic industrial chemicals and materials
Level C equivalent: Bunker gear
Firefighter protective clothing
Flame/water resistant
Helmet, gloves, foot gear, and hood
Level B equivalent: Non-gas-tight encapsulating suit
Designed for environments that are immediate health risks but contain no substances that can be absorbed by skin
Level A equivalent: Totally encapsulating chemical- and vapour-protective suit
Designed for environments that are immediate health risks and contain substances that can be absorbed by skin
External penetrating radiation
There are many solutions for shielding against low-energy radiation such as low-energy X-rays. Lead shielding wear such as lead aprons can protect patients and clinicians from the potentially harmful radiation effects of day-to-day medical examinations. It is quite feasible to protect large surface areas of the body from radiation in the lower-energy spectrum because very little shielding material is required to provide the necessary protection. Some recent studies have suggested that copper-based shielding may be more effective than lead in certain applications, although lead remains the standard material for radiation shielding.
Personal shielding against more energetic radiation such as gamma radiation is very difficult to achieve as the large mass of shielding material required to properly protect the entire body would make functional movement nearly impossible. For this, partial body shielding of radio-sensitive internal organs is the most viable protection strategy.
The immediate danger of intense exposure to high-energy gamma radiation is acute radiation syndrome (ARS), a result of irreversible bone marrow damage. The concept of selective shielding is based on the regenerative potential of the hematopoietic stem cells found in bone marrow. The regenerative quality of these stem cells means it is only necessary to protect enough bone marrow to repopulate the body with unaffected stem cells after the exposure: a similar concept is applied in hematopoietic stem cell transplantation (HSCT), a common treatment for patients with leukemia. This advance allows for the development of a new class of relatively lightweight protective equipment that shields high concentrations of bone marrow, deferring the hematopoietic sub-syndrome of acute radiation syndrome to much higher dosages.
One technique is to apply selective shielding to protect the high concentration of bone marrow stored in the hips and other radio-sensitive organs in the abdominal area. This allows first responders a safe way to perform necessary missions in radioactive environments.
Radiation protection instruments
Practical radiation measurement using calibrated radiation protection instruments is essential in evaluating the effectiveness of protection measures, and in assessing the radiation dose likely to be received by individuals. The measuring instruments for radiation protection are both "installed" (in a fixed position) and portable (hand-held or transportable).
Installed instruments
Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed "area" radiation monitors, Gamma interlock monitors, personnel exit monitors, and airborne particulate monitors.
The area radiation monitor measures the ambient radiation, usually X-ray, gamma, or neutron radiation; these are radiations which can have significant levels over a range in excess of tens of metres from their source, and thereby cover a wide area.
Gamma radiation "interlock monitors" are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present. These interlock the process access directly.
Airborne contamination monitors measure the concentration of radioactive particles in the ambient air to guard against radioactive particles being ingested or deposited in the lungs of personnel. These instruments will normally give a local alarm, but are often connected to an integrated safety system so that areas of the plant can be evacuated and personnel are prevented from entering an atmosphere with high airborne contamination.
Personnel exit monitors (PEM) are used to monitor workers who are exiting a "contamination controlled" or potentially contaminated area. These can be in the form of hand monitors, clothing frisk probes, or whole body monitors. These monitor the surface of the worker's body and clothing to check whether any radioactive contamination has been deposited. They generally measure alpha, beta, or gamma radiation, or combinations of these.
The UK National Physical Laboratory publishes a good practice guide through its Ionising Radiation Metrology Forum concerning the provision of such equipment and the methodology of calculating the alarm levels to be used.
Portable instruments
Portable instruments are hand-held or transportable. The hand-held instrument is generally used as a survey meter to check an object or person in detail, or assess an area where no installed instrumentation exists. They can also be used for personnel exit monitoring or personnel contamination checks in the field. These generally measure alpha, beta or gamma, or combinations of these.
Transportable instruments are generally instruments that would have been permanently installed, but are temporarily placed in an area to provide continuous monitoring where it is likely there will be a hazard. Such instruments are often installed on trolleys to allow easy deployment, and are associated with temporary operational situations.
In the United Kingdom the HSE has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies, and is a useful comparative guide.
Instrument types
A number of commonly used detection instrument types are listed below, and are used for both fixed and survey monitoring.
ionization chambers
proportional counters
Geiger counters
semiconductor detectors
scintillation detectors
airborne particulate radioactivity monitoring
Radiation related quantities
The following table shows the main radiation-related quantities and units.
Spacecraft radiation challenges
Spacecraft, both robotic and crewed, must cope with the high radiation environment of outer space. Radiation emitted by the Sun and other galactic sources, and trapped in radiation "belts", is more dangerous and hundreds of times more intense than radiation sources such as medical X-rays or the normal cosmic radiation usually experienced on Earth. When the intensely ionizing particles found in space strike human tissue, they can cause cell damage and may eventually lead to cancer.
The usual method for radiation protection is material shielding by spacecraft and equipment structures (usually aluminium), possibly augmented by polyethylene in human spaceflight where the main concern is high-energy protons and cosmic ray ions. On uncrewed spacecraft in high-electron-dose environments such as Jupiter missions, or medium Earth orbit (MEO), additional shielding with materials of a high atomic number can be effective. On long-duration crewed missions, advantage can be taken of the good shielding characteristics of liquid hydrogen fuel and water.
The NASA Space Radiation Laboratory makes use of a particle accelerator that produces beams of protons or heavy ions. These ions are typical of those accelerated in cosmic sources and by the Sun. The beams of ions move through a 100 m (328-foot) transport tunnel to the 37 m2 (400-square-foot) shielded target hall. There, they hit the target, which may be a biological sample or shielding material. In a 2002 NASA study, it was determined that materials that have high hydrogen contents, such as polyethylene, can reduce primary and secondary radiation to a greater extent than metals, such as aluminum. The problem with this "passive shielding" method is that radiation interactions in the material generate secondary radiation.
Active shielding, that is, using magnets, high voltages, or artificial magnetospheres to slow down or deflect radiation, has been considered as a potentially practical way to counter radiation. So far, the cost, power requirements, and weight of active shielding equipment outweigh its benefits. For example, active shielding equipment would require a large volume to house it, and magnetic and electrostatic configurations are often not homogeneous in intensity, allowing high-energy particles to penetrate the magnetic and electric fields through low-intensity regions, such as the cusps in Earth's dipolar magnetic field. As of 2012, NASA was conducting research into superconducting magnetic architectures for potential active shielding applications.
Early radiation dangers
The dangers of radioactivity and radiation were not immediately recognized. The discovery of x-rays in 1895 led to widespread experimentation by scientists, physicians, and inventors. Many people began recounting stories of burns, hair loss, and worse in technical journals as early as 1896. In February of that year, Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving x-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, a graduate of Columbia College, of his severe hand and chest burns in an x-ray demonstration was the first of many other reports in Electrical Review.
Many experimenters, including Elihu Thomson at Thomas Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. Elihu Thomson deliberately exposed a finger to an x-ray tube over a period of time and experienced pain, swelling, and blistering. Other effects, including ultraviolet rays and ozone, were sometimes blamed for the damage. Many physicists claimed that there were no effects from x-ray exposure at all.
As early as 1902, William Herbert Rollins wrote almost despairingly that his warnings about the dangers involved in the careless use of x-rays were not being heeded, either by industry or by his colleagues. By this time Rollins had proved that x-rays could kill experimental animals, could cause a pregnant guinea pig to abort, and that they could kill a fetus. He also stressed that "animals vary in susceptibility to the external action of X-light" and urged that these differences be considered when patients were treated by means of x-rays.
Before the biological effects of radiation were known, many physicians and corporations began marketing radioactive substances as patent medicine and in glow-in-the-dark pigments. Examples were radium enema treatments and radium-containing waters to be drunk as tonics. Marie Curie protested against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died from aplastic anaemia, likely caused by exposure to ionizing radiation. By the 1930s, after a number of cases of bone necrosis and death among radium treatment enthusiasts, radium-containing medicinal products had been largely removed from the market (radioactive quackery).
See also
CBLB502, 'Protectan', a radioprotectant drug under development for its ability to protect cells during radiotherapy.
Ex-Rad, a United States Department of Defense radioprotectant drug under development.
Health physics
Health threat from cosmic rays
International Radiation Protection Association – (IRPA). The International body concerned with promoting the science and practice of radiation protection.
Juno Radiation Vault
Non-ionizing radiation
Nuclear safety
Potassium iodide
Radiation monitoring
Radiation Protection Convention, 1960
Radiation protection reports of the European Union
Radiobiology
Radiological protection of patients
Radioresistance
Society for Radiological Protection – The principal UK body concerned with promoting the science and practice of radiation protection. It is the UK national affiliated body to IRPA
United Nations Scientific Committee on the Effects of Atomic Radiation
References
Notes
Harvard University Radiation Protection Office Providing radiation guidance to Harvard University and affiliated institutions.
Journal of Solid State Phenomena Tara Ahmadi, Use of Semi-Dipole Magnetic Field for Spacecraft Radiation Protection.
External links
- "The confusing world of radiation dosimetry" - M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems.
Nuclear physics
Radiobiology
Radiation health effects | Radiation protection | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 6,433 | [
"Radiation health effects",
"Radiobiology",
"Nuclear physics",
"Radiation effects",
"Radioactivity"
] |
333,824 | https://en.wikipedia.org/wiki/Horn%20clause | In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form that gives it useful properties for use in logic programming, formal specification, universal algebra and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951.
Definition
A Horn clause is a disjunctive clause (a disjunction of literals) with at most one positive, i.e. unnegated, literal.
Conversely, a disjunction of literals with at most one negated literal is called a dual-Horn clause.
A Horn clause with exactly one positive literal is a definite clause or a strict Horn clause; a definite clause with no negative literals is a unit clause, and a unit clause without variables is a fact;
A Horn clause without a positive literal is a goal clause.
The empty clause, consisting of no literals (which is equivalent to false) is a goal clause.
These three kinds of Horn clauses are illustrated in the following propositional example:
Definite clause: ¬p ∨ ¬q ∨ ... ∨ ¬t ∨ u, which can be read as the implication u ← (p ∧ q ∧ ... ∧ t): "u holds if p, q, ..., and t all hold".
Fact: u, a definite clause with an empty body: "u holds".
Goal clause: ¬p ∨ ¬q ∨ ... ∨ ¬t, which can be read as false ← (p ∧ q ∧ ... ∧ t): "show that p, q, ..., and t all hold".
All variables in a clause are implicitly universally quantified with the scope being the entire clause. Thus, for example:
¬ human(X) ∨ mortal(X)
stands for:
∀X( ¬ human(X) ∨ mortal(X) ),
which is logically equivalent to:
∀X ( human(X) → mortal(X) ).
Significance
Horn clauses play a basic role in constructive logic and computational logic. They are important in automated theorem proving by first-order resolution, because the resolvent of two Horn clauses is itself a Horn clause, and the resolvent of a goal clause and a definite clause is a goal clause. These properties of Horn clauses can lead to greater efficiency in proving a theorem: the goal clause is the negation of the theorem to be proved; see the goal clause in the example above. Intuitively, if we wish to prove φ, we assume ¬φ (the goal) and check whether this assumption leads to a contradiction. If so, then φ must hold. In this way, a mechanical proving tool needs to maintain only one set of formulas (assumptions), rather than two sets (assumptions and (sub)goals).
Propositional Horn clauses are also of interest in computational complexity. The problem of finding truth-value assignments to make a conjunction of propositional Horn clauses true is known as HORNSAT.
This problem is P-complete and solvable in linear time. In contrast, the unrestricted Boolean satisfiability problem is an NP-complete problem.
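As a sketch of why HORNSAT is tractable, the following Python fragment applies forward chaining, repeatedly firing definite clauses whose bodies are already satisfied. The atom names are invented for illustration, and a genuinely linear-time implementation would keep a counter of unsatisfied body literals per clause rather than rescanning, but the logic is the same.

def hornsat(clauses):
    # Each clause is a pair (body, head): body is a frozenset of atoms and
    # head is an atom, or None for a goal clause (no positive literal).
    # Returns the set of atoms forced to be true, or None if unsatisfiable.
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_atoms:
                if head is None:
                    return None          # a goal clause is violated: unsatisfiable
                if head not in true_atoms:
                    true_atoms.add(head) # a definite clause fires
                    changed = True
    return true_atoms                    # also the minimal model of the definite clauses

# Hypothetical instance: facts p and q, rule "u if p and q", and goal clause "not u".
clauses = [(frozenset(), "p"),
           (frozenset(), "q"),
           (frozenset({"p", "q"}), "u"),
           (frozenset({"u"}), None)]
print(hornsat(clauses))  # None: the clause set is unsatisfiable, i.e. u follows from the rest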
In universal algebra, definite Horn clauses are generally called quasi-identities; classes of algebras definable by a set of quasi-identities
are called quasivarieties and enjoy some of the good properties of the more restrictive notion of a variety, i.e., an equational class. From the model-theoretical point of view, Horn sentences are important since they are exactly (up to logical equivalence) those sentences preserved under reduced products; in particular, they are preserved under direct products. On the other hand, there are sentences that are not Horn but are nevertheless preserved under arbitrary direct products.
Logic programming
Horn clauses are also the basis of logic programming, where it is common to write definite clauses in the form of an implication:
(p ∧ q ∧ ... ∧ t) → u
In fact, the resolution of a goal clause with a definite clause to produce a new goal clause is the basis of the SLD resolution inference rule, used in the implementation of the logic programming language Prolog.
In logic programming, a definite clause behaves as a goal-reduction procedure. For example, the Horn clause written above behaves as the procedure:
to show u, show p and show q and ... and show t.
To emphasize this reverse use of the clause, it is often written in the reverse form:
u ← (p ∧ q ∧ ... ∧ t)
In Prolog this is written as:
u :- p, q, ..., t.
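In the propositional case, this goal-reduction reading can be sketched in a few lines of Python. The clause and atom names below are invented for illustration; real Prolog also handles variables through unification and can loop on recursive programs, neither of which this toy version addresses.

def solve(goals, program):
    # SLD-style goal reduction for propositional definite clauses.
    # program maps each atom to a list of alternative bodies (lists of subgoals).
    if not goals:
        return True                      # the empty goal clause has been derived
    first, rest = goals[0], goals[1:]
    for body in program.get(first, []):  # try each clause whose head is the first goal
        if solve(body + rest, program):  # replace the goal by the clause body
            return True
    return False

# u :- p, q.    p.    q :- r.    r.
program = {"u": [["p", "q"]], "p": [[]], "q": [["r"]], "r": [[]]}
print(solve(["u"], program))  # True: the goal :- u. succeeds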
In logic programming, a goal clause, which has the logical form
∀X (false ← p ∧ q ∧ ... ∧ t)
represents the negation of a problem to be solved. The problem itself is an existentially quantified conjunction of positive literals:
∃X (p ∧ q ∧ ... ∧ t)
The Prolog notation does not have explicit quantifiers and is written in the form:
:- p, q, ..., t.
This notation is ambiguous in the sense that it can be read either as a statement of the problem or as a statement of the denial of the problem. However, both readings are correct. In both cases, solving the problem amounts to deriving the empty clause. In Prolog notation this is equivalent to deriving:
:- true.
If the top-level goal clause is read as the denial of the problem, then the empty clause represents false and the proof of the empty clause is a refutation of the denial of the problem. If the top-level goal clause is read as the problem itself, then the empty clause represents true, and the proof of the empty clause is a proof that the problem has a solution.
The solution of the problem is a substitution of terms for the variables X in the top-level goal clause, which can be extracted from the resolution proof. Used in this way, goal clauses are similar to conjunctive queries in relational databases, and Horn clause logic is equivalent in computational power to a universal Turing machine.
Van Emden and Kowalski (1976) investigated the model-theoretic properties of Horn clauses in the context of logic programming, showing that every set of definite clauses D has a unique minimal model M. An atomic formula A is logically implied by D if and only if A is true in M. It follows that a problem P represented by an existentially quantified conjunction of positive literals is logically implied by D if and only if P is true in M. The minimal model semantics of Horn clauses is the basis for the stable model semantics of logic programs.
See also
Constrained Horn clauses
Propositional calculus
Notes
References
Logic in computer science
Normal forms (logic) | Horn clause | [
"Mathematics"
] | 1,259 | [
"Mathematical logic",
"Logic in computer science"
] |
333,835 | https://en.wikipedia.org/wiki/Free%20object | In mathematics, the idea of a free object is one of the basic concepts of abstract algebra. Informally, a free object over a set A can be thought of as being a "generic" algebraic structure over A: the only equations that hold between elements of the free object are those that follow from the defining axioms of the algebraic structure. Examples include free groups, tensor algebras, or free lattices.
The concept is a part of universal algebra, in the sense that it relates to all types of algebraic structure (with finitary operations). It also has a formulation in terms of category theory, although this is in yet more abstract terms.
Definition
Free objects are the direct generalization to categories of the notion of basis in a vector space. A linear function between vector spaces is entirely determined by its values on a basis of its domain. The following definition translates this to any category.
A concrete category is a category that is equipped with a faithful functor to Set, the category of sets. Let C be a concrete category with a faithful functor U : C → Set. Let X be a set (that is, an object in Set), which will be the basis of the free object to be defined. A free object on X is a pair (A, i) consisting of an object A in C and an injection i : X → U(A) (called the canonical injection), satisfying the following universal property:
For any object B in C and any map between sets f : X → U(B), there exists a unique morphism g : A → B in C such that f = U(g) ∘ i. That is, the canonical injection i followed by U(g) recovers f (the corresponding diagram commutes).
If free objects exist in C, the universal property implies that every map between two sets induces a unique morphism between the free objects built on them, and this defines a functor F : Set → C. It follows that, if free objects exist in C, the functor F, called the free functor, is a left adjoint to the faithful functor U; that is, there is a bijection Hom_Set(X, U(B)) ≅ Hom_C(F(X), B).
Examples
The creation of free objects proceeds in two steps. For algebras that conform to the associative law, the first step is to consider the collection of all possible words formed from an alphabet. Then one imposes a set of equivalence relations upon the words, where the relations are the defining relations of the algebraic object at hand. The free object then consists of the set of equivalence classes.
Consider, for example, the construction of the free group in two generators. One starts with an alphabet consisting of the five letters e, a, b, a⁻¹, b⁻¹. In the first step, there is not yet any assigned meaning to the "letters" a⁻¹ or b⁻¹; these will be given later, in the second step. Thus, one could equally well start with the alphabet in five letters that is S = {a, b, c, d, e}. In this example, the set of all words or strings W(S) will include strings such as aebecede and abdc, and so on, of arbitrary finite length, with the letters arranged in every possible order.
In the next step, one imposes a set of equivalence relations. The equivalence relations for a group are multiplication by the identity, ge = eg = g, and the multiplication of inverses: gg⁻¹ = g⁻¹g = e. Applying these relations to the strings above, one obtains
aebecede = abcd = aba⁻¹b⁻¹,
where it was understood that c is a stand-in for a⁻¹, and d is a stand-in for b⁻¹, while e is the identity element. Similarly, one has
abdc = abb⁻¹a⁻¹ = e.
Denoting the equivalence relation or congruence by ~, the free object is then the collection of equivalence classes of words. Thus, in this example, the free group in two generators is the quotient F₂ = W(S)/~.
This is often written as F₂ = W(S)/E, where W(S) is the set of all words, and E is the equivalence class of the identity, after the relations defining a group are imposed.
A simpler example is the free monoid. The free monoid on a set X is the monoid of all finite strings using X as the alphabet, with concatenation of strings as the operation. The identity is the empty string. In essence, the free monoid is simply the set of all words, with no equivalence relations imposed. This example is developed further in the article on the Kleene star.
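As an informal illustration of the universal property in this simplest case, the Python sketch below (with an invented alphabet and invented target values) extends an arbitrary assignment of monoid elements to the letters into the unique monoid homomorphism from the free monoid of strings to the natural numbers under addition.

def extend_to_strings(generator_map):
    # Given a map from letters to elements of the monoid (N, +, 0), return the
    # unique monoid homomorphism from the free monoid of strings under
    # concatenation that agrees with the map on one-letter strings.
    def hom(word):
        total = 0                        # the empty string must map to the identity, 0
        for letter in word:
            total += generator_map[letter]
        return total
    return hom

f = extend_to_strings({"a": 2, "b": 5})  # any assignment of the generators works
print(f(""), f("ab"), f("abba") == f("ab") + f("ba"))  # 0 7 True (homomorphism law)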
General case
In the general case, the algebraic relations need not be associative, in which case the starting point is not the set of all words, but rather, strings punctuated with parentheses, which are used to indicate the non-associative groupings of letters. Such a string may equivalently be represented by a binary tree or a free magma; the leaves of the tree are the letters from the alphabet.
The algebraic relations may then be general arities or finitary relations on the leaves of the tree. Rather than starting with the collection of all possible parenthesized strings, it can be more convenient to start with the Herbrand universe. Properly describing or enumerating the contents of a free object can be easy or difficult, depending on the particular algebraic object in question. For example, the free group in two generators is easily described. By contrast, little or nothing is known about the structure of free Heyting algebras in more than one generator. The problem of determining if two different strings belong to the same equivalence class is known as the word problem.
As the examples suggest, free objects look like constructions from syntax; one may reverse that to some extent by saying that major uses of syntax can be explained and characterised as free objects, in a way that makes apparently heavy 'punctuation' explicable (and more memorable).
Free universal algebras
Let X be a set and let A be an algebraic structure of type ρ generated by X. The underlying set of this algebraic structure A, often called its universe, is denoted by |A|. Let ψ : X → |A| be a function. We say that (A, ψ) (or informally just A) is a free algebra of type ρ on the set X of free generators if the following universal property holds:
For every algebra B of type ρ and every function τ : X → |B|, where |B| is the universe of B, there exists a unique homomorphism σ : A → B such that σ ∘ ψ = τ (the corresponding diagram commutes).
This means that every assignment τ of values in B to the generators X extends uniquely to a homomorphism σ from A to B.
Free functor
The most general setting for a free object is in category theory, where one defines a functor, the free functor, that is the left adjoint to the forgetful functor.
Consider a category C of algebraic structures; the objects can be thought of as sets plus operations, obeying some laws. This category has a functor, U : C → Set, the forgetful functor, which maps objects and morphisms in C to Set, the category of sets. The forgetful functor is very simple: it just ignores all of the operations.
The free functor F, when it exists, is the left adjoint to U. That is, F : Set → C takes sets X in Set to their corresponding free objects F(X) in the category C. The set X can be thought of as the set of "generators" of the free object F(X).
For the free functor to be a left adjoint, one must also have a Set-morphism η : X → U(F(X)). More explicitly, F is, up to isomorphisms in C, characterized by the following universal property:
Whenever B is an algebra in C, and g : X → U(B) is a function (a morphism in the category of sets), then there is a unique C-morphism h : F(X) → B such that U(h) ∘ η = g.
Concretely, this sends a set into the free object on that set; it is the "inclusion of a basis". Abusing notation, X → F(X) (this abuses notation because X is a set, while F(X) is an algebra; correctly, it is X → U(F(X))).
The natural transformation η : id_Set → U∘F is called the unit; together with the counit ε : F∘U → id_C, one may construct a T-algebra, and so a monad.
The cofree functor is the right adjoint to the forgetful functor.
Existence
There are general existence theorems that apply; the most basic of them guarantees that
Whenever C is a variety, then for every set X there is a free object F(X) in C.
Here, a variety is a synonym for a finitary algebraic category, thus implying that the set of relations is finitary, and algebraic because it is monadic over Set.
General case
Other types of forgetfulness also give rise to objects quite like free objects, in that they are left adjoint to a forgetful functor, not necessarily to sets.
For example, the tensor algebra construction on a vector space is the left adjoint to the functor on associative algebras that ignores the algebra structure. It is therefore often also called a free algebra. Likewise the symmetric algebra and exterior algebra are free symmetric and anti-symmetric algebras on a vector space.
List of free objects
Specific kinds of free objects include:
free algebra
free associative algebra
free commutative algebra
free category
free strict monoidal category
free group
free abelian group
free partially commutative group
free Kleene algebra
free lattice
free Boolean algebra
free distributive lattice
free Heyting algebra
free modular lattice
free Lie algebra
free magma
free module, and in particular, vector space
free monoid
free commutative monoid
free partially commutative monoid
free ring
free semigroup
free semiring
free commutative semiring
free theory
term algebra
discrete space
See also
Generating set
Notes
External links
In nLab: free functor, free object, vector space
Mathematics articles needing expert attention
Abstract algebra
Combinatorics on words
Adjoint functors | Free object | [
"Mathematics"
] | 1,875 | [
"Mathematical structures",
"Algebra",
"Combinatorics",
"Algebraic structures",
"Category theory",
"Abstract algebra",
"Combinatorics on words",
"Free algebraic structures"
] |
333,853 | https://en.wikipedia.org/wiki/Frank%20Watson%20Dyson | Sir Frank Watson Dyson, KBE, FRS, FRSE (8 January 1868 – 25 May 1939) was an English astronomer and the ninth Astronomer Royal who is remembered today largely for introducing time signals ("pips") from Greenwich, England, and for the role he played in proving Einstein's theory of general relativity.
Biography
Dyson was born in Measham, near Ashby-de-la-Zouch, Leicestershire, the son of the Rev Watson Dyson, a Baptist minister, and his wife, Frances Dodwell. The family lived on St John Street in Wirksworth while Frank was between one and three years old. They moved to Yorkshire in his youth. There he attended Heath Grammar School, Halifax, and subsequently won scholarships to Bradford Grammar School and Trinity College, Cambridge, where he studied mathematics and astronomy, being placed Second Wrangler in 1889.
In 1894 he joined the Royal Astronomical Society and the British Astronomical Association, and was given the post of Senior Assistant at Greenwich Observatory, where he worked on the Astrographic Catalogue, which was published in 1905. He was appointed Astronomer Royal for Scotland from 1905 to 1910, and Astronomer Royal (and Director of the Royal Greenwich Observatory) from 1910 to 1933. In 1928, he introduced at the Observatory a new free-pendulum clock, the most accurate clock available at that time, and organised the regular wireless transmission of Greenwich Mean Time from the GPO wireless station at Rugby. He also, in 1924, introduced the distribution of the "six pips" via the BBC. He was for several years President of the British Horological Institute and was awarded their gold medal in 1928.
Dyson was noted for his study of solar eclipses and was an authority on the spectrum of the corona and on the chromosphere. He is credited with organising the expeditions to observe the 1919 solar eclipse in Brazil and on Príncipe, which he somewhat optimistically began preparing for prior to the Armistice of 11 November 1918. Dyson presented the observations of the solar eclipse of 29 May 1919 to a joint meeting of the Royal Society and the Royal Astronomical Society on 6 November 1919. The observations confirmed Albert Einstein's theory of the effect of gravity on light, which until that time had been received with some scepticism by the scientific community.
Dyson died on board a ship while travelling from Australia to England in 1939, and was buried at sea.
Honours and awards
Fellow of the Royal Society – 1901
Fellow of the Royal Society of Edinburgh – 1906
President, Royal Astronomical Society – 1911–1913
Vice-president, Royal Society – 1913–1915
Knighted – 1915
President, British Astronomical Association, 1916–1918
Royal Medal of the Royal Society – 1921
Bruce Medal of the Astronomical Society of the Pacific – 1922
Gold Medal of the Royal Astronomical Society – 1925
Knight Commander of the Order of the British Empire – 1926
Gold medal of British Horological Institute – 1928
President of the International Astronomical Union – 1928–1932
From 1894 to 1906, Dyson lived at 6 Vanbrugh Hill, Blackheath, London SE3, in a house now marked by a blue plaque.
The crater Dyson on the Moon is named after him, as is the asteroid 1241 Dysona.
Family
In 1894 he married Caroline Bisset Best (d.1937), the daughter of Palemon Best, with whom he had two sons and six daughters.
Frank Dyson and Freeman Dyson
Although Frank Dyson and theoretical physicist Freeman Dyson were not known to be related, their fathers Rev Watson Dyson and George Dyson both hailed from West Yorkshire where the surname originates and is most densely clustered. Freeman Dyson credited Sir Frank with sparking his interest in astronomy: because they shared the same last name, Sir Frank's achievements were discussed by Freeman Dyson's family when he was a young boy. Inspired, Dyson's first attempt at writing was a 1931 piece of juvenilia entitled "Sir Phillip Robert's Erolunar Collision" – Sir Philip being a thinly disguised version of Sir Frank.
In popular media
Actor Alec McCowen was cast as Sir Frank Dyson in the TV series Longitude, broadcast in 2000.
Selected writings
Astronomy, Frank Dyson, London, Dent, 1910
See also
Einstein and Eddington
References
External links
Online catalogue of Dyson's working papers (part of the Royal Greenwich Observatory Archives held at Cambridge University Library)
Bruce Medal page
Awarding of Bruce Medal: PASP 34 (1922) 2
Awarding of RAS gold medal: MNRAS 85 (1925) 672
Astronomische Nachrichten 268 (1939) 395/396 (one line)
Monthly Notices of the Royal Astronomical Society 100 (1940) 238
The Observatory 62 (1939) 179
Publications of the Astronomical Society of the Pacific 51 (1939) 336
1868 births
1939 deaths
Astronomers Royal
People who died at sea
Burials at sea
20th-century English astronomers
People from Measham
Royal Medal winners
People educated at Bradford Grammar School
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Knights Commander of the Order of the British Empire
Second Wranglers
Recipients of the Bruce Medal
Recipients of the Gold Medal of the Royal Astronomical Society
Presidents of the Institute of Physics
People educated at Heath Grammar School
Academics of the University of Edinburgh
Presidents of the Royal Astronomical Society
Presidents of the International Astronomical Union
Masters of the Worshipful Company of Clockmakers | Frank Watson Dyson | [
"Astronomy"
] | 1,080 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
333,925 | https://en.wikipedia.org/wiki/Handicap%20principle | The handicap principle is a disputed hypothesis proposed by the Israeli biologist Amotz Zahavi in 1975. It is meant to explain how "signal selection" during mate choice may lead to "honest" or reliable signalling between male and female animals which have an obvious motivation to bluff or deceive each other. The handicap principle suggests that secondary sexual characteristics are costly signals which must be reliable, as they cost the signaller resources that individuals with less of a particular trait could not afford. The handicap principle further proposes that animals of greater biological fitness signal this through handicapping behaviour, or morphology that effectively lowers overall fitness. The central idea is that sexually selected traits function like conspicuous consumption, signalling the ability to afford to squander a resource. Receivers then know that the signal indicates quality, because inferior-quality signallers are unable to produce such wastefully extravagant signals.
The handicap principle is supported by game theory modelling representing situations such as nestlings begging for food, predator-deterrent signalling, and threat displays. However, honest signals are not necessarily costly, undermining the theoretical basis for the handicap principle, which remains unconfirmed by empirical evidence.
History
Origins
The handicap principle was proposed in 1975 by the Israeli biologist Amotz Zahavi. He argued that mate choice involving what he called "signal selection" would lead to "honest" or reliable signalling between male and female animals, even though they have an interest in bluffing or deceiving each other. The handicap principle asserts that secondary sexual characteristics are costly signals, which are reliable indicators of the signaller's quality, since they cost the signaller resources that lower-quality individuals could not afford. The generality of the phenomenon is a matter of some debate and disagreement, and Zahavi's views on the scope and importance of handicaps in biology have not been accepted by the mainstream. Nevertheless, the idea has been very influential, with most researchers in the field believing that the theory explains some aspects of animal communication.
Grafen's signaling game model
The handicap principle was initially controversial, with the British biologist John Maynard Smith a notable early critic of Zahavi's ideas. However, the handicap principle gained wider acceptance because it is supported by game theory models, most notably the Scottish biologist Alan Grafen's 1990 signalling game model. This was essentially a rediscovery of the Canadian-American economist Michael Spence's job market signalling model, where the job applicant signals their quality by declaring a costly education. In Grafen's model, the courting male's quality is signalled by investment in an extravagant trait—similar to the peacock's tail. The signal is reliable if the cost to the signaller of producing it is proportionately lower for higher-quality signallers than for lower-quality ones.
A series of papers by the American biologist Thomas Getty showed that Grafen's proof of the handicap principle depends on the critical, simplifying assumption that signallers trade off costs for benefits in an additive fashion, analogous to the way humans invest money to increase income in the same currency. This is illustrated in the figures from Johnstone 1997, which show that the optimum signalling levels are different for low- and high-quality signallers. The validity of the assumption that costs and benefits are additive has been contested, in its application to the evolution of sexually selected signals. It can be reasoned that since fitness depends on the production of offspring, this is a multiplicative rather than additive function of reproductive success.
Further game theoretical models demonstrated the evolutionary stability of handicapped signals in nestlings' begging calls, in predator-deterrent signals and in threat-displays. In the classic handicap models of begging in game theory, all players are assumed to pay the same amount to produce a signal of a given level of intensity, but differ in the relative value of eliciting the desired response (donation) from the receiver. The hungrier the baby bird, the more food is of value to it, and the higher the optimal signalling level (the louder its chirping).
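The logic of these models can be illustrated with a deliberately tiny numerical sketch. The payoff numbers below are assumed purely for illustration and are not taken from Grafen's or any other published model: two signaller qualities, one costly display, and a receiver who responds favourably only to the display. Because the display costs the low-quality type more than the response is worth, only the high-quality type displays, so the signal remains honest.

BENEFIT = 10                     # value of the receiver's favourable response (assumed)
COST = {"high": 4, "low": 12}    # display cost by signaller quality (assumed)

def best_response(quality):
    # Signaller's best choice when the receiver rewards only those who display.
    return "display" if BENEFIT - COST[quality] > 0 else "stay silent"

for quality in ("high", "low"):
    print(quality, "->", best_response(quality))
# high -> display, low -> stay silent: under these assumed numbers the costly
# display separates the types, so a receiver who trusts it is not exploited.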
Cheap talk models without handicaps
Counter-examples to handicap models predate handicap models themselves. Models of signals (such as threat displays) without any handicapping costs show that what biologists call cheap talk may be an evolutionarily stable form of communication. Analysis of some begging models shows that non-communication strategies are not only evolutionarily stable, but lead to higher payoffs for both players. In human mate choice, mathematical analyses including Monte Carlo simulations suggest that costly traits ought to be more attractive to the other sex and much rarer than non-costly traits.
It was soon discovered that honest signals need not be costly at the honest equilibrium, even under conflict of interest. This conclusion was first shown in discrete models and then in continuous models. Similar results were obtained in conflict models: threat displays need not be handicaps to be honest and evolutionarily stable.
Unworkable theory lacking empirical evidence
In 2015, Simon Huttegger and colleagues wrote that the distinction between "indexes" (unfakable signals) and "fakable signals", crucial to the argument for the handicap principle, is an artefact of signalling models. They demonstrated that absent that dichotomy, cost could not be the only factor controlling signalling behaviours, and that indeed it was "probably not the most important" factor acting against deception.
Dustin J. Penn and Szabolcs Számadó stated in 2019 that there was still no empirical evidence for evolutionary pressure for wasteful biology or acts, and proposed that the handicap principle should be abandoned.
Predictions and interpretations
The handicap principle predicts that a sexual ornament, or any other signal such as visibly risky behavior, must be costly if it is to accurately advertise a trait of relevance to an individual with conflicting interests. Typical examples of handicapped signals include bird songs, the peacock's tail, courtship dances, and bowerbird bowers. American scientist Jared Diamond has proposed that certain risky human behaviours, such as bungee jumping, may be expressions of instincts that have evolved through the operation of the handicap principle. Zahavi has invoked the gift-giving potlatch ceremony as a human example of the handicap principle in action: the conspicuous generosity is costly. This interpretation of potlatch can be traced to Thorstein Veblen's use of the ceremony in his book Theory of the Leisure Class as an example of "conspicuous consumption".
The handicap principle gains further support by providing interpretations for behaviours that fit into a single unifying gene-centered view of evolution, making earlier explanations based on group selection obsolete. A classic example is that of stotting in gazelles. This behaviour consists of the gazelle initially running slowly and jumping high when threatened by a predator such as a lion or cheetah. The explanation based on group selection was that such behaviour might be adapted to alerting other gazelles to a cheetah's presence or might be part of a collective behaviour pattern of the group of gazelles to confuse the cheetah. Instead, Zahavi proposed that each gazelle was communicating that it was a fitter individual than its fellows.
Signals to members of the same species
Zahavi studied in particular the Arabian babbler, a highly social bird with a lifespan of 30 years, which appears to behave altruistically. Its helping-at-the-nest behaviour, where non-parent birds assist in feeding, guarding, and caring for nestlings, often occurs among unrelated individuals. It therefore cannot be explained by kin selection, natural selection acting on genes that close relatives share with the altruistic individual. Zahavi reinterpreted these behaviours according to his signalling theory and its correlative, the handicap principle. The altruistic act is costly to the donor, but may improve its attractiveness to potential mates. The evolution of this condition may be explained by competitive altruism.
The French biologist Patrice David showed that in the stalk-eyed fly species Cyrtodiopsis dalmanni, genetic variation underlies the response of a male sexual ornament, eye span, to environmental stress such as variable food quality. He showed that some male genotypes always develop large eye spans, but others reduce eye span in proportion to environmental worsening. David inferred that female mate choice yields genetic benefits for offspring.
Signals to other species
Signals may be directed at predators, with the function of showing that pursuit will probably be unprofitable. Stotting, for instance, is a form of energetic jumping that certain gazelles do when they sight a predator. As this behavior gives no evident benefit and would seem to waste resources (diminishing the gazelle's head start if chased by the predator), it appeared likely to be selected against. However, it made sense when seen as a pursuit deterrence signal to predators. By investing a little energy to show a lion that it has the fitness necessary to avoid capture, a gazelle reduces the likelihood that it will have to evade the lion in an actual pursuit. The lion, faced with the demonstration of fitness, might decide that it would fail to catch this gazelle, and thus choose to avoid a probably wasted pursuit. The benefit to the gazelle is twofold. First, for the small amount of energy invested in the stotting, the gazelle might not have to expend the tremendous energy required to evade the lion. Second, if the lion is in fact capable of catching this gazelle, the gazelle's bluff may lead to its survival that day (in the event the bluff succeeds). However, the mathematical biologist John Maynard Smith commented that other explanations were possible, such as that it was an honest signal of fitness, or an honest signal that the predator had been detected, and it was hard to see how stotting could be a handicap.
Another example is provided by larks, some of which discourage merlins by sending a similar message: they sing while being chased, telling their predator that they will be difficult to capture.
Immunocompetence handicaps
The theory of immunocompetence handicaps suggests that androgen-mediated traits accurately signal condition due to the immunosuppressive effects of androgens. This immunosuppression may be either because testosterone alters the allocation of limited resources between the development of ornamental traits and other tissues, including the immune system, or because heightened immune system activity has a propensity to launch autoimmune attacks against gametes, such that suppression of the immune system enhances fertility. Healthy individuals can afford to suppress their immune system by raising their testosterone levels, at the same time augmenting secondary sexual traits and displays. A review of empirical studies into the various aspects of this theory found weak support.
See also
Aposematism
Costly signaling theory in evolutionary psychology
Fisherian runaway
Multiple sexual ornaments
Parasite-stress theory
Sacrifice
References
External links
Honest Signalling Theory: A Basic Introduction. By Carl T. Bergstrom, University of Washington, 2006.
Signalling theory
Animal communication
Ethology
Selection
Sexual selection | Handicap principle | [
"Biology"
] | 2,264 | [
"Evolutionary processes",
"Selection",
"Behavior",
"Behavioural sciences",
"Ethology",
"Sexual selection",
"Mating"
] |