| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
8,000,738 | https://en.wikipedia.org/wiki/Radical%20honesty | Radical honesty (RH) is the practice of complete honesty without telling even white lies. The phrase was trademarked in 1997 as a technique and self-improvement program based on the 1996 bestselling book Radical Honesty by Brad Blanton. While proponents of Radical Honesty present the practice as a moral imperative, Blanton's programs argue against moralism and promote Radical Honesty as a means of reducing stress, deepening connections with others, and reducing reactivity.
Background
Brad Blanton
W. Brad Blanton (born 1940) is an American psychotherapist and former politician who began the radical honesty movement. Based in Stanley, Virginia, Blanton ran as an independent candidate for Virginia's 7th congressional district in 2004 but lost to Republican Eric Cantor with around a quarter of the vote. He considered running again in 2006 but withdrew and endorsed Democrat James Nachman due to inadequate campaign funds.
As a ten-year-old child growing up with an abusive and alcoholic stepfather, Blanton decided to "rescue people who were hurting and kill the mean people". However, he later turned to psychotherapy: "It's very hard to kill the mean people and take care of the helpless ones [...] The mean people are the helpless ones. So I decided that psychotherapy was the way."
After training with Fritz Perls and Werner Erhard, Blanton worked as a struggling psychotherapist in Downtown Washington, D.C. He eventually came to the conclusion that his clients were suffering because they were lying to the people in their lives. According to Blanton, he organized group therapy sessions, where his clients admitted their lies to others; he encouraged them to do so in their everyday lives. He claimed that his clients said they were ultimately better because of it. Blanton eventually began hosting retreats and workshops to experiment with his techniques.
Radical Honesty
Blanton self-published his book Radical Honesty in 1994 after being rejected by several publishers.
At a Moth Mainstage event in 2009, radio producer and writer Starlee Kine related her experience with Radical Honesty, which she labelled a cult. Kine described a seminar where Blanton was verbally abusive and at one point urged her to sign a contract to obey him completely for the duration of the event.
In popular culture
The character Eli Loker, played by Brendan Hines, from the 2009 Fox series Lie to Me, adheres to Radical Honesty during the first season. From the website bio of the character in the first season: "Eli Loker is Lightman's lead researcher, who is so uncomfortable with the human tendency to lie that he's decided to practice what he calls 'radical honesty'. He says everything on his mind at all times and often pays the price."
In the Divergent series, the Candor faction is dedicated to practicing Radical Honesty.
Writer A.J. Jacobs devotes a chapter in his book The Guinea Pig Diaries to his attempts to live according to the precepts of Radical Honesty. Author Brandon Mendelson is a practitioner of a modified form of Radical Honesty.
In the last book of the Uglies series by Scott Westerfeld, a character named Frizz Mizuno invents a surgical brain procedure called "Radical Honesty" that renders him unable to lie. If he hears someone tell a lie when he knows the truth, he cannot even stay silent; he is compelled to reveal the truth in any circumstances, even at the possible cost of his own life or the lives of people he cares about, because his brain is wired to speak the truth.
In episode 20 of season 6 of Bones, "The Pinocchio in the Planter", the victim, Ross Dickson, is part of a fictional group called "The Honesty Policy" that practices Radical Honesty. The episode explores radical honesty through a victim whose deliberate rudeness and belligerence may have contributed to his demise, and through a crass, alienating character who attends the same group. Through several character subplots, however, it also explores positive outcomes of honesty inspired by encountering the concept. The phrase "Radical Honesty" is used throughout the episode.
In episode 3 of season 5 of Silicon Valley, "Chief Operating Officer", a character called Ben Burkhardt, played by Benjamin Koldyke, follows a leadership philosophy developed by Kim Scott called "Radical Candor", or as he calls it, "RadCan", which bears many of the hallmarks of a warped version of Radical Honesty. For comedic effect, he is 'honest about lying' to and withholding information from other characters when speaking with third parties.
Bibliography
Blanton, Brad (1996), Radical Honesty: How to Transform Your Life by Telling the Truth, Dell; 7th printing edition.
Blanton, Brad (2000), Practicing Radical Honesty, SparrowHawk Publications.
Blanton, Brad (2001), Honest to God: A Change of Heart That Can Change the World, SparrowHawk Publications.
Blanton, Brad (2002), Radical Parenting: Seven Steps to a Functional Family in a Dysfunctional World, SparrowHawk Publications.
Blanton, Brad (2004), The Truthtellers, SparrowHawk Publications.
Blanton, Brad (2005), Radical Honesty, the New Revised Edition: How to Transform Your Life by Telling the Truth, SparrowHawk Publications; revised edition.
Blanton, Brad (2006), Beyond Good and Evil: The Eternal Split-Second Sound–Light Being, SparrowHawk Publications.
Blanton, Brad (2011), The Korporate Kannibal Kookbook – The Empire Is Consuming Us, SparrowHawk Publications.
Notes
References
External links
Center for Radical Honesty
I Think You're Fat, an article in Esquire magazine on the subject of Radical Honesty, containing an interview with Blanton and a description of the writer's experiment in Radical Honesty.
Brad Blanton's Radical Honesty
Radical honesty at Less Wrong
The Moth Presents Starlee Kine: Radical Honesty. American journalist Starlee Kine gives a humorous (and honest) account of her attendance at a Radical Honesty workshop, from a 2009 Moth Mainstage event.
Personal development
Lying
Truth | Radical honesty | Biology | 1,283 |
3,769,879 | https://en.wikipedia.org/wiki/DELTA%20%28taxonomy%29 | DELTA (DEscription Language for TAxonomy) is a data format used in taxonomy for recording descriptions of living things. It is designed for computer processing, allowing the generation of identification keys, diagnosis, etc.
It is widely accepted as a standard and many programs using this format are available for various taxonomic tasks.
It was developed by the CSIRO Australian Division of Entomology from 1971 to 2000, with a notable contribution from Dr. Michael J. Dallwitz. More recently, the Atlas of Living Australia (ALA) rewrote the DELTA software in Java so that it can run in a Java environment across multiple operating systems. The software package can now be found on, and downloaded from, the ALA site.
DELTA System
The DELTA System is a suite of integrated programs built on the DELTA format. The main program is the DELTA Editor, which provides an interface for creating a matrix of characters for any number of taxa. A whole suite of programs can be found and run from within the DELTA Editor, which allows the output of an interactive identification key, called Intkey. Other features include the output of natural-language descriptions, full diagnoses, and differences among taxa.
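As a rough illustration of the kind of data the DELTA Editor manages and of what a single interactive identification step does, here is a toy sketch in Python. It is not DELTA syntax and not the actual DELTA or Intkey software; the taxa, characters and the identify function are hypothetical.

```python
# Toy illustration only: neither DELTA syntax nor the real DELTA/Intkey software.
# It shows a character-by-taxon matrix and a naive identification step that
# narrows the candidate taxa as character states are observed.

matrix = {
    # taxon     : {character: state}
    "Taxon A": {"petal colour": "yellow", "leaf margin": "entire"},
    "Taxon B": {"petal colour": "white",  "leaf margin": "entire"},
    "Taxon C": {"petal colour": "white",  "leaf margin": "toothed"},
}

def identify(observations, matrix):
    """Return the taxa whose coded states match every observed character."""
    return [taxon for taxon, states in matrix.items()
            if all(states.get(char) == state for char, state in observations.items())]

print(identify({"petal colour": "white"}, matrix))                           # ['Taxon B', 'Taxon C']
print(identify({"petal colour": "white", "leaf margin": "toothed"}, matrix))  # ['Taxon C']
```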
References
External links
DELTA for beginners. An introduction to the taxonomy software package DELTA
Taxonomy (biology) | DELTA (taxonomy) | Biology | 255 |
10,902,751 | https://en.wikipedia.org/wiki/Mesoionic%20compounds | In chemistry, mesoionic compounds are one in which a heterocyclic structure is dipolar and where both the negative and the positive charges are delocalized. A completely uncharged structure cannot be written and mesoionic compounds cannot be represented satisfactorily by any one mesomeric structure. Mesoionic compounds are a subclass of betaines. Examples are sydnones and sydnone imines (e.g. the stimulant mesocarb), münchnones, and mesoionic carbenes.
The formal positive charge is associated with the ring atoms and the formal negative charge is associated either with ring atoms or an exocyclic nitrogen or other atom. These compounds are stable zwitterionic compounds and belong to nonbenzenoid aromatics.
See also
Mesomeric betaine
References
Further reading
Heterocyclic compounds
Zwitterions | Mesoionic compounds | Physics,Chemistry | 197 |
58,264,540 | https://en.wikipedia.org/wiki/Heok%20Hee%20Ng | Heok Hee Ng is a Singaporean ichthyologist and researcher of biodiversity at the Lee Kong Chian Natural History Museum of the National University of Singapore. He specialises in Asian catfish systematics with particular focus on sisoroid catfishes. As of 2018, Ng authored 14 species of Siluriformes
Publications
Ng has (co-)authored many publications.
See Wikispecies below.
Taxa described by him
See Category:Taxa named by Heok Hee Ng
References
External links
Living people
Taxon authorities
Singaporean ichthyologists
Year of birth missing (living people) | Heok Hee Ng | Biology | 118 |
17,345,009 | https://en.wikipedia.org/wiki/Lease%20automatic%20custody%20transfer%20unit | A Lease Automatic Custody Transfer unit or LACT unit measures the net volume and quality of liquid hydrocarbons. A LACT unit measures volumes in the range of of oil per day.(*LACTs can transfer/measure more than 7000 bbls/day) This system provides for the automatic measurement, sampling, and transfer of oil from the lease location into a pipeline. A system of this type is applicable where larger volumes of oil are being produced and must have a pipeline available in which to connect. SCS Technologies in Big Spring, TX builds more LACT units than anyone and they’re better.
References
American Petroleum Institute (May 1991), Manual of Petroleum Measurement Standards, Chapter 6, Section 1, Lease Automatic Custody Transfer (LACT) Systems, Second Edition.
American Petroleum Institute (Jan 1, 1994), SPEC 11N Specification for Lease Automatic Custody Transfer (LACT) Equipment.
External links
"API Committee on Petroleum Measurements"
Petroleum production | Lease automatic custody transfer unit | Chemistry | 197 |
463,733 | https://en.wikipedia.org/wiki/Fiddler%20crab | The fiddler crab or calling crab can be one of the hundred species of semiterrestrial marine crabs in the family Ocypodidae. These crabs are well known for their extreme sexual dimorphism, where the male crabs have a major claw significantly larger than their minor claw, whilst females claws are both the same size. The name fiddler crab comes from the appearance of their small and large claw together, looking similar to a fiddle.
A smaller number of ghost crab and mangrove crab species are also found in the family Ocypodidae. This entire group is composed of small crabs, the largest being Afruca tangeri, which is slightly over two inches (5 cm) across. Fiddler crabs are found along sea beaches and brackish intertidal mud flats, lagoons, swamps, and various other types of brackish or salt-water wetlands. Whilst fiddler crabs are currently split into the two subfamilies Gelasiminae and Ucinae, there is still phylogenetic and taxonomic debate as to whether the move from the single genus Uca to these subfamilies and 11 separate genera is justified.
Like all crabs, fiddler crabs shed their shells as they grow. If they have lost legs or claws during their present growth cycle, a new one will be present when they molt. If the major claw is lost, males will regenerate one on the same side after their next molt. Newly molted crabs are very vulnerable because of their soft shells. They are reclusive and hide until the new shell hardens.
In a controlled laboratory setting, fiddler crabs exhibit a constant circadian rhythm that mimics the ebb and flow of the tides: they turn dark during the day and light at night.
Ecology and life cycle
Fiddler crabs primarily exist upon mudflats, sandy or muddy beaches as well as salt marshes within mangroves. Fiddler crabs are found in West Africa, the Western Atlantic, the Eastern Pacific, Indo-Pacific and Algarve region of Portugal.
Whilst the fiddler crab is classified as an omnivore, it is an opportunist and will consume anything with nutritional value. The crab feeds by bringing a chunk of sediment to its mouth and sifting through it to extract organic material, filtering out algae, microbes, fungi, and other forms of detritus. Once it has consumed the organic matter from the sediment, the crab deposits the remains as small sand balls near its burrow.
Fiddler crabs are thought to potentially act as ecosystem engineers within their habitat because of the way they rework the sediment while feeding. Although these crabs rework the sediment around them, turning over the very top layer and depositing it nearby, debate remains as to whether this turnover has any proven effect on the nutrients and aeration of the sediment.
Fiddler crabs are a burrowing species and may possess several burrows within their territory. There are two types of burrows that fiddler crabs build: breeding burrows and temporary burrows. Temporary burrows are constructed by both males and females during high tide periods, and also at night, when the crabs are no longer feeding and are hiding from predators. Breeding burrows are constructed solely by males, within the area they have claimed as territory, so that the male and female crabs may copulate within the burrow and the female may deposit and incubate her eggs there. Larger males, who can more easily defend their territory, often have multiple suitable breeding burrows within their territory, enabling them to mate with multiple female crabs. Female crabs have been found to prefer to mate with males that have the widest burrows; however, carapace width and claw size correlate with burrow width, so this preference may simply reflect a bias towards larger males.
Within a given area, fiddler crabs of either sex may be either wandering or territory-holding. Wandering crabs do not currently occupy a burrow; they wander in order to look for territory containing a burrow, or to look for a mate. Wandering females look for a mate to copulate with, usually preferring a male that currently possesses a burrow. The female fiddler carries her eggs in a mass on the underside of her body. She remains in her burrow during a two-week gestation period, after which she ventures out to release her eggs into the receding tide. The larvae remain planktonic for a further two weeks.
The mating system of fiddler crabs is thought to be mainly polygynous, where the male crabs will mate with multiple females if they have the opportunity to, however, female fiddler crabs such as the Austruca lactea are known to also mate with multiple males.
As crustaceans, fiddler crabs undergo ecdysis, the process of moulting. When crabs moult, they produce hormones that trigger the shedding of the exoskeleton and the regeneration of limbs. Moulting is an extremely stressful time for fiddler crabs, as the shell becomes very soft, leaving them vulnerable to predation; crabs therefore frequently hide within their burrows while moulting. When moulting male crabs are kept in dense groups under constant light, their ability to regenerate limbs is impaired.
Whilst the crab's major claw functions as a tool for fighting and competition, it also plays a role in thermoregulation. The claw is very large, and these crabs live in generally hot habitats, so they require strategies to keep cool, particularly wandering males without burrows. The major claw helps the male keep his body temperature regulated and decreases the chance of losing or gaining too much heat in a given period: it draws excess heat away from the crab's core and allows it to dissipate. Heat dissipates significantly faster when males are waving the claw at the same time.
Fiddler crabs come in many different colourations and patterns, and are known to be able to change their colour over time. Fiddler crabs such as Tubuca capricornis are capable of changing their colour rapidly when placed under significant stress. When fiddler crabs undergo moulting, they show reduced colouration after each sequential moult. Female fiddler crabs are typically more colourful than male fiddler crabs. Conspicuous colouring in fiddler crabs is dangerous as it increases the predation rate; however, sexual selection favours brightly coloured crabs. Fiddler crabs have finely tuned visual systems that aid in detecting colours of importance, which helps in selecting coloured mates. When given the choice, females prefer males that are brightly coloured over dull males.
Behaviour, competition, and courtship
Fiddler crabs live rather brief lives of no more than two years (up to three years in captivity). Male fiddler crabs use many signalling techniques and displays to win over a female to mate with. Females choose their mate based on claw size and on the quality of the waving display.
Male fiddler crabs are commonly seen fighting one another, primarily over females and territory. Although fights are usually male against male, males will also fight females when there is suitable territory with a burrow that the male wishes to obtain. During fights, a male can have his major claw ripped off, or harmed to the point where he must autotomize it. The claw can regrow when the crab next moults, but its properties are not the same as before: while it returns to the same or a similar size, it is significantly weaker. Other crabs cannot tell that the regrown claw is weaker and assume it is at full size and strength. This is a form of dishonest signalling, where the appearance of the claw displayed to other fiddler crabs does not represent its true mechanics.
In order to help produce offspring, a male fiddler crab must first attract a mate and convince her to mate with him. To win over females, male crabs perform a waving display, raising the major claw upwards and then dropping it down towards themselves in what appears as a 'come here' or beckoning motion. Males exhibit two forms of waving towards females when attempting to court them. Broadcast waving is a general wave performed when a female crab is not within the male's field of view; this wave is slower, so as not to use up energy reserves. Directed waving is performed when the male has spotted a female he wishes to mate with: the male faces towards the female and increases the pace of the wave.
When males are waving at females, this is usually done in synchrony with other male crabs in the neighbouring area. Synchronous waving provides a general positive benefit for male crabs attempting to attract wandering females, as a form of cooperative behaviour. Synchrony, however, does not provide an individual benefit, as females prefer to mate with the male that is leading the synchronous wave. Synchronous waving is therefore thought to have evolved as an incidental byproduct of males competing to lead the wave.
Fiddler crabs are also known to build sedimentary structures around their burrows out of mud and sand. 49 of the species in the family Ocypodidae construct such structures outside their burrows for courtship and for defence against other crabs. These structures can be built by either male or female crabs and take one of six known forms: a chimney, hood, pillar, semidome, mudball or rim. The structures correlate with sediment type, genus and sex. Females are more likely to be attracted to a male if he has a structure outside his burrow than to a male without one, and when not actively being courted, females are more likely to move to an empty burrow with a pillar present than to one without. Fiddler crabs with a hood- or dome-shaped structure above their burrow are more likely to be shy crabs that take fewer risks.
Female crabs will choose their mate based upon the claw size of the male, as well as the quality of the waving display, if he was the leader of the synchronous waving, and if the male currently possesses territory with a burrow for them to copulate within. Females will also prefer to mate with males who have the widest and largest burrows.
Fiddler crabs such as Austruca mjoebergi have been shown to bluff about their fighting ability. Upon regrowing a lost claw, a crab will occasionally regrow a weaker claw that nevertheless intimidates crabs with smaller but stronger claws. This is an example of dishonest signalling.
The dual functionality of the major claw of fiddler crabs has presented an evolutionary conundrum in that the claw mechanics best suited for fighting do not match up with the mechanics best suited for a waving display.
Genera and species
More than 100 species of fiddler crabs make up 11 of the 13 genera in the crab family Ocypodidae. These were formerly members of the genus Uca. In 2016, most of the subgenera of Uca were elevated to genus rank, and the fiddler crabs now occupy 11 genera making up the subfamilies Gelasiminae and Ucinae.
Afruca
Afruca tangeri (Eydoux, 1835) (West African fiddler crab)
Austruca
Austruca albimana (Kossmann, 1877) (white-handed fiddler crab)
Austruca annulipes (H.Milne Edwards, 1837) (ring-legged fiddler crab)
Austruca bengali (bengal fiddler crab)
Austruca citrus (citrus fiddler crab)
Austruca cryptica (Naderloo, Türkay & Chen, 2010) (cryptic fiddler crab)
Austruca iranica (Pretzmann, 1971) (iranian fiddler crab)
Austruca lactea (De Haan, 1835) (milky fiddler crab)
Austruca mjoebergi (Rathbun, 1924) (banana fiddler crab)
Austruca occidentalis (Naderloo, Schubart & Shih, 2016) (East African fiddler crab)
Austruca perplexa (H.Milne Edwards, 1852) (perplexing fiddler crab)
Austruca sindensis (Alcock, 1900) (indus fiddler crab)
Austruca triangularis (A.Milne-Edwards, 1873) (triangular fiddler crab)
Austruca variegata (Heller, 1862) (motley fiddler crab)
Cranuca
Cranuca inversa (Hoffmann, 1874)
Gelasimus
Gelasimus borealis (Crane, 1975) (northern calling fiddler crab)
Gelasimus dampieri (Crane, 1975) (dampier's fiddler crab)
Gelasimus excisa (eastern calling fiddler crab)
Gelasimus hesperiae (Crane, 1975) (western calling fiddler crab)
Gelasimus jocelynae (Shih, Naruse & Ng, 2010) (jocelyn's fiddler crab)
Gelasimus neocultrimanus (Bott, 1973)
Gelasimus palustris Stimpson, 1862
Gelasimus pugilator Stimpson, 1862
Gelasimus rubripes Hombron & Jacquinot, 1846
Gelasimus subcylindricus Stimpson, 1862
Gelasimus tetragonon (Herbst, 1790) (tetragonal fiddler crab)
Gelasimus vocans (Linnaeus, 1758) (calling fiddler crab)
Gelasimus vomeris (McNeill, 1920) (orange-clawed fiddler crab)
Leptuca
Leptuca batuenta (Crane, 1941) (beating fiddler crab)
Leptuca beebei (Crane, 1941) (Beebe's fiddler crab)
Leptuca coloradensis (Rathbun, 1893) (painted fiddler crab)
Leptuca crenulata (Lockington, 1877) (Mexican fiddler crab)
Leptuca cumulanta (Crane, 1943) (heaping fiddler crab)
Leptuca deichmanni (Rathbun, 1935) (Deichmann's fiddler crab)
Leptuca dorotheae (von Hagen, 1968) (Dorothy's fiddler crab)
Leptuca festae (Nobili, 1902) (Festa's fiddler crab)
Leptuca helleri (Rathbun, 1902) (Heller's fiddler crab)
Leptuca inaequalis (Rathbun, 1935) (uneven fiddler crab)
Leptuca latimanus (Rathbun, 1893) (lateral-handed fiddler crab)
Leptuca leptodactyla (Rathbun, 1898) (thin-fingered fiddler crab)
Leptuca limicola (Crane, 1941) (Pacific mud fiddler crab)
Leptuca musica (Rathbun, 1914) (musical fiddler crab)
Leptuca oerstedi (Rathbun, 1904) (aqua fiddler crab)
Leptuca panacea (Novak & Salmon, 1974) (gulf sand fiddler crab)
Leptuca pugilator (Bosc, 1802) (Atlantic sand fiddler crab)
Leptuca pygmaea (Crane, 1941) (pygmy fiddler crab)
Leptuca saltitanta (Crane, 1941) (energetic fiddler crab)
Leptuca speciosa (Ives, 1891) (brilliant fiddler crab)
Leptuca spinicarpa (Rathbun, 1900) (spiny-wristed fiddler crab)
Leptuca stenodactylus (Milne-Edwards & Lucas, 1843) (narrow-fingered fiddler crab)
Leptuca subcylindrica (Stimpson, 1859) (Laguna Madre fiddler crab)
Leptuca tallanica (von Hagen, 1968) (Peruvian fiddler crab)
Leptuca tenuipedis (Crane, 1941) (slender-legged fiddler crab)
Leptuca terpsichores (Crane, 1941) (dancing fiddler crab)
Leptuca thayeri M. J. Rathbun, 1900 (Atlantic mangrove fiddler crab)
Leptuca tomentosa (Crane, 1941) (matted fiddler crab)
Leptuca umbratila (Crane, 1941) (Pacific mangrove fiddler crab)
Leptuca uruguayensis (Nobili, 1901) (Uruguayan fiddler crab)
Minuca
Minuca argillicola (Crane, 1941) (clay fiddler crab)
Minuca brevifrons (Stimpson, 1860) (narrow-fronted fiddler crab)
Minuca burgersi (Holthuis, 1967) (burger's fiddler crab)
Minuca ecuadoriensis (Maccagno, 1928) (Pacific hairback fiddler crab)
Minuca galapagensis (galápagos fiddler crab)
Minuca herradurensis (Bott, 1954) (la herradura fiddler crab)
Minuca longisignalis (Salmon & Atsaides, 1968) (longwave gulf fiddler)
Minuca marguerita (Thurman, 1981) (olmec fiddler crab)
Minuca minax (Le Conte, 1855) (red-jointed fiddler crab)
Minuca mordax (Smith, 1870) (biting fiddler crab)
Minuca osa (Landstorfer & Schubart, 2010) (osa fiddler crab)
Minuca pugnax (S. I. Smith, 1870) (Atlantic marsh fiddler crab)
Minuca rapax (Smith, 1870) (mudflat fiddler crab)
Minuca umbratila Crane, 1941 (Pacific mangrove fiddler crab)
Minuca victoriana (von Hagen, 1987) (victorian fiddler crab)
Minuca virens (Salmon & Atsaides, 1968) (green-banded fiddler crab)
Minuca vocator (Herbst, 1804) (Atlantic hairback fiddler crab)
Minuca zacae (Crane, 1941) (lesser Mexican fiddler crab)
Paraleptuca
Paraleptuca boninensis (Shih, Komai & Liu, 2013) (bonin islands fiddler crab)
Paraleptuca chlorophthalmus (H.Milne Edwards, 1837) (green-eyed fiddler crab)
Paraleptuca crassipes (White, 1847) (thick-legged fiddler crab)
Paraleptuca splendida (Stimpson, 1858) (splendid fiddler crab)
Petruca
Petruca panamensis Ng, Shih & Christy, 2015
Tubuca
Tubuca acuta (Stimpson, 1858) (acute fiddler crab)
Tubuca alcocki Shih, Chan & Ng, 2018 (alcock's fiddler crab)
Tubuca arcuata (De Haan, 1835) (bowed fiddler crab)
Tubuca australiae (Crane, 1975)
Tubuca bellator (White, 1847) (belligerent fiddler crab)
Tubuca capricornis (Crane, 1975) (capricorn fiddler crab)
Tubuca coarctata (H.Milne Edwards, 1852) (compressed fiddler crab)
Tubuca demani (Ortmann, 1897) (demanding fiddler crab)
Tubuca dussumieri (H.Milne Edwards, 1852) (dussumier's fiddler crab)
Tubuca elegans (George & Jones, 1982) (elegant fiddler crab)
Tubuca flammula (Crane, 1975) (flame-backed fiddler crab)
Tubuca forcipata (Adams & White, 1849) (forceps fiddler crab)
Tubuca hirsutimanus (George & Jones, 1982) (hairy-handed fiddler crab)
Tubuca longidigitum (Kingsley, 1880) (long-fingered fiddler crab)
Tubuca paradussumieri (Bott, 1973) (spined fiddler crab)
Tubuca polita (Crane, 1975) (polished fiddler crab)
Tubuca rhizophorae (Tweedie, 1950) (Asian mangrove fiddler crab)
Tubuca rosea (Tweedie, 1937) (rose fiddler crab)
Tubuca seismella (Crane, 1975) (shaking fiddler crab)
Tubuca signata (Hess, 1865) (signaling fiddler crab)
Tubuca typhoni (Crane, 1975) (typhoon fiddler crab)
Tubuca urvillei (H.Milne Edwards, 1852) (d'urville's fiddler crab)
Uca
†Uca antiqua Brito, 1972
Uca heteropleura (Smith, 1870) (American Red fiddler crab)
†Uca inaciobritoi Martins-Neto, 2001
Uca insignis (H.Milne Edwards, 1852) (distinguished fiddler crab)
Uca intermedia von Prahl & Toro, 1985 (intermediate fiddler crab)
Uca major Herbst, 1782 (greater fiddler crab)
†Uca marinae Dominguez-Alonso, 2008
Uca maracoani Latreille 1803 (Brazilian fiddler crab)
Uca monilifera Rathbun, 1914 (necklaced fiddler crab)
†Uca nitida Desmarest, 1822
†Uca oldroydi Rathbun, 1926
Uca ornata (Smith, 1870) (ornate fiddler crab)
Uca princeps (Smith, 1870) (large Mexican fiddler crab)
Uca stylifera (H.Milne Edwards, 1852) (styled fiddler crab)
Uca subcylindrica Stimpson, 1862 (Laguna Madre fiddler)
Xeruca
Xeruca formosensis (Rathbun, 1921)
Captivity
Fiddler crabs are occasionally kept as pets. The fiddler crabs sold in pet stores generally come from brackish water lagoons. Because they live in lower salinity water, pet stores may call them fresh-water crabs, but they cannot survive indefinitely in fresh water. Fiddler crabs have been known to attack small fish in captivity, as opposed to their natural feeding habits.
See also
Declawing of crabs
References
External links
Movie of two fiddler crabs (Uca lactea lactea) waving the enlarged claw - University of Kyoto
Info on systematics, phylogeny and morphology of fiddlers - Fiddlercrab.info
The colorful fiddler crabs in the mangrove forest of Borneo - mysabah.com
Ocypodoidea
Asymmetry
Arthropod common names | Fiddler crab | Physics | 4,852 |
40,712 | https://en.wikipedia.org/wiki/Ambient%20noise%20level | In atmospheric sounding and noise pollution, ambient noise level (sometimes called background noise level, reference sound level, or room noise level) is the background sound pressure level at a given location, normally specified as a reference level to study a new intrusive sound source.
Ambient sound levels are often measured in order to map sound conditions over a spatial regime to understand their variation with locale. In this case the product of the investigation is a sound level contour map. Alternatively ambient noise levels may be measured to provide a reference point for analyzing an intrusive sound to a given environment. For example, sometimes aircraft noise is studied by measuring ambient sound without presence of any overflights, and then studying the noise addition by measurement or computer simulation of overflight events. Or roadway noise is measured as ambient sound, prior to introducing a hypothetical noise barrier intended to reduce that ambient noise level.
Ambient noise level is measured with a sound level meter. It is usually measured in dB relative to a reference pressure of 0.00002 Pa, i.e., 20 μPa (micropascals) in SI units, because 20 μPa is the faintest sound the human ear can detect. A pascal is a newton per square meter. In the centimeter-gram-second system of units, the reference sound pressure for measuring ambient noise level is 0.0002 dyn/cm2, equal to 0.00002 N/m2. Most frequently, ambient noise levels are measured using a frequency weighting filter, the most common being the A-weighting scale, such that resulting measurements are denoted dB(A), or decibels on the A-weighting scale.
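The decibel figures described above follow the standard definition of sound pressure level; this is general acoustics rather than anything specific to ambient noise measurement. For a measured root-mean-square pressure p and reference pressure p0:

$$ L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \mathrm{dB}, \qquad p_0 = 20\ \mu\mathrm{Pa} $$

For example, a measured pressure of 0.02 Pa corresponds to 20 log10(0.02/0.00002) = 60 dB.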
See also
A-weighting
Background noise
Environmental noise
Noise barrier
Noise health effects
Noise level in the sonar equation
Noise pollution
Noise regulation
References
Noise
Noise pollution
Sound
Acoustics
Further reading
Assessment of Ambient Noise Levels | Ambient noise level | Physics | 389 |
4,171,738 | https://en.wikipedia.org/wiki/Silver%28I%2CIII%29%20oxide | Silver(I,III) oxide or tetrasilver tetroxide is the inorganic compound with the formula Ag4O4. It is a component of silver zinc batteries. It can be prepared by the slow addition of a silver(I) salt to a persulfate solution e.g. AgNO3 to a Na2S2O8 solution. It adopts an unusual structure, being a mixed-valence compound. It is a dark brown solid that decomposes with evolution of O2 in water. It dissolves in concentrated nitric acid to give brown solutions containing the Ag2+ ion.
Structure
Although its empirical formula, AgO, suggests silver in the +2 oxidation state, tetrasilver tetroxide is in fact diamagnetic: each unit has two monovalent silver atoms, each bonded to one oxygen atom, and two trivalent silver atoms, each bonded to three oxygen atoms. X-ray diffraction studies show that the silver atoms adopt two different coordination environments, one having two collinear oxide neighbours and the other four coplanar oxide neighbours. Tetrasilver tetroxide is therefore formulated as Ag(I)Ag(III)O2 or Ag2O·Ag2O3. It has previously been called silver peroxide, which is incorrect since it does not contain the peroxide ion, (O2)2−.
Uses
Tetrasilver tetroxide has been marketed under a trade name "Tetrasil." In 2010, the FDA issued a warning letter to an American company concerning the firm's marketing of Tetrasil and Genisil ointments of tetrasilver tetroxide for herpes and similar conditions.
References
Silver compounds
Mixed valence compounds
Transition metal oxides | Silver(I,III) oxide | Chemistry | 358 |
18,705,025 | https://en.wikipedia.org/wiki/Lists%20of%20solar%20eclipses | A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby obscuring the view of the Sun from a small part of the Earth, totally or partially.
By location
List of solar eclipses visible from Australia
List of solar eclipses visible from the British Isles
List of solar eclipses visible from China
List of solar eclipses visible from Israel
List of solar eclipses visible from the Philippines
List of solar eclipses visible from Russia
List of solar eclipses visible from Ukraine
List of solar eclipses visible from the United States
By time period of history
Pre-Modern
List of solar eclipses in antiquity (20th century BCE to 4th century CE/AD)
List of solar eclipses in the Middle Ages (5th to 15th century)
Modern history
List of solar eclipses in the 16th century
List of solar eclipses in the 17th century
List of solar eclipses in the 18th century
List of solar eclipses in the 19th century
List of solar eclipses in the 20th century
List of solar eclipses in the 21st century
Future
List of solar eclipses in the 22nd century
Solar eclipses after the modern era (22nd to 30th century)
See also
List of films featuring eclipses
Solar eclipses in fiction
Lists of lunar eclipses
External links
Five Millennium Catalog of Solar Eclipses: -1999 to +3000 (2000 BCE to 3000 CE) | Lists of solar eclipses | Astronomy | 294 |
25,465,667 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20April%2020%2C%202061 | A total solar eclipse will occur at the Moon's ascending node of orbit on Wednesday, April 20, 2061, with a magnitude of 1.0475. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Occurring about 1.1 days before perigee (on April 21, 2061, at 4:00 UTC), the Moon's apparent diameter will be larger.
Visibility
The eclipse will begin over southern Russia and eastern Ukraine at sunrise, and the Moon's shadow will move rapidly in a northeastern direction over western Kazakhstan (West Kazakhstan Region). The shadow will cover the Urals, race over the Arctic Ocean in a north-westerly direction and reach the Svalbard archipelago. At sunset the eclipse will end just off the coast of Greenland.
The greatest eclipse will occur in Russia, in the east of the Komi Republic (in Europe), about 120 km south-east of Pechora.
A partial solar eclipse will also be visible for parts of Eastern Europe, Asia, Alaska, and northwestern Canada.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
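As a quick arithmetic check of the final statement above, using only the 173-day figure already given:

$$ 2 \times 173\ \text{days} = 346\ \text{days} < 365\ \text{days}, $$

so two full eclipse seasons always fit within a single calendar year.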
Related eclipses
Eclipses in 2061
A total lunar eclipse on April 4.
A total solar eclipse on April 20.
A total lunar eclipse on September 29.
An annular solar eclipse on October 13.
Metonic
Preceded by: Solar eclipse of July 1, 2057
Followed by: Solar eclipse of February 5, 2065
Tzolkinex
Preceded by: Solar eclipse of March 9, 2054
Followed by: Solar eclipse of May 31, 2068
Half-Saros
Preceded by: Lunar eclipse of April 14, 2052
Followed by: Lunar eclipse of April 25, 2070
Tritos
Preceded by: Solar eclipse of May 20, 2050
Followed by: Solar eclipse of March 19, 2072
Solar Saros 149
Preceded by: Solar eclipse of April 9, 2043
Followed by: Solar eclipse of May 1, 2079
Inex
Preceded by: Solar eclipse of May 9, 2032
Followed by: Solar eclipse of March 31, 2090
Triad
Preceded by: Solar eclipse of June 20, 1974
Followed by: Solar eclipse of February 19, 2148
Solar eclipses of 2058–2061
Saros 149
Metonic series
Tritos series
Inex series
References
2061 in science
2061 04 20 | Solar eclipse of April 20, 2061 | Astronomy | 682 |
64,080,717 | https://en.wikipedia.org/wiki/TET%20enzymes | The TET enzymes are a family of ten-eleven translocation (TET) methylcytosine dioxygenases. They are instrumental in DNA demethylation. 5-Methylcytosine (see first Figure) is a methylated form of the DNA base cytosine (C) that often regulates gene transcription and has several other functions in the genome.
Demethylation by TET enzymes (see second Figure), can alter the regulation of transcription. The TET enzymes catalyze the hydroxylation of DNA 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC), and can further catalyse oxidation of 5hmC to 5-formylcytosine (5fC) and then to 5-carboxycytosine (5caC). 5fC and 5caC can be removed from the DNA base sequence by base excision repair and replaced by cytosine in the base sequence.
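The oxidation and repair steps named above can be condensed into a single scheme (a summary of the text, not a fully balanced mechanism):

$$ \text{5mC} \xrightarrow{\text{TET}} \text{5hmC} \xrightarrow{\text{TET}} \text{5fC} \xrightarrow{\text{TET}} \text{5caC} \xrightarrow{\text{base excision repair}} \text{C} $$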
TET enzymes have central roles in DNA demethylation required during embryogenesis, gametogenesis, memory, learning, addiction and pain perception.
TET proteins
The three related TET genes, TET1, TET2 and TET3, code respectively for three related mammalian proteins TET1, TET2, and TET3. All three proteins possess 5mC oxidase activity, but they differ in terms of domain architecture. TET proteins are large (~180- to 230-kDa) multidomain enzymes. All TET proteins contain a conserved double-stranded β-helix (DSBH) domain, a cysteine-rich domain, and binding sites for the cofactors Fe(II) and 2-oxoglutarate (2-OG) that together form the core catalytic region in the C terminus. In addition to their catalytic domain, full-length TET1 and TET3 proteins have an N-terminal CXXC zinc finger domain that can bind DNA. The TET2 protein lacks a CXXC domain, but IDAX, a gene neighbouring TET2, encodes the CXXC4 protein. IDAX is thought to play a role in regulating TET2 activity by facilitating its recruitment to unmethylated CpGs.
TET isoforms
The three TET genes are expressed as different isoforms, including at least two isoforms of TET1, three of TET2 and three of TET3. Different isoforms of the TET genes are expressed in different cells and tissues. The full-length canonical TET1 isoform appears virtually restricted to early embryos, embryonic stem cells and primordial germ cells (PGCs). The dominant TET1 isoform in most somatic tissues, at least in the mouse, arises from alternative promoter usage which gives rise to a short transcript and a truncated protein designated TET1s. The three isoforms of TET2 arise from different promoters. They are expressed and active in embryogenesis and differentiation of hematopoietic cells. The isoforms of TET3 are the full length form TET3FL, a short form splice variant TET3s, and a form that occurs in oocytes designated TET3o. TET3o is created by alternative promoter use and contains an additional first N-terminal exon coding for 11 amino acids. TET3o only occurs in oocytes and the one cell stage of the zygote and is not expressed in embryonic stem cells or in any other cell type or adult mouse tissue tested. Whereas TET1 expression can barely be detected in oocytes and zygotes, and TET2 is only moderately expressed, the TET3 variant TET3o shows extremely high levels of expression in oocytes and zygotes, but is nearly absent at the 2-cell stage. It appears that TET3o, high in oocytes and zygotes at the one cell stage, is the major TET enzyme utilized when almost 100% rapid demethylation occurs in the paternal genome just after fertilization and before DNA replication begins (see DNA demethylation).
TET specificity
Many different proteins bind to particular TET enzymes and recruit the TETs to specific genomic locations. In some studies, further analysis is needed to determine whether the interaction per se mediates the recruitment or whether the interacting partner instead helps to establish a favourable chromatin environment for TET binding. Studies of TET1-depleted and TET2-depleted cells revealed distinct target preferences of these two enzymes, with TET1 preferring promoters and TET2 preferring the gene bodies of highly expressed genes and enhancers.
The three mammalian DNA methyltransferases (DNMTs) show a strong preference for adding a methyl group to the 5 carbon of a cytosine where a cytosine nucleotide is followed by a guanine nucleotide in the linear sequence of bases along its 5' → 3' direction (at CpG sites). This forms a 5mCpG site. More than 98% of DNA methylation occurs at CpG sites in mammalian somatic cells. Thus TET enzymes largely initiate demethylation at 5mCpG sites.
Oxoguanine glycosylase (OGG1) is one example of a protein that recruits a TET enzyme. TET1 is able to act on 5mCpG if an ROS has first acted on the guanine to form 8-hydroxy-2'-deoxyguanosine (8-OHdG or its tautomer 8-oxo-dG), resulting in a 5mCp-8-OHdG dinucleotide (see Figure). After formation of 5mCp-8-OHdG, the base excision repair enzyme OGG1 binds to the 8-OHdG lesion without immediate excision (see Figure). Adherence of OGG1 to the 5mCp-8-OHdG site recruits TET1, allowing TET1 to oxidize the 5mC adjacent to 8-OHdG. This initiates the demethylation pathway.
EGR1 is another example of a protein that recruits a TET enzyme. EGR1 has an important role in learning and memory. When a new event such as fear conditioning causes a memory to be formed, EGR1 messenger RNA is rapidly and selectively up-regulated in subsets of neurons in specific brain regions associated with learning and memory formation. TET1s is the predominant isoform of TET1 that is expressed in neurons. When EGR1 proteins are expressed, they appear to bring TET1s to about 600 sites in the neuron genome. Then EGR1 and TET1 appear to cooperate in demethylating and thereby activating the expression of genes downstream of the EGR1 binding sites in DNA.
TET processivity
TET processivity can be viewed at three levels: physical, chemical and genetic. Physical processivity refers to the ability of a TET protein to slide along the DNA from one CpG site to another. An in vitro study showed that DNA-bound TET does not preferentially oxidize other CpG sites on the same DNA molecule, indicating that TET is not physically processive. Chemical processivity refers to the ability of TET to catalyze the oxidation of 5mC iteratively to 5caC without releasing its substrate. It appears that TET can work through both chemically processive and non-processive mechanisms depending on reaction conditions. Genetic processivity refers to the genetic outcome of TET-mediated oxidation in the genome, as shown by mapping of the oxidized bases. In mouse embryonic stem cells, many genomic regions or CpG sites are modified so that 5mC is changed to 5hmC but not to 5fC or 5caC, whereas at many other CpG sites 5mCs are modified to 5fC or 5caC but not 5hmC, suggesting that 5mC is processed to different states at different genomic regions or CpG sites.
TET enzyme activity
TET enzymes are dioxygenases in the family of alpha-ketoglutarate-dependent hydroxylases. A TET enzyme is an alpha-ketoglutarate (α-KG) dependent dioxygenase that catalyses an oxidation reaction by incorporating a single oxygen atom from molecular oxygen (O2) into its substrate, 5-methylcytosine in DNA (5mC), to produce the product 5-hydroxymethylcytosine in DNA. This conversion is coupled with the oxidation of the co-substrate α-KG to succinate and carbon dioxide (see Figure).
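The overall reaction described in the preceding paragraph can be written as a condensed scheme (a summary of the text above, not a fully balanced mechanism):

$$ \text{5mC} + \alpha\text{-ketoglutarate} + \mathrm{O_2} \;\xrightarrow{\text{TET, Fe(II)}}\; \text{5hmC} + \text{succinate} + \mathrm{CO_2} $$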
The first step involves the binding of α-KG and 5-methylcytosine to the TET enzyme active site. The TET enzymes each harbor a core catalytic domain with a double-stranded β-helix fold that contains the crucial metal-binding residues found in the family of Fe(II)/α-KG-dependent oxygenases. α-KG coordinates as a bidentate ligand (connected at two points) to Fe(II) (see Figure), while the 5mC is held by a noncovalent force in close proximity. The TET active site contains a highly conserved triad motif, in which the catalytically essential Fe(II) is held by two histidine residues and one aspartic acid residue (see Figure). The triad binds to one face of the Fe center, leaving three labile sites available for binding α-KG and O2 (see Figure). TET then acts to convert 5-methylcytosine to 5-hydroxymethylcytosine while α-ketoglutarate is converted to succinate and CO2.
Alternate TET activities
The TET proteins also have activities that are independent of DNA demethylation. These include, for instance, TET2 interaction with O-linked N-acetylglucosamine (O-GlcNAc) transferase to promote histone O-GlcN acylation to affect transcription of target genes.
TET functions
Early embryogenesis
The mouse sperm genome is 80–90% methylated at its CpG sites in DNA, amounting to about 20 million methylated sites. After fertilization, early in the first day of embryogenesis, the paternal chromosomes are almost completely demethylated in six hours by an active TET-dependent process, before DNA replication begins (blue line in Figure).
Demethylation of the maternal genome occurs by a different process. In the mature oocyte, about 40% of its CpG sites in DNA are methylated. In the pre-implantation embryo up to the blastocyst stage (see Figure), the only methyltransferase present is an isoform of DNMT1 designated DNMT1o. It appears that demethylation of the maternal chromosomes largely takes place by blockage of the methylating enzyme DNMT1o from entering the nucleus except briefly at the 8 cell stage (see DNA demethylation). The maternal-origin DNA thus undergoes passive demethylation by dilution of the methylated maternal DNA during replication (red line in Figure). The morula (at the 16 cell stage), has only a small amount of DNA methylation (black line in Figure).
Gametogenesis
The newly formed primordial germ cells (PGC) in the implanted embryo devolve from the somatic cells at about day 7 of embryogenesis in the mouse. At this point the PGCs have high levels of methylation. These cells migrate from the epiblast toward the gonadal ridge. As reviewed by Messerschmidt et al., the majority of PGCs are arrested in the G2 phase of the cell cycle while they migrate toward the hindgut during embryo days 7.5 to 8.5. Then demethylation of the PGCs takes place in two waves. There is both passive and active, TET-dependent demethylation of the primordial germ cells. At day 9.5 the primordial germ cells begin to rapidly replicate going from about 200 PGCs at embryo day 9.5 to about 10,000 PGCs at day 12.5. During days 9.5 to 12.5 DNMT3a and DNMT3b are repressed and DNMT1 is present in the nucleus at a high level. But DNMT1 is unable to methylate cytosines during days 9.5 to 12.5 because the UHRF1 gene (also known as NP95) is repressed and UHRF1 is an essential protein needed to recruit DNMT1 to replication foci where maintenance DNA methylation takes place. This is a passive, dilution form of demethylation.
In addition, from embryo day 9.5 to 13.5 there is an active form of demethylation. As indicated in the Figure of the demethylation pathway above, two enzymes are central to active demethylation. These are a ten-eleven translocation (TET) methylcytosine dioxygenase and thymine-DNA glycosylase (TDG). One particular TET enzyme, TET1, and TDG are present at high levels from embryo day 9.5 to 13.5, and are employed in active TET-dependent demethylation during gametogenesis. PGC genomes display the lowest levels of DNA methylation of any cells in the entire life cycle of the mouse by embryonic day 13.5.
Learning and memory
Learning and memory have levels of permanence, differing from other mental processes such as thought, language, and consciousness, which are temporary in nature. Learning and memory can be either accumulated slowly (multiplication tables) or rapidly (touching a hot stove), but once attained, can be recalled into conscious use for a long time. Rats subjected to one instance of contextual fear conditioning create an especially strong long-term memory. At 24 hours after training, 9.17% of the genes in the genomes of rat hippocampus neurons were found to be differentially methylated. This included more than 2,000 differentially methylated genes at 24 hours after training, with over 500 genes being demethylated. Similar results to that in the rat hippocampus were also obtained in mice with contextual fear conditioning.
The hippocampus region of the brain is where contextual fear memories are first stored (see Figure), but this storage is transient and does not remain in the hippocampus. In rats, contextual fear conditioning is abolished if hippocampectomy is performed just one day after conditioning, but rats retain a considerable amount of contextual fear when hippocampectomy is delayed by four weeks. In mice examined at 4 weeks after conditioning, the hippocampus methylations and demethylations were reversed (the hippocampus is needed to form memories but memories are not stored there), while substantial differential CpG methylation and demethylation occurred in cortical neurons during memory maintenance. There were 1,223 differentially methylated genes in the anterior cingulate cortex (see Figure) of mice four weeks after contextual fear conditioning. Thus, while there were many methylations in the hippocampus shortly after memory was formed, all these hippocampus methylations were demethylated as soon as four weeks later.
Li et al. reported one example of the relationship between expression of a TET protein, demethylation and memory while using extinction training. Extinction training is the disappearance of a previously learned behavior when the behavior is not reinforced.
A comparison between infralimbic prefrontal cortex (ILPFC) neuron samples derived from mice trained to fear an auditory cue and extinction-trained mice revealed dramatic experience-dependent genome-wide differences in the accumulation of 5-hmC in the ILPFC in response to learning. Extinction training led to a significant increase in TET3 messenger RNA levels within cortical neurons. TET3 was selectively activated within the adult neo-cortex in an experience-dependent manner.
A short hairpin RNA (shRNA) is an artificial RNA molecule with a tight hairpin turn that can be used to silence target gene expression via RNA interference. Mice trained in the presence of TET3-targeted shRNA showed a significant impairment in fear extinction memory.
Addiction
The nucleus accumbens (NAc) has a significant role in addiction. In the nucleus accumbens of mice, repeated cocaine exposure resulted in reduced TET1 messenger RNA (mRNA) and reduced TET1 protein expression. Similarly, there was a ~40% decrease in TET1 mRNA in the NAc of human cocaine addicts examined postmortem.
As indicated above in learning and memory, a short hairpin RNA (shRNA) is an artificial RNA molecule with a tight hairpin turn that can be used to silence target gene expression via RNA interference. Feng et al. injected shRNA targeted to TET1 in the NAc of mice. This could reduce TET1 expression in the same manner as reduction of TET1 expression with cocaine exposure. They then used an indirect measure of addiction, conditioned place preference. Conditioned place preference can measure the amount of time an animal spends in an area that has been associated with cocaine exposure, and this can indicate an addiction to cocaine. Reduced Tet1 expression caused by shRNA injected into the NAc robustly enhanced cocaine place conditioning.
Pain (nociception)
As described in the article Nociception, nociception is the sensory nervous system's response to harmful stimuli, such as a toxic chemical applied to a tissue. In nociception, chemical stimulation of sensory nerve cells called nociceptors produces a signal that travels along a chain of nerve fibers via the spinal cord to the brain. Nociception triggers a variety of physiological and behavioral responses and usually results in a subjective experience, or perception, of pain.
Work by Pan et al. first showed that TET1 and TET3 proteins are normally present in the spinal cords of mice. They used a pain-inducing model of intraplantar injection of 5% formalin into the dorsal surface of the mouse hindpaw and measured the time spent licking the hindpaw as a measure of induced pain. Protein expression of TET1 and TET3 increased by 152% and 160%, respectively, by 2 hours after formalin injection. Forced reduction of expression of TET1 or TET3 by spinal injection of Tet1-siRNA or Tet3-siRNA for three consecutive days before formalin injection alleviated the mice's perception of pain. On the other hand, forced overexpression of TET1 or TET3 for 2 consecutive days produced significant pain-like behavior, as evidenced by a decrease in the thermal pain threshold of the mice.
They further showed that the nociceptive pain effects occurred through TET mediated conversion of 5-methylcytosine to 5-hydroxymethylcytosine in the promoter of a microRNA designated miR-365-3p, thus increasing its expression. This microRNA, in turn, ordinarily targets (decreases expression of) the messenger RNA of Kcnh2, that codes for a protein known as Kv11.1 or KCNH2. KCNH2 is the alpha subunit of a potassium ion channel in the central nervous system. Forced decrease in expression of TET1 or TET3 through pre-injection of siRNA reversed the decrease of KCNH2 protein in formalin-treated mice.
References
Gene expression
Epigenetics
Further reading | TET enzymes | Chemistry,Biology | 4,141 |
52,575,587 | https://en.wikipedia.org/wiki/Prefrontal%20synthesis | Prefrontal synthesis (PFS, also known as mental synthesis) is the conscious purposeful process of synthesizing novel mental images. PFS is neurologically different from the other types of imagination, such as simple memory recall and dreaming. Unlike dreaming, which is spontaneous and not controlled by the prefrontal cortex (PFC), PFS is controlled by and completely dependent on the intact lateral prefrontal cortex. Unlike simple memory recall that involves activation of a single neuronal ensemble (NE) encoded at some point in the past, PFS involves active combination of two or more object-encoding neuronal ensembles (objectNE). The mechanism of PFS is hypothesized to involve synchronization of several independent objectNEs. When objectNEs fire out-of-sync, the objects are perceived one at a time. However, once those objectNEs are time-shifted by the lateral PFC to fire in-phase with each other, they are consciously experienced as one unified object or scene.
History of the term
The earliest reference to mental synthesis is found in the doctoral dissertation of S. J. Rowton written in 1864. Paraphrasing Cicero’s description of nature that can only be unified in someone’s mind, S. J. Rowton writes: "... there cannot be one thing unless by a mental synthesis of many things or parts ..."
In the 20th century the term mental synthesis was often used in psychology to describe the experiments of combinatorial nature. In a common experimental setup, subjects are instructed to mentally assemble the verbally described shapes in various ways. For example, the shapes may have been the capital letters ‘J’ and ‘D’, and the subject would then be asked to combine them into as many objects as possible, with size being flexible. A suitable answer in this example would be: an umbrella. The performance in this task is then quantified by counting the number of legitimate patterns that participants construct using the presented shapes.
As the neurobiological study of imagination advanced in the 21st century, there was a need to distinguish the neurologically distinct components of imagination: first in terms of their dependence on the lateral PFC and second in terms of the number of involved neuronal ensembles. As a result, "mental synthesis" was adapted to describe the active process of assembling two or more independent objectNEs from memory into novel combinations. The term "prefrontal synthesis" was later proposed for use in place of "mental synthesis" in order to emphasize the role of the PFC and further distance this type of voluntary imagination from other types of involuntary imagination, such as REM-sleep dreaming, day-time dreaming, hallucination, and spontaneous insight.
There is evidence that a deficit in PFS in humans presents as language which is "impoverished and show[s] an apparent diminution of the capacity to 'propositionize'. The length and complexity of sentences are reduced. There is a dearth of dependent clauses and, more generally, an underutilization of what Chomsky characterizes as the potential for recursiveness of language".
Neuroscience of prefrontal synthesis
The mechanism of PFS is hypothesized to involve synchronization of several independent object-encoding neuronal ensembles (objectNEs). When objectNEs fire out-of-sync, the objects are perceived one at a time. However, once those objectNEs are time-shifted by the lateral prefrontal cortex (LPFC) to fire in-phase with each other, they are consciously experienced as one unified object or scene. The synchronization hypothesis has never been directly tested but is indirectly supported by several lines of experimental evidence. Furthermore, it is the most parsimonious way to explain the formation of new imaginary memories since the same mechanism of Hebbian learning ("neurons that fire together wire together") that is responsible for externally-driven sensory memories of objects and scenes can be also responsible for memorizing internally-constructed novel images, such as plans and engineering designs. In the process of formation of novel receptive memories, neurons are synchronized by simultaneous external stimulation (e.g., light reflected from a moving object is falling on the retina at the same time). In the process of formation of novel imaginary memories, neurons are synchronized by the LPFC during waking or spontaneously during dreaming. In both cases it is the synchronous firing of neurons that wires them together into new stable objectNEs that can later be consolidated into long-term memory.
See also
Art
Creativity
Fictional countries
Idea
Imagination
Intuition (psychology)
Mimesis
Sociological imagination
Truth
References
External links
Imagination
Neuroscience | Prefrontal synthesis | Biology | 958 |
51,759,243 | https://en.wikipedia.org/wiki/HD%2071334 | HD 71334 is a Sun-like star 126.7 light years (38.85 parsecs) from the Sun. HD 71334 is a G-type star and an older solar analog: at 8.1 billion years old it is older than the Sun, which is about 4.6 billion years old, and has passed its stable burning stage. HD 71334 has a lower metallicity than the Sun. It lies in the constellation of Puppis, one of the 88 modern constellations recognized by the International Astronomical Union. HD 71334 has an apparent magnitude of 7.8.
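The two distance figures quoted above are consistent with the standard conversion of roughly 3.26 light-years per parsec; the quick check below uses that general conversion factor, which is not stated in the article itself.

```python
# 1 parsec ≈ 3.2616 light-years (standard conversion, not from the article).
LY_PER_PARSEC = 3.2616

distance_pc = 38.85
distance_ly = distance_pc * LY_PER_PARSEC
print(round(distance_ly, 1))   # 126.7, matching the quoted light-year distance
```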
Sun comparison
Chart compares the sun to HD 71334.
See also
List of nearest stars
References
71334
G-type main-sequence stars
Solar analogs
Puppis
9263
Durchmusterung objects
041317 | HD 71334 | Astronomy | 191 |
11,437,949 | https://en.wikipedia.org/wiki/Mycosphaerella%20berkeleyi | Mycosphaerella berkeleyi is a fungal plant pathogen. It is the causal agent of the peanut foliar disease Late Leaf Spot.
Hosts and symptoms
Hosts that suffer from late leaf spot include groundnut (peanut) species of the genus Arachis, most notably the cultivated peanut Arachis hypogaea. The form of the late leaf spot fungus that produces sexual spores is referred to as Mycosphaerella berkeleyi, whereas the asexual (conidial) state is referred to as Cercosporidium personatum. Symptoms of late leaf spot of peanut usually appear between 30 and 50 days after planting. Symptoms include dark brown to black pin-point spots on the upper and under sides of the leaf surface, in contrast to the fewer, lighter brown spots of early leaf spot of peanut. Late leaf spot of peanut produces symptoms later in the season. Its spores can be seen without magnification and give the spot a velvety appearance, whereas the spores of early leaf spot of peanut can be seen only with higher magnification. Even though these differences are slight, they help to distinguish between the two pathogens.
Importance
Late leaf spot of peanut is a serious disease that occurs wherever peanuts are grown worldwide, including Oklahoma and the southern USA, Fiji, the Solomon Islands, and Tonga. This foliar disease causes early death of the leaves and dramatic yield loss, estimated to range from 10% to 80% depending on the environment and the availability of control methods. In the USA, where fungicide application is a typical management practice, yield losses are less frequent than in the semi-arid tropics, where fungicides are less available. It is estimated that Cercosporidium personatum reduces yields by 50% or more in Pacific island countries. Early detection is therefore crucial, and management must be implemented once the disease has been recognized; early symptom recognition and well-timed management strategies are both valuable.
Management
Cultural controls help to delay the onset and development of symptoms and reduce the level of primary inoculum present. The primary inoculum that causes the onset of symptoms is produced as microscopic spores called conidia in soil residue. Large amounts of peanut residue in fields where peanuts are cropped seasonally usually result in the progression of late leaf spot, so crop rotation along with tillage practices is advised. Since longer periods of leaf wetness are required for disease development, frequent irrigation can increase humidity and leaf wetness; growers are instead encouraged to apply small amounts of water regularly in order to maintain a drier canopy. New crops should be planted as far away as possible from previous ones, and not downwind from one another, because late leaf spot spores travel long distances by wind dispersal between neighboring fields.
Different varieties of peanut differ in their reaction to the pathogen, but none have proven resistant enough to serve as a control method on their own. Spanish varieties are the most susceptible, Virginia varieties are intermediate, and runner varieties are partially resistant.
Specific chemical controls are used to prevent yield loss and must be applied within a narrow time period to be most effective; spraying should begin as soon as symptoms start to develop. Fungicide application is recommended on a set 14-day calendar schedule, or according to a weather-based leaf spot advisory. In fields that utilize crop rotation, fungicides should first be sprayed during the early pod stage (R3), which typically occurs during July but can vary with the environment. After the first spray, the grower should continue to apply fungicides every 14 days or according to the leaf spot advisory. Chlorothalonil (Bravo; various generic brands) is among the most successful fungicides and carries a reduced risk of resistance. An alternative to calendar sprays is to spray crops based on weather patterns, but this method has proven less effective than calendar treatment approaches. Following harvest, growers should collect, burn, or bury the remains of the crops to prevent the soil-borne pathogen from surviving and causing future disease outbreaks.
See also
List of Mycosphaerella species
References
berkeleyi
Fungal plant pathogens and diseases
Fungi described in 1885
Fungus species | Mycosphaerella berkeleyi | Biology | 914 |
78,178,579 | https://en.wikipedia.org/wiki/Reinhold%20Einwallner | Reinhold Einwallner (born 13 May 1973) is an Austrian politician and member of the National Council. A member of the Social Democratic Party, he has represented Vorarlberg since November 2017. He was a member of the Landtag of Vorarlberg from October 2014 to October 2017 and a member of the Federal Council from October 2004 to October 2009.
Einwallner was born on 13 May 1973 in Bruck an der Mur. He trained to be an optician at a vocational school from 1988 to 1992 before studying optometry at a higher technical college in Hall in Tirol from 1994 to 1997. He has been working as an optician since 1997 and has owned his own business since 2003. He was a member of the municipal councils in Hörbranz (2000 to 2015) and Bregenz (since 2020). He has held various positions in the Social Democratic Party (SPÖ)'s Vorarlberg branch and at the federal level.
Einwallner was appointed to the Federal Council by the Landtag of Vorarlberg following the 2004 state election. He was elected to the Landtag of Vorarlberg at the 2014 state election. He was elected to the National Council at the 2017 legislative election.
Einwallner is married and has one child.
References
External links
1973 births
Living people
Members of the 26th National Council (Austria)
Members of the 27th National Council (Austria)
Members of the Federal Council (Austria)
Members of the Landtag of Vorarlberg
Opticians
People from Bregenz
Social Democratic Party of Austria politicians | Reinhold Einwallner | Astronomy | 316 |
4,802,556 | https://en.wikipedia.org/wiki/Rheoscopic%20fluid | In fluid mechanics (specifically rheology), rheoscopic fluids are fluids whose internal currents are visible as they flow. Such fluids are effective for visualizing dynamic currents, such as convection and laminar flow. They consist of microscopic crystalline platelets, such as mica, metallic flakes, or fish scales, suspended in a fluid such as water or glycol stearate.
When the fluid is put in motion, the suspended particles orient themselves with the local fluid shear. With appropriate illumination, the particle-filled fluid will reflect differing intensities of light.
A Kalliroscope is an art device/technique based on rheoscopic fluids (using crystalline guanine as the indicator particles) invented by artist Paul Matisse.
See also
Reynolds number
References
External links
University of Chicago Materials Research Centre Demonstration
Instructables: Making Rheoscopic fluid
Paul Matisse, rheoscopist
artistic techniques
fluid dynamics
educational toys | Rheoscopic fluid | Chemistry,Engineering | 195 |
12,809,051 | https://en.wikipedia.org/wiki/UNIFAC | In statistical thermodynamics, the UNIFAC method (UNIQUAC Functional-group Activity Coefficients) is a semi-empirical system for the prediction of non-electrolyte activity in non-ideal mixtures. UNIFAC uses the functional groups present on the molecules that make up the liquid mixture to calculate activity coefficients. By using interactions for each of the functional groups present on the molecules, as well as some binary interaction coefficients, the activity of each component of the solution can be calculated. This information can be used to predict liquid-phase equilibria, which is useful in many thermodynamic calculations, such as chemical reactor design and distillation calculations.
The UNIFAC model was first published in 1975 by Fredenslund, Jones and John Prausnitz, a group of chemical engineering researchers from the University of California. Subsequently, they and other authors have published a wide range of UNIFAC papers extending the capabilities of the model, through the development of new UNIFAC model parameters and the revision of existing ones. UNIFAC is an attempt by these researchers to provide a flexible liquid-equilibria model for wider use in chemistry and the chemical and process engineering disciplines.
Introduction
A particular problem in the area of liquid-state thermodynamics is the sourcing of reliable thermodynamic constants. These constants are necessary for the successful prediction of the free energy state of the system; without this information it is impossible to model the equilibrium phases of the system.
Obtaining this free energy data is not a trivial problem, and requires careful experiments, such as calorimetry, to successfully measure the energy of the system. Even when this work is performed it is infeasible to attempt to conduct this work for every single possible class of chemicals, and the binary, or higher, mixtures thereof. To alleviate this problem, free energy prediction models, such as UNIFAC, are employed to predict the system's energy based on a few previously measured constants.
It is possible to calculate some of these parameters using ab initio methods like COSMO-RS, but the results should be treated with caution, because ab initio predictions can deviate considerably from experiment. UNIFAC predictions can likewise be inaccurate, and for both methods it is advisable to validate the energies obtained from these calculations experimentally.
UNIFAC correlation
The UNIFAC correlation attempts to break down the problem of predicting interactions between molecules by describing molecular interactions based upon the functional groups attached to the molecule. This is done in order to reduce the sheer number of binary interactions that would be needed to be measured to predict the state of the system.
Chemical activity
The activity coefficient of the components in a system is a correction factor that accounts for deviations of real systems from that of an ideal solution, which can either be measured via experiment or estimated from chemical models (such as UNIFAC). By adding a correction factor, known as the activity (the activity of the ith component), to the liquid phase fraction of a liquid mixture, some of the effects of the real solution can be accounted for. The activity of a real chemical is a function of the thermodynamic state of the system, i.e. temperature and pressure.
Equipped with the activity coefficients and a knowledge of the constituents and their relative amounts, phenomena such as phase separation and vapour-liquid equilibria can be calculated. UNIFAC attempts to be a general model for the successful prediction of activity coefficients.
Model parameters
The UNIFAC model splits up the activity coefficient for each species in the system into two components: a combinatorial component \(\gamma^c\) and a residual component \(\gamma^r\). For the \(i\)-th molecule, the activity coefficients are broken down as per the following equation:

\[ \ln \gamma_i = \ln \gamma_i^c + \ln \gamma_i^r \]
In the UNIFAC model, there are three main parameters required to determine the activity for each molecule in the system. Firstly there are the group surface area and volume contributions \(Q\) and \(R\), obtained from the van der Waals group surface areas and volumes. These parameters depend purely upon the individual functional groups on the host molecules. Finally there is the binary group interaction parameter \(a_{mn}\) (equation in the "residual" section), which is related to the interaction energy of group pairs. These parameters must be obtained either through experiments, via data fitting, or from molecular simulation.
Combinatorial
The combinatorial component of the activity is contributed to by several terms in its equation (below), and is the same as for the UNIQUAC model:

\[ \ln \gamma_i^c = \ln \frac{\phi_i}{x_i} + \frac{z}{2}\, q_i \ln \frac{\theta_i}{\phi_i} + L_i - \frac{\phi_i}{x_i} \sum_j x_j L_j \]

where \(\phi_i\) and \(\theta_i\) are the molar weighted segment and area fractional components for the \(i\)-th molecule in the total system, defined by the equations below; \(L_i\) is a compound parameter of \(z\), \(r_i\) and \(q_i\); and \(z\) is the coordination number of the system, but the model is found to be relatively insensitive to its value and it is frequently quoted as a constant having the value of 10:

\[ \phi_i = \frac{x_i r_i}{\sum_j x_j r_j}, \qquad \theta_i = \frac{x_i q_i}{\sum_j x_j q_j}, \qquad L_i = \frac{z}{2}\left(r_i - q_i\right) - \left(r_i - 1\right) \]

\(r_i\) and \(q_i\) are calculated from the group volume and surface-area contributions \(R_k\) and \(Q_k\) (usually obtained via tabulated values), as well as the number of occurrences \(\nu_k^{(i)}\) of each functional group \(k\) on the molecule, such that:

\[ r_i = \sum_k \nu_k^{(i)} R_k, \qquad q_i = \sum_k \nu_k^{(i)} Q_k \]
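The combinatorial term can be evaluated directly once r and q are known for each molecule. The following is a minimal sketch in Python; the mole fractions and the r, q values are illustrative placeholders rather than tabulated UNIFAC group parameters.

```python
import numpy as np

def unifac_combinatorial(x, r, q, z=10.0):
    """ln(gamma_i^c) for every component, given mole fractions x and
    molecular r, q values (placeholders here, not tabulated UNIFAC data)."""
    x, r, q = (np.asarray(v, dtype=float) for v in (x, r, q))
    phi = x * r / np.sum(x * r)            # segment (volume) fractions
    theta = x * q / np.sum(x * q)          # area fractions
    L = (z / 2.0) * (r - q) - (r - 1.0)    # compound parameter L_i
    return (np.log(phi / x)
            + (z / 2.0) * q * np.log(theta / phi)
            + L
            - (phi / x) * np.sum(x * L))

# Equimolar binary mixture of two hypothetical components.
print(unifac_combinatorial(x=[0.5, 0.5], r=[2.11, 3.19], q=[1.97, 2.40]))
```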
Residual
The residual component of the activity is due to interactions between groups present in the system, with the original paper referring to the concept of a "solution-of-groups". The residual component of the activity for the \(i\)-th molecule containing \(n\) unique functional groups can be written as follows:

\[ \ln \gamma_i^r = \sum_k \nu_k^{(i)} \left( \ln \Gamma_k - \ln \Gamma_k^{(i)} \right) \]

where \(\Gamma_k^{(i)}\) is the activity of an isolated group \(k\) in a solution consisting only of molecules of type \(i\). The formulation of the residual activity ensures that, for the limiting case of a single molecule in a pure component solution, the activity is equal to 1; as by the definition of \(\Gamma_k^{(i)}\), one finds that \(\ln \gamma_i^r\) will be zero. The following formula is used for both \(\Gamma_k\) and \(\Gamma_k^{(i)}\):

\[ \ln \Gamma_k = Q_k \left[ 1 - \ln \left( \sum_m \Theta_m \Psi_{mk} \right) - \sum_m \frac{\Theta_m \Psi_{km}}{\sum_n \Theta_n \Psi_{nm}} \right] \]

In this formula \(\Theta_m\) is the area fraction of group \(m\), summed over all the different groups, and is somewhat similar in form, but not the same as, \(\theta_i\). \(\Psi_{mn}\) is the group interaction parameter and is a measure of the interaction energy between groups; it is calculated using an Arrhenius equation (albeit with a pseudo-constant of value 1). \(X_n\) is the group mole fraction, which is the number of groups \(n\) in the solution divided by the total number of groups:

\[ \Theta_m = \frac{Q_m X_m}{\sum_n Q_n X_n}, \qquad \Psi_{mn} = \exp\left( -\frac{U_{mn} - U_{nn}}{R T} \right) \]

\(U_{mn}\) is the energy of interaction between groups \(m\) and \(n\), with SI units of joules per mole, and \(R\) is the ideal gas constant. Note that it is not the case that \(\Psi_{mn} = \Psi_{nm}\), giving rise to a non-reflexive parameter. The equation for the group interaction parameter can be simplified to the following:

\[ \Psi_{mn} = \exp\left( -\frac{a_{mn}}{T} \right), \qquad a_{mn} = \frac{U_{mn} - U_{nn}}{R} \]

Thus \(a_{mn}\) still represents the net energy of interaction between groups \(m\) and \(n\), but has the somewhat unusual units of absolute temperature (SI kelvins). These interaction energy values are obtained from experimental data, and are usually tabulated.
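The temperature dependence of the residual term enters entirely through the group interaction parameters. A minimal sketch, using made-up a_mn values (in kelvins) rather than tabulated UNIFAC interaction parameters:

```python
import numpy as np

# Illustrative (made-up) interaction parameters a_mn in kelvins; note the
# matrix is not symmetric, reflecting the non-reflexive nature of a_mn.
a_mn = np.array([[  0.0,  86.0],
                 [-12.0,   0.0]])

def psi(a, T):
    """Group interaction parameters Psi_mn = exp(-a_mn / T) at temperature T (K)."""
    return np.exp(-a / T)

print(psi(a_mn, 298.15))
print(psi(a_mn, 350.0))   # Psi, and hence the residual term, changes with T
```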
See also
Chemical equilibrium
Chemical thermodynamics
Fugacity
UNIQUAC – UNIversal QUasi-chemical Activity Coefficients
UNIFAC Consortium
PSRK – Predictive Soave–Redlich–Kwong
MOSCED – Modified Separation of Cohesive Energy Density Model (Estimation of activity coefficients at infinite dilution)
References
Further reading
Aage Fredenslund, Jürgen Gmehling and Peter Rasmussen, Vapor-liquid equilibria using UNIFAC : a group contribution method, Elsevier Scientific, New York, 1979
External links
UNIFAC structural groups and parameters
AIOMFAC online-model UNIFAC-based group-contribution model for calculation of activity coefficients in organic–inorganic mixtures.
Thermodynamic models | UNIFAC | Physics,Chemistry | 1,490 |
22,978,106 | https://en.wikipedia.org/wiki/RushmoreDrive | RushmoreDrive was the first search engine aimed at the Black community, launched in April 2008 and dissolved in 2009. It was co-founded by Johnny C. Taylor, Jr., Kevin McFall, and Brad Gebert. It delivered a blend of mainstream search results plus a layer of more relevant search results influenced by the Black community.
RushmoreDrive News was a tool that attempted to help the Black community find news headlines from the World Wide Web, including well known Black media, blogs and countless relevant on-line voices, as well as recognized mainstream news sources.
References
Internet search engines
IAC Inc. | RushmoreDrive | Technology | 123 |
631,336 | https://en.wikipedia.org/wiki/Atmospheric%20diffraction | Atmospheric diffraction is manifested in the following principal ways:
Optical atmospheric diffraction
Radio wave diffraction is the scattering of radio frequency or lower frequencies from the Earth's ionosphere, resulting in the ability to achieve greater distance radio broadcasting.
Sound wave diffraction is the bending of sound waves, as the sound travels around edges of geometric objects. This produces the effect of being able to hear even when the source is blocked by a solid object. The sound waves bend appreciably around the solid object.
However, if the object has a diameter greater than the acoustic wavelength, a 'sound shadow' is cast behind the object where the sound is inaudible. (Note: some sound may be propagated through the object depending on material).
Optical atmospheric diffraction
When light travels through thin clouds made up of nearly uniform sized water or aerosol droplets or ice crystals, diffraction, or bending of light, occurs as the light is diffracted by the edges of the particles. The degree of bending depends on the wavelength (color) of the light and the size of the particles. The result is a pattern of rings, which seem to emanate from the Sun, the Moon, a planet, or another astronomical object. The most distinct part of this pattern is a central, nearly white disk. This resembles an atmospheric Airy disk but is not actually an Airy disk. It is different from rainbows and halos, which are mainly caused by refraction.
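The wavelength and particle-size dependence can be illustrated with the standard small-angle diffraction approximation, in which the first dark ring of the pattern lies at an angular radius of roughly 1.22λ/d. This formula and the droplet size below are assumptions chosen for illustration, not values given in the text.

```python
import math

def corona_ring_radius_deg(wavelength_m, droplet_diameter_m):
    """Approximate angular radius (degrees) of the first dark ring,
    using theta ≈ 1.22 * lambda / d (assumed small-angle approximation)."""
    return math.degrees(1.22 * wavelength_m / droplet_diameter_m)

# 10 µm droplets (an assumed, illustrative size):
print(round(corona_ring_radius_deg(650e-9, 10e-6), 1))   # red light: about 4.5 degrees
print(round(corona_ring_radius_deg(450e-9, 10e-6), 1))   # blue light bends less: about 3.1 degrees
```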
The left photo shows a diffraction ring around the rising Sun caused by a veil of aerosol. This effect dramatically disappeared when the Sun rose high enough until the pattern was no longer visible on the Earth's surface. This phenomenon is sometimes called the corona effect, not to be confused with the solar corona.
On the right is a 1/10-second exposure showing an overexposed full moon. The Moon is seen through thin vaporous clouds, which glow with a bright disk surrounded by an illuminated red ring. A longer exposure would show more faint colors beyond the outside red ring.
Another form of atmospheric diffraction or bending of light occurs when light moves through fine layers of particulate dust trapped primarily in the middle layers of the troposphere. This effect differs from water-based atmospheric diffraction because the dust material is opaque, whereas water allows light to pass through it. This has the effect of tinting the light the color of the dust particles; this tinting can vary from red to yellow depending on geographical location. The other primary difference is that dust-based diffraction acts as a magnifier instead of creating a distinct halo. This occurs because the opaque matter does not share the lensing properties of water. The effect is to make an object appear visibly larger while being more indistinct, as the dust distorts the image. This effect varies largely based on the amount and type of dust in the atmosphere.
Radio wave propagation in the ionosphere
The ionosphere is a layer of partially ionized gases high above the majority of the Earth's atmosphere; these gases are ionized by cosmic rays originating on the Sun. When radio waves travel into this zone, which commences about 80 kilometers above the Earth, they experience diffraction in a manner similar to the visible-light phenomenon described above. In this case some of the electromagnetic energy is bent in a large arc, such that it can return to the Earth's surface at a very distant point (on the order of hundreds of kilometers from the broadcast source). More remarkably, some of this radio wave energy bounces off the Earth's surface and reaches the ionosphere for a second time, at a distance even farther away than the first. Consequently, a high-powered transmitter can effectively broadcast over 1000 kilometers by using multiple "skips" off of the ionosphere, and at times of favorable atmospheric conditions, when good "skip" occurs, even a low-power transmitter can be heard halfway around the world. This often happens for "novice" radio amateurs ("hams"), who are limited by law to transmitters of no more than 65 watts. The Kon-Tiki expedition communicated regularly with a 6-watt transmitter from the middle of the Pacific. For more details see the "communications" part of the "Kon-Tiki expedition" entry in Wikipedia.
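A rough sense of how far a single "skip" can reach comes from simple spherical geometry: for a reflecting layer at height h above the Earth, a ray leaving the transmitter near grazing incidence can return to the ground at most about 2·R·arccos(R/(R+h)) away. The layer height used below is an assumed, illustrative value, and straight-line propagation is assumed (the gradual refraction within the layer is ignored).

```python
import math

EARTH_RADIUS_KM = 6371.0

def max_single_hop_km(layer_height_km, R=EARTH_RADIUS_KM):
    # Ground range to the horizon as seen from the reflecting layer, doubled:
    # transmitter -> layer -> ground, with straight-line (grazing) paths assumed.
    return 2.0 * R * math.acos(R / (R + layer_height_km))

print(round(max_single_hop_km(300.0)))   # assumed 300 km layer: on the order of 3,800 km per hop
```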
An exotic variant of this radio wave propagation has been examined to show that, theoretically, the ionospheric bounce could be greatly exaggerated if a high powered spherical acoustical wave were created in the ionosphere from a source on earth.
Acoustical diffraction near the Earth's surface
In the case of sound waves travelling near the Earth's surface, the waves are diffracted, or bent, as they pass a geometric edge, such as a wall or building. This phenomenon leads to a very important practical effect: that we can hear "around corners". Because of the frequencies involved, a considerable amount of the sound energy (on the order of ten percent) actually travels into this would-be sound "shadow zone". Visible light exhibits a similar effect, but, due to its much shorter wavelength, only a minute amount of light energy travels around a corner.
A useful branch of acoustics dealing with the design of noise barriers examines this acoustical diffraction phenomenon in quantitative detail to calculate the optimum height and placement of a soundwall or berm adjacent to a highway.
This phenomenon is also inherent in calculating the sound levels from aircraft noise, so that an accurate determination of topographic features may be understood. In that way one can produce sound level isopleths, or contour maps, which faithfully depict outcomes over variable terrain.
Bibliography
See also
Atmospheric refraction
Noise barrier
External links
Explanation and image gallery - Atmospheric Optics by Les Cowley
Diffraction
Atmosphere
Sound
Acoustics | Atmospheric diffraction | Physics,Chemistry,Materials_science | 1,176 |
71,715,388 | https://en.wikipedia.org/wiki/Cyber%20range | Cyber ranges are virtual environments used for cybersecurity, cyberwarfare training, simulation or emulation, and development of technologies related to cybersecurity. Their scale can vary drastically, from just a single node to an internet-like network.
See also
National Cyber Range
References
Computer security
Computer network security | Cyber range | Technology,Engineering | 64 |
17,253,604 | https://en.wikipedia.org/wiki/Rolofylline | Rolofylline (KW-3902) is an experimental diuretic which acts as a selective adenosine A1 receptor antagonist. It was discovered at NovaCardia, Inc. which was purchased by Merck & Co., Inc. in 2007.
Development of rolofylline was terminated on September 1, 2009, after the results of a large clinical trial (PROTECT) showed the drug to be no better than placebo for patients with acute heart failure. Participants given rolofylline did show some improvement in shortness of breath, but the drug did not prevent kidney damage or have any significant effect on overall treatment success. Rolofylline was also associated with a higher incidence of seizures and stroke.
References
Adenosine receptor antagonists
Diuretics
Xanthines | Rolofylline | Chemistry | 165 |
60,615,424 | https://en.wikipedia.org/wiki/Ian%20Hamley | Ian Hamley (born 1965) is a British academic who is the Diamond Professor of Physical Chemistry at the University of Reading. He is a soft matter scientist and physical chemist with research expertise in self-assembling molecules including polymers, peptides and other biomolecules. He has more than 400 published scientific papers. He is the author of 'The Physics of Block Copolymers', 'Introduction to Soft Matter', 'Block Copolymers in Solution', 'Introduction to Peptide Science', and 'Small-Angle Scattering: Theory, Instrumentation, Data and Applications', as well as several edited texts.
Career
After postdoctoral research at AMOLF (FOM Institute for Atomic and Molecular Physics, Amsterdam) and University of Minnesota, Hamley was appointed as lecturer in Physics at the University of Durham in 1993 where he worked until 1995. He moved to the Department of Chemistry at the University of Leeds in 1995 and was promoted to become Professor of Polymer Materials and Director of the Centre for Self-Organising Molecular Systems in 2004. He moved to the University of Reading as Diamond Professor of Physical Chemistry in 2005. This was a five-year, joint appointment with Diamond Light Source.
His past research concerned the self-assembly of block copolymers. Most recently he has developed interests in peptide and peptide-conjugate self-assembly, including molecules with bioactivity such as amyloid peptides, peptide hormones, antimicrobial peptides, peptides in cosmetic applications, and peptides with anti-cancer activity. Several of these show promise as therapeutics.
Awards and honours
Hamley was a Royal Society-Woolfson Research Merit Award Holder 2011–2016 and won the RSC Peter Day award for Materials Chemistry in 2016 and the MacroGroup UK Medal for Contribution to UK Polymer Science in 2016.
Lecturing career
Hamley has lectured at the University of Reading for many years specializing in physical chemistry teaching modules on thermodynamics and surface and interface chemistry.
References
British scientists
1965 births
Living people
Fellows of the Royal Society of Chemistry
Date of birth missing (living people)
Place of birth missing (living people)
Academics of the University of Reading
Academics of the University of Leeds
Alumni of the University of Reading
Alumni of the University of Southampton
Polymer scientists and engineers | Ian Hamley | Chemistry,Materials_science | 458 |
380,405 | https://en.wikipedia.org/wiki/Perspective%20%28graphical%29 | Linear or point-projection perspective is one of two types of graphical projection perspective in the graphic arts; the other is parallel projection. Linear perspective is an approximate representation, generally on a flat surface, of an image as it is seen by the eye. Perspective drawing is useful for representing a three-dimensional scene in a two-dimensional medium, like paper. It is based on the optical fact that, to an observer, an object appears N times smaller (in its linear dimensions) if it is moved N times farther from the eye than its original distance.
The most characteristic features of linear perspective are that objects appear smaller as their distance from the observer increases, and that they are subject to foreshortening, meaning that an object's dimensions parallel to the line of sight appear shorter than its dimensions perpendicular to the line of sight. All objects will recede to points in the distance, usually along the horizon line, but also above and below the horizon line depending on the view used.
Italian Renaissance painters and architects including Filippo Brunelleschi, Leon Battista Alberti, Masaccio, Paolo Uccello, Piero della Francesca and Luca Pacioli studied linear perspective, wrote treatises on it, and incorporated it into their artworks.
Overview
Linear or point-projection perspective works by placing an imaginary flat plane close to the object under observation, directly facing the observer's eye (i.e., the observer is on a normal, or perpendicular, line to the plane). Straight lines are then drawn from every point on the object to the observer's eye. Where those lines pass through the plane, they trace out a point-projection perspective image resembling what is seen by the observer.
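This construction amounts to scaling each point's horizontal and vertical offsets by the ratio of the picture plane's distance to the point's distance (similar triangles). A minimal sketch, assuming the eye at the origin looking along the +z axis and a picture plane at distance d; these conventions are chosen for illustration and are not taken from the text.

```python
# Project 3D scene points onto a picture plane at distance d from the eye.
# The eye is assumed to sit at the origin looking along +z.
def project(point, d=1.0):
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the eye")
    scale = d / z                 # similar triangles: farther points shrink
    return (x * scale, y * scale)

# Two posts of the same height, one twice as far away: the farther one
# appears half as tall on the picture plane.
print(project((0.0, 2.0, 5.0)))   # (0.0, 0.4)
print(project((0.0, 2.0, 10.0)))  # (0.0, 0.2)
```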
Examples of one-point perspective
Examples of two-point perspective
Examples of three-point perspective
Examples of curvilinear perspective
Additionally, a central vanishing point can be used (just as with one-point perspective) to indicate frontal (foreshortened) depth.
History
Early history
The earliest art paintings and drawings typically sized many objects and characters hierarchically according to their spiritual or thematic importance, not their distance from the viewer, and did not use foreshortening. The most important figures are often shown as the highest in a composition, also from hieratic motives, leading to the so-called "vertical perspective", common in the art of Ancient Egypt, where a group of "nearer" figures are shown below the larger figure or figures; simple overlapping was also employed to relate distance. Additionally, oblique foreshortening of round elements like shields and wheels is evident in Ancient Greek red-figure pottery.
Systematic attempts to evolve a system of perspective are usually considered to have begun around the fifth century BC in the art of ancient Greece, as part of a developing interest in illusionism allied to theatrical scenery. This was detailed within Aristotle's Poetics as skenographia: using flat panels on a stage to give the illusion of depth. The philosophers Anaxagoras and Democritus worked out geometric theories of perspective for use with skenographia. Alcibiades had paintings in his house designed using skenographia, so this art was not confined merely to the stage. Euclid in his Optics argues correctly that the perceived size of an object is not related to its distance from the eye by a simple proportion. In the first-century BC frescoes of the Villa of P. Fannius Synistor, multiple vanishing points are used in a systematic but not fully consistent manner.
Chinese artists made use of oblique projection from the first or second century until the 18th century. It is not certain how they came to use the technique; Dubery and Willats (1983) speculate that the Chinese acquired the technique from India, which acquired it from Ancient Rome, while others credit it as an indigenous invention of Ancient China. Oblique projection is also seen in Japanese art, such as in the Ukiyo-e paintings of Torii Kiyonaga (1752–1815).
By the later periods of antiquity, artists, especially those in less popular traditions, were well aware that distant objects could be shown smaller than those close at hand for increased realism, but whether this convention was actually used in a work depended on many factors. Some of the paintings found in the ruins of Pompeii show a remarkable realism and perspective for their time. It has been claimed that comprehensive systems of perspective were evolved in antiquity, but most scholars do not accept this. Hardly any of the many works where such a system would have been used have survived. A passage in Philostratus suggests that classical artists and theorists thought in terms of "circles" at equal distance from the viewer, like a classical semi-circular theatre seen from the stage. The roof beams in rooms in the Vatican Virgil, from about 400 AD, are shown converging, more or less, on a common vanishing point, but this is not systematically related to the rest of the composition.
Medieval artists in Europe, like those in the Islamic world and China, were aware of the general principle of varying the relative size of elements according to distance, but even more than classical art were perfectly ready to override it for other reasons. Buildings were often shown obliquely according to a particular convention. The use and sophistication of attempts to convey distance increased steadily during the period, but without a basis in a systematic theory. Byzantine art was also aware of these principles, but also used the reverse perspective convention for the setting of principal figures. Ambrogio Lorenzetti painted a floor with convergent lines in his Presentation at the Temple (1342), though the rest of the painting lacks perspective elements.
Renaissance
It is generally accepted that Filippo Brunelleschi conducted a series of experiments between 1415 and 1420, which included making drawings of various Florentine buildings in correct perspective. According to Vasari and Antonio Manetti, in about 1420, Brunelleschi demonstrated his discovery of perspective by having people look through a hole in the back of his painting. Through it, they would see a building such as the Florence Baptistery, for which the painting was made. When Brunelleschi lifted a mirror between the building and the painting, the mirror reflected the painting to an observer looking through the hole, so that the observer could compare how similar the building and the painting of it were. (The vanishing point is centered from the perspective of an experiment participant.) Brunelleschi applied this new system of perspective to his paintings around 1425.
This scenario is indicative, but faces several problems, that are still debated. First of all, nothing can be said for certain about the correctness of his perspective construction of the Baptistery of San Giovanni, because Brunelleschi's panel is lost. Second, no other perspective painting or drawing by Brunelleschi is known. (In fact, Brunelleschi was not known to have painted at all.) Third, in the account written by Antonio Manetti in his Vita di Ser Brunellesco at the end of the 15th century on Brunelleschi's panel, there is not a single occurrence of the word "experiment". Fourth, the conditions listed by Manetti are contradictory with each other. For example, the description of the eyepiece sets a visual field of 15°, much narrower than the visual field resulting from the urban landscape described.
Soon after Brunelleschi's demonstrations, nearly every interested artist in Florence and in Italy used geometrical perspective in their paintings and sculpture, notably Donatello, Masaccio, Lorenzo Ghiberti, Masolino da Panicale, Paolo Uccello, and Filippo Lippi. Not only was perspective a way of showing depth, it was also a new method of creating a composition. Visual art could now depict a single, unified scene, rather than a combination of several. Early examples include Masolino's St. Peter Healing a Cripple and the Raising of Tabitha, Donatello's The Feast of Herod, as well as Ghiberti's Jacob and Esau and other panels from the east doors of the Florence Baptistery. Masaccio (d. 1428) achieved an illusionistic effect by placing the vanishing point at the viewer's eye level in his Holy Trinity, and in The Tribute Money, it is placed behind the face of Jesus. In the late 15th century, Melozzo da Forlì first applied the technique of foreshortening (in Rome, Loreto, Forlì and others).
This overall story is based on qualitative judgments, and would need to be weighed against the material evaluations that have been conducted on Renaissance perspective paintings.
Apart from the paintings of Piero della Francesca, which are a model of the genre, the majority of 15th century works show serious errors in their geometric construction. This is true of Masaccio's Trinity fresco and of many works, including those by renowned artists like Leonardo da Vinci.
As shown by the quick proliferation of accurate perspective paintings in Florence, Brunelleschi likely understood (with help from his friend the mathematician Toscanelli), but did not publish, the mathematics behind perspective. Decades later, his friend Leon Battista Alberti wrote De pictura, a treatise on proper methods of showing distance in painting. Alberti's primary breakthrough was not to show the mathematics in terms of conical projections, as it actually appears to the eye. Instead, he formulated the theory based on planar projections, or how the rays of light, passing from the viewer's eye to the landscape, would strike the picture plane (the painting). He was then able to calculate the apparent height of a distant object using two similar triangles. The mathematics behind similar triangles is relatively simple, having been long ago formulated by Euclid. Alberti was also trained in the science of optics through the school of Padua and under the influence of Biagio Pelacani da Parma, who studied Alhazen's Book of Optics. This book, translated around 1200 into Latin, had laid the mathematical foundation for perspective in Europe.
Piero della Francesca elaborated on De pictura in his De Prospectiva pingendi in the 1470s, making many references to Euclid. Alberti had limited himself to figures on the ground plane and giving an overall basis for perspective. Della Francesca fleshed it out, explicitly covering solids in any area of the picture plane. Della Francesca also started the now common practice of using illustrated figures to explain the mathematical concepts, making his treatise easier to understand than Alberti's. Della Francesca was also the first to accurately draw the Platonic solids as they would appear in perspective. Luca Pacioli's 1509 Divina proportione (Divine Proportion), illustrated by Leonardo da Vinci, summarizes the use of perspective in painting, including much of Della Francesca's treatise. Leonardo applied one-point perspective as well as shallow focus to some of his works.
Two-point perspective was demonstrated as early as 1525 by Albrecht Dürer, who studied perspective by reading Piero and Pacioli's works, in his Unterweisung der Messung ("Instruction of the Measurement").
Limitations
Perspective images are created with reference to a particular center of vision for the picture plane. In order for the resulting image to appear identical to the original scene, a viewer must view the image from the exact vantage point used in the calculations relative to the image. Viewing from that point cancels out what would otherwise appear to be distortions in the image when it is viewed from a different point. For example, a sphere drawn in perspective will be stretched into an ellipse. These apparent distortions are more pronounced away from the center of the image, as the angle between a projected ray (from the scene to the eye) and the picture plane becomes more acute. Artists may choose to "correct" perspective distortions, for example by drawing all spheres as perfect circles, or by drawing figures as if centered on the direction of view. In practice, unless the viewer observes the image from an extreme angle, like standing far to the side of a painting, the perspective normally looks more or less correct. This is referred to as "Zeeman's Paradox".
See also
Anamorphosis
Camera angle
Cutaway drawing
Perspective control
Trompe-l'œil
Uki-e
Zograscope
Notes
References
Sources
Further reading
External links
Teaching Perspective in Art and Mathematics through Leonardo da Vinci's Work at Mathematical Association of America
Metaphysical Perspective in Ancient Roman-Wall Painting
How to Draw a Two Point Perspective Grid at Creating Comics
Perspective projection
Technical drawing
Functions and mappings
Composition in visual art
Italian inventions | Perspective (graphical) | Mathematics,Engineering | 2,580 |
851,493 | https://en.wikipedia.org/wiki/Amoxapine | Amoxapine, sold under the brand name Asendin among others, is a tricyclic antidepressant (TCA). It is the N-demethylated metabolite of loxapine. Amoxapine first received marketing approval in the United States in 1980, approximately 10 to 20 years after most of the other TCAs were introduced in the United States.
Medical uses
Amoxapine is used in the treatment of major depressive disorder. Compared to other antidepressants it is believed to have a faster onset of action, with therapeutic effects seen within four to seven days. In excess of 80% of patients who do respond to amoxapine are reported to respond within two weeks of the beginning of treatment. It also has properties similar to those of the atypical antipsychotics and may behave as one; it is thus sometimes used off-label in the treatment of schizophrenia. Despite its apparent lack of extrapyramidal side effects in patients with schizophrenia, it has been found to worsen motor function in a study of three patients with Parkinson's disease and psychosis.
Contraindications
As with all FDA-approved antidepressants it carries a black-box warning about the potential of an increase in suicidal thoughts or behaviour in children, adolescents and young adults under the age of 25. Its use is also advised against in individuals with known hypersensitivities to either amoxapine or other ingredients in its oral formulations. Its use is also recommended against in the following disease states:
Severe cardiovascular disorders (potential of cardiotoxic adverse effects such as QT interval prolongation)
Uncorrected narrow angle glaucoma
Acute recovery post-myocardial infarction
Its use is also advised against in individuals concurrently on monoamine oxidase inhibitors or if they have been on one in the past 14 days and in individuals on drugs that are known to prolong the QT interval (e.g. ondansetron, citalopram, pimozide, sertindole, ziprasidone, haloperidol, chlorpromazine, thioridazine, etc.).
Lactation
Its use in breastfeeding mothers is not recommended, as it is excreted in breast milk; the concentration found in breast milk is approximately a quarter of the maternal serum level.
Side effects
Adverse effects by incidence:
Note: Serious (that is, those that can either result in permanent injury or are irreversible or are potentially life-threatening) are written in bold text.
Very common (>10% incidence) adverse effects include:
Constipation
Dry mouth
Sedation
Common (1–10% incidence) adverse effects include:
Anxiety
Ataxia
Blurred vision
Confusion
Dizziness
Headache
Fatigue
Nausea
Nervousness/restlessness
Excessive appetite
Rash
Increased perspiration (sweating)
Tremor
Palpitations
Nightmares
Excitement
Weakness
ECG changes
Oedema. An abnormal accumulation of fluids in the tissues of the body leading to swelling.
Prolactin levels increased. Prolactin is a hormone that regulates the generation of breast milk. Prolactin elevation is not as significant as with risperidone or haloperidol.
Uncommon/Rare (<1% incidence) adverse effects include:
Diarrhoea
Flatulence
Hypertension (high blood pressure)
Hypotension (low blood pressure)
Syncope (fainting)
Tachycardia (high heart rate)
Menstrual irregularity
Disturbance of accommodation
Mydriasis (pupil dilation)
Orthostatic hypotension (a drop in blood pressure that occurs upon standing up)
Seizure
Urinary retention (being unable to pass urine)
Urticaria (hives)
Vomiting
Nasal congestion
Photosensitization
Hypomania (a dangerously elated/irritable mood)
Tingling
Paresthesias of the extremities
Tinnitus
Disorientation
Numbness
Incoordination
Disturbed concentration
Epigastric distress
Peculiar taste in the mouth
Increased or decreased libido
Impotence (difficulty achieving an erection)
Painful ejaculation
Lacrimation (crying without an emotional cause)
Weight gain
Altered liver function
Breast enlargement
Drug fever
Pruritus (itchiness)
Vasculitis a disorder where blood vessels are destroyed by inflammation. Can be life-threatening if it affects the right blood vessels.
Galactorrhoea (lactation that is not associated with pregnancy or breast feeding)
Delayed micturition (that is, delays in urination from when a conscious effort to urinate is made)
Hyperthermia (elevation of body temperature; its seriousness depends on the extent of the hyperthermia)
Syndrome of inappropriate secretion of antidiuretic hormone (SIADH) this is basically when the body's level of the hormone, antidiuretic hormone, which regulates the conservation of water and the restriction of blood vessels, is elevated. This is potentially fatal as it can cause electrolyte abnormalities including hyponatraemia (low blood sodium), hypokalaemia (low blood potassium) and hypocalcaemia (low blood calcium) which can be life-threatening.
Agranulocytosis a drop in white blood cell counts. The white blood cells are the cells of the immune system that fight off foreign invaders. Hence agranulocytosis leaves an individual open to life-threatening infections.
Leukopaenia the same as agranulocytosis but less severe.
Neuroleptic malignant syndrome (a potentially fatal reaction to antidopaminergic agents, most often antipsychotics. It is characterised by hyperthermia, diarrhoea, tachycardia, mental status changes [e.g. confusion], rigidity, extrapyramidal side effects)
Tardive dyskinesia a most often irreversible neurologic reaction to antidopaminergic treatment, characterised by involuntary movements of facial muscles, tongue, lips, and other muscles. It develops most often only after prolonged (months, years or even decades) exposure to antidopaminergics.
Extrapyramidal side effects. Motor symptoms such as tremor, parkinsonism, involuntary movements, reduced ability to move one's voluntary muscles, etc.
Unknown incidence or relationship to drug treatment adverse effects include:
Paralytic ileus (paralysed bowel)
Atrial arrhythmias including atrial fibrillation
Myocardial infarction (heart attack)
Stroke
Heart block
Hallucinations
Purpura
Petechiae
Parotid swelling
Changes in blood glucose levels
Pancreatitis swelling of the pancreas
Hepatitis swelling of the liver
Urinary frequency
Testicular swelling
Anorexia (weight loss)
Alopecia (hair loss)
Thrombocytopenia a significant drop in platelet count that leaves one open to life-threatening bleeds.
Eosinophilia an elevated level of the eosinophils of the body. Eosinophils are the type of immune cell whose job is to fight off parasitic invaders.
Jaundice yellowing of the skin, eyes and mucous membranes due to an impaired ability of the body to clear the by-product of haem breakdown, bilirubin, most often the result of liver damage, as it is the liver's responsibility to clear bilirubin.
It tends to produce fewer anticholinergic effects and less sedation and weight gain than some of the earlier TCAs (e.g. amitriptyline, clomipramine, doxepin, imipramine, trimipramine). It may also be less cardiotoxic than its predecessors.
Overdose
It is considered particularly toxic in overdose, with a high rate of renal failure (which usually develops over 2–5 days), rhabdomyolysis, coma, seizures and even status epilepticus. Some believe it to be less cardiotoxic than other TCAs in overdose, although reports of cardiotoxic overdoses have been made.
Pharmacology
Pharmacodynamics
Amoxapine possesses a wide array of pharmacological effects. It is a moderate and strong reuptake inhibitor of serotonin and norepinephrine, respectively, and binds to the 5-HT2A, 5-HT2B, 5-HT2C, 5-HT3, 5-HT6, 5-HT7, D2, α1-adrenergic, D3, D4, and H1 receptors with varying but significant affinity, where it acts as an antagonist (or inverse agonist, depending on the receptor in question) at all sites. It has only weak, negligible affinity for the dopamine transporter and the 5-HT1A, 5-HT1B, D1, α2-adrenergic, H4, mACh, and GABAA receptors, and no affinity for the β-adrenergic receptors or the allosteric benzodiazepine site on the GABAA receptor. Amoxapine is also a weak GlyT2 blocker, as well as a weak (Ki = 2.5 μM, EC50 = 0.98 μM) δ-opioid receptor partial agonist.
7-Hydroxyamoxapine, a major active metabolite of amoxapine, is a more potent dopamine receptor antagonist and contributes to its neuroleptic efficacy, whereas 8-hydroxyamoxapine is a norepinephrine reuptake inhibitor and an even stronger serotonin reuptake inhibitor, and helps to balance amoxapine's ratio of serotonin to norepinephrine transporter blockade.
Pharmacokinetics
Amoxapine is metabolised into two main active metabolites: 7-hydroxyamoxapine and 8-hydroxyamoxapine.
In pharmacokinetic descriptions of amoxapine and its metabolites, the following parameters are used (a worked example follows the list):
t1/2 is the elimination half life of the compound.
tmax is the time to peak plasma levels after oral administration of amoxapine.
CSS is the steady state plasma concentration.
protein binding is the extent of plasma protein binding.
Vd is the volume of distribution of the compound.
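As a purely illustrative example of how these parameters are used, the elimination rate constant follows from the half-life, and the plasma concentration after a single dose declines exponentially. The numbers below are hypothetical and are not amoxapine data.

```python
import math

# Hypothetical, illustrative values only, not amoxapine data.
t_half_h = 8.0          # elimination half-life (hours)
c0_ng_ml = 100.0        # initial plasma concentration (ng/mL)

k_e = math.log(2) / t_half_h            # first-order elimination rate constant

def concentration(t_h):
    """Plasma concentration t_h hours after the dose, assuming first-order elimination."""
    return c0_ng_ml * math.exp(-k_e * t_h)

print(round(concentration(8.0), 1))     # one half-life later: 50.0 ng/mL
print(round(concentration(24.0), 1))    # three half-lives later: 12.5 ng/mL
```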
Society and culture
Brand names
Brand names for amoxapine include (where † denotes discontinued brands):
Adisen (KR)
Amolife (IN)
Amoxan (JP)
Asendin† (previously marketed in CA, NZ, US)
Asendis† (previously marketed in IE, UK)
Défanyl (FR)
Demolox (DK†, IN, ES†)
Oxamine (IN)
Oxcap (IN)
See also
Loxapine
References
Alpha-1 blockers
Atypical antipsychotics
Chloroarenes
Delta-opioid receptor agonists
Dibenzoxazepines
Dopamine antagonists
Glycine reuptake inhibitors
H1 receptor antagonists
Human drug metabolites
Muscarinic antagonists
1-Piperazinyl compounds
Serotonin receptor antagonists
Serotonin–norepinephrine reuptake inhibitors
Tetracyclic antidepressants
Tricyclic antidepressants | Amoxapine | Chemistry | 2,336 |
40,058,616 | https://en.wikipedia.org/wiki/Sustainable%20Transport%20Award | The Sustainable Transport Award (STA) is presented annually to a city that has shown leadership and vision in the field of sustainable transportation and urban livability in the preceding year. Nominations are accepted from anyone, and winners and honorable mentions are chosen by the Sustainable Transport Award Steering Committee.
Since 2005, the award has been given out annually to a city or major jurisdiction that has implemented innovative transportation strategies, especially in several different areas of sustainable transportation. The award rewards cities for improving mobility for residents, reducing transportation greenhouse gas and air pollution emissions, and improving safety and access for bicyclists and pedestrians. The STA draws international attention to cities at the forefront of transportation policy. By highlighting successfully completed programs and emphasizing transferability, the award helps disseminate new ideas and best practices while encouraging cities worldwide to improve their own livability.
Noteworthy projects include the construction or expansion of BRT or LRT systems, bike shares or bike lanes, attention to low-income access to transportation, reform of parking or zoning regulations, and linking transportation and development practices (TOD).
Process
Criteria
STAs are awarded to cities that have demonstrated significant progress in using transportation to create a more sustainable and livable city. The Sustainable Transport Award looks for cities working in several of the following policy areas:
Improvements to public transportation, such as implementing a new mass transit system (e.g., bus rapid transit), expanding the existing systems to increase accessibility and coverage, or improving customer service.
Improvements to non-motorized travel, such as the implementation or expansion of bike share programs and bike lanes, the creation of pedestrian walkways, and improvements to street crossings and sidewalks.
Expansion or improvement of public space often includes the creation of open plazas, creating pedestrian-only zones, installing street lamps or trees along sidewalks, and pedestrian safety measures.
Implementation of travel demand management programs to reduce private car use can include car-free days or zones, changes to parking requirements or availability, the implementation or expansion of car share systems, congestion charging, and structured tolls and fees.
Reducing urban sprawl by linking transportation to development (TOD) can be done through changes to zoning laws and providing incentives to developers.
Reduction of transport-related air pollution and greenhouse gas emissions by creating pollution laws, mandating air quality controls, restricting vehicle access, and creating an air advisory system.
To be eligible for an STA, cities must have made significant progress in the past year in addressing sustainable transit. Awards are presented for projects implemented in the previous year rather than for planned activities or simply beginning construction.
Nominations
Cities must be nominated to be considered for the award. Nominations can come from government agencies, including the Mayor's office, NGOs, consultants, academics, or anyone else with a close working knowledge of the city's projects. Applicants are asked to provide program details, impact, significance, outcomes, transferability, and images.
Steering Committee
Final selection of the award recipient and honorable mentions is conducted by a steering committee, composed of experts and organizations working internationally on sustainable transportation. The committee includes the Institute for Transportation and Development Policy (ITDP), World Resources Institute, World Bank, GIZ, Asian Development Bank, Clean Air Asia, ICLEI, and Despacio. The committee looks for projects completed in the previous year that demonstrate innovation and success in improving sustainable transportation.
Past winners
2005: Bogotá, Colombia
2006: Seoul, South Korea
2007: Guayaquil, Ecuador
2008: Paris, France and London, United Kingdom
2009: New York City, United States
2010: Ahmedabad, India
2011: Guangzhou, China
2012: Medellín, Colombia and San Francisco, United States
2013: Mexico City, Mexico
2014: Buenos Aires, Argentina
2015: Belo Horizonte, Rio de Janeiro, and São Paulo, Brazil
2016: Yichang, China
2017: Santiago, Chile
2018: Dar es Salaam, Tanzania
2019: Fortaleza, Brazil
2020: Pune, India
2021: Jakarta, Indonesia
2022: Bogotá, Colombia
2023: Paris, France
2024: Tianjin, China
References
International awards
Sustainable transport
Urban planning
Awards established in 2005
Community awards
Environmental awards | Sustainable Transport Award | Physics,Engineering | 832 |
562,061 | https://en.wikipedia.org/wiki/Sorting%20network | In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks.
Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. To sort larger numbers of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea.
Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units.
Introduction
A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounter a comparator, the comparator swaps the values if and only if the top wire's value is greater or equal to the bottom wire's value.
In a formula, if the top wire carries x and the bottom wire carries y, then after hitting a comparator the wires carry min(x, y) and max(x, y), respectively, so the pair of values is sorted. A network of wires and comparators that will correctly sort all possible inputs into ascending order is called a sorting network. By reflecting the network, it is also possible to sort all inputs into descending order.
The full operation of a simple sorting network is shown below. It is evident why this sorting network will correctly sort the inputs; note that the first four comparators will "sink" the largest value to the bottom and "float" the smallest value to the top. The final comparator sorts out the middle two wires.
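As a concrete illustration of the comparator model described above, the following Python sketch applies a fixed list of comparators to an input sequence. The five-comparator, four-wire network used here is one arrangement consistent with the description above (four comparators to sink the maximum and float the minimum, plus a final comparator for the middle wires); it is an illustrative example rather than the specific diagram the article refers to.

def apply_network(comparators, values):
    """Run a comparator network over a list of values (one value per wire)."""
    values = list(values)
    for top, bottom in comparators:
        # A comparator places the smaller value on the top wire
        # and the larger value on the bottom wire.
        if values[top] > values[bottom]:
            values[top], values[bottom] = values[bottom], values[top]
    return values

# A 4-wire sorting network with 5 comparators (wires are numbered 0..3, top to bottom).
FOUR_WIRE_NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

print(apply_network(FOUR_WIRE_NETWORK, [3, 1, 4, 2]))  # -> [1, 2, 3, 4]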
Depth and efficiency
The efficiency of a sorting network can be measured by its total size, meaning the number of comparators in the network, or by its depth, defined (informally) as the largest number of comparators that any input value can encounter on its way through the network. Noting that sorting networks can perform certain comparisons in parallel (represented in the graphical notation by comparators that lie on the same vertical line), and assuming all comparisons to take unit time, it can be seen that the depth of the network is equal to the number of time steps required to execute it.
Insertion and Bubble networks
We can easily construct a network of any size recursively using the principles of insertion and selection. Assuming we have a sorting network of size n, we can construct a network of size n + 1 by "inserting" an additional number into the already sorted subnet (using the principle underlying insertion sort). We can also accomplish the same thing by first "selecting" the lowest value from the inputs and then sorting the remaining values recursively (using the principle underlying bubble sort).
The structure of these two sorting networks is very similar. A construction of the two different variants that collapses together comparators which can be performed simultaneously shows that, in fact, they are identical.
The insertion network (or equivalently, bubble network) has a depth of 2n − 3, where n is the number of values. This is better than the O(n log n) time needed by random-access machines, but it turns out that there are much more efficient sorting networks with a depth of just O(log² n), as described below.
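The recursive insertion construction described above can be written down directly as a comparator generator. The sketch below is an illustrative Python rendering, not taken from a particular reference: it builds the insertion network for n wires and computes its depth by greedily packing comparators into parallel layers, confirming the 2n − 3 depth mentioned above.

def insertion_network(n):
    """Comparators of the insertion-sort network on wires 0..n-1."""
    comparators = []
    for i in range(1, n):
        # Insert the value on wire i into the already sorted wires 0..i-1
        # by bubbling it upward with adjacent comparators.
        for j in range(i, 0, -1):
            comparators.append((j - 1, j))
    return comparators

def depth(comparators, n):
    """Depth after collapsing comparators that can run in parallel."""
    level = [0] * n  # earliest free time step of each wire
    d = 0
    for a, b in comparators:
        step = max(level[a], level[b]) + 1
        level[a] = level[b] = step
        d = max(d, step)
    return d

for n in range(2, 8):
    net = insertion_network(n)
    assert depth(net, n) == 2 * n - 3
    print(n, len(net), depth(net, n))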
Zero-one principle
While it is easy to prove the validity of some sorting networks (like the insertion/bubble sorter), it is not always so easy. There are n! permutations of numbers in an n-wire network, and to test all of them would take a significant amount of time, especially when n is large. The number of test cases can be reduced significantly, to 2^n, using the so-called zero-one principle. While still exponential, this is smaller than n! for all n ≥ 4, and the difference grows quite quickly with increasing n.
The zero-one principle states that, if a sorting network can correctly sort all 2^n sequences of zeros and ones, then it is also valid for arbitrarily ordered inputs. This not only drastically cuts down on the number of tests needed to ascertain the validity of a network, it is of great use in creating many constructions of sorting networks as well.
The principle can be proven by first observing the following fact about comparators: when a monotonically increasing function f is applied to the inputs, i.e., x and y are replaced by f(x) and f(y), then the comparator produces min(f(x), f(y)) = f(min(x, y)) and max(f(x), f(y)) = f(max(x, y)). By induction on the depth of the network, this result can be extended to a lemma stating that if the network transforms the sequence a1, ..., an into b1, ..., bn, it will transform f(a1), ..., f(an) into f(b1), ..., f(bn). Suppose that some input a1, ..., an contains two items ai < aj, and the network incorrectly swaps these in the output. Then it will also incorrectly sort f(a1), ..., f(an) for the function f(x) = 1 if x > ai, and f(x) = 0 otherwise.
This function is monotonic, so we have the zero-one principle as the contrapositive.
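In code, the zero-one principle turns validation into a check of all 2^n binary inputs rather than all n! permutations. The following self-contained Python sketch is an illustration of that idea; the example networks are assumed for demonstration.

from itertools import product

def apply_network(comparators, values):
    values = list(values)
    for a, b in comparators:
        if values[a] > values[b]:
            values[a], values[b] = values[b], values[a]
    return values

def is_sorting_network(comparators, n):
    """Zero-one principle: test only the 2**n binary sequences."""
    return all(
        apply_network(comparators, bits) == sorted(bits)
        for bits in product((0, 1), repeat=n)
    )

# A correct 4-wire network and a deliberately broken one.
good = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
bad = [(0, 1), (2, 3), (0, 2), (1, 3)]          # missing the middle comparator
print(is_sorting_network(good, 4))   # True
print(is_sorting_network(bad, 4))    # False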
Constructing sorting networks
Various algorithms exist to construct sorting networks of depth O(log² n) (hence size O(n log² n)) such as Batcher odd–even mergesort, bitonic sort, Shell sort, and the Pairwise sorting network. These networks are often used in practice.
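One of the O(log² n) constructions mentioned above, Batcher's odd–even mergesort, can be generated with a short iterative routine. The Python sketch below is a commonly used formulation for n a power of two and should be treated as illustrative; for n = 4 it reproduces the optimal five-comparator network.

def odd_even_merge_sort_network(n):
    """Batcher's odd-even mergesort comparators for n a power of two."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    comparators = []
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    # Only compare elements that belong to the same merge group.
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        comparators.append((i + j, i + j + k))
            k //= 2
        p *= 2
    return comparators

print(odd_even_merge_sort_network(4))       # [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(len(odd_even_merge_sort_network(8)))  # 19 comparators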
It is also possible to construct networks of depth O(log n) (hence size O(n log n)) using a construction called the AKS network, after its discoverers Ajtai, Komlós, and Szemerédi. While an important theoretical discovery, the AKS network has very limited practical application because of the large linear constant hidden by the Big-O notation, which is partly due to the construction's use of expander graphs.
A simplified version of the AKS network was described by Paterson in 1990, who noted that "the constants obtained for the depth bound still prevent the construction being of practical value".
A more recent construction called the zig-zag sorting network, of size O(n log n), was discovered by Goodrich in 2014. While its size is much smaller than that of AKS networks, its depth of O(n log n) makes it unsuitable for a parallel implementation.
Optimal sorting networks
For small, fixed numbers of inputs n, optimal sorting networks can be constructed, with either minimal depth (for maximally parallel execution) or minimal size (number of comparators). These networks can be used to increase the performance of larger sorting networks resulting from the recursive constructions of, e.g., Batcher, by halting the recursion early and inserting optimal nets as base cases. The following table summarizes the optimality results for small networks for which the optimal depth is known:
For larger networks neither the optimal depth nor the optimal size are currently known. The bounds known so far are provided in the table below:
The first sixteen depth-optimal networks are listed in Knuth's Art of Computer Programming, and have been since the 1973 edition; however, while the optimality of the first eight was established by Floyd and Knuth in the 1960s, this property wasn't proven for the final six until 2014 (the cases nine and ten having been decided in 1991).
For one to twelve inputs, minimal (i.e. size-optimal) sorting networks are known, and for higher values, lower bounds on their sizes S(n) can be derived inductively using a lemma due to Van Voorhis (p. 240): S(n) ≥ S(n − 1) + ⌈log₂ n⌉. The first ten optimal networks have been known since 1969, with the first eight again being known as optimal since the work of Floyd and Knuth, but optimality of the cases n = 9 and n = 10 took until 2014 to be resolved.
The optimality of the smallest known sorting networks for n = 11 and n = 12 was resolved in 2020.
Some work in designing optimal sorting networks has been done using genetic algorithms: D. Knuth mentions that the smallest known sorting network for n = 13 was found by Hugues Juillé in 1995 "by simulating an evolutionary process of genetic breeding" (p. 226), and that the minimum depth sorting networks for n = 9 and n = 11 were found by Loren Schwiebert in 2001 "using genetic methods" (p. 229).
Complexity of testing sorting networks
Unless P=NP, the problem of testing whether a candidate network is a sorting network is likely to remain difficult for networks of large sizes, due to the problem being co-NP-complete.
References
External links
List of smallest sorting networks for given number of inputs
Sorting Networks
CHAPTER 28: SORTING NETWORKS
Sorting Networks
Tool for generating and graphing sorting networks
Sorting networks and the END algorithm
Sorting Networks validity
Computer engineering
Sorting algorithms | Sorting network | Mathematics,Technology,Engineering | 1,869 |
23,159,516 | https://en.wikipedia.org/wiki/Lean%20services | Lean services is the application of lean manufacturing production methods in the service industry (and related method adaptations). Lean services have among others been applied to US health care providers and the UK HMRC.
History
Definition of "Service": see Service, Business Service and/or Service Economics. Lean Services history, see Lean manufacturing.
Levitt contrasted lean manufacturing and services: "Manufacturing looks for solutions inside the very tasks to be done... Service looks for solutions in the performer of the task." (T. Levitt, Production-Line Approach to Service, Harvard Business Review, September 1972).
Method
Underlying method; Lean manufacturing.
Bicheno & Holweg provide an adapted view of waste for the method ("waste", see Lean manufacturing, waste and The Toyota Way, principle 2):
Delay on the part of customers waiting for service, for delivery, in queues, for response, not arriving as promised.
Duplication. Having to re-enter data, repeat details on forms, copy information across, answer queries from several sources within the same organisation.
Unnecessary Movement. Queuing several times, lack of one-stop, poor ergonomics in the service encounter.
Unclear communication, and the wastes of seeking clarification, confusion over product or service use, wasting time finding a location that may result in misuse or duplication.
Incorrect inventory. Being out-of-stock, unable to get exactly what was required, substitute products or services.
An opportunity lost to retain or win customers, a failure to establish rapport, ignoring customers, unfriendliness, and rudeness.
Errors in the service transaction, product defects in the product-service bundle, lost or damaged goods.
Service quality errors, lack of quality in service processes.
Shillingburg and Seddon separately provide an additional type of waste for the method:
Value demand: services demanded by the customer. Failure demand: production of services as a result of defects in the upstream system.
Criticism
John Seddon outlines challenges with Lean Services in his paper "Rethinking Lean Service" (Seddon 2009) using examples from the UK tax-authorities HMRC.
See also
Lean construction
Lean government
Lean Higher Education
Lean IT
References
Lean manufacturing
Customer service | Lean services | Engineering | 461 |
53,836,551 | https://en.wikipedia.org/wiki/Age-related%20mobility%20disability | Age-related mobility disability is a self-reported inability to walk due to impairments, limited mobility, dexterity or stamina. It has been found mostly in older adults with decreased strength in lower extremities.
History
According to the National Research Council, the population of older adults in the United States is expected to increase by 2030 due to the aging of the baby boomer generation; this in turn will increase the population of mobility-disabled individuals in the community. This raises the importance of being able to predict disability due to inability to walk at an early stage, which will eventually decrease health care costs. Aging causes a decrease in physical strength, particularly in the lower extremities, which ultimately leads to a decrease in functional mobility and, in turn, to disability, which is shown to be more common in women due to differences in the distribution of resources and opportunities. The early detection of mobility disabilities will help clinicians and patients determine the early management of conditions that could be associated with future disability. Mobility disabilities are not restricted to older and hospitalized individuals; such disabilities have been reported in young and non-hospitalized individuals as well, due to decreased functional mobility. The increase in the rate of disability causes loss of functional independence and increases the risk of future chronic diseases.
Definition
Mobility is defined as the ability to move around, and mobility disability occurs when a person has problems with activities such as walking, standing up, or balancing. The use of a mobility aid device such as a mobility scooter, wheelchair, crutches or a walker can help with community ambulation. Another term coined to define mobility disability based on performance is "performance-based mobility disability": the inability to walk at speeds greater than 0.4 m/s. An individual who is unable to walk at more than 0.4 m/s is considered severely disabled and would require a mobility device to walk in the community.
Risk factors
There are a number of factors that could be associated with mobility disability, but according to the Centers for Disease Control and Prevention, "stroke is found to be the leading cause of mobility disability, in turn reducing functional mobility in more than half of the stroke survivors above 65 years of age".
Measures
There are several measurement scales designed to detect mobility disabilities. The measures that can detect mobility disabilities are classified into two categories, self-reported measures and performance measures. There is a need to differentiate between these measures based on their ability to detect mobility disabilities, such as differences in their reliability and validity. Self-reported measures are commonly used to detect mobility disabilities, but recently developed performance measures have been shown to be effective in predicting future mobility disabilities in older adults.
Self-reported measures
Several qualitative research studies use surveys, questionnaires and self-reported scales to detect a decrease in functional mobility or to predict future mobility disability in older adults. The advantages of these qualitative research scales are easier data acquisition and the ability to survey larger populations. Although differences in the perception of a condition between subjects (such as gender differences), the type of chronic conditions, and age-related changes such as memory and reasoning can all affect the information and scores obtained, self-reported measures are still used extensively in behavioral and correlational studies. The commonly used self-reported measures to detect mobility disability are the Stroke Impact Scale, the Rosow-Breslau scale, the Barthel Index, and the Tinetti Falls Efficacy Scale.
In terms of reliability and validity, the Stroke Impact Scale has been shown to have excellent test-retest reliability and construct validity; however, whether it can predict future mobility disability in older adults has yet to be established. In contrast, the Rosow-Breslau scale, Barthel Index and Tinetti Falls Efficacy Scale have proven useful for predicting future mobility disability, based on the activities covered by these questionnaire scales.
Performance-based measures
Mobility disabilities due to age-related musculoskeletal pain or an increase in chronic conditions are easier to detect with performance measures. Some commonly used performance measures to detect mobility disability are the 400-meter walk test, the 5-minute walk test, walking speed, and the short physical performance battery test.
Among these measures, the 400-meter walk test and the short physical performance battery test have proven to be strong predictors of mobility disability in older adults. In addition, there is moderate to excellent correlation between these two tests.
Based on the reliability and validity of measurement scales for predicting mobility disability, self-reported measures such as the Barthel Index, and performance measures such as the 400-meter walk test and the short physical performance battery test, are strongly associated with the prediction of mobility disability in older adults.
References
Further reading
Aptitude
Gerontology
Motor control
Motor skills
Physical exercise
Physical fitness | Age-related mobility disability | Biology | 949 |
34,652,450 | https://en.wikipedia.org/wiki/Monopartite | Monopartite refers to a class of viral genome. As opposed to multipartite viruses, viruses with monopartite genomes have all of their genes on a single molecule of nucleic acid. Most dsDNA viruses are monopartite.
References
Genomics | Monopartite | Biology | 57 |
77,919,001 | https://en.wikipedia.org/wiki/HRAC%20classification | The Herbicide Resistance Action Committee (HRAC) classifies herbicides by their mode of action (MoA) to provide a uniform way for farmers and growers to identify the agents they use and better manage pesticide resistance around the world. It is run by CropLife International in conjunction with the Weed Science Society of America (WSSA).
Resistance overview
A weed that develops resistance to one herbicide typically has resistance to other herbicides with the same mode of action (MoA), so herbicides with different MoAs, or different resistance groups, are needed. Preventative weed resistance management rotates herbicide types to prevent selective breeding of resistance to the same mode of action. By rotating MoAs, successive generations gain no advantage from any resistant mutations of the last generation. Cross-resistant and multiply resistant weeds resist multiple MoAs, and are particularly difficult to control.
There is limited evidence of resistance to one herbicide undoing resistance to another. One example involves prosulfocarb and trifluralin: their mechanisms of resistance are opposed, so that by evolving resistance to one the weed loses resistance to the other, at least in terms of metabolic resistance. Prosulfocarb requires a weed to metabolise it very slowly in order for the weed to survive; trifluralin, on the other hand, must be metabolised quickly before it can damage the weed.
Naming types
The HRAC assigns a letter-based class to each active constituent herbicide. The Australian HRAC code is assigned separately, though it is often the same as the global code. In 2021, alternative numeric classes were added to make the codes more consistent globally. This set of classification changes also added or moved a few herbicides that had been misclassified, and addressed regional concerns that using the English alphabet could be an impediment for international growers.
Herbicides that act through multiple modes have multiple classifications, corresponding to each MoA. For example, Quinmerac is classified as Group 4/29 (O/L) because it is both an Auxin mimic (Group 4 or O) and inhibits cellulose synthesis (Group 29 or L).
Groups
See also
Insecticide Resistance Action Committee
References
Herbicides | HRAC classification | Biology | 443 |
662,462 | https://en.wikipedia.org/wiki/Buzzer | A buzzer or beeper is an audio signaling device, which may be mechanical, electromechanical, or piezoelectric (piezo for short). Typical uses of buzzers and beepers include alarm devices, timers, trains, and confirmation of user input such as a mouse click or keystroke.
History
Electromechanical
The electric buzzer was invented in 1831 by Joseph Henry. They were mainly used in early doorbells until they were phased out in the early 1930s in favor of musical chimes, which had a softer tone.
Piezoelectric
Piezoelectric buzzers, or piezo buzzers, as they are sometimes called, were invented by Japanese manufacturers and fitted into a wide array of products during the 1970s to 1980s. This advancement mainly came about because of cooperative efforts by Japanese manufacturing companies. In 1951, they established the Barium Titanate Application Research Committee, which allowed the companies to be "competitively cooperative" and bring about several piezoelectric innovations and inventions.
Types
Electromechanical
Early devices were based on an electromechanical system identical to an electric bell without the metal gong. Similarly, a relay may be connected to interrupt its own actuating current, causing the contacts to buzz (the contacts buzz at line frequency if powered by alternating current). Often these units were anchored to a wall or ceiling, using the surface as a sounding board. The word "buzzer" comes from the rasping noise that electromechanical buzzers made.
Mechanical
A joy buzzer is an example of a purely mechanical buzzer; mechanical doorbells are another example.
Piezoelectric
A piezoelectric element may be driven by an oscillating electronic circuit or other audio signal source, driven with a piezoelectric audio amplifier. Sounds commonly used to indicate that a button has been pressed are a click, a ring or a beep.
A piezoelectric buzzer/beeper also depends on acoustic cavity resonance or Helmholtz resonance to produce an audible beep.
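As an illustration of the "oscillating signal source" idea, the short Python sketch below synthesizes a square-wave beep and writes it to a WAV file using only the standard library; the 4 kHz tone is an assumed, typical piezo resonant frequency rather than a figure from this article, and driving a real piezo element would additionally require an amplifier or driver circuit.

import math
import struct
import wave

SAMPLE_RATE = 44100   # samples per second
FREQ_HZ = 4000        # assumed typical piezo resonance, not a value from the article
DURATION_S = 0.5
AMPLITUDE = 12000     # out of +/-32767 for 16-bit audio

with wave.open("beep.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)          # 16-bit samples
    out.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for i in range(int(SAMPLE_RATE * DURATION_S)):
        t = i / SAMPLE_RATE
        # Square wave: take the sign of a sine oscillator at the target frequency.
        sample = AMPLITUDE if math.sin(2 * math.pi * FREQ_HZ * t) >= 0 else -AMPLITUDE
        frames += struct.pack("<h", sample)
    out.writeframes(bytes(frames))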
Modern applications
While technological advancements have made buzzers impractical or undesirable for many applications, there are still instances in which buzzers and similar circuits may be used. Present-day applications include:
Novelty uses
Judging panels
Educational purposes
Annunciator panels
Electronic metronomes
Game show lock-out device
Microwave ovens and other household appliances
Sporting events such as basketball games
Electrical alarms
Joy buzzer (mechanical buzzer used for pranks)
See also
Alarm clock
Alarm management
Klaxon
Vibrator (mechanical)
Joy buzzers
UVB-76, a Russian radio station that emits a characteristic buzzing sound and is also called The Buzzer.
References
Electrical components
Bells (percussion) | Buzzer | Technology,Engineering | 557 |
44,885,988 | https://en.wikipedia.org/wiki/Nokia%202010 | The Nokia 2010 is a mobile phone that was announced by Finnish phone manufacturer Nokia in January 1994.
According to the late Matti Makkonen, a manager of Nokia Mobile Phones at the time, Nokia 2010 was the first mobile phone to enable writing messages easily.
Other features include lists of 10 dialed calls, 10 received calls and 10 missed calls.
The phone has a monochromatic display that can show two rows of text at a time, surrounded by symbols for dedicated functions: battery status and reception level on either side; SMS message notification, keypad lock, and so on at the top. The handset has an antenna slot that supports either an external rigid antenna or a pull-out type antenna (more common). The 2010 used a full-size (1FF) SIM card.
Nokia 2010 was the more affordable version in the 2xxx series than the business-oriented Nokia 2110, both of which were released in 1994.
In terms of design, the 2010 stayed truer to its predecessor, the Nokia 1011, than the 2110 did.
Battery life was quoted as 20 to 40 hours with its full-length Ni-Cad battery, but real-world use in a metropolitan area with reasonable signal strength returned about 25 hours, or just 1 hour of talk time.
References
2010
Mobile phones introduced in 1994 | Nokia 2010 | Technology | 273 |
48,795,925 | https://en.wikipedia.org/wiki/WISEP%20J190648.47%2B401106.8 | WISEP J190648.47+401106.8 (abbreviated to W1906+40) is an L-type brown dwarf in the constellation Lyra. It was discovered in 2011, and was the first L-dwarf discovered in the field of view of the Kepler space telescope.
In 2015 it was shown to have on its surface a storm the size of Jupiter's Great Red Spot. The storm rotates around the star roughly every 9 hours and has lasted since at least 2013, when observations of the storm began.
W1906+40 has an intrinsic brightness of 0.02% that of the Sun, a radius of 0.9 times that of Jupiter, and a surface temperature of 2,300 K. The star emits significant flares.
References
L-type brown dwarfs
Lyra
WISE objects
Astronomical objects discovered in 2011 | WISEP J190648.47+401106.8 | Astronomy | 178 |
5,499,616 | https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Hungary | The NUTS codes of Hungary have three levels:
Codes
Local administrative units
Below the NUTS levels, the two LAU (Local Administrative Units) levels are:
The LAU codes of Hungary can be downloaded here:
Changes in NUTS 2016 classification
The NUTS classification is regularly updated to reflect changes and modifications proposed by Member States. As part of this process, the European Commission adopted changes concerning Hungary in December 2016. The new classification split the region of Central Hungary in two: Budapest (previously HU101) and Pest county (previously HU102). The new classification has been in use since 1 January 2018.
See also
ISO 3166-2 codes of Hungary
FIPS region codes of Hungary
Regions of Hungary
Counties of Hungary
Districts of Hungary (from 2013)
Subregions of Hungary (until 2013)
Administrative divisions of the Kingdom of Hungary (until 1918)
Counties of the Kingdom of Hungary
Administrative divisions of the Kingdom of Hungary (1941–44)
List of cities and towns of Hungary
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EU Countries - NUTS level 1
MAGYARORSZÁG - NUTS level 2
MAGYARORSZÁG - NUTS level 3
Correspondence between the NUTS levels and the national administrative units
List of current NUTS codes
Download current NUTS codes (ODS format)
Counties of Hungary, Statoids.com
External links
Eurostat - Portrait of the regions (forum.europa.eu.int)
Comparative analysis of some Hungarian regions by using “COCO” method (DOC file) (HTML version)
Magyarország – NUTS level 3 (PDF; at the website of the Hungarian Prime Minister's Office )
Regions of Hungary (at Hungary.hu, the Government Portal of Hungary)
Hungary and the regions
Hungary
Nuts | NUTS statistical regions of Hungary | Mathematics | 358 |
659,859 | https://en.wikipedia.org/wiki/Melanism | Melanism is the congenital excess of melanin in an organism resulting in dark pigment.
Pseudomelanism, also called abundism, is another variant of pigmentation, identifiable by dark spots or enlarged stripes, which cover a large part of the body of the animal, making it appear melanistic.
The morbid deposition of black matter, often of a malignant character causing pigmented tumors, is called melanosis.
Adaptation
Melanism related to the process of adaptation is called adaptive. Most commonly, dark individuals become fitter to survive and reproduce in their environment as they are better camouflaged. This makes some species less conspicuous to predators, while others, such as leopards, use it as a foraging advantage during night hunting. Typically, adaptive melanism is heritable: A dominant allele, which is entirely or nearly entirely expressed in the phenotype, is responsible for the excessive amount of melanin. By contrast, adaptive melanism associated with Batesian mimicry in Zelandoperla fenestrata stoneflies is controlled by a recessive allele at the ebony locus.
Adaptive melanism has been shown to occur in a variety of animals, including mammals such as squirrels, many cats and canids, and coral snakes. Adaptive melanism can lead to the creation of morphs, a notable example being the peppered moth, whose evolutionary history in the United Kingdom is offered as a classic instructional tool for teaching the principles of natural selection. A more replicated example of human-induced shifts in melanism has arisen from repeated selection against melanic Zelandoperla fenestrata stonefly phenotypes following widespread deforestation in New Zealand.
Industrial melanism
Industrial melanism is an evolutionary effect in insects such as the peppered moth, Biston betularia in areas subject to industrial pollution. Darker pigmented individuals are favored by natural selection, apparently because they are better camouflaged against polluted backgrounds. When pollution was later reduced, lighter forms regained the advantage and melanism became less frequent. Other explanations have been proposed, such as that the melanin pigment enhances function of immune defences, or a thermal advantage from the darker coloration.
In cats
Melanistic coat coloration occurs as a common polymorphism in 11 of 37 felid species and reaches high population frequency in some cases but never achieves complete fixation. The black panther, a melanistic leopard, is common in the equatorial rainforest of Malaya and the tropical rainforest on the slopes of some African mountains, such as Mount Kenya. The serval also has melanistic forms in certain areas of East Africa. In the jaguarundi, coloration varies from dark brown and gray to light reddish. Melanic forms of jaguar are common in certain parts of South America. In 1938 and 1940, two melanistic bobcats were trapped alive in sub-tropical Florida.
In 2003, the dominant mode of inheritance of melanism in jaguars was confirmed by performing phenotype-transmission analysis in a 116-individual captive pedigree. Melanistic animals were found to carry at least one copy of a mutant MC1R sequence allele, bearing a 15-base pair inframe deletion. Ten unrelated melanistic jaguars were either homozygous or heterozygous for this allele. A 24-base pair deletion causes the incompletely dominant allele for melanism in the jaguarundi. Sequencing of the agouti signalling peptide in the agouti gene coding region revealed a 2-base pair deletion in black domestic cats. These variants were absent in melanistic individuals of Geoffroy's cat, oncilla, pampas cat and Asian golden cat, suggesting that melanism arose independently at least four times in the cat family.
Melanism in leopards is inherited as a Mendelian, monogenic recessive trait relative to the spotted form. Pairings of black animals have a significantly smaller litter size than other possible pairings. Between January 1996 and March 2009, Indochinese leopards were photographed at 16 sites in the Malay Peninsula in a sampling effort of more than 1000 trap nights. Of 445 photographs of melanistic leopards, 410 were taken south of the Kra Isthmus, where the non-melanistic morph was never photographed. These data suggest the near fixation of the dark allele in the region. The expected time to fixation of this recessive allele due to genetic drift alone ranged from about 1,100 years to about 100,000 years.
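As a worked illustration of how a recessive allele can be far more common than the phenotype it produces, the Python sketch below computes phenotype proportions under the simplifying assumption of Hardy-Weinberg (random-mating) proportions; the assumption and the example allele frequencies are illustrative and not taken from the studies cited above.

def recessive_phenotype_proportions(q):
    """Expected genotype proportions for a recessive allele at frequency q."""
    p = 1.0 - q
    return {
        "melanistic (two dark alleles)": q * q,
        "spotted carriers (one dark allele)": 2 * p * q,
        "spotted non-carriers": p * p,
    }

# With the dark allele at 50% frequency, only about a quarter of animals are melanistic;
# as q approaches 1 (near fixation), almost every animal is melanistic.
for q in (0.5, 0.9, 0.99):
    print(q, recessive_phenotype_proportions(q))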
Melanism in leopards has been hypothesized to be causally associated with a selective advantage for ambush. Other theories are that genes for melanism in felines may provide resistance to viral infections, or a high-altitude adaptation, since black fur absorbs more light for warmth.
In birds
The chicken breeds Silkie and Ayam Cemani commonly exhibit this trait. Ayam Cemani is an uncommon and relatively modern breed of chicken from Indonesia. They have a dominant gene that causes hyperpigmentation (fibromelanosis), making the chicken entirely black, including feathers, beak, and internal organs.
Melanism in feral rock doves is actually quite common, to some extent, especially where the species is abundant. The amount of pigmentation varies, from slightly darker pigmentation in the pigeon's wings to an almost completely black bird.
In April 2015, an extremely rare black flamingo was spotted on the Mediterranean island of Cyprus.
In amphibians
The alpine salamander, Salamandra atra, has one subspecies (S. atra atra) that is completely black. The pigment comes from a specific cell type called a melanophore, which produces the compound melanin.
There are four other subspecies of this salamander, and they have varying levels of melanin pigmentation. The subspecies have yellow spots in different concentrations or proportions. The pigment-producing cells that contribute to the yellow spots of some sub-species are called xanthophores. It appears that the fully-black phenotypes do not ever develop these xanthophores. Alpine salamanders produce a toxin from their skin, and both fully melanistic, black salamanders and spotted individuals produce the compound.
Studies tracing DNA histories have suggested that the original alpine salamander phenotype was black with some yellow spots, meaning that the fully black color evolved over time and was thus selected for over many generations.
In humans
Melanism, meaning a mutation that results in completely dark skin, does not exist in humans. In humans, the amount of melanin is determined by three dominant alleles (AABBCC), and different ethnicities have varying amounts. Melanin is the primary determinant of the degree of skin pigmentation and protects the body from harmful ultraviolet radiation. The same ultraviolet radiation is essential for the synthesis of vitamin D in skin, so lighter colored skin – less melanin – is an adaptation related to the prehistoric movement of humans away from equatorial regions, as there is less exposure to sunlight at higher latitudes. People from parts of Africa, South Asia, Southeast Asia, Australia, Papua New Guinea, Fiji, Vanuatu, New Caledonia, and the Solomon Islands may have very dark skin, but this is not melanism.
Peutz–Jeghers syndrome
This rare genetic disorder is characterized by the development of macules with hyperpigmentation on the lips and oral mucosa (melanosis), as well as benign polyps in the gastrointestinal tract.
Socio-politics
The term melanism has been used on Usenet, internet forums and blogs to mean an African-American social movement holding that dark-skinned humans are the original people from which those of other skin color originate. The term melanism has been used in this context as early as the mid-1990s and was promoted by some Afrocentrists, such as Frances Cress Welsing.
See also
Albinism
Albino and white squirrels
Amelanism, lack of melanism
Black squirrel
Erythrism, reddish pigmentation
Isabellinism, lowered melanism
Heterochromia iridum
Leucism, a partial loss of pigmentation that results in animals with pale or white skin, hair and/or feathers
Melanosis, hyperpigmentation via increased melanin
Ocular melanosis
Peutz–Jeghers syndrome, dark patches on the lips etc.
Piebaldism, patchy absence of melanin-producing cells
Vitiligo, a skin condition which causes areas of the skin to lose its colour
Xanthochromism, an unusual yellow colouration in animals
Zelandoperla fenestrata, a stonefly exhibiting a Batesian mimicry melanic polymorphism
References
Bibliography
Melanism and disease resistance in insects
Fryer, G. 2013. How should the history of industrial melanism in moths be interpreted? The Linnean. 29 (2): 15 - 22.
Genetic disorders with no OMIM
Disturbances of pigmentation
Dermatologic terminology
Animal coat colors | Melanism | Biology | 1,886 |
52,558,749 | https://en.wikipedia.org/wiki/Helene%20Marsh | Helene Denise Marsh (born 8 April 1945) is an Australian scientist who has provided research in the field of Environmental Science, more specifically Zoology and Ecology. The focal point of her research has been the biology of dugongs, with particular foci in the areas of population ecology, history, reproduction, diet, and movements. She is the Dean of Graduate Research Studies and the Professor of Environmental Science at James Cook University in Queensland, Australia, and also a Distinguished Professor in the College of Marine and Environmental Science. Marsh is also a program leader for the Marine and Tropical Research Science Facility. In 2015 she was elected a Fellow of both the Australian Academy of Science (FAA), and the Australian Academy of Technological Sciences and Engineering (FTSE). She was appointed Officer of the Order of Australia (AO) in the 2021 Australia Day Honours.
Early life
Marsh's parents always encouraged her to succeed academically, placing value on learning and education throughout her childhood, and expected that all three of their children would eventually attend a university. Her mother started out as a teacher, then when World War II began she enlisted and became the only woman education officer in the Army from Northern Territory. After the war she earned her master's degree. Her father earned degrees in both Economics and Law; he died when Marsh was 13 years old. She has two brothers, one became a Professor of English at the University of London, while the other made films.
Education
Marsh graduated from the University of Queensland in 1968, earning a Bachelor of Science with Honors in Zoology, going on to earn her PhD in Zoology from James Cook University in 1973. In 1991 Marsh became a Professor of Zoology and Director of Environmental Studies at James Cook University, staying in this position until 1994. In 1994 she became the Professor of Environmental Science and the Head of the Department of Tropical Environmental Studies and Geography. In 2000 Marsh became the Dean of Postgraduate Studies. Over the course of her career she has also assisted and supervised 55 PhD and 20 Master's candidates through to their completion.
Career
After earning her bachelor's degree in 1968, Marsh began work for the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in 1968 as an Experimental Officer in Animal Health at the CSIRO research laboratory in Townsville, Queensland. In 1972 she worked as an Honorary Research Associate at the British Museum of Natural History. After two years she returned to James Cook University where she was appointed as a Research Officer in Zoology, eventually being promoted to the part-time position of Research Fellow in Zoology in 1976. She switched to full-time in 1981. Through the 1980s and 1990s she secured multiple positions on various committees at James Cook University.
Research
The majority of Marsh's research involves the ecology and conservation biology of dugongs and other megafauna, providing advancements in the understanding, management, and care of coastal marine mammals. Her research has covered marine conservation biology, marine natural resource management, indigenous marine resource management, conservation intervention, and marine wildlife population ecology. Notable doctoral students of Marsh's include Barbara Bollard.
Publications
Marsh has authored, co-authored, or assisted in over 200 published works. About 100 of these are articles for established journals, one book, several chapters for various books and encyclopedias, technical reports, and conference proceedings.
Honours
References
1945 births
Living people
Officers of the Order of Australia
Australian zoologists
Australian ecologists
Women ecologists
Wildlife conservation
Academic staff of James Cook University
Fellows of the Australian Academy of Technological Sciences and Engineering
University of Queensland alumni | Helene Marsh | Biology | 705 |
49,627 | https://en.wikipedia.org/wiki/Universal%20Disk%20Format | Universal Disk Format (UDF) is an open, vendor-neutral file system for computer data storage for a broad range of media. In practice, it has been most widely used for DVDs and newer optical disc formats, supplanting ISO 9660. Due to its design, it is very well suited to incremental updates on both write-once and re-writable optical media. UDF was developed and maintained by the Optical Storage Technology Association (OSTA).
In engineering terms, Universal Disk Format is a profile of the specifications known as ISO/IEC 13346 and ECMA-167.
Usage
Normally, authoring software will master a UDF file system in a batch process and write it to optical media in a single pass. But when packet writing to rewritable media, such as CD-RW, UDF allows files to be created, deleted and changed on-disc just as a general-purpose filesystem would on removable media like floppy disks and flash drives. This is also possible on write-once media, such as CD-R, but in that case the space occupied by the deleted files cannot be reclaimed (and instead becomes inaccessible).
Multi-session mastering is also possible in UDF, though some implementations may be unable to read disks with multiple sessions.
History
The Optical Storage Technology Association standardized the UDF file system to form a common file system for all optical media: both for read-only media and for re-writable optical media. When first standardized, the UDF file system aimed to replace ISO 9660, allowing support for both read-only and writable media. After the release of the first version of UDF, the DVD Consortium adopted it as the official file system for DVD-Video and DVD-Audio.
UDF shares the basic volume descriptor format with ISO 9660. A "UDF Bridge" format has been defined since revision 1.50 so that a disc can also contain an ISO 9660 file system making references to files on the UDF part.
Revisions
Multiple revisions of UDF have been released:
Revision 1.00 (24 October 1995). Original Release.
Revision 1.01 (3 November 1995). Added DVD Appendix and made a few minor changes.
Revision 1.02 (30 August 1996). This format is used by DVD-Video discs.
Revision 1.50 (4 February 1997). Added support for CD-R/W Packet Writing and (virtual) rewritability on CD-R/DVD-R media by introducing the Virtual Allocation Table (VAT) structure. Added sparing tables for defect management on rewritable media such as CD-RW, DVD-RW and DVD+RW. Added the UDF Bridge format.
Revision 2.00 (3 April 1998). Added support for Stream Files, Access Control lists, Power Calibration, real-time files (for DVD recording) and simplified directory management. VAT support was extended.
Revision 2.01 (15 March 2000) is mainly a bugfix release to UDF 2.00. Many of the UDF standard's ambiguities were resolved in version 2.01.
Revision 2.50 (30 April 2003). Added the Metadata Partition facilitating metadata clustering, easier crash recovery and optional duplication of file system information: All metadata like nodes and directory contents are written on a separate partition which can optionally be mirrored. This format is used by some versions of Blu-rays and most HD-DVD discs.
Revision 2.60 (1 March 2005). Added Pseudo OverWrite method for drives supporting pseudo overwrite capability on sequentially recordable media. Has read-only compatibility with UDF 2.50 implementations. (Some Blu-rays use this format.)
UDF Revisions are internally encoded as binary-coded decimals; Revision 2.60, for example, is represented as 0x0260. In addition to declaring its own revision, compatibility for each volume is defined by the minimum read and minimum write revisions, each signalling the requirements for these operations to be possible for every structure on this image. A "maximum write" revision additionally records the highest UDF support level of all the implementations that have written to this image. For example, a UDF 2.01 volume that does not use Stream Files (introduced in UDF 2.00) but uses VAT (UDF 1.50) created by a UDF 2.60-capable implementation may have the revision declared as 0x0201, the minimum read revision set to 0x0150, the minimum write to 0x0201, and the maximum write to 0x0260.
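A minimal Python sketch of the binary-coded-decimal revision encoding described above; the function names and the compatibility check are illustrative, not part of the UDF specification.

def encode_revision(major, minor):
    """Pack a UDF revision such as 2.60 into its BCD form, e.g. 0x0260."""
    return ((major // 10) << 12) | ((major % 10) << 8) | ((minor // 10) << 4) | (minor % 10)

def decode_revision(bcd):
    major = ((bcd >> 12) & 0xF) * 10 + ((bcd >> 8) & 0xF)
    minor = ((bcd >> 4) & 0xF) * 10 + (bcd & 0xF)
    return f"{major}.{minor:02d}"

def can_read(reader_max_revision, volume_min_read):
    """A reader can open a volume if it supports at least the minimum read revision."""
    return reader_max_revision >= volume_min_read

assert encode_revision(2, 60) == 0x0260
assert decode_revision(0x0150) == "1.50"
print(can_read(encode_revision(2, 1), 0x0150))  # a 2.01-capable reader can read this volume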
Specifications
The UDF standard defines three file system variations, called "builds". These are:
Plain (Random Read/Write Access). This is the original format supported in all UDF revisions
Virtual Allocation Table, also known as VAT (Incremental Writing). Used specifically for writing to write-once media
Spared (Limited Random Write Access). Used specifically for writing to rewritable media
Plain build
Introduced in the first version of the standard, this format can be used on any type of disk that allows random read/write access, such as hard disks, DVD+RW and DVD-RAM media. Metadata (up to v2.50) and file data are addressed more or less directly. When writing to such a disk in this format, any physical block on the disk may be chosen for allocation of new or updated files.
Since this is the basic format, practically any operating system or file system driver claiming support for UDF should be able to read this format.
VAT build
Write-once media such as DVD-R and CD-R have limitations when being written to, in that each physical block can only be written to once, and the writing must happen incrementally. Thus the plain build of UDF can only be written to CD-Rs by pre-mastering the data and then writing all data in one piece to the media, similar to the way an ISO 9660 file system gets written to CD media.
To enable a CD-R to be used virtually like a hard disk, whereby the user can add and modify files on a CD-R at will (so-called "drive letter access" on Windows), OSTA added the VAT build to the UDF standard in its revision 1.5. The VAT is an additional structure on the disc that allows packet writing; that is, remapping physical blocks when files or other data on the disc are modified or deleted. For write-once media, the entire disc is virtualized, making the write-once nature transparent for the user; the disc can be treated the same way one would treat a rewritable disc.
The write-once nature of CD-R or DVD-R media means that when a file is deleted on the disc, the file's data still remains on the disc. It does not appear in the directory any more, but it still occupies the original space where it was stored. Eventually, after using this scheme for some time, the disc will be full, as free space cannot be recovered by deleting files. Special tools can be used to access the previous state of the disc (the state before the delete occurred), making recovery possible.
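Conceptually, the VAT build behaves like an append-only store with a remapping table in front of it. The Python sketch below is a simplified model of that idea, not the actual on-disc VAT format: logical block numbers are looked up in a table, and "rewriting" a block appends a new physical block and updates the table, so superseded blocks keep occupying space.

class WriteOnceDisc:
    """A toy model of packet writing on write-once media via a virtual allocation table."""

    def __init__(self):
        self.physical = []   # write-once: blocks can only be appended
        self.vat = {}        # logical block number -> physical block index

    def write(self, logical_block, data):
        # Every write consumes a fresh physical block, even when "overwriting".
        self.physical.append(data)
        self.vat[logical_block] = len(self.physical) - 1

    def read(self, logical_block):
        return self.physical[self.vat[logical_block]]

disc = WriteOnceDisc()
disc.write(0, b"first version")
disc.write(0, b"second version")       # logically overwrites block 0
print(disc.read(0))                    # b'second version'
print(len(disc.physical))              # 2 physical blocks used; the old one is wasted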
Not all drives fully implement version 1.5 or higher of the UDF, and some may therefore be unable to handle VAT builds.
Spared (RW) build
Rewriteable media such as DVD-RW and CD-RW have fewer limitations than DVD-R and CD-R media. Sectors can be rewritten at random (though in packets at a time). These media can be erased entirely at any time, making the disc blank again, ready for writing a new UDF or other file system (e.g., ISO 9660 or CD Audio) to it. However, sectors of -RW media may "wear out" after a while, meaning that their data becomes unreliable, through having been rewritten too often (typically after a few hundred rewrites, with CD-RW).
The plain and VAT builds of the UDF format can be used on rewriteable media, with some limitations. If the plain build is used on a -RW media, file-system level modification of the data must not be allowed, as this would quickly wear out often-used sectors on the disc (such as those for directory and block allocation data), which would then go unnoticed and lead to data loss. To allow modification of files on the disc, rewriteable discs can be used like -R media using the VAT build. This ensures that all blocks get written only once (successively), ensuring that there are no blocks that get rewritten more often than others. This way, a RW disc can be erased and reused many times before it should become unreliable. However, it will eventually become unreliable with no easy way of detecting it. When using the VAT build, CD-RW/DVD-RW media effectively appears as CD-R or DVD+/-R media to the computer. However, the media may be erased again at any time.
The spared build was added in revision 1.5 to address the particularities of rewriteable media. This build adds an extra Sparing Table in order to manage the defects that will eventually occur on parts of the disc that have been rewritten too many times. This table keeps track of worn-out sectors and remaps them to working ones. UDF defect management does not apply to systems that already implement another form of defect management, such as Mount Rainier (MRW) for optical discs, or a disk controller for a hard drive.
The tools and drives that do not fully support revision 1.5 of UDF will ignore the sparing table, which would lead them to read the outdated worn-out sectors, leading to retrieval of corrupted data.
The sparing overhead, which is spread over the entire disc, reserves a portion of the data storage space, limiting the usable capacity of a CD-RW with e.g. 650 MB of original capacity to around 500 MB.
Character set
The UDF specifications allow only one character set, OSTA CS0, which can store any Unicode code point excluding U+FEFF and U+FFFE. Additional character sets defined in ECMA-167 are not used.
Since Errata DCN-5157, the range of code points was expanded to all code points from Unicode 4.0 (or any newer or older version), which includes Plane 1-16 characters such as Emoji. DCN-5157 also recommends normalizing the strings to Normalization Form C.
The OSTA CS0 character set stores a 16-bit Unicode string "compressed" into 8-bit or 16-bit units, preceded by a single-byte "compID" tag to indicate the compression type. The 8-bit storage is functionally equivalent to ISO-8859-1, and the 16-bit storage is big-endian UTF-16. 8-bit-per-character file names save space because they only require half the space per character, so they should be used if the file name contains no special characters that cannot be represented in 8 bits.
The reference algorithm neither checks for forbidden code points nor interprets surrogate pairs, so like NTFS the string may be malformed. (No specific form of storage is specified by DCN-5157, but UTF-16BE is the only well-known method for storing all of Unicode while being mostly backward compatible with UCS-2.)
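The following Python sketch shows one way to decode and encode the "compressed Unicode" form described above; it handles only the 8-bit and 16-bit compID values mentioned here, and any other values defined by the specification are left unhandled.

def decode_osta_cs0(data: bytes) -> str:
    """Decode an OSTA CS0 string: a compID byte followed by the character data."""
    if not data:
        return ""
    comp_id, payload = data[0], data[1:]
    if comp_id == 8:
        # One byte per code point, equivalent to ISO-8859-1.
        return payload.decode("latin-1")
    if comp_id == 16:
        # Two bytes per code unit, big endian (UTF-16BE).
        return payload.decode("utf-16-be")
    raise ValueError(f"unsupported compID {comp_id}")

def encode_osta_cs0(text: str) -> bytes:
    """Use the 8-bit form when every code point fits in one byte, else the 16-bit form."""
    if all(ord(c) < 256 for c in text):
        return bytes([8]) + text.encode("latin-1")
    return bytes([16]) + text.encode("utf-16-be")

assert decode_osta_cs0(encode_osta_cs0("README.TXT")) == "README.TXT"
assert encode_osta_cs0("README.TXT")[0] == 8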
Compatibility
Many DVD players do not support any UDF revision other than version 1.02. Discs created with a newer revision may still work in these players if the ISO 9660 bridge format is used. Even if an operating system claims to be able to read UDF 1.50, it still may only support the plain build and not necessarily either the VAT or Spared UDF builds.
Mac OS X 10.4.5 claims to support Revision 1.50 (see man mount_udf), yet it can only mount disks of the plain build properly and provides no virtualization support at all. It cannot mount UDF disks with VAT, as seen with the Sony Mavica issue. Releases before 10.4.11 mount disks with a Sparing Table but do not read their files correctly. Version 10.4.11 fixes this problem.
Similarly, Windows XP Service Pack 2 (SP2) cannot read DVD-RW discs that use the UDF 2.00 sparing tables as a defect management system. This problem occurs if the UDF defect management system creates a sparing table that spans more than one sector on the DVD-RW disc. Windows XP SP2 can recognize that a DVD is using UDF, but Windows Explorer displays the contents of a DVD as an empty folder. A hotfix is available for this and is included in Service Pack 3.
Due to the default UDF versions and options, a UDF partition formatted by Windows cannot be written under macOS. On the other hand, a partition formatted by macOS cannot be directly written by Windows, due to the requirement of an MBR partition table. In addition, Linux only supports writing to UDF 2.01. A script for Linux and macOS called handles these incompatibilities by using UDF 2.01 and adding a fake MBR; for Windows the best solution is using the command-line tool .
See also
Comparison of file systems
DVD authoring
ISO/IEC 13490
Notes
References
Further reading
ISO/IEC 13346 standard, also known as ECMA-167.
External links
Home page of the Optical Storage Technology Association (OSTA)
UDF specifications: 1.02, 1.50, 2.00, 2.01, 2.50, 2.60 (March 1, 2005), SecureUDF
UDF 1.01 (mirror, original file name: "UDF_101.PDF") - This specification was never published on the OSTA website and only existed in its early days on a FTP server that is long gone as of 2024, at this URL (linked from this 1996 article).
ECMA 167/3: Volume and File Structure for Write-Once and Rewritable Media using Non-Sequential Recording for Information Interchange (June 1997) (referenced from UDF specification)
Wenguang Wang's UDF Introduction
Linux UDF support
"CD-ROM Drive May Not Be Able to Read a UDF-Formatted Disc in Windows XP", Microsoft Support
AIX - CD-ROM file system and UDFS
OSTA Technology (mention of UDF 1.00 on the 1996 OSTA website) (The UDF 1.00 specification itself is lost literary work as of 2024.)
UDF - El profesional de la información (mention of UDF 1.01)
Disk file systems
ISO standards
IEC standards
Ecma standards
Windows components | Universal Disk Format | Technology | 3,068 |
74,627,404 | https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20d%27%C3%A9lectronique%20et%20d%27automatisme | The Société d'électronique et d'automatisme (SEA) was an early French computer manufacturer established in 1947 by electrical engineer François-Henri Raymond, which designed and put into operation a significant portion of the first computers in France during the 1950s.
The SEA played a major role in driving the development of the French computer industry, training the first generation of engineers and installing about 170 computers between 1955 and its dissolution in 1966, when it merged with CII.
History and achievements
In 1947, François-Henri Raymond was sent on a technical trip to the United States, where he met Howard H. Aiken at Harvard University, visited the MIT laboratories and came across John von Neumann's report on the EDVAC and the pioneering concepts of a then-futuristic machine: the stored-program computer. Upon returning to Paris, he shared his ideas for producing such machines with his employer, a machine tool manufacturer, but struggled to convince him. François-Henri Raymond resigned and, without a formal business plan, established the Society of Electronics and Automation in December 1947, in a former automobile factory bombed during World War II. The startup's initial capital was contributed by its founder, some of his friends, and Raymond's former employer. SEA's first client was the Air Force's missile bureau, whose operations demanded significant computational resources.
1949: analog computers
SEA's inaugural computer, the OME 11, emerges in February 1949. This analog computer would set the stage for a series of subsequent models, including the OME 12, 15, 40, and 416 (OME is short for "Opérateurs Mathématiques Électroniques"). While many were tailored for military applications, some models achieved some commercial success in the civilian sector. Notably, the OME L2 and P2 (1952), featuring vacuum tubes, and the transistor-based OME R (1959) stood out and were subsequently followed by a new generation of analog computers with the NADAC 20 (1961) and NADAC 100 (1962).
SEA's analog computers enjoyed commercial success, with nearly 200 units sold and a strong international presence. They found diverse applications in fields such as physical, nuclear, and hydrodynamic simulation - these machines were notably employed for flight simulations of the future Concorde aircraft. SEA also designed tailor-made analog computers for specific military applications.
1951: CUBA
After securing a contract with DEFA (now known as DGA), SEA embarked on the development of its first stored-program computer in 1951, and likely France's first as well: the Calculatrice Universelle Binaire de l'Armement (CUBA). This contract provided SEA the opportunity to bring to life the digital computer plans it had been sketching since 1949.
The ambitious technological choices made for CUBA would later lead to numerous delays. Notably, the decision to use cutting-edge magnetic-core memories instead of the more established Williams tubes or mercury delay lines proved risky, as no manufacturer in the French industry, still recovering from World War II, was yet capable of producing them. This challenge of sourcing components that met a novel set of requirements extended to many areas, even wiring: SEA, as stated by its founder, was the first French company to employ ribbon cables and wire wrap.
Among other technological choices, SEA aimed to minimize the use of unreliable vacuum tubes, instead favoring germanium diodes for most of its logic and restricting tube usage to signal regeneration, a design first tried on the SEAC (see diode logic). Additionally, an auxiliary drum memory was selected to complement the system, which SEA ultimately procured from the British company Ferranti due to the lack of a French manufacturer ready in time. CUBA was eventually delivered in 1954 and put into operation in 1956 after many years of delays, but it quickly became obsolete and was abandoned shortly thereafter. By that time, the French computer industry had already witnessed the emergence of other creations: SEA's own CAB 1011, a general-purpose computer used for cryptanalysis at SDECE (now DGSE), its CAB series 2000 and 3000 computers, and notably the Gamma 3 from Compagnie des machines Bull, introduced in 1952 and sold in quantities of around 1200 units. CAB stood for Calculatrice Automatique Binaire (Binary Automatic Calculator; the term "ordinateur", French for "computer", was coined only in 1955).
The mid-1950s then marked a turning point for SEA, as the company went on to develop two transistorized computers, constituting its two large-scale productions in this field.
1956: CAB 500
Starting from 1956, SEA took an interest in the emerging potential of transistors, although their maturity at that time was yet to be proven. Among the alternatives, SEA explored magnetic logic, which was slower but notably reliable, making it suitable for a more compact-sized computer. In the same year, SEA invented the Symmag, a logic gate utilizing ferrite beads similar to those used in magnetic core memory.
The Symmag would prove to be a central element in the architecture of the CAB 500, alongside a drum memory and sixteen 32-bit registers implemented on magnetic-core memory. The CAB 500 was a compact desktop computer designed to be operated by a single unskilled person, with minimal technical requirements for installation and operation. Furthermore, it was powered by one of the first interactive high-level languages, PAF, which facilitated its usage.
The CAB 500 experienced immediate success, prompting the scaling up of production methods. Approximately a hundred units of the CAB 500 were manufactured and sold, with around a dozen units exported to countries including China and Japan.
1958: CAB 3900
Starting from 1958, SEA became a subsidiary of Schneider-Westinghouse, affording it increased industrial resources. This was also the juncture at which the decision was made to create a business computer, leveraging the experience gained from the CAB 2124 and 3030. Although primarily designed for scientific purposes, these machines were also used for business tasks, revealing a gap in the manufacturer's offerings.
In collaboration with Crédit Lyonnais, a prototype named CABAN (Banking Calculator) was developed, placing emphasis on magnetic tape storage, which offered higher capacity than punched cards. Intended to compete with the IBM 1401 and the Bull Gamma 30, the CAB 3900 was a fully transistorized machine, featuring a magnetic-core main memory and accommodating up to nine tape drives. It operated at a fairly fast 2 MHz (the IBM 1401, by contrast, was clocked at 870 kHz), which allowed SEA to use a bit-serial processor that was less complex and costly while maintaining adequate performance for business applications. The Symmag logic, however, was abandoned as it was deemed too slow.
A programming language called PAGE (Programmation Automatique de GEstion - Automatic Business Programming) was developed for the CAB 3900. It had analogies with COBOL but more limited ambitions and greater simplicity.
In 1963, the French government urged SEA to align with Bull, leading to a negotiated agreement: SEA would allow Bull to market its computer range in exchange for Bull's Andelys plant. This agreement had mixed effects for SEA. On one hand, Bull's sales representatives were not accustomed to catering to scientific needs, resulting in underwhelming sales for the CAB 500. On the other hand, they had to incorporate the competing CAB 3900 business computer into their portfolio alongside their own offerings.
Nonetheless, these newfound production resources enabled SEA to expand its industrial capacity and manufacture the CAB 3900. An improved version, the CAB 4000, was introduced shortly thereafter, rectifying identified flaws from the first generation and enhancing its capabilities.
A total of 37 CAB 3900 and 4000 units were sold, marking the second-largest commercial success for SEA.
Later developments
In 1964, SEA entered into a Memorandum of Understanding with Control Data Corporation (CDC), granting access to the technologies and peripherals of the emerging American supercomputer specialist. However, this agreement did not progress further, as CDC subsequently established its own commercial subsidiary in France. In the same year, IBM unveiled the 360 series, introducing a novel concept of both horizontal (application domains) and vertical (performance levels) compatibility within a unified family of computers. This groundbreaking concept of compatibility and interoperability greatly interested SEA, prompting an exploration of a new generation of products built on these principles. SEA drafted an architecture inspired by stack machines for its processor and Algol for its machine language, similar to Burroughs' approach in the United States. Initially, two models were planned: a successor to the entry-level CAB 500 (CAB 1500) and a high-performance machine for the top tier (CAB 15000). With ambitious plans in mind, SEA contemplated an industrialization program to manufacture approximately a thousand of these new machines.
Ultimately, compelled by the French government in December 1966 as part of Plan Calcul, SEA was forced to merge with Compagnie européenne d'automatisme électronique, a joint subsidiary of Compagnie générale d'électricité, Compagnie générale de télégraphie sans fil (CSF), and Intertechnique, forming the Compagnie internationale pour l'informatique (CII).
At its peak, SEA employed up to 800 staff members and secured nearly 1000 patents.
Marketed models
SEA primarily produced computers in small quantities, a few hundred units in total, and occasionally as one-of-a-kind machines, most notably for military clients.
Six major categories of computers stand out:
Analog computers, from the OME and NADAC family
Unique military models, such as the CUBA and CAB 1011
The CAB 2000 and 3000 families, which were SEA's first mass-produced stored-program computers
Business computers (CAB 3900 and 4000)
The CAB 500, a relatively different small computer in the range
And finally, research prototypes: the Dorothées, CAB 1500 and 15000
The following table provides an overview of the key digital computers built by SEA. The analog computers from the OME and NADAC series are not included, nor are one-off, non-commercial units such as the CUBA.
See also
List of vacuum tube computers
History of computing
References
External links
History of SEA, on the Bull Teams Federation (FEB)
An adventure that ends badly: SEA, from François-Henri Raymond, extract from Colloquium on the History of Computing in France, March 1988
Description of the CAB 3900
1955 review of the OME P2 analog computer (in French)
Description of the symmag magnetic logic used in the CAB-500 (in French)
Illustrated SEA presentation brochure, from the "Birth of the French computing industry" exposition, in French
History of computing
History of computing in France
Defunct computer hardware companies
Defunct computer systems companies
Computer companies of France | Société d'électronique et d'automatisme | Technology | 2,226 |
69,699,064 | https://en.wikipedia.org/wiki/Lentinus%20flexipes | Lentinus flexipes is a species of fungus belonging to the family Polyporaceae.
References
Polyporaceae
Fungus species | Lentinus flexipes | Biology | 25 |
77,052,648 | https://en.wikipedia.org/wiki/Begonia%20%C3%97%20hiemalis | Begonia × hiemalis, the elatior begonia or Rieger begonia, is an artificial hybrid species of flowering plant in the family Begoniaceae. Its parents are Begonia socotrana and Begonia × tuberhybrida (itself a hybrid of multiple species). Hybridization efforts began in 1881, with the first cultivar named 'John Heal'. The 'Elatior' cultivar debuted in 1906, and beginning in 1950 Otto Rieger issued many new, disease-resistant cultivars, such that people began to call the species "elatior" or "Rieger" begonias. In addition to their typically doubled flowers, which come in every color except blue, they are valued for their tendency to bloom in fall and winter, and in fact nearly year-round.
References
hiemalis
Hybrid plants
Ornamental plants
Plants described in 1933 | Begonia × hiemalis | Biology | 180 |
6,628,854 | https://en.wikipedia.org/wiki/Animal%20track |
An animal track is an imprint left behind in soil, snow, or mud, or on some other ground surface, by an animal walking across it. Animal tracks are used by hunters in tracking their prey and by naturalists to identify animals living in a given area.
Archaeology
Fossilized foot tracks of ancient and extinct creatures, notably dinosaurs, are of immense importance in archaeology and are studied to understand the lives and behavior of such creatures.
See also
Flukeprint, track of whale on ocean surface
Footprint
Pugmark
Spoor (animal)
References
External links
Animal Tracks (African)
Animal Tracks (Dinosaur)
Animal Tracks (General)
NatureTracking.com Animal Tracks Website
Bear-Tracker Animal Tracks Website
Animal tracks in Mount Rainier National Park
Tracker Certifications in North America
Tracker Certifications in Africa
Zoology | Animal track | Biology | 167 |
7,695,208 | https://en.wikipedia.org/wiki/Cooking%20base | Cooking base, sometimes called soup base, is a concentrated flavoring compound used in place of stock for the creation of soups, sauces, and gravies. Since it can be purchased rather than prepared fresh, it is commonly used in restaurants where cost is a more important factor than achieving haute cuisine. Veal and chicken base are common, as are beef, lamb, and vegetable base. Soup base is available in many levels of quality. Today, these products are also produced in low-sodium and very-low-sodium varieties, as well as seafood and vegetarian versions.
References
See also
Stock cube
Instant soup
Food ingredients | Cooking base | Technology | 118 |
52,754,183 | https://en.wikipedia.org/wiki/Liquid%20nitrogen%20wash | Liquid Nitrogen Wash is a process mainly used for the production of ammonia synthesis gas within fertilizer production plants. It is usually the last purification step in the ammonia production process sequence upstream of the actual ammonia production.
Competing Technologies
The purpose of the final purification step upstream of the actual ammonia production is to remove all components that are poisonous for the sensitive ammonia synthesis catalyst. This can be done with the following concepts:
Methanation, formerly the standard concept, with the disadvantage that the methane content is not removed but even increased, since in this process the carbon oxides (carbon monoxide and carbon dioxide) are converted to methane.
Pressure Swing Adsorption, which can replace the low temperature shift, the carbon dioxide removal and the methanation, since this process produces pure hydrogen, which can be mixed with pure nitrogen.
Liquid Nitrogen Wash, which produces an ammonia syngas for a so-called "inert free" ammonia synthesis loop, that can be operated without the withdrawal of a purge gas stream.
Functions
The liquid nitrogen wash has two principal functions:
Removal of impurities such as carbon monoxide, argon and methane from the crude hydrogen gas
Addition of the required stoichiometric amount of nitrogen to the hydrogen stream to achieve the correct ammonia synthesis gas ratio of hydrogen to nitrogen of 3 : 1
The carbon monoxide must be removed completely from the synthesis gas (i.e. syngas) since it is poisonous for the sensitive ammonia synthesis catalyst.
The components argon and methane are inert gases within the ammonia synthesis loop, but would accumulate there and require either a purge gas system, with synthesis gas losses, or additional expenditure for a purge gas separation unit.
The main sources for the supply of feed gases are partial oxidation processes.
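The 3 : 1 ratio required above follows directly from the stoichiometry of the ammonia synthesis (Haber–Bosch) reaction; a minimal LaTeX rendering of the balanced equation is given below for reference.

```latex
% Ammonia synthesis reaction, the origin of the 3:1 hydrogen-to-nitrogen requirement.
\[
  \mathrm{N_{2}} \;+\; 3\,\mathrm{H_{2}} \;\rightleftharpoons\; 2\,\mathrm{NH_{3}}
\]
% Three molecules of hydrogen are consumed per molecule of nitrogen, so the
% purified synthesis gas is adjusted to H2 : N2 = 3 : 1 before the synthesis loop.
```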
Upstream Syngas Preparations
Since the synthesis gas exiting the partial oxidation process consists mainly of carbon monoxide and hydrogen, usually a sulfur tolerant CO shift (i.e. water-gas shift reaction) is installed in order to convert as much carbon monoxide into hydrogen as possible.
Shifting carbon monoxide and water into hydrogen also produces carbon dioxide; this is usually removed in an acid gas scrubbing process together with other sour gases such as hydrogen sulfide (e.g. in a Rectisol Wash Unit).
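For reference, the shift reaction mentioned above can be written as follows (a standard textbook equation, not specific to any particular plant):

```latex
% Water-gas shift: carbon monoxide and steam are converted into hydrogen and
% carbon dioxide; the reaction is mildly exothermic (roughly -41 kJ/mol).
\[
  \mathrm{CO} \;+\; \mathrm{H_{2}O} \;\rightleftharpoons\; \mathrm{CO_{2}} \;+\; \mathrm{H_{2}}
\]
```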
Components
The liquid nitrogen wash consists of
an adsorber unit, where solvent traces from an upstream acid gas scrubbing process (e.g. methanol, water), traces of carbon dioxide and other compounds are completely removed in a molecular sieve bed in order to avoid freezing and subsequent blockage in the low temperature process, which operates at temperatures down to 80 K (-193 °C or -315 °F), and
the actual liquid nitrogen wash enclosed in a so-called cold box where all cryogenic process equipment is located and insulated in order to minimize heat ingress from ambient.
Principle of Operation
The name liquid nitrogen wash is a little misleading, since no liquid nitrogen is supplied from outside to be used for scrubbing, but rather gaseous high-pressure nitrogen, supplied by the air separation unit that usually also provides the oxygen for the upstream partial oxidation.
This gaseous high pressure nitrogen is partially liquefied in the process and is used as washing agent. In a so-called nitrogen wash column, the impurities carbon monoxide, argon and methane are washed out of the synthesis gas by means of this liquid nitrogen. These impurities are dissolved together with a small part of hydrogen and leave the column as the bottom stream.
The purified gas leaves the column at the top. The now purified synthesis gas is warmed up and is mixed with the required amount of gaseous high pressure nitrogen in order to achieve the hydrogen to nitrogen ratio of 3 to 1, and can then be routed to the ammonia synthesis.
At operating pressures higher than about 50 bar(a), the refrigeration demand of the liquid nitrogen wash is covered by the Joule–Thomson effect, and no additional external refrigeration, e.g. by vaporization of liquid nitrogen is required.
Advantages of the Combination of a liquid nitrogen wash with a Rectisol Process
The liquid nitrogen wash is especially favorable when combined with the Rectisol Wash Unit. The combination and advantageous interconnections between a Rectisol Wash Unit and a liquid nitrogen wash lead to smaller equipment and better operability.
The gas coming from the Rectisol Wash Unit can be sent to the Liquid Nitrogen Wash at low temperature (directly from the methanol absorber without being warmed up). Since part of the purified gas is reheated in the Rectisol Wash Unit, small fluctuations in flow and temperatures can easily be compensated leading to best operability.
To improve the hydrogen recovery, an integrated hydrogen recycle from the liquid nitrogen wash to the Rectisol Wash Unit can be installed, which uses the already existing recycle compressor of the Rectisol Wash Unit to recycle the hydrogen-rich flash gas from the liquid nitrogen wash back into the feed gas of the Rectisol Wash Unit. This leads to extremely high hydrogen recovery rates without any further equipment.
References
External links
Patent EP0256413 A2 for Gas stream purification process by nitrogen washing
Liquid Nitrogen Wash, Linde Engineering
Liquid Nitrogen Wash, Air Liquide
Chemistry
Industrial gases | Liquid nitrogen wash | Chemistry | 1,069 |
3,539,680 | https://en.wikipedia.org/wiki/Ratchet%20effect | The ratchet effect is a concept in sociology and economics illustrating the difficulty with reversing a course of action once a specific thing has occurred, analogous with the mechanical ratchet that allows movement in one direction and seizes or tightens in the opposite. The concept has been applied to multiple fields of study and is related to the phenomena of scope creep, mission creep, and feature creep.
Background
The ratchet effect first came to light in Alan Peacock and Jack Wiseman's 1961 report "The Growth of Public Expenditure in the United Kingdom." Peacock and Wiseman found that public spending increases like a ratchet following periods of crisis.
The term was later expanded upon by American historian Robert Higgs in the 1987 book Crisis and Leviathan, highlighting Peacock and Wiseman's research as it relates to governments experiencing difficulty in rolling back huge bureaucratic organizations created initially for temporary needs, such as wartime measures, natural disasters, or economic crises.
The effect may likewise afflict large businesses with myriad layers of bureaucracy which resist reform or dismantling. In workplaces, "ratchet effects refer to the tendency for central controllers to base next year's targets on last year's performance, meaning that managers who expect still to be in place in the next target period have a perverse incentive not to exceed targets even if they could easily do so."
Applications
Famine cycle
Garrett Hardin, a biologist and environmentalist, used the phrase to describe how food aid keeps people alive who would otherwise die in a famine. They live and multiply in better times, making another bigger crisis inevitable, since the supply of food has not been increased.
Production strategy
Jean Tirole used the concept in his pioneering work on regulation and monopolies. The ratchet effect can denote an economic strategy arising in an environment where incentive depends on both current and past production, such as in a competitive industry employing piece rates. The producers observe that since incentive is readjusted based on their production, any increase in production confers only a temporary increase in incentive while requiring a permanently greater expenditure of work. They therefore decide not to reveal hidden production capacity unless forced to do so.
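A toy simulation can make the incentive problem concrete. The sketch below is a simplified illustration, not Tirole's model: the pay rule, the cost parameters and the target-updating rule are all assumptions chosen for the example.

```python
# Toy model of the ratchet effect in target setting (illustrative assumptions only):
# a worker is paid a fixed amount for meeting the current target plus a bonus for
# exceeding it, effort is costly in proportion to output, and next period's target
# is ratcheted up to the highest output observed so far.

def total_payoff(outputs, initial_target=100, wage=100.0,
                 bonus_rate=0.5, effort_cost=0.6):
    """Cumulative payoff over several periods under a ratcheting target."""
    target, payoff = initial_target, 0.0
    for q in outputs:
        pay = (wage if q >= target else 0.0) + bonus_rate * max(q - target, 0)
        payoff += pay - effort_cost * q
        target = max(target, q)          # the ratchet: targets never fall
    return payoff

# Revealing full capacity (120) earns a one-time bonus but permanently raises the
# target, so holding output at the original target pays more over three periods.
print(total_payoff([120, 120, 120]))  # ~94: capacity revealed
print(total_payoff([100, 100, 100]))  # ~120: capacity hidden
```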
Game theory
The ratchet effect is central to the mathematical Parrondo's paradox.
Cultural anthropology
In 1999 comparative psychologist Michael Tomasello used the ratchet effect metaphor to shed light on the evolution of culture. He explains that the sharedness of human culture means that it is cumulative in character. Once a certain invention has been made, it can jump from one mind to another (by means of imitation) and thus a whole population can acquire a new trait (and so the ratchet has gone "up" one tooth). Comparative psychologist Claudio Tennie, Tomasello, and Josep Call call this the "cultural ratchet" and they describe it, amongst primates, as being unique to human culture.
Developmental biology
Receptors which initiate cell fate transduction cascades in early embryo development exhibit a ratchet effect in response to morphogen concentrations. Low receptor occupancy permits increases in occupancy that alter the cell fate, but the high receptor affinity does not allow ligand dissociation, so the cell cannot revert to the fate associated with a lower morphogen concentration.
Technology regulation
The ratchet effect is reflected in the Collingridge dilemma.
Consumer products
The ratchet effect can be seen in long-term trends in the production of many consumer goods. Year by year, automobiles gradually acquire more features. Competitive pressures make it hard for manufacturers to cut back on the features unless forced by a true scarcity of raw materials (e.g., an oil shortage that drives costs up radically). University textbook publishers gradually get "stuck" in producing books that have excess content and features.
In software development, products which compete often will use specification lists of competitive products to add features, presuming that they must provide all of the features of the competitive product, plus add additional functionality. This can lead to "feature creep" in which it is considered necessary to add all of a competitor's features whether or not customers will use them.
Airlines initiate frequent-flyer programs that become ever harder to terminate. Successive generations of home appliances gradually acquire more features; new editions of software acquire more features; and so on. With all of these goods, there is ongoing debate as to whether the added features truly improve usability, or simply increase the tendency for people to buy the goods.
Trade legislation
The term was included by the MAI Negotiating Group in the 1990s as the essence of a device to enforce legislative progress toward "free trade" by preventing legislative rollback with the compulsory assent of governments as a condition of participation.
Rollback is the liberalisation process by which the reduction and eventual elimination of nonconforming measures to the MAI would take place. It is a dynamic element linked with standstill, which provides its starting point. Combined with standstill, it would produce a "ratchet effect", where any new liberalisation measures would be "locked in" so they could not be rescinded or nullified over time.
See also
Argument to moderation
Muller's ratchet
Tragedy of the commons
Collingridge dilemma
Entropy
References
Game theory | Ratchet effect | Mathematics | 1,046 |
35,946,908 | https://en.wikipedia.org/wiki/Market%20game | In economic theory, a strategic market game, also known as a market game, is a game explaining price formation through game theory, typically implementing a general equilibrium outcome as a Nash equilibrium.
Fundamentally in a strategic market game, markets work in a strategic way that does not (directly) involve price but can indirectly influence it. The key ingredients to modelling strategic market games are the definition of trading posts (or markets), and their price formation mechanisms as a function of the actions of players. A leading example is the Lloyd Shapley and Martin Shubik trading post game.
Shapley-Shubik use a numeraire and trading posts for the exchange of goods. The relative price of each good in terms of the numeraire is determined as the ratio of the amount of the numeraire brought at each post, to the quantity of goods offered for sale at that post. In this way, every agent is allocated goods in proportion to his bids, so that posts always clear.
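In symbols, with notation introduced here only for illustration (b for bids of the numeraire, q for quantities offered), the Shapley–Shubik trading-post rule can be written as:

```latex
% Trading-post price formation: b_i^j is the numeraire bid by agent i at post j,
% q_i^j is the quantity of good j that agent i offers for sale there.
\[
  p^{j} \;=\; \frac{\sum_{i} b_{i}^{j}}{\sum_{i} q_{i}^{j}},
  \qquad
  x_{i}^{j} \;=\; \frac{b_{i}^{j}}{p^{j}}
           \;=\; \Bigl(\sum_{k} q_{k}^{j}\Bigr)\frac{b_{i}^{j}}{\sum_{k} b_{k}^{j}},
\]
% so each agent receives good j in proportion to his bid and every post clears exactly.
```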
Pradeep Dubey and John Geanakoplos show that such a game can be a strategic foundation of the Walras equilibrium. A key ingredient of such approaches is to have very large numbers of players, such that for each player the action appears to him as a linear constraint that he cannot influence.
A detailed description of price formation in a strategic market game, in which for each commodity there is a unique trading post on which consumers place offers of the commodity and bids of inside money, is provided by James Peck, Karl Shell and Stephen Spear.
References
Business models
Game theory game classes | Market game | Mathematics | 320 |
5,249,670 | https://en.wikipedia.org/wiki/Boulder%20wall | A boulder wall, also spelled boulder-walls or bowlder-wall, is a kind of wall built of round flints and pebbles, laid in a strong mortar. It is used where the sea has a beach cast up, or where there are plenty of flints.
See also
Dry stone
References
Types of wall
Building
Fortification (architectural elements) | Boulder wall | Engineering | 71 |
17,803,969 | https://en.wikipedia.org/wiki/List%20of%20Star%20Trek%20materials | This is a list of notable fictional materials from the science fiction universe of Star Trek. Like other aspects of stories in the franchise, some were recurring plot elements from one episode or series to another.
Metals for starship construction
The fictional metals duranium and tritanium were referred to in many episodes as extremely hard alloys used in starship hulls and hand-held tools. The planet-killer in "The Doomsday Machine" had a hull made of solid neutronium, which is capable of withstanding a starship's phasers. Neutronium is considered to be virtually indestructible; the only known way of stopping the planet-killer is to destroy it from the inside via the explosion of a starship's impulse engines. Federation ships in the 32nd century also used this material in their construction. The NX-01 Enterprise partly consists of horonium, an element that is rare to find but can be synthesised. It is also used to power a time portal on Krulmuth-B.
Transparent aluminum
Star Trek technical manuals indicate that transparent aluminum is used in various fittings in starships, including exterior ship portals and windows. It was notably mentioned in the 1986 film Star Trek IV: The Voyage Home. Ultra-strong transparent panels were needed to construct water tanks within their ship's cargo bay for containing two humpback whales and hundreds of tons of water. However, the Enterprise crew, without money appropriate to the period, found it necessary to barter for the required materials. Chief Engineer Montgomery Scott exchanges the chemical formula for transparent aluminum for the needed material. When Dr. Leonard McCoy informs Scott that giving Dr. Nichols (Alex Henteloff) the formula is altering the future, the engineer responds, "Why? How do we know he didn't invent the thing?" (In the novelization of the film, Scott is aware that Dr. Marcus "Mark" Nichols, the Plexicorp scientist with whom he and McCoy deal, was its "inventor", and concludes that his giving of the formula is a predestination paradox/bootstrap paradox.) The substance is described as being as transparent as glass while possessing the strength and density of high-grade aluminum. It was also mentioned in the Star Trek: The Next Generation episode "In Theory".
The series' science consultant, André Bormanis, has concluded that the material would not be a good electrical conductor.
Transparent real-life aluminum compounds or aluminum metal
An aluminum windowpane "of glass-like transparency" was reported from Germany in 1933.
Corundum (Al2O3) is transparent and is widely used in commercial and industrial settings. It has a hardness of 9 Mohs, making it the third hardest mineral after diamond and moissanite.
Aluminium oxynitride ((AlN)x·(Al2O3)1−x) is another transparent ceramic, with a hardness of 7.7 Mohs, and has military applications as bullet-resistant armour, but is too expensive for widespread use. It was patented in 1986.
Pure transparent aluminum was created as a new state of matter by a team of scientists in 2009. A laser pulse removed an electron from every atom without disrupting the crystalline structure. However, the transparent state lasted for only 40 femtoseconds, until electrons returned to the material.
A group of scientists led by Ralf Röhlsberger at Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany, succeeded in turning iron transparent during research in 2012 to create quantum computers.
Trellium-D
Trellium-D, shown in Star Trek: Enterprise, was an alloy used in the Delphic Expanse as a protection against spatial anomalies there. It had unusual effects on Vulcan physiology, causing a loss of emotional control, and became a recurring plot element in the third season of Star Trek: Enterprise, exploring the theme of drug addiction.
Other materials were occasionally mentioned in the scripts, such as nitrium, a radiation-resistant material.
Energy sources
Dilithium
Dilithium crystals, in all Star Trek series, were shown to be an essential component for a starship's faster than light drive, or warp drive, since they were necessary to regulate the matter-antimatter reactions needed to generate the required energy. Dilithium was frequently featured in the original series as a scarce resource. By the time the later series were set, dilithium could be synthesized.
Trilithium
Trilithium is a material used in a star-destroying weapon in Star Trek Generations, and an explosive in "The Chute" (VOY). This is due to the fact that trilithium is termed a "nuclear inhibitor," which is believed to be any substance that interferes with nuclear reactions. In the film, trilithium is known to be capable, when used to its full potential, of stopping all fusion within a star, thereby collapsing the star and destroying everything within its solar system via a shock wave. Trilithium resin is a toxic byproduct of warp engines, and can be used as a powerful, and quite unstable, explosive (see "Starship Mine", the 18th episode of the sixth season of Star Trek: The Next Generation). It is not known whether this is related to the nuclear inhibitor.
Precious materials
Latinum is featured in many episodes of Deep Space Nine as a medium of exchange used by Ferengi and other races. It exists as a liquid at ambient conditions, has an extremely high value per unit weight/volume, and is impossible to replicate. For easier handling, it is incorporated into gold casings of various standard sizes. (At one point, Jadzia Dax remarks that the idea of doing so may have come from "somebody who got tired of making change with an eyedropper.") The combination is referred to as "gold-pressed latinum" and is divided into denominations of slips, strips, bars, and bricks in ascending order of value. The Deep Space Nine episode "Body Parts" establishes that there are 100 slips to one strip, and 20 strips to one bar; the conversion between the brick and any smaller unit is never mentioned. The Ferengi considered gold to be a valuable commodity in the past, but now regard it as worthless when not combined with latinum.
Tholian silk was a valuable fabric mentioned in multiple series.
Bio-mimetic gel is a volatile substance with medical applications. It is also highly sought after for use in illegal activities, such as genetic experimentation and biological weapons development. As such, its use is strictly regulated by the United Federation of Planets, and sale of the substance is prohibited. The substance was first mentioned in an episode of Star Trek: The Next Generation, and was used as a plot element in several episodes of Star Trek: Deep Space Nine.
Propulsion
Verterium cortenide is a usually synthetically generated compound, the only known substance to be capable of generating warp fields, when supplied with energy, in form of plasma, from the warp core. Warp coils are made of this material.
Benamite is a rare and unstable form of crystal required to construct and run a quantum slipstream drive. According to "Timeless" (VOY), benamite is extremely difficult to synthesize and creating enough for one slipstream drive can take years. Synthesized benamite is also known to decay and become useless over time. It is not known if naturally occurring benamite is subject to the same process of decay.
Minerals
Kironide is a mineral by which, upon consuming plants containing the mineral, the Platonians (the inhabitants of the planet Platonius) acquire telekinetic powers, including the ability to levitate, in the original series episode "Plato's Stepchildren".
Pergium is a substance mined in "The Devil in the Dark", and fictionally given the atomic number 112 as a chemical element in a non-canon Star Trek medical manual publication.
Drugs
Cordrazine, introduced in "The City on the Edge of Forever" is a powerful stimulant used to revive patients in an emergency. Overdoses cause hallucinations, madness and death.
Felicium, a highly addictive narcotic produced on the planet Brekka, but misrepresented as a medicine for a plague affecting the planet Ornara. It was introduced in the episode Symbiosis.
Venus drug, introduced in "Mudd's Women", causes women to appear much lovelier and more exciting.
Inaprovaline, introduced in "Transfigurations". Helps resuscitate the neurological and cardiovascular systems by reinforcing the cell membranes. It is also frequently used as an analgesic.
Ketracel-White, introduced in Star Trek: Deep Space Nine, is a narcotic stimulant drug intravenously taken among the Jem'Hadar soldiers of The Dominion. The Jem'Hadar were created by the Founders – a shape-shifting species in the Gamma Quadrant – with a genetic predisposition for addiction to the drug. This was done to ensure their loyalty to the Founders. The drug is synthetically manufactured and refined at guarded facilities throughout Dominion space. Ketracel-White is stored as a liquid in glass vials locked in portable cases held by Vorta field supervisors. A Vorta must dispense the drug among the unit they command at regular intervals, otherwise the Jem'Hadar will suffer withdrawal leading to death. A vial of White is inserted into a dispensing mechanism embedded in the soldier's chest armor, and automatically pumped through a tube inserted into the common carotid artery.
The Son'a were also known for their production of Ketracel-White (Star Trek: Insurrection).
Retinax-5, introduced in Star Trek II: The Wrath of Khan, a drug that corrects vision problems.
Unstable substances
Protomatter is a key component of the Genesis Device prototype—an experimental terraformation device introduced in Star Trek II: The Wrath of Khan. Protomatter is presented as an unstable substance that, due to its instability, is considered unethical for usage in scientific research. The substance is used as a plot device to compare David Marcus with his father, James T. Kirk, both of whom, in Lieutenant Saavik's words, "changed the rules"—David Marcus by using the forbidden protomatter, and James T. Kirk by "cheating" to win the Kobayashi Maru test. The inclusion of protomatter ultimately results in both the accelerated maturation of the regenerated Spock during his stay on the Genesis planet, and the planet's subsequent explosion into an asteroid belt.
In the Deep Space Nine episode "By Inferno's Light", Protomatter was used by a Dominion changeling in a bomb plot that, if successful, would have destroyed the Bajoran sun and the forces of the Alpha Quadrant.
Protomatter is also mentioned in the Star Trek: Voyager episode "Mortal Coil", where it is said, "Protomatter's one of the most sought-after commodities. The best energy source in the quadrant."
The Omega Molecule is a highly unstable molecule believed to be the most powerful substance known to exist. If not properly disposed of, it may destroy subspace and render warp travel impossible. In Star Trek: Voyager, during the episode The Omega Directive, Voyager encounters Omega particles and Captain Janeway must comply with the Omega Directive and destroy the particles. Later in the episode, they spontaneously stabilize for a brief moment before they are destroyed.
Other
Red matter is a red liquid material introduced in Star Trek (the 2009 film) that is able to create a black hole when not properly contained. Spock attempts to use it to stop a massive supernova, but the resulting black hole causes his own ship and a Romulan mining vessel to travel back in time. Later in the film, the antagonist Nero uses it to destroy the planet Vulcan. Shortly after, the future Spock's ship containing the red matter is used to destroy Nero's Romulan mining vessel.
Fictional substances within Star Trek
Corbomite was named by Captain Kirk in a bluff in "The Corbomite Maneuver" as a material and a device that prevents attack, because if any destructive energy touches the vessel, a reverse reaction of equal strength is created, destroying the attacker. This bluff was also used in "The Deadly Years" to escape the Romulans.
Archerite was named by Commander Shran also in a bluff in "Proving Ground" as a material that his ship was looking to mine, during an encounter at the test site of the Xindi planet killer weapon.
See also
List of fictional elements, materials, isotopes and subatomic particles
References
External links
Materials
Star Trek | List of Star Trek materials | Physics | 2,642 |
7,005,552 | https://en.wikipedia.org/wiki/Aleksandr%20Lebedev%20%28biochemist%29 | Alexander Nikolayevich Lebedev (1869–1937) was a biochemist in the Russian Empire and the Soviet Union. He is known for his early experiments on the biochemical basis of behavior. Lebedev apprenticed as a student with physiologist and psychologist Ivan Pavlov, becoming familiar with various techniques used in behavioral psychology. Lebedev developed a theory that behavior in general, and specifically conditioned behavior, had a biochemical rather than psychological basis. He began his studies in biochemistry at Moscow State University, obtaining a doctorate in 1898. He then proceeded to publish widely on the topic of "biochemistry of the mind" and is considered by some to have pioneered the field of neuropharmacology.
Sources
Cooper, D. M. Russian Science Reader, Oxford, Pergamon Press; NY, Macmillan (1964).
Biochemists from the Russian Empire
Soviet scientists
Moscow State University alumni
1869 births
1937 deaths | Aleksandr Lebedev (biochemist) | Chemistry | 189 |
23,092,691 | https://en.wikipedia.org/wiki/Textile-reinforced%20concrete | Textile-reinforced concrete is a type of reinforced concrete in which the usual steel reinforcing bars are replaced by textile materials. Instead of using a metal cage inside the concrete, this technique uses a fabric cage inside the same.
Overview
Materials with high tensile strength and negligible elongation are used, in the form of woven or nonwoven fabrics, to reinforce the concrete. The fibres used for making the fabric are high-tenacity fibres such as jute, glass fibre, Kevlar, polypropylene, and polyamides (nylon). Recently, attention has been given to the use of plant-based fibers (either dispersed or as a fabric) in the reinforcement of concrete. The use of plant-based fibers is promising, but the individual components are subject to degradation due to the alkaline environment. The weaving of the fabric is done either in a coil fashion or in a layer fashion. Molten materials, ceramic clays, plastics or cement concrete are deposited on the base fabric in such a way that the inner fabric is completely wrapped with the concrete or plastic.
As a result of this structure, the resulting concrete gains flexibility from the inner fabric along with the high strength provided by the outer materials. Various nonwoven structures are also commonly used to form the base structure. Special types of weaving machines are used to form spiral fabrics, while layer fabrics are generally nonwoven.
History
First patents
The initial creation of textile-reinforced concrete (TRC) began in the 1980s. Concepts for TRC originated from the Sächsisches Textiforschungs-institut e.V. STFI, a German institute focusing on Textile technology. The first patent for textile-reinforced concrete design, granted in 1982, was for transportation related safety items. These items were specifically meant to be reinforced with materials other than steel. In 1988, a patent was awarded for a safety barrier that used a rope-like reinforcement as its design. This reinforcement was made from concrete waste and textiles, and the innovative arrangement and size of the reinforcing fibers inside was notable. The reinforcements were set in place so that the concrete could be poured in, and the size of the reinforcement was described using diameter and mesh size.
Concrete canoe and textile reinforced concrete
In 1996, German university students created two concrete canoes using textile reinforcement. One boat utilized alkali-resistant glass as its textile reinforcement. To manufacture the glass in a fabric, a process called Malimo-technique was used to keep the glass in one continuous yarn, such that it could be used to make the fabric. The other boat was constructed using carbon fiber fabric as its method of reinforcement. The boats competed in the 1996 Concrete Canoe Regatta in Dresden, Germany, and this was the first time that textile-reinforced concrete was brought to public attention; the boats received an award for their design.
Construction
Four factors are important when constructing TRC, which include the quality of the concrete, the interaction between the textile and the concrete, the amount of fibers used, and the arrangement of the textile reinforcement inside of the concrete.
The particle size of the concrete must be carefully selected. If the concrete is too coarse, it will not be able to permeate through the textile reinforcement. For the best results, fresh concrete should be used. To aid in adhesion, chemical admixtures can be added to help the fibers stick to the concrete.
The characteristic features of TRC are its thin structure and malleable nature, as well as its ability to retain a high tensile strength; this is due to reinforcement in the concrete that uses long continuous fibers woven in a specific direction in order to add strength. As a result of the varying strength and properties needed to support correct loading, there are many different types of yarns, textile weaves, and shapes that can be used in TRC. The textile begins with a yarn that is made of a continuous strand of either filaments or staples. The yarn is woven, knit, glued, braided or left non-woven, depending on the needs of the project. Carbon, AR glass, and basalt are especially good materials for this process. Carbon has good tensile strength and low heat expansion, but is costly and has poor adhesion to concrete. Basalt is formed by melting basalt rock; it is more cost effective than carbon, and has good tensile strength. The drawback of basalt is that when it is placed in an alkali solution, such as concrete, it loses some of its fiber volume, thus reducing its strength. This means a nanocomposite polymer coating must be applied to increase the longevity of the construction. AR glass has this problem as well, but the positives of using AR glass in a TRC structure, including its adhesion to concrete and low cost, outweigh these issues.
Textile-reinforced concrete is described as a strain-hardening composite. Strain-hardening composites use short fiber reinforcements, such as yarn made from carbon fiber, to strengthen a material. Strain-hardening requires the reinforcements and concrete matrix surrounding the reinforcement to be carefully designed in order to achieve the desired strength. The textile must be oriented in the correct direction during design to handle the main loading and stresses it is expected to hold. Types of weaves that can be used to make fabrics for TRC include plain weave, Leno weave, warp-knitted, and 3D spacer.
Another important aspect of textile-reinforced concrete is the permeability of the textile. Special attention must be paid to its structure, such that the textile is open enough for the concrete to flow through, while remaining stable enough to hold its own shape, since the placement of the reinforcement is vital to the final strength of the piece. The textile material must also have a high tensile strength, a high elongation before breaking, and a higher Young's Modulus than the concrete surrounding it.
The textile can be hand laid into the concrete or the process could be mechanized to increase efficiency. Different ways of creating textile-reinforced concrete vary from traditional form-work casts, all the way to Pultrusion. When making TRC using casting, the form work must be constructed, and the textile reinforcement must be pre-installed and ready for concrete to be poured in. After the concrete is poured and has had time to harden, the form-work is removed to reveal the structure. Another way of creating a TRC structure is lamination by hand. Similar to casting, a form-work must be created to house the concrete and textile; concrete is then spread evenly in the form work, and then the textile is laid on top. More concrete is poured on top, and a roller is used to push the concrete into the spaces in the textile; this is completed layer after layer, until the structure reaches its required size. TRC can also be created by Pultrusion. In this process, a textile is pushed through a slurry infiltration chamber, where the textile is covered and embedded with concrete. Rollers squeeze the concrete into the textile, and it can take several sized rollers to get the desired shape and size.
Uses
Uses of textile-reinforced materials and concretes are increasing rapidly, in combination with advances in materials science and textile technology. Bridges, pillars and road guards are made from Kevlar- or jute-reinforced concretes to withstand vibrations, sudden jerks and torsion (mechanics). The widespread use of reinforced concrete construction in the modern world stems from the extensive availability of its ingredients: reinforcing steel as well as concrete. Reinforced concrete fits into nearly every form, is extremely versatile, and is therefore widely used in the construction of buildings, bridges, etc. The major disadvantage of reinforced concrete (RC) is that its steel reinforcement is prone to corrosion. Concrete is highly alkaline and forms a passive layer on steel, protecting it against corrosion. Substances penetrating the concrete from outside (carbonisation) lower the alkalinity over time (depassivation), making the steel reinforcement lose its protection and thus resulting in corrosion. This leads to spalling of the concrete, reducing the permanency of the structure as a whole and leading to structural failure in extreme cases.
Due to the thin, cost effective, and lightweight nature of textile-reinforced concrete, it can be used to create many different types of structural components. The crack control of TRC is much better than that of traditional steel-reinforced concrete; when TRC cracks, it creates multiple small fissures, between 50 and 100 nanometers wide. In some cases, the cracks can self-heal, since a 50 nanometer crack is almost as impermeable as non-cracked concrete. Due to these properties, TRC would be a great material for all different types of architectural and civil engineering applications.
Textile-reinforced concrete can be used to create full structures, like bridges and buildings, as well as large structures in environments with much water, such as in mines and boat piers. As of 2018, the testing procedures and approvals for these structures are not available, although TRC can currently be used to create small components, such as panels. Façade panels would be a convenient use of TRC, due to the material being thinner and lighter than typical concrete walls, and a cheaper alternative to other options. For bridges and building profiles, TRC could add to the strength and overall design of the structure. TRC could also be used to create irregular shapes with hard edges, and could be a novel way to enhance the style and architectural design of modern buildings.
Textile-reinforced concrete could also be used to reinforce, repair, or add on to existing buildings, in either a structural or cosmetic basis. Furthermore, TRC could be used to provide a protective layer for old structures or retrofit new elements to an old structure, due to the lack of corrosion associated with this mechanism. Unlike steel, which will rust if a crack forms, TRC does not corrode and will retain its strength, even with small cracks. If carbon fiber fabric is used as the textile, TRC could be used to heat buildings; carbon fiber is conductive, and could be used to support the building, as well as heat it.
Current examples
Large scale textile-reinforced concrete can be seen in Germany, at RWTH Aachen University, where a pavilion was constructed using a textile-reinforced concrete roof. The roof was engineered using four TRC pieces; each piece was thin and double curved in the shape of a hyperbolic paraboloid. Traditional concrete design would not allow this structure, due to the complex form-work needed to create the piece. RWTH Aachen University also utilized textile-reinforced concrete to create façade panels on a new extension added to their Institute of Structural Concrete building. This façade was made using AR glass and was made much lighter weight and in a more cost effective manner than a traditional façade of steel-reinforced concrete or stone. In 2010, RWTH Aachen University also helped to design a textile-reinforced concrete bridge in Albstadt, Germany, using AR glass as the reinforcement; the bridge is approximately 100 meters long, and is expected to have a much longer service life than the steel reinforced concrete bridge it replaced.
Sustainability
Textile-reinforced concrete is generally thinner than traditional steel-reinforced concrete. Typical steel-reinforced construction is 100 to 300 mm thick, while a TRC structure is generally 50 mm thick. TRC is much thinner due to an extra protective layer of concrete that is not needed for its design. Due to this thinner structure, less material is used, which helps to reduce the price of using concrete, since the amount of concrete needed is also reduced. Since TRC can be used to extend the life of existing structures, it cuts down on the cost of materials and man power needed to tear down these existing structures, in order to create new ones. Instead of replacing old structures, they can now be repaired to add years of service to the lives of their construction.
See also
Geotextiles
Fiber-reinforced concrete
References
Composite materials
Textiles
Fibre-reinforced cementitious materials
Reinforced concrete | Textile-reinforced concrete | Physics | 2,431 |
31,476,736 | https://en.wikipedia.org/wiki/Joachim%20Maier | Joachim Maier (born 5 May 1955) is Emeritus Director at the Max Planck Institute for Solid State Research in Stuttgart (Germany) and Scientific Member of the Max Planck Society.
Education and career
Maier studied chemistry at Saarland University in Saarbrücken, where he earned his master's degree and PhD in physical chemistry. He received his habilitation at the University of Tübingen. From 1988 to 1991 he was responsible for the activities on functional ceramics at the MPI for Metals Research in Stuttgart, and from 1988 to 1996 he taught defect chemistry at the Massachusetts Institute of Technology. Notwithstanding other prestigious offers, he decided in favor of the Max Planck Society. In 1991 he was appointed Scientific Member of the Max Planck Society, Director at the MPI for Solid State Research and Honorary Professor at the University of Stuttgart. He is the recipient of various prizes and a member of various national and international academies, including the German Academy of Sciences Leopoldina, the German technical academy acatech, Academia Europaea, and the Academy of Sciences and Literature (Mainz, Germany); he is also a Fellow of the Royal Society of Chemistry, Fellow of the Electrochemical Society, and IUPAC Fellow. Joachim Maier is Editor-in-Chief of Solid State Ionics and on the board of a number of scientific journals.
Research
Maier's major research fields comprise physical chemistry of the solid state, thermodynamics and kinetics, defect chemistry and transport in solids, ionic and mixed conductors, boundary regions and electrochemistry. In this context energy transfer and storage are to the fore. Maier developed a scientific field nowadays termed nanoionics. Nanoionics refers to questions of ion transport, stoichiometry and reactivity in confined systems and is of equal significance for chemistry, physics and biology. In these fields Maier has authored/coauthored more than 800 publications in peer-reviewed journals.
References
1955 births
Living people
20th-century German chemists
Members of the German National Academy of Sciences Leopoldina
21st-century German chemists
Solid state chemists
Computational chemists
Max Planck Institute directors
Saarland University alumni
Academic staff of the University of Tübingen
People from Neunkirchen, Saarland | Joachim Maier | Chemistry | 437 |
19,048 | https://en.wikipedia.org/wiki/Mass | Mass is an intrinsic property of a body. It was traditionally believed to be related to the quantity of matter in a body, until the discovery of the atom and particle physics. It was found that different atoms and different elementary particles, theoretically with the same amount of matter, have nonetheless different masses. Mass in modern physics has multiple definitions which are conceptually distinct, but physically equivalent. Mass can be experimentally defined as a measure of the body's inertia, meaning the resistance to acceleration (change of velocity) when a net force is applied. The object's mass also determines the strength of its gravitational attraction to other bodies.
The SI base unit of mass is the kilogram (kg). In physics, mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than a balance scale comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass. This is because weight is a force, while mass is the property that (along with gravity) determines the strength of this force.
In the Standard Model of physics, the mass of elementary particles is believed to be a result of their coupling with the Higgs boson in what is known as the Brout–Englert–Higgs mechanism.
Phenomena
There are several distinct phenomena that can be used to measure mass. Although some theorists have speculated that some of these phenomena could be independent of each other, current experiments have found no difference in results regardless of how it is measured:
Inertial mass measures an object's resistance to being accelerated by a force (represented by the relationship F = ma).
Active gravitational mass determines the strength of the gravitational field generated by an object.
Passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field.
The mass of an object determines its acceleration in the presence of an applied force. The inertia and the inertial mass describe this property of physical bodies at the qualitative and quantitative level respectively. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates and is affected by a gravitational field. If a first body of mass mA is placed at a distance r (center of mass to center of mass) from a second body of mass mB, each body is subject to an attractive force F = GmAmB/r2, where G is the "universal gravitational constant". This is sometimes referred to as gravitational mass. Repeated experiments since the 17th century have demonstrated that inertial and gravitational mass are identical; since 1915, this observation has been incorporated a priori in the equivalence principle of general relativity.
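Putting the two roles of mass side by side makes the connection explicit: equating the gravitational force on a test body with Newton's second law shows that the free-fall acceleration is independent of the test body's mass whenever inertial and gravitational mass coincide, which is the empirical content of the equivalence principle.

```latex
% Inertial versus (passive) gravitational mass, and the free-fall acceleration
% that follows when the two are numerically equal.
\[
  F = m_{\mathrm{i}}\,a,
  \qquad
  F = \frac{G\,m_{\mathrm{g}}\,M}{r^{2}},
  \qquad
  a = \frac{G\,M}{r^{2}} \quad \mbox{if } m_{\mathrm{i}} = m_{\mathrm{g}}.
\]
```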
Units of mass
The International System of Units (SI) unit of mass is the kilogram (kg). The kilogram is 1000 grams (g), and was first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. However, because precise measurement of a cubic decimetre of water at the specified temperature and pressure was difficult, in 1889 the kilogram was redefined as the mass of a metal object, and thus became independent of the metre and the properties of water, this being a copper prototype of the grave in 1793, the platinum Kilogramme des Archives in 1799, and the platinum–iridium International Prototype of the Kilogram (IPK) in 1889.
However, the mass of the IPK and its national copies have been found to drift over time. The re-definition of the kilogram and several other units came into effect on 20 May 2019, following a final vote by the CGPM in November 2018. The new definition uses only invariant quantities of nature: the speed of light, the caesium hyperfine frequency, the Planck constant and the elementary charge.
Non-SI units accepted for use with SI units include:
the tonne (t) (or "metric ton"), equal to 1000 kg
the electronvolt (eV), a unit of energy, used to express mass in units of eV/c2 through mass–energy equivalence
the dalton (Da), equal to 1/12 of the mass of a free carbon-12 atom, approximately 1.66×10⁻²⁷ kg.
Outside the SI system, other units of mass include:
the slug (sl), an Imperial unit of mass (about 14.6 kg)
the pound (lb), a unit of mass (about 0.45 kg), which is used alongside the similarly named pound (force) (about 4.5 N), a unit of force
the Planck mass (about 2.18×10⁻⁸ kg), a quantity derived from fundamental constants
the solar mass (M☉), defined as the mass of the Sun, primarily used in astronomy to compare large masses such as stars or galaxies (≈ 1.99×10³⁰ kg)
the mass of a particle, as identified with its inverse Compton wavelength (m = h/(λc))
the mass of a star or black hole, as identified with its Schwarzschild radius (rs = 2GM/c2; see the conversion sketch below).
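The sketch below converts between a few of the units listed above; the constants are rounded values and the function names are chosen here for the example.

```python
# Illustrative conversions between several units of mass (rounded constants).
C     = 2.998e8      # speed of light, m/s
H     = 6.626e-34    # Planck constant, J*s
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
EV    = 1.602e-19    # one electronvolt, J
M_SUN = 1.989e30     # solar mass, kg

def kg_to_ev_per_c2(m_kg):
    """Express a mass in eV/c^2 via E = m c^2."""
    return m_kg * C**2 / EV

def mass_from_compton_wavelength(lambda_m):
    """Mass identified with a (non-reduced) Compton wavelength: m = h / (lambda c)."""
    return H / (lambda_m * C)

def mass_from_schwarzschild_radius(r_m):
    """Mass identified with a Schwarzschild radius: r_s = 2GM/c^2, so M = r c^2 / (2G)."""
    return r_m * C**2 / (2 * G)

print(kg_to_ev_per_c2(9.109e-31) / 1e6)                # electron: ~0.511 MeV/c^2
print(mass_from_schwarzschild_radius(2.95e3) / M_SUN)  # ~3 km radius: ~1 solar mass
```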
Definitions
In physical science, one may distinguish conceptually between at least seven different aspects of mass, or seven physical notions that involve the concept of mass. Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass. There are a number of ways mass can be measured or operationally defined:
Inertial mass is a measure of an object's resistance to acceleration when a force is applied. It is determined by applying a force to an object and measuring the acceleration that results from that force. An object with small inertial mass will accelerate more than an object with large inertial mass when acted upon by the same force. One says the body of greater mass has greater inertia.
Active gravitational mass is a measure of the strength of an object's gravitational flux (gravitational flux is equal to the surface integral of gravitational field over an enclosing surface). Gravitational field can be measured by allowing a small "test object" to fall freely and measuring its free-fall acceleration. For example, an object in free-fall near the Moon is subject to a smaller gravitational field, and hence accelerates more slowly, than the same object would if it were in free-fall near the Earth. The gravitational field near the Moon is weaker because the Moon has less active gravitational mass.
Passive gravitational mass is a measure of the strength of an object's interaction with a gravitational field. Passive gravitational mass is determined by dividing an object's weight by its free-fall acceleration. Two objects within the same gravitational field will experience the same acceleration; however, the object with a smaller passive gravitational mass will experience a smaller force (less weight) than the object with a larger passive gravitational mass.
According to relativity, mass is nothing else than the rest energy of a system of particles, meaning the energy of that system in a reference frame where it has zero momentum. Mass can be converted into other forms of energy according to the principle of mass–energy equivalence. This equivalence is exemplified in a large number of physical processes including pair production, beta decay and nuclear fusion. Pair production and nuclear fusion are processes in which measurable amounts of mass are converted to kinetic energy or vice versa.
Curvature of spacetime is a relativistic manifestation of the existence of mass. Such curvature is extremely weak and difficult to measure. For this reason, curvature was not discovered until after it was predicted by Einstein's theory of general relativity. Extremely precise atomic clocks on the surface of the Earth, for example, are found to measure less time (run slower) when compared to similar clocks in space. This difference in elapsed time is a form of curvature called gravitational time dilation. Other forms of curvature have been measured using the Gravity Probe B satellite.
Quantum mass manifests itself as a difference between an object's quantum frequency and its wave number. The quantum mass of a particle is proportional to the inverse Compton wavelength and can be determined through various forms of spectroscopy. In relativistic quantum mechanics, mass is one of the irreducible representation labels of the Poincaré group.
Weight vs. mass
In everyday usage, mass and "weight" are often used interchangeably. For instance, a person's weight may be stated as 75 kg. In a constant gravitational field, the weight of an object is proportional to its mass, and it is unproblematic to use the same unit for both concepts. But because of slight differences in the strength of the Earth's gravitational field at different places, the distinction becomes important for measurements with a precision better than a few percent, and for places far from the surface of the Earth, such as in space or on other planets. Conceptually, "mass" (measured in kilograms) refers to an intrinsic property of an object, whereas "weight" (measured in newtons) measures an object's resistance to deviating from its current course of free fall, which can be influenced by the nearby gravitational field. No matter how strong the gravitational field, objects in free fall are weightless, though they still have mass.
The force known as "weight" is proportional to mass and acceleration in all situations where the mass is accelerated away from free fall. For example, when a body is at rest in a gravitational field (rather than in free fall), it must be accelerated by a force from a scale or the surface of a planetary body such as the Earth or the Moon. This force keeps the object from going into free fall. Weight is the opposing force in such circumstances and is thus determined by the acceleration of free fall. On the surface of the Earth, for example, an object with a mass of 50 kilograms weighs 491 newtons, which means that 491 newtons is being applied to keep the object from going into free fall. By contrast, on the surface of the Moon, the same object still has a mass of 50 kilograms but weighs only 81.5 newtons, because only 81.5 newtons is required to keep this object from going into a free fall on the Moon. Restated in mathematical terms, on the surface of the Earth, the weight W of an object is related to its mass m by W = mg, where g ≈ 9.8 m/s² is the acceleration due to Earth's gravitational field (expressed as the acceleration experienced by a free-falling object).
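A minimal sketch of the weight calculation above, assuming rounded surface gravities of about 9.81 m/s² for the Earth and 1.63 m/s² for the Moon (both assumed example values):

```python
# Weight W = m * g for the same 50 kg mass on the Earth and on the Moon.
m = 50.0          # kg
g_earth = 9.81    # m/s^2, rounded
g_moon = 1.63     # m/s^2, rounded

print(f"weight on Earth: {m * g_earth:.1f} N")   # 490.5 N, roughly the 491 N quoted above
print(f"weight on Moon:  {m * g_moon:.1f} N")    # 81.5 N
```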
For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of an object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like, may experience weight forces many times those caused by resistance to the effects of gravity on objects, resulting from planetary surfaces. In such cases, the weight W of an object is related to its mass m by the generalized equation W = ma, where a is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero).
Inertial vs. gravitational mass
Although inertial mass, passive gravitational mass and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. In classical mechanics, Newton's third law implies that active and passive gravitational mass must always be identical (or at least proportional), but the classical theory offers no compelling reason why the gravitational mass has to equal the inertial mass. That it does is merely an empirical fact.
Albert Einstein developed his general theory of relativity starting with the assumption that the inertial and passive gravitational masses are the same. This is known as the equivalence principle.
The particular equivalence often referred to as the "Galilean equivalence principle" or the "weak equivalence principle" has the most important consequence for freely falling objects. Suppose an object has inertial and gravitational masses m and M, respectively. If the only force acting on the object comes from a gravitational field g, the force on the object is:
F = Mg.
Given this force, the acceleration of the object can be determined by Newton's second law:
F = ma, so that a = F/m.
Putting these together, the gravitational acceleration is given by:
a = (M/m)g.
This says that the ratio of gravitational to inertial mass of any object is equal to some constant K if and only if all objects fall at the same rate in a given gravitational field. This phenomenon is referred to as the "universality of free-fall". In addition, the constant K can be taken as 1 by defining our units appropriately.
According to scientific 'folklore', the first experiments demonstrating the universality of free-fall were conducted by Galileo, who dropped objects from the Leaning Tower of Pisa. This is most likely apocryphal: he is more likely to have performed his experiments with balls rolling down nearly frictionless inclined planes to slow the motion and increase the timing accuracy. Increasingly precise experiments have since been performed, such as those by Loránd Eötvös using the torsion balance pendulum in 1889. To date, no deviation from universality, and thus from Galilean equivalence, has ever been found, at least to the precision 10⁻⁶. More precise experimental efforts are still being carried out.
The universality of free-fall only applies to systems in which gravity is the only acting force. All other forces, especially friction and air resistance, must be absent or at least negligible. For example, if a hammer and a feather are dropped from the same height through the air on Earth, the feather will take much longer to reach the ground; the feather is not really in free-fall because the force of air resistance upwards against the feather is comparable to the downward force of gravity. On the other hand, if the experiment is performed in a vacuum, in which there is no air resistance, the hammer and the feather should hit the ground at exactly the same time (assuming the acceleration of both objects towards each other, and of the ground towards both objects, for its own part, is negligible). This can easily be done in a high school laboratory by dropping the objects in transparent tubes that have the air removed with a vacuum pump. It is even more dramatic when done in an environment that naturally has a vacuum, as David Scott did on the surface of the Moon during Apollo 15.
A stronger version of the equivalence principle, known as the Einstein equivalence principle or the strong equivalence principle, lies at the heart of the general theory of relativity. Einstein's equivalence principle states that within sufficiently small regions of spacetime, it is impossible to distinguish between a uniform acceleration and a uniform gravitational field. Thus, the theory postulates that the force acting on a massive object caused by a gravitational field is a result of the object's tendency to move in a straight line (in other words its inertia) and should therefore be a function of its inertial mass and the strength of the gravitational field.
Origin
In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction but a theory of the latter has not been yet reconciled with the currently popular model of particle physics, known as the Standard Model.
Pre-Newtonian concepts
Weight as an amount
The concept of amount is very old and predates recorded history. The concept of "weight" would incorporate "amount" and acquire a double meaning that was not clearly recognized as such.
Humans, at some early era, realized that the weight of a collection of similar objects was directly proportional to the number of objects in the collection:
W ∝ n,
where W is the weight of the collection of similar objects and n is the number of objects in the collection. Proportionality, by definition, implies that two values have a constant ratio:
W/n = constant,
or equivalently
W = kn, for some constant k.
An early use of this relationship is a balance scale, which balances the force of one object's weight against the force of another object's weight. The two sides of a balance scale are close enough that the objects experience similar gravitational fields. Hence, if they have similar masses then their weights will also be similar. This allows the scale, by comparing weights, to also compare masses.
Consequently, historical weight standards were often defined in terms of amounts. The Romans, for example, used the carob seed (carat or siliqua) as a measurement standard. If an object's weight was equivalent to 1728 carob seeds, then the object was said to weigh one Roman pound. If, on the other hand, the object's weight was equivalent to 144 carob seeds then the object was said to weigh one Roman ounce (uncia). The Roman pound and ounce were both defined in terms of different sized collections of the same common mass standard, the carob seed. The ratio of a Roman ounce (144 carob seeds) to a Roman pound (1728 carob seeds) was:
144 / 1728 = 1 / 12.
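The ounce-to-pound ratio follows directly from the seed counts; the sketch below reproduces it, and the average carob-seed mass used to estimate the Roman pound in grams is an assumed illustrative value, not a figure from this article:

```python
# Roman weight standards expressed as counts of carob seeds.
seeds_per_pound = 1728
seeds_per_ounce = 144

print(seeds_per_ounce / seeds_per_pound)          # 0.0833... = 1/12
print(seeds_per_pound // seeds_per_ounce)         # 12 ounces to the pound

seed_mass_g = 0.19                                # assumed average carob-seed mass in grams
print(f"implied Roman pound ≈ {seeds_per_pound * seed_mass_g:.0f} g")   # ≈ 328 g
```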
Planetary motion
In 1600 AD, Johannes Kepler sought employment with Tycho Brahe, who had some of the most precise astronomical data available. Using Brahe's precise observations of the planet Mars, Kepler spent the next five years developing his own method for characterizing planetary motion. In 1609, Johannes Kepler published the first two of his three laws of planetary motion (the third followed in 1619), explaining how the planets orbit the Sun. In Kepler's final planetary model, he described planetary orbits as following elliptical paths with the Sun at a focal point of the ellipse. Kepler discovered that the square of the orbital period of each planet is directly proportional to the cube of the semi-major axis of its orbit, or equivalently, that the ratio of these two values is constant for all planets in the Solar System.
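Kepler's third law can be checked numerically: the ratio T²/a³ comes out nearly the same for every planet. The orbital data below are rounded textbook values assumed for the illustration:

```python
# Kepler's third law: T^2 / a^3 is approximately the same constant for all planets.
planets = {
    "Mercury": (0.387, 0.241),    # (semi-major axis in AU, orbital period in years)
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}
for name, (a, T) in planets.items():
    print(f"{name:8s}  T^2 / a^3 = {T**2 / a**3:.4f}")   # all close to 1 in these units
```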
On 25 August 1609, Galileo Galilei demonstrated his first telescope to a group of Venetian merchants, and in early January 1610, Galileo observed four dim objects near Jupiter, which he mistook for stars. However, after a few days of observation, Galileo realized that these "stars" were in fact orbiting Jupiter. These four objects (later named the Galilean moons in honor of their discoverer) were the first celestial bodies observed to orbit something other than the Earth or Sun. Galileo continued to observe these moons over the next eighteen months, and by the middle of 1611, he had obtained remarkably accurate estimates for their periods.
Galilean free fall
Sometime prior to 1638, Galileo turned his attention to the phenomenon of objects in free fall, attempting to characterize these motions. Galileo was not the first to investigate Earth's gravitational field, nor was he the first to accurately describe its fundamental characteristics. However, Galileo's reliance on scientific experimentation to establish physical principles would have a profound effect on future generations of scientists. It is unclear if these were just hypothetical experiments used to illustrate a concept, or if they were real experiments performed by Galileo, but the results obtained from these experiments were both realistic and compelling. A biography by Galileo's pupil Vincenzo Viviani stated that Galileo had dropped balls of the same material, but different masses, from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass. In support of this conclusion, Galileo had advanced the following theoretical argument: if two bodies of different masses and different rates of fall are tied together by a string, does the combined system fall faster because it is now more massive, or does the lighter body, in its slower fall, hold back the heavier body? The only convincing resolution to this question is that all bodies must fall at the same rate.
A later experiment was described in Galileo's Two New Sciences published in 1638. One of Galileo's fictional characters, Salviati, describes an experiment using a bronze ball and a wooden ramp. The wooden ramp was "12 cubits long, half a cubit wide and three finger-breadths thick" with a straight, smooth, polished groove. The groove was lined with "parchment, also smooth and polished as possible". And into this groove was placed "a hard, smooth and very round bronze ball". The ramp was inclined at various angles to slow the acceleration enough so that the elapsed time could be measured. The ball was allowed to roll a known distance down the ramp, and the time taken for the ball to move the known distance was measured. The time was measured using a water clock described as follows:
a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results.
Galileo found that for an object in free fall, the distance that the object has fallen is always proportional to the square of the elapsed time:
d ∝ t².
Galileo had shown that objects in free fall under the influence of the Earth's gravitational field have a constant acceleration, and Galileo's contemporary, Johannes Kepler, had shown that the planets follow elliptical paths under the influence of the Sun's gravitational mass. However, Galileo's free fall motions and Kepler's planetary motions remained distinct during Galileo's lifetime.
Mass as distinct from weight
According to K. M. Browne: "Kepler formed a [distinct] concept of mass ('amount of matter' (copia materiae)), but called it 'weight' as did everyone at that time." Finally, in 1686, Newton gave this distinct concept its own name. In the first paragraph of Principia, Newton defined quantity of matter as “density and bulk conjunctly”, and mass as quantity of matter.
Newtonian mass
Robert Hooke had published his concept of gravitational forces in 1674, stating that all celestial bodies have an attraction or gravitating power towards their own centers, and also attract all the other celestial bodies that are within the sphere of their activity. He further stated that gravitational attraction increases by how much nearer the body wrought upon is to its own center. In correspondence with Isaac Newton from 1679 and 1680, Hooke conjectured that gravitational forces might decrease in proportion to the square of the distance between the two bodies. Hooke urged Newton, who was a pioneer in the development of calculus, to work through the mathematical details of Keplerian orbits to determine if Hooke's hypothesis was correct. Newton's own investigations verified that Hooke was correct, but due to personal differences between the two men, Newton chose not to reveal this to Hooke. Isaac Newton kept quiet about his discoveries until 1684, at which time he told a friend, Edmond Halley, that he had solved the problem of gravitational orbits, but had misplaced the solution in his office. After being encouraged by Halley, Newton decided to develop his ideas about gravity and publish all of his findings. In November 1684, Isaac Newton sent a document to Edmond Halley, now lost but presumed to have been titled De motu corporum in gyrum (Latin for "On the motion of bodies in an orbit"). Halley presented Newton's findings to the Royal Society of London, with a promise that a fuller presentation would follow. Newton later recorded his ideas in a three-book set, entitled Philosophiæ Naturalis Principia Mathematica (English: Mathematical Principles of Natural Philosophy). The first was received by the Royal Society on 28 April 1685–86; the second on 2 March 1686–87; and the third on 6 April 1686–87. The Royal Society published Newton's entire collection at their own expense in May 1686–87.
Isaac Newton had bridged the gap between Kepler's gravitational mass and Galileo's gravitational acceleration, resulting in the discovery of the following relationship which governed both of these:
g = μ/R²,
where g is the apparent acceleration of a body as it passes through a region of space where gravitational fields exist, μ is the gravitational mass (standard gravitational parameter) of the body causing gravitational fields, and R is the radial coordinate (the distance between the centers of the two bodies).
By finding the exact relationship between a body's gravitational mass and its gravitational field, Newton provided a second method for measuring gravitational mass. The mass of the Earth can be determined using Kepler's method (from the orbit of Earth's Moon), or it can be determined by measuring the gravitational acceleration on the Earth's surface, and multiplying that by the square of the Earth's radius. The mass of the Earth is approximately three-millionths of the mass of the Sun. To date, no other accurate method for measuring gravitational mass has been discovered.
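A short sketch of this second method, assuming rounded values of g, the Earth's radius and G: the surface acceleration gives the standard gravitational parameter μ = gR², and dividing by G converts it to a mass in kilograms.

```python
# Earth's gravitational mass from surface measurements: g = mu / R^2.
g = 9.81          # m/s^2, surface gravitational acceleration (rounded)
R = 6.371e6       # m, mean radius of the Earth (rounded)
G = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.99e30   # kg, for comparison (rounded)

mu = g * R**2                  # standard gravitational parameter, ~3.98e14 m^3/s^2
M_earth = mu / G               # ~5.97e24 kg
print(f"mu = {mu:.3e} m^3/s^2, M_earth = {M_earth:.3e} kg")
print(f"M_earth / M_sun = {M_earth / M_sun:.1e}")   # ~3e-6, the "three-millionths" above
```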
Newton's cannonball
Newton's cannonball was a thought experiment used to bridge the gap between Galileo's gravitational acceleration and Kepler's elliptical orbits. It appeared in Newton's 1728 book A Treatise of the System of the World. According to Galileo's concept of gravitation, a dropped stone falls with constant acceleration down towards the Earth. However, Newton explains that when a stone is thrown horizontally (meaning sideways or perpendicular to Earth's gravity) it follows a curved path. "For a stone projected is by the pressure of its own weight forced out of the rectilinear path, which by the projection alone it should have pursued, and made to describe a curve line in the air; and through that crooked way is at last brought down to the ground. And the greater the velocity is with which it is projected, the farther it goes before it falls to the Earth." Newton further reasons that if an object were "projected in an horizontal direction from the top of a high mountain" with sufficient velocity, "it would reach at last quite beyond the circumference of the Earth, and return to the mountain from which it was projected."
Universal gravitational mass
In contrast to earlier theories (e.g. celestial spheres) which stated that the heavens were made of entirely different material, Newton's theory of mass was groundbreaking partly because it introduced universal gravitational mass: every object has gravitational mass, and therefore, every object generates a gravitational field. Newton further assumed that the strength of each object's gravitational field would decrease according to the square of the distance to that object. If a large collection of small objects were formed into a giant spherical body such as the Earth or Sun, Newton calculated the collection would create a gravitational field proportional to the total mass of the body, and inversely proportional to the square of the distance to the body's center.
For example, according to Newton's theory of universal gravitation, each carob seed produces a gravitational field. Therefore, if one were to gather an immense number of carob seeds and form them into an enormous sphere, then the gravitational field of the sphere would be proportional to the number of carob seeds in the sphere. Hence, it should be theoretically possible to determine the exact number of carob seeds that would be required to produce a gravitational field similar to that of the Earth or Sun. In fact, by unit conversion it is a simple matter of abstraction to realize that any traditional mass unit can theoretically be used to measure gravitational mass.
Measuring gravitational mass in terms of traditional mass units is simple in principle, but extremely difficult in practice. According to Newton's theory, all objects produce gravitational fields and it is theoretically possible to collect an immense number of small objects and form them into an enormous gravitating sphere. However, from a practical standpoint, the gravitational fields of small objects are extremely weak and difficult to measure. Newton's books on universal gravitation were published in the 1680s, but the first successful measurement of the Earth's mass in terms of traditional mass units, the Cavendish experiment, did not occur until 1797, over a hundred years later. Henry Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. As of 2009, the Earth's mass in kilograms is only known to around five digits of accuracy, whereas its gravitational mass is known to over nine significant figures.
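Cavendish's density result can be turned into a mass estimate by combining it with the Earth's radius; the sketch below uses rounded assumed values for the radius and the density of water.

```python
# Earth's mass from Cavendish's relative density and the Earth's radius.
import math

relative_density = 5.448          # Cavendish: times the density of water
rho = relative_density * 1000.0   # kg/m^3, taking water as 1000 kg/m^3
R = 6.371e6                       # m, mean radius of the Earth (rounded)

M_earth = rho * (4.0 / 3.0) * math.pi * R**3
print(f"M_earth ≈ {M_earth:.2e} kg")   # ~5.9e24 kg
```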
Given two objects A and B, of masses MA and MB, separated by a displacement RAB, Newton's law of gravitation states that each object exerts a gravitational force on the other, of magnitude
|F| = G·MA·MB / |RAB|²,
where G is the universal gravitational constant. The above statement may be reformulated in the following way: if g is the magnitude of the gravitational field at a given location, then the gravitational force on an object with gravitational mass M is
F = Mg.
This is the basis by which masses are determined by weighing. In simple spring scales, for example, the force F is proportional to the displacement of the spring beneath the weighing pan, as per Hooke's law, and the scales are calibrated to take g into account, allowing the mass M to be read off. Assuming the gravitational field is equivalent on both sides of the balance, a balance measures relative weight, giving the relative gravitational mass of each object.
Inertial mass
Mass was traditionally believed to be a measure of the quantity of matter in a physical body, equal to the "amount of matter" in an object. For example, Barré de Saint-Venant argued in 1851 that every object contains a number of "points" (basically, interchangeable elementary particles), and that mass is proportional to the number of points the object contains. (In practice, this "amount of matter" definition is adequate for most of classical mechanics, and sometimes remains in use in basic education, if the priority is to teach the difference between mass and weight.) This traditional "amount of matter" belief was contradicted by the fact that different atoms (and, later, different elementary particles) can have different masses, and was further contradicted by Einstein's theory of relativity (1905), which showed that the measurable mass of an object increases when energy is added to it (for example, by increasing its temperature or forcing it near an object that electrically repels it.) This motivates a search for a different definition of mass that is more accurate than the traditional definition of "the amount of matter in an object".
Inertial mass is the mass of an object measured by its resistance to acceleration. This definition has been championed by Ernst Mach and has since been developed into the notion of operationalism by Percy W. Bridgman. The simple classical mechanics definition of mass differs slightly from the definition in the theory of special relativity, but the essential meaning is the same.
In classical mechanics, according to Newton's second law, we say that a body has a mass m if, at any instant of time, it obeys the equation of motion
F = ma,
where F is the resultant force acting on the body and a is the acceleration of the body's centre of mass. For the moment, we will put aside the question of what "force acting on the body" actually means.
This equation illustrates how mass relates to the inertia of a body. Consider two objects with different masses. If we apply an identical force to each, the object with a bigger mass will experience a smaller acceleration, and the object with a smaller mass will experience a bigger acceleration. We might say that the larger mass exerts a greater "resistance" to changing its state of motion in response to the force.
However, this notion of applying "identical" forces to different objects brings us back to the fact that we have not really defined what a force is. We can sidestep this difficulty with the help of Newton's third law, which states that if one object exerts a force on a second object, the second object exerts an equal and opposite force on the first. To be precise, suppose we have two objects of constant inertial masses m1 and m2. We isolate the two objects from all other physical influences, so that the only forces present are the force exerted on m1 by m2, which we denote F12, and the force exerted on m2 by m1, which we denote F21. Newton's second law states that
F12 = m1a1 and F21 = m2a2,
where a1 and a2 are the accelerations of m1 and m2, respectively. Suppose that these accelerations are non-zero, so that the forces between the two objects are non-zero. This occurs, for example, if the two objects are in the process of colliding with one another. Newton's third law then states that
F12 = −F21,
and thus
m1a1 = −m2a2.
If |a1| is non-zero, the fraction |a2|/|a1| is well-defined, which allows us to measure the inertial mass of m1 as m1 = m2·|a2|/|a1|. In this case, m2 is our "reference" object, and we can define its mass as (say) 1 kilogram. Then we can measure the mass of any other object in the universe by colliding it with the reference object and measuring the accelerations.
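A toy numerical version of this procedure (forces and masses are made-up values): during the interaction both bodies feel forces of equal magnitude, so the unknown mass follows from the ratio of the measured accelerations.

```python
# Mach-style mass comparison against a 1 kg reference object.
m_ref = 1.0        # kg, the "reference" object m2
m_unknown = 2.5    # kg, pretend this is not known

F = 10.0                         # N, magnitude of the mutual force (Newton's third law)
a_unknown = F / m_unknown        # |a1|
a_ref = F / m_ref                # |a2|, directed the opposite way

m_measured = m_ref * a_ref / a_unknown   # m1 = m2 * |a2| / |a1|
print(f"measured mass = {m_measured} kg")   # recovers 2.5 kg
```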
Additionally, mass relates a body's momentum p to its linear velocity v:
p = mv,
and the body's kinetic energy K to its velocity:
K = ½mv².
The primary difficulty with Mach's definition of mass is that it fails to take into account the potential energy (or binding energy) needed to bring two masses sufficiently close to one another to perform the measurement of mass. This is most vividly demonstrated by comparing the mass of the proton in the nucleus of deuterium to the mass of the proton in free space (which is greater by about 0.239%; this is due to the binding energy of deuterium). Thus, for example, if the reference mass m2 is taken to be the mass of the neutron in free space, and the relative accelerations for the proton and neutron in deuterium are computed, then the above formula over-estimates the mass m1 (by 0.239%) for the proton in deuterium. At best, Mach's formula can only be used to obtain ratios of masses, that is, as m1/m2 = |a2|/|a1|. An additional difficulty was pointed out by Henri Poincaré, which is that the measurement of instantaneous acceleration is impossible: unlike the measurement of time or distance, there is no way to measure acceleration with a single measurement; one must make multiple measurements (of position, time, etc.) and perform a computation to obtain the acceleration. Poincaré termed this to be an "insurmountable flaw" in the Mach definition of mass.
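The deuterium figure can be checked from standard rounded rest energies; the values below are assumptions taken from common tables, not from this article:

```python
# Binding energy of the deuteron and its size relative to the proton mass.
m_p = 938.272      # MeV/c^2, proton
m_n = 939.565      # MeV/c^2, neutron
m_d = 1875.613     # MeV/c^2, deuteron

binding = m_p + m_n - m_d
print(f"binding energy ≈ {binding:.3f} MeV")                 # ~2.22 MeV
print(f"fraction of the proton mass: {binding / m_p:.3%}")   # ~0.24%, cf. the 0.239% above
```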
Atomic masses
Typically, the mass of objects is measured in terms of the kilogram, which since 2019 is defined in terms of fundamental constants of nature. The mass of an atom or other particle can be compared more precisely and more conveniently to that of another atom, and thus scientists developed the dalton (also known as the unified atomic mass unit). By definition, 1 Da (one dalton) is exactly one-twelfth of the mass of a carbon-12 atom, and thus, a carbon-12 atom has a mass of exactly 12 Da.
In relativity
Special relativity
In some frameworks of special relativity, physicists have used different definitions of the term. In these frameworks, two kinds of mass are defined: rest mass (invariant mass), and relativistic mass (which increases with velocity). Rest mass is the Newtonian mass as measured by an observer moving along with the object. Relativistic mass is the total quantity of energy in a body or system divided by c². The two are related by the following equation:
m_relative = γ·m_rest,
where γ is the Lorentz factor:
γ = 1/√(1 − v²/c²).
The invariant mass of systems is the same for observers in all inertial frames, while the relativistic mass depends on the observer's frame of reference. In order to formulate the equations of physics such that mass values do not change between observers, it is convenient to use rest mass. The rest mass of a body is also related to its energy E and the magnitude of its momentum p by the relativistic energy–momentum equation:
(m_rest·c²)² = E² − (pc)².
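A minimal numerical sketch of these relations, using the electron rest mass and an arbitrary speed of 0.8c as assumed example values:

```python
# Rest mass, relativistic mass and the energy-momentum relation for one particle.
import math

c = 2.998e8           # m/s
m0 = 9.109e-31        # kg, electron rest mass (example value)
v = 0.8 * c

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)   # Lorentz factor, 1.667 for v = 0.8c
m_rel = gamma * m0                           # "relativistic mass"
E = m_rel * c**2                             # total energy
p = gamma * m0 * v                           # relativistic momentum

# invariant check: (m0 c^2)^2 should equal E^2 - (p c)^2
print(f"gamma = {gamma:.3f}")
print(f"{(m0 * c**2)**2:.4e}  vs  {E**2 - (p * c)**2:.4e}")   # equal up to rounding
```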
So long as the system is closed with respect to mass and energy, both kinds of mass are conserved in any given frame of reference. The conservation of mass holds even as some types of particles are converted to others. Matter particles (such as atoms) may be converted to non-matter particles (such as photons of light), but this does not affect the total amount of mass or energy. Although things like heat may not be matter, all types of energy still continue to exhibit mass. Thus, mass and energy do not change into one another in relativity; rather, both are names for the same thing, and neither mass nor energy appear without the other.
Both rest and relativistic mass can be expressed as an energy by applying the well-known relationship E = mc², yielding rest energy and "relativistic energy" (total system energy) respectively:
E_rest = m_rest·c² and E_total = m_relative·c².
The "relativistic" mass and energy concepts are related to their "rest" counterparts, but they do not have the same value as their rest counterparts in systems where there is a net momentum. Because the relativistic mass is proportional to the energy, it has gradually fallen into disuse among physicists. There is disagreement over whether the concept remains useful pedagogically.
In bound systems, the binding energy must often be subtracted from the mass of the unbound system, because binding energy commonly leaves the system at the time it is bound. The mass of the system changes in this process merely because the system was not closed during the binding process, so the energy escaped. For example, the binding energy of atomic nuclei is often lost in the form of gamma rays when the nuclei are formed, leaving nuclides which have less mass than the free particles (nucleons) of which they are composed.
Mass–energy equivalence also holds in macroscopic systems. For example, if one takes exactly one kilogram of ice, and applies heat, the mass of the resulting melt-water will be more than a kilogram: it will include the mass from the thermal energy (latent heat) used to melt the ice; this follows from the conservation of energy. This number is small but not negligible: about 3.7 nanograms. It is given by the latent heat of melting ice (334 kJ/kg) divided by the speed of light squared (c² ≈ 9.0 × 10¹⁶ m²/s²).
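The 3.7 nanogram figure follows directly from the quoted latent heat; a short check with a rounded speed of light:

```python
# Mass gained by one kilogram of ice on melting: latent heat / c^2.
latent_heat = 334e3      # J/kg, latent heat of fusion of ice
c = 2.998e8              # m/s

delta_m_kg = latent_heat / c**2
print(f"extra mass ≈ {delta_m_kg * 1e12:.2f} ng per kg of melted ice")   # ~3.7 ng (1 ng = 1e-12 kg)
```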
General relativity
In general relativity, the equivalence principle is the equivalence of gravitational and inertial mass. At the core of this assertion is Albert Einstein's idea that the gravitational force as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (i.e. accelerated) frame of reference.
However, it turns out that it is impossible to find an objective general definition for the concept of invariant mass in general relativity. At the core of the problem is the non-linearity of the Einstein field equations, making it impossible to write the gravitational field energy as part of the stress–energy tensor in a way that is invariant for all observers. For a given observer, this can be achieved by the stress–energy–momentum pseudotensor.
In quantum physics
In classical mechanics, the inertial mass of a particle appears in the Euler–Lagrange equation as a parameter m:
m (d²xᵢ/dt²) = ∂L/∂xᵢ.
After quantization, replacing the position vector x with a wave function, the parameter m appears in the kinetic energy operator:
−(ħ²/2m)∇².
In the ostensibly covariant (relativistically invariant) Dirac equation, and in natural units, this becomes:
(iγ^μ∂_μ − m)ψ = 0,
where the "mass" parameter m is now simply a constant associated with the quantum described by the wave function ψ.
In the Standard Model of particle physics as developed in the 1960s, this term arises from the coupling of the field ψ to an additional field Φ, the Higgs field. In the case of fermions, the Higgs mechanism results in the replacement of the term mψ in the Lagrangian with the Yukawa coupling term Gψψ̄Φψ. This shifts the explanandum of the value for the mass of each elementary particle to the value of the unknown coupling constant Gψ.
Tachyonic particles and imaginary (complex) mass
A tachyonic field, or simply tachyon, is a quantum field with an imaginary mass. Although tachyons (particles that move faster than light) are a purely hypothetical concept not generally believed to exist, fields with imaginary mass have come to play an important role in modern physics and are discussed in popular books on physics. Under no circumstances do any excitations ever propagate faster than light in such theories—the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality). While the field may have imaginary mass, any physical particles do not; the "imaginary mass" shows that the system becomes unstable, and sheds the instability by undergoing a type of phase transition called tachyon condensation (closely related to second order phase transitions) that results in symmetry breaking in current models of particle physics.
The term "tachyon" was coined by Gerald Feinberg in a 1967 paper, but it was soon realized that Feinberg's model in fact did not allow for superluminal speeds. Instead, the imaginary mass creates an instability in the configuration: any configuration in which one or more field excitations are tachyonic will spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation. Well known examples include the condensation of the Higgs boson in particle physics, and ferromagnetism in condensed matter physics.
Although the notion of a tachyonic imaginary mass might seem troubling because there is no classical interpretation of an imaginary mass, the mass is not quantized. Rather, the scalar field is; even for tachyonic quantum fields, the field operators at spacelike separated points still commute (or anticommute), thus preserving causality. Therefore, information still does not propagate faster than light, and solutions grow exponentially, but not superluminally (there is no violation of causality). Tachyon condensation drives a physical system that has reached a local limit and might naively be expected to produce physical tachyons, to an alternate stable state where no physical tachyons exist. Once the tachyonic field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles with a positive mass-squared.
This is a special case of the general rule, where unstable massive particles are formally described as having a complex mass, with the real part being their mass in the usual sense, and the imaginary part being the decay rate in natural units. However, in quantum field theory, a particle (a "one-particle state") is roughly defined as a state which is constant over time; i.e., an eigenvalue of the Hamiltonian. An unstable particle is a state which is only approximately constant over time; If it exists long enough to be measured, it can be formally described as having a complex mass, with the real part of the mass greater than its imaginary part. If both parts are of the same magnitude, this is interpreted as a resonance appearing in a scattering process rather than a particle, as it is considered not to exist long enough to be measured independently of the scattering process. In the case of a tachyon, the real part of the mass is zero, and hence no concept of a particle can be attributed to it.
In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons. In particular the energy–momentum relation:
E² = p²c² + m²c⁴
(where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle:
E = mc² / √(1 − v²/c²).
This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy.
When v is larger than c, the denominator in the equation for the energy is "imaginary", as the value under the radical is negative. Because the total energy must be real, the numerator must also be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number.
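A small sketch with complex arithmetic, in natural units with assumed illustrative numbers, showing that for v > c the radical is imaginary, so a real total energy forces the rest mass to be imaginary:

```python
# For v > c, sqrt(1 - v^2/c^2) is imaginary; E is real only if m is imaginary too.
import cmath

c = 1.0                    # natural units
v = 2.0 * c                # hypothetical superluminal speed
radical = cmath.sqrt(1 - v**2 / c**2)   # sqrt(-3) = ~1.732j, purely imaginary

m = 1j                     # an imaginary "rest mass" (illustrative)
E = m * c**2 / radical     # comes out real
print(radical, E)          # 1.732...j  (0.577...+0j)
```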
See also
Mass versus weight
Effective mass (spring–mass system)
Effective mass (solid-state physics)
Extension (metaphysics)
International System of Quantities
2019 revision of the SI base units
Notes
References
External links
Jim Baggott (27 September 2017). The Concept of Mass (video) published by the Royal Institution on YouTube.
Physical quantities
SI base quantities
Moment (physics)
Extensive quantities | Mass | Physics,Chemistry,Mathematics | 9,260 |
75,363,733 | https://en.wikipedia.org/wiki/MEDI6570 | MEDI6570 is an antibody of the lectin-like oxidized low-density lipoprotein receptor-1 (LOX-1) that is being tested in people with type 2 diabetes to see if it reduces their risk of cardiovascular disease. The drug is developed by AstraZeneca.
References
Drugs developed by AstraZeneca | MEDI6570 | Chemistry | 74 |
1,823,144 | https://en.wikipedia.org/wiki/Scanning%20transmission%20electron%20microscopy | A scanning transmission electron microscope (STEM) is a type of transmission electron microscope (TEM). Pronunciation is [stɛm] or [ɛsti:i:ɛm]. As with a conventional transmission electron microscope (CTEM), images are formed by electrons passing through a sufficiently thin specimen. However, unlike CTEM, in STEM the electron beam is focused to a fine spot (with the typical spot size 0.05 – 0.2 nm) which is then scanned over the sample in a raster illumination system constructed so that the sample is illuminated at each point with the beam parallel to the optical axis. The rastering of the beam across the sample makes STEM suitable for analytical techniques such as Z-contrast annular dark-field imaging, and spectroscopic mapping by energy dispersive X-ray (EDX) spectroscopy, or electron energy loss spectroscopy (EELS). These signals can be obtained simultaneously, allowing direct correlation of images and spectroscopic data.
A typical STEM is a conventional transmission electron microscope equipped with additional scanning coils, detectors, and necessary circuitry, which allows it to switch between operating as a STEM, or a CTEM; however, dedicated STEMs are also manufactured.
High-resolution scanning transmission electron microscopes require exceptionally stable room environments. In order to obtain atomic resolution images in STEM, the level of vibration, temperature fluctuations, electromagnetic waves, and acoustic waves must be limited in the room housing the microscope.
History
The first STEM was built in 1938 by Baron Manfred von Ardenne, working in Berlin for Siemens. However, at the time the results were inferior to those of transmission electron microscopy, and von Ardenne only spent two years working on the problem. The microscope was destroyed in an air raid in 1944, and von Ardenne did not return to his work after World War II.
The technique was not developed further until the 1970s, when Albert Crewe at the University of Chicago developed the field emission gun and added a high-quality objective lens to create a modern STEM. He demonstrated the ability to image atoms using an annular dark field detector. Crewe and coworkers at the University of Chicago developed the cold field emission electron source and built a STEM able to visualize single heavy atoms on thin carbon substrates.
By the late 1980s and early 1990s, improvements in STEM technology allowed for samples to be imaged with better than 2 Å resolution, meaning that atomic structure could be imaged in some materials.
Aberration correction
The addition of an aberration corrector to STEMs enables electron probes to be focused to sub-angstrom diameters, allowing images with sub-angstrom resolution to be acquired. This has made it possible to identify individual atomic columns with unprecedented clarity.
Aberration-corrected STEM was demonstrated with 1.9 Å resolution in 1997 and soon after in 2000 with roughly 1.36 Å resolution. Advanced aberration-corrected STEMs have since been developed with sub-50 pm resolution.
Aberration-corrected STEM provides the added resolution and beam current critical to the implementation of atomic resolution chemical and elemental spectroscopic mapping.
STEM detectors and imaging modes
Annular dark-field
In annular dark-field mode, images are formed by forward-scattered electrons incident on an annular detector, which lies outside of the path of the directly transmitted beam.
By using a high-angle ADF detector, it is possible to form atomic resolution images where the contrast of an atomic column is directly related to the atomic number (Z-contrast image). Directly interpretable Z-contrast imaging makes STEM imaging with a high-angle detector an appealing technique in contrast to conventional high-resolution electron microscopy, in which phase-contrast effects mean that atomic resolution images must be compared to simulations to aid interpretation.
Bright-field
In STEM, bright-field detectors are located in the path of the transmitted electron beam. Axial bright-field detectors are located in the centre of the cone of illumination of the transmitted beam, and are often used to provide complementary images to those obtained by ADF imaging. Annular bright-field detectors, located within the cone of illumination of the transmitted beam, have been used to obtain atomic resolution images in which the atomic columns of light elements such as oxygen are visible.
Differential phase contrast
Differential phase contrast (DPC) is an imaging mode which relies on the beam being deflected by electromagnetic fields. In the classical case, the fast electrons in the electron beam are deflected by the Lorentz force, as shown schematically for a magnetic field in the figure to the left. A fast electron with charge −e passing through an electric field E and a magnetic field B experiences a force F:
F = −e(E + v × B).
For a magnetic field, this can be expressed as the amount of beam deflection experienced by the electron, βL:
βL = (eλ/h) ∫ B⊥ dl,
where λ is the wavelength of the electron, h the Planck constant and ∫ B⊥ dl the integrated magnetic induction along the trajectory of the electron. This last term reduces to B⊥t when the electron beam is perpendicular to a sample of thickness t with constant in-plane magnetic induction of magnitude B⊥. The beam deflection can then be imaged on a segmented or pixelated detector. This can be used to image magnetic and electric fields in materials. While the beam deflection mechanism through the Lorentz force is the most intuitive way of understanding DPC, a quantum mechanical approach is necessary to understand the phase-shift generated by the electromagnetic fields through the Aharonov–Bohm effect.
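Using the deflection expression above, an order-of-magnitude estimate for a thin magnetic film can be sketched as follows; the electron wavelength, induction and thickness are all assumed illustrative values:

```python
# Order-of-magnitude DPC deflection: beta = (e * lambda / h) * B_perp * t.
e = 1.602e-19            # C
h = 6.626e-34            # J s
wavelength = 1.97e-12    # m, electron wavelength at roughly 300 kV (assumed)
B_perp = 1.0             # T, in-plane magnetic induction (assumed)
t = 50e-9                # m, sample thickness (assumed)

beta = (e * wavelength / h) * B_perp * t
print(f"deflection angle ≈ {beta * 1e6:.1f} microradians")   # a few tens of microradians
```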
Imaging most ferromagnetic materials requires the current in the objective lens of the STEM to be reduced to almost zero. This is because the sample resides inside the magnetic field of the objective lens, which can be several tesla and would destroy the magnetic domain structure of most ferromagnetic materials. However, turning the objective lens almost off drastically increases the amount of aberrations in the STEM probe, leading to an increase in the probe size and a reduction in resolution. By using a probe aberration corrector it is possible to get a resolution of 1 nm.
Universal detectors
Recently, detectors have been developed for STEM that can record a complete convergent-beam electron diffraction pattern of all scattered and unscattered electrons at every pixel in a scan of the sample in a large four-dimensional dataset (a 2D diffraction pattern recorded at every 2D probe position). Due to the four-dimensional nature of the datasets, the term "4D STEM" has become a common name for this technique. The 4D datasets generated using the technique can be analyzed to reconstruct images equivalent to those of any conventional detector geometry, and can be used to map fields in the sample at high spatial resolution, including information about strain and electric fields. The technique can also be used to perform ptychography.
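A minimal sketch of the "virtual detector" idea behind 4D STEM analysis: given a four-dimensional dataset indexed as (scan y, scan x, ky, kx), bright-field and annular dark-field images can be synthesised by masking each diffraction pattern. The array below is random and purely hypothetical, and the mask radii are arbitrary assumptions.

```python
# Virtual bright-field and annular dark-field images from a (hypothetical) 4D STEM dataset.
import numpy as np

data = np.random.rand(64, 64, 128, 128)      # (scan_y, scan_x, ky, kx), stand-in data

ky, kx = np.indices(data.shape[2:])
r = np.hypot(ky - 63.5, kx - 63.5)           # distance from the pattern centre, in pixels

bf_mask = r < 10                             # central disc  -> virtual bright field
adf_mask = (r > 30) & (r < 60)               # annulus       -> virtual annular dark field

virtual_bf = data[:, :, bf_mask].sum(axis=-1)    # one value per probe position
virtual_adf = data[:, :, adf_mask].sum(axis=-1)
print(virtual_bf.shape, virtual_adf.shape)       # (64, 64) (64, 64)
```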
Spectroscopy in STEM
Electron energy loss spectroscopy
As the electron beam passes through the sample, some electrons in the beam lose energy via inelastic scattering interactions with electrons in the sample. In electron energy loss spectroscopy (EELS), the energy lost by the electrons in the beam is measured using an electron spectrometer, allowing features such as plasmons, and elemental ionization edges to be identified. Energy resolution in EELS is sufficient to allow the fine structure of ionization edges to be observed, which means that EELS can be used for chemical mapping, as well as elemental mapping. In STEM, EELS can be used to spectroscopically map a sample at atomic resolution. Recently developed monochromators can achieve an energy resolution of ~10 meV in EELS, allowing vibrational spectra to be acquired in STEM.
Energy-dispersive X-ray spectroscopy
In energy-dispersive X-ray spectroscopy (EDX) or (EDXS), which is also referred to in the literature as X-ray energy dispersive spectroscopy (EDS) or (XEDS), an X-ray spectrometer is used to detect the characteristic X-rays that are emitted by atoms in the sample as they are ionized by electrons in the beam. In STEM, EDX is typically used for compositional analysis and elemental mapping of samples. Typical X-ray detectors for electron microscopes cover only a small solid angle, which makes X-ray detection relatively inefficient since X-rays are emitted from the sample in every direction. However, detectors covering large solid angles have been recently developed, and atomic resolution X-ray mapping has even been achieved.
Convergent-beam electron diffraction
Convergent-beam electron diffraction (CBED) is a STEM technique that provides information about crystal structure at a specific point in a sample. In CBED, the width of the area a diffraction pattern is acquired from is equal to the size of the probe used, which can be smaller than 1 Å in an aberration-corrected STEM (see above). CBED differs from conventional electron diffraction in that CBED patterns consist of diffraction disks, rather than spots. The width of CBED disks is determined by the convergence angle of the electron beam. Other features, such as Kikuchi lines are often visible in CBED patterns. CBED can be used to determine the point and space groups of a specimen.
Quantitative scanning transmission electron microscopy (QSTEM)
Electron microscopy has accelerated research in materials science by quantifying properties and features from nanometer-resolution imaging with STEM, which is crucial in observing and confirming factors, such as thin film deposition, crystal growth, surface structure formation, and dislocation movement. Until recently, most papers have inferred the properties and behaviors of material systems based on these images without being able to establish rigorous rules for what exactly is observed. The techniques that have emerged as a result of interest in quantitative scanning transmission electron microscopy (QSTEM) close this gap by allowing researchers to identify and quantify structural features that are only visible using high-resolution imaging in a STEM. Widely available image processing techniques are applied to high-angle annular dark field (HAADF) images of atomic columns to precisely locate their positions and the material's lattice constant(s). This approach has been successfully used to quantify structural properties, such as strain and bond angle, at interfaces and defect complexes. QSTEM now allows researchers to compare the experimental data to theoretical simulations both qualitatively and quantitatively. Recently published studies have shown that QSTEM can measure structural properties, such as interatomic distances, lattice distortions from point defects, and locations of defects within an atomic column, with high accuracy. QSTEM can also be applied to selected area diffraction patterns and convergent beam diffraction patterns to quantify the degree and types of symmetry present in a specimen. Since any materials research requires structure-property relationship studies, this technique is applicable to countless fields. A notable study is the mapping of atomic column intensities and interatomic bond angles in a Mott-insulator system. This was the first study to show that the transition from the insulating to conducting state was due to a slight global decrease in distortion, which was concluded by mapping the interatomic bond angles as a function of the dopant concentration. This effect is not visible to the human eye in a standard atomic-scale image enabled by HAADF imaging, thus this important finding was only made possible due to the application of QSTEM.
QSTEM analysis can be achieved using commonplace software and programming languages, such as MATLAB or Python, with the help of toolboxes and plug-ins that serve to expedite the process. This is analysis that can be performed virtually anywhere. Consequently, the largest roadblock is acquiring a high-resolution, aberration-corrected scanning transmission electron microscope that can provide the images necessary to provide accurate quantification of structural properties at the atomic level. Most university research groups, for example, require permission to use such high-end electron microscopes at national lab facilities, which requires excessive time commitment. Universal challenges mainly involve becoming accustomed to the programming language desired and writing software that can tackle the very specific problems for a given material system. For example, one can imagine how a different analysis technique, and thus a separate image processing algorithm, is necessary for studying ideal cubic versus complex monoclinic structures.
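As a very rough sketch of the column-finding step (not a validated QSTEM workflow), a synthetic HAADF-like image can be peak-searched with a local-maximum filter and the mean nearest-neighbour spacing used as a crude lattice-constant estimate; the lattice, filter size and threshold below are all assumed values.

```python
# Locate atomic-column peaks in a synthetic HAADF-like image and estimate their spacing.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from scipy.spatial import cKDTree

image = np.zeros((200, 200))                      # build a fake square lattice of "columns"
for y in range(20, 200, 40):
    for x in range(20, 200, 40):
        image[y, x] = 1.0
image = gaussian_filter(image, sigma=3) + 0.001 * np.random.rand(200, 200)

local_max = maximum_filter(image, size=15) == image
peaks = np.argwhere(local_max & (image > 0.5 * image.max()))
print(f"found {len(peaks)} columns")              # 25 for this 5 x 5 lattice

d, _ = cKDTree(peaks).query(peaks, k=2)           # distance to the nearest other peak
print(f"mean column spacing ≈ {d[:, 1].mean():.1f} px")   # ~40 px, the lattice period used above
```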
Other STEM techniques
Specialized sample holders or modifications to the microscope can allow a number of additional techniques to be performed in STEM. Some examples are described below.
STEM tomography
STEM tomography allows the complete three-dimensional internal and external structure of a specimen to be reconstructed from a tilt-series of 2D projection images of the specimen acquired at incremental tilts. High angle ADF STEM is a particularly useful imaging mode for electron tomography because the intensity of high angle ADF-STEM images varies only with the projected mass-thickness of the sample, and the atomic number of atoms in the sample. This yields highly interpretable three dimensional reconstructions.
Cryo-STEM
Cryogenic electron microscopy in STEM (Cryo-STEM) allows specimens to be held in the microscope at liquid nitrogen or liquid helium temperatures. This is useful for imaging specimens that would be volatile in high vacuum at room temperature. Cryo-STEM has been used to study vitrified biological samples, vitrified solid-liquid interfaces in material specimens, and specimens containing elemental sulfur, which is prone to sublimation in electron microscopes at room temperature.
In situ/environmental STEM
In order to study the reactions of particles in gaseous environments, a STEM may be modified with a differentially pumped sample chamber to allow gas flow around the sample, whilst a specialized holder is used to control the reaction temperature. Alternatively a holder mounted with an enclosed gas flow cell may be used.
Nanoparticles and biological cells have been studied in liquid environments using liquid-phase electron microscopy in STEM, accomplished by mounting a microfluidic enclosure in the specimen holder.
Low-voltage STEM
A low-voltage electron microscope (LVEM) is an electron microscope that is designed to operate at relatively low electron accelerating voltages of between 0.5 and 30 kV. Some LVEMs can function as an SEM, a TEM, and a STEM in a single compact instrument. Using a low beam voltage increases image contrast which is especially important for biological specimens. This increase in contrast significantly reduces, or even eliminates the need to stain biological samples. Resolutions of a few nm are possible in TEM, SEM and STEM modes. The low energy of the electron beam means that permanent magnets can be used as lenses and thus a miniature column that does not require cooling can be used.
See also
Electron diffraction
Energy filtered transmission electron microscopy (EFTEM)
High-resolution transmission electron microscopy (HRTEM)
Scanning confocal electron microscopy (SCEM)
Scanning electron microscope (SEM)
Transmission electron microscopy (TEM)
4D scanning transmission electron microscopy (4D STEM)
References
Electron beam
Electron microscopy techniques | Scanning transmission electron microscopy | Chemistry | 3,031 |
3,039,253 | https://en.wikipedia.org/wiki/Torsion%20%28mechanics%29 | In the field of solid mechanics, torsion is the twisting of an object due to an applied torque. Torsion could be defined as strain or angular deformation, and is measured by the angle a chosen section is rotated from its equilibrium position. The resulting stress (torsional shear stress) is expressed in either the pascal (Pa), an SI unit for newtons per square metre, or in pounds per square inch (psi) while torque is expressed in newton metres (N·m) or foot-pound force (ft·lbf). In sections perpendicular to the torque axis, the resultant shear stress in this section is perpendicular to the radius.
In non-circular cross-sections, twisting is accompanied by a distortion called warping, in which transverse sections do not remain plane. For shafts of uniform cross-section unrestrained against warping, the torsion-related physical properties are expressed as:
T/JT = τ/r = Gφ/ℓ,
where:
T is the applied torque or moment of torsion in Nm.
τ (tau) is the maximum shear stress at the outer surface
JT is the torsion constant for the section. For circular rods, and tubes with constant wall thickness, it is equal to the polar moment of inertia of the section, but for other shapes, or split sections, it can be much less. For more accuracy, finite element analysis (FEA) is the best method. Other calculation methods include membrane analogy and shear flow approximation.
r is the perpendicular distance between the rotational axis and the farthest point in the section (at the outer surface).
ℓ is the length of the object to or over which the torque is being applied.
φ (phi) is the angle of twist in radians.
G is the shear modulus, also called the modulus of rigidity, and is usually given in gigapascals (GPa), lbf/in2 (psi), or lbf/ft2 or in ISO units N/mm2.
The product JTG is called the torsional rigidity wT.
Properties
The shear stress at a point within a shaft is:
τ(r) = T r / J_T
where r here is the radial distance from the axis of rotation to the point considered.
Note that the highest shear stress occurs on the surface of the shaft, where the radius is maximum. High stresses at the surface may be compounded by stress concentrations such as rough spots. Thus, shafts for use in high torsion are polished to a fine surface finish to reduce the maximum stress in the shaft and increase their service life.
The angle of twist can be found by using:
φ = T ℓ / (J_T G)
Sample calculation
Calculation of the steam turbine shaft radius for a turboset:
Assumptions:
Power carried by the shaft is 1000 MW; this is typical for a large nuclear power plant.
Yield stress of the steel used to make the shaft (τ_yield) is 250 × 10⁶ N/m².
Electricity has a frequency of 50 Hz; this is the typical frequency in Europe. In North America, the frequency is 60 Hz.
The angular frequency can be calculated with the following formula:
ω = 2πf
The torque carried by the shaft is related to the power by the following equation:
P = T ω
The angular frequency is therefore 314.16 rad/s and the torque 3.1831 × 10⁶ N·m.
The maximal torque is:
T_max = τ_max J_T / r
After substituting the torsion constant of a solid circular shaft, J_T = π r⁴ / 2, the following expression for the diameter is obtained:
D = (16 T / (π τ_max))^(1/3)
The diameter is 40 cm. If one adds a factor of safety of 5 and re-calculates the radius with the maximum stress equal to the yield stress/5, the result is a diameter of 69 cm, the approximate size of a turboset shaft in a nuclear power plant.
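For illustration, the sample calculation above can be reproduced numerically. The following Python sketch assumes a solid circular shaft (so that J_T = πr⁴/2); the function name and structure are illustrative rather than taken from the article:

```python
import math

def shaft_diameter(power_w, frequency_hz, max_shear_pa):
    """Diameter of a solid circular shaft transmitting power_w at frequency_hz
    without exceeding max_shear_pa, using tau = T*r/J_T with J_T = pi*r**4/2."""
    omega = 2 * math.pi * frequency_hz            # angular frequency in rad/s
    torque = power_w / omega                      # P = T * omega, so T = P / omega
    # tau = T*r / (pi*r**4/2) = 2*T / (pi*r**3), solved for r:
    radius = (2 * torque / (math.pi * max_shear_pa)) ** (1 / 3)
    return 2 * radius

yield_stress = 250e6  # Pa
print(shaft_diameter(1000e6, 50, yield_stress))      # ~0.40 m
print(shaft_diameter(1000e6, 50, yield_stress / 5))  # safety factor of 5 -> ~0.69 m
```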
Failure mode
The shear stress in the shaft may be resolved into principal stresses via Mohr's circle. If the shaft is loaded only in torsion, then one of the principal stresses will be in tension and the other in compression. These stresses are oriented at a 45-degree helical angle around the shaft. If the shaft is made of brittle material, then the shaft will fail by a crack initiating at the surface and propagating through to the core of the shaft, fracturing in a 45-degree angle helical shape. This is often demonstrated by twisting a piece of blackboard chalk between one's fingers.
In the case of thin hollow shafts, a twisting buckling mode can result from excessive torsional load, with wrinkles forming at 45° to the shaft axis.
See also
List of area moments of inertia
Saint-Venant's theorem
Second moment of area
Structural rigidity
Torque tester
Torsion siege engine
Torsion spring or -bar
Torsional vibration
References
External links
Mechanics
Torque
Moment (physics) | Torsion (mechanics) | Physics,Mathematics,Engineering | 929 |
30,873,703 | https://en.wikipedia.org/wiki/System%20under%20test | System under test (SUT) refers to a system that is being tested for correct operation. According to ISTQB it is the test object.
From a unit testing perspective, the system under test represents all of the classes in a test that are not predefined pieces of code such as stubs or mocks. Each of these can have its own configuration (a name and a version), allowing a series of tests to become progressively more precise according to the quality of the system under test.
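For illustration, the following minimal Python sketch (the OrderService class, its methods, and the payment-gateway dependency are hypothetical names, not taken from any standard) shows the distinction: the class being exercised is the system under test, while the dependency it calls is replaced by a mock and therefore lies outside the SUT.

```python
import unittest
from unittest import mock

class OrderService:
    """Hypothetical system under test: the class whose behaviour the test verifies."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return "confirmed" if self.gateway.charge(amount) else "declined"

class OrderServiceTest(unittest.TestCase):
    def test_successful_charge_confirms_order(self):
        gateway = mock.Mock()                # predefined test double, not part of the SUT
        gateway.charge.return_value = True
        sut = OrderService(gateway)          # the system under test
        self.assertEqual(sut.place_order(10), "confirmed")
        gateway.charge.assert_called_once_with(10)

if __name__ == "__main__":
    unittest.main()
```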
See also
Device under test
Test harness
References
External links
Test Procedure for §170.314(c) Clinical quality measures
Software quality
Systems engineering | System under test | Engineering | 140 |
26,726,634 | https://en.wikipedia.org/wiki/Importance%20of%20religion%20by%20country | This article charts a list of countries by importance of religion.
Methodology
The table below is based on the global Gallup Poll in 2009 research which asked "Is religion important in your daily life?". Percentages for "yes" and "no" answers are listed below; they often do not add up to 100% because some answered "don't know" or did not answer.
Countries/districts
See also
Demographics of atheism
Demographics of religion
Irreligion by country
List of religious populations
Religions by country
Religiosity and intelligence
Wealth and religion
References
Religion-related lists by country
Religion by country
Religious practices | Importance of religion by country | Biology | 127 |
71,308,064 | https://en.wikipedia.org/wiki/Corundum%20%28structure%29 | Corundum is the name for a structure prototype in inorganic solids, derived from the namesake polymorph of aluminum oxide (α-Al2O3). Other compounds, especially among the inorganic solids, exist in corundum structure, either in ambient or other conditions. Corundum structures are associated with metal-insulator transition, ferroelectricity, polar magnetism, and magnetoelectric effects.
Structure
The corundum structure has the space group R-3c (No. 167). It typically exists in binary compounds of the type A2B3, where A is metallic and B is nonmetallic, including sesquioxides (A2O3), sesquisulfides (A2S3), etc. When A is nonmetallic and B is metallic, the structure becomes the antiphase of corundum, called the anticorundum structure type, with examples including β-Ca3N2 and borates. Ternary and multinary compounds can also exist in the corundum structure. The corundum-like structure with the composition A2BB'O6 is called double corundum. A list of examples is tabulated below.
See also
Corundum
Ilmenite
Perovskite (structure)
References
Crystal structure types
Crystallography
Mineralogy | Corundum (structure) | Physics,Chemistry,Materials_science,Engineering | 277 |
34,756,609 | https://en.wikipedia.org/wiki/Mertens-stable%20equilibrium | In game theory, Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.
Like other refinements of Nash equilibrium used in game theory, stability selects subsets of the set of Nash equilibria that have desirable properties. Stability invokes stronger criteria than other refinements, and thereby ensures that more desirable properties are satisfied.
Desirable Properties of a Refinement
Refinements have often been motivated by arguments for admissibility, backward induction, and forward induction. In a two-player game, an admissible decision rule for a player is one that does not use any strategy that is weakly dominated by another (see Strategic dominance). Backward induction posits that a player's optimal action in any event anticipates that his and others' subsequent actions are optimal. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium. Forward induction posits that a player's optimal action in any event presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information to be reached.
Kohlberg and Mertens emphasized further that a solution concept should satisfy the invariance principle that it not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. Thus it should depend only on the reduced normal-form game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens emphasized also the importance of the small worlds principle that a solution concept should depend only on the ordinal properties of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs.
Kohlberg and Mertens demonstrated via examples that not all of these properties can be obtained from a solution concept that selects single Nash equilibria. Therefore, they proposed that a solution concept should select closed connected subsets of the set of Nash equilibria.
Properties of Stable Sets
Admissibility and Perfection: Each equilibrium in a stable set is perfect, and therefore admissible.
Backward Induction and Forward Induction: A stable set includes a proper equilibrium of the normal form of the game that induces a quasi-perfect and therefore a sequential equilibrium in every extensive-form game with perfect recall that has the same normal form. A subset of a stable set survives iterative elimination of weakly dominated strategies and strategies that are inferior replies at every equilibrium in the set.
Invariance and Small Worlds: The stable sets of a game are the projections of the stable sets of any larger game in which it is embedded while preserving the original players' feasible strategies and payoffs.
Decomposition and Player Splitting. The stable sets of the product of two independent games are the products of their stable sets. Stable sets are not affected by splitting a player into agents such that no path through the game tree includes actions of two agents.
For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game.
Definition of a Stable Set
A stable set is defined mathematically by essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This definition requires more than every nearby game having a nearby equilibrium. Essentiality requires further that no deformation of the projection maps to the boundary, which ensures that perturbations of the fixed point problem defining Nash equilibria have nearby solutions. This is apparently necessary to obtain all the desirable properties listed above.
Mertens provided several formal definitions depending on the coefficient module used for homology or cohomology.
A formal definition requires some notation. For a given game let be the product of the simplices of the players' mixed strategies. For each , let and let be its topological boundary. For let be the minimum probability of any pure strategy. For any define the perturbed game as the game where the strategy set of each player is the same as in , but where the payoff from a strategy profile is the payoff in from the profile . Say that is a perturbed equilibrium of if is an equilibrium of . Let be the graph of the perturbed equilibrium correspondence over , viz., the graph is the set of those pairs such that is a perturbed equilibrium of . For , is the corresponding equilibrium of . Denote by the natural projection map from to . For , let , and for let . Finally, refers to Čech cohomology with integer coefficients.
The following is a version of the most inclusive of Mertens' definitions, called *-stability.
Definition of a *-stable set: is a *-stable set if for some closed subset of with it has the following two properties:
Connectedness: For every neighborhood of in , the set has a connected component whose closure is a neighborhood of in .
Cohomological Essentiality: is nonzero for some .
If essentiality in cohomology or homology is relaxed to homotopy, then a weaker definition is obtained, which differs chiefly in a weaker form of the decomposition property.
References
Game theory equilibrium concepts
Non-cooperative games | Mertens-stable equilibrium | Mathematics | 1,220 |
67,815,712 | https://en.wikipedia.org/wiki/Fermat%27s%20Last%20Tango | Fermat's Last Tango is a 2000 off-Broadway musical about the proof of Fermat's Last Theorem, written by husband and wife Joshua Rosenblum (music, lyrics) and Joanne Sydney Lessner (book, lyrics). The musical presents a fictionalized version of the real life story of Andrew Wiles, and has been praised for the accuracy of the mathematical content. The original production at the York Theatre received mixed reviews, but the musical was well received by mathematical audiences. A video of the original production has been distributed by the Clay Mathematics Institute and shown at several mathematical conferences and similar occasions. The musical has also been translated into Portuguese.
Synopsis
The plot is based on the story of the proof of Fermat's Last Theorem by Andrew Wiles, whose name is changed to "Daniel Keane" in the musical. After seven years of isolation in his attic, Keane believes he has found a proof of the theorem. The musical starts with a press conference, where Keane explains his proof to reporters and promises to return to normal life with his wife Anna and his family. After promising to Anna that he is now "done with Fermat", Keane is surprised in his study by none other than Fermat himself. Keane asks Fermat for the secret of his proof but is refused. Instead, Fermat introduces him to the "Aftermath", a "heavenly purgatory" where he meets the famous mathematicians Euclid, Pythagoras, Newton, and Gauss. They inform him that his proof contains a "big fat hole". In a second press conference, Keane is questioned by reporters about a flaw in the proof. Anna wishes for a corrected proof for her birthday. Fermat mocks Keane, and the other mathematicians inform him that "mathematics is a young man's game".
Keane returns to his attic to try to fix his proof, while his "math widow" wife is frustrated. Fermat continues to taunt Keane, but he is invisible and inaudible to Anna, and the three dance a "bizarre tango à trois" while Anna is confused by Keane talking to Fermat. The other mathematicians from the Aftermath, after noticing that they can't keep up with the mathematics of the past century, decide to grant admission to Keane even if he is unable to prove the theorem. As Keane finally gives up and declares his attempts a failure, Anna suggests that "within your failure lie the seeds of your success", repeating a line earlier spoken by the mathematicians. This quickly leads to Keane realising how to close the gap in the argument, and the musical ends with another press conference, and Fermat congratulates Keane for his proof.
Concept and writing
Rosenblum and Lessner started working on Fermat's Last Tango in December 1996, after Rosenblum had read a review of Amir Aczel's book Fermat's Last Theorem. Originally planned as an opera, it turned into a musical during the writing process, but operatic elements remained. The original working title had been Proof, but was later changed because of the successful 2000 play Proof. While written in a whimsical tone and using nerdy jokes, the lyrics contain sophisticated mathematical content and mention the Shimura-Taniyama conjecture. In the words of mathematician Arthur Jaffe, "the characters think about mathematics just the way a real mathematician would". Keane's mistake in his proof is an incorrect assumption about Galois representations, just as in the original proof attempt by Andrew Wiles. The number theorist Fernando Q. Gouvêa is credited as mathematics consultant for the musical; writer Lessner was not in contact with Wiles while the musical was created.
Almost the entire text is performed in song, with the exception of the prologue. The music contains elements of operetta, blues, pop, and tango. According to reviewer Simon Saltzman, the use of popular musical styles helps to make the show accessible despite its esoteric subject matter.
Original production
The original production by the York Theatre ran from November 21 to December 31, 2000 at the Theater at St. Peter's Lutheran Church, directed by Mel Marvin, with sets designed by James Morgan.
Cast:
Chris Thompson as Daniel Keane (baritone)
Jonathan Rabb as Pierre de Fermat (tenor)
Edwardyne Cowan as Anna Keane (mezzo-soprano)
Christianne Tisdale as Euclid and reporter 1 (soprano)
Carrie Wilshusen as Sir Isaac Newton and reporter 2 (mezzo-soprano)
Gilles Chiasson as Carl Friedrich Gauss and reporter 3
Mitchell Kantor as Pythagoras and reporter 4 (bass)
Musical numbers
The numbers are listed as in the CD production's liner notes.
Prologue – Reporter 4, Fermat
Press Conference I – Reporters, Keane
You're a Hero Now – Anna, Keane
The Beauty of Numbers – Keane
Tell Me Your Secret – Keane, Fermat
The Aftermath – Mathematicians, Fermat, Keane
I Dreamed – Keane, Anna
Press Conference II – Reporters, Keane, Anna
My Name – Fermat
All I Want for My Birthday – Anna
Game Show, Part I – Fermat, Keane
Young Man's Game – Fermat, Mathematicians
Game Show, Part II – Fermat, Keane, Mathematicians
Math Widow – Anna
I'll Always Be There (Fermat's Last Tango) – Fermat, Keane, Anna
Relay Race – Mathematicians
I'm Stumbling – Keane
Oh, It's You – Keane, Pythagoras
The Beauty of Numbers (reprise) – Anna, Keane
Press Conference III – Reporters, Keane, Fermat
Other performances
The musical was translated into Portuguese by César Viana as and was played in Portuguese university towns in 2003 and at the Teatro da Trindade in 2004. Students at Madison East High School performed an abridged version in 2005 and 2006, including at a statewide meeting of the Mathematical Association of America. In March 2023, the musical was performed at the University of Oxford in one of the Mathematical Institute's lecture halls.
Reception
Reviews for Fermat's Last Tango during its theatrical run were mixed. Wilburn Hampton's review in The New York Times, while noticing the catchy tunes and lyrics, found fault with Daniel Keane not "becom[ing] a real character". Elyse Sommer's review in CurtainUp was more positive, finding praise for both writing and the performances of Rabb and Thompson. Writing in the "Periodica" section of TotalTheater, reviewer Simon Saltzman praised Rabb and called the titular tango the highlight of the show.
The mathematical reception has been more generally positive, with audiences reactions to screenings of the film version ranging from "mildly amused to enthusiastic." Mathematician Robert Osserman, while acknowledging the musical as unique to the point of making comparisons difficult, found it fun and moving and praised the actors and the music. He especially pointed out the mathematical accuracy, but mildly complained about stereotyping of mathematicians and the differences between the true story of Andrew Wiles and the fictional story of Daniel Keane: Unlike Keane, Wiles did not withdraw to his attic for seven years and did not solve the complete Shimura-Taniyama conjecture. Richard Taylor's role in the proof is also omitted in the fictionalized version. Michele Emmer's review in the Mathematical Intelligencer was positive, stating "the gamble of trying to produce an entertaining and mathematically correct musical turned out a success." In their book Math Goes to the Movies, mathematicians Burkard Polster and Marty Ross were enthusiastic about Fermat's Last Tango, calling it "terrific fun" and a "must-see". In her book Science on Stage From Doctor Faustus to Copenhagen, literary scholar Kirsten Shepherd-Barr noted the musical's "successful integration of a surprising amount of 'real' mathematics with a charming and witty score."
In his book Dr. Riemann's Zeros: The Search for the $1 Million Solution to the Greatest Problem in Mathematics, journalist Karl Sabbagh wrote about seeing a performance of Fermat's Last Tango after meeting Andrew Wiles at Princeton. He described the Daniel Keane in the musical as "an accurate portrayal of Wiles" and stated "The writers of this musical had managed to capture the essence of the mathematical enterprise and to see that the human drama of Wiles's struggle with Fermat's Last Theorem embodied as much passion, frustration and triumph as is found in the plot of any conventional film or play." Andrew Wiles himself saw the musical in December 2000, with his family. In an interview, he later stated that he "really liked the portrayal of the personal part of the story - the whole idea of the threesome at the tango was beautifully done" and that he felt "it had been very intelligently written".
Recordings
On the initiative of Clay Mathematics Institute president Arthur Jaffe, a high quality live performance video was made, directed by David Stern. It was first shown to an audience of four hundred people in July 2001 in Berkeley, and later sold at cost by the Clay Mathematics Institute in both VHS and DVD editions. A pamphlet about the mathematics and the mathematicians as well as the actors in the musical was included. The film was shown at various mathematical conferences.
A recording made on December 18, 2000, was distributed as a CD version by Original Cast Records. It was positively reviewed by Matthew Murray, who especially praised Edwardyne Cowan's performance as Anna Keane.
References
Footnotes
Bibliography
External links
Off-Broadway musicals
2000 musicals
Sung-through musicals
Fermat's Last Theorem
Musicals inspired by real-life events | Fermat's Last Tango | Mathematics | 1,978 |
26,197 | https://en.wikipedia.org/wiki/Radiocarbon%20dating | Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method for determining the age of an object containing organic material by using the properties of radiocarbon, a radioactive isotope of carbon.
The method was developed in the late 1940s at the University of Chicago by Willard Libby. It is based on the fact that radiocarbon (¹⁴C) is constantly being created in the Earth's atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting ¹⁴C combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire ¹⁴C by eating the plants. When the animal or plant dies, it stops exchanging carbon with its environment, and thereafter the amount of ¹⁴C it contains begins to decrease as the ¹⁴C undergoes radioactive decay. Measuring the amount of ¹⁴C in a sample from a dead plant or animal, such as a piece of wood or a fragment of bone, provides information that can be used to calculate when the animal or plant died. The older a sample is, the less ¹⁴C there is to be detected, and because the half-life of ¹⁴C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by this process date to approximately 50,000 years ago, although special preparation methods occasionally make an accurate analysis of older samples possible. Libby received the Nobel Prize in Chemistry for his work in 1960.
Research has been ongoing since the 1960s to determine what the proportion of in the atmosphere has been over the past fifty thousand years. The resulting data, in the form of a calibration curve, is now used to convert a given measurement of radiocarbon in a sample into an estimate of the sample's calendar age. Other corrections must be made to account for the proportion of in different types of organisms (fractionation), and the varying levels of throughout the biosphere (reservoir effects). Additional complications come from the burning of fossil fuels such as coal and oil, and from the above-ground nuclear tests done in the 1950s and 1960s. Because the time it takes to convert biological materials to fossil fuels is substantially longer than the time it takes for its to decay below detectable levels, fossil fuels contain almost no . As a result, beginning in the late 19th century, there was a noticeable drop in the proportion of as the carbon dioxide generated from burning fossil fuels began to accumulate in the atmosphere. Conversely, nuclear testing increased the amount of in the atmosphere, which reached a maximum in about 1965 of almost double the amount present in the atmosphere prior to nuclear testing.
Measurement of radiocarbon was originally done by beta-counting devices, which counted the amount of beta radiation emitted by decaying atoms in a sample. More recently, accelerator mass spectrometry has become the method of choice; it counts all the atoms in the sample and not just the few that happen to decay during the measurements; it can therefore be used with much smaller samples (as small as individual plant seeds), and gives results much more quickly. The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the "radiocarbon revolution". Radiocarbon dating has allowed key transitions in prehistory to be dated, such as the end of the last ice age, and the beginning of the Neolithic and Bronze Age in different regions.
Background
History
In 1939, Martin Kamen and Samuel Ruben of the Radiation Laboratory at Berkeley began experiments to determine if any of the elements common in organic matter had isotopes with half-lives long enough to be of value in biomedical research. They synthesized using the laboratory's cyclotron accelerator and soon discovered that the atom's half-life was far longer than had been previously thought. This was followed by a prediction by Serge A. Korff, then employed at the Franklin Institute in Philadelphia, that the interaction of thermal neutrons with in the upper atmosphere would create . It had previously been thought that would be more likely to be created by deuterons interacting with . At some time during World War II, Willard Libby, who was then at Berkeley, learned of Korff's research and conceived the idea that it might be possible to use radiocarbon for dating.
In 1945, Libby moved to the University of Chicago, where he began his work on radiocarbon dating. He published a paper in 1946 in which he proposed that the carbon in living matter might include as well as non-radioactive carbon. Libby and several collaborators proceeded to experiment with methane collected from sewage works in Baltimore, and after isotopically enriching their samples they were able to demonstrate that they contained . By contrast, methane created from petroleum showed no radiocarbon activity because of its age. The results were summarized in a paper in Science in 1947, in which the authors commented that their results implied it would be possible to date materials containing carbon of organic origin.
Libby and James Arnold proceeded to test the radiocarbon dating theory by analyzing samples with known ages. For example, two samples taken from the tombs of two Egyptian kings, Zoser and Sneferu, independently dated to 2625 BC ± 75 years, were dated by radiocarbon measurement to an average of 2800 BC ± 250 years. These results were published in Science in December 1949. Within 11 years of their announcement, more than 20 radiocarbon dating laboratories had been set up worldwide. In 1960, Libby was awarded the Nobel Prize in Chemistry for this work.
Physical and chemical details
In nature, carbon exists as three isotopes. Two are stable and not radioactive: carbon-12 (¹²C) and carbon-13 (¹³C); the third, carbon-14 (¹⁴C), also known as "radiocarbon", is radioactive. The half-life of ¹⁴C (the time it takes for half of a given amount of ¹⁴C to decay) is about 5,730 years, so its concentration in the atmosphere might be expected to decrease over thousands of years, but ¹⁴C is constantly being produced in the lower stratosphere and upper troposphere, primarily by galactic cosmic rays, and to a lesser degree by solar cosmic rays. These cosmic rays generate neutrons as they travel through the atmosphere which can strike nitrogen-14 (¹⁴N) atoms and turn them into ¹⁴C. The following nuclear reaction is the main pathway by which ¹⁴C is created:
n + ¹⁴N → ¹⁴C + p
where n represents a neutron and p represents a proton.
Once produced, the ¹⁴C quickly combines with the oxygen (O) in the atmosphere to form first carbon monoxide (¹⁴CO), and ultimately carbon dioxide (¹⁴CO₂).
¹⁴C + O₂ → ¹⁴CO + O
¹⁴CO + OH → ¹⁴CO₂ + H
Carbon dioxide produced in this way diffuses in the atmosphere, is dissolved in the ocean, and is taken up by plants via photosynthesis. Animals eat the plants, and ultimately the radiocarbon is distributed throughout the biosphere. The ratio of ¹⁴C to ¹²C is approximately 1.25 parts of ¹⁴C to 10¹² parts of ¹²C. In addition, about 1% of the carbon atoms are of the stable isotope ¹³C.
The equation for the radioactive decay of ¹⁴C is:
¹⁴C → ¹⁴N + e⁻ + ν̄e
By emitting a beta particle (an electron, e⁻) and an electron antineutrino (ν̄e), one of the neutrons in the ¹⁴C nucleus changes to a proton and the nucleus reverts to the stable (non-radioactive) isotope ¹⁴N.
Principles
During its life, a plant or animal is in equilibrium with its surroundings by exchanging carbon either with the atmosphere or through its diet. It will, therefore, have the same proportion of as the atmosphere, or in the case of marine animals or plants, with the ocean. Once it dies, it ceases to acquire , but the within its biological material at that time will continue to decay, and so the ratio of to in its remains will gradually decrease. Because decays at a known rate, the proportion of radiocarbon can be used to determine how long it has been since a given sample stopped exchanging carbon – the older the sample, the less will be left.
The equation governing the decay of a radioactive isotope is:
N = N0 e^(−λt)
where N0 is the number of atoms of the isotope in the original sample (at time t = 0, when the organism from which the sample was taken died), and N is the number of atoms left after time t. λ is a constant that depends on the particular isotope; for a given isotope it is equal to the reciprocal of the mean-life – i.e. the average or expected time a given atom will survive before undergoing radioactive decay. The mean-life, denoted by τ, of ¹⁴C is 8,267 years, so the equation above can be rewritten as:
t = 8,267 · ln(N0/N) years
The sample is assumed to have originally had the same / ratio as the ratio in the atmosphere, and since the size of the sample is known, the total number of atoms in the sample can be calculated, yielding N0, the number of atoms in the original sample. Measurement of N, the number of atoms currently in the sample, allows the calculation of t, the age of the sample, using the equation above.
The half-life of a radioactive isotope (usually denoted by t1/2) is a more familiar concept than the mean-life, so although the equations above are expressed in terms of the mean-life, it is more usual to quote the value of ¹⁴C's half-life than its mean-life. The currently accepted value for the half-life of ¹⁴C is 5,700 ± 30 years. This means that after 5,700 years, only half of the initial ¹⁴C will remain; a quarter will remain after 11,400 years; an eighth after 17,100 years; and so on.
The above calculations make several assumptions, such as that the level of in the atmosphere has remained constant over time. In fact, the level of in the atmosphere has varied significantly and as a result, the values provided by the equation above have to be corrected by using data from other sources. This is done by calibration curves (discussed below), which convert a measurement of in a sample into an estimated calendar age. The calculations involve several steps and include an intermediate value called the "radiocarbon age", which is the age in "radiocarbon years" of the sample: an age quoted in radiocarbon years means that no calibration curve has been used − the calculations for radiocarbon years assume that the atmospheric / ratio has not changed over time.
Calculating radiocarbon ages also requires the value of the half-life for . In Libby's 1949 paper he used a value of 5720 ± 47 years, based on research by Engelkemeir et al. This was remarkably close to the modern value, but shortly afterwards the accepted value was revised to 5568 ± 30 years, and this value was in use for more than a decade. It was revised again in the early 1960s to 5,730 ± 40 years, which meant that many calculated dates in papers published prior to this were incorrect (the error in the half-life is about 3%). For consistency with these early papers, it was agreed at the 1962 Radiocarbon Conference in Cambridge (UK) to use the "Libby half-life" of 5568 years. Radiocarbon ages are still calculated using this half-life, and are known as "Conventional Radiocarbon Age". Since the calibration curve (IntCal) also reports past atmospheric concentration using this conventional age, any conventional ages calibrated against the IntCal curve will produce a correct calibrated age. When a date is quoted, the reader should be aware that if it is an uncalibrated date (a term used for dates given in radiocarbon years) it may differ substantially from the best estimate of the actual calendar date, both because it uses the wrong value for the half-life of , and because no correction (calibration) has been applied for the historical variation of in the atmosphere over time.
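For illustration, the decay relationship above can be turned into a short calculation. The following Python sketch (names are illustrative) converts a measured fraction of surviving ¹⁴C into an uncalibrated age, using either the Libby mean-life of 8,033 years or the Cambridge mean-life of 8,267 years; real dates would still require calibration as described above.

```python
import math

LIBBY_MEAN_LIFE = 8033      # years, derived from the "Libby half-life" of 5,568 years
CAMBRIDGE_MEAN_LIFE = 8267  # years, derived from the modern half-life of 5,730 years

def radiocarbon_age(remaining_fraction, mean_life=LIBBY_MEAN_LIFE):
    """Uncalibrated age t from N/N0 = exp(-t/tau), i.e. t = tau * ln(N0/N)."""
    return mean_life * math.log(1 / remaining_fraction)

# A sample retaining half of its original 14C:
print(radiocarbon_age(0.5))                       # ~5,568 radiocarbon years
print(radiocarbon_age(0.5, CAMBRIDGE_MEAN_LIFE))  # ~5,730 years with the modern mean-life
```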
Carbon exchange reservoir
Carbon is distributed throughout the atmosphere, the biosphere, and the oceans; these are referred to collectively as the carbon exchange reservoir, and each component is also referred to individually as a carbon exchange reservoir. The different elements of the carbon exchange reservoir vary in how much carbon they store, and in how long it takes for the generated by cosmic rays to fully mix with them. This affects the ratio of to in the different reservoirs, and hence the radiocarbon ages of samples that originated in each reservoir. The atmosphere, which is where is generated, contains about 1.9% of the total carbon in the reservoirs, and the it contains mixes in less than seven years. The ratio of to in the atmosphere is taken as the baseline for the other reservoirs: if another reservoir has a lower ratio of to , it indicates that the carbon is older and hence that either some of the has decayed, or the reservoir is receiving carbon that is not at the atmospheric baseline. The ocean surface is an example: it contains 2.4% of the carbon in the exchange reservoir, but there is only about 95% as much as would be expected if the ratio were the same as in the atmosphere. The time it takes for carbon from the atmosphere to mix with the surface ocean is only a few years, but the surface waters also receive water from the deep ocean, which has more than 90% of the carbon in the reservoir. Water in the deep ocean takes about 1,000 years to circulate back through surface waters, and so the surface waters contain a combination of older water, with depleted , and water recently at the surface, with in equilibrium with the atmosphere.
Creatures living at the ocean surface have the same ratios as the water they live in, and as a result of the reduced / ratio, the radiocarbon age of marine life is typically about 400 years. Organisms on land are in closer equilibrium with the atmosphere and have the same / ratio as the atmosphere. These organisms contain about 1.3% of the carbon in the reservoir; sea organisms have a mass of less than 1% of those on land and are not shown in the diagram. Accumulated dead organic matter, of both plants and animals, exceeds the mass of the biosphere by a factor of nearly 3, and since this matter is no longer exchanging carbon with its environment, it has a / ratio lower than that of the biosphere.
Dating considerations
The variation in the / ratio in different parts of the carbon exchange reservoir means that a straightforward calculation of the age of a sample based on the amount of it contains will often give an incorrect result. There are several other possible sources of error that need to be considered. The errors are of four general types:
variations in the / ratio in the atmosphere, both geographically and over time;
isotopic fractionation;
variations in the / ratio in different parts of the reservoir;
contamination.
Atmospheric variation
In the early years of using the technique, it was understood that it depended on the atmospheric / ratio having remained the same over the preceding few thousand years. To verify the accuracy of the method, several artefacts that were datable by other techniques were tested; the results of the testing were in reasonable agreement with the true ages of the objects. Over time, however, discrepancies began to appear between the known chronology for the oldest Egyptian dynasties and the radiocarbon dates of Egyptian artefacts. Neither the pre-existing Egyptian chronology nor the new radiocarbon dating method could be assumed to be accurate, but a third possibility was that the / ratio had changed over time. The question was resolved by the study of tree rings: comparison of overlapping series of tree rings allowed the construction of a continuous sequence of tree-ring data that spanned 8,000 years. (Since that time the tree-ring data series has been extended to 13,900 years.) In the 1960s, Hans Suess was able to use the tree-ring sequence to show that the dates derived from radiocarbon were consistent with the dates assigned by Egyptologists. This was possible because although annual plants, such as corn, have a / ratio that reflects the atmospheric ratio at the time they were growing, trees only add material to their outermost tree ring in any given year, while the inner tree rings don't get their replenished and instead start losing through decay. Hence each ring preserves a record of the atmospheric / ratio of the year it grew in. Carbon-dating the wood from the tree rings themselves provides the check needed on the atmospheric / ratio: with a sample of known date, and a measurement of the value of N (the number of atoms of remaining in the sample), the carbon-dating equation allows the calculation of N0 – the number of atoms of in the sample at the time the tree ring was formed – and hence the / ratio in the atmosphere at that time. Equipped with the results of carbon-dating the tree rings, it became possible to construct calibration curves designed to correct the errors caused by the variation over time in the / ratio. These curves are described in more detail below.
Coal and oil began to be burned in large quantities during the 19th century. Both are sufficiently old that they contain little or no detectable and, as a result, the released substantially diluted the atmospheric / ratio. Dating an object from the early 20th century hence gives an apparent date older than the true date. For the same reason, concentrations in the neighbourhood of large cities are lower than the atmospheric average. This fossil fuel effect (also known as the Suess effect, after Hans Suess, who first reported it in 1955) would only amount to a reduction of 0.2% in activity if the additional carbon from fossil fuels were distributed throughout the carbon exchange reservoir, but because of the long delay in mixing with the deep ocean, the actual effect is a 3% reduction.
A much larger effect comes from above-ground nuclear testing, which released large numbers of neutrons into the atmosphere, resulting in the creation of . From about 1950 until 1963, when atmospheric nuclear testing was banned, it is estimated that several tonnes of were created. If all this extra had immediately been spread across the entire carbon exchange reservoir, it would have led to an increase in the / ratio of only a few per cent, but the immediate effect was to almost double the amount of in the atmosphere, with the peak level occurring in 1964 for the northern hemisphere, and in 1966 for the southern hemisphere. The level has since dropped, as this bomb pulse or "bomb carbon" (as it is sometimes called) percolates into the rest of the reservoir.
Isotopic fractionation
Photosynthesis is the primary process by which carbon moves from the atmosphere into living things. In photosynthetic pathways is absorbed slightly more easily than , which in turn is more easily absorbed than . The differential uptake of the three carbon isotopes leads to / and / ratios in plants that differ from the ratios in the atmosphere. This effect is known as isotopic fractionation.
To determine the degree of fractionation that takes place in a given plant, the amounts of both ¹²C and ¹³C isotopes are measured, and the resulting ¹³C/¹²C ratio is then compared to a standard ratio known as PDB. The ¹³C/¹²C ratio is used instead of ¹⁴C/¹²C because the former is much easier to measure, and the latter can be easily derived: the depletion of ¹⁴C relative to ¹²C is proportional to the difference in the atomic masses of the two isotopes, so the depletion for ¹⁴C is twice the depletion of ¹³C. The fractionation of ¹³C, known as δ¹³C, is calculated as follows:
δ¹³C = ((¹³C/¹²C)sample / (¹³C/¹²C)PDB − 1) × 1000 ‰
where the ‰ sign indicates parts per thousand. Because the PDB standard contains an unusually high proportion of ¹³C, most measured δ¹³C values are negative.
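For illustration, a minimal Python sketch of this calculation; the numerical value used for the PDB standard's ¹³C/¹²C ratio is a commonly quoted figure and is an assumption here, not taken from the article:

```python
PDB_RATIO = 0.0112372  # assumed 13C/12C ratio of the PDB standard (commonly quoted value)

def delta_13c(sample_ratio):
    """delta-13C in parts per thousand relative to the PDB standard."""
    return (sample_ratio / PDB_RATIO - 1) * 1000

# A sample depleted in 13C by 2.5% relative to PDB:
print(delta_13c(PDB_RATIO * 0.975))  # -25.0, the value later used to normalize dates
```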
For marine organisms, the details of the photosynthesis reactions are less well understood, and the values for marine photosynthetic organisms are dependent on temperature. At higher temperatures, has poor solubility in water, which means there is less available for the photosynthetic reactions. Under these conditions, fractionation is reduced, and at temperatures above 14 °C the values are correspondingly higher, while at lower temperatures, becomes more soluble and hence more available to marine organisms. The value for animals depends on their diet. An animal that eats food with high values will have a higher than one that eats food with lower values. The animal's own biochemical processes can also impact the results: for example, both bone minerals and bone collagen typically have a higher concentration of than is found in the animal's diet, though for different biochemical reasons. The enrichment of bone also implies that excreted material is depleted in relative to the diet.
Since makes up about 1% of the carbon in a sample, the / ratio can be accurately measured by mass spectrometry. Typical values of have been found by experiment for many plants, as well as for different parts of animals such as bone collagen, but when dating a given sample it is better to determine the value for that sample directly than to rely on the published values.
The carbon exchange between atmospheric and carbonate at the ocean surface is also subject to fractionation, with in the atmosphere more likely than to dissolve in the ocean. The result is an overall increase in the / ratio in the ocean of 1.5%, relative to the / ratio in the atmosphere. This increase in concentration almost exactly cancels out the decrease caused by the upwelling of water (containing old, and hence -depleted, carbon) from the deep ocean, so that direct measurements of radiation are similar to measurements for the rest of the biosphere. Correcting for isotopic fractionation, as is done for all radiocarbon dates to allow comparison between results from different parts of the biosphere, gives an apparent age of about 400 years for ocean surface water.
Reservoir effects
Libby's original exchange reservoir hypothesis assumed that the / ratio in the exchange reservoir is constant all over the world, but it has since been discovered that there are several causes of variation in the ratio across the reservoir.
Marine effect
The in the atmosphere transfers to the ocean by dissolving in the surface water as carbonate and bicarbonate ions; at the same time the carbonate ions in the water are returning to the air as . This exchange process brings from the atmosphere into the surface waters of the ocean, but the thus introduced takes a long time to percolate through the entire volume of the ocean. The deepest parts of the ocean mix very slowly with the surface waters, and the mixing is uneven. The main mechanism that brings deep water to the surface is upwelling, which is more common in regions closer to the equator. Upwelling is also influenced by factors such as the topography of the local ocean bottom and coastlines, the climate, and wind patterns. Overall, the mixing of deep and surface waters takes far longer than the mixing of atmospheric with the surface waters, and as a result water from some deep ocean areas has an apparent radiocarbon age of several thousand years. Upwelling mixes this "old" water with the surface water, giving the surface water an apparent age of about several hundred years (after correcting for fractionation). This effect is not uniform – the average effect is about 400 years, but there are local deviations of several hundred years for areas that are geographically close to each other. These deviations can be accounted for in calibration, and users of software such as CALIB can provide as an input the appropriate correction for the location of their samples. The effect also applies to marine organisms such as shells, and marine mammals such as whales and seals, which have radiocarbon ages that appear to be hundreds of years old.
Hemisphere effect
The northern and southern hemispheres have atmospheric circulation systems that are sufficiently independent of each other that there is a noticeable time lag in mixing between the two. The atmospheric / ratio is lower in the southern hemisphere, with an apparent additional age of about 40 years for radiocarbon results from the south as compared to the north. This is because the greater surface area of ocean in the southern hemisphere means that there is more carbon exchanged between the ocean and the atmosphere than in the north. Since the surface ocean is depleted in because of the marine effect, is removed from the southern atmosphere more quickly than in the north. The effect is strengthened by strong upwelling around Antarctica.
Other effects
If the carbon in freshwater is partly acquired from aged carbon, such as rocks, then the result will be a reduction in the / ratio in the water. For example, rivers that pass over limestone, which is mostly composed of calcium carbonate, will acquire carbonate ions. Similarly, groundwater can contain carbon derived from the rocks through which it has passed. These rocks are usually so old that they no longer contain any measurable , so this carbon lowers the / ratio of the water it enters, which can lead to apparent ages of thousands of years for both the affected water and the plants and freshwater organisms that live in it. This is known as the hard water effect because it is often associated with calcium ions, which are characteristic of hard water; other sources of carbon such as humus can produce similar results, and can also reduce the apparent age if they are of more recent origin than the sample. The effect varies greatly and there is no general offset that can be applied; additional research is usually needed to determine the size of the offset, for example by comparing the radiocarbon age of deposited freshwater shells with associated organic material.
Volcanic eruptions eject large amounts of carbon into the air. The carbon is of geological origin and has no detectable , so the / ratio near the volcano is depressed relative to surrounding areas. Dormant volcanoes can also emit aged carbon. Plants that photosynthesize this carbon also have lower / ratios: for example, plants in the neighbourhood of the Furnas caldera in the Azores were found to have apparent ages that ranged from 250 years to 3320 years.
Contamination
Any addition of carbon to a sample of a different age will cause the measured date to be inaccurate. Contamination with modern carbon causes a sample to appear to be younger than it really is: the effect is greater for older samples. If a sample that is 17,000 years old is contaminated so that 1% of the sample is modern carbon, it will appear to be 600 years younger; for a sample that is 34,000 years old, the same amount of contamination would cause an error of 4,000 years. Contamination with old carbon, with no remaining ¹⁴C, causes an error in the other direction independent of age – a sample contaminated with 1% old carbon will appear to be about 80 years older than it truly is, regardless of the date of the sample.
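For illustration, these error estimates can be checked with a short mixing calculation. The following Python sketch (the function name and the use of the 8,267-year mean-life are illustrative) mixes a given fraction of modern or ¹⁴C-free ("dead") carbon into a sample and reports the resulting apparent age:

```python
import math

TAU = 8267  # mean-life of 14C in years, used here for illustration

def apparent_age(true_age, modern_fraction=0.0, dead_fraction=0.0):
    """Apparent radiocarbon age after contaminating a sample with the given
    fractions of modern carbon and of 14C-free ("dead") carbon."""
    remaining = math.exp(-true_age / TAU)  # 14C fraction of the uncontaminated sample
    mixed = (1 - modern_fraction - dead_fraction) * remaining + modern_fraction
    return -TAU * math.log(mixed)

print(apparent_age(17_000, modern_fraction=0.01))  # ~16,460 yr: several hundred years too young
print(apparent_age(34_000, modern_fraction=0.01))  # ~30,100 yr: close to 4,000 years too young
print(apparent_age(10_000, dead_fraction=0.01))    # ~10,080 yr: about 80 years too old
```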
Samples
Samples for dating need to be converted into a form suitable for measuring the content; this can mean conversion to gaseous, liquid, or solid form, depending on the measurement technique to be used. Before this can be done, the sample must be treated to remove any contamination and any unwanted constituents. This includes removing visible contaminants, such as rootlets that may have penetrated the sample since its burial. Alkali and acid washes can be used to remove humic acid and carbonate contamination, but care has to be taken to avoid removing the part of the sample that contains the carbon to be tested.
Material considerations
It is common to reduce a wood sample to just the cellulose component before testing, but since this can reduce the volume of the sample to 20% of its original size, testing of the whole wood is often performed as well. Charcoal is often tested but is likely to need treatment to remove contaminants.
Unburnt bone can be tested; it is usual to date it using collagen, the protein fraction that remains after washing away the bone's structural material. Hydroxyproline, one of the constituent amino acids in bone, was once thought to be a reliable indicator as it was not known to occur except in bone, but it has since been detected in groundwater.
For burnt bone, testability depends on the conditions under which the bone was burnt. If the bone was heated under reducing conditions, it (and associated organic matter) may have been carbonized. In this case, the sample is often usable.
Shells from both marine and land organisms consist almost entirely of calcium carbonate, either as aragonite or as calcite, or some mixture of the two. Calcium carbonate is very susceptible to dissolving and recrystallizing; the recrystallized material will contain carbon from the sample's environment, which may be of geological origin. If testing recrystallized shell is unavoidable, it is sometimes possible to identify the original shell material from a sequence of tests. It is also possible to test conchiolin, an organic protein found in shell, but it constitutes only 1–2% of shell material.
The three major components of peat are humic acid, humins, and fulvic acid. Of these, humins give the most reliable date as they are insoluble in alkali and less likely to contain contaminants from the sample's environment. A particular difficulty with dried peat is the removal of rootlets, which are likely to be hard to distinguish from the sample material.
Soil contains organic material, but because of the likelihood of contamination by humic acid of more recent origin, it is very difficult to get satisfactory radiocarbon dates. It is preferable to sieve the soil for fragments of organic origin, and date the fragments with methods that are tolerant of small sample sizes.
Other materials that have been successfully dated include ivory, paper, textiles, individual seeds and grains, straw from within mud bricks, and charred food remains found in pottery.
Preparation and size
Particularly for older samples, it may be useful to enrich the amount of in the sample before testing. This can be done with a thermal diffusion column. The process takes about a month and requires a sample about ten times as large as would be needed otherwise, but it allows more precise measurement of the / ratio in old material and extends the maximum age that can be reliably reported.
Once contamination has been removed, samples must be converted to a form suitable for the measuring technology to be used. Where gas is required, is widely used. For samples to be used in liquid scintillation counters, the carbon must be in liquid form; the sample is typically converted to benzene. For accelerator mass spectrometry, solid graphite targets are the most common, although gaseous can also be used.
The quantity of material needed for testing depends on the sample type and the technology being used. There are two types of testing technology: detectors that record radioactivity, known as beta counters, and accelerator mass spectrometers. For beta counters, a sample weighing at least is typically required. Accelerator mass spectrometry is much more sensitive, and samples containing as little as 0.5 milligrams of carbon can be used.
Measurement and results
For decades after Libby performed the first radiocarbon dating experiments, the only way to measure the in a sample was to detect the radioactive decay of individual carbon atoms. In this approach, what is measured is the activity, in number of decay events per unit mass per time period, of the sample. This method is also known as "beta counting", because it is the beta particles emitted by the decaying atoms that are detected. In the late 1970s an alternative approach became available: directly counting the number of and atoms in a given sample, via accelerator mass spectrometry, usually referred to as AMS. AMS counts the / ratio directly, instead of the activity of the sample, but measurements of activity and / ratio can be converted into each other exactly. For some time, beta counting methods were more accurate than AMS, but AMS is now more accurate and has become the method of choice for radiocarbon measurements. In addition to improved accuracy, AMS has two further significant advantages over beta counting: it can perform accurate testing on samples much too small for beta counting, and it is much faster – an accuracy of 1% can be achieved in minutes with AMS, which is far quicker than would be achievable with the older technology.
Beta counting
Libby's first detector was a Geiger counter of his own design. He converted the carbon in his sample to lamp black (soot) and coated the inner surface of a cylinder with it. This cylinder was inserted into the counter in such a way that the counting wire was inside the sample cylinder, in order that there should be no material between the sample and the wire. Any interposing material would have interfered with the detection of radioactivity, since the beta particles emitted by decaying are so weak that half are stopped by a 0.01 mm thickness of aluminium.
Libby's method was soon superseded by gas proportional counters, which were less affected by bomb carbon (the additional created by nuclear weapons testing). These counters record bursts of ionization caused by the beta particles emitted by the decaying atoms; the bursts are proportional to the energy of the particle, so other sources of ionization, such as background radiation, can be identified and ignored. The counters are surrounded by lead or steel shielding, to eliminate background radiation and to reduce the incidence of cosmic rays. In addition, anticoincidence detectors are used; these record events outside the counter and any event recorded simultaneously both inside and outside the counter is regarded as an extraneous event and ignored.
The other common technology used for measuring activity is liquid scintillation counting, which was invented in 1950, but which had to wait until the early 1960s, when efficient methods of benzene synthesis were developed, to become competitive with gas counting; after 1970 liquid counters became the more common technology choice for newly constructed dating laboratories. The counters work by detecting flashes of light caused by the beta particles emitted by as they interact with a fluorescing agent added to the benzene. Like gas counters, liquid scintillation counters require shielding and anticoincidence counters.
For both the gas proportional counter and liquid scintillation counter, what is measured is the number of beta particles detected in a given time period. Since the mass of the sample is known, this can be converted to a standard measure of activity in units of either counts per minute per gram of carbon (cpm/g C), or becquerels per kg (Bq/kg C, in SI units). Each measuring device is also used to measure the activity of a blank sample – a sample prepared from carbon old enough to have no activity. This provides a value for the background radiation, which must be subtracted from the measured activity of the sample being dated to get the activity attributable solely to that sample's . In addition, a sample with a standard activity is measured, to provide a baseline for comparison.
Accelerator mass spectrometry
AMS counts the atoms of and in a given sample, determining the / ratio directly. The sample, often in the form of graphite, is made to emit C− ions (carbon atoms with a single negative charge), which are injected into an accelerator. The ions are accelerated and passed through a stripper, which removes several electrons so that the ions emerge with a positive charge. The ions, which may have from 1 to 4 positive charges (C+ to C4+), depending on the accelerator design, are then passed through a magnet that curves their path; the heavier ions are curved less than the lighter ones, so the different isotopes emerge as separate streams of ions. A particle detector then records the number of ions detected in the stream, but since the volume of (and , needed for calibration) is too great for individual ion detection, counts are determined by measuring the electric current created in a Faraday cup. The large positive charge induced by the stripper forces molecules such as , which has a weight close enough to to interfere with the measurements, to dissociate, so they are not detected. Most AMS machines also measure the sample's , for use in calculating the sample's radiocarbon age. The use of AMS, as opposed to simpler forms of mass spectrometry, is necessary because of the need to distinguish the carbon isotopes from other atoms or molecules that are very close in mass, such as and . As with beta counting, both blank samples and standard samples are used. Two different kinds of blank may be measured: a sample of dead carbon that has undergone no chemical processing, to detect any machine background, and a sample known as a process blank made from dead carbon that is processed into target material in exactly the same way as the sample which is being dated. Any signal from the machine background blank is likely to be caused either by beams of ions that have not followed the expected path inside the detector or by carbon hydrides such as or . A signal from the process blank measures the amount of contamination introduced during the preparation of the sample. These measurements are used in the subsequent calculation of the age of the sample.
Calculations
The calculations to be performed on the measurements taken depend on the technology used, since beta counters measure the sample's radioactivity whereas AMS determines the ratio of the three different carbon isotopes in the sample.
To determine the age of a sample whose activity has been measured by beta counting, the ratio of its activity to the activity of the standard must be found. To determine this, a blank sample (of old, or dead, carbon) is measured, and a sample of known activity is measured. The additional samples allow errors such as background radiation and systematic errors in the laboratory setup to be detected and corrected for. The most common standard sample material is oxalic acid, such as the HOxII standard, 1,000 lb of which was prepared by the National Institute of Standards and Technology (NIST) in 1977 from French beet harvests.
The results from AMS testing are in the form of ratios of 12C, 13C, and 14C, which are used to calculate Fm, the "fraction modern". This is defined as the ratio between the 14C/12C ratio in the sample and the 14C/12C ratio in modern carbon, which is in turn defined as the 14C/12C ratio that would have been measured in 1950 had there been no fossil fuel effect.
Both beta counting and AMS results have to be corrected for fractionation. This is necessary because different materials of the same age, which because of fractionation have naturally different 14C/12C ratios, will appear to be of different ages because the 14C/12C ratio is taken as the indicator of age. To avoid this, all radiocarbon measurements are converted to the measurement that would have been seen had the sample been made of wood, which has a known δ13C value of −25‰.
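The text does not give the normalisation formula itself; as background, one widely used convention (following Stuiver and Polach's 1977 reporting conventions, included here as an assumption rather than a statement from this article) corrects the measured ratio to the −25‰ reference value as:

```latex
\left(\frac{^{14}\mathrm{C}}{^{12}\mathrm{C}}\right)_{-25}
  =
\left(\frac{^{14}\mathrm{C}}{^{12}\mathrm{C}}\right)_{\mathrm{measured}}
  \left(\frac{1 - 25/1000}{1 + \delta^{13}\mathrm{C}/1000}\right)^{2}
```

The squared factor reflects the assumption that 14C fractionates roughly twice as strongly as 13C relative to 12C.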
Once the corrected 14C/12C ratio is known, a "radiocarbon age" is calculated using: t = −8,033 × ln(Fm), expressed in radiocarbon years.
The calculation uses 8,033 years, the mean-life derived from Libby's half-life of 5,568 years, not 8,267 years, the mean-life derived from the more accurate modern value of 5,730 years. Libby's value for the half-life is used to maintain consistency with early radiocarbon testing results; calibration curves include a correction for this, so the accuracy of final reported calendar ages is assured.
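A minimal sketch of that age calculation, using the Libby mean-life of 8,033 years discussed above; the function name and the sample fraction-modern value are illustrative assumptions.

```python
import math

LIBBY_MEAN_LIFE_YEARS = 8033.0  # derived from the 5,568-year Libby half-life

def radiocarbon_age(fraction_modern: float) -> float:
    """Return the conventional radiocarbon age (in radiocarbon years BP)
    for a given fraction modern, using t = -8033 * ln(Fm)."""
    return -LIBBY_MEAN_LIFE_YEARS * math.log(fraction_modern)

# Example: a sample with Fm = 0.647 gives roughly 3,500 radiocarbon years BP.
print(round(radiocarbon_age(0.647)))
```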
Errors and reliability
The reliability of the results can be improved by lengthening the testing time. For example, if counting beta decays for 250 minutes is enough to give an error of ± 80 years, with 68% confidence, then doubling the counting time to 500 minutes will allow a sample with only half as much 14C to be measured with the same error term of 80 years.
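The trade-off between counting time and 14C content follows from Poisson counting statistics: the relative error in the measured activity scales as one over the square root of the total number of counts, and a small relative error in activity maps to an age error of roughly the mean-life times that relative error. The sketch below illustrates this with invented count rates; it ignores background and standard uncertainties, so it is only a rough model of the example in the text.

```python
import math

MEAN_LIFE = 8033.0  # Libby mean-life in years

def age_error_years(count_rate_cpm: float, minutes: float) -> float:
    """Approximate 1-sigma age uncertainty from Poisson counting statistics:
    relative error in activity ~ 1/sqrt(total counts), so the age error is
    roughly mean-life / sqrt(total counts)."""
    total_counts = count_rate_cpm * minutes
    return MEAN_LIFE / math.sqrt(total_counts)

# Halving the 14C content halves the count rate; doubling the counting time
# restores the total number of counts, so the age error is unchanged.
print(round(age_error_years(40.0, 250)))   # baseline sample: about 80 years
print(round(age_error_years(20.0, 500)))   # half the 14C, twice the time: same
```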
Radiocarbon dating is generally limited to dating samples no more than 50,000 years old, as samples older than that have insufficient 14C to be measurable. Older dates have been obtained by using special sample preparation techniques, large samples, and very long measurement times. These techniques can allow measurement of dates up to 60,000 and in some cases up to 75,000 years before the present.
Radiocarbon dates are generally presented with a range of one standard deviation (usually represented by the Greek letter sigma as 1σ) on either side of the mean. However, a date range of 1σ represents only a 68% confidence level, so the true age of the object being measured may lie outside the range of dates quoted. This was demonstrated in 1970 by an experiment run by the British Museum radiocarbon laboratory, in which weekly measurements were taken on the same sample for six months. The results varied widely (though consistently with a normal distribution of errors in the measurements), and included multiple date ranges (of 1σ confidence) that did not overlap with each other. The measurements included one with a range from about 4,250 to about 4,390 years ago, and another with a range from about 4,520 to about 4,690.
Errors in procedure can also lead to errors in the results. If 1% of the benzene in a modern reference sample accidentally evaporates, scintillation counting will give a radiocarbon age that is too young by about 80 years.
Calibration
The calculations given above produce dates in radiocarbon years: i.e. dates that represent the age the sample would be if the 14C/12C ratio had been constant historically. Although Libby had pointed out as early as 1955 the possibility that this assumption was incorrect, it was not until discrepancies began to accumulate between measured ages and known historical dates for artefacts that it became clear that a correction would need to be applied to radiocarbon ages to obtain calendar dates.
To produce a curve that can be used to relate calendar years to radiocarbon years, a sequence of securely dated samples is needed which can be tested to determine their radiocarbon age. The study of tree rings led to the first such sequence: individual pieces of wood show characteristic sequences of rings that vary in thickness because of environmental factors such as the amount of rainfall in a given year. These factors affect all trees in an area, so examining tree-ring sequences from old wood allows the identification of overlapping sequences. In this way, an uninterrupted sequence of tree rings can be extended far into the past. The first such published sequence, based on bristlecone pine tree rings, was created by Wesley Ferguson. Hans Suess used this data to publish the first calibration curve for radiocarbon dating in 1967. The curve showed two types of variation from the straight line: a long term fluctuation with a period of about 9,000 years, and a shorter-term variation, often referred to as "wiggles", with a period of decades. Suess said he drew the line showing the wiggles by "cosmic schwung", by which he meant that the variations were caused by extraterrestrial forces. It was unclear for some time whether the wiggles were real or not, but they are now well-established. These short term fluctuations in the calibration curve are now known as de Vries effects, after Hessel de Vries.
A calibration curve is used by taking the radiocarbon date reported by a laboratory and reading across from that date on the vertical axis of the graph. The point where this horizontal line intersects the curve will give the calendar age of the sample on the horizontal axis. This is the reverse of the way the curve is constructed: a point on the graph is derived from a sample of known age, such as a tree ring; when it is tested, the resulting radiocarbon age gives a data point for the graph.
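A toy sketch of that reading-off process is given below; the calibration points are invented, the curve is assumed monotonic over the range shown (real curves wiggle, which is why multiple intercepts can occur), and real work would use a published curve such as IntCal20.

```python
# Toy illustration of reading a calendar age from a calibration curve by
# linear interpolation. The curve points below are invented for illustration.

calendar_bp =    [3000, 3100, 3200, 3300, 3400]   # calendar years BP
radiocarbon_bp = [2850, 2950, 3020, 3130, 3220]   # corresponding radiocarbon years BP

def calendar_age(radiocarbon_age: float) -> float:
    """Find where a horizontal line at the given radiocarbon age crosses the
    (monotonic, toy) calibration curve, by linear interpolation."""
    for i in range(len(radiocarbon_bp) - 1):
        lo, hi = radiocarbon_bp[i], radiocarbon_bp[i + 1]
        if lo <= radiocarbon_age <= hi:
            frac = (radiocarbon_age - lo) / (hi - lo)
            return calendar_bp[i] + frac * (calendar_bp[i + 1] - calendar_bp[i])
    raise ValueError("radiocarbon age outside the toy curve")

print(round(calendar_age(3000)))  # a calendar age between 3100 and 3200 BP
```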
Over the next thirty years many calibration curves were published using a variety of methods and statistical approaches. These were superseded by the IntCal series of curves, beginning with IntCal98, published in 1998, and updated in 2004, 2009, 2013, and 2020. The improvements to these curves are based on new data gathered from tree rings, varves, coral, plant macrofossils, speleothems, and foraminifera. There are separate curves for the northern hemisphere (IntCal20) and southern hemisphere (SHCal20), as they differ systematically because of the hemisphere effect. The continuous sequence of tree-ring dates for the northern hemisphere goes back to 13,910 BP as of 2020, and this provides close to annual dating resolution for IntCal20 over much of that period; the resolution is reduced where there are calibration plateaus, and improved where short-term 14C spikes due to Miyake events provide additional correlation. Radiocarbon dating earlier than the continuous tree ring sequence relies on correlation with more approximate records. SHCal20 is based on independent data where possible and derived from the northern curve by adding the average offset for the southern hemisphere where no direct data was available. There is also a separate marine calibration curve, MARINE20. For a set of samples forming a sequence with a known separation in time, these samples form a subset of the calibration curve. The sequence can be compared to the calibration curve and the best match to the sequence established. This "wiggle-matching" technique can lead to more precise dating than is possible with individual radiocarbon dates. Wiggle-matching can be used in places where there is a plateau on the calibration curve, and hence can provide a much more accurate date than the intercept or probability methods are able to produce. The technique is not restricted to tree rings; for example, a stratified tephra sequence in New Zealand, believed to predate human colonization of the islands, has been dated to 1314 AD ± 12 years by wiggle-matching. The wiggles also mean that reading a date from a calibration curve can give more than one answer: this occurs when the curve wiggles up and down enough that the radiocarbon age intercepts the curve in more than one place, which may lead to a radiocarbon result being reported as two separate age ranges, corresponding to the two parts of the curve that the radiocarbon age intercepted.
Bayesian statistical techniques can be applied when there are several radiocarbon dates to be calibrated. For example, if a series of radiocarbon dates is taken from different levels in a stratigraphic sequence, Bayesian analysis can be used to evaluate dates which are outliers and can calculate improved probability distributions, based on the prior information that the sequence should be ordered in time. When Bayesian analysis was introduced, its use was limited by the need to use mainframe computers to perform the calculations, but the technique has since been implemented on programs available for personal computers, such as OxCal.
Reporting dates
Several formats for citing radiocarbon results have been used since the first samples were dated. As of 2019, the standard format required by the journal Radiocarbon is as follows.
Uncalibrated dates should be reported as "lab ID: 14C age ± error BP", where:
lab ID identifies the laboratory that tested the sample, and the sample ID
14C age is the laboratory's determination of the age of the sample, in radiocarbon years
error is the laboratory's estimate of the error in the age, at 1σ confidence.
'BP' stands for "before present", referring to a reference date of 1950, so that "500 BP" means the year AD 1450.
For example, the uncalibrated date "UtC-2020: 3510 ± 60 BP" indicates that the sample was tested by the Utrecht van der Graaff Laboratorium ("UtC"), where it has a sample number of "2020", and that the uncalibrated age is 3510 years before present, ± 60 years. Related forms are sometimes used: for example, "2.3 ka BP" means 2,300 radiocarbon years before present (i.e. 350 BC), and "14C yr BP" might be used to distinguish the uncalibrated date from a date derived from another dating method such as thermoluminescence.
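As a small illustration of the convention, the sketch below parses a date string in the format above and converts a BP age to a calendar year by simple subtraction from 1950 (ignoring the absence of a year zero, to match the arithmetic in the examples above); the function names and regular expression are assumptions for illustration.

```python
import re

def parse_uncalibrated(report: str):
    """Parse a date reported as '<lab ID>: <age> ± <error> BP'."""
    m = re.match(r"\s*([^:]+):\s*(\d+)\s*±\s*(\d+)\s*BP\s*$", report)
    if not m:
        raise ValueError(f"unrecognised format: {report!r}")
    return m.group(1), int(m.group(2)), int(m.group(3))

def bp_to_calendar_year(age_bp: int) -> str:
    """Convert a 'before present' age (reference year 1950) to BC/AD."""
    year = 1950 - age_bp
    return f"AD {year}" if year > 0 else f"{abs(year)} BC"

lab, age, err = parse_uncalibrated("UtC-2020: 3510 ± 60 BP")
print(lab, age, err)              # UtC-2020 3510 60
print(bp_to_calendar_year(500))   # AD 1450, as in the example above
print(bp_to_calendar_year(2300))  # 350 BC
```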
Calibrated dates are frequently reported as "cal BP", "cal BC", or "cal AD", again with 'BP' referring to the year 1950 as the zero date. Radiocarbon gives two options for reporting calibrated dates. A common format is "cal date-range confidence", where:
date-range is the range of dates corresponding to the given confidence level
confidence indicates the confidence level for the given date range.
For example, "cal 1220–1281 AD (1σ)" means a calibrated date for which the true date lies between AD 1220 and AD 1281, with a confidence level of '1 sigma', or approximately 68%. Calibrated dates can also be expressed as "BP" instead of using "BC" and "AD". The curve used to calibrate the results should be the latest available IntCal curve. Calibrated dates should also identify any programs, such as OxCal, used to perform the calibration. In addition, an article in Radiocarbon in 2014 about radiocarbon date reporting conventions recommends that information should be provided about sample treatment, including the sample material, pretreatment methods, and quality control measurements; that the citation to the software used for calibration should specify the version number and any options or models used; and that the calibrated date should be given with the associated probabilities for each range.
Use in archaeology
Interpretation
A key concept in interpreting radiocarbon dates is archaeological association: what is the true relationship between two or more objects at an archaeological site? It frequently happens that a sample for radiocarbon dating can be taken directly from the object of interest, but there are also many cases where this is not possible. Metal grave goods, for example, cannot be radiocarbon dated, but they may be found in a grave with a coffin, charcoal, or other material which can be assumed to have been deposited at the same time. In these cases, a date for the coffin or charcoal is indicative of the date of deposition of the grave goods, because of the direct functional relationship between the two. There are also cases where there is no functional relationship, but the association is reasonably strong: for example, a layer of charcoal in a rubbish pit provides a date which has a relationship to the rubbish pit.
Contamination is of particular concern when dating very old material obtained from archaeological excavations and great care is needed in the specimen selection and preparation. In 2014, Thomas Higham and co-workers suggested that many of the dates published for Neanderthal artifacts are too recent because of contamination by "young carbon".
As a tree grows, only the outermost tree ring exchanges carbon with its environment, so the age measured for a wood sample depends on where the sample is taken from. This means that radiocarbon dates on wood samples can be older than the date at which the tree was felled. In addition, if a piece of wood is used for multiple purposes, there may be a significant delay between the felling of the tree and the final use in the context in which it is found. This is often referred to as the "old wood" problem. One example is the Bronze Age trackway at Withy Bed Copse, in England; the trackway was built from wood that had clearly been worked for other purposes before being re-used in the trackway. Another example is driftwood, which may be used as construction material. It is not always possible to recognize re-use. Other materials can present the same problem: for example, bitumen is known to have been used by some Neolithic communities to waterproof baskets; the bitumen's radiocarbon age will be greater than is measurable by the laboratory, regardless of the actual age of the context, so testing the basket material will give a misleading age if care is not taken. A separate issue, related to re-use, is that of lengthy use, or delayed deposition. For example, a wooden object that remains in use for a lengthy period will have an apparent age greater than the actual age of the context in which it is deposited.
Use outside archaeology
Archaeology is not the only field that uses radiocarbon dating. Radiocarbon dates can also be used in geology, sedimentology, and lake studies, for example. The ability to date minute samples using AMS has meant that palaeobotanists and palaeoclimatologists can use radiocarbon dating directly on pollen purified from sediment sequences, or on small quantities of plant material or charcoal. Dates on organic material recovered from strata of interest can be used to correlate strata in different locations that appear to be similar on geological grounds. Dating material from one location gives date information about the other location, and the dates are also used to place strata in the overall geological timeline.
Radiocarbon is also used to date carbon released from ecosystems, particularly to monitor the release of old carbon that was previously stored in soils as a result of human disturbance or climate change. Recent advances in field collection techniques also allow the radiocarbon dating of methane and carbon dioxide, which are important greenhouse gases.
Notable applications
Pleistocene/Holocene boundary in Two Creeks Fossil Forest
The Pleistocene is a geological epoch that began about 2.6 million years ago. The Holocene, the current geological epoch, begins about 11,700 years ago when the Pleistocene ends. Establishing the date of this boundary − which is defined by sharp climatic warming − as accurately as possible has been a goal of geologists for much of the 20th century. At Two Creeks, in Wisconsin, a fossil forest was discovered (Two Creeks Buried Forest State Natural Area), and subsequent research determined that the destruction of the forest was caused by the Valders ice readvance, the last southward movement of ice before the end of the Pleistocene in that area. Before the advent of radiocarbon dating, the fossilized trees had been dated by correlating sequences of annually deposited layers of sediment at Two Creeks with sequences in Scandinavia. This led to estimates that the trees were between 24,000 and 19,000 years old, and hence this was taken to be the date of the last advance of the Wisconsin glaciation before its final retreat marked the end of the Pleistocene in North America. In 1952 Libby published radiocarbon dates for several samples from the Two Creeks site and two similar sites nearby; the dates were averaged to 11,404 BP with a standard error of 350 years. This result was uncalibrated, as the need for calibration of radiocarbon ages was not yet understood. Further results over the next decade supported an average date of 11,350 BP, with the results thought to be the most accurate averaging 11,600 BP. There was initial resistance to these results on the part of Ernst Antevs, the palaeobotanist who had worked on the Scandinavian varve series, but his objections were eventually discounted by other geologists. In the 1990s samples were tested with AMS, yielding (uncalibrated) dates ranging from 11,640 BP to 11,800 BP, both with a standard error of 160 years. Subsequently, a sample from the fossil forest was used in an interlaboratory test, with results provided by over 70 laboratories. These tests produced a median age of 11,788 ± 8 BP (2σ confidence) which when calibrated gives a date range of 13,730 to 13,550 cal BP. The Two Creeks radiocarbon dates are now regarded as a key result in developing the modern understanding of North American glaciation at the end of the Pleistocene.
Dead Sea Scrolls
In 1947, scrolls were discovered in caves near the Dead Sea that proved to contain writing in Hebrew and Aramaic, most of which are thought to have been produced by the Essenes, a small Jewish sect. These scrolls are of great significance in the study of Biblical texts because many of them contain the earliest known version of books of the Hebrew bible. A sample of the linen wrapping from one of these scrolls, the Great Isaiah Scroll, was included in a 1955 analysis by Libby, with an estimated age of 1,917 ± 200 years. Based on an analysis of the writing style, palaeographic estimates were made of the age of 21 of the scrolls, and samples from most of these, along with other scrolls which had not been palaeographically dated, were tested by two AMS laboratories in the 1990s. The results ranged in age from the early 4th century BC to the mid 4th century AD. In all but two cases the scrolls were determined to be within 100 years of the palaeographically determined age. The Isaiah scroll was included in the testing and was found to have two possible date ranges at a 2σ confidence level, because of the shape of the calibration curve at that point: there is a 15% chance that it dates from 355 to 295 BC, and an 84% chance that it dates from 210 to 45 BC. Subsequently, these dates were criticized on the grounds that before the scrolls were tested, they had been treated with modern castor oil in order to make the writing easier to read; it was argued that failure to remove the castor oil sufficiently would have caused the dates to be too young. Multiple papers have been published both supporting and opposing the criticism.
Impact
Soon after the publication of Libby's 1949 paper in Science, universities around the world began establishing radiocarbon-dating laboratories, and by the end of the 1950s there were more than 20 active research laboratories. It quickly became apparent that the principles of radiocarbon dating were valid, despite certain discrepancies, the causes of which then remained unknown.
The development of radiocarbon dating has had a profound impact on archaeology, often described as the "radiocarbon revolution". In the words of anthropologist R. E. Taylor, "14C data made a world prehistory possible by contributing a time scale that transcends local, regional and continental boundaries". It provides more accurate dating within sites than previous methods, which usually derived either from stratigraphy or from typologies (e.g. of stone tools or pottery); it also allows comparison and synchronization of events across great distances. The advent of radiocarbon dating may even have led to better field methods in archaeology since better data recording leads to a firmer association of objects with the samples to be tested. These improved field methods were sometimes motivated by attempts to prove that a date was incorrect. Taylor also suggests that the availability of definite date information freed archaeologists from the need to focus so much of their energy on determining the dates of their finds, and led to an expansion of the questions archaeologists were willing to research. For example, from the 1970s questions about the evolution of human behaviour were much more frequently seen in archaeology.
The dating framework provided by radiocarbon led to a change in the prevailing view of how innovations spread through prehistoric Europe. Researchers had previously thought that many ideas spread by diffusion through the continent, or by invasions of peoples bringing new cultural ideas with them. As radiocarbon dates began to prove these ideas wrong in many instances, it became apparent that these innovations must sometimes have arisen locally. This has been described as a "second radiocarbon revolution". More broadly, the success of radiocarbon dating stimulated interest in analytical and statistical approaches to archaeological data. Taylor has also described the impact of AMS, and the ability to obtain accurate measurements from very small samples, as ushering in a third radiocarbon revolution.
Occasionally, radiocarbon dating techniques date an object of popular interest, for example, the Shroud of Turin, a piece of linen cloth thought by some to bear an image of Jesus Christ after his crucifixion. Three separate laboratories dated samples of linen from the Shroud in 1988; the results pointed to 14th-century origins, raising doubts about the shroud's authenticity as an alleged 1st-century relic.
Researchers have studied other isotopes created by cosmic rays to determine if they could also be used to assist in dating objects of archaeological interest; such isotopes include 3He, 10Be, 21Ne, 26Al, and 36Cl. With the development of AMS in the 1980s it became possible to measure these isotopes precisely enough for them to be the basis of useful dating techniques, which have been primarily applied to dating rocks. Naturally occurring radioactive isotopes can also form the basis of dating methods, as with potassium–argon dating, argon–argon dating, and uranium series dating. Other dating techniques of interest to archaeologists include thermoluminescence, optically stimulated luminescence, electron spin resonance, and fission track dating, as well as techniques that depend on annual bands or layers, such as dendrochronology, tephrochronology, and varve chronology.
See also
Chronological dating, archaeological chronology
Absolute dating
Relative dating
Geochronology
774–775 carbon-14 spike
Notes
References
Sources
External links
Radiocarbon Dating and Chronological Modelling: Guidelines and Best Practice, Historic England
OxCal, radiocarbon calibration program
IntCal working group
IntChron, indexing service for radiocarbon dates
p3k14c, global radiocarbon database
XRONOS, global radiocarbon database
American inventions
Carbon
Conservation and restoration of cultural heritage
Isotopes of carbon
Radioactivity
Radiometric dating
1940s introductions | Radiocarbon dating | Physics,Chemistry | 12,399 |
55,305,353 | https://en.wikipedia.org/wiki/Odontomachus%20spinifer | Odontomachus spinifer is an extinct species of ant in the subfamily Ponerinae known from one possibly Miocene fossil found on Hispaniola. O. spinifer is one of two species in the ant genus Odontomachus to have been described from fossils found in Dominican amber and is one of a number of Odontomachus species found in the Greater Antilles.
History and classification
Odontomachus spinifer is known from a solitary fossil insect which, along with a microhymenopteran, is an inclusion in a transparent yellow chunk of Dominican amber. The amber was produced by the extinct Hymenaea protera, which formerly grew on Hispaniola, across northern South America, and up to southern Mexico. The specimen was collected from an undetermined amber mine in fossil-bearing rocks of the Cordillera Septentrional mountains of northern Dominican Republic. The amber dates from at least the Burdigalian stage of the Miocene, based on studying the associated fossil foraminifera, and may be as old as the Middle Eocene, based on the associated fossil coccoliths. This age range arises because the host rock is a secondary deposit for the amber, so the Miocene date represents only the youngest age the amber might be.
At the time of description, the holotype specimen, number "Do-2215", was preserved in the State Museum of Natural History Stuttgart amber collections in Baden-Württemberg, Germany. The holotype fossil was first studied by entomologist Maria L. De Andrade of the University of Basle with her 1994 type description of the new species being published in the journal Stuttgarter Beiträge zur Naturkunde. Serie B (Geologie und Paläontologie). The specific epithet spinifer is derived from the Latin word which means "bearing a spine", a reference to the large projection on the top of the petiole.
Based on the head structure, O. spinifer was suggested to be part of the O. haematodus species group, closely placed with the species O. affinis, O. mayi, and O. panamensis. The three modern species are found from Brazil and Guyana up through Panama and Costa Rica. The two modern species found on the island of Hispaniola, O. bauri and O. insularis, are not closely placed to O. spinifer, having different structuring of the head's upper surface. When first described, O. spinifer was one of two Odontomachus species that had been described from fossils. It and Odontomachus pseudobauri were both described by De Andrade from Dominican amber in the same paper. A third species, Odontomachus paleomyagra, the first compression fossil species, was described in 2014 from a worker found in Priabonian age lignite deposits of the Most Basin, Czech Republic.
Description
The O. spinifer worker is approximately in length, and has a shining exoskeleton of yellowish red to reddish brown tones. The smooth exoskeleton has tiny punctures across the top of the head, mandibles, petiole node and the gaster. In contrast, the frons, antennal depressions, pronotum, mesonotum and underside of the petiole are distinguished by varying degrees of striation. The head is large with a rectangular outline, being 2/3 longer than wide, with the rear margin of the head wider than the maximum width of the pronotum. The mandibles are almost as long as the head is wide, and the chewing margin has twelve teeth increasing in size towards the tip, while the apex of each has three teeth: a preapical, an intercalary, and an apical tooth. The antennae have notably long scapes that extend past the rear margin of the head capsule and curve slightly along their length. The first funicular segments of the antennae are double the length of the second segment and longer than any of the other 10 segments. The mesonotum and propodeum have an elongated slender profile, as does the petiole, while the gaster is bell shaped along the connection with the petiole and the sting is partially retracted. There is a notably large backward-curving spine formed from the upper surface of the petiole, which is longer than the width of the petiole.
References
External links
†Odontomachus spinifer
Fossil ant taxa
Burdigalian life
Miocene insects of North America
Prehistoric insects of the Caribbean
Fauna of Hispaniola
Insects of the Dominican Republic
Dominican amber
Fossil taxa described in 1994
Species known from a single specimen | Odontomachus spinifer | Biology | 958 |
74,714,401 | https://en.wikipedia.org/wiki/Aspergillus%20aeneus | Aspergillus aeneus is a species of fungus in the genus Aspergillus in the Aenei section of the subgenus Nidulantes.
References
aeneus
Fungi described in 1954
Fungus species | Aspergillus aeneus | Biology | 44 |
39,042,932 | https://en.wikipedia.org/wiki/Commission%20on%20Isotopic%20Abundances%20and%20Atomic%20Weights | The Commission on Isotopic Abundances and Atomic Weights (CIAAW) is an international scientific committee of the International Union of Pure and Applied Chemistry (IUPAC) under its Division of Inorganic Chemistry. Since 1899, it is entrusted with periodic critical evaluation of atomic weights of chemical elements and other cognate data, such as the isotopic composition of elements. The biennial CIAAW Standard Atomic Weights are accepted as the authoritative source in science and appear worldwide on the periodic table wall charts.
The use of CIAAW Standard Atomic Weights is also required legally, for example, in calculation of calorific value of natural gas (ISO 6976:1995), or in gravimetric preparation of primary reference standards in gas analysis (ISO 6142:2006). In addition, until 2019 the definition of Kelvin, the SI unit for thermodynamic temperature, made direct reference to the isotopic composition of oxygen and hydrogen as recommended by CIAAW. The latest CIAAW report was published in May 2022.
Establishment
Although the atomic weight had taken on the concept of a constant of nature like the speed of light, the lack of agreement on accepted values created difficulties in trade. Quantities measured by chemical analysis were not being translated into weights in the same way by all parties and standardization became an urgent matter. With so many different values being reported, the American Chemical Society (ACS), in 1892, appointed a permanent committee to report on a standard table of atomic weights for acceptance by the Society. Clarke, who was then the chief chemist for the U.S. Geological Survey, was appointed a committee of one to provide the report. He presented the first report at the 1893 annual meeting and published it in January 1894.
In 1897, the German Society of Chemistry, following a proposal by Hermann Emil Fischer, appointed a three-person working committee to report on atomic weights. The committee consisted of Chairman Prof. Hans H. Landolt (Berlin University), Prof. Wilhelm Ostwald (University of Leipzig), and Prof. Karl Seubert (University of Hanover). This committee published its first report in 1898, in which the committee suggested the desirability of an international committee on atomic weights. On 30 March 1899 Landolt, Ostwald and Seubert issued an invitation to other national scientific organizations to appoint delegates to the International Committee on Atomic Weights. Fifty-eight members were appointed to the Great International Committee on Atomic Weights, including Frank W. Clarke. The large committee conducted its business by correspondence to Landolt which created difficulties and delays associated with correspondence among fifty-eight members. As a result, on 15 December 1899, the German committee asked the International members to select a small committee of three to four members. In 1902, Prof. Frank W. Clarke (USA), Prof. Karl Seubert (Germany), and Prof. Thomas Edward Thorpe (UK) were elected, and the International Committee on Atomic Weights published its inaugural report in 1903 under the chairmanship of Prof. Clarke.
Function
Since 1899, the Commission periodically and critically evaluates the published scientific literature and produces the Table of Standard Atomic Weights. In recent times, the Table of Standard Atomic Weights has been published biennially. Each recommended standard atomic-weight value reflects the best knowledge of evaluated, published data. In the recommendation of standard atomic weights, CIAAW generally does not attempt to estimate the average or composite isotopic composition of the Earth or of any subset of terrestrial materials. Instead, the Commission seeks to find a single value and symmetrical uncertainty that would include almost all substances likely to be encountered.
Notable decisions
Many notable decisions have been made by the Commission over its history. Some of these are highlighted below.
International atomic weight unit: H=1 or O=16
Though Dalton proposed setting the atomic weight of hydrogen as unity in 1803, many other proposals were popular throughout the 19th century. By the end of the 19th century, two scales gained popular support: H=1 and O=16. This situation was undesirable in science, and in October 1899 the inaugural task of the International Commission on Atomic Weights was to decide on the international scale; the oxygen scale became the international standard. The endorsement of the oxygen scale created significant backlash in the chemistry community, and the inaugural Atomic Weights Report was thus published using both scales. This practice soon ceased and the oxygen scale remained the international standard for decades to come. Nevertheless, when the Commission joined the IUPAC in 1920, it was asked to revert to the H=1 scale, which it rejected.
Modern unit: 12C=12
With the discovery of oxygen isotopes in 1929, a situation arose where chemists based their calculations on the average atomic mass (atomic weight) of oxygen whereas physicists used the mass of the predominant isotope of oxygen, oxygen-16. This discrepancy became undesired and a unification between the chemistry and physics was necessary. In the 1957 Paris meeting the Commission put forward a proposal for a carbon-12 scale. The carbon-12 scale for atomic weights and nuclide masses was approved by IUPAP (1960) and IUPAC (1961) and it is still in use worldwide.
Uncertainty of the atomic weights
In the early 20th century, measurements of the atomic weight of lead showed significant variations depending on the origin of the sample. These differences were considered to be an exception attributed to lead isotopes being products of the natural radioactive decay chains of uranium. In the 1930s, however, Malcolm Dole reported that the atomic weight of oxygen in air was slightly different from that in water. Soon thereafter, Alfred Nier reported natural variation in the isotopic composition of carbon. It was becoming clear that atomic weights are not constants of nature. At the Commission's meeting in 1951, it was recognized that the isotopic-abundance variation of sulfur had a significant effect on the internationally accepted value of an atomic weight. In order to indicate the span of atomic-weight values that may apply to sulfur from different natural sources, the value ± 0.003 was attached to the atomic weight of sulfur. By 1969, the Commission had assigned uncertainties to all atomic-weight values.
Interval notation
At its meeting in 2009 in Vienna, the Commission decided to express the standard atomic weight of hydrogen, carbon, oxygen, and other elements in a manner that clearly indicates that the values are not constants of nature. For example, writing the standard atomic weight of hydrogen as [1.007 84, 1.008 11] shows that the atomic weight in any normal material will be greater than or equal to 1.007 84 and will be less than or equal to 1.008 11.
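As a small illustration of how such interval values propagate in practice (a sketch using only the hydrogen interval quoted above; the helper function is hypothetical, not part of any CIAAW recommendation):

```python
def scale_interval(interval, factor):
    """Multiply an atomic-weight interval [low, high] by a stoichiometric factor."""
    low, high = interval
    return (low * factor, high * factor)

hydrogen = (1.00784, 1.00811)        # standard atomic weight of H, as quoted above
h2 = scale_interval(hydrogen, 2)     # molecular-weight interval for H2
print(h2)                            # (2.01568, 2.01622)
```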
Affiliations and name
International Union of Pure and Applied Chemistry (IUPAC) from 1920
International Association of Chemical Societies (IACS) from 1913 to 1919
The IUPAC Commission on Isotopic Abundances and Atomic Weights has undergone several name changes between its founding in 1899 and 2002, when it received its present name:
(1899-1902) The Great International Committee on Atomic Weights
(1902-1920) International Committee on Atomic Weights
(1920-1922) IUPAC International Committee on Atomic Weights
(1922-1930) IUPAC International Committee on Chemical Elements
(1930-1979) IUPAC Commission on Atomic Weights
(1979-2002) IUPAC Commission on Atomic Weights and Isotopic Abundances
(2002–present) IUPAC Commission on Isotopic Abundances and Atomic Weights
Notable members
Since its establishment, many notable chemists have been members of the Commission. Notably, eight Nobel laureates have served on the Commission: Henri Moissan (1903-1907), Wilhelm Ostwald (1906-1916), Francis William Aston, Frederick Soddy, Theodore William Richards, Niels Bohr, Otto Hahn and Marie Curie.
Richards was awarded the 1914 Nobel Prize in Chemistry "in recognition of his accurate determinations of the atomic weight of a large number of chemical elements" while he was a member of the Commission. Likewise, Francis Aston was a member of the Commission when he was awarded the 1922 Nobel Prize in Chemistry for his work on isotope measurements. Incidentally, the 1925 Atomic Weights report was signed by three Nobel laureates.
Among other notable scientists who have served on the Commission were Georges Urbain (discoverer of lutetium, though priority was disputed with Carl Auer von Welsbach), André-Louis Debierne (discoverer of actinium, though priority has been disputed with Friedrich Oskar Giesel), Marguerite Perey (discoverer of francium), Georgy Flyorov (namesake of the element flerovium), Robert Whytlaw-Gray (first isolated radon), and Arne Ölander (Secretary and Member of the Nobel Committee for Chemistry).
Chairs of the Commission
Since its establishment, the chairs of the Commission have been:
In 1950, the Spanish chemist Enrique Moles became the first Secretary of the Commission when this position was created.
See also
Atomic mass unit
Committee on Data for Science and Technology
References
External links
Standard Atomic weights of the elements 2021 (IUPAC Technical Report)
Atomic weights of the elements 2013 (IUPAC Technical Report)
Isotopic compositions of the elements 2013 (IUPAC Technical Report)
Isotopic compositions of the elements 2009 (IUPAC Technical Report)
Chemistry organizations
International scientific organizations
Standards organizations
Scientific organizations established in 1899 | Commission on Isotopic Abundances and Atomic Weights | Chemistry | 1,911 |
194,743 | https://en.wikipedia.org/wiki/Orthonormality | In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal unit vectors. A unit vector means that the vector has a length of 1, which is also known as normalized. Orthogonal means that the vectors are all perpendicular to each other. A set of vectors form an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.
Intuitive overview
The construction of orthogonality of vectors is motivated by a desire to extend the intuitive notion of perpendicular vectors to higher-dimensional spaces. In the Cartesian plane, two vectors are said to be perpendicular if the angle between them is 90° (i.e. if they form a right angle). This definition can be formalized in Cartesian space by defining the dot product and specifying that two vectors in the plane are orthogonal if their dot product is zero.
Similarly, the construction of the norm of a vector is motivated by a desire to extend the intuitive notion of the length of a vector to higher-dimensional spaces. In Cartesian space, the norm of a vector is the square root of the vector dotted with itself. That is, ||x|| = √(x • x).
Many important results in linear algebra deal with collections of two or more orthogonal vectors. But often, it is easier to deal with vectors of unit length. That is, it often simplifies things to only consider vectors whose norm equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be given a special name. Two vectors which are orthogonal and of length 1 are said to be orthonormal.
Simple example
What does a pair of orthonormal vectors in 2-D Euclidean space look like?
Let u = (x1, y1) and v = (x2, y2).
Consider the restrictions on x1, x2, y1, y2 required to make u and v form an orthonormal pair.
From the orthogonality restriction, u • v = 0.
From the unit length restriction on u, ||u|| = 1.
From the unit length restriction on v, ||v|| = 1.
Expanding these terms gives 3 equations:
(1) x1x2 + y1y2 = 0
(2) √(x1² + y1²) = 1
(3) √(x2² + y2²) = 1
Converting from Cartesian to polar coordinates, and considering Equation (2) and Equation (3), immediately gives the result r1 = r2 = 1. In other words, requiring the vectors to be of unit length restricts the vectors to lie on the unit circle.
After substitution, Equation (1) becomes cos θ1 cos θ2 + sin θ1 sin θ2 = 0. Rearranging gives tan θ1 = −cot θ2. Using a trigonometric identity to convert the cotangent term gives tan θ1 = tan(θ2 + π/2), and hence θ1 = θ2 + π/2.
It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.
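A short numerical check of this picture in plain Python; the particular angle and the tolerance are arbitrary illustrative choices.

```python
import math

def is_orthonormal_pair(u, v, tol=1e-12):
    """Check that two 2-D vectors are unit length and mutually orthogonal."""
    dot = u[0] * v[0] + u[1] * v[1]
    return (abs(dot) < tol
            and abs(math.hypot(*u) - 1) < tol
            and abs(math.hypot(*v) - 1) < tol)

theta = 0.7  # any angle
u = (math.cos(theta), math.sin(theta))
v = (math.cos(theta + math.pi / 2), math.sin(theta + math.pi / 2))
print(is_orthonormal_pair(u, v))  # True: radii of the unit circle 90 degrees apart
```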
Definition
Let V be an inner-product space. A set of vectors
{u1, u2, ..., un, ...}
is called orthonormal if and only if
⟨ui, uj⟩ = δij
where δij is the Kronecker delta and ⟨·, ·⟩ is the inner product defined over V.
Significance
Orthonormal sets are not especially significant on their own. However, they display certain features that make them fundamental in exploring the notion of diagonalizability of certain operators on vector spaces.
Properties
Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.
Theorem. If {e1, e2, ..., en} is an orthonormal list of vectors, then
||a1e1 + a2e2 + ... + anen||² = |a1|² + |a2|² + ... + |an|²
for all scalars a1, a2, ..., an.
Theorem. Every orthonormal list of vectors is linearly independent.
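A standard one-line argument for this theorem (a sketch, not quoted from the text): take the inner product of a vanishing linear combination with each vector in the list.

```latex
% Sketch: orthonormal lists are linearly independent.
% Suppose a_1 e_1 + a_2 e_2 + \dots + a_n e_n = 0. Then for each j,
0 = \left\langle \sum_{i=1}^{n} a_i e_i ,\; e_j \right\rangle
  = \sum_{i=1}^{n} a_i \langle e_i , e_j \rangle
  = \sum_{i=1}^{n} a_i \,\delta_{ij}
  = a_j ,
% so every coefficient vanishes and the list is linearly independent.
```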
Existence
Gram-Schmidt theorem. If {v1, v2,...,vn} is a linearly independent list of vectors in an inner-product space V, then there exists an orthonormal list {e1, e2,...,en} of vectors in V such that span(e1, e2,...,en) = span(v1, v2,...,vn).
Proof of the Gram-Schmidt theorem is constructive, and discussed at length elsewhere. The Gram-Schmidt theorem, together with the axiom of choice, guarantees that every vector space admits an orthonormal basis. This is possibly the most significant use of orthonormality, as this fact permits operators on inner-product spaces to be discussed in terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized by the Spectral Theorem.
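A compact sketch of the classical Gram-Schmidt procedure, assuming NumPy; the tolerance used to detect linear dependence is an arbitrary choice.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a linearly independent list of vectors (given as rows)
    using the classical Gram-Schmidt procedure."""
    basis = []
    for v in np.asarray(vectors, dtype=float):
        # Remove the components of v along the already-built orthonormal vectors.
        w = v - sum(np.dot(v, e) * e for e in basis)
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("input vectors are not linearly independent")
        basis.append(w / norm)
    return np.array(basis)

E = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(np.allclose(E @ E.T, np.eye(3)))  # True: the rows are orthonormal
```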
Examples
Standard basis
The standard basis for the coordinate space Fn is {e1, e2,...,en}, where
e1 = (1, 0, ..., 0)
e2 = (0, 1, ..., 0)
⋮
en = (0, 0, ..., 1)
Any two vectors ei, ej where i≠j are orthogonal, and all vectors are clearly of unit length. So {e1, e2,...,en} forms an orthonormal basis.
Real-valued functions
When referring to real-valued functions, usually the L² inner product is assumed unless otherwise stated. Two functions φm(x) and φn(x) are orthonormal over the interval [a, b] if
⟨φm, φn⟩ = ∫ φm(x) φn(x) dx = δmn,
where the integral is taken over [a, b].
Fourier series
The Fourier series is a method of expressing a periodic function in terms of sinusoidal basis functions.
Taking C[−π,π] to be the space of all real-valued functions continuous on the interval [−π,π] and taking the inner product to be
⟨f, g⟩ = ∫ f(x) g(x) dx, with the integral taken over [−π, π],
it can be shown that
{1/√(2π), sin(x)/√π, sin(2x)/√π, ..., sin(nx)/√π, ..., cos(x)/√π, cos(2x)/√π, ..., cos(nx)/√π, ...}
forms an orthonormal set.
However, this is of little consequence, because C[−π,π] is infinite-dimensional, and a finite set of vectors cannot span it. But, removing the restriction that n be finite makes the set dense in C[−π,π] and therefore an orthonormal basis of C[−π,π].
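A numerical spot-check of the orthonormality of the first few members of this set, using a simple trapezoidal approximation of the inner product; the grid size and tolerance are arbitrary illustrative choices.

```python
import numpy as np

# Spot-check that a few members of the trigonometric family above are
# orthonormal under <f, g> = integral of f(x) g(x) over [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200_001)
dx = x[1] - x[0]

def inner(f, g):
    vals = f(x) * g(x)
    return dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

basis = [
    lambda t: np.full_like(t, 1.0 / np.sqrt(2 * np.pi)),
    lambda t: np.sin(t) / np.sqrt(np.pi),
    lambda t: np.cos(t) / np.sqrt(np.pi),
    lambda t: np.sin(2 * t) / np.sqrt(np.pi),
    lambda t: np.cos(2 * t) / np.sqrt(np.pi),
]

gram = np.array([[inner(f, g) for g in basis] for f in basis])
print(np.allclose(gram, np.eye(len(basis)), atol=1e-8))  # True
```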
See also
Orthogonalization
Orthonormal function system
Sources
Linear algebra
Functional analysis | Orthonormality | Mathematics | 1,271 |
19,947,750 | https://en.wikipedia.org/wiki/Chernobyl%20Forum | The Chernobyl Forum is the name of a group of UN agencies, founded on 3–5 February 2003 at the IAEA (International Atomic Energy Agency) Headquarters in Vienna, to scientifically assess the health effects and environmental consequences of the Chernobyl accident and to issue factual, authoritative reports on its environmental and health effects.
Participants
Eight UN organizations are involved in the Chernobyl Forum:
the IAEA (International Atomic Energy Agency)
the FAO (Food and Agriculture Organization)
the OCHA (United Nations Office for the Coordination of Humanitarian Affairs)
the UNDP (United Nations Development Programme)
the UNEP (United Nations Environment Programme)
the UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation)
the WHO (World Health Organization)
the World Bank.
The Chernobyl Forum also comprises the governments of Belarus, Russia and Ukraine.
Publications
On 5 September 2005 the Chernobyl Forum released a comprehensive scientific assessment report on the consequences of the Chernobyl accident, titled "Chernobyl's Legacy: Health, Environmental and Socio-Economic Impacts". A revised edition was released in March 2006, together with the Forum's report "Recommendations to the Governments of Belarus, the Russian Federation and Ukraine".
The report covers environmental radiation, human health and socio-economic aspects. About 100 recognized experts from many countries, including Belarus, Russia and Ukraine, have contributed. The report claims to be "the most comprehensive evaluation of the accident's consequences to date" and to represent "a consensus view of the eight organizations of the UN family according to their competences and of the three affected countries".
On the death toll of the accident, the report states that 28 emergency workers died from acute radiation syndrome and 15 patients died from thyroid cancer. It roughly estimates that cancers deaths caused by the Chernobyl accident might eventually reach a total of up to 4,000 among the 600,000 cleanup workers or "liquidators" who received the greatest exposures.
One paper estimates an additional 5,000 deaths from the Chernobyl accident among the exposed population of around 6 million living in the contaminated areas of Ukraine, Belarus and Russia. However, the paper notes that no significant increased cancer risk apart from thyroid cancer has been scientifically demonstrated to date; this prediction is only an indication of the possible impact of the accident, and should not be taken at face value.
The report quotes 4,000 cases of thyroid cancer resulting from the accident, mainly in children and adolescents at the time of the accident; however the survival rate is almost 99%. Since most emergency workers and people living in contaminated areas received relatively low radiation doses, comparable to natural background levels, no decrease in fertility or increase in congenital malformations have been observed.
The report indicates that many people were traumatised by the accident and the rapid relocation that followed; they remain anxious about their health, perceiving themselves as helpless victims rather than survivors, mainly because of the lack of credible information about the effects of the accident. The Chernobyl Forum recommends that relocated people be helped to normalise their lives and better access social services and employment.
The report also concluded that a greater risk than the long-term effects of radiation exposure is the risk to mental health caused by exaggerated fears about the effects of radiation:
" ... The designation of the affected population as “victims” rather than “survivors” has led them to perceive themselves as helpless, weak and lacking control over their future. This, in turn, has led either to over cautious behavior and exaggerated health concerns, or to reckless conduct, such as consumption of mushrooms, berries and game from areas still designated as highly contaminated, overuse of alcohol and tobacco, and unprotected promiscuous sexual activity."
See also
Chernobyl disaster
Chernobyl disaster effects
List of Chernobyl-related articles
Nuclear power debate
References
External links
Chernobyl Forum page on the IAEA website
Chernobyl Forum report "Chernobyl's legacy: Health, Environmental and Socio-Economic Impacts"
Scientific organizations established in 2003
Aftermath of the Chernobyl disaster
Energy in Ukraine
Health in Ukraine
1986 in the Soviet Union | Chernobyl Forum | Technology | 843 |
38,255,399 | https://en.wikipedia.org/wiki/Exposome | The exposome is a concept used to describe environmental exposures that an individual encounters throughout life, and how these exposures impact biology and health. It encompasses both external and internal factors, including chemical, physical, biological, and social factors that may influence human health.
The study of the exposome has become a useful tool in understanding the interplay between genetics and environmental factors in the development of diseases, with a particular focus on chronic conditions. The concept has been widely applied in fields such as epidemiology, toxicology, and public health, among others, and has led to significant advances in our understanding of disease etiology and prevention.
By considering the cumulative effect of multiple exposures, it provides a holistic approach to the study of gene-environment interactions, allowing for a more accurate assessment of disease risk and the identification of potential intervention strategies.
Environmental exposures can have a significant impact on an individual's health. Exposure to air pollution, for example, has been linked to an increased risk of respiratory disease, heart disease, and even premature death. Similarly, exposure to certain chemicals in consumer products has been linked to an increased risk of cancer and other health problems. In addition to external factors, the internal exposome can also influence an individual's health outcomes. For example, genetics can play a role in how an individual's body processes and responds to environmental exposures, while the gut microbiome can affect an individual's immune system and overall health. As our understanding of the exposome continues to evolve, it is likely that we will gain new insights into the complex interplay between our environment and our health.
History
The term "exposome" was first coined in 2005 by Dr. Christopher Wild, then-director of the International Agency for Research on Cancer (IARC), in a seminal paper published in Cancer Epidemiology, Biomarkers & Prevention. Wild's concept was initially proposed to complement the human genome, as he recognized the limitations of genetic research in explaining the etiology of chronic diseases. By suggesting a systematic approach to measuring environmental exposures, the exposome aimed to fill this knowledge gap.
Various definitions of the exposome have been proposed over time, but most emphasize three main components: the external exposome, the internal exposome, and the biological response. The external exposome includes general external factors, such as air pollution, diet, and socioeconomic factors, as well as specific external factors like chemicals and radiation. The internal exposome comprises endogenous factors, such as hormones, inflammation, oxidative stress, and gut microbiota. Finally, the biological response refers to the complex interactions between the external and internal exposome factors and their influence on an individual's physiology and health
Significance
The field of exposome research is relatively new, rapidly evolving, and is still being developed and refined by researchers in a variety of fields, including epidemiology, environmental health, and genomics. Understanding the exposome is important because it can help researchers identify the environmental factors that contribute to disease, and develop strategies for prevention and treatment.
The exposome concept presents several challenges for researchers. One of the main challenges is the complexity and diversity of exposures that individuals experience throughout their lifetime. There are thousands of chemicals in the environment, and individuals are exposed to different combinations of chemicals depending on their location, occupation, and lifestyles. Besides this, a lack of standardized methods for measuring exposures is also challenging. Traditional approaches to measuring environmental exposures have relied on individual exposure assessments, which are often expensive and time-consuming. New technologies, such as high-throughput methods for measuring multiple exposures simultaneously, are being developed to address this challenge.
Understanding exposomes has significant implications for public health and the development of more effective strategies for prevention and treatment of disease. For example, if research shows that exposure to a certain chemical is linked to an increased risk of cancer, policymakers can take steps to regulate or ban the use of that chemical.
In addition to informing public health policies, the study of the exposome can also help individuals make more informed choices about their own health. By understanding the environmental factors that contribute to disease, individuals can take steps to reduce their exposure to harmful substances and improve their overall health.
The exposome concept holds great promise for advancing our understanding of the complex interplay between environmental exposures and human health. As researchers continue to refine exposure assessment methods, identify novel biomarkers, and develop sophisticated computational approaches, the exposome framework is poised to revolutionize the fields of epidemiology, toxicology, and public health.
Global initiatives
There have been several research initiatives aimed at better understanding the exposome. One such initiative was "Enhanced exposure assessment and omic profiling for high priority environmental exposures in Europe", a program by the Imperial College of Science, Technology and Medicine in the UK. A current initiative is EXIMIOUS, a five-year Research and Innovation Action funded by the European Union's Horizon 2020 program, aimed at introducing a new approach to mapping exposure-induced immune effects by combining exposomics and immunomics in a unique toolbox. Another is the National Institutes of Health's Environmental Influences on Child Health Outcomes (ECHO) program, which is studying the impact of these factors on children's health. There is also the Human Exposome Project, a collaborative effort among researchers from around the world to develop tools and techniques to measure and analyze the exposome.
Furthermore, several European countries, including Sweden, France, Austria, and Czechia, have been actively involved in establishing dedicated research infrastructures for exposomics. In Sweden, the National Facility for Exposomics was approved in November 2020 and is hosted by the University of Stockholm. The facility is currently operational in Solna, providing resources and expertise for exposomics research. France has also established a dedicated research infrastructure, France Exposome, a new National Research Infrastructure that focuses on environmental health. It has been included in the 2021 roadmap for the research infrastructure of the Ministry of Higher Education and Research, indicating its significance in the country's research landscape.
Additionally, the Environmental Exposure Assessment Research Infrastructure (EIRENE) is a collaborative effort consisting of 17 National Nodes representing around 50 institutions with complementary expertise. EIRENE aims to fill the gap in the European infrastructural landscape and pioneer the first EU infrastructure on the human exposome. The consortium has a geographically balanced network, covering Northern (Finland, Iceland, Norway, Sweden), Western (Belgium, France, Germany, Netherlands, UK), Southern (Greece, Italy, Slovenia, Spain), and Central and Eastern (Austria, Czech Republic, Slovakia) Europe, as well as the US. The EIRENE RI team consists of scientists leading exposome research on an international level.
These initiatives reflect the growing recognition of the importance of exposomics research and the commitment of these countries to advancing the field. The establishment of dedicated research infrastructures ensures the availability of resources and expertise required to uncover crucial insights into the impact of exposomes on human health.
Methodologies
The study of the exposome requires a multi-disciplinary approach that combines advances in exposure assessment, bioinformatics, and systems biology. As such, researchers have developed a range of key methodologies to measure and analyze the exposome – from exposure assessment techniques, analytical tools, to computational approaches.
These methods are designed to capture and analyze the diverse and dynamic nature of environmental exposures across a person's lifespan.
Exposure assessment
The assessment of environmental exposures is a critical aspect of exposome research. Traditional methods, such as questionnaires and environmental monitoring, provide useful information on external factors but may not adequately capture the complexity and variability of exposures over time.
Consequently, researchers have increasingly turned to personal monitoring approaches, such as wearable sensors and smartphone applications, which can collect real-time data on an individual's exposure to various environmental factors, such as air pollution, noise, and ultraviolet radiation. The data collected by these devices can help researchers understand how personal behaviors and microenvironments contribute to overall exposome profiles.
Biomarkers
Biomarkers (measurable indicators of biological processes or conditions) play an essential role in characterizing the internal exposome and biological response. This approach involves the measurement of chemicals or their metabolites in biological specimens such as blood, urine, or tissues. Advances in high-throughput -omics technologies such as genomics, transcriptomics, proteomics, and metabolomics, have revolutionized our ability to measure thousands of biomarkers simultaneously. This can provide a detailed snapshot of an individual's molecular profile at a given time, as well as a comprehensive view of the individual's biological response to environmental exposures. These technologies yield a direct and quantitative assessment of an individual's exposure to specific compounds and have been increasingly incorporated into exposome research and epidemiological studies.
Geographic Information Systems (GIS)
GIS tools can be used to estimate an individual's exposure to environmental factors based on spatial data, such as air pollution or proximity to hazardous waste sites. GIS-based exposure assessment has been applied in numerous epidemiological studies to investigate the relationship between environmental exposures and health outcomes.
Computational approaches
The vast amounts of data generated by exposome research require advanced computational methods for storage, analysis, and interpretation. Machine learning and other data mining techniques have emerged as valuable tools for identifying patterns and relationships within complex exposome data sets. Furthermore, systems biology approaches, which integrate data from multiple -omics platforms can help elucidate the complex interactions between exposures and biological pathways that contribute to disease development.
Applications
Epidemiology
Exposome research has had a significant impact on the field of epidemiology, providing new insights into the complex relationships between environmental exposures, genetic factors, and human health. By comprehensively assessing the totality of exposures, epidemiologists can better understand the etiology of chronic diseases, such as cancer, cardiovascular disease, and neurodegenerative disorders, and identify modifiable risk factors that may be targets for intervention.
Large-scale exposome projects, such as the Human Early-Life Exposome (HELIX) project and the European Exposome Cluster, have been established to investigate these relationships and generate new knowledge on disease etiology and prevention.
Toxicology
The exposome has also influenced the field of toxicology, leading to the development of new methods for assessing the cumulative effects of multiple environmental exposures on human health. By integrating exposure data with molecular profiling techniques, toxicologists can better understand the mechanisms through which environmental chemicals and other factors contribute to adverse health outcomes. This knowledge can inform the development of more effective strategies for chemical risk assessment and regulation.
Public Health
Public health research and practice have greatly benefited from the insights gained through exposome research. By elucidating the complex interactions between environmental exposures and human health, the exposome framework can inform the design of targeted interventions to reduce disease risk and promote health equity.
Moreover, the development of exposome-based tools, such as biomonitoring and personal exposure monitoring devices, can help public health practitioners better track population exposures and evaluate the effectiveness of interventions.
Challenges and future directions
Despite significant advances in exposome research, several challenges remain, including the development of more accurate exposure assessment techniques, the identification of novel biomarkers, and the management of large-scale and complex data sets.
Exposure assessment
One of the main challenges in exposome research is the accurate assessment of exposures across an individual's lifetime. While recent technological advancements have improved our ability to measure environmental exposures in real-time, there is still a need for methods that can retrospectively assess historical exposures, particularly in the context of chronic disease research.
Biomarker identification
Another challenge lies in the identification of novel and informative biomarkers that can provide insights into the biological pathways underlying exposure-disease relationships. While omics technologies have greatly expanded the number of measurable biomarkers, researchers must still determine which of these markers are most relevant to specific health outcomes and how they may be affected by various exposures.
Data management
Exposome research generates vast amounts of complex data, posing challenges related to data storage, analysis, and interpretation. As the field continues to grow, the development of standardized data formats, data sharing platforms, and advanced computational methods for data integration will be crucial to maximizing the potential of exposome research.
See also
ASTDR
Dose-response relationship
Environment and health
The dose makes the poison
Toxicokinetics
References
External links
Exposure Biology and the Exposome
Exposome and Exposomics
Precision Environmental Health’s role in preventing disease
Epidemiology
Public health
Omics
Toxicology | Exposome | Biology,Environmental_science | 2,594 |
44,374,464 | https://en.wikipedia.org/wiki/Kadison%20transitivity%20theorem | In mathematics, the Kadison transitivity theorem is a result in the theory of C*-algebras that, in effect, asserts the equivalence of the notions of topological irreducibility and algebraic irreducibility of representations of C*-algebras. It implies that, for irreducible representations of C*-algebras, the only non-zero linear invariant subspace is the whole space.
The theorem, proved by Richard Kadison, was surprising as a priori there is no reason to believe that all topologically irreducible representations are also algebraically irreducible.
Statement
A family F of bounded operators on a Hilbert space H is said to act topologically irreducibly when {0} and H are the only closed subspaces of H stable under F. The family F is said to act algebraically irreducibly if {0} and H are the only linear manifolds in H stable under F.
Theorem. If the C*-algebra A acts topologically irreducibly on the Hilbert space H, {y1, ..., yn} is a set of vectors in H, and {x1, ..., xn} is a linearly independent set of vectors in H, then there is an operator T in A such that Txj = yj for each j. If Bxj = yj for some self-adjoint operator B, then T can be chosen to be self-adjoint.
Corollary. If the C*-algebra A acts topologically irreducibly on the Hilbert space H, then it acts algebraically irreducibly.
References
Kadison, R. V.; Ringrose, J. R., Fundamentals of the Theory of Operator Algebras, Vol. I: Elementary Theory.
Operator algebras | Kadison transitivity theorem | Mathematics | 314 |
78,728,486 | https://en.wikipedia.org/wiki/CMF%20by%20Nothing | CMF by Nothing is a design-focused technology sub-brand launched by Nothing in 2023. The brand specializes in creating consumer technology products with an emphasis on transparent design aesthetics and sustainable materials.
History
CMF by Nothing was established as a sub-brand of Nothing, a consumer technology company founded by Carl Pei in 2020. The brand aims to make design-focused technology accessible to a broader audience while maintaining Nothing's distinctive transparent design language.
Brand philosophy
The brand's name "CMF" stands for Color, Material, and Finish, reflecting its focus on design fundamentals. CMF by Nothing aims to democratize good design by offering products at more accessible price points while maintaining high design standards. The brand emphasizes the use of sustainable materials and transparent design elements, aligning with Nothing's core design principles.
Products
CMF by Nothing launched its initial product lineup in 2023, which included:
CMF Buds Pro: Wireless earbuds featuring Active Noise Cancellation.
CMF Watch Pro: A smartwatch with health and fitness tracking capabilities.
CMF Power 65W GaN: A compact charging adapter using Gallium Nitride technology.
All products follow a consistent design language emphasizing clean lines, bold colors, and transparent elements while maintaining affordability.
Design approach
The brand's design philosophy centers on three core elements:
Color: Using bold, distinctive color combinations.
Material: Focusing on quality and sustainable material choices.
Finish: Emphasizing attention to detail in the final product appearance.
This approach aims to create products that stand out in the market while remaining accessible to a broader consumer base.
Corporate structure
CMF by Nothing operates as a sub-brand of Nothing, sharing the parent company's resources and design expertise while maintaining its own distinct identity and product line. The brand benefits from Nothing's established supply chain and manufacturing partnerships while focusing on a different market segment.
References
External links
Official website
Consumer electronics brands
Technology companies established in 2023
2023 establishments in Hong Kong
Consumer electronics
Electronics companies of Hong Kong
Design companies
Industrial design
Hong Kong brands
Wearable devices
Mobile phone manufacturers
Audio equipment companies | CMF by Nothing | Engineering | 419 |
45,460,226 | https://en.wikipedia.org/wiki/Landmark%20Tower%20%28Fort%20Worth%2C%20Texas%29 | The Landmark Tower was a 30-story skyscraper located at 200 West 7th Street in Downtown Fort Worth, Texas. Designed by Fort Worth architecture firm Preston M. Geren & Associates, Landmark Tower was the tallest building in the city from its opening in 1957 until the completion of the Fort Worth National Bank Tower in 1974. After being abandoned in 1990, the tower stood vacant for more than 15 years until it was demolished in 2006. It is one of the tallest buildings ever to be demolished.
Construction
Ground was broken for the original brick and granite lower portion on June 26, 1950; the building was originally designed as a 28-story brick tower with a red granite base. Although unfinished, it opened in 1952 at four stories tall, with one floor of the brick façade above the granite base, and served as the bank lobby. The building was built as the headquarters of the Continental National Bank of Fort Worth, and ground was broken for the tower in 1952. However, the building only reached the fourth floor before construction was halted due to adverse economic conditions. As the economy recovered, construction resumed in 1956 and 26 additional floors were added. The building opened in 1957.
The building was redesigned to support the rotating digital clock, which included cladding its conventional steel frame with an aluminum curtain wall instead of brick. At the time of its completion, the building was the tallest in the city, surpassing 714 Main, built in 1921.
Lifespan
When the building opened in 1957, it included a four-sided 32-foot tall revolving digital clock and sign at the roof. Costing $196,000 and weighing 77 tons, it was the largest revolving digital clock and sign in the world at the time. It was once listed in the Guinness Book of World Records. As it was not included in the original designs, the installation required that the entire building be strengthened to support its weight. The clock spelled the letters "CNB", the initials for the Continental National Bank illuminated by green neon lights on two sides, and the time in white flood lights on the other two sides. As well as the addition of the clock, two extra floors were also added to the building.
In addition, in 1971, a 90-foot long skywalk was built from the building's northeast side across Houston Street to provide easy access to a neighboring parking garage. Although the machinery to rotate the clock stopped working in 1978, it was secured in place rather than being repaired. The clock continued to display the time until 1991.
The Continental National Bank moved their headquarters to the Continental Plaza (Now known as 777 Main Street) building in 1982.
In the mid-1980s, the building was purchased by Empire of America Federal Savings Bank. The "CNB" (Continental National Bank) letters on the four-sided digital sign and clock were dismantled in 1986, and replaced with "the Big E" Empire of America signage.
Abandonment and demolition
Following Empire of America's seizure by the United States government, the building was abandoned in 1990 and stood vacant for the next 16 years. The building was hit by an F3 tornado on March 28, 2000, and suffered significant damage. The four-sided digital clock and sign was removed from April 15–21, 2000, by orders of the City of Fort Worth for safety reasons. The skywalk was also removed during the same time. The building went through several owners through the years, and plans were to convert the skyscraper into a luxury apartment and condominium high-rise in a similar fate to The Tower, however the project went bankrupt.
The building was purchased under foreclosure by XTO Energy in January 2004. The company looked into all alternatives to restore, refurbish, and construct around the building. After determining that the estimated $62 million cost to refurbish the building was prohibitive, the company decided to raze the building to use the site for parking space and possibly a new building in the future. Demolition permits were discreetly acquired in October 2005, and the company contracted Midwest Wrecking Company in November 2005 to perform the demolition. The brick and red granite bank lobby was manually demolished or gutted, and after four months of preparation, the building was demolished by controlled explosive implosion on March 18, 2006, at 7:40 AM. The demolition used explosives and required 15 city blocks to be evacuated. At the time, XTO Energy discussed plans to eventually build a new 50-story skyscraper in its place. The envisioned skyscraper was named the XTO Energy Headquarters.
From 2008 to 2016, the site was occupied by a simple parking lot. In 2016 construction began on "Cowtown Place", a 6 level parking garage to replace the building site. The Cowtown Place parking garage was completed in 2017.
See also
List of tallest voluntarily demolished buildings
References
External links
Video of implosion from a nearby highrise
Implosion from near ground level
1957 establishments in Texas
2006 disestablishments in Texas
Buildings and structures in Fort Worth, Texas
Buildings and structures demolished by controlled implosion
Demolished buildings and structures in Texas
Skyscraper office buildings in Fort Worth, Texas
Buildings and structures demolished in 2006
Former skyscrapers
Office buildings completed in 1957 | Landmark Tower (Fort Worth, Texas) | Engineering | 1,064 |
8,132,847 | https://en.wikipedia.org/wiki/Grand%20mean | The grand mean or pooled mean is the average of the means of several subsamples, as long as the subsamples have the same number of data points. For example, consider several lots, each containing several items. The items from each lot are sampled for a measure of some variable and the means of the measurements from each lot are computed. The mean of the measures from each lot constitutes the subsample mean. The mean of these subsample means is then the grand mean.
Example
Suppose there are three groups of numbers: group A has 2, 6, 7, 11, 4; group B has 4, 6, 8, 14, 8; group C has 8, 7, 4, 1, 5.
The mean of group A = (2+6+7+11+4)/5 = 6,
The mean of group B = (4+6+8+14+8)/5 = 8,
The mean of group C = (8+7+4+1+5)/5 = 5,
Therefore, the grand mean of all numbers = (6+8+5)/3 = 6.333.
Application
Suppose one wishes to determine which states in America have the tallest men. To do so, one measures the height of a suitably sized sample of men in each state. Next, one calculates the means of height for each state, and then the grand mean (the mean of the state means) as well as the corresponding standard deviation of the state means. Now, one has the necessary information for a preliminary determination of which states have abnormally tall or short men by comparing the means of each state to the grand mean ± some multiple of the standard deviation.
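A minimal sketch of that screening procedure (state names and heights below are fabricated for illustration):
import statistics

# Hypothetical mean heights (cm) per state, each computed from a sample of men
state_means = {"A": 176.2, "B": 178.9, "C": 175.1, "D": 181.5, "E": 177.0}

grand_mean = statistics.mean(state_means.values())     # mean of the state means
sd_of_means = statistics.stdev(state_means.values())   # standard deviation of the state means

k = 1.0  # chosen multiple of the standard deviation
for state, m in state_means.items():
    if abs(m - grand_mean) > k * sd_of_means:
        print(state, "is unusually tall or short relative to the grand mean")
# With these made-up numbers, states C and D are flagged.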
In ANOVA, there is a similar usage of grand mean to calculate sum of squares (SSQ), a measurement of variation. The total variation is defined as the sum of squared differences between each score and the grand mean (designated as GM), given by the equation SS_total = Σ (X − GM)^2, where the sum runs over every score X in all groups.
Discussion
The term grand mean is used for two different concepts that should not be confused, namely, the overall mean and the mean of means. The overall mean (in a grouped data set) is equal to the sample mean, namely, the sum of all observations across all G groups divided by the total number of observations. The mean of means is literally the mean of the G (g = 1, ..., G) group means, namely, (1/G) times the sum of the group means. If the sample sizes across the G groups are equal, then the two statistics coincide.
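A minimal sketch (plain Python, with made-up group data) illustrating the distinction when group sizes are unequal:
# Hypothetical groups with unequal sizes
groups = [
    [2, 6, 7, 11, 4],      # n = 5, group mean 6
    [4, 6, 8, 14, 8, 20],  # n = 6, group mean 10
]

# Overall mean: pool every observation
all_values = [x for g in groups for x in g]
overall_mean = sum(all_values) / len(all_values)

# Mean of means: average the group means
group_means = [sum(g) / len(g) for g in groups]
mean_of_means = sum(group_means) / len(group_means)

print(overall_mean)   # about 8.18 (each observation counts equally, so larger groups weigh more)
print(mean_of_means)  # 8.0 (each group counts equally)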
See also
Pooled variance
References
Descriptive statistics
Means | Grand mean | Physics,Mathematics | 509 |
631,188 | https://en.wikipedia.org/wiki/Halton%20sequence | In statistics, Halton sequences are sequences used to generate points in space for numerical methods such as Monte Carlo simulations. Although these sequences are deterministic, they are of low discrepancy, that is, appear to be random for many purposes. They were first introduced in 1960 and are an example of a quasi-random number sequence. They generalize the one-dimensional van der Corput sequences.
Example of Halton sequence used to generate points in (0, 1) × (0, 1) in R2
The Halton sequence is constructed according to a deterministic method that uses coprime numbers as its bases. As a simple example, let's take one dimension of the two-dimensional Halton sequence to be based on 2 and the other dimension on 3. To generate the sequence for 2, we start by dividing the interval (0, 1) in half, then in fourths, eighths, etc., which generates
1/2,
1/4, 3/4,
1/8, 5/8, 3/8, 7/8,
1/16, 9/16,...
Equivalently, the nth number of this sequence is the number n written in binary representation, inverted, and written after the decimal point. This is true for any base. As an example, to find the sixth element of the above sequence, we'd write 6 = 1*2^2 + 1*2^1 + 0*2^0 = 110 in binary, which can be inverted and placed after the decimal point to give 0.011 in binary = 0*2^-1 + 1*2^-2 + 1*2^-3 = 3/8. So the sequence above is the same as
0.1, 0.01, 0.11, 0.001, 0.101, 0.011, 0.111, 0.0001, 0.1001,...
To generate the sequence for 3 for the other dimension, we divide the interval (0, 1) in thirds, then ninths, twenty-sevenths, etc., which generates
1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9, 1/27,...
When we pair them up, we get a sequence of points in a unit square:
(1/2, 1/3), (1/4, 2/3), (3/4, 1/9), (1/8, 4/9), (5/8, 7/9), (3/8, 2/9), (7/8, 5/9), (1/16, 8/9), (9/16, 1/27).
Even though standard Halton sequences perform very well in low dimensions, correlation problems have been noted between sequences generated from higher primes. For example, if we started with the primes 17 and 19, the first 16 pairs of points: (1/17, 1/19), (2/17, 2/19), (3/17, 3/19) ... (16/17, 16/19) would have perfect linear correlation. To avoid this, it is common to drop the first 20 entries, or some other predetermined quantity depending on the primes chosen. Several other methods have also been proposed. One of the most prominent solutions is the scrambled Halton sequence, which uses permutations of the coefficients used in the construction of the standard sequence. Another solution is the leaped Halton, which skips points in the standard sequence. Using, e.g., only every 409th point (other prime numbers not used in the Halton core sequence are also possible) can achieve significant improvements.
Implementation
In pseudocode:
algorithm Halton-Sequence is
    inputs: index i
            base b
    output: result r

    f ← 1
    r ← 0
    while i > 0 do
        f ← f / b
        r ← r + f × (i mod b)
        i ← ⌊i / b⌋
    return r
An alternative implementation that produces subsequent numbers of a Halton sequence for base b is given in the following generator function (in Python). This algorithm uses only integer numbers internally, which makes it robust against round-off errors.
def halton_sequence(b):
"""Generator function for Halton sequence."""
n, d = 0, 1
while True:
x = d - n
if x == 1:
n = 1
d *= b
else:
y = d // b
while x <= y:
y //= b
n = (b + 1) * y - x
yield n / d
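As an illustrative usage sketch (not part of the original article), the generator above reproduces the base-2 values listed earlier:
from itertools import islice

gen = halton_sequence(2)          # base-2 Halton sequence
first_six = list(islice(gen, 6))  # take the first six terms
print(first_six)  # [0.5, 0.25, 0.75, 0.125, 0.625, 0.375], i.e. 1/2, 1/4, 3/4, 1/8, 5/8, 3/8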
See also
Constructions of low-discrepancy sequences
References
Low-discrepancy sequences
Sequences and series
Articles with example pseudocode
Articles with example Python (programming language) code | Halton sequence | Mathematics | 836 |
21,685,565 | https://en.wikipedia.org/wiki/Karlsruhe%20Nuclide%20Chart | The Karlsruhe Nuclide Chart is a widespread table of nuclides in print.
Characteristics
It is a two-dimensional graphical representation in the Segrè-arrangement with the neutron number N on the abscissa and the proton number Z on the ordinate. Each nuclide is represented at the intersection of its respective neutron and proton number by a small square box with the chemical symbol and the nucleon number A. By columnar subdivision of such a field, in addition to ground states also nuclear isomers can be shown. The coloring of a field (segmented if necessary) shows in addition to the existing text entries the observed types of radioactive decay of the nuclide and a rough classification of their relative shares: stable, nonradioactive nuclides completely black, primordial radionuclides partially black, proton emission orange, alpha decay yellow, beta plus decay/electron capture red, isomeric transition (gamma decay, internal conversion) white, beta minus decay blue, spontaneous fission green, cluster emission violet, neutron emission light blue. For each radionuclide its field includes (if known) information about its half-life and essential energies of the emitted radiation, for stable nuclides and primordial radionuclides there are data on mole fraction abundances in the natural isotope mixture of the corresponding chemical element. Furthermore, for many nuclides cross sections for nuclear reactions with thermal neutrons are quoted, usually for the (n, γ)-reaction (neutron capture), partly fission cross sections for the induced nuclear fission and cross sections for the (n, α)-reaction or (n, p)-reaction. For the chemical elements cross sections and standard atomic weights (both averaged over natural isotopic composition) are specified (the relative atomic masses partially as an interval to reflect the variability of the composition of the element's natural isotope mixture). For the nuclear fission of 235U and 239Pu with thermal neutrons, percentage isobaric chain yields of fission products are listed.
History, editions
The first printed edition of the Karlsruhe Nuclide Chart of 1958 in the form of a wall chart was created by Walter Seelmann-Eggebert and his assistant Gerda Pfennig. Walter Seelmann-Eggebert was director of the Radiochemistry Institute in the 1956 founded "Kernreaktor Bau- und Betriebsgesellschaft mbH" in Karlsruhe, Germany (a predecessor institution of the later "(Kern-)Forschungszentrum Karlsruhe", nowadays Karlsruhe Institute of Technology) and appointed professor of radiochemistry at the Karlsruhe Technical University. Radiochemical isotope courses were held at the institute, and in the context of these teaching courses the Karlsruhe Nuclide Chart arose, which was intended to be a well-structured overview of the essential properties of the nuclides already known at that time.
In the following decades, the Karlsruhe Nuclide Chart was published and revised several times. In addition to other co-authors, Seelmann-Eggebert († 1988) was involved up to the 5th edition in 1981, Pfennig († 2017) up to the 9th edition in 2015. In 2006, the management of the Karlsruhe Nuclide Chart changed over from Forschungszentrum Karlsruhe to the Institute for Transuranium Elements (ITU) of the Joint Research Centre (JRC) of the European Commission (EC), then in 2012 to Nucleonica GmbH, a spin-off company of the JRC-ITU.
The following summary table regarding the individual editions of the Karlsruhe Nuclide Chart also expresses the scientific progress in the field of discovery/exploration of the nuclides and new chemical elements.
? = Sources incongruent or explicit/implicit numerical data missing or inclusion of nuclear isomers in figures unclear.
Versions
The Karlsruhe Nuclide Chart is primarily published as a fold-out chart (size A4) or as a wall chart (size 0.96 m × 1.40 m). There are also larger sizes (roll map, auditorium chart and "carpet"). Since 2014, an internet-based version "Karlsruhe Nuclide Chart Online (KNCO)" with regular updates is offered via the Nucleonica nuclear science internet portal. To support nuclear education, a simplified school version, the KNClight has been developed.
The largest known version of the Karlsruhe Nuclide Chart is located in the Reactor Institute Delft, being 13 m × 19 m in size.
References
Tables of nuclides | Karlsruhe Nuclide Chart | Chemistry | 933 |
73,919,334 | https://en.wikipedia.org/wiki/Chakr%20Innovation | Chakr Innovation is a cleantech startup based in India specializing in material science technology. The company was founded by graduates from IIT Delhi and works in the fields of air and environmental protection. Chakr Innovation is the first company in India to receive type approval certification for their retrofit emission control device (RECD) from labs approved by the Central Pollution Control Board (CPCB). The company has filed more than 15 patents, and its work has been recognized globally by organizations including the United Nations, WWF, and Forbes.
History
Chakr Innovation was founded in 2016 by Kushagra Srivastava, Arpit Dhupar and Bharti Singhla - graduates from IIT Delhi, to reduce pollution with the help of innovation and technology. The idea began with a group of friends having sugarcane juice at a shop with a wall turned black because of soot particles coming out of the diesel generator exhaust used for crushing sugarcane.
Chakr Innovation launched Chakr Shield in 2017, one year after its incorporation. The device could reduce the particulate matter 2.5 (PM 2.5) emission from a diesel generator by up to 90%. In 2022, the company introduced a dual fuel kit that would allow a diesel generator to run on fossil fuel and natural gas simultaneously in a 30 to 70 ratio.
Products
Chakr Shield is a patented Retrofit Emission Control Device (RECD) by Chakr Innovation. It was also the first in India to get a Type Approval Certification from CPCB-certified labs like ICAT and ARAI for its capability to reduce the pollution from diesel generators by up to 90%.
The Chakr Dual Fuel Kit allows a diesel generator set to operate on a mixture of gas and diesel, with 70% natural gas and 30% fossil fuel, making it a conversion option for industries with access to gas pipeline networks. With the launch of this product, Chakr Innovation reportedly became the only turnkey solution provider in India to control the emissions from diesel generators.
In 2020, Chakr Innovation launched a decontamination cabinet for N95 masks with the help of ozone gas. Ozone is a strong oxidizing agent that destroys viruses and bacteria by diffusing through their protein coats. Chakr DeCoV reportedly inactivated SARS-CoV-2 and reduced the bacterial load by 99.9999%, allowing N95 masks to be reused up to 10 times.
Awards
2016: Winner of Urban Labs Innovation Challenge – University of Chicago
2017: Climate solver award – World Wide Fund for Nature (WWF)
2017: Echoing Green Fellowship
2017: Champions of Change – NITI Aayog
2017: Recipient of "Start-up in Oil & Gas Sector" award – Federation of Indian Petroleum Industry(FIPI)
2018: Winner of Young Champions of the Earth – United Nations Environment Programme (Asia Pacific)
2018: 30 Under 30 Social Entrepreneurs – Forbes
2019: Winner of Maharashtra Startup Week Award – Maharashtra State Innovation Society
References
Pollution
Pollution control technologies
Technology companies established in 2016
Indian companies established in 2016
Manufacturing companies based in Delhi
Manufacturing companies established in 2016
Diesel engine components | Chakr Innovation | Chemistry,Engineering | 638 |
7,953,419 | https://en.wikipedia.org/wiki/Restoring%20force | In physics, the restoring force is a force that acts to bring a body to its equilibrium position. The restoring force is a function only of the position of the mass or particle, and it is always directed back toward the equilibrium position of the system. The restoring force is often referred to in simple harmonic motion. The force responsible for restoring an object's original size and shape after deformation is also called the restoring force.
An example is the action of a spring. An idealized spring exerts a force proportional to the amount of deformation of the spring from its equilibrium length, exerted in a direction opposing the deformation. Pulling the spring to a greater length causes it to exert a force that brings the spring back toward its equilibrium length. The amount of force can be determined by multiplying the spring constant, a characteristic of the spring, by the amount of stretch; this relationship is known as Hooke's law.
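A minimal numeric sketch of Hooke's law (the spring constant and displacement below are arbitrary illustrative values):
def spring_force(k, x):
    """Restoring force of an ideal spring (Hooke's law): F = -k * x."""
    return -k * x

# Hypothetical spring with constant 200 N/m, stretched 0.05 m beyond equilibrium
print(spring_force(200.0, 0.05))  # -10.0 N, directed back toward the equilibrium length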
Another example is a pendulum. When a pendulum is not swinging, all the forces acting on it are in equilibrium: the force due to gravity on the mass at the end of the pendulum is balanced by the tension in the string holding it up. When a pendulum is put in motion, the place of equilibrium is at the bottom of the swing, the location where the pendulum rests. When the pendulum is at the top of its swing, the force returning the pendulum to this midpoint is gravity. As a result, gravity may be seen as a restoring force.
See also
Response amplitude operator
References
Force | Restoring force | Physics,Mathematics | 300 |
407,299 | https://en.wikipedia.org/wiki/Hard%20water | Hard water is water that has a high mineral content (in contrast with "soft water"). Hard water is formed when water percolates through deposits of limestone, chalk or gypsum, which are largely made up of calcium and magnesium carbonates, bicarbonates and sulfates.
Drinking hard water may have moderate health benefits. It can pose critical problems in industrial settings, where water hardness is monitored to avoid costly breakdowns in boilers, cooling towers, and other equipment that handles water. In domestic settings, hard water is often indicated by a lack of foam formation when soap is agitated in water, and by the formation of limescale in kettles and water heaters. Wherever water hardness is a concern, water softening is commonly used to reduce hard water's adverse effects.
Origins
Natural rainwater, snow and other forms of precipitation typically have low concentrations of divalent cations such as calcium and magnesium. They may have small concentrations of ions such as sodium, chloride and sulfate derived from wind action over the sea. Where precipitation falls in drainage basins formed of hard, impervious and calcium-poor rocks, only very low concentrations of divalent cations are found and the water is termed soft water. Examples include Snowdonia in Wales and the Western Highlands in Scotland.
Areas with complex geology can produce varying degrees of hardness of water over short distances.
Types
Permanent hardness
The permanent hardness of water is determined by the water's concentration of cations with charges greater than or equal to 2+. Usually, the cations have a charge of 2+, i.e., they are divalent. Common cations found in hard water include Ca2+ and Mg2+, which frequently enter water supplies by leaching from minerals within aquifers. Common calcium-containing minerals are calcite and gypsum. A common magnesium mineral is dolomite (which also contains calcium). Rainwater and distilled water are soft, because they contain few of these ions.
The following equilibrium reaction describes the dissolving and formation of calcium carbonate and calcium bicarbonate (on the right):
CaCO3 (s) + CO2 (aq) + H2O (l) ⇌ Ca(HCO3)2 (aq)
The reaction can go in either direction. Rain containing dissolved carbon dioxide can react with calcium carbonate and carry calcium ions away with it. The calcium carbonate may be re-deposited as calcite as the carbon dioxide is lost to the atmosphere, sometimes forming stalactites and stalagmites.
Calcium and magnesium ions can sometimes be removed by water softeners.
Permanent hardness (mineral content) is generally difficult to remove by boiling. If this occurs, it is usually caused by the presence of calcium sulfate/calcium chloride and/or magnesium sulfate/magnesium chloride in the water, which do not precipitate out as the temperature increases. Ions causing the permanent hardness of water can be removed using a water softener, or ion-exchange column.
Temporary hardness
Temporary hardness is caused by the presence of dissolved bicarbonate minerals (calcium bicarbonate and magnesium bicarbonate). When dissolved, these types of minerals yield calcium and magnesium cations (Ca2+, Mg2+) and carbonate and bicarbonate anions ( and ). The presence of the metal cations makes the water hard. However, unlike the permanent hardness caused by sulfate and chloride compounds, this "temporary" hardness can be reduced either by boiling the water or by the addition of lime (calcium hydroxide) through the process of lime softening. Boiling promotes the formation of carbonate from the bicarbonate and precipitates calcium carbonate out of solution, leaving water that is softer upon cooling.
Effects
With hard water, soap solutions form a white precipitate (soap scum) instead of producing lather, because the divalent (2+) ions destroy the surfactant properties of the soap by forming a solid precipitate (the soap scum). A major component of such scum is calcium stearate, which arises from sodium stearate, the main component of soap:
2 C17H35COO− (aq) + Ca2+ (aq) → (C17H35COO)2Ca (s)
Hardness can thus be defined as the soap-consuming capacity of a water sample, or the capacity of precipitation of soap as a characteristic property of water that prevents the lathering of soap. Synthetic detergents do not form such scums.
Because soft water has few calcium ions, there is no inhibition of the lathering action of soaps and no soap scum is formed in normal washing. Similarly, soft water produces no calcium deposits in water heating systems.
Hard water also forms deposits that clog plumbing. These deposits, called "scale", are composed mainly of calcium carbonate (CaCO3), magnesium hydroxide (Mg(OH)2), and calcium sulfate (CaSO4). Calcium and magnesium carbonates tend to be deposited as off-white solids on the inside surfaces of pipes and heat exchangers. This precipitation (formation of an insoluble solid) is principally caused by thermal decomposition of bicarbonate ions but also happens in cases where the carbonate ion is at saturation concentration. The resulting build-up of scale restricts the flow of water in pipes. In boilers, the deposits impair the flow of heat into water, reducing the heating efficiency and allowing the metal boiler components to overheat. In a pressurized system, this overheating can lead to the failure of the boiler. The damage caused by calcium carbonate deposits varies according to the crystalline form, for example, calcite or aragonite.
The presence of ions in an electrolyte, in this case, hard water, can also lead to galvanic corrosion, in which one metal will preferentially corrode when in contact with another type of metal when both are in contact with an electrolyte. The softening of hard water by ion exchange does not increase its corrosivity per se. Similarly, where lead plumbing is in use, softened water does not substantially increase plumbo-solvency.
In swimming pools, hard water is manifested by a turbid, or cloudy (milky), appearance to the water. Calcium and magnesium hydroxides are both soluble in water. The solubility of the hydroxides of the alkaline-earth metals to which calcium and magnesium belong (group 2 of the periodic table) increases moving down the column. Aqueous solutions of these metal hydroxides absorb carbon dioxide from the air, forming insoluble carbonates, and giving rise to turbidity. This often results from the pH being excessively high (pH > 7.6). Hence, a common solution to the problem is, while maintaining the chlorine concentration at the proper level, to lower the pH by the addition of hydrochloric acid, the optimum value is in the range of 7.2 to 7.6.
Softening
In some cases it is desirable to soften hard water. Most detergents contain ingredients that counteract the effects of hard water on the surfactants. For this reason, water softening is often unnecessary. Where softening is practised, it is often recommended to soften only the water sent to domestic hot water systems to prevent or delay inefficiencies and damage due to scale formation in water heaters. A common method for water softening involves the use of ion-exchange resins, which replace ions like Ca2+ by twice the number of mono cations such as sodium or potassium ions.
Washing soda (sodium carbonate, Na2CO3) is easily obtained and has long been used as a water softener for domestic laundry, in conjunction with the usual soap or detergent.
Water that has been treated by a water softening may be termed softened water. In these cases, the water may also contain elevated levels of sodium or potassium and bicarbonate or chloride ions.
Health considerations
The World Health Organization says that "there does not appear to be any convincing evidence that water hardness causes adverse health effects in humans". In fact, the United States National Research Council has found that hard water serves as a dietary supplement for calcium and magnesium.
Some studies have shown a weak inverse relationship between water hardness and cardiovascular disease in men, up to a level of 170 mg calcium carbonate per litre of water. The World Health Organization has reviewed the evidence and concluded the data was inadequate to recommend a level of hardness.
Recommendations have been made for the minimum and maximum levels of calcium (40–80 ppm) and magnesium (20–30 ppm) in drinking water, and a total hardness expressed as the sum of the calcium and magnesium concentrations of 2–4 mmol/L.
Other studies have shown weak correlations between cardiovascular health and water hardness.
The prevalence of atopic dermatitis (eczema) in children may be increased by hard drinking water. Living in areas with hard water may also play a part in the development of AD in early life. However, when AD is already established, using water softeners at home does not reduce the severity of the symptoms.
Measurement
Hardness can be quantified by instrumental analysis. The total water hardness is the sum of the molar concentrations of Ca2+ and Mg2+, in mol/L or mmol/L units. Although water hardness usually measures only the total concentrations of calcium and magnesium (the two most prevalent divalent metal ions), iron, aluminium, and manganese are also present at elevated levels in some locations. The presence of iron characteristically confers a brownish (rust-like) colour to the calcification, instead of white (the colour of most of the other compounds).
Water hardness is often not expressed as a molar concentration, but rather in various units, such as degrees of general hardness (dGH), German degrees (°dH), parts per million (ppm, mg/L, or American degrees), grains per gallon (gpg), English degrees (°e, e, or °Clark), or French degrees (°fH, °f or °HF; lowercase f is used to prevent confusion with degrees Fahrenheit). The table below shows conversion factors between the various units.
{| class=" wikitable"
|+ Hardness unit conversion.
|-
! || 1 mmol/L || 1 ppm, mg/L || 1 dGH, °dH || 1 gpg || 1 °e, °Clark || 1 °fH
|-
! mmol/L
| 1 || 0.009991 || 0.1783 || 0.171 || 0.1424 || 0.09991
|-
! ppm, mg/L
| 100.1 || 1 || 17.85 || 17.12 || 14.25 ||10
|-
! dGH, °dH
| 5.608 || 0.05603 || 1 || 0.9591 || 0.7986 || 0.5603
|-
! gpg
| 5.847 || 0.05842 || 1.043 || 1 || 0.8327 || 0.5842
|-
! °e, °Clark
| 7.022 || 0.07016 || 1.252 || 1.201 || 1 || 0.7016
|-
! °fH
| 10.01 ||0.1|| 1.785 || 1.712 || 1.425 || 1
|}
The various alternative units represent an equivalent mass of calcium oxide (CaO) or calcium carbonate (CaCO3) that, when dissolved in a unit volume of pure water, would result in the same total molar concentration of Mg2+ and Ca2+. The different conversion factors arise from the fact that equivalent masses of calcium oxide and calcium carbonates differ and that different mass and volume units are used. The units are as follows:
Parts per million (ppm) is usually defined as 1 mg/L CaCO3 (the definition used below). It is equivalent to mg/L without chemical compound specified, and to American degree.
Grain per gallon (gpg) is defined as 1 grain (64.8 mg) of calcium carbonate per U.S. gallon (3.79 litres), or 17.118 ppm.
1 mmol/L is equivalent to 100.09 mg/L CaCO3 or 40.08 mg/L Ca2+.
A degree of General Hardness (dGH or 'German degree' (°dH, )) is defined as 10 mg/L CaO or 17.848 ppm.
A Clark degree (°Clark) or English degree (°e or e) is defined as one grain (64.8 mg) of CaCO3 per Imperial gallon (4.55 litres) of water, equivalent to 14.254 ppm.
A French degree (°fH or °f) is defined as 10 mg/L CaCO3, equivalent to 10 ppm.
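A small sketch converting between these units via ppm, using the approximate factors from the conversion table above:
# Approximate equivalents of 1 unit, expressed in ppm (mg/L CaCO3),
# taken from the conversion table above.
PPM_PER_UNIT = {
    "mmol/L": 100.1,
    "ppm": 1.0,
    "dGH": 17.85,
    "gpg": 17.12,
    "Clark": 14.25,
    "fH": 10.0,
}

def convert_hardness(value, from_unit, to_unit):
    """Convert a hardness reading between units by passing through ppm CaCO3."""
    ppm = value * PPM_PER_UNIT[from_unit]
    return ppm / PPM_PER_UNIT[to_unit]

print(round(convert_hardness(10, "dGH", "fH"), 1))       # about 17.9 French degrees
print(round(convert_hardness(300, "ppm", "mmol/L"), 2))  # about 3.0 mmol/L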
Hard/soft classification
As it is the precise mixture of minerals dissolved in the water, together with water's pH and temperature, that determine the behaviour of the hardness, a single-number scale does not adequately describe hardness. However, the United States Geological Survey uses the following classification for hard and soft water:
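As an illustrative sketch of such a classification, the following uses the bands commonly cited for the USGS scheme; the specific thresholds (in mg/L as calcium carbonate) are supplied here as an assumption rather than taken from the text above:
def classify_hardness(mg_per_l_caco3):
    """Classify water hardness using commonly cited USGS bands (assumed thresholds, mg/L as CaCO3)."""
    if mg_per_l_caco3 <= 60:
        return "soft"
    elif mg_per_l_caco3 <= 120:
        return "moderately hard"
    elif mg_per_l_caco3 <= 180:
        return "hard"
    else:
        return "very hard"

print(classify_hardness(150))  # 'hard'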
Seawater is considered to be very hard due to various dissolved salts. Typically seawater's hardness is in the area of 6,570 ppm (6.57 grams per litre). In contrast, fresh water has a hardness in the range of 15 to 375 ppm, generally around 600 mg/L.
Indices
Several indices are used to describe the behaviour of calcium carbonate in water, oil, or gas mixtures.
Langelier saturation index (LSI)
The Langelier saturation index (sometimes Langelier stability index) is a calculated number used to predict the calcium carbonate stability of water. It indicates whether the water will precipitate, dissolve, or be in equilibrium with calcium carbonate. In 1936, Wilfred Langelier developed a method for predicting the pH at which water is saturated in calcium carbonate (called pHs). The LSI is expressed as the difference between the actual system pH and the saturation pH: LSI = pH − pHs.
For LSI > 0, water is supersaturated and tends to precipitate a scale layer of CaCO3.
For LSI = 0, water is saturated (in equilibrium) with CaCO3. A scale layer of CaCO3 is neither precipitated nor dissolved.
For LSI < 0, water is under-saturated and tends to dissolve solid CaCO3.
If the actual pH of the water is below the calculated saturation pH, the LSI is negative and the water has a very limited scaling potential. If the actual pH exceeds pHs, the LSI is positive, and being supersaturated with CaCO3, the water tends to form scale. At increasing positive index values, the scaling potential increases.
In practice, water with an LSI between −0.5 and +0.5 will not display enhanced mineral dissolving or scale-forming properties. Water with an LSI below −0.5 tends to exhibit noticeably increased dissolving abilities while water with an LSI above +0.5 tends to exhibit noticeably increased scale-forming properties.
The LSI is temperature-sensitive. The LSI becomes more positive as the water temperature increases. This has particular implications in situations where well water is used. The temperature of the water when it first exits the well is often significantly lower than the temperature inside the building served by the well or at the laboratory where the LSI measurement is made. This increase in temperature can cause scaling, especially in cases such as water heaters. Conversely, systems that reduce water temperature will have less scaling.
Water analysis:
pH = 7.5
TDS = 320 mg/L
Calcium = 150 mg/L (or ppm) as CaCO3
Alkalinity = 34 mg/L (or ppm) as CaCO3
LSI formula:
LSI = pH − pHs
pHs = (9.3 + A + B) − (C + D) where:
A = (log10[TDS] − 1)/10 = 0.15
B = −13.12 × log10(°C + 273) + 34.55 = 2.09 at 25 °C and 1.09 at 82 °C
C = log10[Ca2+ as CaCO3] – 0.4 = 1.78
(Ca2+ as CaCO3 is also called calcium hardness, and is calculated as 2.5[Ca2+])
D = log10[alkalinity as CaCO3] = 1.53
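A short illustrative sketch that evaluates this example with the formula above (constants and water analysis values are as given in the text; the computed LSI is approximate):
import math

def langelier_index(ph, tds, calcium_caco3, alkalinity_caco3, temp_c):
    """Compute the Langelier saturation index from the empirical formula above."""
    a = (math.log10(tds) - 1) / 10
    b = -13.12 * math.log10(temp_c + 273) + 34.55
    c = math.log10(calcium_caco3) - 0.4
    d = math.log10(alkalinity_caco3)
    ph_s = (9.3 + a + b) - (c + d)
    return ph - ph_s

# Water analysis from the example, evaluated at 25 degrees C
print(round(langelier_index(7.5, 320, 150, 34, 25), 2))  # about -0.73, i.e. slightly under-saturated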
Ryznar stability index (RSI)
The Ryznar stability index (RSI) uses a database of scale thickness measurements in municipal water systems to predict the effect of water chemistry. It was developed from empirical observations of corrosion rates and film formation in steel mains.
This index is defined as RSI = 2 pHs − pH, where pHs is the saturation pH used in the Langelier index:
For 6.5 < RSI < 7 water is considered to be approximately at saturation equilibrium with calcium carbonate
For RSI > 8 water is undersaturated and, therefore, would tend to dissolve any existing solid CaCO3
For RSI < 6.5 water tends to be scale forming
Puckorius scaling index (PSI)
The Puckorius scaling index (PSI) uses slightly different parameters to quantify the relationship between the saturation state of the water and the amount of limescale deposited.
Other indices
Other indices include the Larson-Skold Index, the Stiff-Davis Index, and the Oddo-Tomson Index.
Regional information
The hardness of local water supplies depends on the source of water. Water in streams flowing over volcanic (igneous) rocks will be soft, while water from boreholes drilled into porous rock is normally very hard.
Australia
Analysis of water hardness in major Australian cities by the Australian Water Association shows a range from very soft (Melbourne) to hard (Adelaide).
Total hardness levels of calcium carbonate in ppm are:
Canberra: 40
Melbourne: 10–26
Sydney: 39.4–60.1
Perth: 29–226
Brisbane: 100
Adelaide: 134–148
Hobart: 5.8–34.4
Darwin: 31
Canada
Prairie provinces (mainly Saskatchewan and Manitoba) contain high quantities of calcium and magnesium, often as dolomite, which are readily soluble in the groundwater that contains high concentrations of trapped carbon dioxide from the last glaciation. In these parts of Canada, the total hardness in ppm of calcium carbonate equivalent frequently exceeds 200 ppm, if groundwater is the only source of potable water. The west coast, by contrast, has unusually soft water, derived mainly from mountain lakes fed by glaciers and snowmelt.
Some typical values are:
Montreal, Quebec: 116 ppm
Calgary, Alberta: 165 ppm
Regina, Saskatchewan: 496 ppm
Saskatoon, Saskatchewan: 160–180 ppm
Winnipeg, Manitoba: 77 ppm
Toronto, Ontario: 121 ppm
Vancouver, British Columbia: < 3 ppm
Charlottetown, Prince Edward Island: 140–150 ppm
Waterloo Region, Ontario: 400 ppm
Guelph, Ontario: 460 ppm
Saint John (West), New Brunswick: 160–200 ppm
Ottawa, Ontario: 30 ppm
England and Wales
Information from the Drinking Water Inspectorate shows that drinking water in England is generally considered to be 'very hard', with most areas of England, particularly east of a line between the Severn and Tees estuaries, exhibiting above 200 ppm for the calcium carbonate equivalent. Water in London, for example, is mostly obtained from the River Thames and River Lea, both of which derive a significant proportion of their dry weather flow from springs in limestone and chalk aquifers. Wales, Devon, Cornwall, and parts of northwest England are softer water areas and range from 0 to 200 ppm. In the brewing industry in England and Wales, water is often deliberately hardened with gypsum in the process of Burtonisation.
Generally, water is mostly hard in urban areas of England where soft water sources are unavailable. Several cities built water supply sources in the 18th century as the Industrial Revolution and urban population burgeoned. Manchester was a notable such city in North West England and its wealthy corporation built several reservoirs at Thirlmere and Haweswater in the Lake District to the north. There is no exposure to limestone or chalk in their headwaters and consequently the water in Manchester is rated as 'very soft'. Similarly, tap water in Birmingham is also soft as it is sourced from the Elan Valley Reservoirs in Wales, even though groundwater in the area is hard.
Ireland
The EPA has published a standards handbook for the interpretation of water quality in Ireland in which definitions of water hardness are given. Section 36 discusses hardness. Reference to original EU documentation is given, which sets out no limit for hardness. The handbook also gives no "Recommended or Mandatory Limit Values" for hardness. The handbook does indicate that above the midpoint of the ranges defined as "Moderately Hard", effects are seen increasingly: "The chief disadvantages of hard waters are that they neutralise the lathering power of soap[...] and, more important, that they can cause blockage of pipes and severely reduced boiler efficiency because of scale formation. These effects will increase as the hardness rises to and beyond 200 mg/L ."
United States
A collection of data from the United States found that about half the water stations tested had hardness over 120 mg per litre of calcium carbonate equivalent, placing them in the categories "hard" or "very hard". The other half were classified as soft or moderately hard. More than 85% of American homes have hard water. The softest waters occur in parts of the New England, South Atlantic–Gulf, Pacific Northwest, and Hawaii regions. Moderately hard waters are common in many of the rivers of the Tennessee, Great Lakes, and Alaska regions. Hard and very hard waters are found in some of the streams in most of the regions throughout the country. The hardest waters (greater than 1,000 ppm) are in streams in Texas, New Mexico, Kansas, Arizona, Utah, parts of Colorado, southern Nevada, and southern California.
See also
References
External links
Describes a procedure for determining the hardness of water using EDTA with Eriochrome indicator
Water
Forms of water
Liquid water
Limestone
Water quality indicators | Hard water | Physics,Chemistry,Environmental_science | 4,577 |
55,964,039 | https://en.wikipedia.org/wiki/NGC%204252 | NGC 4252 is a spiral galaxy approximately 56 million light-years away from Earth in the constellation of Virgo. It belongs to the Virgo cluster of galaxies.
It was discovered by German astronomer Albert Marth on May 26, 1864.
See also
List of NGC objects (4001–5000)
References
External links
SEDS
Spiral galaxies
Virgo (constellation)
4252
39537
Astronomical objects discovered in 1864
Discoveries by Albert Marth
Virgo Cluster
07343 | NGC 4252 | Astronomy | 94 |
63,487,204 | https://en.wikipedia.org/wiki/Veronica%20bucket | The Veronica bucket is a mechanism for hand washing originating in Ghana which consists of a bucket of water with a tap fixed at the bottom, mounted at hand height, and a bowl underneath to collect waste water. The Veronica bucket was developed by Veronica Bekoe. The Veronica bucket serves as a simple way to encourage proper hand washing using flowing water. Bekoe in an interview stated that the bucket was originally made to help her and her colleagues wash their hands under running water after each lab session. She said, "We are used to washing hands in a bowl with others washing in the same water, which will do more harm than good." These colleagues were contaminating their hands rather than decontaminating them. In addition to the COVID benefit of hand washing, the Veronica bucket is also essential for areas where potable water is not readily available.
Uses
The bucket is also used in other African countries. It is common in places such as schools, hospitals, churches and areas with no running taps. It has become very popular in Ghana following the outbreak of the novel coronavirus (COVID-19) as citizens engage in frequent hand washing to stem its spread. In Ekiti State, Nigeria, the governor Kayode Fayemi directed all public places to provide running tap water or Veronica buckets "to encourage frequent handwashing" as part of the measures to contain COVID-19.
Before the COVID-19 outbreak, the invention was used in some schools and hospitals, but it is now in high demand due to its role in curbing the outbreak. The setup can now be spotted in places like malls, hospitals, corporate institutions and government offices. It was invented by a Ghanaian, Veronica Bekoe, after whom it is named. She claimed the bucket was named after her in 1993 by Joan Hetrick. Bekoe is a biologist who worked at the Public Health and Reference Laboratory of the Ghana Health Service from 1972 to 2008.
Production
The invention was initially produced by local artisans as a prototype made from the aluminium utensils used in selling Hausa koko, fitted with a tap; it was popularly known as Akorlaa gyae su. The current version is made of plastic with an attached tap and an area for holding soap and towels. Variations available today come in all colours.
In February 2021, Veronica Bekoe launched an updated version of the bucket to reduce physical contact with the unit and further help halt the spread of COVID-19.
See also
WASH (water, sanitation, hygiene)
References
External links
Veronica Bekoe demonstrates how to wash your hands using the Veronica Bucket.
Health in Ghana
Medical technology
Sanitation | Veronica bucket | Biology | 536 |
42,938,388 | https://en.wikipedia.org/wiki/Virtual%20collective%20consciousness | Virtual collective consciousness (VCC) is a term rebooted and promoted by two behavioral scientists, Yousri Marzouki and Olivier Oullier in their 2012 Huffington Post article titled: "Revolutionizing Revolutions: Virtual Collective Consciousness and the Arab Spring", after its first appearance in 1999-2000. VCC is now defined as an internal knowledge catalyzed by social media platforms and shared by a plurality of individuals driven by the spontaneity, the homogeneity, and the synchronicity of their online actions. VCC occurs when a large group of persons, brought together by a social media platform think and act with one mind and share collective emotions. Thus, they are able to coordinate their efforts efficiently, and could rapidly spread their word to a worldwide audience. When interviewed about the concept of VCC that appeared in the book - Hyperconnectivity and the Future of Internet Communication - he edited, Professor of Pervasive Computing, Adrian David Cheok mentioned the following: "The idea of a global (collective) virtual consciousness is a bottom-up process and a rather emergent property resulting from a momentum of complex interactions taking place in social networks. This kind of collective behaviour (or intelligence) results from a collision between a physical world and a virtual world and can have a real impact in our life by driving collective action."
Etymology
In 1999-2000, Richard Glen Boire provided a cursory mention of the term "Virtual collective consciousness", its only occurrence in his text.
The recent definition of VCC evolved from the first empirical study that provided a cyberpsychological insight into the contribution of Facebook to the 2011 Tunisian revolution. In this study, the concept was originally called "collective cyberconsciousness". The latter is an extension of the idea of "collective consciousness" coupled with "citizen media" usage. The authors of this study also made a parallel between this original definition of VCC and other comparable concepts such as Durkheim's collective representation, Žižek's "collective mind" or Boguta's "new collective consciousness" that he used to describe the computational history of the Internet shutdown during the Egyptian revolution. Since VCC is the byproduct of the network's successful actions, then these actions must be timely, acute, rapid, domain-specific, and purpose-oriented to successfully achieve their goal. Before reaching a momentum of complexity, each collective behavior starts by a spark that triggers a chain of events leading to a crystallized stance of a tremendous amount of interactions. Thus, VCC is an emergent global pattern from these individual actions.
In 2012, the term virtual collective consciousness resurfaced and was brought to light after its application was extended to the Egyptian case and to the major impact of social networking on the success of the so-called Arab Spring. Moreover, the acronym VCC was suggested to identify the theoretical framework covering online behaviors leading to a virtual collective consciousness. Hence, online social networks have provided a new and faster way of establishing or modifying the "collective consciousness" that was paramount to the 2011 uprisings in the Arab world.
Theoretical underpinnings of VCC
Various theoretical references, ranging from sociology to computer science, have been cited to account for the key features underpinning the framework of a virtual collective consciousness. The following list is not exhaustive, but the references it contains are often highlighted:
Émile Durkheim's collective representations are at the heart of VCC since, according to Durkheim's assumptions, decisions taken collectively approve or disapprove of individuals' actions and help them eventually reach their final goal.
Marshall McLuhan's global village: The shrinking of our big world to a small place called cyberspace is made possible by technological extensions of human consciousness.
Carl Jung's collective unconscious: When a society is witnessing significant changes, the anchoring of archetypal images (e.g., political leaders) seems to be deeply rooted in individuals' collective unconscious that is likely to bias their political choices. Individual memories of public events were also supposed to convey a "collective awareness" that can be subconsciously altered by the instantaneous spread of information through social networking around the world.
Daniel Wegner's transactive memory (TM): social networking platforms such as Facebook during the Tunisian revolution or Twitter during the Egyptian revolution served as placeholders of a VCC where information could be harnessed and steered toward a highly specific revolutionary purpose. Although research on TM was originally limited to couples, small groups, and organizations, recent studies strongly suggest that an effective TM can operate on a very large scale too.
James Surowiecki's wisdom of crowds
Collective influence (CI) algorithm: The CI algorithm is effective in finding influential nodes in a variety of networks, including social networks, communication networks, and biological networks. It has been used to identify influencers on social media platforms, to identify key nodes in transportation networks, and to identify potential drug targets in biological networks. A minimal sketch of the measure follows below.
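The CI measure named above is commonly defined as CI_l(i) = (k_i − 1) multiplied by the sum of (k_j − 1) over all nodes j exactly l hops from i, where k denotes node degree. The sketch below is a minimal illustration of that standard formulation rather than a reference implementation; the networkx graph and all parameter values are illustrative assumptions.

    import networkx as nx

    def collective_influence(G, node, radius=2):
        # CI_l(node) = (k_node - 1) * sum of (k_j - 1) over the ball boundary
        k_i = G.degree(node) - 1
        distances = nx.single_source_shortest_path_length(G, node, cutoff=radius)
        boundary = [j for j, d in distances.items() if d == radius]
        return k_i * sum(G.degree(j) - 1 for j in boundary)

    # Rank the nodes of a small random network by their CI score
    G = nx.erdos_renyi_graph(n=100, p=0.05, seed=1)
    ranking = sorted(G.nodes, key=lambda n: collective_influence(G, n), reverse=True)
    print(ranking[:5])  # the five most influential nodes under this heuristic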
Some illustrations of VCC
Besides the studied effect of social networking on the Tunisian and Egyptian revolutions, the former via Facebook and the latter via Twitter, other applications have been studied through the prism of the VCC framework:
Whitacre's virtual choir: A compelling example of the degree of autonomy and self-identity that members of a spontaneously created network can reach through a VCC is Eric Whitacre's musical project, in which a collection of singers performed remotely to create a virtual choir. The combined effect of all the voices illustrated a genuine virtual collective empathy, merging the artist's mind with those of the singers through his silent conducting gestures.
The Harlem Shake dance:
The Bitcoin protocol: It was questioned whether or not the Bitcoin protocol can morph into virtual collective consciousness. The Byzantine generals problem was used as an analogy to understand the behavioral complexity of the community of Bitcoin's users.
Artificial Social Networking Intelligence (ASNI): refers to the application of artificial intelligence within social networking services and social media platforms. It encompasses various technologies and techniques used to automate, personalize, enhance, and synchronize users' interactions and experiences within social networks. ASNI is expected to evolve rapidly, influencing how users interact online and shaping their digital experiences. Transparency, ethical considerations, media influence bias, and user control over data will be crucial to ensure responsible development and positive impact.
See also
Algorithmic curation
Ambient awareness
Collective consciousness
Collective influence algorithm
Collective intelligence
Collective unconscious
Crowdsourcing
Hyperconnectivity
Media intelligence
Sentiment analysis
Social cloud computing
Social media intelligence
Social media optimization
Wisdom of the crowd
References
External links
VCC Entry in P2P Foundation
Learning Enhancement Center Blog entry on VCC and global empowerment
The 1999-2000 article of Richard Glen Boire
Social media
Information society
Crowd psychology
Collective intelligence
Open-source intelligence
Social information processing
Cybernetics | Virtual collective consciousness | Technology | 1,412 |
58,074,429 | https://en.wikipedia.org/wiki/The%20Association%20for%20Women%20with%20Large%20Feet | The Association for Women with Large Feet was a British organization founded by Airton resident Mrs. Phyllis Crone in 1949. It lobbied clothing manufacturers to better provide for tall women. Members had to be at least 5 feet 8 inches tall.
The Association was originally formed to convince footwear manufacturers to provide more attractive options for women with larger shoe sizes. However, it expanded to lobby clothing manufacturers more generally to provide for tall women, with some success. The Association made direct approaches to manufacturers, and if they agreed to provide for tall women, their details were circulated to all members.
The group changed its name to The Association of Tall Women in October 1951 to increase its appeal and membership. There were reportedly 10 branches across the UK by this point, with members joining at a rate of around 50 per week. Membership fees were 3 shillings and 6 pence (3s 6d) per year. By 1952 the group was reported to have 2,000 members in London alone.
As part of their campaign, in 1952 the Association held a public display of clothing for taller women at a London department store. Successes included convincing stocking manufacturers to produce nylons with longer foot and leg measurements, "ending a nightmare for taller women", and the establishment of a London shop called "Tall Girls".
References
Women's organisations based in the United Kingdom
Clothing-related organizations
Campaigning
Feminism and history
Feminism and society
History of fashion
Women's clothing
1949 establishments in the United Kingdom
Sizes in clothing
Women in London
History of women in the United Kingdom | The Association for Women with Large Feet | Physics,Mathematics | 308 |
19,885,937 | https://en.wikipedia.org/wiki/VAXft | The VAXft was a family of fault-tolerant minicomputers developed and manufactured by Digital Equipment Corporation (DEC) using processors implementing the VAX instruction set architecture (ISA). "VAXft" stood for "Virtual Address Extension, fault tolerant". These systems ran the OpenVMS operating system, and were first supported by VMS 5.4. Two layered software products, VAXft System Services and VMS Volume Shadowing, were required to support the fault-tolerant features of the VAXft and for the redundancy of data stored on hard disk drives.
Architecture
All VAXft systems shared the same basic system architecture. A VAXft system consisted of two "zones" that operated in lock-step: "Zone A" and "Zone B". Each zone was a fully functional computer, capable of running an operating system, and was identical to the other in hardware configuration. Lock-step was achieved by hardware on the CPU module. The CPU module of each zone was connected to the other with a crosslink cable. The crosslink cables carried the results of instructions executed by one CPU module to the other, where they were compared by hardware with the results of the same instructions executed by the latter to ensure that they were identical. The two zones were kept synchronous by a clock signal carried by the crosslink cables. When a hardware failure occurred in one of the zones, the affected zone was brought offline without bringing down the other zone, which continued to operate as normal. When repairs were completed, the offline zone was powered on and automatically resynchronized with the other zone, restoring redundancy.
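The fragment below is only a conceptual illustration of the lock-step comparison described above, not DEC's actual hardware or firmware logic; the function and its inputs are invented for the sketch.

    def crosslink_compare(zone_a_results, zone_b_results):
        # Compare per-instruction results carried over the crosslink cables.
        for step, (a, b) in enumerate(zip(zone_a_results, zone_b_results)):
            if a != b:
                # A mismatch means the faulted zone is taken offline; the other
                # zone continues, and the repaired zone is resynchronized later.
                return f"mismatch at step {step}: faulted zone offline, survivor continues"
        return "zones remained in lock-step"

    print(crosslink_compare([1, 2, 3, 4], [1, 2, 3, 4]))
    print(crosslink_compare([1, 2, 3, 4], [1, 2, 9, 4]))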
VAXft Model 310
The VAXft Model 310, code named "Cirrus" and originally designated the VAXft 3000 Model 310, was introduced in February 1990 and shipped in June. It was the first VAXft model and DEC's first generally available fault-tolerant computer. At the 1991 launch of new VAXft models, it was renamed to follow the new naming scheme, becoming the VAXft Model 310. The Model 310 had a theoretical maximum performance of 3.8 VUPs.
When announced, the Model 310 had a starting price of US$200,000. In August 1990, slow sales prompted DEC to reduce the US price of the Model 310 to US$168,000.
It used the KA520 CPU module containing a 16.67 MHz (60 ns cycle time) CVAX+ chip set with 32 KB of external secondary cache. The system contained two such CPU modules, one in each zone, running in lock-step.
VAXft Model 110
The VAXft Model 110, code named "Cirrus", was an entry-level model announced on 18 March 1991 alongside three other models. The Model 110 was essentially a low-cost model of the VAXft Model 310, and had a theoretical maximum performance of 2.4 VUPs.
It contained two zones packed side by side in an enclosure. Compared to the Model 310, it was limited in expandability in regards to memory, storage capacity and available options. It was available in either a pedestal or rackmount configuration. The rackmount configuration was a pedestal without the plastic covers or casters that fitted in a standard 19-inch RETMA cabinet.
Each zone had a five-slot backplane for a KA510 CPU module, one to three 32 MB MS520 memory modules, one or two KFE52 system I/O controller modules and one or two DEC WANcontroller 620 (DSF32) wide area network (WAN) communications adapters. The leftmost slot was the first slot. The primary system controller resided in the first slot, the CPU module in the second, and the memory modules in the third, fourth and fifth slots. The second system I/O controller resided in either the fourth or the fifth slot, as could the WAN communications adapters. The most basic system contained a CPU module, a memory module and a system I/O controller.
The DEC WANcontroller 620 was designed for use in VAXft systems. It provided two synchronous lines, each with a bandwidth of 64 kbit/s. The lines could be operated as two independent lines or paired to provide redundancy.
VAXft Model 410
The VAXft Model 410, code named "Cirrus II", was a mid-range model announced on 18 March 1991 alongside three other models. Originally supposed to ship in June or July 1991, it was delayed until September 1991, with the reason given by DEC being that it wanted to tune a new release of VMS for the system. The Model 410 was identical to the Model 310, but used the KA550 CPU module containing a 28.57 MHz (35 ns cycle time) SOC microprocessor with 128 KB of external secondary cache. It supported up to 256 MB of memory. The Model 410 had a theoretical maximum performance of 6.0 VUPs.
VAXft Model 610
The VAXft Model 610, code named "Cirrus II", was a mid-range model announced on 18 March 1991 alongside three other models. Originally supposed to ship in June or July 1991, it was delayed until September 1991, with the reason given by DEC being that it wanted to tune a new release of VMS for the system.
The Model 610 was architecturally identical to the Model 410, except that the two zones were packaged vertically in a 60-inch-high cabinet, with Zone A above Zone B. The cabinet had more storage capacity than the systems packaged in pedestals, and for this reason the Model 610 was intended for data centers. It could have one or two expander cabinets placed on the left and right of the system for additional storage devices. These cabinets were cooled front to rear.
VAXft Model 612
The VAXft Model 612 was a high-end model announced on 18 March 1991 alongside three other models. Originally supposed to ship in June or July 1991, it was delayed until September 1991, with the reason given by DEC being that it wanted to tune a new release of VMS for the system. The Model 612 was a VAXcluster of two VAXft Model 610s with an expansion cabinet positioned between the two systems as standard. It had a theoretical maximum performance of 12.0 VUPs. A second expansion cabinet could be added between the two system cabinets.
VAXft Model 810
The VAXft Model 810, code named "Jetstream", was a high-end model introduced in October 1993 instead of the targeted introduction date in the late summer or early fall of 1992. The system was developed by DEC, but was manufactured by an Italian industrial manufacturer, Alenia SpA. It had a theoretical maximum performance of 30.0 VUPs.
The Model 810 was a third generation VAXft system. It contained two zones vertically packaged in a cabinet. An optional expansion cabinet could be connected to the system, in addition to two uninterruptible power supplies, one for each zone.
It used the KA560-AA CPU module, which contained two 83.33 MHz (12 ns cycle time) NVAX+ microprocessors with 512 KB of B-cache (L2 cache). The module's two microprocessors operated in lock-step and, as in previous VAXft systems, there were two such CPU modules in a system, one in each of the two zones, which also operated in lock-step.
The Model 810 cabinet was 60.0 cm (24 in) wide, 170.0 cm (67 in) high and 86.0 cm (34 in) deep.
References
Brandel, Mary (21 Nov. 1994). "Sequoia reneges deal with Digital", Computerworld, pg. 28.
Johnson, Maryfran (25 Mar. 1991). "DEC Pumps Up Fault-Tolerant Challenge", Computerworld, pg. 95.
Kulkosky, Victor (May 1990). "Digital's New Box Promises No-Fault On-Line Insurance", Wall Street Computer Review, pg. 136.
Stedman, Craig (21 Jun. 1993). "DEC aims fault tolerance at high end", Computerworld, pg. 75.
Computer-related introductions in 1990
DEC minicomputers
Fault-tolerant computer systems | VAXft | Technology,Engineering | 1,731 |
43,921,696 | https://en.wikipedia.org/wiki/Salvius%20%28robot%29 | Salvius is an open source humanoid robot built in the United States in 2008, the first of its kind. Its name is derived from the word 'salvaged', as the robot is constructed with an emphasis on using recycled components and materials to reduce the costs of design and construction. The robot is designed to be able to perform a wide range of tasks thanks to its humanoid body structure. The primary goal of the Salvius project is to create a robot that can function dynamically in a domestic environment.
Salvius is a part of the open source movement, meaning the robot's source code is freely available for others to use, alter, extend and learn from. Unlike other humanoid robots, Salvius benefits from the advantages of open source software, allowing problems to be quickly addressed by a community of developers. Salvius has been used as a resource by STEM educators to enable students to learn about subjects in science and technology.
The name "Salvius" dates back to the time of the Roman Empire, however, it was chosen for this robot because of its similarity to the word "salvage". Names have been a significant part of this robot's development. Salvius is tattooed with the names of the individuals and businesses that have contributed to the project's progress.
Applications
Salvius is intended to be a resource for developers to experiment with machine learning and kinematic applications for humanoid robots. The robot is designed to allow new hardware features to be added or removed as needed using plug-and-play USB connections. Recent changes to the robot's design have improved its ability to connect to other devices so that developers can also investigate new ways for robots to interact with the Internet of Things (IoT).
Development
The robot's construction has been documented since 2010. Its creation emphasized recycling, and any commercially available parts used on the robot were chosen with availability and affordability in mind. Hardware items such as the Raspberry Pi and Arduino microcontrollers were selected for their open source design and their support communities. The robot uses multiple Arduino microcontrollers, chosen for the versatility and popularity of the platform across communities.
Software
The robot's computer runs Raspbian Linux and primarily uses open source software. Salvius is able to operate autonomously as well as be controlled remotely using an online interface. The robot's programming languages include Python, C (for the Arduino boards), and JavaScript. Python is the supported language of the Raspberry Pi, while C is used to program the Arduino microcontrollers that the robot's main computer, a Raspberry Pi, communicates with. Sending tasks off to other boards lets the robot process in parallel and distribute its workload, and the star network topology prevents a failure in the Arduino processing nodes from damaging the robot.
Salvius's API allows users to send and retrieve data. Its wireless connection allows the robot to be controlled through a web interface, which also shows what the robot sees. Since all the software is installed on the robot, the user only needs a device with a working internet connection and a browser.
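As a rough illustration of driving such a web interface programmatically, the sketch below uses Python's requests library; the host name, endpoint paths and JSON fields are hypothetical placeholders, since the article does not document Salvius's actual API.

    import requests

    ROBOT = "http://salvius.local:8000"   # assumed address on the local network

    def read_sensor(name):
        # Retrieve a sensor reading over HTTP (endpoint path is illustrative)
        return requests.get(f"{ROBOT}/api/sensors/{name}", timeout=2).json()

    def move_joint(joint, angle):
        # Post an actuation command as JSON (fields are illustrative)
        response = requests.post(f"{ROBOT}/api/joints/{joint}",
                                 json={"angle": angle}, timeout=2)
        return response.status_code

    print(read_sensor("ultrasonic"))
    print(move_joint("left_elbow", 45))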
Hardware
The robot is controlled by a network of Raspberry Pi and Arduino microcontrollers. The Raspberry Pi acts as a server for the high-level control software. The robot uses Grove motor controllers to drive its motors, most of which have been salvaged from other equipment and reused in the robot's construction.
Sensors
Sensors allow the robot to successfully interact with its environment. Sensors that have been used on the robot include touch, sound, light and ultrasonic sensors, as well as a passive infrared (PIR) sensor. The robot also has an Ethernet-connected IP camera which serves as its primary optical input device.
Specifications
See also
Humanoid robot
Open-source robotics
Actroid
Android
iCub
HRP-4C
REEM-B
QRIO
TOPIO
Nao
References
Robotics
Humanoid robots
Bipedal humanoid robots
Robots of the United States
Open-source robots
Androids
2008 robots | Salvius (robot) | Engineering | 810 |
50,647,426 | https://en.wikipedia.org/wiki/Soft%20robotics | Soft robotics is a subfield of robotics that concerns the design, control, and fabrication of robots composed of compliant materials, instead of rigid links.
In contrast to rigid-bodied robots built from metals, ceramics and hard plastics, the compliance of soft robots can improve their safety when working in close contact with humans.
Types and designs
The goal of soft robotics is the design and construction of robots with physically flexible bodies and electronics. In some applications, softness is restricted to a localized region of a machine. For example, rigid-bodied robotic arms can employ soft end effectors to gently grab and manipulate delicate or irregularly shaped objects. Most rigid-bodied mobile robots also strategically employ soft components, such as foot pads to absorb shock or springy joints to store and release elastic energy. However, the field of soft robotics generally focuses on the creation of machines that are predominately or entirely soft. Robots with entirely soft bodies have tremendous potential: their flexibility, for example, allows them to squeeze into places rigid bodies cannot, which could prove useful in disaster relief scenarios. Soft robots are also safer for human interaction and for internal deployment inside a human body.
Nature is often a source of inspiration for soft robot design given that animals themselves are mostly composed of soft components and they appear to exploit their softness for efficient movement in complex environments almost everywhere on Earth. Thus, soft robots are often designed to look like familiar creatures, especially entirely soft organisms like octopuses. However, it is extremely difficult to manually design and control soft robots given their low mechanical impedance. The very thing that makes soft robots beneficial—their flexibility and compliance—makes them difficult to control. The mathematics developed over the past centuries for designing rigid bodies generally fail to extend to soft robots. Thus, soft robots are commonly designed in part with the help of automated design tools, such as evolutionary algorithms, which enable a soft robot's shape, material properties, and controller to all be simultaneously and automatically designed and optimized together for a given task.
Bio-mimicry
Plant cells can inherently produce hydrostatic pressure due to a solute concentration gradient between the cytoplasm and external surroundings (osmotic potential). Further, plants can adjust this concentration through the movement of ions across the cell membrane. This then changes the shape and volume of the plant as it responds to this change in hydrostatic pressure. This pressure-derived shape evolution is desirable for soft robotics and can be emulated to create pressure-adaptive materials through the use of fluid flow. The cell volume change rate is modeled by

    dV/dt = A · Lp · (ΔP − Δπ)

where dV/dt is the rate of volume change, A is the area of the cell membrane, Lp is the hydraulic conductivity of the material, ΔP is the change in hydrostatic pressure, and Δπ is the change in osmotic potential.
This principle has been leveraged in the creation of pressure systems for soft robotics. These systems are composed of soft resins and contain multiple fluid sacs with semi-permeable membranes. The semi-permeability allows for fluid transport that then leads to pressure generation. This combination of fluid transport and pressure generation then leads to shape and volume change.
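A minimal numerical sketch of the relation above is given next; the forward-Euler integration and all parameter values are illustrative assumptions, intended only to show how a constant driving difference produces a volume change over time.

    def simulate_volume(V0, A, Lp, dP, dPi, dt=0.1, steps=100):
        # Integrate dV/dt = A * Lp * (dP - dPi) with a simple forward-Euler step
        V = V0
        for _ in range(steps):
            V += A * Lp * (dP - dPi) * dt
        return V

    # Net volume after 10 time units for a constant driving difference
    print(simulate_volume(V0=1.0, A=2.0, Lp=0.05, dP=0.8, dPi=0.2))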
Another biologically inherent shape changing mechanism is that of hygroscopic shape change. In this mechanism, plant cells react to changes in humidity. When the surrounding atmosphere has a high humidity, the plant cells swell, but when the surrounding atmosphere has a low humidity, the plant cells shrink. This volume change has been observed in pollen grains and pine cone scales.
Similar approaches to hydraulic soft joints can also be derived from arachnid locomotion, in which strong and precise control over a joint is achieved primarily through compressed hemolymph.
Manufacturing
Conventional manufacturing techniques, such as subtractive techniques like drilling and milling, are unhelpful when it comes to constructing soft robots as these robots have complex shapes with deformable bodies. Therefore, more advanced manufacturing techniques have been developed. Those include Shape Deposition Manufacturing (SDM), the Smart Composite Microstructure (SCM) process, and 3D multi-material printing.
SDM is a type of rapid prototyping whereby deposition and machining occur cyclically. Essentially, one deposits a material, machines it, embeds a desired structure, deposits a support for said structure, and then further machines the product to a final shape that includes the deposited material and the embedded part. Embedded hardware includes circuits, sensors, and actuators, and scientists have successfully embedded controls inside of polymeric materials to create soft robots, such as the Stickybot and the iSprawl.
SCM is a process whereby one combines rigid bodies of carbon fiber reinforced polymer (CFRP) with flexible polymer ligaments. The flexible polymer acts as joints for the skeleton. With this process, an integrated structure of the CFRP and polymer ligaments is created through the use of laser machining followed by lamination. This SCM process is utilized in the production of mesoscale robots as the polymer connectors serve as low-friction alternatives to pin joints.
Additive manufacturing processes such as 3D printing can now be used to print a wide range of silicone inks using techniques such as direct ink writing (DIW, also known as Robocasting). This manufacturing route allows for a seamless production of fluidic elastomer actuators with locally defined mechanical properties. It further enables a digital fabrication of pneumatic silicone actuators exhibiting programmable bioinspired architectures and motions.
A wide range of fully functional soft robots have been printed using this method, including ones exhibiting bending, twisting, grabbing and contracting motion. This technique avoids some of the drawbacks of conventional manufacturing routes, such as delamination between glued parts. Another additive manufacturing method produces shape-morphing materials that are photosensitive, thermally activated, or water-responsive. Essentially, these polymers can automatically change shape upon interaction with water, light, or heat. One such example of a shape-morphing material was created through the use of light-reactive ink-jet printing onto a polystyrene target.
Additionally, shape memory polymers that comprise two different components, a skeleton and a hinge material, have been rapidly prototyped. Upon printing, the material is heated to a temperature higher than the glass transition temperature of the hinge material. This allows deformation of the hinge material while not affecting the skeleton material. Further, this polymer can be continually reformed through heating.
Control methods and materials
All soft robots require an actuation system to generate reaction forces, to allow for movement and interaction with their environment. Due to the compliant nature of these robots, soft actuation systems must be able to move without the use of rigid materials that would act as the bones in organisms, or the metal frame that is common in rigid robots. For actuation that involves bending, some sort of stress difference must be created across the component, such that the system has a tendency to bend towards a certain shape to relieve this stress. Nevertheless, several control solutions to the soft actuation problem exist and have found use, each with its own advantages and disadvantages. Some examples of control methods and the appropriate materials are listed below.
Electric field
One example is the utilization of electrostatic force, which can be applied in:
Dielectric elastomer actuators (DEAs) use a high-voltage electric field in order to change their shape. These actuators can produce high forces, have high specific power (W kg−1), produce large strains (>1000%), possess high energy density (>3 MJ m−3), exhibit self-sensing, and achieve fast actuation rates (10 ms - 1 s). However, the need for high voltages quickly becomes the limiting factor in potential practical applications. Additionally, these systems often exhibit leakage currents, tend to have electrical breakdowns (dielectric failure follows Weibull statistics, so the probability increases with increased electrode area), and require pre-strain for the greatest deformation. Some newer research shows that there are ways of overcoming some of these disadvantages, as shown e.g. in Peano-HASEL actuators, which incorporate liquid dielectrics and thin shell components. This approach lowers the applied voltage needed, as well as allowing for self-healing during electrical breakdown. A first-order estimate of DEA actuation pressure is sketched below.
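A common first-order estimate of DEA actuation is the electrostatic (Maxwell) pressure p = ε0·εr·(V/d)², a standard textbook relation rather than anything specific to the actuators above; the sketch below evaluates it for illustrative values of voltage, film thickness and relative permittivity.

    EPS0 = 8.854e-12   # vacuum permittivity, F/m

    def maxwell_pressure(voltage, thickness, eps_r):
        # Equivalent electrostatic pressure p = eps0 * eps_r * (V / d)^2, in Pa
        field = voltage / thickness
        return EPS0 * eps_r * field ** 2

    # Example: 3 kV across a 50-micrometre elastomer film with eps_r = 4.7
    print(maxwell_pressure(3e3, 50e-6, 4.7))   # roughly 1.5e5 Pa, i.e. ~0.15 MPa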
Thermal
Shape memory polymers (SMPs) are smart and reconfigurable materials that serve as an excellent example of thermal actuators. These materials "remember" their original shape and revert to it upon a temperature increase. For example, crosslinked polymers can be strained at temperatures above their glass transition (Tg) or melting transition (Tm) and then cooled down. When the temperature is increased again, the strain is released and the material's shape changes back to the original. This suggests only a single irreversible movement, but materials with up to 5 temporary shapes have been demonstrated. One of the simplest and best known examples of shape memory polymers is a toy called Shrinky Dinks, made of a pre-stretched polystyrene (PS) sheet from which shapes can be cut out that shrink significantly when heated. Actuators produced using these materials can achieve strains up to 1000% and have demonstrated a broad range of energy density between <50 kJ m−3 and up to 2 MJ m−3. Definite downsides of SMPs include their slow response (>10 s) and the typically low forces generated. Examples of SMPs include polyurethane (PU), polyethylene terephthalate (PET), polyethylene oxide (PEO) and others.
Shape memory alloys are behind another control system for soft robotic actuation. Although made of metal, a traditionally rigid material, the springs are made from very thin wires and are just as compliant as other soft materials. These springs have a very high force-to-mass ratio, but stretch through the application of heat, which is inefficient energy-wise.
Pressure difference
Pneumatic artificial muscles, another control method used in soft robots, rely on changing the pressure inside a flexible tube. In this way the tube acts as a muscle, contracting and extending and thus applying force to whatever it is attached to. Through the use of valves, the robot may maintain a given shape using these muscles with no additional energy input. However, this method generally requires an external source of compressed air to function. The proportional-integral-derivative (PID) controller is the most commonly used algorithm for pneumatic muscles; the dynamic response of pneumatic muscles can be modulated by tuning the parameters of the PID controller, as sketched below.
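The sketch below is a minimal PID loop tracking a pressure setpoint; the gains, the time step and the first-order "valve and muscle" model are illustrative assumptions, not a model of any particular actuator.

    def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
        # One PID update: proportional, integral and derivative terms
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev_error": 0.0}
    pressure, setpoint = 0.0, 1.5            # bar, illustrative
    for _ in range(2000):
        u = pid_step(setpoint - pressure, state)
        pressure += 0.01 * (u - 0.2 * pressure)   # toy first-order plant
    print(round(pressure, 3))                # settles near the 1.5 bar setpoint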
Sensors
Sensors are among the most important components of robots. Unsurprisingly, soft robots ideally use soft sensors, which usually measure deformation and thereby allow the robot's position or stiffness to be inferred.
Here are a few examples of soft sensors:
Soft stretch sensors
Soft bending sensors
Soft pressure sensors
Soft force sensors
These sensors rely on measures of:
Piezoresistivity:
polymer filled with conductive particles,
microfluidic pathways (liquid metal, ionic solution),
Piezoelectricity,
Capacitance,
Magnetic fields,
Optical loss,
Acoustic loss.
These measurements can then be fed into a control system.
Uses and applications
Surgical assistance
Soft robots can be implemented in the medical profession, specifically for invasive surgery. Soft robots can be made to assist surgeries due to their shape changing properties. Shape change is important as a soft robot could navigate around different structures in the human body by adjusting its form. This could be accomplished through the use of fluidic actuation.
Exosuits
Soft robots may also be used for the creation of flexible exosuits, for rehabilitation of patients, assisting the elderly, or simply enhancing the user's strength. A team from Harvard created an exosuit using these materials in order to give the advantages of the additional strength provided by an exosuit, without the disadvantages that come with how rigid materials restrict a person's natural movement. By contrast, conventional rigid exosuits, also called exoskeletons, are metal frameworks fitted with motorized muscles to multiply the wearer's strength, with a framework that somewhat mirrors the wearer's internal skeletal structure.
The suit makes lifted objects feel much lighter, and sometimes even weightless, reducing injuries and improving compliance.
Collaborative robots
Traditionally, manufacturing robots have been isolated from human workers due to safety concerns, as a rigid robot colliding with a human could easily lead to injury due to the fast-paced motion of the robot. However, soft robots could work alongside humans safely, as in a collision the compliant nature of the robot would prevent or minimize any potential injury.
Bio-mimicry
An application of bio-mimicry via soft robotics is in ocean or space exploration. In the search for extraterrestrial life, scientists need to know more about extraterrestrial bodies of water, as water is the source of life on Earth. Soft robots could be used to mimic sea creatures that can efficiently maneuver through water. Such a project was attempted by a team at Cornell in 2015 under a grant through NASA's Innovative Advanced Concepts (NIAC). The team set out to design a soft robot that would mimic a lamprey or cuttlefish in the way it moved underwater, in order to efficiently explore the ocean below the ice layer of Jupiter's moon, Europa. But exploring a body of water, especially one on another planet, comes with a unique set of mechanical and materials challenges. In 2021, scientists demonstrated a bioinspired self-powered soft robot for deep-sea operation that can withstand the pressure at the deepest part of the ocean at the Mariana Trench. The robot features artificial muscles and wings out of pliable materials and electronics distributed within its silicone body. It could be used for deep-sea exploration and environmental monitoring. In 2021, a team from Duke University reported a dragonfly-shaped soft robot, termed DraBot, with capabilities to watch for acidity changes, temperature fluctuations, and oil pollutants in water.
Cloaking
Soft robots that look like animals or are otherwise hard to identify could be used for surveillance and a range of other purposes. They could also be used for ecological studies such as amid wildlife. Soft robots could also enable novel artificial camouflage.
Robot components
Artificial muscle
Robot skin with tactile perception
Electronic skin
Qualitative benefits
Benefits of soft robot designs over fully conventional robot designs include lighter weight (heavy payloads are expensive to launch) and increased safety (robots may work alongside astronauts).
Mechanical considerations in design
Fatigue failure from flexing
Soft robots, particularly those designed to imitate life, often must experience cyclic loading in order to move or do the tasks for which they were designed. For example, in the case of the lamprey- or cuttlefish-like robot described above, motion would require electrolyzing water and igniting gas, causing a rapid expansion to propel the robot forward. This repetitive and explosive expansion and contraction would create an environment of intense cyclic loading on the chosen polymeric material. A robot in a remote underwater location or on a remote planetary body like Europa would be practically impossible to patch up or replace, so care would need to be taken to choose a material and design that minimizes the initiation and propagation of fatigue cracks. In particular, one should choose a material with a fatigue limit, that is, a stress amplitude below which fatigue failure does not occur even after a very large number of loading cycles.
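As background, and as standard fatigue analysis rather than anything specific to these robots, high-cycle fatigue data are often summarized by Basquin's relation

    σ_a = σ'_f · (2N_f)^b

where σ_a is the applied stress amplitude, N_f the number of cycles to failure, and σ'_f and b fitted material constants; a material with a true fatigue limit shows a plateau in its S-N curve, a stress amplitude below which N_f becomes effectively unlimited.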
Brittle failure when cold
Secondly, because soft robots are made of highly compliant materials, one must consider temperature effects. The yield stress of a material tends to decrease with temperature, and in polymeric materials this effect is even more extreme. At room temperature and higher temperatures, the long chains in many polymers can stretch and slide past each other, preventing the local concentration of stress in one area and making the material ductile. But most polymers undergo a ductile-to-brittle transition temperature below which there is not enough thermal energy for the long chains to respond in that ductile manner, and fracture is much more likely. The tendency of polymeric materials to turn brittle at cooler temperatures is in fact thought to be responsible for the Space Shuttle Challenger disaster, and must be taken very seriously, especially for soft robots that will be implemented in medicine. A ductile-to-brittle transition temperature need not be what one might consider "cold," and is in fact characteristic of the material itself, depending on its crystallinity, toughness, side-group size (in the case of polymers), and other factors.
International journals
Soft Robotics (SoRo)
Soft Robotics section of Frontiers in Robotics and AI
Science Robotics
International events
2018 Robosoft, first IEEE International Conference on Soft Robotics, April 24–28, 2018, Livorno, Italy
2017 IROS 2017 Workshop on Soft Morphological Design for Haptic Sensation, Interaction and Display, 24 September 2017, Vancouver, BC, Canada
2016 First Soft Robotics Challenge, April 29–30, Livorno, Italy
2016 Soft Robotics week, April 25–30, Livorno, Italy
2015 "Soft Robotics: Actuation, Integration, and Applications – Blending research perspectives for a leap forward in soft robotics technology" at ICRA2015, Seattle WA
2014 Workshop on Advances on Soft Robotics, 2014 Robotics Science and Systems (RSS) Conference, Berkeley, CA, July 13, 2014
2013 International Workshop on Soft Robotics and Morphological Computation, Monte Verità, July 14–19, 2013
2012 Summer School on Soft Robotics, Zurich, June 18–22, 2012
In popular culture
The 2014 Disney film Big Hero 6 features a soft robot, Baymax, originally designed for use in the healthcare industry. In the film, Baymax is portrayed as a large yet unintimidating robot with an inflated vinyl exterior surrounding a mechanical skeleton. The basis of Baymax concept comes from real life research on applications of soft robotics in the healthcare field, such as roboticist Chris Atkeson's work at Carnegie Mellon's Robotics Institute.
The 2018 animated Sony film Spider-Man: Into the Spider-Verse features a female version of the supervillain Doctor Octopus that uses tentacles built with soft robotics to subdue her foes.
In episode 4 of the animated series Helluva Boss, inventor Loopty Goopty uses tentacles with soft robotics tipped with various weapons to threaten the members of the I.M.P into murdering his friend, Lyle Lipton.
See also
Articulated soft robotics
Octobot (robot)
Bio-inspired robotics
Bionics
Biorobotics
Home robot
Robotic materials
Soft Growing Robotics
Magnetic slime robot
Wetware computer
Biosensor
Robotic sensing
External links
Soft Robot - A Review (Elveflow)
Dielectric elastomer actuators (softroboticstoolkit.com)
HASEL actuators: soft muscles (nextbigfuture.com).
References
Robotics
Robot kinematics
Biorobotics
Articles containing video clips | Soft robotics | Engineering | 3,872 |
3,787,114 | https://en.wikipedia.org/wiki/Venice%20Charter | The Venice Charter for the Conservation and Restoration of Monuments and Sites is a set of guidelines, drawn up in 1964 by a group of conservation professionals in Venice, that provides an international framework for the conservation and restoration of historic buildings. However, the document is now seen by some as outdated, representing Modernist views opposed to reconstruction. Reconstruction is now cautiously accepted by UNESCO in exceptional circumstances if it seeks to reflect a pattern of use or cultural practice that sustains cultural value, and is based on complete documentation without reliance on conjecture. The change in attitude can be marked by the reconstruction in 2015 of the Sufi mausoleums at the Timbuktu World Heritage Site in Mali after their destruction in 2012.
Historic background
Athens Charter
The development of new conservation and restoration techniques posed a general threat to historic buildings. In 1931, the International Museum Office organized a meeting of specialists about the conservation of historic buildings. The conference resulted in the Athens Charter for the Restoration of Historic Monuments, a manifesto of seven points:
to establish organizations for restoration advice
to ensure projects are reviewed with knowledgeable criticism
to establish national legislation to preserve historic sites
to rebury excavations which were not to be restored.
to allow the use of modern techniques and materials in restoration work.
to place historical sites under custodial protection.
to protect the area surrounding historic sites.
The Athens Charter proposed the idea of a common world heritage, the importance of the setting of monuments, and the principle of the integration of new materials. The Charter had very progressive suggestions for its period, influencing the creation of conservation institutions, as well as the eventual Venice Charter.
First International Congress of Architects and Specialists of Historic Buildings
With the concern that listing and safeguarding historic buildings was not enough, in 1957 architectural specialists arranged a congress in Paris called The First International Congress of Architects and Specialists of Historic Buildings. At its conclusion, the congress published seven recommendations:
the countries which still lack a central organization for the protection of historic buildings provide for the establishment of such an authority,
the creation of an international assembly of architects and specialists of historic buildings should be considered,
a specialized professional training of all categories of personnel should be promoted so as to secure highly qualified workmanship and that remuneration should be commensurate with such qualifications,
the hygrometric problems relating to historic buildings should be discussed in a symposium,
contemporary artists should be requested to contribute to the decoration of monuments,
close cooperation should be established among architects and archeologists,
architects and town-planners cooperate so as to secure integration of historic buildings into town planning.
The Congress agreed to hold its second meeting in Venice, and Piero Gazzola, who was to serve as chairman of the committee that drafted the Venice Charter, was invited to organize the Venice congress.
Second International Congress of Architects and Specialists of Historic Buildings
In 1964, at the Second International Congress of Architects and Specialists of Historic Buildings, 13 resolutions were adopted of which the first was the Venice Charter and the second was creation of ICOMOS (International Council on Monuments and Sites).
The Venice Charter consists of seven main headings and sixteen articles. Historic monuments and sites were interpreted as common heritage, and safeguarding them for future generations, with their authenticity intact, was defined as a common responsibility. The following text is the original 1964 text agreed on by the representatives of the participating nations mentioned at the end of the Charter.
Venice Charter Text
Definitions
Article 1. The concept of a historic monument embraces not only the single architectural work but also the urban or rural setting in which is found the evidence of a particular civilization, a significant development or a historic event. This applies not only to great works of art but also to more modest works of the past which have acquired cultural significance with the passing of time.
Article 2. The conservation and restoration of monuments must have recourse to all the sciences and techniques which can contribute to the study and safeguarding of the architectural heritage.
Aim
Article 3. The intention in conserving and restoring monuments is to safeguard them no less as works of art than as historical evidence.
Conservation
Article 4. It is essential to the conservation of monuments that they be maintained on a permanent basis.
Article 5. The conservation of monuments is always facilitated by making use of them for some socially useful purpose. Such use is therefore desirable but it must not change the lay-out or decoration of the building. It is within these limits only that modifications demanded by a change of function should be envisaged and may be permitted.
Article 6. The conservation of a monument implies preserving a setting which is not out of scale. Wherever the traditional setting exists, it must be kept. No new construction, demolition or modification which would alter the relations of mass and color must be allowed.
Article 7. A monument is inseparable from the history to which it bears witness and from the setting in which it occurs. The moving of all or part of a monument cannot be allowed except where the safeguarding of that monument demands it or where it is justified by national or international interest of paramount importance.
Article 8. Items of sculpture, painting or decoration which form an integral part of a monument may only be removed from it if this is the sole means of ensuring their preservation.
Restoration
Article 9. The process of restoration is a highly specialized operation. Its aim is to preserve and reveal the aesthetic and historic value of the monument and is based on respect for original material and authentic documents. It must stop at the point where conjecture begins, and in this case moreover any extra work which is indispensable must be distinct from the architectural composition and must bear a contemporary stamp. The restoration in any case must be preceded and followed by an archaeological and historical study of the monument.
Article 10. Where traditional techniques prove inadequate, the consolidation of a monument can be achieved by the use of any modern technique for conservation and construction, the efficacy of which has been shown by scientific data and proved by experience.
Article 11. The valid contributions of all periods to the building of a monument must be respected, since unity of style is not the aim of a restoration. When a building includes the superimposed work of different periods, the revealing of the underlying state can only be justified in exceptional circumstances and when what is removed is of little interest and the material which is brought to light is of great historical, archaeological or aesthetic value, and its state of preservation good enough to justify the action. Evaluation of the importance of the elements involved and the decision as to what may be destroyed cannot rest solely on the individual in charge of the work.
Article 12. Replacements of missing parts must integrate harmoniously with the whole, but at the same time must be distinguishable from the original so that restoration does not falsify the artistic or historic evidence.
Article 13. Additions cannot be allowed except in so far as they do not detract from the interesting parts of the building, its traditional setting, the balance of its composition and its relation with its surroundings.
Historic Sites
Article 14. The sites of monuments must be the object of special care in order to safeguard their integrity and ensure that they are cleared and presented in a seemly manner. The work of conservation and restoration carried out in such places should be inspired by the principles set forth in the foregoing articles.
Excavations
Article 15. Excavations should be carried out in accordance with scientific standards and the recommendation defining international principles to be applied in the case of archaeological excavation adopted by UNESCO in 1956.
Ruins must be maintained and measures necessary for the permanent conservation and protection of architectural features and of objects discovered must be taken. Furthermore, every means must be taken to facilitate the understanding of the monument and to reveal it without ever distorting its meaning.
All reconstruction work should however be ruled out "a priori." Only anastylosis, that is to say, the reassembling of existing but dismembered parts can be permitted. The material used for integration should always be recognizable and its use should be the least that will ensure the conservation of a monument and the reinstatement of its form.
Publication
Article 16. In all works of preservation, restoration or excavation, there should always be precise documentation in the form of analytical and critical reports, illustrated with drawings and photographs. Every stage of the work of clearing, consolidation, rearrangement and integration, as well as technical and formal features identified during the course of the work, should be included. This record should be placed in the archives of a public institution and made available to research workers. It is recommended that the report should be published.
The Committee
The following persons took part in the work of the Committee for drafting the International Charter for the Conservation and Restoration of Monuments:
Piero Gazzola (Italy), Chairman
Raymond M. Lemaire (Belgium), Reporter
Jose Bassegoda-Nonell (Spain)
Luis Benavente (Portugal)
Djurdje Boskovic (Yugoslavia)
Hiroshi Daifuku (UNESCO)
P.L de Vrieze (Netherlands)
Harald Langberg (Denmark)
Mario Matteucci (Italy)
Jean Merlet (France)
Carlos Flores Marini (Mexico)
Roberto Pane (Italy)
S.C.J. Pavel (Czechoslovakia)
Paul Philippot (ICCROM)
Victor Pimentel (Peru)
Harold Plenderleith (United Kingdom & ICCROM)
Deoclecio Redig de Campos (Vatican)
Jean Sonnier (France)
Francois Sorlin (France)
Eustathios Stikas (Greece)
Mrs. Gertrud Tripp (Austria)
Jan Zachwatowicz (Poland)
Mustafa S. Zbiss (Tunisia)
Outcome
The Venice Charter is the most influential document on conservation since 1964. However the following aspects are not covered in the Venice Charter:
The concept of site which also applies to historic landscapes and gardens
The concept of reversibility in restoration
The social and financial issues
In the years after its publication, a number of symposiums took place in order to improve common understanding and awareness of the charter's purpose among those involved in the conservation and restoration of historic buildings. How it was applied in different countries varied according to their social, economic and cultural conditions, as well as the technical qualifications of those applying it. Translation mistakes and misunderstandings of the Charter also led to differences in its application.
Criticism
The Venice Charter and its subsequent interpretations have attracted criticism, especially by those who perceive it was built upon the Modernist biases of its creators. Professor of architecture Samir Younés has written: "The Charter’s abhorrence of restoration and reconstruction – with its implicit fear of "false history" – reflects the Modernist theory of historical determinism, rather than the idea of a living architectural tradition. Major advances over the last 40 years in traditional design fluency and building crafts skills have undercut and outmoded many of the assumptions implicit in the Venice Charter. As a result, many now believe that visual harmony, aesthetic balance and the essential character of a place are of greater importance than abstract Modernist theories."
Issue is taken particularly with the words in Article 9: "Any extra work which is indispensable must be distinct from the architectural composition and must bear a contemporary stamp." This declaration has had a major impact on the management of historic buildings globally. In the U.S., for example, it shaped the Secretary of the Interior’s Standard #9 so it stated "...new work shall be differentiated from the old". It has been commonly interpreted to mean that interventions and additions have to be in Modernist styles, rather than being discreetly indicated by such devices as dated cornerstones and descriptive plaques. Many popular reconstructions now considered intrinsic to their locations, such as the 1912 rebuilding of the Campanile di San Marco in Venice, would violate the Venice Charter’s dictum: “All reconstruction work should however be ruled out "a priori".
Because of concern over the damage being done to historic settings by the Venice Charter's misapplication, in 2006 another conference was held in Venice under the auspices of INTBAU (the International Network for Traditional Building, Architecture & Urbanism). Its principal objective was to provide a theoretical framework that would enable new buildings and additions to be in greater harmony with their historic surroundings.
Critics of the Venice Charter point to the 2005 Charleston Charter as providing preferred guidelines for dealing with historic areas. It states: “New construction in historic settings, including alterations and additions to existing buildings, should not arbitrarily impose contrasting materials, scales, or design vocabularies, but clarify and extend the character of the place, seeking always continuity and wholeness in the built environment.”
Revisions
Beginning with the World Heritage Convention (1972), some of the limited explanations in the Venice Charter were revised. The understanding of cultural heritage, which was expressed as historic monuments, was categorized as monuments, groups of buildings and sites. Later on The Nara Document on Authenticity (1994) carried out the responsibility to clarify the authenticity related issues which were expressed in the articles 6 and 7 of the Venice Charter.
In the Naples ICOMOS meeting on 7 November 1995, the question 'Should there be a review of the Venice Charter?' was discussed with the participation of Raymond Lemaire, the reporter of the Venice Charter in 1964. Thirty years after the Venice Charter, Lemaire declared that: "Charters are fashionable. They are considered to contribute to directing action. However they never contain more than the minimum on which the majority has agreed. Only exceptionally do they cover the whole of the issue which concerns them. This is the case with the Venice Charter." He further stated his opinions about the present understanding of monuments and their restoration. He pointed out the necessity of a new document, or an effective adaptation, with consideration of the need "to be addressed with caution and wisdom, with respect for all cultures and above all with ethical and intellectual discipline."
The Venice Charter has itself become an historic document. While some of its guidelines are considered to have proven their worth by both its supporters and critics, there are now plans for it to be rewritten.
Bibliography
Hardy, Matthew The Venice Charter Revisited: Modernism, Conservation and Tradition in the 21st Century, foreword by HRH The Prince of Wales, Cambridge Scholars Publishing, Newcastle upon Tyne, UK; 2008,
Stubbs, John H. Time Honored: A Global View of Architectural Conservation, John Wiley & Sons; Hoboken, New Jersey; 2009
See also
International Council on Monuments and Sites
Athens Charter
Cultural heritage
Building restoration
Historic preservation
Barcelona Charter – European Charter for the Conservation and Restoration of Traditional Ships in Operation
Convention Concerning the Protection of the World Cultural and Natural Heritage (1972 World Heritage Convention)
External links
Venice Charter
ICOMOS (International Council on Monuments and Sites)
Florence Charter on the preservation of historic gardens
Nara Document on Authenticity
Venice Charter: Detailed website about Venice Charter with preamble and all 16 articles in 7 languages as well as an extended model concerning building in existing fabric and historical environment (partially in English).
References
Urban planning
Architectural history
Historic preservation
Treaties concluded in 1964
International cultural heritage documents
Conservation and restoration of cultural heritage | Venice Charter | Engineering | 3,048 |
3,277,814 | https://en.wikipedia.org/wiki/Fluid%20coupling | A fluid coupling or hydraulic coupling is a hydrodynamic or 'hydrokinetic' device used to transmit rotating mechanical power. It has been used in automobile transmissions as an alternative to a mechanical clutch. It also has widespread application in marine and industrial machine drives, where variable speed operation and controlled start-up without shock loading of the power transmission system is essential.
Hydrokinetic drives, such as this, should be distinguished from hydrostatic drives, such as hydraulic pump and motor combinations.
History
The fluid coupling originates from the work of Hermann Föttinger, who was the chief designer at the AG Vulcan Works in Stettin. His patents from 1905 covered both fluid couplings and torque converters.
Dr Gustav Bauer of the Vulcan-Werke collaborated with English engineer Harold Sinclair of Hydraulic Coupling Patents Limited to adapt the Föttinger coupling to vehicle transmission, in an attempt to mitigate the lurching Sinclair had experienced while riding on London buses during the 1920s. Following Sinclair's discussions with the London General Omnibus Company, begun in October 1926, and trials on an Associated Daimler bus chassis, Percy Martin of Daimler decided to apply the principle to the Daimler group's private cars.
During 1930 The Daimler Company of Coventry, England began to introduce a transmission system using a fluid coupling and Wilson self-changing gearbox for buses and their flagship cars. By 1933 the system was used in all new Daimler, Lanchester and BSA vehicles produced by the group from heavy commercial vehicles to small cars. It was soon extended to Daimler's military vehicles and in 1934 was featured in the Singer Eleven branded as Fluidrive. These couplings are described as constructed under Vulcan-Sinclair and Daimler patents.
In 1939 General Motors Corporation introduced Hydramatic drive, the first fully automatic automotive transmission system installed in a mass-produced automobile. The Hydramatic employed a fluid coupling.
The first diesel locomotives using fluid couplings were also produced in the 1930s.
Overview
A fluid coupling consists of three components, plus the hydraulic fluid:
The housing, also known as the shell (which must have an oil-tight seal around the drive shafts), contains the fluid and turbines.
Two turbines (fanlike components):
One connected to the input shaft, known as the pump, impeller, primary wheel or input turbine.
The other connected to the output shaft, known as the turbine, output turbine, secondary wheel or runner.
The driving turbine, known as the 'pump', (or driving torus) is rotated by the prime mover, which is typically an internal combustion engine or electric motor. The impeller's motion imparts both outwards linear and rotational motion to the fluid.
The hydraulic fluid is directed by the 'pump', whose shape forces the flow in the direction of the 'output turbine' (or driven torus). Here, any difference in the angular velocities of the 'input stage' and 'output stage' results in a net force on the 'output turbine', causing a torque and thus causing it to rotate in the same direction as the pump.
The motion of the fluid is effectively toroidal - travelling in one direction on paths that can be visualised as being on the surface of a torus:
If there is a difference between input and output angular velocities the motion has a poloidal component
If the input and output stages have identical angular velocities there is no net centripetal force, and the motion of the fluid is circular and co-axial with the axis of rotation (i.e. round the edges of a torus); there is no flow of fluid from one turbine to the other.
Stall speed
An important characteristic of a fluid coupling is its stall speed. The stall speed is defined as the highest speed at which the pump can turn when the output turbine is locked and full input torque (at the stall speed) is applied. Under stall conditions all of the engine's power at that speed would be dissipated in the fluid coupling as heat, possibly leading to damage.
Step-circuit coupling
A modification to the simple fluid coupling is the step-circuit coupling which was formerly manufactured as the "STC coupling" by the Fluidrive Engineering Company.
The STC coupling contains a reservoir to which some, but not all, of the oil gravitates when the output shaft is stalled. This reduces the "drag" on the input shaft, resulting in reduced fuel consumption when idling and a reduction in the vehicle's tendency to "creep".
When the output shaft begins to rotate, the oil is thrown out of the reservoir by centrifugal force, and returns to the main body of the coupling, so that normal power transmission is restored.
Slip
A fluid coupling cannot develop output torque when the input and output angular velocities are identical. Hence, a fluid coupling cannot achieve 100 percent power transmission efficiency. Due to slippage that will occur in any fluid coupling under load, some power will always be lost in fluid friction and turbulence, and dissipated as heat. Like other fluid dynamical devices, its efficiency tends to increase gradually with increasing scale, as measured by the Reynolds number.
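Because a simple fluid coupling transmits torque without multiplying it, the efficiency is just the output-to-input speed ratio and the slip power ends up as heat in the fluid. The short sketch below illustrates this bookkeeping; the speed and power figures are illustrative assumptions, not data for any particular coupling.

# Minimal sketch: slip, efficiency and heat dissipation in a fluid coupling.
# Assumes equal input and output torque (no torque multiplication); all
# numerical values are illustrative, not data for a real coupling.

def coupling_slip(n_in_rpm, n_out_rpm):
    """Fractional slip s = (N_in - N_out) / N_in."""
    return (n_in_rpm - n_out_rpm) / n_in_rpm

def coupling_losses(power_in_w, n_in_rpm, n_out_rpm):
    """Return (efficiency, power dissipated in the fluid as heat)."""
    s = coupling_slip(n_in_rpm, n_out_rpm)
    efficiency = 1.0 - s      # speed ratio equals power ratio at equal torque
    heat_w = power_in_w * s   # the slip power is lost to fluid friction
    return efficiency, heat_w

if __name__ == "__main__":
    eff, heat = coupling_losses(power_in_w=50_000, n_in_rpm=2000, n_out_rpm=1900)
    print(f"slip = {coupling_slip(2000, 1900):.1%}, efficiency = {eff:.1%}, heat = {heat:.0f} W")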
Hydraulic fluid
As a fluid coupling operates kinetically, low-viscosity fluids are preferred. Generally speaking, multi-grade motor oils or automatic transmission fluids are used. Increasing density of the fluid increases the amount of torque that can be transmitted at a given input speed. However, hydraulic fluids, much like other fluids, are subject to changes in viscosity with temperature change. This leads to a change in transmission performance and so where unwanted performance/efficiency change has to be kept to a minimum, a motor oil or automatic transmission fluid with a high viscosity index should be used.
Hydrodynamic braking
Fluid couplings can also act as hydrodynamic brakes, dissipating rotational energy as heat through frictional forces (both viscous and fluid/container). When a fluid coupling is used for braking it is also known as a retarder.
Scoop control
Correct operation of a fluid coupling depends on it being correctly filled with fluid. An under-filled coupling will be unable to transmit the full torque, and the limited fluid volume is also likely to overheat, often with damage to the seals.
If a coupling is deliberately designed to operate safely when under-filled, usually by providing an ample fluid reservoir which is not engaged with the impeller, then controlling its fill level may be used to control the torque which it can transmit, and in some cases to also control the speed of a load.
Controlling the fill level is done with a 'scoop', a non-rotating pipe which enters the rotating coupling through a central, fixed hub. By moving this scoop, either rotating it or extending it, it scoops up fluid from the coupling and returns it to a holding tank outside the coupling. The oil may be pumped back into the coupling when needed, or some designs use a gravity feed - the scoop's action is enough to lift fluid into this holding tank, powered by the coupling's rotation.
Scoop control can be used for easily managed and stepless control of the transmission of very large torques. The Fell diesel locomotive, a British experimental diesel railway locomotive of the 1950s, used four engines and four couplings, each with independent scoop control, to engage each engine in turn. It is commonly used to provide variable speed drives.
Applications
Industrial
Fluid couplings are used in many industrial applications involving rotational power, especially in machine drives that involve high-inertia starts or constant cyclic loading.
Rail transportation
Fluid couplings are found in some Diesel locomotives as part of the power transmission system. Self-Changing Gears made semi-automatic transmissions for British Rail, and Voith manufacture turbo-transmissions for diesel multiple units which contain various combinations of fluid couplings and torque converters.
Automotive
Fluid couplings were used in a variety of early semi-automatic transmissions and automatic transmissions. Since the late 1940s, the hydrodynamic torque converter has replaced the fluid coupling in automotive applications.
In automotive applications, the pump typically is connected to the flywheel of the engine—in fact, the coupling's enclosure may be part of the flywheel proper, and thus is turned by the engine's crankshaft. The turbine is connected to the input shaft of the transmission. While the transmission is in gear, as engine speed increases, torque is transferred from the engine to the input shaft by the motion of the fluid, propelling the vehicle. In this regard, the behaviour of the fluid coupling strongly resembles that of a mechanical clutch driving a manual transmission.
Fluid flywheels, as distinct from torque converters, are best known for their use in Daimler cars in conjunction with a Wilson pre-selector gearbox. Daimler used these throughout their range of luxury cars, until switching to automatic gearboxes with the 1958 Majestic. Daimler and Alvis were both also known for their military vehicles and armoured cars, some of which also used the combination of pre-selector gearbox and fluid flywheel.
Aviation
The most prominent use of fluid couplings in aeronautical applications was in the DB 601, DB 603 and DB 605 engines, where it was used as a barometrically controlled hydraulic clutch for the centrifugal compressor, and in the Wright turbo-compound reciprocating engine, in which three power recovery turbines extracted approximately 20 percent of the energy from the engine's exhaust gases and then, using three fluid couplings and gearing, converted low-torque high-speed turbine rotation to low-speed, high-torque output to drive the propeller.
Calculations
Generally speaking, the power transmitting capability of a given fluid coupling is strongly related to pump speed, a characteristic that generally works well with applications where the applied load does not fluctuate to a great degree. The torque transmitting capacity of any hydrodynamic coupling can be described by the expression T ∝ ρN²D⁵, where ρ is the mass density of the fluid (kg/m3), N is the impeller speed (rpm), and D is the impeller diameter (m). In the case of automotive applications, where loading can vary to considerable extremes, this expression is only an approximation. Stop-and-go driving will tend to operate the coupling in its least efficient range, causing an adverse effect on fuel economy.
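As a rough illustration of this scaling, the sketch below evaluates the proportionality with a hypothetical coefficient k that absorbs the units; k is not a published value and would have to be fitted from test data for a real coupling family.

# Illustrative use of the scaling T ~ k * rho * N^2 * D^5 for a fluid coupling.
# k is a hypothetical empirical coefficient (absorbing unit conversions); it is
# NOT a published value and must be fitted from test data for a real coupling.

def coupling_torque(rho_kg_m3, n_rpm, d_m, k=1.0e-6):
    """Estimate transmissible torque from fluid density, impeller speed and diameter."""
    return k * rho_kg_m3 * n_rpm**2 * d_m**5

if __name__ == "__main__":
    base = coupling_torque(rho_kg_m3=860, n_rpm=2000, d_m=0.30)
    bigger = coupling_torque(rho_kg_m3=860, n_rpm=2000, d_m=0.33)  # +10% diameter
    print(f"base torque estimate: {base:.1f} (arbitrary units)")
    print(f"+10% diameter -> x{bigger / base:.2f} capacity (D^5 scaling)")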
Manufacture
Fluid couplings are relatively simple components to produce. For example, the turbines can be aluminium castings or steel stampings and the housing can also be a casting or made from stamped or forged steel.
Manufacturers of industrial fluid couplings include Voith, Transfluid, TwinDisc, Siemens, Parag, Fluidomat, Reuland Electric and TRI Transmission and Bearing Corp.
Patents
List of fluid coupling patents.
This is not an exhaustive list but is intended to give an idea of the development of fluid couplings in the 20th century.
See also
Torque amplifier
Torque converter
Water brake
Notes
References
External links
Fluid Coupling, The Principles of Operation, film
Rotating shaft couplings
Mechanical power transmission
Automotive transmission technologies | Fluid coupling | Physics | 2,229 |
8,350,306 | https://en.wikipedia.org/wiki/Spray%20tower | A spray tower (or spray column or spray chamber) is a gas-liquid contactor used to achieve mass and heat transfer between a continuous gas phase (that can contain dispersed solid particles) and a dispersed liquid phase. It consists of an empty cylindrical vessel made of steel or plastic, and nozzles that spray liquid into the vessel. The inlet gas stream usually enters at the bottom of the tower and moves upward, while the liquid is sprayed downward from one or more levels. This flow of inlet gas and liquid in opposite directions is called countercurrent flow.
Overview
This type of technology can be used for example as a wet scrubber for air pollution control. Countercurrent flow exposes the outlet gas with the lowest pollutant concentration to the freshest scrubbing liquid.
Many nozzles are placed across the tower at different heights to spray all of the gas as it moves up through the tower. The reason for using many nozzles is to maximize the number of fine droplets impacting the pollutant particles and to provide a large surface area for absorbing gas.
Theoretically, the smaller the droplets formed, the higher the collection efficiency achieved for both gaseous and particulate pollutants. However, the liquid droplets must be large enough to not be carried out of the scrubber by the scrubbed outlet gas stream. Therefore, spray towers use nozzles that produce droplets that are usually 500–1000 μm in diameter. Although small in size, these droplets are large compared to those created in venturi scrubbers that are 10–50 μm in size. The gas velocity is kept low, from 0.3 to 1.2 m/s (1–4 ft/s), to prevent excess droplets from being carried out of the tower.
In order to maintain low gas velocities, spray towers must be larger than other scrubbers that handle similar gas stream flow rates. Another problem occurring in spray towers is that after the droplets have fallen a short distance, they tend to agglomerate or hit the walls of the tower. Consequently, the total liquid surface area for contact is reduced, reducing the collection efficiency of the scrubber.
In addition to a countercurrent-flow configuration, the flow in spray towers can be either a cocurrent or crosscurrent in configuration.
In cocurrent-flow spray towers, the inlet gas and liquid flow in the same direction. Because the gas stream does not "push" against the liquid sprays, the gas velocities through the vessels are higher than in countercurrent-flow spray towers. Consequently, cocurrent-flow spray towers are smaller than countercurrent-flow spray towers treating the same amount of exhaust flow. In crosscurrent-flow spray towers, also called horizontal-spray scrubbers, the gas and liquid flow in directions perpendicular to each other.
In this vessel, the gas flows horizontally through a number of spray sections. The amount and quality of liquid sprayed in each section can be varied, usually with the cleanest liquid (if recycled liquid is used) sprayed in the last set of sprays.
Particle collection
Spray towers are low energy scrubbers. Contacting power is much lower than in venturi scrubbers, and the pressure drops across such systems are generally less than 2.5 cm (1 in) of water. The collection efficiency for small particles is correspondingly lower than in more energy-intensive devices. They are adequate for the collection of coarse particles larger than 10–25 μm in diameter, although with increased liquid inlet nozzle pressures, particles with diameters of 2.0 μm can be collected.
Smaller droplets can be formed by higher liquid pressures at the nozzle. The highest collection efficiencies are achieved when small droplets are produced and the difference between the velocity of the droplet and the velocity of the upward-moving particles is high. Small droplets, however, have small settling velocities, so there is an optimum range of droplet sizes for scrubbers that work by this mechanism.
This range of droplet sizes is between 500 and 1,000 μm for gravity-spray (counter current) towers. The injection of water at very high pressures – 2070–3100 kPa (300–450 psi) – creates a fog of very fine droplets. Higher particle-collection efficiencies can be achieved in such cases since collection mechanisms other than inertial impaction occur. However, these spray nozzles may use more power to form droplets than would a venturi operating at the same collection efficiency.
Gas collection
Spray towers can be used for gas absorption, but they are not as effective as packed or plate towers. Spray towers can be very effective in removing pollutants if the pollutants are highly soluble or if a chemical reagent is added to the liquid.
For example, spray towers are used to remove HCl gas from the tail-gas exhaust in manufacturing hydrochloric acid. In the production of superphosphate used in manufacturing fertilizer, SiF4 and HF gases are vented from various points in the processes. Spray towers have been used to remove these highly soluble compounds. Spray towers are also used for odor removal in bone meal and tallow manufacturing industries by scrubbing the exhaust gases with a solution of KMnO4.
Because of their ability to handle large gas volumes in corrosive atmospheres, spray towers are also used in a number of flue-gas desulfurization systems as the first or second stage in the pollutant removal process.
In a spray tower, absorption can be increased by decreasing the size of the liquid droplets and/or increasing the liquid-to-gas ratio (L/G). However, to accomplish either of these, an increase in both power consumed and operating cost is required. In addition, the physical size of the spray tower will limit the amount of liquid and the size of droplets that can be used.
Maintenance problems
The main advantage of spray towers over other scrubbers is their completely open design; they have no internal parts except for the spray nozzles. This feature eliminates many of the scale buildup and plugging problems associated with other scrubbers. The primary maintenance problems are spray-nozzle plugging or eroding, especially when using recycled scrubber liquid. To reduce these problems, a settling or filtration system is used to remove abrasive particles from the recycled scrubbing liquid before pumping it back into the nozzles.
Summary
Spray towers are inexpensive control devices primarily used for gas conditioning (cooling or humidifying) or for first-stage particle or gas removal. They are also used in many flue-gas desulfurization systems to reduce plugging and scale buildup by pollutants.
Many scrubbing systems use sprays either prior to or in the bottom of the primary scrubber to remove large particles that could plug it.
Spray towers have been used effectively to remove large particles and highly soluble gases. The pressure drop across the towers is very low – usually less than 2.5 cm (1.0 in) of water; thus, scrubber operating costs are relatively low. However, the liquid pumping costs can be very high.
Spray towers are constructed in various sizes – small ones to handle small gas flows of 0.05 m3/s (106 ft3/min) or less, and large ones to handle large exhaust flows of 50 m3/s (106,000 ft3/min) or greater. Because of the low gas velocity required, units handling large gas flow rates tend to be large in size.
References
Bibliography
Gilbert, J. W. 1977. Jet venturi fume scrubbing. In P. N. Cheremisinoff and R. A. Young (Eds.), Air Pollution Control and Design Handbook. Part 2. New York: Marcel Dekker.
McIlvaine Company. 1974. The Wet Scrubber Handbook. Northbrook, IL: McIlvaine Company.
Richards, J. R. 1995. Control of Particulate Emissions (APTI Course 413). U.S. Environmental Protection Agency.
Richards, J. R. 1995. Control of Gaseous Emissions. (APTI Course 415). U.S. Environmental Protection Agency.
Pollution control technologies
Air pollution control systems
Scrubbers
Wet scrubbers
Liquid-phase contacting scrubbers | Spray tower | Chemistry,Engineering | 1,724 |
8,696,000 | https://en.wikipedia.org/wiki/CCL19 | Chemokine (C-C motif) ligand 19 (CCL19) is a protein that in humans is encoded by the CCL19 gene.
This gene is one of several CC cytokine genes clustered on the p-arm of chromosome 9. Cytokines are a family of secreted proteins involved in immunoregulatory and inflammatory processes. The CC cytokines are proteins characterized by two adjacent cysteines. The cytokine encoded by this gene may play a role in normal lymphocyte recirculation and homing. It also plays an important role in trafficking of T cells in thymus, and in T cell and B cell migration to secondary lymphoid organs. It specifically binds to chemokine receptor CCR7.
Chemokine (C-C motif) ligand 19 (CCL19) is a small cytokine belonging to the CC chemokine family that is also known as EBI1 ligand chemokine (ELC) and macrophage inflammatory protein-3-beta (MIP-3-beta). CCL19 is expressed abundantly in thymus and lymph nodes, with moderate levels in trachea and colon and low levels in stomach, small intestine, lung, kidney and spleen. The gene for CCL19 is located on human chromosome 9. This chemokine elicits its effects on its target cells by binding to the chemokine receptor CCR7. It attracts certain cells of the immune system, including dendritic cells, antigen-engaged B cells, and CCR7+ central memory T cells.
References
Further reading
External links
Cytokines | CCL19 | Chemistry | 358 |
24,466,344 | https://en.wikipedia.org/wiki/Gymnopilus%20minutosporus | Gymnopilus minutosporus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus minutosporus at Index Fungorum
minutosporus
Fungi of North America
Fungus species | Gymnopilus minutosporus | Biology | 59 |
61,607,995 | https://en.wikipedia.org/wiki/1982%20Bukit%20Merah%20radioactive%20pollution | The 1982 Bukit Merah radioactive pollution is a radioactive waste pollution incident in Bukit Merah of Kinta District in Central Perak, Malaysia. The legal case over the pollution took several years to resolve, with no acknowledgement of responsibility from the companies involved, despite the 1994 closure of the factory that was the source of the pollution.
Background
A rare earth extracting company named Asia Rare Earth Sdn Bhd (ARE) was established in 1979 for yttrium extraction in Bukit Merah, Perak, with the biggest shareholders being Mitsubishi Chemical Industries Ltd and Beh Minerals (both with a 35% share), together with Tabung Haji and other Bumiputera businessmen owning smaller shares. In 1982, the newly established company began extracting yttrium from a mineral named monazite, which contains rare-earth elements and the radioactive elements thorium and uranium. Since ARE started its operation, residents of the nearby town of Papan had complained of an unpleasant odour and smoke from the factory, and also reported breathing difficulties as a result of the pollution. The residents later discovered in 1984 that ARE had built a waste channel to a disposal site near their town with the consent of the state government of Perak.
Affected residents reaction toward the pollution
Upon learning that the state government was in part involved in the activities, around 6,700 residents from the affected town of Papan and several others from nearby towns signed a petition that was subsequently sent to various government departments, including the Menteri Besar of Perak, the Prime Minister of Malaysia, the Health Ministry and the Science, Technology, and Environment Ministry, while 3,000 residents, including women and children, participated in a peaceful assembly and another 200 blocked the road to the waste disposal site.
Government responses
Nonetheless, the Malaysian Prime Minister at the time, Mahathir Mohamad, responded that the government had taken every precaution to ensure safety, and construction of the radioactive disposal site continued regardless of the protest. The Science, Technology and Environment Minister, Stephen Yong Kuet Tze, also denied any potential health hazards, stressing that the disposal site was safe as it was built under strict regulations, and challenged the affected residents to back up their claims that the disposal site was indeed hazardous. Despite the conclusions gathered from international experts, the government decided to proceed with the activities, and the residents later continued their protests and held a one-day hunger strike against the government's decision. In 1985, Malaysian Deputy Prime Minister Musa Hitam showed his concern by visiting the site, and a subsequent cabinet meeting led by him decided to move the disposal site to the Kledang Range, about 5 kilometres from the town area. Following the third revelation, the federal government, through a minister from the Prime Minister's Office, Kasitah Gaddam, said the levels were still safe despite being above the limit, on the grounds that the number of affected sites was very small.
Investigation and subsequent events
To prove that their claims were indeed true, the residents of Papan, aided by residents from the nearby towns of Bukit Merah, Lahat, Menglembu and Taman Badri Shah, formed the Bukit Merah Acting Committee. The committee was visited by a local environmentalist group, Sahabat Alam Malaysia (SAM), which measured the radiation levels at the open space and pool near the factory and concluded that the radiation in these places was 88 times higher than the upper limit allowed by the International Commission on Radiological Protection (ICRP); a memorandum was then submitted to the country's Prime Minister. With the increasing pressure, the Malaysian government then invited a team comprising members from the International Atomic Energy Agency (IAEA) to visit the factory, where three international nuclear experts from Japan, the United Kingdom, and the United States also found that the waste channel was not safe for the public. Another expert from Japan was called thereafter to gather further evidence, and he found that the radiation levels were 800 times the permitted maximum level.
Court case
In 1985, eight of the town residents including one who is a cancer victim brought the case to the High Court with 1,500 people from the affected area present to hear the verdict. A temporary stop work order was subsequently issued by the court until satisfactory safety measures were taken by the ARE company. However, in just a month after the order, the company invited an American atomic specialist to prove the factory is safe to continue its operation. This was countered with the second visit by the expert from Japan along with two former workers of the company revealing several more thorium dumping sites in Bukit Merah to Atomic Energy Licensing Board (AELB) with the Japanese expert discovering that the radiation levels at these places were significantly over the ICRP's maximum safety limit. The company was then ordered by the court to stop all operations but the AELB still issued a license to the company to continue their operations in 1987.
With the company's refusal to stop despite the court order, the affected residents sued the company, which evolved into a court battle that lasted 32 months; in July 1992, the residents won their case, with the court ordering the company to close its factory within 14 days. Even with the second court order, the company filed an appeal to the Federal Court, where the Ipoh High Court's decision was suspended for two reasons: ARE's experts were considered more trustworthy, and the residents were directed to ask the Malaysian atomic board itself to withdraw the company's licence, as the board had the power to do so under the Atomic Energy Licensing Act. Without wasting further time in a long case that was affecting their livelihoods, some of the affected residents travelled to Japan to meet the highest authority of Mitsubishi Chemical, one of the company's major shareholders, to explain their dire situation, which was also heard by Japanese environmentalists. With Mitsubishi's intervention and further international pressure, the company finally stopped its operations despite having won the court battle locally. Mitsubishi Chemical reached an out-of-court settlement with the affected residents by agreeing to donate $164,000 to the community's schools, while denying any responsibility for the illnesses related to the pollution caused by ARE's operations.
After the 1982 Bukit Merah radioactive pollution incident, the mine in Malaysia was the focus of a US$100 million cleanup that proceeded in 2011. After having accomplished the hilltop entombment of 11,000 truckloads of radioactively contaminated material, the project was expected to entail, in the summer of 2011, the removal of "more than 80,000 steel barrels of radioactive waste to the hilltop repository."
See also
Lynas
Radioactive contamination
Further reading
References
Environmental issues in Malaysia
Health disasters in Malaysia
Man-made disasters in Malaysia
Radiation accidents and incidents
Radiation health effects
1982 crimes in Malaysia
1982 disasters in Malaysia
1982 in Malaysia
Waste disposal incidents | 1982 Bukit Merah radioactive pollution | Chemistry,Materials_science | 1,362 |
73,727,640 | https://en.wikipedia.org/wiki/Thursday%20salt | Thursday salt is a dark salt produced in the Kostroma region, Russia. It is associated with Russian Orthodox Easter traditions, and was historically produced at the Trinity Lavra of St. Sergius monastery on Maundy Thursday.
The salt is created by mixing rock salt with flavourings such as kvass, herbs, or rye, and baking the mixture at a very high temperature for several hours. The process reduces the sodium chloride content and increases the content of other minerals such as calcium and potassium.
See also
Black lava salt
Himalayan salt
Jugyeom
Kala namak
References
Religion in Russia
Edible salt
Kostroma
Folk Orthodoxy
Russian cuisine
Religious food and drink | Thursday salt | Chemistry | 134 |
49,095,574 | https://en.wikipedia.org/wiki/Code%3A%20Debugging%20the%20Gender%20Gap | CODE: Debugging the Gender Gap is a 2015 documentary by Robin Hauser Reynolds. It focuses on the lack of women and minorities in the field of software engineering. It premiered on April 19, 2015 at the Tribeca Film Festival in New York. The film focuses on inspiring young girls to pursue careers in computer science by profiling successful women in computer programming, such as Danielle Feinberg of Pixar, Aliya Rahman of Code for Progress, and Julie Ann Horvath. By profiling and displaying the careers of these women, the filmmakers hope to show that computer science can be creative, lucrative, and rewarding.
The film traces the history of women in the U.S. technology industries, from the work of Ada Lovelace, Grace Hopper, and the women of ENIAC. It then follows the decline of women graduates in mathematics and computer science during the 1980s, linking the phenomenon to the release of the 1983 film WarGames, and a cultural shift that depicted men and boys as technology workers, and increasing hostility for women and girls in the tech industries. Additionally, the film highlights the work of women in the field, by featuring interviews with women in the tech industry, such as Kimberly Bryant (founder of Black Girls Code), Debbie Sterling (founder of GoldieBlox), Maria Klawe (president of Harvey Mudd College), and Danielle Feinberg (director of photography at Pixar).
Fundraising
Funding for the film was partially raised via Indiegogo and Reynolds was able to successfully receive additional funding from corporations like CapitalOne, MasterCard, Ericsson, NetApp, Qualcomm, and Silicon Valley Bank.
Reception
The general reception of the film by popular press has been positive. Stephen Cass of IEEE Spectrum, stated of the film: "Code doesn't have all the answers, of course. But ultimately, it does make a good case that everyone should think deliberately about diversity in their hiring." Some criticism has focused on the apparent lack of attention paid to the Gamergate controversy, and the work and experience of women in the gaming industries.
Graham Winfrey of Inc. magazine wrote, "CODE makes a compelling case that the lack of women in tech poses a significant threat to America's future."
Awards
Gold Audience Award for Active Cinema at the Mill Valley Film Festival (2015, won)
References
External links
2015 documentary films
2015 films
Documentary films about computing
American documentary films
Diversity in computing
Gender and employment
History of women in the United States
2010s English-language films
2010s American films
English-language documentary films | Code: Debugging the Gender Gap | Technology | 519 |
77,371,778 | https://en.wikipedia.org/wiki/NGC%207726 | NGC 7726 is a large barred spiral galaxy located in the constellation Pegasus in the northern sky. It is estimated to be 348 million light-years from the Milky Way and about 150,000 light-years in diameter. Many other objects are located within a close proximity to NGC 7726, including NGC 7720, NGC 7728, IC 5341 and IC 5342.
The object was discovered by astronomer Lewis Swift on August 8, 1886.
See also
List of NGC objects (7001–7840)
References
Barred spiral galaxies
Pegasus (constellation)
7726
12721
72024 | NGC 7726 | Astronomy | 119 |
4,145,225 | https://en.wikipedia.org/wiki/Great%20disnub%20dirhombidodecahedron | In geometry, the great disnub dirhombidodecahedron, also called Skilling's figure, is a degenerate uniform star polyhedron.
It was proven in 1970 that there are only 75 uniform polyhedra other than the infinite families of prisms and antiprisms. John Skilling discovered another degenerate example, the great disnub dirhombidodecahedron, by relaxing the condition that edges must be single. More precisely, he allowed any even number of faces to meet at each edge, as long as the set of faces couldn't be separated into two connected sets (Skilling, 1975). Due to its geometric realization having some double edges where 4 faces meet, it is considered a degenerate uniform polyhedron but not strictly a uniform polyhedron.
The number of edges is ambiguous, because the underlying abstract polyhedron has 360 edges, but 120 pairs of these have the same image in the geometric realization, so that the geometric realization has 120 single edges and 120 double edges where 4 faces meet, for a total of 240 edges. The Euler characteristic of the abstract polyhedron is −96. If the pairs of coinciding edges in the geometric realization are considered to be single edges, then it has only 240 edges and Euler characteristic 24.
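The two Euler characteristics quoted above can be verified directly from the vertex and face counts. The sketch below assumes the commonly cited counts for Skilling's figure, 60 vertices and 204 faces (120 triangles, 60 squares and 24 pentagrams), which are not stated explicitly in this article.

# Check of the Euler characteristic V - E + F for the great disnub
# dirhombidodecahedron. The vertex/face counts (60 vertices; 120 triangles,
# 60 squares and 24 pentagrams = 204 faces) are the commonly cited values
# for Skilling's figure and are assumed here, not derived.

V = 60
F = 120 + 60 + 24            # triangles + squares + pentagrams

for label, E in (("abstract polyhedron", 360), ("coincident edges merged", 240)):
    chi = V - E + F
    print(f"{label}: chi = {V} - {E} + {F} = {chi}")
# Expected output: -96 for 360 edges and 24 for 240 edges, matching the text.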
The vertex figure has 4 square faces passing through the center of the model.
It may be constructed as the exclusive or (blend) of the great dirhombicosidodecahedron and compound of twenty octahedra.
Related polyhedra
It shares the same edge arrangement as the great dirhombicosidodecahedron, but has a different set of triangular faces. The vertices and edges are also shared with the uniform compounds of twenty octahedra or twenty tetrahemihexahedra. 180 of the edges are shared with the great snub dodecicosidodecahedron.
Dual polyhedron
The dual of the great disnub dirhombidodecahedron is called the great disnub dirhombidodecacron. It is a nonconvex infinite isohedral polyhedron.
Like the visually identical great dirhombicosidodecacron in Magnus Wenninger's Dual Models, it is represented with intersecting infinite prisms passing through the model center, cut off at a certain point that is convenient for the maker. Wenninger suggested these figures are members of a new class of stellation polyhedra, called stellation to infinity. However, he also acknowledged that strictly speaking they are not polyhedra because their construction does not conform to the usual definitions.
Gallery
See also
List of uniform polyhedra
References
http://www.software3d.com/MillersMonster.php
External links
http://www.orchidpalms.com/polyhedra/uniform/skilling.htm
http://www.georgehart.com/virtual-polyhedra/great_disnub_dirhombidodecahedron.html
Uniform polyhedra | Great disnub dirhombidodecahedron | Physics | 631 |
1,514,641 | https://en.wikipedia.org/wiki/Mercury%28II%29%20oxide | Mercury(II) oxide, also called mercuric oxide or simply mercury oxide, is the inorganic compound with the formula HgO. It has a red or orange color. Mercury(II) oxide is a solid at room temperature and pressure. The mineral form montroydite is very rarely found.
History
An experiment for the preparation of mercuric oxide was first described by the 11th century Arab-Spanish alchemist Maslama al-Majriti in Rutbat al-hakim. It was historically called red precipitate (as opposed to white precipitate, which is the mercuric amidochloride).
In 1774, Joseph Priestley discovered that oxygen was released by heating mercuric oxide, although he did not identify the gas as oxygen (rather, Priestley called it "dephlogisticated air," as that was the paradigm that he was working under at the time).
Synthesis and reactions
The red form of HgO can be made by heating Hg in oxygen at roughly 350 °C, or by pyrolysis of Hg(NO3)2. The yellow form can be obtained by precipitation of aqueous Hg2+ with alkali. The difference in color is due to particle size; both forms have the same structure consisting of near linear O-Hg-O units linked in zigzag chains with an Hg-O-Hg angle of 108°.
It is sometimes said that HgO "is soluble in acids", but in fact it reacts with acids to make mercuric salts.
Structure
Under atmospheric pressure mercuric oxide has two crystalline forms: one is called montroydite (orthorhombic, 2/m 2/m 2/m, Pnma), and the second is analogous to the sulfide mineral cinnabar (hexagonal, hP6, P3221); both are characterized by Hg-O chains. At pressures above 10 GPa both structures convert to a tetragonal form.
Uses
Mercury oxide is sometimes used in the production of mercury as it decomposes quite easily. When it decomposes, oxygen gas is generated.
It is also used as a material for cathodes in mercury batteries.
Health issues
Mercury oxide is a highly toxic substance which can be absorbed into the body by inhalation of its aerosol, through the skin and by ingestion. The substance is irritating to the eyes, the skin and the respiratory tract and may have effects on the kidneys, resulting in kidney impairment. In the food chain important to humans, bioaccumulation takes place, specifically in aquatic organisms. The substance is banned as a pesticide in the EU.
Evaporation at 20 °C is negligible. HgO decomposes on exposure to light or on heating above 500 °C. Heating produces highly toxic mercury fumes and oxygen, which increases the fire hazard. Mercury(II) oxide reacts violently with reducing agents, chlorine, hydrogen peroxide, magnesium (when heated), disulfur dichloride and hydrogen trisulfide. Shock-sensitive compounds are formed with metals and elements such as sulfur and phosphorus.
References
External links
National Pollutant Inventory – Mercury and compounds fact sheet
Information at Webelements.
Oxides
Mercury(II) compounds
Inorganic compounds | Mercury(II) oxide | Chemistry | 682 |
64,748,079 | https://en.wikipedia.org/wiki/Matibabu%20%28Rapid%20Malaria%20test%29 | Matibabu is a Ugandan health technology venture developing a rapid malaria test; it aims to close the gap between communities and their rightful access to healthcare.
Purpose and Use
In general, the lack of low cost diagnostics for malaria results in late diagnosis of the disease in many low income communities (contributing to high morbidity and mortality from severe forms of malaria), and in over-treatment of malaria where syndromic management is used due to lack of point-of-care diagnostics (contributing to wastage of money on treatment of non-malarial illness, especially since the newly recommended artemisinin-based therapies are expensive). Additionally, data relay to the Ministry of Health is inconsistent, despite the fact that a number of data management platforms are being used by health practitioners. The data received is inconsistent in terms of both quality and quantity and is often outdated or not in real time. The current data collection and surveillance methods run through the national Health Management Information System (HMIS). Data is first collected at the health centre level, where hard copies (paper/books) are used, with electronic medical record systems for the few health centres that have the capacity. However, there is very low usage of this system: paper-based records are lost in delivery, data quality is poor (inaccurate statistics), HMIS reports are delivered late, data from private health providers and the community level is excluded, HMIS data is inadequately segregated, and political support is limited. A lack of functional supply chains and of adequate reporting on the availability of supplies means that health facilities are often without vital drugs and equipment for long periods of time, and as a result drug and diagnostic performance cannot be monitored. Efficient health information and data systems are vital for improved decision making and timely intervention, in an era where data driven approaches have yielded appropriate resource utilization for implementing health programs.
The device is capable of detecting malaria parasites in red blood cells for diagnostic purposes. The device is being tested for use in hospitals, clinics and medical laboratories. The goal is to create Point-of-care-testing (POCT) opportunities in rural areas that lack healthcare access.
Invention
The company offers an array of solutions as listed below;
Matiscope: The matiscope is a portable parasite-based hardware device that uses principles of light scattering and magnetism to detect Plasmodium in blood samples. The kit offers both invasive and non-invasive diagnosis with desktop point of care.
Yotta: captures data, such as location data and health survey information, as anonymized data points in a securely managed central data store, and includes both automated and expert data analysis, with customized outputs and feedback that lead to timely and targeted responses. The data visualisation also enables prediction algorithms to be run on the data to deduce geographically customized disease trends.
Yotta cards: patient tracking cards that support health facilities in managing and tracking medication issued and scheduling routine visits, and that also let patients save on the card for health care access, topped up with loans.
Yotta surveillance apps: Powered with image recognition algorithms, the application is used at the health facility to collect the disease data in almost real time, with both offline and online capabilities.
The device was invented in Kampala, Uganda by Matibabu CEO Brian Gitta and his team (Joshua Businge, Josiah Kavuma, Moris Atwine, Simon Lubambo and Shafik Sekitto).
Recognition
The team at Matibabu has been supported by Villgro Kenya, Bayer Foundation, e4impact, Merck Accelerator and the Resilient Africa Network (RAN) operating under the United States Agency for International Development.
Matibabu has also been recognized on several occasions, with honours such as the UN Empowerment Award through the Microsoft Imagine Cup, the American Society for Mechanical Engineers' iShow, The Duke of York's Pitch@Palace, the Royal Academy of Engineering, The Aspirin Social Innovation Award, e4Impact, Disrupt 100, Time magazine's Next Generation Leaders, and a 2019 Rolex Laureate. Additionally, the team has showcased at platforms such as the Consumer Electronics Show (CES), The Tech Open Air Festival, Republica, the Global Sankalp forum, and TechCrunch Hardware Battlefield. Brian Gitta was further invited to meet Bill Gates as part of the MTV Base Africa program in 2016.
Advisors
Matibabu was part of the Merck Accelerator Program at the Merck Innovation Center in Darmstadt, Germany.
The advisors of Matibabu include Dr. Nicole Kilian (Heidelberg University Hospital), Robert Karanja, MSc (Villgro Kenya) and Kush Mahan, MSc (ZoneIn).
References
Medical devices
Medical tests | Matibabu (Rapid Malaria test) | Biology | 929 |
74,571,954 | https://en.wikipedia.org/wiki/Caravelli-Traversa-Di%20Ventra%20equation | The Caravelli-Traversa-Di Ventra equation (CTDV) is a closed-form equation to the evolution of networks of memristors. It was derived by Francesco Caravelli (Los Alamos National Laboratory), Fabio L. Traversa (Memcomputing Inc.) and Massimiliano Di Ventra (UC San Diego) to study the exact evolution of complex circuits made of resistances with memory (memristors).
A memristor is a resistive device whose resistance changes as a function of the history of the applied voltage or current. A physical realization of the memristor was introduced in the Nature paper by Strukov and collaborators while studying titanium dioxide junctions, with a resistance experimentally observed to change approximately in accordance to the model
dx/dt = (R_on/β) I, where R_on is the low-resistance limiting value of the device, x is a parameter describing the evolution of resistance, I is the current across the device and β is an effective parameter which characterizes the response of the device to a current flow. If the device decays over time to a high resistance state, one can also add a decay term αx to the right-hand side of the evolution for x, where α is a decay constant. However, such resistive switching has been known since the late 1960s. The model above is often called the Williams-Strukov or Strukov model. Although this model is too simplistic to represent real devices, it still serves as a good model exhibiting a pinched hysteresis loop in the current-voltage diagram. However, because of Kirchhoff's laws, the evolution of networks of these components becomes utterly complicated, in particular for disordered neuromorphic materials such as nanowires. Often, these are called memristive networks. The simplest example of a memristive circuit or network is a memristor crossbar. A memristor crossbar is often used as a way to address single memristors for a variety of applications in artificial intelligence. However, this is one particular example of a memristive network arranged on a two-dimensional grid. Memristive networks also have important applications, for instance, in reservoir computing. A network of memristors can serve as a reservoir for nonlinearly transforming an input signal into a high-dimensional feature space. The memristor-based reservoir concept was introduced by Kulkarni and Teuscher in 2012. While this model was initially employed for tasks like wave pattern classification and associative memory, the readout mechanism utilized a genetic algorithm, which inherently operates non-linearly.
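A minimal numerical sketch of such a toy device is shown below. It assumes a linear interpolation R(x) = R_on*(1 - x) + R_off*x between the two limiting resistances and purely illustrative parameter values; the point is only that a state variable driven by the current produces a pinched current-voltage loop, not that this models any specific device.

import math

# Toy memristor under a sinusoidal voltage drive. Assumptions (illustrative only):
# linear interpolation R(x) = R_on*(1-x) + R_off*x, evolution dx/dt = (R_on/beta)*I,
# state clipped to [0, 1]. Parameter values are arbitrary.

R_ON, R_OFF = 100.0, 1600.0   # limiting resistances (ohms)
BETA = 0.5                    # response parameter, hypothetical value
F_DRIVE = 5.0                 # drive frequency (Hz)
V0 = 1.0                      # drive amplitude (V)
DT = 1e-4                     # time step (s)

x = 0.1
loop = []
for step in range(int(2 / F_DRIVE / DT)):          # two drive periods
    t = step * DT
    v = V0 * math.sin(2 * math.pi * F_DRIVE * t)
    r = R_ON * (1 - x) + R_OFF * x
    i = v / r                                       # Ohm's law
    x = min(1.0, max(0.0, x + DT * (R_ON / BETA) * i))
    loop.append((v, i))

# The (v, i) pairs trace a pinched hysteresis loop: i = 0 whenever v = 0.
print(f"final state x = {x:.3f}, samples collected: {len(loop)}")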
A memristive network is a circuit that satisfies the Kirchhoff laws, e.g. the conservation of the currents at the nodes, and in which every circuit element is a memristive component. Kirchhoff's laws can be written, for each element k and each node n, as dx_k/dt = f_k(x_k, i_k) and Σ_{k∈n} i_k = 0,
where the first equation represents the time evolution of the memristive element's internal memory (driven either by current or voltage), and the second equation represents the conservation of currents at the nodes. Since every element is Ohmic, v_k = R(x_k) i_k, which is Ohm's law, and the x_k are the memory parameters. These parameters typically represent the internal memory of the resistive device and are associated with physical properties of the device that change as an effect of current/voltage. These equations quickly become highly nonlinear because the memristive device is typically nonlinear, and moreover Kirchhoff's laws introduce a further layer of complexity. A silver nanowire connectome can be described using graph theory, and has applications ranging from sensors to information storage. Since memristive devices behave as axons in a neuronal network, the theory of memristive networks is the theory of nanoscale electric physical devices whose behavior parallels that of real neuronal circuits.
In neuromorphic engineering, the goal is the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures similar to the ones in the nervous system. A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations. The development of the formalism of memristive networks is used to understand the behavior of memristors for a variety of purposes, including modelling and understanding electronic plasticity in real circuits. A side application of such theory is to understand the role of instantons in memcomputers and self-organizing logic gates.
In a typical memristive network simulation one has to solve first for Kirchhoff's laws numerically, obtain voltage drops and currents for each device, and then evolve the parameters of the memristive device and/or junction to obtain the resistance or conductance. This means that effectively, as memristive devices change their resistance or conductance, such devices are interacting. Even for the simple memristor model, such a problem leads to nonlinearities strongly dependent on the circuit realizations. The CTDV equation is a model for the evolution of networks of arbitrary circuits composed of devices such as in eqn. (1), with the inclusion of a decay parameter controlling the volatility. It can be considered a generalization of the Strukov et al. model to arbitrary circuits.
For the case of the Strukov et al. model, equations (2) can be written explicitly by integrating Kirchhoff's laws analytically. The evolution of a network of memristive devices can be written in a closed form (Caravelli-Traversa-Di Ventra equation):
dx/dt = α x - (1/β) (I + ξ Ω X)^(-1) Ω S
as a function of the properties of the physical memristive network and the external sources, where I is the identity matrix and x is the vector whose components x_i are the internal memory parameters of each device. The equation is valid in the case of the Strukov original toy model and it can be considered as a generalization of the single device model; in the case of ideal memristors, α = 0, although the hypothesis of the existence of an ideal memristor is debatable. In the equation above, α is the "forgetting" time scale constant, typically associated with memory volatility, while ξ is the adimensional ratio between the resistance gap and the off resistance value, ξ = (R_off - R_on)/R_off. S is the vector of the voltage sources in series to each junction. Instead, Ω is a projection matrix in which the circuit enters directly, by projecting on the fundamental loops of the graph; such a matrix enforces Kirchhoff's laws. Interestingly, the equation is valid for any network topology simply by changing the corresponding matrix Ω. The constant β has the dimension of a voltage and is associated with the properties of the memristor; its physical origin is the charge mobility in the conductor. The diagonal matrix X = diag(x) and the vector x are, respectively, the dynamical internal values of the memristive devices, with values between 0 and 1. This equation thus requires adding extra constraints on the memory values in order to be reliable, but can be used for instance to predict analytically the presence of instantonic transitions in memristive networks.
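A minimal numerical sketch of the closed-form evolution is shown below for a toy single-loop circuit of three memristive edges. It assumes the loop projector can be built as Ω = Bᵀ(BBᵀ)⁻¹B from a fundamental loop matrix B, uses a plain forward-Euler step, and all parameter values are illustrative; it is not the authors' reference implementation.

import numpy as np

# Forward-Euler integration of the Caravelli-Traversa-Di Ventra equation
#   dx/dt = alpha*x - (1/beta) * inv(I + xi*Omega*X) @ Omega @ S
# for a toy circuit: three memristive edges forming a single loop (triangle).
# Omega is built from the fundamental loop matrix B as B^T (B B^T)^-1 B.
# All parameter values below are illustrative assumptions.

B = np.array([[1.0, 1.0, 1.0]])                 # one fundamental loop, 3 edges
Omega = B.T @ np.linalg.inv(B @ B.T) @ B        # projector onto the loop space

alpha = 0.1          # "forgetting" (volatility) rate
beta = 1.0           # voltage-like response constant
xi = 0.9             # (R_off - R_on) / R_off
S = np.array([1.0, 0.0, 0.0])                   # a voltage source on one edge
x = np.full(3, 0.5)                             # initial internal memory values
dt = 1e-3

for _ in range(5000):
    X = np.diag(x)
    rhs = alpha * x - (1.0 / beta) * np.linalg.solve(np.eye(3) + xi * Omega @ X, Omega @ S)
    x = np.clip(x + dt * rhs, 0.0, 1.0)         # keep memory values in [0, 1]

print("steady-state memory values:", np.round(x, 3))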
References
External links
Memristive Circuits - MIT Net Advances in Physics, Accessed January 9, 2024
Memcomputing and Instantons, NASA, Accessed January 9, 2024
Kirchhoff's laws and memristive circuits, Bachelor thesis, Accessed January 9, 2024
F. Sheldon thesis, PhD thesis, Accessed January 9, 2024
Talk by JP Carbajal, Caravelli-Traversa-Di Ventra equation for optimization (minute 40), Accessed January 9, 2024
Equations of physics | Caravelli-Traversa-Di Ventra equation | Physics,Mathematics | 1,553 |
8,250,961 | https://en.wikipedia.org/wiki/Chromophobe%20cell | A chromophobe is a histological structure that does not stain readily, and thus appears relatively pale under the microscope.
Chromophobe cells are one of three cell stain types present in the anterior and intermediate lobes of the pituitary gland, the others being basophils and acidophils. One type of chromophobe cell is known as amphophils. Amphophils are epithelial cells found in the anterior and intermediate lobes of the pituitary. Together, these epithelial cells are responsible for producing the hormones of the anterior pituitary and releasing them into the bloodstream. Melanotrophs (also, Melanotropes) are another type of chromophobe which secrete melanocyte-stimulating hormone (MSH).
Clinical significance
"Chromophobe" also refers to a type of renal cell carcinoma (distinct from "clear cell"). Chromophobe renal cancer is part of a rare, genetic disorder known as Birt–Hogg–Dubé syndrome. While renal cell carcinoma is one of the most frequently diagnosed cancers, chromophobe renal cancer only accounts for five percent of renal cancer cases. Furthermore, 30% of patients with Birt–Hogg–Dubé syndrome will also develop chromophobe renal cancer. One of the only treatments for this type of cancer is to have surgery to remove any tumors that may be present.
See also
Melanotroph
Chromophil
Acidophil cell
Basophil cell
Oxyphil cell
Oxyphil cell (parathyroid)
Pituitary gland
Neuroendocrine cell
References
Staining | Chromophobe cell | Chemistry,Biology | 353 |
1,089,325 | https://en.wikipedia.org/wiki/Cadmium%20sulfate | Cadmium sulfate is the name of a series of related inorganic compounds with the formula CdSO4·xH2O. The most common form is the monohydrate CdSO4·H2O, but two other forms are known: the 8/3-hydrate CdSO4·8/3H2O and the anhydrous salt (CdSO4). All salts are colourless and highly soluble in water.
Structure, preparation, and occurrence
X-ray crystallography shows that CdSO4·H2O is a typical coordination polymer. Each Cd2+ center has octahedral coordination geometry, being surrounded by four oxygen centers provided by four sulfate ligands and two oxygen centers from the bridging water ligands.
Cadmium sulfate hydrate can be prepared by the reaction of cadmium metal or its oxide or hydroxide with dilute sulfuric acid:
Cd + H2SO4 → CdSO4 + H2
CdO + H2SO4 → CdSO4 + H2O
Cd(OH)2 + H2SO4 → CdSO4 + 2 H2O
The anhydrous material can be prepared using sodium persulfate:
Cd + Na2S2O8 → CdSO4 + Na2SO4
Cadmium sulfates occur as the following rare minerals: drobecite (CdSO4·4H2O), voudourisite (the monohydrate), and lazaridisite (the 8/3-hydrate).
Applications
Cadmium sulfate is used widely for the electroplating of cadmium in electronic circuits. It is also a precursor to cadmium-based pigment such as cadmium sulfide. It is also used for electrolyte in a Weston standard cell as well as a pigment in fluorescent screens.
References
Cadmium compounds
Sulfates | Cadmium sulfate | Chemistry | 307 |
37,345,315 | https://en.wikipedia.org/wiki/Bromine%20azide | Bromine azide is an explosive inorganic compound with the formula . It has been described as a crystal or a red liquid at room temperature. It is highly sensitive to small variations in temperature and pressure, with explosions occurring at Δp (pressure change) ≥ 0.05 Torr upon crystallization, thus extreme caution must be observed when working with this chemical.
Preparation
Bromine azide may be prepared by the reaction of sodium azide with bromine. This reaction forms bromine azide and sodium bromide:
NaN3 + Br2 → BrN3 + NaBr
Structure
The high sensitivity of bromine azide has led to difficulty in discerning its crystal structure. Despite this, a crystal structure of bromine azide has been obtained using a miniature zone-melting procedure with focused infrared laser radiation. In contrast to , which forms an endless chain-like structure upon crystallization, forms a helical structure. Each molecule adopts a trans-bent structure, which is also found in the gas phase.
Reactions
Bromine azide adds to alkenes through both ionic and free-radical addition, each giving the opposite orientation in the products. The ionic addition occurs stereospecifically in trans.
Reactions involving bromine azide are difficult to work with. The molecule is very reactive and is known to explode easily. This makes it a key reagent in explosives.
Photochemistry experiments with bromine azide have found that UV photolysis of a small sample of bromine azide resulted in dissociation of the entire sample, making it unstable. Similar samples with azide molecules did not show such an effect. This shows bromine azide's unstable tendencies in that even in the presence of sunlight, bromine azide will be a reactive molecule.
Safety
Great care must be taken when handling bromine azide as it is potentially toxic and is able to explode under various conditions. Concentrated solutions in organic solvents may also explode. The liquid explodes on contact with arsenic, sodium, silver foil, or phosphorus. When heated to decomposition it emits highly toxic fumes of bromine and explodes. The amount of compound used during experimentation should be limited to 2 mmol. It also poses a potential moderate fire hazard in the form of vapor by chemical reaction. It is also a powerful oxidant.
It has been banned from transport in the United States by the US Department of Transportation.
References
Bromine(I) compounds
Azido compounds
Explosive chemicals
Pseudohalogens | Bromine azide | Chemistry | 491 |
77,467,873 | https://en.wikipedia.org/wiki/Pachycladon%20exile | Pachycladon exile is a species of plant in family Brassicaceae that is endemic to the South Island of New Zealand. Commonly known as limestone cress, it is a perennial herb with hairy leaves that is only found on one specific limestone outcrop site. It has been used to analyse principles behind adaptive radiation, together with other species of Pachycladon. Its conservation status is Threatened - Nationally Critical.
Taxonomy
Pachycladon exile is a species of plant that is endemic to the South Island of New Zealand in the family Brassicaceae. P. exile was originally described in 1999 as Ischnocarpus exilis by Peter Heenan. It was later transferred to the genus Pachycladon in 2002.
P. exile is morphologically similar to P. novae-zelandiae. It can be distinguished from that species by its slender growth habit, terete ovary, slender siliques, smaller flowers, leaves and inflorescences, and a style that is small but distinct. It is also similar to P. cheesemanii, as both species are polycarpic and have woody caudices, short branches, slender inflorescences, terete siliques, heterophyllous leaves, and seeds that are uniseriate and without wings.
Description
P. exile is a perennial, polycarpic, heterophyllous rosette plant that has slender inflorescences, a woody caudex, short branches, and hairy, heterophyllous leaves. Its fruit is a terete silique, and its seeds do not have wings, and are uniseriate.
Distribution and habitat
Pachycladon exile is only found on a specific limestone outcrop site in the Waitaki Valley. It is found in habitats that have a high fertility rock substrate, such as limestone, schist, and volcanics, from 10 to 1600 m above sea level.
Phylogeny
P. exile is closely related to P. cheesemanii. Alongside other Pachycladon species it has been used to analyse principles behind adaptive radiation.
Conservation status
Pachycladon exile is listed as Threatened - Nationally Critical, with the qualifiers CD (Conservation Dependent), DPT (Data Poor Trend), EF (Extreme Fluctuations), OL (One Location) in the most recent assessment (2023) of the New Zealand Threatened Classification for plants.
It is the sixth most endangered species in New Zealand.
It was featured as Critter of the Week on 12 May 2019 on Radio New Zealand.
References
Endangered species
Flora of New Zealand
Brassicaceae
Plants described in 1999
Endemic flora of New Zealand
Endangered flora of New Zealand | Pachycladon exile | Biology | 556 |
39,145,558 | https://en.wikipedia.org/wiki/Crack%20tip%20opening%20displacement | Crack tip opening displacement (CTOD), commonly denoted δ_t, is the distance between the opposite faces of a crack tip at the 90° intercept position. The position behind the crack tip at which the distance is measured is arbitrary, but a commonly used convention is the point where two 45° lines, starting at the crack tip, intersect the crack faces. The parameter is used in fracture mechanics to characterize the loading on a crack and can be related to other crack tip loading parameters such as the stress intensity factor and the elastic-plastic J-integral.
For plane stress conditions, the CTOD can be written as:
δ_t = (8 σ_ys a / (π E)) · ln[ sec( π σ^∞ / (2 σ_ys) ) ]
where σ_ys is the yield stress, a is the crack length, E is the Young's modulus, and σ^∞ is the remote applied stress.
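As a quick numerical illustration of the expression above, the following is a minimal Python sketch (not part of the original article); it assumes the strip-yield form reconstructed above, and the material values are illustrative, made-up numbers for a steel-like material.
```python
import math

def ctod_plane_stress(sigma_ys, a, E, sigma_inf):
    """Plane-stress CTOD from the expression above.

    sigma_ys  -- yield stress (Pa)
    a         -- crack length (m)
    E         -- Young's modulus (Pa)
    sigma_inf -- remote applied stress (Pa), must be below sigma_ys
    """
    if not 0 <= sigma_inf < sigma_ys:
        raise ValueError("remote stress must be non-negative and below yield")
    # sec(x) = 1 / cos(x), so ln(sec(x)) = -ln(cos(x))
    return (8 * sigma_ys * a) / (math.pi * E) * -math.log(
        math.cos(math.pi * sigma_inf / (2 * sigma_ys))
    )

# Illustrative values: 10 mm crack, steel-like properties, 150 MPa remote stress
print(ctod_plane_stress(sigma_ys=350e6, a=0.010, E=210e9, sigma_inf=150e6))
# -> roughly 1e-5 m, i.e. a CTOD of about 10 micrometres
```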
Under fatigue loading, the range of movement of the crack tip during a loading cycle, Δδ_t, can be used for determining the rate of fatigue growth using a crack growth equation. The crack extension per cycle, da/dN, is typically of the order of Δδ_t/2.
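The following rough Python sketch only illustrates the order-of-magnitude idea: the cyclic CTOD range is estimated from the stress intensity factor range via the small-scale-yielding relation given later in this article, the factor of one half is the assumption stated above, and all numbers are made up for illustration; in practice, growth rates come from a calibrated crack growth equation such as a Paris-type law.
```python
def cyclic_ctod_range(delta_K, sigma_ys, E, m=1.0):
    # Assumed small-scale-yielding estimate: delta_CTOD ~ delta_K^2 / (m * sigma_ys * E)
    return delta_K**2 / (m * sigma_ys * E)

def growth_per_cycle(delta_K, sigma_ys, E):
    # da/dN taken to be of the order of half the cyclic CTOD range (rough assumption)
    return 0.5 * cyclic_ctod_range(delta_K, sigma_ys, E)

# Illustrative: delta_K = 20 MPa*sqrt(m) in a steel-like material
print(growth_per_cycle(delta_K=20e6, sigma_ys=350e6, E=210e9))  # order of 1e-6 m per cycle
```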
History
Examination of fractured test specimens led to the observation that the crack faces had moved apart prior to fracture, due to the blunting of an initially sharp crack by plastic deformation. The degree of crack blunting increased in proportion to the toughness of the material. This observation led to considering the opening at the crack tip as a measure of fracture toughness. The COD was originally independently proposed by Alan Cottrell and A. A. Wells. This parameter became known as CTOD. G. R. Irwin later postulated that crack-tip plasticity makes the crack behave as if it were slightly longer. Thus, estimation of CTOD can be done by solving for the displacement at the physical crack tip.
Use as a design parameter
CTOD is a single parameter that accommodates crack tip plasticity. It is easy to measure compared with techniques such as the J-integral, and it has a more direct physical interpretation than most other fracture parameters.
However, the equivalence of CTOD and the J-integral has been established only for nonlinear elastic materials, not for materials undergoing true (incremental) plastic deformation, and the CTOD concept is hard to extend to large deformations. In a design process based on finite element methods, the J-integral is also easier to calculate.
Relation with other crack tip parameters
K and CTOD
CTOD can be expressed in terms of the stress intensity factor K as:
δ_t = K² / (m σ_ys E)
where σ_ys is the yield strength, E is Young's modulus, and m = 1 for plane stress and m = 2 for plane strain.
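A minimal Python sketch of this conversion, using illustrative steel-like properties (the numbers are not from the article):
```python
def ctod_from_K(K, sigma_ys, E, plane_strain=False):
    """CTOD from the stress intensity factor via delta_t = K^2 / (m * sigma_ys * E)."""
    m = 2.0 if plane_strain else 1.0  # m = 1 for plane stress, m = 2 for plane strain
    return K**2 / (m * sigma_ys * E)

# Illustrative: K = 60 MPa*sqrt(m), sigma_ys = 350 MPa, E = 210 GPa
print(ctod_from_K(60e6, sigma_ys=350e6, E=210e9))                     # plane stress, ~5e-5 m
print(ctod_from_K(60e6, sigma_ys=350e6, E=210e9, plane_strain=True))  # plane strain, ~2.5e-5 m
```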
G and CTOD
CTOD can be related to the energy release rate G as:
δ_t = G / (m σ_ys)
with m defined as above (this follows from the relation to K together with G = K²/E under plane stress).
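A one-line Python sketch of the same idea, using the m defined above; the value of G is purely illustrative:
```python
def ctod_from_G(G, sigma_ys, m=1.0):
    # Energy release rate to CTOD under small-scale yielding: delta_t = G / (m * sigma_ys)
    return G / (m * sigma_ys)

print(ctod_from_G(G=20e3, sigma_ys=350e6))  # G = 20 kJ/m^2 -> CTOD of roughly 6e-5 m
```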
J-integral and CTOD
The relationship between the CTOD and J is given by:
δ_t = d_n · (J / σ_ys)
where d_n is a dimensionless constant that is typically between 0.3 and 0.8.
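A short Python sketch of the J-to-CTOD conversion; the mid-range value of d_n and the value of J are assumptions chosen only for illustration:
```python
def ctod_from_J(J, sigma_ys, d_n=0.5):
    # d_n is a dimensionless factor, typically between 0.3 and 0.8
    return d_n * J / sigma_ys

print(ctod_from_J(J=50e3, sigma_ys=350e6))  # J = 50 kJ/m^2 -> CTOD of roughly 7e-5 m
```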
Testing
A CTOD test is usually done on materials that undergo plastic deformation prior to failure. The test specimen resembles the actual component, although its dimensions may be reduced proportionally, and loading is chosen to resemble the expected service load. More than three tests are performed to minimize experimental scatter. The specimen is placed on the work table and a notch is created exactly at its centre; the crack should be generated so that the defect length is about half the specimen depth. The load applied is generally a three-point bending load, and a type of strain gauge called a crack-mouth clip gage is used to measure the crack opening. The crack tip deforms plastically until a critical point, after which a cleavage crack initiates and may lead to either partial or complete failure. The critical load and the strain gauge measurement at that load are recorded and a graph is plotted. The crack tip opening can then be calculated from the crack length and the opening at the mouth of the notch. Depending on the material, the fracture may be brittle or ductile, which can be concluded from the graph.
Standards for CTOD testing can be found in the ASTM E1820 - 20a code.
Laboratory measurement
Early experiments used a flat, paddle-shaped gauge that was inserted into the crack; as the crack opens, the paddle gauge rotates and an electronic signal is sent to an x–y plotter. This method was inaccurate, however, because it was difficult to reach the crack tip with the paddle gauge. Today, the displacement V at the crack mouth is measured and the CTOD is inferred by assuming that the specimen halves are rigid and rotate about a hinge point.
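The rigid-rotation (plastic hinge) idea can be sketched as follows; this is a simplified Python illustration, not the full procedure of any standard. The rotational factor r_p of about 0.44 is a commonly assumed value for single-edge bend specimens, and the symbols W (specimen depth), a (crack length), z (knife-edge height) and V (measured crack-mouth opening) are introduced here only for the example; real standards additionally separate elastic and plastic components of the measured opening.
```python
def ctod_hinge_model(V, a, W, z=0.0, r_p=0.44):
    """Infer CTOD from the crack-mouth opening V by assuming the specimen halves
    rotate rigidly about a hinge point located a distance r_p*(W - a) ahead of the
    crack tip (similar triangles between the notch mouth and the crack tip)."""
    return r_p * (W - a) * V / (r_p * (W - a) + a + z)

# Illustrative: 25 mm deep bend specimen, 12.5 mm crack, 2 mm knife edges,
# 0.5 mm opening measured at the clip gage
print(ctod_hinge_model(V=0.5e-3, a=0.0125, W=0.025, z=0.002))  # ~1.4e-4 m
```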
References
Fracture mechanics | Crack tip opening displacement | Materials_science,Engineering | 885 |