id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
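The records below follow this seven-column schema. As a minimal sketch of how rows of this shape might be consumed (assuming the data is exported as a JSON Lines file; `wiki_rows.jsonl` is a purely hypothetical filename), filtering by category could look like this:

```python
import json

def iter_records(path):
    """Yield one dict per dataset row from a JSON Lines export."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)

# Example: collect Chemistry articles shorter than 1,000 tokens.
short_chemistry = [
    rec for rec in iter_records("wiki_rows.jsonl")  # hypothetical path
    if "Chemistry" in rec["categories"] and rec["token_count"] < 1000
]
for rec in short_chemistry:
    print(rec["id"], rec["url"], rec["subcategories"])
```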
9,429,539 | https://en.wikipedia.org/wiki/The%20Hangman%20%28poem%29 | "The Hangman" is a poem written by Maurice Ogden in 1951 and first published in 1954. The poem was originally published under the title "Ballad of the Hangman" in Masses and Mainstream magazine under the pseudonym "Jack Denoya", before later being "[r]evised and retitled". Its plot concerns a hangman who arrives in a town and executes the citizens one by one. As each citizen is executed, the others are afraid to object out of fear that they will be next. Finally, there is nobody remaining in the town except the hangman and the narrator of the poem. The narrator is then executed by the hangman, as by then there is no one left who will defend him.
The poem contains four-line stanzas with the rhyming pattern AABB.
The poem is usually cited as an indictment of those who stand idly by while others commit grave evil or injustice, such as during the Holocaust. The story it tells is very similar to that of the famous statement "First they came ...", which has been attributed to the anti-Nazi pastor Martin Niemöller as early as 1946. The poem may be interpreted as an attack on McCarthyism.
Animated film
In 1964, an animated 11-minute film was made by Les Goldman and Paul Julian. Herschel Bernardi narrated. The film was a co-winner of the Silver Sail award at the Locarno International Film Festival in 1964.
See also
"Not My Business"
"First they came ..."
References
External links
Poems about the Holocaust
Works about McCarthyism
1951 poems
Tragedy of the commons
Poems adapted into films | The Hangman (poem) | [
"Mathematics"
] | 331 | [
"Game theory",
"Tragedy of the commons"
] |
9,429,600 | https://en.wikipedia.org/wiki/NGC%20288 | NGC 288 is a globular cluster in the constellation Sculptor. Its visual appearance was described by John Dreyer in 1888. It is located about 1.8° southeast of the galaxy NGC 253, 37′ north-northeast of the South Galactic Pole, 15′ south-southeast of a 9th magnitude star, and encompassed by a half-circular chain of stars that opens on its southwest side. It can be observed through binoculars. It is not very concentrated: it has a large, well-resolved dense core about 3′ across, surrounded by a much more diffuse, irregular ring 9′ in diameter. Peripheral members extend farther outward towards the south and especially the southwest.
References
External links
Globular clusters
Sculptor (constellation)
0288
17831027 | NGC 288 | [
"Astronomy"
] | 148 | [
"Constellations",
"Sculptor (constellation)"
] |
9,430,536 | https://en.wikipedia.org/wiki/LIESST | In chemistry and physics, LIESST (Light-Induced Excited Spin-State Trapping) is a method of changing the electronic spin state of a compound by means of irradiation with light.
Many transition metal complexes with electronic configurations d⁴–d⁷ are capable of spin crossover (as are d⁸ complexes when the molecular symmetry is lower than O_h). Spin crossover is a transition from the high-spin (HS) state to the low-spin (LS) state, or vice versa. Alternatives to LIESST include using changes in temperature or pressure to induce spin crossover. The metal that most commonly exhibits spin crossover is iron; the first known example, an iron(III) tris(dithiocarbamato) complex, was reported by Cambi et al. in 1931.
For iron complexes, LIESST involves excitation of the low spin complex with green light to a triplet state. Two successive steps of intersystem crossing result in the high spin complex. Movement from the high spin complex to the low spin complex requires excitation with red light.
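For a typical octahedral iron(II) (d⁶) complex this sequence can be written compactly. The term symbols below (¹A₁ low-spin ground state, intermediate excited states, ⁵T₂ high-spin state) are the conventional labels for octahedral Fe(II) and are assumed here, since the text above does not name the states:

```latex
% LIESST in an octahedral Fe(II) (d^6) complex; conventional term symbols assumed.
% Forward (LS -> HS): green-light excitation, then two intersystem crossings (ISC).
{}^{1}A_{1}\,(\mathrm{LS})
  \xrightarrow{h\nu\ (\mathrm{green})} {}^{1}T_{1}
  \xrightarrow{\mathrm{ISC}} {}^{3}T_{1}
  \xrightarrow{\mathrm{ISC}} {}^{5}T_{2}\,(\mathrm{HS})

% Reverse-LIESST (HS -> LS): red-light excitation.
{}^{5}T_{2}\,(\mathrm{HS})
  \xrightarrow{h\nu\ (\mathrm{red})} {}^{5}E
  \xrightarrow{\mathrm{ISC}} {}^{3}T_{1}
  \xrightarrow{\mathrm{ISC}} {}^{1}A_{1}\,(\mathrm{LS})
```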
References
Laboratory techniques
Coordination chemistry | LIESST | [
"Chemistry"
] | 221 | [
"Coordination chemistry",
"nan"
] |
9,430,598 | https://en.wikipedia.org/wiki/Glycoazodyes | Glycoazodyes (or GADs) are a family of "naturalised" synthetic dyes, so called because they are conjugates of common commercial azo dyes with a sugar through a "linker", giving the general arrangement sugar–linker–dye.
Generations, Structure, and Synthesis
First-generation
The first generation of Glycoazodyes was first reported in 2007. These Glycoazodyes use a diester linker, specifically a succinyl bridge: an ester group bonds the sugar to an n-alkane spacer, and the spacer bonds to the dye through another ester group.
Synthesis
First-generation Glycoazodyes are synthesized using glucose, galactose or lactose as the sugar group. The point of esterification is controlled by selectively protecting alcohol groups on the sugar, or by choosing an azo dye with a different alcohol group position. The dye or the sugar group can be succinylated by reacting a free alcohol group with succinic anhydride. The resulting hemisuccinate then reacts with a free alcohol group on the dye or the sugar. The condensation product is then deprotected.
Second-generation
The second generation of Glycoazodyes was first reported in 2008. These Glycoazodyes use a diether linker: an ether group bonds the sugar to an n-alkane spacer, and the spacer bonds to the dye through another ether group. Like first-generation Glycoazodyes, second-generation Glycoazodyes use glucose, galactose or lactose as the sugar group.
Synthesis
Like first-generation Glycoazodyes, second-generation Glycoazodyes are synthesized using a glucose, galactose, or lactose sugar group. The position of the ether bond is controlled by selectively protecting alcohol groups on the sugar, or by choosing an azo dye with a different alcohol group position. An unprotected alcohol group of either the sugar or the dye is reacted with an n-carbon terminal dibromoalkane in a solution of potassium hydroxide and 18-crown-6 ether, using non-anhydrous tetrahydrofuran as the solvent. The potassium hydroxide deprotonates the alcohol to an alkoxide ion, while the 18-crown-6 ether acts as a phase-transfer agent. The reaction proceeds through a classic SN2 nucleophilic substitution: one terminal bromine is displaced, and a bond forms between the alcohol oxygen and the alkane carbon, producing an ether between the n-carbon linker and the sugar or the dye. The remaining terminal bromine may then react under the same conditions with the free alcohol of the corresponding sugar or dye. The condensation product is then deprotected.
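A generic sketch of the two Williamson ether steps just described, where R¹ stands for the protected sugar or the dye, R² for the corresponding partner, and (CH₂)ₙ for the n-carbon spacer (these symbols are illustrative, not from the source):

```latex
% Step 1: alkoxide formation, then S_N2 attack on one end of the dibromoalkane
\mathrm{R^{1}OH} \xrightarrow{\mathrm{KOH}} \mathrm{R^{1}O^{-}}
\qquad
\mathrm{R^{1}O^{-}} + \mathrm{Br(CH_{2})_{n}Br}
  \longrightarrow \mathrm{R^{1}O(CH_{2})_{n}Br} + \mathrm{Br^{-}}

% Step 2: the remaining bromide is displaced by the second alkoxide
\mathrm{R^{2}O^{-}} + \mathrm{R^{1}O(CH_{2})_{n}Br}
  \longrightarrow \mathrm{R^{1}O(CH_{2})_{n}OR^{2}} + \mathrm{Br^{-}}
```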
Third-generation
The third generation of Glycoazodyes was first reported in 2015. These Glycoazodyes use an amido-ester linker: an amide group bonds the sugar to an n-alkane spacer, and the spacer is bonded to the dye through an ester group.
Synthesis
Third-generation Glycoazodyes are synthesized using amino sugars such as 6-amino-6-deoxy-D-galactose or 6′-amino-6′-deoxylactose. The position of the amide bond is controlled by protecting the alcohol groups on the sugar and allowing the free amine to react. The position of the ester group is controlled by choosing an azo dye with a different alcohol group position. Either the dye or the sugar is reacted with succinic anhydride, forming an amide group with the sugar or an ester group with the dye. The free carboxylic acid may then react with the alcohol group or amine group on the corresponding dye or sugar. The condensation product is then deprotected.
Properties
A variety of fabrics such as wool, silk, nylon, polyester, polyacrylic, polyacetate, and polyurethane may be dyed with Glycoazodyes at moderate temperatures and pressures in aqueous solutions. First-generation Glycoazodyes dye cotton poorly; second-generation Glycoazodyes, however, dye cotton effectively. Wool dyed with Glycoazodyes shows good fastness in the ISO 105-C06 washing and ISO 105-X12 rubbing tests.
Glycoazodyes vary in their water solubility: they may be soluble in cold to warm water, dissolving either immediately upon addition or only after stirring.
Minor variations in absorption spectra occur when Glycoazodye solutions are prepared in water, acetone, or methanol as solvents. Converting a parent azo dye to a Glycoazodye may produce a small hypsochromic shift in the absorption spectrum.
Environmental impact
Several properties may make Glycoazodyes an environmentally friendly alternative to traditional synthetic dyes. The increased hydrophilicity of Glycoazodyes allows surfactants, mordants, and salts to be eliminated from the dyeing process, and permits the aqueous dyeing of a variety of textiles at moderate temperatures and pressures. The unique structure may also allow textile effluent to be treated by biological means. Fusarium oxysporum efficiently decolourizes the first-generation Glycoazodye 4-{N,N-Bis[2-(D-galactopyranos-6-yloxy)ethyl]-amino}azobenzene. Various other Ascomycota fungi show a similar potential to decolourise Glycoazodyes, but to a lesser extent. Detoxification has been measured using the Daphnia magna acute toxicity test, showing 92% dye detoxification after 6 days. This detoxification method produces low concentrations of nitrobenzene, aniline, and nitrosobenzene.
External links
http://onlinelibrary.wiley.com/doi/10.1002/ejoc.200600686/abstract
References
Dyes
Azo compounds
Organic pigments
Carbohydrate chemistry
Carbohydrates | Glycoazodyes | [
"Chemistry"
] | 1,347 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Organic compounds",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
9,431,311 | https://en.wikipedia.org/wiki/Toxic%20granulation | Toxic granulation refers to dark coarse granules found in granulocytes, particularly neutrophils, in patients with inflammatory conditions.
Clinical significance
Along with Döhle bodies and toxic vacuolation, which are two other findings in the cytoplasm of granulocytes, toxic granulation is a peripheral blood film finding suggestive of an inflammatory process. Toxic granulation is often found in patients with bacterial infection and sepsis, although the finding is nonspecific. Patients being treated with chemotherapy or granulocyte colony stimulating factor, a cytokine drug, may also exhibit toxic granulation.
Composition
Toxic granules are mainly composed of peroxidase and acid hydrolase enzymes, and are similar in composition to the primary granules found in immature granulocytic cells like promyelocytes. Although normal, mature neutrophils do contain some primary granules, the granules are difficult to identify by light microscopy because they lose their dark blue colour as the cells mature. Toxic granulation thus represents abnormal maturation of neutrophils.
Similar conditions
Patients with the inherited condition Alder-Reilly anomaly exhibit very large, darkly staining granules in their neutrophils, which can be confused with toxic granulation.
See also
Inflammation
Neutrophilia
References
Hematology
Histopathology
Abnormal clinical and laboratory findings for blood | Toxic granulation | [
"Chemistry"
] | 284 | [
"Histopathology",
"Microscopy"
] |
9,431,554 | https://en.wikipedia.org/wiki/Preprophase%20band | The preprophase band is a microtubule array found in plant cells that are about to undergo cell division and enter the preprophase stage of the plant cell cycle. Besides the phragmosome, it is the first microscopically visible sign that a plant cell is about to enter mitosis. The preprophase band was first observed and described by Jeremy Pickett-Heaps and Donald Northcote at Cambridge University in 1966.
Just before mitosis starts, the preprophase band forms as a dense band of microtubules around the phragmosome and the future division plane just below the plasma membrane. It encircles the nucleus at the equatorial plane of the future mitotic spindle when dividing cells enter the G2 phase of the cell cycle after DNA replication is complete. The preprophase band consists mainly of microtubules and microfilaments (actin) and is generally 2-3 μm wide. When stained with fluorescent markers, it can be seen as two bright spots close to the cell wall on either side of the nucleus.
Plant cells lack centrosomes as microtubule organizing centers. Instead, the microtubules of the mitotic spindle aggregate on the nuclear surface and are reoriented to form the spindle at the end of prophase. The preprophase band also functions in properly orienting the mitotic spindle and contributes to efficient spindle formation during prometaphase.
The preprophase band disappears as soon as the nuclear envelope breaks down and the mitotic spindle forms, leaving behind an actin-depleted zone. However, its position marks the future fusion sites for the new cell plate with the existing cell wall during telophase. When mitosis is completed, the cell plate and new cell wall form starting from the center along the plane occupied by the phragmosome. The cell plate grows outwards until it fuses with the cell wall of the dividing cell at exactly the spots predicted by the position of the preprophase band.
Bibliography
P.H. Raven, R.F. Evert, S.E. Eichhorn (2005): Biology of Plants, 7th Edition, W.H. Freeman and Company Publishers, New York, NY
L. Taiz, E. Zeiger (2006): Plant Physiology, 4th Edition, Sinauer Associates, Inc., Publishers, Sunderland, MA
Notes and references
Cell cycle
Mitosis
Plant cells | Preprophase band | [
"Biology"
] | 509 | [
"Cell cycle",
"Cellular processes",
"Mitosis"
] |
9,431,764 | https://en.wikipedia.org/wiki/List%20of%20musicians%20who%20play%20left-handed | This is a list of notable left-handed musicians who play their instruments naturally. This does not include left-handed people who play (or played) right-handed, such as Joe Perry, Mark Knopfler, and Gary Moore.
Guitarists and bassists
Left-handed people play guitar or electric bass in one of the following ways: (1) playing the instrument truly right-handed, (2) playing the instrument truly left-handed, (3) playing a right-handed instrument altered to play left-handed, or (4) turning a right-handed instrument upside down to pick with the left hand without altering the strings, leaving them reversed from the normal order. (The fingering is the same for methods 2 and 3.) Any style of picking with the left hand (flatpicking or fingerstyle guitar) is considered playing left-handed.
Guitarists
Left-handed with normal stringing
Guitarists in this category pick with their left hand and have the strings in the conventional order for a left-handed player (i.e. the low string on the top side of the neck). They either have true left-handed guitars or have right-handed guitars altered so the strings are correct for a left-handed player. Some guitarists in this category (e.g. Paul McCartney) play both genuine left-handed instruments and right-handed instruments altered for left-handed playing.
Changing the strings on a right-handed guitar involves several things. The nut of the guitar has to be changed to accommodate the string widths. The bridge needs to be changed to make the lower-note (thicker) strings longer than the higher-note (thinner) strings for correct intonation. On almost all acoustic guitars the bracing is non-symmetrical. On electric guitars altered this way, the controls will be backwards.
Notable players
Frank Agnello (The Fab Faux)
Adrian Borland (The Sound)
Al McKay (Earth, Wind & Fire)
Ali Campbell (ex-UB40)
Anton Cosmo (ex-Boston)
Atahualpa Yupanqui
Austin Carlile (ex-Attack Attack!, Of Mice & Men)
Barbara Lynn
Barry Winslow (The Royal Guardsmen)
Beeb Birtles (Little River Band)
Ben Howard
Billy Ray Cyrus
Blake Schwarzenbach (Jawbreaker)
Bryan Harvey (House of Freaks)
Calogero (plays guitar and bass left-handed)
Cesar Rosas (Los Lobos)
Cheyenne Kimball
Christian Savill (Slowdive)
Courtney Barnett
Craig Scanlon (The Fall)
Dave Kilminster (former lefty; originally played left-handed until injury, now exclusively plays right-handed)
Dave King (Flogging Molly)
Davey von Bohlen (The Promise Ring, Cap'n Jazz)
David Cook
David Reilly (God Lives Underwater)
Dickey Lee
Eef Barzelay (Clem Snide)
Elliot Easton (The Cars)
Emma Bale
Eric Bogle
Ernie C (Body Count)
Fyfe Dangerfield (Guillemots)
Georgina "Georgi" Kay
Greg Sage (right-handed, decided to play left-handed with the Wipers)
Gregor Mackintosh (Paradise Lost)
Gustavo Cordera (Bersuit Vergarabat)
Hayley Kiyoko
Huw Gower (The Records)
Ian Fowles (The Aquabats, Death By Stereo)
Iggy Pop
Imai Hisashi (Buck-Tick)
Jay Diggins
Jeffrey Steele (formerly of Boy Howdy)
Jill Barber
Jimi Hendrix (wrote right-handed)
Jo Callis (The Rezillos/The Human League)
Joanna Wang
Jonathan Butler
John Flansburgh (They Might Be Giants)
Josey Scott (Saliva)
Joyce Jonathan (French pop singer)
Justin Bieber
Mark "Kazzer" Kasprzyk (Redlight King/Solo)
Klaus Eichstadt (Ugly Kid Joe)
Kurt Cobain (Nirvana) (wrote right-handed)
Lars Johansson (Candlemass)
William H. "Lefty" Bates
Lukas Rossi (can play the guitar with either hand)
Luke Morley (Thunder / The Union)
Mac Powell (Third Day)
Maria Taylor (Azure Ray, Little Red Rocket, Now It's Overhead, solo)
Martin Bramah (The Fall/Blue Orchids)
Mdou Moctar
Nicke Andersson (The Hellacopters, Imperial State Electric)
Ollie Halsall
Omar Rodríguez-López (At the Drive-In/The Mars Volta)
Pasi Koskinen (St. Mucus, Ajattara, To Separate the Flesh from the Bones)
Paul Gray (Slipknot) started out playing right-handed, then changed to left-handed as it was more comfortable being left-handed.
Paul McCartney (The Beatles) first struggled playing right-handed, but then saw a picture of Slim Whitman playing left-handed and realized that he could reverse the guitar, reverse the strings, and pick with the left hand.
Paul Mullen (The Automatic/Young Legionnaire/Yourcodenameis:milo)
Paula Fernandes
Paulo Furtado (Wraygunn/The Legendary Tigerman)
Pernilla Andersson
Perry Bamonte (The Cure)
Ragnar Þórhallsson (Of Monsters and Men)
Rami Yosifov (Teapacks)
Richie Stotts (Plasmatics)
Robin Campbell (UB40)
Ronnie Radke (Falling in Reverse, ex-Escape the Fate)
Santiago Feliú
Shae Dupuy
Slim Whitman (was right-handed but played guitar left-handed due to loss of his two fingers on the left hand)
Stella Parton
Sylvia Tyson
Ted Gärdestad
Ted Sablay (The Killers)
Templeton Thompson (female country singer-songwriter)
Tim Armstrong (Rancid)
Tony Iommi (Black Sabbath)
Toronzo Cannon
Travis Denning
Willie Duncan (Spider Murphy Gang)
Will Glover (The Pyramids)
Verónica Romero
Vicentico
Zacky Vengeance (Avenged Sevenfold; started to play right-handed but then shortly moved to left-handed playing)
Andrew "Whitey" White (Kaiser Chiefs)
Michael Zakarin (The Bravery)
Left-handed with strings backwards
These are left-handed players who play naturally, but with the strings organized to emulate an unaltered right-handed guitar, thus the strings are backwards for a left-handed player. The guitar is held left-handed with the high string on the top side of the neck (e.g. Bob Geldof). Some players in this category (e.g. Dick Dale and Albert King) had left-handed guitars with the strings as on a right-handed guitar, since they had learned to play that way.
Notable players
Amber Bain (The Japanese House)
Babyface
Dywane Thomas Jr.
Cormac Battle (Kerbdog)
Buddy Miles
Wallis Bird
Doyle Bramhall II
Chase Bryant
Glen Burtnik (Styx/solo)
Eddy Clearwater
Junior Campbell
Michael Card
Jimmy Cliff
Elizabeth Cotten
Dick Dale
Ed Deane
Cheick Hamala Diabate (RH instruments with original stringing and custom LH instruments with backwards stringing) also banjo and ngoni
Lefty Dizz
Eric Gales (naturally right-handed, but plays left-handed. His left-handed brother taught him that way.)
Bob Geldof (The Boomtown Rats)
Jimi Goodwin (Doves)
Ed Harcourt
Benn Jordan
Jacek Kaczmarski
Andy Kerr
Albert King
Little Jimmy King
Peter LeMarc
Anika Moa
Rick Moranis
Barry Winslow (The Royal Guardsmen)
Morgan
Coco Montoya
Malina Moye
Mic Murphy (The System)
Kurt Nilsen (Winner of the World Idol competition after winning the first season of the Norwegian Idol series)
Paul Raymond
Nicolas Reyes
Gruff Rhys
Kris Roe (The Ataris)
Jim Rooney
Doctor Isaiah Ross
Otis Rush
Graham Russell (Air Supply)
Lætitia Sadier (McCarthy, Monade, Stereolab)
Evie Sands
Seal
Dan Seals
Bill Staines
Dan Swanö (Bloodbath, Edge of Sanity, Nightingale, Ribspreader; plays drums right-handed)
Wayman Tisdale
Dave Wakeling (The English Beat, General Public) (right-handed, but learned to play left-handed)
Karl Wallinger (World Party)
Bobby Womack
Melvin Williams
Unclassified left-handed players
Michael Angelo Batio (plays a double-guitar ambidextrously)
Shirlie Holliman (Pepsi & Shirlie)
Jon Oliva
Peter Plate (Rosenstolz)
Emily Robins (The Elephant Princess)
Arif Sağ (plays bağlama left-handed)
John Schumann
Lari White
Wendy Wild
Mick Flannery
Bassists
Jarkko Ahola (Teräsbetoni/Northern Kings/Raskasta Joulua/solo; plays both guitar and bass with strings backwards)
Martin Eric Ain (Celtic Frost)
Rosemary Butler (Formerly The Daisy Chain and Birtha; now backing and solo vocalist)
Gerald Casale (Devo; plays strings backwards)
Ken Casey (Dropkick Murphys)
Stuart Chatwood (The Tea Party)
Flavio Cianciarulo (Los Fabulosos Cadillacs/De la tierra/solo) plays with strings in correct order, both guitar and bass.
Nick Feldman (Wang Chung)
Gary Fletcher (The Blues Band) plays bass upside down, but guitar right-handed
Kathy Foster (The Thermals)
Keith Ferguson (The Fabulous Thunderbirds)
Ed Gagliardi (Foreigner; naturally right-handed, played left-handed to honour his hero, Paul McCartney)
Jimi Goodwin (Doves; plays both guitar and bass with strings backwards)
Karl Green (Herman's Hermits)
Paul Gray (Slipknot)
Jimmy Haslip (Yellowjackets; plays with strings reversed)
Yoko Hikasa (Ho-kago Tea Time; right-handed, but played left-handed)
Colin Hodgkinson (Back Door, Whitesnake)
Lee Jackson (The Nice)
Joe Long (Frankie Valli and the Four Seasons)
Alan Longmuir (Bay City Rollers; plays both guitar and bass with strings reversed)
Doug Lubahn
Paul McCartney (The Beatles/Wings/solo) plays with strings in correct order, both guitar and bass; plays drums right-handed
Robbie Merrill (Godsmack) is actually right-handed, but plays guitar left-handed because of a birth defect that disabled the middle finger of his left hand.
Josh Newton (Every Time I Die)
Patrick Olive (Hot Chocolate)
Doug Pinnick (King's X)
Lee Pomeroy (plays with strings reversed)
Scott Reeder (Kyuss/The Obsessed/Unida; plays with strings reversed)
Brad Savage (Band from TV)
Danielle Nicole Schnebelen (Trampled Under Foot)
Jeff Schmidt (Bass Soloist, plays with strings reversed)
Wayman Tisdale (played with strings reversed)
Mark White (Spin Doctors)
Paul Wilson (Snow Patrol)
Pete Wright (Crass)
A. W. Yrjänä (CMX)
Drummers
A drum kit for a left-handed person is set up so that the percussion instruments drummers would normally play with the right hand (ride cymbal, floor tom, etc.) are played with the left hand. The bass drum and hi-hat are likewise arranged so that the drummer plays the bass drum with the left foot and operates the hi-hat (or, if using two bass drums, plays the second bass drum) with the right foot. Some drummers, however, play right-handed kits but lead with their left hand (e.g. playing open-handed on the hi-hat). This list does not include drummers who are naturally left-handed but play the drums purely right-handed, such as Ringo Starr, Stewart Copeland, Dave Lombardo, Travis Barker, Eric Carr, and Chris Adler.
Nicke Andersson (Entombed)
Oli Beaudoin (Neuraxis, Kataklysm)
Carter Beauford (Dave Matthews Band) plays on a right-handed drum kit, frequently open-handed.
Rich Beddoe (Finger Eleven)
Jim Bonfanti (Raspberries) plays open-handed
Mike Bordin (Ozzy Osbourne, Faith No More) uses a right-handed setup, but with his primary ride cymbal on his left.
Bun E. Carlos (Cheap Trick) alternates between left-handed and right-handed playing
Régine Chassagne (Arcade Fire) plays a right-handed kit, but leads with left hand
Billy Cobham (Miles Davis, Mahavishnu Orchestra, solo), plays a right-handed kit.
Phil Collins (Genesis, solo)
Scott Columbus (Manowar)
Charles Connor (Little Richard)
Steve Coy (Dead or Alive) right-handed, but played open-handed on a left-handed kit.
Jonny Cragg (Spacehog)
Joe Daniels (Local H)
Micky Dolenz (The Monkees) right-handed, but plays open-handed on a left-handed kit.
Shawn Drover (Megadeth, Eidolon) plays open-handed
Joe English (Paul McCartney and Wings)
Joshua Eppard (Coheed and Cambria) right-handed, but plays open-handed
Fenriz (Darkthrone) plays guitar right-handed
Ginger Fish (Marilyn Manson, Rob Zombie)
Mike Gibbins (Badfinger)
Zachary Hanson (Hanson)
Buddy Harman (Elvis Presley, Patsy Cline, Roy Orbison)
Ian Haugland (Europe)
Steve Hewitt (Placebo)
Gene Hoglan (Testament)
Dominic Howard (Muse)
Tom Hunting (Exodus)
Mark Jackson (VNV Nation)
Steve Jansen (Japan, The Dolphin Brothers, Nine Horses)
Mika Karppinen (H.I.M.) plays open-handed
Stan Levey (Dizzy Gillespie, Charlie Parker, Frank Sinatra) right-handed and plays a complete left-handed kit
Buddy Miles (Band of Gypsys) plays right-handed kit, but leads left-handed
David Milhous (Lippy's Garden) right-handed and plays a complete left-handed kit
Rod Morgenstein (Dixie Dregs, Winger, Jelly Jam, Platypus)
Steve Negus (Saga)
Jerry Nolan (New York Dolls, The Heartbreakers)
Ian Paice (Deep Purple, Whitesnake)
Pat Pengelly (Bedouin Soundclash)
Slim Jim Phantom (Stray Cats)
Simon Phillips (Toto) plays open-handed
Brett Reed (Rancid)
Neil Sanderson (Three Days Grace) plays on a right-handed kit, but leads with left hand
Robert Schultzberg (Placebo)
Al Sobrante (Green Day)
Sebastian Thomson (Baroness, Trans Am, Publicist)
Michael Urbano (Smash Mouth) plays on a right-handed kit, but leads with left hand
Hannes Van Dahl (Sabaton)
Joey Waronker (Beck, R.E.M.)
Javier Weyler (Stereophonics)
Fred White (Earth, Wind & Fire)
Dennis Wilson (The Beach Boys) plays open-handed
Eliot Zigmund (Bill Evans, Vince Guaraldi)
Josh Dun (Twenty One Pilots) right-handed, but frequently plays open handed.
Violinists
The violin can be learned in either hand, and most left-handed players hold the violin under the left side of their jaw, the same as right-handed players. This allows all violinists to sit together in an orchestra.
Richard Barth
Paavo Berglund (a well-known Finnish left-handed conductor who also played the violin, often joining orchestra players for chamber music just for fun. Because of the value of his violin collection he did not want to alter his instruments, and had trained himself to play left-handed on violins with a standard set-up.)
Charlie Chaplin (wrote right-handed)
Ornette Coleman
Rudolf Kolisch
Ashley MacIsaac
Ukulele
Paul McCartney
Tiny Tim (played guitar right handed)
Ian Whitcomb
Trumpet
Sharkey Bonano
Freddie 'Posey' Jenkins
Wingy Manone
Paul McCartney
Trombone
Slide Hampton
Banjo
Elizabeth Cotten
Cheick Hamala Diabate
Paul McCartney
Mandolin
Cheyenne Kimball
Paul McCartney
Bansuri
Hariprasad Chaurasia is naturally right-handed and started his career playing the bansuri, a side-blown flute, right-handed, before switching to left-handed playing
References
Bibliography
External links
Left-handed guitarists and drummers
Famous Left Handed Guitarists and Bassists
Video about left-handed guitar playing
Left-handed
Guitars
Handedness | List of musicians who play left-handed | [
"Physics",
"Chemistry",
"Biology"
] | 3,391 | [
"Behavior",
"Motor control",
"Chirality",
"Asymmetry",
"Handedness",
"Symmetry"
] |
9,431,918 | https://en.wikipedia.org/wiki/Phosphatidylethanolamine | Phosphatidylethanolamine (PE) is a class of phospholipids found in biological membranes. They are synthesized by the addition of cytidine diphosphate-ethanolamine to diglycerides, releasing cytidine monophosphate. S-Adenosyl methionine can subsequently methylate the amine of phosphatidylethanolamines to yield phosphatidylcholines.
Function
In cells
Phosphatidylethanolamines are found in all living cells, composing 25% of all phospholipids. In human physiology, they are found particularly in nervous tissue such as the white matter of brain, nerves, neural tissue, and in spinal cord, where they make up 45% of all phospholipids.
Phosphatidylethanolamines play a role in membrane fusion and in disassembly of the contractile ring during cytokinesis in cell division. Additionally, it is thought that phosphatidylethanolamine regulates membrane curvature. Phosphatidylethanolamine is an important precursor, substrate, or donor in several biological pathways.
As a polar head group, phosphatidylethanolamine creates a more viscous lipid membrane than phosphatidylcholine. For example, the melting temperature of di-oleoyl-phosphatidylethanolamine is -16 °C, while that of di-oleoyl-phosphatidylcholine is -20 °C. If the lipids had two palmitoyl chains, phosphatidylethanolamine would melt at 63 °C, while phosphatidylcholine would melt at only 41 °C. Lower melting temperatures correspond, in a simplistic view, to more fluid membranes.
In humans
In humans, metabolism of phosphatidylethanolamine is thought to be important in the heart. When blood flow to the heart is restricted, the asymmetrical distribution of phosphatidylethanolamine between membrane leaflets is disrupted, and as a result the membrane is disrupted. Additionally, phosphatidylethanolamine plays a role in the secretion of lipoproteins in the liver: vesicles for the secretion of very low-density lipoproteins coming off the Golgi apparatus have a significantly higher phosphatidylethanolamine concentration than other vesicles containing very low-density lipoproteins. Phosphatidylethanolamine has also been shown to be able to propagate infectious prions without the assistance of any proteins or nucleic acids, a unique characteristic. Phosphatidylethanolamine is also thought to play a role in blood clotting, as it works with phosphatidylserine to increase the rate of thrombin formation by promoting binding to factor V and factor X, two proteins which catalyze the formation of thrombin from prothrombin. The endocannabinoid anandamide is synthesized from phosphatidylethanolamine by the successive action of two enzymes, N-acyltransferase and phospholipase D.
In bacteria
Whereas phosphatidylcholine is the principal phospholipid in animals, phosphatidylethanolamine is the principal one in bacteria. One of the primary roles of phosphatidylethanolamine in bacterial membranes is to spread out the negative charge caused by anionic membrane phospholipids. In the bacterium E. coli, phosphatidylethanolamine plays a role in supporting the active transport of lactose into the cell by lactose permease, and may play a role in other transport systems as well. Phosphatidylethanolamine also takes part in the assembly of lactose permease and other membrane proteins, acting as a 'chaperone' that helps the membrane proteins fold their tertiary structures correctly so that they can function properly. When phosphatidylethanolamine is not present, the transport proteins have incorrect tertiary structures and do not function correctly.
Phosphatidylethanolamine also enables bacterial multidrug transporters to function properly and allows the formation of intermediates that are needed for the transporters to properly open and close.
Structure
Like lecithin (phosphatidylcholine), phosphatidylethanolamine consists of glycerol esterified with two fatty acids and phosphoric acid. Whereas the phosphate group is combined with choline in phosphatidylcholine, it is combined with ethanolamine in phosphatidylethanolamine. The two fatty acids may be identical or different, and are usually found at positions 1,2 (less commonly at positions 1,3).
Synthesis
The phosphatidylserine decarboxylation pathway and the cytidine diphosphate-ethanolamine pathways are used to synthesize phosphatidylethanolamine. Phosphatidylserine decarboxylase is the enzyme that is used to decarboxylate phosphatidylserine in the first pathway. The phosphatidylserine decarboxylation pathway is the main source of synthesis for phosphatidylethanolamine in the membranes of the mitochondria. Phosphatidylethanolamine produced in the mitochondrial membrane is also transported throughout the cell to other membranes for use. In a process that mirrors phosphatidylcholine synthesis, phosphatidylethanolamine is also made via the cytidine diphosphate-ethanolamine pathway, using ethanolamine as the substrate. Through several steps taking place in both the cytosol and endoplasmic reticulum, the synthesis pathway yields the end product of phosphatidylethanolamine. Phosphatidylethanolamine is also found abundantly in soy or egg lecithin and is produced commercially using chromatographic separation.
Regulation
Synthesis of phosphatidylethanolamine through the phosphatidylserine decarboxylation pathway occurs rapidly in the inner mitochondrial membrane. However, phosphatidylserine is made in the endoplasmic reticulum. Because of this, the transport of phosphatidylserine from the endoplasmic reticulum to the mitochondrial membrane and then to the inner mitochondrial membrane limits the rate of synthesis via this pathway. The mechanism for this transport is currently unknown but may play a role in the regulation of the rate of synthesis in this pathway.
Presence in food, health issues
Phosphatidylethanolamines in food break down to form phosphatidylethanolamine-linked Amadori products as a part of the Maillard reaction. These products accelerate membrane lipid peroxidation, causing oxidative stress to cells that come in contact with them. Oxidative stress is known to cause food deterioration and several diseases. Significant levels of Amadori-phosphatidylethanolamine products have been found in a wide variety of foods such as chocolate, soybean milk, infant formula, and other processed foods. The levels of Amadori-phosphatidylethanolamine products are higher in foods with high lipid and sugar concentrations that are processed at high temperatures. Additional studies have found that Amadori-phosphatidylethanolamine may play a role in vascular disease, may act as the mechanism by which diabetes can increase the incidence of cancer, and may potentially play a role in other diseases as well. Amadori-phosphatidylethanolamine has a higher plasma concentration in diabetes patients than in healthy people, indicating that it may play a role in the development of the disease or be a product of the disease.
See also
N-Acylphosphatidylethanolamine
Phosphatidyl ethanolamine methyltransferase
References
External links
Phosphatidylethanolamine at the AOCS Lipid Library.
Cholinergics
Phospholipids
Membrane biology
Phosphatidylethanolamines | Phosphatidylethanolamine | [
"Chemistry"
] | 1,770 | [
"Phospholipids",
"Molecular biology",
"Membrane biology",
"Signal transduction"
] |
9,431,966 | https://en.wikipedia.org/wiki/AFm%20phases | An AFm phase is an "alumina, ferric oxide, monosubstituted" phase, or aluminate ferrite monosubstituted (Al₂O₃–Fe₂O₃–mono in cement chemist notation, CCN). AFm phases are important hydration products in the hydration of Portland cements and hydraulic cements.
They are crystalline hydrates with the generic, simplified formula 3CaO·(Al,Fe)₂O₃·CaX_y·nH₂O, where:
CaO, Al₂O₃ and Fe₂O₃ represent calcium oxide, aluminium oxide, and ferric oxide, respectively;
CaX_y represents a calcium salt, where X replaces an oxide ion;
X is the substituted anion in CaX_y: either divalent (SO₄²⁻, CO₃²⁻, …) with y = 1, or monovalent (OH⁻, Cl⁻, …) with y = 2;
n represents the number of water molecules in the hydrate and may range between 13 and 19.
AFm phases form, inter alia, when tricalcium aluminate (C₃A in CCN) reacts with dissolved calcium sulfate (CaSO₄) or calcium carbonate (CaCO₃). As the sulfate form is the dominant one among the AFm phases in the hardened cement paste (HCP) in concrete, AFm is often simply referred to as aluminate ferrite monosulfate or calcium aluminate monosulfate. However, carbonate-AFm phases also exist (monocarbonate and hemicarbonate) and are thermodynamically more stable than the sulfate-AFm phase. During concrete carbonation by atmospheric CO₂, the sulfate-AFm phase is also slowly transformed into carbonate-AFm phases.
Different AFm phases
AFm phases belong to the class of layered double hydroxides (LDH). LDHs are hydroxides with a double-layer structure. The main cation is divalent (M²⁺) and its electrical charge is compensated by two hydroxide anions (OH⁻). Some of the divalent cations are replaced by a trivalent one (M³⁺). This creates an excess of positive electrical charge which must be compensated by an equal number of negative charges borne by anions. These anions are located in the space between adjacent hydroxide layers. The interlayers in LDHs are also occupied by water molecules accompanying the anions that counterbalance the excess of positive charge created by the cation isomorphic substitution in the hydroxide sheets.
In the most studied class of LDHs, the positively charged layer, consisting of divalent and trivalent cations, can be represented by the generic formula:

[M²⁺₁₋ₓM³⁺ₓ(OH)₂]ˣ⁺ [(Xⁿ⁻)ₓ/ₙ · yH₂O]ˣ⁻

where Xⁿ⁻ is the intercalating anion.
In AFm, the divalent cation is calcium (Ca²⁺), while the substituting trivalent cation is aluminium (Al³⁺). The nature of the counterbalancing anion X can be very diverse: hydroxide (OH⁻), chloride (Cl⁻), sulfate (SO₄²⁻), carbonate (CO₃²⁻), among others. The thickness of the interlayer is sufficient to host a variety of relatively large anions often present as impurities. As other LDHs, AFm phases can incorporate into their structure toxic elements such as boron and selenium. The different AFm phases are distinguished by the nature of the anion counterbalancing the excess of positive charges in the hydroxide sheets. As in portlandite (Ca(OH)₂), the hydroxide sheets of AFm are made of hexa-coordinated octahedral cations located in a same plane, but due to the excess of positive electrical charges, the hydroxide sheets are distorted.
To convert the oxide notation into the LDH formula, the mass balance in the system has to respect the principle of conservation of matter. Oxide ions (O²⁻) and water are transformed into two hydroxide anions (OH⁻) according to the acid–base reaction between H₂O and O²⁻ (a strong base), as typically exemplified by the quicklime (CaO) slaking process:

CaO + H₂O → Ca(OH)₂,

or simply,

O²⁻ + H₂O → 2 OH⁻
AFm structure
AFm phases encompass a class of calcium aluminate hydrates (C-A-H) whose structure derives from that of hydrocalumite, in which part of the hydroxide (OH⁻) anions are replaced by anions such as sulfate (SO₄²⁻) or carbonate (CO₃²⁻). The different mineral phases resulting from these anionic substitutions do not easily form solid solutions but behave as independent phases. The replacement of hydroxide ions by sulfate ions is limited. So, AFm does not refer to a single pure mineralogical phase but rather to a mix of several AFm phases co-existing in the hydrated cement paste (HCP).
Considering a monovalent anion X, the chemical formula can be rearranged and expressed per formula unit as [Ca₂Al(OH)₆]⁺·X⁻·nH₂O. The octahedral Ca²⁺ ions are located in a plane, as for the calcium or magnesium hydroxides in the hexagonal sheets of portlandite or brucite respectively. The replacement of one divalent Ca²⁺ cation by a trivalent Al³⁺ cation, or to a lesser extent by a Fe³⁺ cation, with a Ca:Al ratio of 2:1 (one Al substituted for every 3 cations), causes an excess of positive charge in the sheet, which must be compensated by negative charges borne by X⁻ anions. The X⁻ anions counterbalancing the positive charge imbalance borne by the sheet are located in the interlayer, whose spacing is much larger than in the layered structures of brucite or portlandite. This allows the AFm structure to accommodate larger anionic species along with water molecules.
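A toy check of this charge bookkeeping, assuming the idealised sheet stoichiometry Ca₂Al(OH)₆ described above (the function and variable names are illustrative, not from the source):

```python
def sheet_charge(n_ca: int, n_al: int, n_oh: int) -> int:
    """Net charge of a hydroxide-sheet fragment built from Ca2+, Al3+ and OH-."""
    return 2 * n_ca + 3 * n_al - n_oh

# One AFm formula unit, [Ca2Al(OH)6]+, carries a +1 excess charge...
excess = sheet_charge(n_ca=2, n_al=1, n_oh=6)
assert excess == 1

# ...so the interlayer must supply one monovalent anion (e.g. Cl-)
# or half a divalent anion (e.g. SO4 2-) per formula unit.
print(f"excess charge per Ca2Al(OH)6 unit: +{excess}")
print(f"monovalent X- needed per unit:  {excess}")
print(f"divalent X2- needed per unit:   {excess / 2}")
```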
The crystal structure of AFm phases is that of a layered double hydroxide (LDH), and AFm phases also exhibit the same anion-exchange properties. The carbonate anion (CO₃²⁻) occupies the interlayer space in a privileged way, with the highest selectivity coefficient, and is more strongly retained in the interlayer than other divalent or monovalent anions.
According to Miyata (1983), the equilibrium constant (selectivity coefficient) for anion exchange varies in the order CO₃²⁻ > SO₄²⁻ for divalent anions, and OH⁻ > F⁻ > Cl⁻ > Br⁻ > NO₃⁻ > I⁻ for monovalent anions, but this order is not universal and varies with the nature of the LDH.
Thermodynamic stability
The thermodynamic stability of AFm phases studied at 25 °C depends on the nature of the anion present in the interlayer: carbonate (CO₃²⁻) stabilises AFm and displaces the other anions at the concentrations typically found in hardened cement paste (HCP). Different sources of carbonate can contribute to the carbonation of AFm phases: addition of finely ground limestone filler, atmospheric CO₂, carbonate present as an impurity in the gypsum interground with the clinker to avoid cement flash setting, and "alkali sulfates" condensed onto clinker during its cooling, or from added clinker kiln dust. Carbonation can occur rapidly within the fresh concrete during its setting and hardening (internal carbonate sources), or continue slowly in the long term in the hardened cement paste of concrete exposed to external sources of carbonate: CO₂ from the air, or the bicarbonate anion (HCO₃⁻) present in groundwater (immersed structures) or clay porewater (foundations and underground structures).
When the carbonate concentration increases in the hardened cement paste (HCP), hydroxy-AFm phases are progressively replaced, first by hemicarboaluminate and then by monocarboaluminate. The stability of AFm phases increases with their carbonate content, as shown by Damidot and Glasser (1995) by means of their thermodynamic calculations on the system at 25 °C.
When carbonate displaces sulfate from AFm, the sulfate released into the concrete pore water may react with portlandite (Ca(OH)₂) to form ettringite (3CaO·Al₂O₃·3CaSO₄·32H₂O), the main AFt phase present in the hydrated cement system.
As stressed by Matschei et al. (2007), the impact of small amounts of carbonate on the nature and stability of the AFm phases is noteworthy. Divet (2000) also notes that micromolar amounts of carbonate can inhibit the formation of AFm sulfate, thus favouring the crystallisation of ettringite (AFt sulfate).
See also
AFt phases
Concrete degradation#Chloride attack
Layered double hydroxides (LDH)
Friedel's salt
Ettringite (AFt)
Pitting corrosion of rebar induced by chloride attack
References
Further reading
Aluminium compounds
Cement
Concrete
Hydrates
Iron compounds
Iron(III) compounds
Silicates
Sulfate minerals
Sulfates
Carbonate minerals | AFm phases | [
"Chemistry",
"Engineering"
] | 1,721 | [
"Structural engineering",
"Sulfates",
"Hydrates",
"Salts",
"Concrete"
] |
9,432,014 | https://en.wikipedia.org/wiki/Tantalum%20hafnium%20carbide | Tantalum hafnium carbide is a refractory chemical compound with the general formula TaxHf1-xCy, which can be considered as a solid solution of tantalum carbide and hafnium carbide. It was originally thought to have the highest melting point of any known substance, but new research has shown that hafnium carbonitride has a higher melting point.
Properties
Individually, the tantalum and hafnium carbides have the highest melting points among the binary compounds, and their "alloy" with the composition Ta4HfC5 has an even higher melting point.
Very few measurements of melting point in tantalum hafnium carbide have been reported, because of the obvious experimental difficulties at extreme temperatures. A 1965 study of the TaC-HfC solid solutions at temperatures 2,225–2,275 °C found a minimum in the vaporization rate and thus maximum in the thermal stability for Ta4HfC5. This rate was comparable to that of tungsten and was weakly dependent on the initial density of the samples, which were sintered from TaC-HfC powder mixtures, also at 2,225–2,275 °C. In a separate study, Ta4HfC5 was found to have the minimum oxidation rate among the TaC-HfC solid solutions. Ta4HfC5 was manufactured by Goodfellow company as a 45 μm powder at a price of $9,540/kg (99.0% purity).
In 2015, atomistic simulations predicted that hafnium carbonitride could have a melting point exceeding that of Ta4HfC5 by 200 K. This was later verified by experimental evidence in 2020.
Structure
Individual tantalum and hafnium carbides have a rocksalt cubic lattice structure. They are usually carbon-deficient and have nominal formulas TaCx and HfCx, with x = 0.7–1.0 for Ta and x = 0.56–1.0 for Hf. The same structure is also observed for at least some of their solid solutions. The density calculated from X-ray diffraction data is 13.6 g/cm3 for Ta0.5Hf0.5C. A hexagonal NiAs-type structure (space group P63/mmc, No. 194, Pearson symbol hP4) with a density of 14.76 g/cm3 was reported for Ta0.9Hf0.1C0.5.
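As a sketch of how such an X-ray density is obtained for a rocksalt-type carbide: the cubic cell contains four formula units, so ρ = 4M/(N_A·a³). The lattice parameter used below (a ≈ 4.55 Å for Ta0.5Hf0.5C) is an illustrative assumption, not a value from the source:

```python
# Theoretical (X-ray) density of a rocksalt (B1) carbide: 4 formula units per cubic cell.
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def xray_density(molar_mass_g_mol: float, a_angstrom: float) -> float:
    """Density in g/cm^3 for a rocksalt structure with lattice parameter a."""
    a_cm = a_angstrom * 1e-8  # 1 angstrom = 1e-8 cm
    return 4 * molar_mass_g_mol / (N_A * a_cm ** 3)

# Ta0.5Hf0.5C: average of the metal atomic weights, plus carbon.
m = 0.5 * 180.95 + 0.5 * 178.49 + 12.011
print(f"{xray_density(m, 4.55):.1f} g/cm^3")  # ~13.5, close to the quoted 13.6
```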
See also
Tantalum carbide
Hafnium carbide
Hafnium carbonitride
References
Refractory materials
Carbides
Tantalum compounds
Hafnium compounds | Tantalum hafnium carbide | [
"Physics"
] | 551 | [
"Refractory materials",
"Materials",
"Matter"
] |
9,432,308 | https://en.wikipedia.org/wiki/Mothers%20against%20decapentaplegic | Mothers against decapentaplegic is a protein from the SMAD family that was discovered in Drosophila. During Drosophila research, it was found that a mutation in the gene in the mother repressed the gene decapentaplegic in the embryo. The phrase "Mothers against" was added as a humorous take-off on organizations opposing various issues, e.g. Mothers Against Drunk Driving (MADD), and follows a tradition of such unusual naming within the gene research community.
Several human homologues are known:
Mothers against decapentaplegic homolog 1
Mothers against decapentaplegic homolog 2
Mothers against decapentaplegic homolog 3
Mothers against decapentaplegic homolog 4
Mothers against decapentaplegic homolog 5
Mothers against decapentaplegic homolog 6
Mothers against decapentaplegic homolog 7
Mothers against decapentaplegic homolog 9
References
Proteins
SMAD (protein) | Mothers against decapentaplegic | [
"Chemistry"
] | 198 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
9,432,643 | https://en.wikipedia.org/wiki/RNA-induced%20transcriptional%20silencing | RNA-induced transcriptional silencing (RITS) is a form of RNA interference by which short RNA molecules – such as small interfering RNA (siRNA) – trigger the downregulation of transcription of a particular gene or genomic region. This is usually accomplished by posttranslational modification of histone tails (e.g. methylation of lysine 9 of histone H3) which target the genomic region for heterochromatin formation. The protein complex that binds to siRNAs and interacts with the methylated lysine 9 residue of histones H3 (H3K9me2) is the RITS complex.
RITS was discovered in the fission yeast Schizosaccharomyces pombe, and has been shown to be involved in the initiation and spreading of heterochromatin in the mating-type region and in centromere formation. The RITS complex in S. pombe contains at least a piwi domain-containing, RNase H-like argonaute protein, the chromodomain protein Chp1, and the argonaute-interacting protein Tas3, which can also bind to Chp1; heterochromatin formation has been shown to require at least argonaute and an RNA-dependent RNA polymerase. Loss of these genes in S. pombe results in abnormal heterochromatin organization and impairment of centromere function, resulting in lagging chromosomes during anaphase of cell division.
Function and mechanisms
The maintenance of heterochromatin regions by RITS complexes has been described as a self-reinforcing feedback loop, in which RITS complexes stably bind the methylated histones of a heterochromatin region using the Chp1 protein and induce co-transcriptional degradation of any nascent messenger RNA (mRNA) transcripts, which are then used as RNA-dependent RNA polymerase substrates to replenish the complement of siRNA molecules to form more RITS complexes. The RITS complex localizes to heterochromatic regions through base pairing with the nascent heterochromatic transcripts as well as through the Chp1 chromodomain, which recognizes the methylated histones found in heterochromatin. Once incorporated into the heterochromatin, the RITS complex is also known to play a role in the recruitment of other RNAi complexes as well as other chromatin-modifying enzymes to specific genomic regions. Heterochromatin formation, but possibly not maintenance, is dependent on the ribonuclease protein Dicer, which is used to generate the initial complement of siRNAs.
Importance in other species
The relevance of observations from fission yeast mating-type regions and centromeres to mammals is not clear, as some evidence suggests that heterochromatin maintenance in mammalian cells is independent of the components of the RNAi pathway. It is known, however, that plants and animals have analogous mechanism for small RNA-guided heterochromatin formation, and it is believed that the mechanisms described above for S. pombe are highly conserved and play some role in heterochromatin formation in mammals as well. In higher eukaryotes, RNAi-dependent heterochromatic silencing appears to play a larger role in germline cells than in primary cells or cell lines, and is only one of the many different forms of gene silencing used throughout the genome, making it more difficult to study.
The role of RNAi in transcriptional gene silencing in plants has been characterized fairly well, and functions primarily through DNA methylation via the RdDM pathway. In this process, which is distinct from the one described above, argonaute-bound siRNA recognizes nascent RNA transcripts or the target DNA to guide the methylation and silencing of the target genomic region.
References
Gene expression
RNA | RNA-induced transcriptional silencing | [
"Chemistry",
"Biology"
] | 804 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
9,432,843 | https://en.wikipedia.org/wiki/Pyrenophora%20graminea | Pyrenophora graminea is the causal agent of barley stripe, a disease of barley that once caused significant crop yield losses in many areas of the world. Its associated anamorph is Drechslera graminea (Rabenhorst ex Schlechtendal) S. Ito 1930.
Identification
Asexual stage: Pycnidia are rarely observed in nature. They are 70–176 μm in diameter, globose to pear-shaped, and develop superficially or partly submerged. The wall is thin and fragile and is yellow to brown, with a short ostiole. Pycnidiospores are 1.4–3.2 x 1.0–1.6 μm, spherical or ellipsoidal, hyaline, and nonseptate.
Sexual stage: Perithecia are rare in nature; they occur in barley straw in the autumn. The perithecia are 576–728 x 442–572 μm. They are superficial to partly submerged and are elongate, with rigid setae on the surface. Asci are club-shaped or cylindrical, clearly bitunicate, and rounded at the apex, with a short stalk at the base. Ascospores are 43–61 x 16–28 μm, light yellow-brown, ellipsoidal, and rounded at both ends, with transverse septa and one septum, occasionally two, in the median cells but never in the terminal cells.
Conidia are borne laterally and terminally on conidiophores, which usually occur in clusters of three to five. The conidia are straight with rounded ends and measure 11–24 x 30–100 μm. They are subhyaline to yellow-brown and have up to seven transverse septa.
In culture, the mycelium is gray to olivaceous and is often sterile. Conidia may be formed when infected barley pieces are placed on water agar and incubated under diurnal light conditions followed by a period of chilling.
Disease symptoms
Severe seedling infection can cause stunting and post-emergence death, but symptoms are not usually apparent until later, when long, chlorotic or yellow stripes appear on leaves and sheaths. Most leaves of a diseased plant are usually affected. Dark brown streaks develop later in the stripes, which eventually dry out and cause leaf shedding. Ears may fail to emerge, or may be deformed and discoloured. Grain production by infected plants is severely restricted.
Geographical distribution
It is widespread and occurs in most barley-growing areas of the world.
Physiological specialization
Considerable variation in pathogenicity between isolates exists, but no formal physiologic races have been recognized.
Biology
The disease is monocyclic and seed-borne, usually by mycelium in the pericarp. Perithecia are uncommon, but over-wintering sclerotia on crop debris have been reported from Russia. Secondary infection by conidia is apparently important only for floral infection and subsequent seed contamination.
Sources
Index Fungorum
USDA ARS Fungal Database
References
Pyrenophora
Fungal plant pathogens and diseases
Barley diseases
Fungi described in 1931
Fungus species | Pyrenophora graminea | [
"Biology"
] | 663 | [
"Fungi",
"Fungus species"
] |
9,433,462 | https://en.wikipedia.org/wiki/Mating-type%20locus | The mating-type locus is a specialized region in the genomes of some yeast and other fungi, usually organized into heterochromatin and possessing unique histone methylation patterns. The genes in this region regulate the mating type of the organism and therefore determine key events in its life cycle, such as whether it will reproduce sexually or asexually. In fission yeast such as S. pombe, the formation and maintenance of the heterochromatin organization is regulated by RNA-induced transcriptional silencing, a form of RNA interference responsible for genomic maintenance in many organisms. Mating type regions have also been well studied in budding yeast S. cerevisiae and in the fungus Neurospora crassa.
Mating-type switching
In the budding yeast Saccharomyces cerevisiae, mating type is determined by two non-homologous alleles at the mating-type locus. S. cerevisiae has the capability of undergoing mating-type switching, that is, conversion of some haploid cells in a colony from one mating type to the other. Mating-type switching can occur as frequently as once every generation. Switching involves homologous recombinational repair of a site-specific, programmed double-strand break, in a highly organized process. This process replaces the DNA sequence of one mating-type allele with the sequence encoding the alternative mating-type allele. When two haploid cells of opposite mating type come into contact they can mate to form a diploid cell, a zygote, that may then undergo meiosis. Meiosis tends to occur under nutritionally limiting conditions associated with DNA damage.
See also
Mating of yeast
References
Molecular genetics
Mycology
Sexual dimorphism
Mating | Mating-type locus | [
"Physics",
"Chemistry",
"Biology"
] | 358 | [
"Behavior",
"Sexual dimorphism",
"Asymmetry",
"Mating",
"Sex",
"Mycology",
"Molecular genetics",
"Molecular biology",
"Ethology",
"Symmetry"
] |
596,405 | https://en.wikipedia.org/wiki/Collider | A collider is a type of particle accelerator that brings two opposing particle beams together such that the particles collide. Compared to other particle accelerators in which the moving particles collide with a stationary matter target, colliders can achieve higher collision energies. Colliders may either be ring accelerators or linear accelerators.
Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for extremely short periods of time, and therefore may be hard or impossible to study in other ways.
Explanation
In particle physics, one gains knowledge about elementary particles by accelerating particles to very high kinetic energy and guiding them to collide with other particles. For sufficiently high energy, a reaction occurs that transforms the particles into other particles. Detecting these products gives insight into the physics involved.
To do such experiments there are two possible setups:
Fixed target setup: A beam of particles (the projectiles) is accelerated with a particle accelerator, and as collision partner, one puts a stationary target into the path of the beam.
Collider: Two beams of particles are accelerated and the beams are directed against each other, so that the particles collide while flying in opposite directions.
The collider setup is harder to construct but has the great advantage that, according to special relativity, the energy of an inelastic collision between two particles approaching each other with a given velocity is not just 4 times as high as in the case of one particle resting (as it would be in non-relativistic physics); it can be orders of magnitude higher if the collision velocity is near the speed of light.
In the case of a collider where the collision point is at rest in the laboratory frame (i.e. $\vec p_1 + \vec p_2 = 0$), the center of mass energy $E_{\text{cm}}$ (the energy available for producing new particles in the collision) is simply $E_{\text{cm}} = E_1 + E_2$, where $E_1$ and $E_2$ are the total energies of a particle from each beam.
For a fixed target experiment where particle 2 is at rest, $E_{\text{cm}} = \sqrt{m_1^2 c^4 + m_2^2 c^4 + 2 E_1 m_2 c^2}$.
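As a rough numerical sketch of the two formulas (the beam energy, the rounded proton rest energy, and the function names are illustrative, not from the article):

```python
import math

M_P = 0.938  # proton rest energy m*c^2 in GeV (approximate)

def ecm_collider(e1_gev, e2_gev):
    """Head-on collider with the collision point at rest in the lab:
    the center of mass energy is just the sum of the beam energies."""
    return e1_gev + e2_gev

def ecm_fixed_target(e1_gev, m1_gev=M_P, m2_gev=M_P):
    """Fixed-target setup (particle 2 at rest): most of the beam energy
    goes into the motion of the center of mass, not into new particles."""
    return math.sqrt(m1_gev**2 + m2_gev**2 + 2.0 * e1_gev * m2_gev)

beam = 6800.0  # GeV per beam, roughly one LHC proton beam
print(ecm_collider(beam, beam))   # 13600 GeV = 13.6 TeV
print(ecm_fixed_target(beam))     # ~113 GeV, about 120 times less
```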
History
The first serious proposal for a collider originated with a group at the Midwestern Universities Research Association (MURA). This group proposed building two tangent radial-sector FFAG accelerator rings. Tihiro Ohkawa, one of the authors of the first paper, went on to develop a radial-sector FFAG accelerator design that could accelerate two counterrotating particle beams within a single ring of magnets. The third FFAG prototype built by the MURA group was a 50 MeV electron machine built in 1961 to demonstrate the feasibility of this concept.
Gerard K. O'Neill proposed using a single accelerator to inject particles into a pair of tangent storage rings. As in the original MURA proposal, collisions would occur in the tangent section. The benefit of storage rings is that the storage ring can accumulate a high beam flux from an injection accelerator that achieves a much lower flux.
The first electron-positron colliders were built in the late 1950s and early 1960s, in Italy at the Istituto Nazionale di Fisica Nucleare in Frascati near Rome by the Austrian-Italian physicist Bruno Touschek, and in the US by the Stanford-Princeton team that included William C. Barber, Bernard Gittelman, Gerry O'Neill, and Burton Richter. Around the same time, the VEP-1 electron-electron collider was independently developed and built under the supervision of Gersh Budker at the Institute of Nuclear Physics in Novosibirsk, USSR. The first observations of particle reactions in colliding beams were reported almost simultaneously by the three teams between mid-1964 and early 1965.
In 1966, work began on the Intersecting Storage Rings at CERN, and in 1971, this collider was operational. The ISR was a pair of storage rings that accumulated and collided protons injected by the CERN Proton Synchrotron. This was the first hadron collider, as all of the earlier efforts had worked with electrons or with electrons and positrons.
In 1968 construction began at Fermilab on what would become the highest-energy proton accelerator complex. It was eventually upgraded to become the Tevatron collider, and in October 1985 the first proton-antiproton collisions were recorded at a center of mass energy of 1.6 TeV, making it the highest-energy collider in the world at the time. The energy later reached 1.96 TeV, and by the end of operation in 2011 the collider luminosity exceeded 430 times its original design goal.
Since 2009, the highest-energy collider in the world has been the Large Hadron Collider (LHC) at CERN. It currently operates at 13 TeV center of mass energy in proton-proton collisions. More than a dozen future particle collider projects of various types (circular and linear; colliding hadrons such as proton-proton or ion-ion, leptons such as electron-positron or muon-muon, or electrons and ions/protons) are currently under consideration for detailed exploration of Higgs/electroweak physics and discoveries at the post-LHC energy frontier.
Operating colliders
Source: the Particle Data Group website.
See also
List of colliders
Fixed-target experiment
Large Electron–Positron Collider
Large Hadron Collider
Very Large Hadron Collider
Relativistic Heavy Ion Collider
International Linear Collider
Storage ring
Tevatron
International Conference on Photonic, Electronic and Atomic Collisions
Future Circular Collider
References
External links
LHC - The Large Hadron Collider on the web
The Relativistic Heavy Ion Collider (RHIC)
Accelerator physics | Collider | [
"Physics"
] | 1,210 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
596,407 | https://en.wikipedia.org/wiki/Omega%20%28TeX%29 | Omega is an extension of the TeX typesetting system that uses the Basic Multilingual Plane of Unicode. It was authored by John Plaice and Yannis Haralambous after TeX development was frozen in 1991, primarily to enhance TeX's multilingual typesetting abilities. It includes a new 16-bit font encoding for TeX, as well as fonts (omlgc and omah) covering a wide range of alphabets.
At the 2004 TeX Users Group conference, Plaice announced his decision to split off a new project (not yet public), while Haralambous continued to work on Omega proper.
LaTeX for Omega is invoked as lambda.
Aleph and LuaTeX
Although the project seemed very promising from the beginning, the development was slow and the functionality rather unstable. A separate project, known as Aleph and led by Giuseppe Bilotta, was started with the goal of stabilizing the code and extending it with e-TeX functionality.
The LaTeX for Aleph is known as Lamed.
Aleph alone is no longer being developed, but most of its functionality has been integrated into LuaTeX, a new project initially funded by Colorado State University (through the Oriental TeX Project by Idris Samawi Hamid) and NTG. LuaTeX started in 2006 and released its first beta version in summer 2007. It is intended as a successor of both Aleph and pdfTeX, using Lua as an integrated lightweight programming language. It is developed primarily by Taco Hoekwater.
See also
XeTeX and LuaTeX for recent Unicode capable TeX extensions.
List of TeX extensions
External links
Omega home page
TeX FAQ entry on Aleph and Omega
Mailing list for Omega
Mailing list for Aleph
TeX
Unicode | Omega (TeX) | [
"Mathematics"
] | 364 | [
"TeX",
"Mathematical markup languages"
] |
596,482 | https://en.wikipedia.org/wiki/Hair%20wax | Hair wax is a thick hairstyling product containing wax, used to assist with holding the hair. In contrast with hair gels, most of which contain alcohol, hair wax remains pliable and has less chance of drying out. It is often sold under names such as pomade, putty, glue, whip, molding gum, or styling paste. The texture, consistency, and purpose of these products vary widely, and each has a different purported purpose depending on the manufacturer. Traditionally, pomade is a type of hair wax that also adds shine to one's hair.
Hair wax has been used for many years: the ancient Gauls invented a waxy, soap-like substance as a hair-styling agent, and it was not used as a cleaning agent until many years later.
Ingredients
The following are some of the ingredients typically found in commercial hair wax products:
Beeswax
Candelilla wax
Carnauba wax
Castor wax
Emulsifying wax
Japan wax
Lanolin
Ozokerite
Some stylists prefer making their own blends of hair wax customized for their clientele. Various recipes exist, including some with "secret" ingredients.
Notes
References
External links
See also
Moustache wax
Brylcreem
Murray's Pomade
Hair care products
Waxes
"Physics"
] | 266 | [
"Materials",
"Matter",
"Waxes"
] |
596,485 | https://en.wikipedia.org/wiki/Urine%20therapy | Urine therapy or urotherapy (also urinotherapy, Shivambu, uropathy, or auto-urine therapy) in alternative medicine is the application of human urine for medicinal or cosmetic purposes, including drinking one's own urine and massaging one's skin, or gums, with one's own urine. No scientific evidence exists to support any beneficial health claims of urine therapy.
History
Though urine has been believed useful for diagnostic and therapeutic purposes in several traditional systems, and mentioned in some medical texts, auto-urine therapy as a system of alternative medicine was popularized by British naturopath John W. Armstrong in the early 20th century. Armstrong was inspired by his family's practice of using urine to treat minor stings and toothaches, by a metaphorical misreading of the Hebrew Biblical Proverb 5:15 "Drink waters out of thine own cistern, and running waters out of thine own well", and his own experience with ill-health that he treated with a 45-day fast "on nothing but urine and tap water". Starting in 1918, Armstrong prescribed urine therapy regimens that he devised for thousands of patients, and in 1944 he published The Water of Life: A Treatise on urine therapy, which became a founding document of the field.
Armstrong's book sold widely, and in India inspired a 1959 Gujarati treatise on urine therapy by Gandhian social reformer Raojibhai Manibhai Patel, and many later works. These works often reference Shivambu Kalpa, a treatise on the pharmaceutical value of urine, as a source of the practice in the East. They also cite passing references to properties and uses of urine in Yogic texts such as Vayavaharasutra by Bhadrabahu and Hatha Yoga Pradipika by Svatmarama, and Ayurvedic texts such as Sushruta Samhita, Bhava Prakasha and Harit. However, according to medical anthropologist Joseph Alter, the practices of drinking one's own urine recommended by modern Indian practitioners of urine therapy are closer to the ones propounded by Armstrong than to traditional ayurveda or yoga, or even the practices described in Shivambu Kalpa.
Urine therapy has also been combined with other forms of alternative medicine.
It was used by ancient Roman dentists to whiten teeth.
Modern claims and findings
An exhaustive description of the composition of human urine was prepared for NASA in 1971. Urine is an aqueous solution of greater than 95% water. The remaining constituents are, in order of decreasing concentration: urea 9.3 g/L, chloride 1.87 g/L, sodium 1.17 g/L, potassium 0.750 g/L, creatinine 0.670 g/L and other dissolved ions, inorganic and organic compounds.
In China there is a Urine Therapy Association which claims thousands of members.
According to a BBC report, a Thai doctor promoting urine therapy said that Thai people had been practicing urophagia for a long time, but according to the Department of Thai Traditional and Alternative Medicine, there was no record of the practice. In 2022, Thawee Nanra, a self-proclaimed holy man from Thailand, was arrested by police; his followers were observed consuming his urine and feces which they believed to have healing properties.
Urinating on jellyfish stings is a common "folk remedy". This does not help with jellyfish stings, and can be counterproductive, activating nematocysts remaining at the site of the sting, making the pain worse. This is because nematocysts are triggered by the change in the concentration of solutes (e.g. salt), such as when freshwater or similarly-composed urine is applied to the site. The myth originated from the false idea that ammonia, urea, and other compounds in urine could break down the nematocysts: however, urine is much too low in concentration to have those effects.
Urine and urea have been claimed by some practitioners to have an anti-cancer effect, and urotherapy has been offered along with other forms of alternative therapy in some cancer clinics in Mexico. No well-controlled studies support this, and available scientific evidence does not support this theory.
In the Arabian Peninsula, bottled camel urine is sold by vendors as prophetic medicine. In 2015, Saudi police arrested a man for selling supposed "camel urine" that was actually his own.
In January 2022, Christopher Key, a spreader of COVID-19 misinformation, claimed that urine therapy is the antidote to the COVID-19 pandemic. Key also falsely claims that a 9-month research trial on urine therapy has been conducted. There is no scientific evidence supporting urine therapy as a cure for COVID-19.
Health concerns
There is no scientific evidence of therapeutic use for untreated urine.
According to the American Cancer Society, "available scientific evidence does not support claims that urine or urea given in any form is helpful for cancer patients".
In 2016 the Chinese Urine Therapy Association was included on a list of illegal organizations by the Ministry of Civil Affairs. However, the Municipal Bureau of Civil Affairs in Wuhan said they had no jurisdiction over the association.
Celebrities who used urine therapy
Morarji Desai, Indian Prime Minister.
Alfredo Bowman, aka Dr Sebi
See also
Conjugated estrogens, a hormone therapy medication manufactured by purification from horse urine
Fecal microbiota transplant
List of topics characterized as pseudoscience
List of unproven and disproven cancer treatments
Panchgavya, one of several uses of cow urine in Ayurveda
Urea-containing cream
Urinalysis, tests performed on urine for diagnostic purposes
Virgin boy egg, a traditional dish of Dongyang, Zhejiang, China in which eggs are boiled in the urine of young boys
Notes
References
Further reading
"Urine therapy", Martin Gardner, Skeptical Inquirer, May–June 1999.
Alternative cancer treatments
Alternative medical treatments
Biologically based therapies
Naturopathy
Pseudoscience
Urine | Urine therapy | [
"Biology"
] | 1,259 | [
"Urine",
"Excretion",
"Animal waste products"
] |
596,503 | https://en.wikipedia.org/wiki/Fat%20tree | The fat tree network is a universal network for provably efficient communication. It was invented by Charles E. Leiserson of MIT in 1985. k-ary n-trees, the type of fat-trees commonly used in most high-performance networks, were initially formalized in 1997.
In a tree data structure, every branch has the same thickness (bandwidth), regardless of its place in the hierarchy—they are all "skinny" (skinny in this context means low-bandwidth). In a fat tree, branches nearer the top of the hierarchy are "fatter" (thicker) than branches further down the hierarchy. In a telecommunications network, the branches are data links; the varied thickness (bandwidth) of the data links allows for more efficient and technology-specific use.
Mesh and hypercube topologies have communication requirements that follow a rigid algorithm, and cannot be tailored to specific packaging technologies.
Applications in supercomputers
Supercomputers that use a fat tree network include the two fastest as of late 2018, Summit and Sierra, as well as Tianhe-2, the Meiko Scientific CS-2, Yellowstone, the Earth Simulator, the Cray X2, the Connection Machine CM-5, and various Altix supercomputers.
Mercury Computer Systems applied a variant of the fat tree topology—the hypertree network—to their multicomputers. In this architecture, 2 to 360 compute nodes are arranged in a circuit-switched fat tree network. Each node has local memory that can be mapped by any other node. Each node in this heterogeneous system could be an Intel i860, a PowerPC, or a group of three SHARC digital signal processors.
The fat tree network was particularly well suited to fast Fourier transform computations, which customers used for such signal processing tasks as radar, sonar, and medical imaging.
Related topologies
In August 2008, a team of computer scientists at UCSD published a scalable design for network architecture that uses a topology inspired by the fat tree topology to realize networks that scale better than those of previous hierarchical networks. The architecture uses commodity switches that are cheaper and more power-efficient than high-end modular data center switches.
This topology is actually a special instance of a Clos network, rather than a fat-tree as described above. That is because the edges near the root are emulated by many links to separate parents instead of a single high-capacity link to a single parent. However, many authors continue to use the term in this way.
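To make the counting concrete, here is a sketch of the arithmetic for the k-ary fat-tree design of that paper, assuming the standard pod structure (k pods of k/2 edge and k/2 aggregation switches, plus (k/2)² core switches; the function name is ours):

```python
def fat_tree_counts(k):
    """Host and switch counts for a k-ary fat tree built from k-port
    commodity switches: k pods of k/2 edge + k/2 aggregation switches,
    (k/2)^2 core switches, and k/2 hosts per edge switch."""
    assert k % 2 == 0, "port count k must be even"
    edge = aggregation = k * (k // 2)
    core = (k // 2) ** 2
    hosts = (k ** 3) // 4
    return {"hosts": hosts, "switches": edge + aggregation + core}

print(fat_tree_counts(48))  # {'hosts': 27648, 'switches': 2880}
```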
References
Further reading
Network topology | Fat tree | [
"Mathematics"
] | 521 | [
"Network topology",
"Topology"
] |
596,600 | https://en.wikipedia.org/wiki/Relatively%20compact%20subspace | In mathematics, a relatively compact subspace (or relatively compact subset, or precompact subset) of a topological space is a subset whose closure is compact.
Properties
Every subset of a compact topological space is relatively compact (since a closed subset of a compact space is compact). And in an arbitrary topological space every subset of a relatively compact set is relatively compact.
Every compact subset of a Hausdorff space is relatively compact. In a non-Hausdorff space, such as the particular point topology on an infinite set, the closure of a compact subset is not necessarily compact; said differently, a compact subset of a non-Hausdorff space is not necessarily relatively compact.
Every compact subset of a (possibly non-Hausdorff) topological vector space is complete and relatively compact.
In the case of a metric topology, or more generally when sequences may be used to test for compactness, the criterion for relative compactness of a subset $S$ of a space $X$ becomes that any sequence in $S$ has a subsequence convergent in $X$.
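A minimal illustration in this metric setting, with $\mathbb{R}$ carrying its usual topology (the examples are standard but chosen here only for concreteness):

```latex
% The open interval (0,1) is relatively compact in \mathbb{R}: its
% closure is compact, and every sequence in (0,1) has a subsequence
% converging in \mathbb{R} (Bolzano--Weierstrass).
\overline{(0,1)} = [0,1] \quad \text{(compact)}
% By contrast, \mathbb{R} is closed but not compact, hence not
% relatively compact in itself: x_n = n has no convergent subsequence.
\overline{\mathbb{R}} = \mathbb{R} \quad \text{(not compact)}
```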
Some major theorems characterize relatively compact subsets, in particular in function spaces. An example is the Arzelà–Ascoli theorem. Other cases of interest relate to uniform integrability, and the concept of normal family in complex analysis. Mahler's compactness theorem in the geometry of numbers characterizes relatively compact subsets in certain non-compact homogeneous spaces (specifically spaces of lattices).
Counterexample
As a counterexample take any finite neighbourhood of the particular point of an infinite particular point space. The neighbourhood itself is compact but is not relatively compact because its closure is the whole non-compact space.
Almost periodic functions
The definition of an almost periodic function $f$ at a conceptual level has to do with the set of translates of $f$ being a relatively compact set. This needs to be made precise in terms of the topology used in a particular theory.
See also
Compactly embedded
Totally bounded space
References
Page 12 of V. Khatskevich, D. Shoikhet, Differentiable Operators and Nonlinear Equations, Birkhäuser Verlag AG, Basel, 1993, 270 pp. (at Google Books)
Properties of topological spaces
Compactness (mathematics) | Relatively compact subspace | [
"Mathematics"
] | 439 | [
"Properties of topological spaces",
"Topological spaces",
"Topology",
"Space (mathematics)"
] |
596,622 | https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli%20theorem | The Arzelà–Ascoli theorem is a fundamental result of mathematical analysis giving necessary and sufficient conditions to decide whether every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. The main condition is the equicontinuity of the family of functions. The theorem is the basis of many proofs in mathematics, including that of the Peano existence theorem in the theory of ordinary differential equations, Montel's theorem in complex analysis, and the Peter–Weyl theorem in harmonic analysis and various results concerning compactness of integral operators.
The notion of equicontinuity was introduced in the late 19th century by the Italian mathematicians Cesare Arzelà and Giulio Ascoli. A weak form of the theorem was proven by Ascoli (1883–1884), who established the sufficient condition for compactness, and by Arzelà (1895), who established the necessary condition and gave the first clear presentation of the result. A further generalization of the theorem was proven by Fréchet (1906), to sets of real-valued continuous functions with domain a compact metric space. Modern formulations of the theorem allow for the domain to be compact Hausdorff and for the range to be an arbitrary metric space. More general formulations of the theorem exist that give necessary and sufficient conditions for a family of functions from a compactly generated Hausdorff space into a uniform space to be compact in the compact-open topology.
Statement and first consequences
By definition, a sequence $(f_n)_{n\in\mathbb{N}}$ of continuous functions on an interval $I = [a, b]$ is uniformly bounded if there is a number $M$ such that
$$|f_n(x)| \le M$$
for every function $f_n$ belonging to the sequence, and every $x \in [a, b]$. (Here, $M$ must be independent of $n$ and $x$.)
The sequence is said to be uniformly equicontinuous if, for every $\varepsilon > 0$, there exists a $\delta > 0$ such that
$$|f_n(x) - f_n(y)| < \varepsilon$$
whenever $|x - y| < \delta$ for all functions $f_n$ in the sequence. (Here, $\delta$ may depend on $\varepsilon$, but not on $x$, $y$ or $n$.)
One version of the theorem can be stated as follows:
Consider a sequence of real-valued continuous functions $(f_n)_{n\in\mathbb{N}}$ defined on a closed and bounded interval $[a, b]$ of the real line. If this sequence is uniformly bounded and uniformly equicontinuous, then there exists a subsequence $(f_{n_k})_{k\in\mathbb{N}}$ that converges uniformly.
The converse is also true, in the sense that if every subsequence of $(f_n)$ itself has a uniformly convergent subsequence, then $(f_n)$ is uniformly bounded and equicontinuous.
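For concreteness, two standard sequences that probe the two hypotheses (they serve only as illustrations here):

```latex
% f_n(x) = \sin(nx)/n on [0, 2\pi]: uniformly bounded (M = 1) and
% uniformly equicontinuous, since by the mean value theorem
|f_n(x) - f_n(y)| = \tfrac{1}{n}\,\lvert \sin(nx) - \sin(ny) \rvert \le |x - y| ;
% the whole sequence converges uniformly to 0.
%
% g_n(x) = x^n on [0, 1]: uniformly bounded (M = 1) but not
% equicontinuous near x = 1; the pointwise limit
\lim_{n \to \infty} g_n(x) = \begin{cases} 0, & 0 \le x < 1, \\ 1, & x = 1, \end{cases}
% is discontinuous, so no subsequence of (g_n) converges uniformly.
```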
Immediate examples
Differentiable functions
The hypotheses of the theorem are satisfied by a uniformly bounded sequence of differentiable functions with uniformly bounded derivatives. Indeed, uniform boundedness of the derivatives implies by the mean value theorem that for all $x$ and $y$,
$$|f_n(x) - f_n(y)| \le K |x - y|,$$
where $K$ is the supremum of the derivatives of functions in the sequence and is independent of $n$. So, given $\varepsilon > 0$, let $\delta = \varepsilon / 2K$ to verify the definition of equicontinuity of the sequence. This proves the following corollary:
Let $(f_n)$ be a uniformly bounded sequence of real-valued differentiable functions on $[a, b]$ such that the derivatives $(f_n')$ are uniformly bounded. Then there exists a subsequence $(f_{n_k})$ that converges uniformly on $[a, b]$.
If, in addition, the sequence of second derivatives is also uniformly bounded, then the derivatives also converge uniformly (up to a subsequence), and so on. Another generalization holds for continuously differentiable functions. Suppose that the functions $f_n$ are continuously differentiable with derivatives $f'_n$. Suppose that the $f'_n$ are uniformly equicontinuous and uniformly bounded, and that the sequence $(f_n)$ is pointwise bounded (or just bounded at a single point). Then there is a subsequence of the $(f_n)$ converging uniformly to a continuously differentiable function.
The diagonalization argument can also be used to show that a family of infinitely differentiable functions, whose derivatives of each order are uniformly bounded, has a uniformly convergent subsequence, all of whose derivatives are also uniformly convergent. This is particularly important in the theory of distributions.
Lipschitz and Hölder continuous functions
The argument given above proves slightly more, specifically
If $(f_n)$ is a uniformly bounded sequence of real-valued functions on $[a, b]$ such that each $f_n$ is Lipschitz continuous with the same Lipschitz constant $K$:
$$|f_n(x) - f_n(y)| \le K |x - y|$$
for all $x, y \in [a, b]$ and all $f_n$, then there is a subsequence that converges uniformly on $[a, b]$.
The limit function is also Lipschitz continuous with the same value $K$ for the Lipschitz constant. A slight refinement is:
A set $F$ of functions on $[a, b]$ that is uniformly bounded and satisfies a Hölder condition of order $\alpha$, $0 < \alpha \le 1$, with a fixed constant $M$,
$$|f(x) - f(y)| \le M \, |x - y|^\alpha, \qquad x, y \in [a, b],$$
is relatively compact in $C([a, b])$. In particular, the unit ball of the Hölder space $C^{0,\alpha}([a, b])$ is compact in $C([a, b])$.
This holds more generally for scalar functions on a compact metric space $X$ satisfying a Hölder condition with respect to the metric on $X$.
Generalizations
Euclidean spaces
The Arzelà–Ascoli theorem holds, more generally, if the functions $f_n$ take values in $d$-dimensional Euclidean space $\mathbf{R}^d$, and the proof is very simple: just apply the $\mathbf{R}$-valued version of the Arzelà–Ascoli theorem $d$ times to extract a subsequence that converges uniformly in the first coordinate, then a sub-subsequence that converges uniformly in the first two coordinates, and so on. The above examples generalize easily to the case of functions with values in Euclidean space.
Compact metric spaces and compact Hausdorff spaces
The definitions of boundedness and equicontinuity can be generalized to the setting of arbitrary compact metric spaces and, more generally still, compact Hausdorff spaces. Let $X$ be a compact Hausdorff space, and let $C(X)$ be the space of real-valued continuous functions on $X$. A subset $F \subset C(X)$ is said to be equicontinuous if for every $x \in X$ and every $\varepsilon > 0$, $x$ has a neighborhood $U_x$ such that
$$\sup_{y \in U_x,\, f \in F} |f(y) - f(x)| < \varepsilon.$$
A set $F \subset C(X)$ is said to be pointwise bounded if for every $x \in X$,
$$\sup_{f \in F} |f(x)| < \infty.$$
A version of the theorem holds also in the space $C(X)$ of real-valued continuous functions on a compact Hausdorff space $X$:
Let X be a compact Hausdorff space. Then a subset F of C(X) is relatively compact in the topology induced by the uniform norm if and only if it is equicontinuous and pointwise bounded.
The Arzelà–Ascoli theorem is thus a fundamental result in the study of the algebra of continuous functions on a compact Hausdorff space.
Various generalizations of the above quoted result are possible. For instance, the functions can assume values in a metric space or (Hausdorff) topological vector space with only minimal changes to the statement:
Let $X$ be a compact Hausdorff space and $Y$ a metric space. Then $F \subset C(X, Y)$ is compact in the compact-open topology if and only if it is equicontinuous, pointwise relatively compact and closed.
Here pointwise relatively compact means that for each $x \in X$, the set $F_x = \{ f(x) : f \in F \}$ is relatively compact in $Y$.
In the case that Y is complete, the proof given above can be generalized in a way that does not rely on the separability of the domain. On a compact Hausdorff space X, for instance, the equicontinuity is used to extract, for each ε = 1/n, a finite open covering of X such that the oscillation of any function in the family is less than ε on each open set in the cover. The role of the rationals can then be played by a set of points drawn from each open set in each of the countably many covers obtained in this way, and the main part of the proof proceeds exactly as above. A similar argument is used as a part of the proof for the general version which does not assume completeness of Y.
Functions on non-compact spaces
The Arzelà–Ascoli theorem generalises to functions $X \to Y$ where $X$ is not compact. Particularly important are cases where $X$ is a topological vector space. Recall that if $X$ is a topological space and $Y$ is a uniform space (such as any metric space or any topological group, metrisable or not), there is the topology of compact convergence on the set $\mathcal{F}(X, Y)$ of functions $X \to Y$; it is set up so that a sequence (or more generally a filter or net) of functions converges if and only if it converges uniformly on each compact subset of $X$. Let $\mathcal{C}_c(X, Y)$ be the subspace of $\mathcal{F}(X, Y)$ consisting of continuous functions, equipped with the topology of compact convergence.
Then one form of the Arzelà–Ascoli theorem is the following:
Let $X$ be a topological space, $Y$ a Hausdorff uniform space and $H \subset \mathcal{C}_c(X, Y)$ an equicontinuous set of continuous functions such that $H(x) = \{ h(x) : h \in H \}$ is relatively compact in $Y$ for each $x \in X$. Then $H$ is relatively compact in $\mathcal{C}_c(X, Y)$.
This theorem immediately gives the more specialised statements above in cases where $X$ is compact and the uniform structure of $Y$ is given by a metric. There are a few other variants in terms of the topology of precompact convergence or other related topologies on $\mathcal{F}(X, Y)$. It is also possible to extend the statement to functions that are only continuous when restricted to the sets of a covering of $X$ by compact subsets. For details one can consult Bourbaki (1998), Chapter X, § 2, nr 5.
Non-continuous functions
Solutions of numerical schemes for parabolic equations are usually piecewise constant, and therefore not continuous, in time. As their jumps nevertheless tend to become small as the time step goes to $0$, it is possible to establish uniform-in-time convergence properties using a generalisation to non-continuous functions of the classical Arzelà–Ascoli theorem.
Denote by $S(X, Y)$ the space of functions from $X$ to $Y$ endowed with the uniform metric
$$d_S(v, w) = \sup_{t \in X} d_Y\big(v(t), w(t)\big).$$
Then we have the following:
Let $X$ be a compact metric space and $Y$ a complete metric space. Let $(v_n)_{n\in\mathbb{N}}$ be a sequence in $S(X, Y)$ such that there exists a function $\omega \colon X \times X \to [0, \infty]$ and a sequence $(\delta_n)_{n\in\mathbb{N}} \subset [0, \infty)$ satisfying
$$\lim_{d_X(x, x') \to 0} \omega(x, x') = 0, \qquad \delta_n \to 0,$$
$$d_Y\big(v_n(x), v_n(x')\big) \le \omega(x, x') + \delta_n \quad \text{for all } x, x' \in X \text{ and all } n \in \mathbb{N}.$$
Assume also that, for all $x \in X$, the set $\{ v_n(x) : n \in \mathbb{N} \}$ is relatively compact in $Y$. Then $(v_n)_{n\in\mathbb{N}}$ is relatively compact in $S(X, Y)$, and any limit of $(v_n)$ in this space is in $C(X, Y)$.
Necessity
Whereas most formulations of the Arzelà–Ascoli theorem assert sufficient conditions for a family of functions to be (relatively) compact in some topology, these conditions are typically also necessary. For instance, if a set $F$ is compact in $C(X)$, the Banach space of real-valued continuous functions on a compact Hausdorff space with respect to its uniform norm, then it is bounded in the uniform norm on $C(X)$ and in particular is pointwise bounded. Let $N(\varepsilon, U)$ be the set of all functions in $F$ whose oscillation over an open subset $U \subset X$ is less than $\varepsilon$:
$$N(\varepsilon, U) = \{ f : \operatorname{osc}_U f < \varepsilon \}.$$
For a fixed $x \in X$ and $\varepsilon$, the sets $N(\varepsilon, U)$ form an open covering of $F$ as $U$ varies over all open neighborhoods of $x$. Choosing a finite subcover then gives equicontinuity.
Further examples
To every function $g$ that is $p$-integrable on $[0, 1]$, with $1 < p \le \infty$, associate the function $G$ defined on $[0, 1]$ by
$$G(x) = \int_0^x g(t) \, dt.$$
Let $F$ be the set of functions $G$ corresponding to functions $g$ in the unit ball of the space $L^p([0, 1])$. If $q$ is the Hölder conjugate of $p$, defined by $\tfrac{1}{p} + \tfrac{1}{q} = 1$, then Hölder's inequality implies that all functions in $F$ satisfy a Hölder condition with $\alpha = \tfrac{1}{q}$ and constant $M = 1$.
It follows that $F$ is compact in $C([0, 1])$. This means that the correspondence $g \mapsto G$ defines a compact linear operator $T$ between the Banach spaces $L^p([0, 1])$ and $C([0, 1])$. Composing with the injection of $C([0, 1])$ into $L^p([0, 1])$, one sees that $T$ acts compactly from $L^p([0, 1])$ to itself. The case $p = 2$ can be seen as a simple instance of the fact that the injection from the Sobolev space $H^1_0(\Omega)$ into $L^2(\Omega)$, for $\Omega$ a bounded open set in $\mathbf{R}^d$, is compact.
When $T$ is a compact linear operator from a Banach space $X$ to a Banach space $Y$, its transpose $T^\ast$ is compact from the (continuous) dual $Y^\ast$ to $X^\ast$. This can be checked by the Arzelà–Ascoli theorem.
Indeed, the image $T(B)$ of the closed unit ball $B$ of $X$ is contained in a compact subset $K$ of $Y$. The unit ball $B^\ast$ of $Y^\ast$ defines, by restricting from $Y$ to $K$, a set $F$ of (linear) continuous functions on $K$ that is bounded and equicontinuous. By Arzelà–Ascoli, for every sequence $(y^\ast_n)$ in $B^\ast$, there is a subsequence that converges uniformly on $K$, and this implies that the image $(T^\ast y^\ast_{n_k})$ of that subsequence is Cauchy in $X^\ast$.
When $f$ is holomorphic in an open disk $D_1 = B(z_0, r)$, with modulus bounded by $M$, then (for example by Cauchy's formula) its derivative $f'$ has modulus bounded by $2M/r$ in the smaller disk $D_2 = B(z_0, r/2)$. If a family of holomorphic functions on $D_1$ is bounded by $M$ on $D_1$, it follows that the family $F$ of restrictions to $D_2$ is equicontinuous on $D_2$. Therefore, a sequence converging uniformly on $D_2$ can be extracted. This is a first step in the direction of Montel's theorem.
Let $C([0, T], L^1(\mathbf{R}^N))$ be endowed with the uniform metric
$$\sup_{t \in [0, T]} \| u(t, \cdot) - v(t, \cdot) \|_{L^1(\mathbf{R}^N)}.$$
Assume that $(u_n)$ is a sequence of solutions of a certain partial differential equation (PDE), where the PDE ensures the following a priori estimates: $x \mapsto u_n(t, x)$ is equicontinuous for all $t$, $x \mapsto u_n(t, x)$ is equitight for all $t$, and, for all $(t, t') \in [0, T] \times [0, T]$ and all $n$, $\| u_n(t, \cdot) - u_n(t', \cdot) \|_{L^1(\mathbf{R}^N)}$ is small enough when $|t - t'|$ is small enough. Then by the Fréchet–Kolmogorov theorem, we can conclude that $\{ x \mapsto u_n(t, x) : n \in \mathbb{N} \}$ is relatively compact in $L^1(\mathbf{R}^N)$. Hence, we can, by (a generalization of) the Arzelà–Ascoli theorem, conclude that $(u_n)$ is relatively compact in $C([0, T], L^1(\mathbf{R}^N))$.
See also
Helly's selection theorem
Fréchet–Kolmogorov theorem
References
Arzelà-Ascoli theorem at Encyclopaedia of Mathematics
Articles containing proofs
Compactness theorems
Theory of continuous functions
Theorems in real analysis
Theorems in functional analysis
Topology of function spaces | Arzelà–Ascoli theorem | [
"Mathematics"
] | 2,719 | [
"Compactness theorems",
"Theorems in mathematical analysis",
"Theory of continuous functions",
"Theorems in real analysis",
"Theorems in topology",
"Theorems in functional analysis",
"Topology",
"Articles containing proofs"
] |
596,646 | https://en.wikipedia.org/wiki/Recommender%20system | A recommender system (RecSys), or a recommendation system (sometimes replacing system with terms such as platform, engine, or algorithm), is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user. Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.
Typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read.
Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, or content recommenders for social media platforms and open web content recommenders. These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts, collaborators, and financial services.
A content discovery platform is an implemented software recommendation platform which uses recommender system tools. It utilizes user metadata in order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content to websites, mobile devices and set-top boxes. A large range of content discovery platforms currently exist for various forms of content ranging from news articles and academic journal articles to television. As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.
Overview
Recommender systems usually make use of either or both collaborative filtering and content-based filtering, as well as other systems such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in. Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.
The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems, Last.fm and Pandora Radio.
Last.fm creates a "station" of recommended songs by observing what bands and individual tracks the user has listened to on a regular basis and comparing those against the listening behavior of other users. Last.fm will play tracks that do not appear in the user's library, but are often played by other users with similar interests. As this approach leverages the behavior of users, it is an example of a collaborative filtering technique.
Pandora uses the properties of a song or artist (a subset of the 400 attributes provided by the Music Genome Project) to seed a "station" that plays music with similar properties. User feedback is used to refine the station's results, deemphasizing certain attributes when a user "dislikes" a particular song and emphasizing other attributes when a user "likes" a song. This is an example of a content-based approach.
Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems. Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).
Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data.
Recommender systems have been the focus of several granted patents, and there are more than 50 software libraries that support the development of recommender systems including LensKit, RecBole, ReChorus and RecPack.
History
Elaine Rich created the first recommender system in 1979, called Grundy. She looked for a way to recommend users books they might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like.
Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report by Jussi Karlgren at Columbia University,
and implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS,
and research groups led by Pattie Maes at MIT, Will Hill at Bellcore, and Paul Resnick, also at MIT, whose work with GroupLens was awarded the 2010 ACM Software Systems Award.
Montaner provided the first overview of recommender systems from an intelligent agent perspective. Adomavicius provided a new, alternate overview of recommender systems. Herlocker provides an additional overview of evaluation techniques for recommender systems, and Beel et al. discussed the problems of offline evaluations. Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.
Approaches
Collaborative filtering
One approach to the design of recommender systems that has wide use is collaborative filtering. Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items to those they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, it generates recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of a memory-based approach is the user-based algorithm, while that of a model-based approach is matrix factorization.
A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content and therefore is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems, for example the k-nearest neighbor (k-NN) approach and the Pearson correlation, as first implemented by Allen.
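A minimal memory-based sketch of the user-based approach with Pearson similarity and k nearest neighbours (the ratings matrix and function names are invented for illustration):

```python
import numpy as np

# Toy user-item ratings (rows: users, columns: items); 0 means "unrated".
R = np.array([
    [5.0, 4.0, 1.0, 0.0],   # the active user; item 3 is unrated
    [4.0, 5.0, 1.0, 2.0],
    [5.0, 5.0, 2.0, 1.0],
    [1.0, 2.0, 5.0, 5.0],
])

def pearson(u, v):
    """Pearson correlation over the items rated by both users."""
    m = (u > 0) & (v > 0)
    if m.sum() < 2:
        return 0.0
    a, b = u[m] - u[m].mean(), v[m] - v[m].mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0

def predict(user, item, k=2):
    """Weighted average of the ratings of the k most similar users
    who have rated the item (the neighborhood)."""
    sims = [(pearson(R[user], R[v]), v)
            for v in range(len(R)) if v != user and R[v, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    den = sum(abs(s) for s, _ in top)
    return sum(s * R[v, item] for s, v in top) / den if den else 0.0

print(round(predict(0, 3), 2))  # low (~1.5): similar users disliked item 3
```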
When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.
Examples of explicit data collection include the following:
Asking a user to rate an item on a sliding scale.
Asking a user to search.
Asking a user to rank a collection of items from favorite to least favorite.
Presenting two items to a user and asking him/her to choose the better one of them.
Asking a user to create a list of items that he/she likes (see Rocchio classification or other similar techniques).
Examples of implicit data collection include the following:
Observing the items that a user views in an online store.
Analyzing item/user viewing times.
Keeping a record of the items that a user purchases online.
Obtaining a list of items that a user has listened to or watched on his/her computer.
Analyzing the user's social network and discovering similar likes and dislikes.
Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.
Cold start: For a new user or item, there is not enough data to make accurate recommendations. Note: one commonly implemented solution to this problem is the multi-armed bandit algorithm.
Scalability: There are millions of users and products in many of the environments in which these systems make recommendations. Thus, a large amount of computation power is often necessary to calculate recommendations.
Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. Thus, even the most popular items have very few ratings.
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.
Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends. Collaborative filtering is still used as part of hybrid systems.
Content-based filtering
Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences. These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features.
In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.
To create a user profile, the system mostly focuses on two types of information:
A model of the user's preference.
A history of the user's interaction with the recommender system.
Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation). The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such as Bayesian Classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.
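A minimal sketch of this pipeline with tf–idf item profiles and a rating-weighted user profile (the item descriptions, the weighting scheme, and the use of scikit-learn are illustrative choices, not prescribed by the text):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical keyword descriptions of four items.
items = [
    "space opera science fiction adventure",
    "romantic comedy set in paris",
    "hard science fiction first contact",
    "slapstick comedy road trip",
]

X = TfidfVectorizer().fit_transform(items).toarray()  # item feature vectors

# User profile: rating-weighted average of the vectors of rated items.
ratings = {0: 5.0, 2: 4.0}  # the user liked the two science-fiction items
profile = sum(r * X[i] for i, r in ratings.items()) / sum(ratings.values())

scores = cosine_similarity(profile.reshape(1, -1), X).ravel()
print(np.argsort(-scores))  # best-matching items first: the sci-fi ones
```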
A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system.
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both feature/aspects of the item and users' evaluation/sentiment to the item. Features extracted from the user-generated reviews are improved metadata of items, because as they also reflect aspects of the item like metadata, extracted features are widely concerned by the users. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender system utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning.
Hybrid recommendations approaches
Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model. Several studies have empirically compared the performance of hybrid methods with the pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.
Netflix is a good example of the use of hybrid recommender systems. The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).
Some hybridization techniques include:
Weighted: Combining the score of different recommendation components numerically (a minimal sketch follows this list).
Switching: Choosing among recommendation components and applying the selected one.
Mixed: Recommendations from different recommenders are presented together to give the recommendation.
Cascade: Recommenders are given strict priority, with the lower priority ones breaking ties in the scoring of the higher ones.
Meta-level: One recommendation technique is applied and produces some sort of model, which is then the input used by the next technique.
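A sketch of the weighted strategy (the component scorers below are hypothetical stand-ins; in practice they would be trained collaborative and content-based models like those sketched earlier):

```python
# Hypothetical component scores for one user over two items.
def collaborative_score(user, item):
    return {("alice", "dune"): 0.9, ("alice", "amelie"): 0.2}.get((user, item), 0.5)

def content_score(user, item):
    return {("alice", "dune"): 0.7, ("alice", "amelie"): 0.4}.get((user, item), 0.5)

def hybrid_score(user, item, w_collab=0.6, w_content=0.4):
    """Weighted hybridization: a convex combination of component scores;
    the weights themselves could be tuned on validation data."""
    return (w_collab * collaborative_score(user, item)
            + w_content * content_score(user, item))

for item in ("dune", "amelie"):
    print(item, round(hybrid_score("alice", item), 2))  # dune 0.82, amelie 0.28
```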
Technologies
Session-based recommender systems
These recommender systems use the interactions of a user within a session to generate recommendations. Session-based recommender systems are used at YouTube and Amazon. These are particularly useful when history (such as past clicks, purchases) of a user is not available or not relevant in the current user session. Domains, where session-based recommendations are particularly relevant, include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks, transformers, and other deep-learning-based approaches.
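A minimal sketch of a recurrent session model in the spirit of these approaches (PyTorch; all layer sizes, names, and the toy session are illustrative):

```python
import torch
import torch.nn as nn

class SessionRecommender(nn.Module):
    """GRU over the items clicked so far in an anonymous session; the
    final hidden state scores every catalogue item as the next click."""
    def __init__(self, n_items, dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_items)

    def forward(self, session):           # session: (batch, seq_len) item ids
        h, _ = self.gru(self.emb(session))
        return self.out(h[:, -1, :])      # next-item scores

model = SessionRecommender(n_items=1000)
session = torch.tensor([[12, 7, 42]])     # one session, three recent clicks
print(model(session).argmax(dim=-1).item())  # untrained, so arbitrary
```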
Reinforcement learning for recommender systems
The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system acts upon in order to receive a reward, for instance, a click or engagement by the user. One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. This is in contrast to traditional learning techniques which rely on supervised learning approaches that are less flexible, reinforcement learning recommendation techniques allow to potentially train models that can be optimized directly on metrics of engagement, and user interest.
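A toy sketch of this reward-driven loop with an epsilon-greedy multi-armed bandit as the agent (the click probabilities are invented and hidden from the agent; real systems would also model context):

```python
import random

TRUE_CTR = {"a": 0.10, "b": 0.05, "c": 0.20}  # hypothetical click rates

counts = {i: 0 for i in TRUE_CTR}
values = {i: 0.0 for i in TRUE_CTR}           # running mean reward per item

def recommend(eps=0.1):
    if random.random() < eps:                 # explore
        return random.choice(list(TRUE_CTR))
    return max(values, key=values.get)        # exploit current estimates

for _ in range(10_000):
    item = recommend()
    reward = 1.0 if random.random() < TRUE_CTR[item] else 0.0  # user click
    counts[item] += 1
    values[item] += (reward - values[item]) / counts[item]

print(max(values, key=values.get))            # usually "c", the best item
```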
Multi-criteria recommender systems
Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value, the overall preference of user u for the item i, these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems. See this chapter for an extended introduction.
Risk-aware recommender systems
The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue is DRARS, a system which models the context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm.
Mobile recommender systems
Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research as mobile data is more complex than data that recommender systems often have to deal with. It is heterogeneous, noisy, requires spatial and temporal auto-correlation, and has validation and generality problems.
Three factors affect mobile recommender systems and the accuracy of their predictions: the context, the recommendation method, and privacy. Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).
One example of a mobile recommender system are the approaches taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city. This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits.
Generative recommenders
Generative recommenders (GR) represent an approach that transforms recommendation tasks into sequential transduction problems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units), high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling the model to learn from trillions of parameters and to handle user action histories orders of magnitude longer than before. By turning all of the system’s varied data into a single stream of tokens and using a custom self-attention approach instead of traditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previous Transformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model’s performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable “foundation models” for recommendations.
The Netflix Prize
One of the events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was awarded to BellKor's Pragmatic Chaos team under the competition's tiebreaking rules.
The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:
Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.
Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members from the team that finished second place founded Gravity R&D, a recommendation engine that's active in the RecSys community. 4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites.
A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database (IMDb). As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets. This, as well as concerns from the Federal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.
Evaluation
Performance measures
Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure the effectiveness of recommender systems, and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations.
The commonly used metrics are the mean squared error and root mean squared error, the latter having been used in the Netflix Prize. The information retrieval metrics such as precision and recall or DCG are useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered as important aspects in evaluation. However, many of the classic evaluation measures are highly criticized.
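A small offline sketch of two of these metrics, RMSE on held-out ratings and precision@k on a ranked list (all data are invented for illustration):

```python
import numpy as np

y_true = np.array([4.0, 3.0, 5.0, 2.0])   # held-out ratings
y_pred = np.array([3.5, 3.0, 4.0, 3.0])   # model predictions
rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))  # Netflix Prize metric

recommended = ["b", "a", "d", "c"]         # ranked recommendations, best first
relevant = {"a", "c"}                      # held-out items the user liked
k = 2
prec_at_k = len(set(recommended[:k]) & relevant) / k

print(round(rmse, 2), prec_at_k)           # 0.75 0.5
```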
Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm in offline data will be imprecise.
User studies are rather a small scale. A few dozens or hundreds of users are presented recommendations created by different recommendation approaches, and then the users judge which recommendations are best.
In A/B tests, recommendations are shown to typically thousands of users of a real product, and the users are randomly assigned to at least two different recommendation approaches. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate.
Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.
The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many articles as possible that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers. For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests. A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms. Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction. This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module. Researchers have concluded that the results of offline evaluations should be viewed critically.
Beyond accuracy
Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of factors that are also important.
Diversity – Users tend to be more satisfied with recommendations when there is a higher intra-list diversity, e.g. items from different artists.
Recommender persistence – In some situations, it is more effective to re-show recommendations, or let users re-rate items, than showing new items. There are several reasons for this. Users may ignore items when they are shown for the first time, for instance, because they had no time to inspect the recommendations carefully.
Privacy – Recommender systems usually have to deal with privacy concerns because users have to reveal sensitive information. Building user profiles using collaborative filtering can be problematic from a privacy point of view. Many European countries have a strong culture of data privacy, and every attempt to introduce any level of user profiling can result in a negative customer response. Much research has been conducted on ongoing privacy issues in this space. The Netflix Prize is particularly notable for the detailed personal information released in its dataset. Ramakrishnan et al. have conducted an extensive overview of the trade-offs between personalization and privacy and found that the combination of weak ties (an unexpected connection that provides serendipitous recommendations) and other data sources can be used to uncover identities of users in an anonymized dataset.
User demographics – Beel et al. found that user demographics may influence how satisfied users are with recommendations. In their paper they show that elderly users tend to be more interested in recommendations than younger users.
Robustness – When users can participate in the recommender system, the issue of fraud must be addressed.
Serendipity – Serendipity is a measure of "how surprising the recommendations are". For instance, a recommender system that recommends milk to a customer in a grocery store might be perfectly accurate, but it is not a good recommendation because it is an obvious item for the customer to buy. "[Serendipity] serves two purposes: First, the chance that users lose interest because the choice set is too uniform decreases. Second, these items are needed for algorithms to learn and improve themselves".
Trust – A recommender system is of little value for a user if the user does not trust the system. Trust can be built by a recommender system by explaining how it generates recommendations, and why it recommends an item.
Labelling – User satisfaction with recommendations may be influenced by the labeling of the recommendations. For instance, in the cited study click-through rate (CTR) for recommendations labeled as "Sponsored" were lower (CTR=5.93%) than CTR for identical recommendations labeled as "Organic" (CTR=8.86%). Recommendations with no label performed best (CTR=9.87%) in that study.
Reproducibility
Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some machine learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), and showed that on average less than 40% of the articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.
More recent work on benchmarking a set of the same methods came to qualitatively very different results, whereby neural methods were found to be among the best-performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions of several recent recommender system challenges (WSDM, RecSys Challenge).
Moreover, neural and deep learning methods are widely used in industry where they are extensively tested. The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently". Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions." As a consequence, much research about recommender systems can be considered as not reproducible. Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system. Said and Bellogín conducted a study of papers published in the field, as well as benchmarked some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used. Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation: "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
Artificial intelligence applications in recommendation
Artificial intelligence (AI) applications in recommendation systems are the advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. AI-based recommenders can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions. The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods. These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.
Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing. These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections will introduce specific AI models utilized by a recommendation system by illustrating their theories and functionalities.
KNN-based collaborative filters
Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns to form predictions. Specifically, it relies on external feedback such as star ratings, purchasing history and so on to make judgments. CF makes predictions about users' preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C."
There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is called K-nearest neighbors (KNN). The ideas are as follows (a minimal code sketch follows the list):
Data Representation: Create an n-dimensional space where each axis represents a user's trait (ratings, purchases, etc.). Represent each user as a point in that space.
Statistical Distance: 'Distance' measures how far apart users are in this space. See statistical distance for computational details.
Identifying Neighbors: Based on the computed distances, find the k nearest neighbors of the user for whom we want to make recommendations.
Forming Predictive Recommendations: The system analyzes the shared preferences of the k neighbors and makes recommendations based on that similarity.
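The four steps above can be sketched as follows on a dense rating matrix (purely illustrative; real systems use sparse data, better similarity measures, and more careful handling of missing ratings):

```python
import numpy as np

def knn_recommend(ratings, target_user, k=2, top_n=3):
    """User-based KNN collaborative filtering.

    ratings: (n_users, n_items) array where 0 marks an unrated item.
    Returns the indices of the top_n items predicted for target_user.
    """
    target = ratings[target_user]
    # Steps 1-2: users as points in item space, Euclidean distance between them
    dists = np.linalg.norm(ratings - target, axis=1)
    dists[target_user] = np.inf               # exclude the user themself
    neighbors = np.argsort(dists)[:k]         # step 3: k nearest neighbors
    # Step 4: predict unrated items from the neighbors' mean ratings
    predicted = ratings[neighbors].mean(axis=0)
    predicted[target > 0] = -np.inf           # don't re-recommend rated items
    return np.argsort(predicted)[::-1][:top_n]
```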
Neural networks
An artificial neural network (ANN) is a deep learning model structure which aims to mimic a human brain. They comprise a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons. Similar to a human brain, these neurons will change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. ANN is usually designed to be a black-box model. Unlike regular machine learning where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANN.
ANN is widely used in recommendation systems for its ability to utilize various data. Other than feedback data, ANN can incorporate non-feedback data which are too intricate for collaborative filtering to learn, and the unique structure allows ANN to identify extra signal from non-feedback data to boost user experience. Following are some examples (a small encoding sketch follows the list):
Time and Seasonality: the specific time, date, or season at which a user interacts with the platform
User Navigation Patterns: sequence of pages visited, time spent on different parts of a website, mouse movement, etc.
External Social Trends: information from external social media
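For example, a time-of-day signal is usually fed to a network as a cyclical encoding so that 23:00 and 00:00 end up close together in feature space; a small sketch (illustrative only):

```python
import numpy as np

def encode_hour(hour):
    """Map hour-of-day (0-23) to a point on the unit circle."""
    angle = 2 * np.pi * hour / 24
    return np.array([np.sin(angle), np.cos(angle)])

# 23:00 and 01:00 are neighbors on the circle; 12:00 lies opposite 00:00
print(encode_hour(23), encode_hour(1), encode_hour(12))
```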
Two-Tower Model
The Two-Tower model is a neural architecture commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks. It consists of two neural networks:
User Tower: Encodes user-specific features, such as interaction history or demographic data.
Item Tower: Encodes item-specific features, such as metadata or content embeddings.
The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as dot product or cosine similarity, is used to measure relevance between a user and an item.
This model is highly efficient for large datasets as embeddings can be pre-computed for items, allowing rapid retrieval during inference. It is often used in conjunction with ranking models for end-to-end recommendation pipelines.
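The retrieval step can be illustrated with the two towers stubbed out as random linear projections (a stand-in for trained networks; all shapes, weights, and data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for trained towers: each maps raw features into a shared
# 32-dimensional embedding space via a fixed linear projection.
W_user, W_item = rng.normal(size=(32, 100)), rng.normal(size=(32, 50))

def user_tower(features):
    v = W_user @ features
    return v / np.linalg.norm(v)     # L2-normalize the embedding

def item_tower(features):
    v = W_item @ features
    return v / np.linalg.norm(v)

# Item embeddings are precomputed once, which is what makes retrieval fast
item_embs = np.stack([item_tower(rng.normal(size=50)) for _ in range(1000)])
user_emb = user_tower(rng.normal(size=100))
scores = item_embs @ user_emb        # dot product = cosine similarity here
top10 = np.argsort(scores)[::-1][:10]
```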
Natural language processing
Natural language processing is a series of AI algorithms to make natural human language accessible and analyzable to a machine. It is a fairly modern technique inspired by the growing amount of textual information. For application in recommendation systems, a common case is the Amazon customer review: Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. The recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), latent Dirichlet allocation (LDA), etc. Their uses have consistently aimed to provide customers with more precise and tailored recommendations.
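A toy latent semantic analysis sketch (a tiny hand-made term-document matrix standing in for a real TF-IDF matrix of review text; everything here is an illustrative assumption):

```python
import numpy as np

# Rows = terms, columns = customer reviews (stand-in for a TF-IDF matrix)
A = np.array([[2., 0., 1.],
              [0., 3., 1.],
              [1., 1., 0.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = (np.diag(s[:k]) @ Vt[:k]).T      # reviews in a k-dim latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Latent-space similarity of review 0 to every review
print([round(cosine(docs_k[0], d), 2) for d in docs_k])
```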
Specific applications
Academic content discovery
An emerging market for content discovery platforms is academic content. Approximately 6000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research. Though traditional academic search tools such as Google Scholar or PubMed provide a readily accessible database of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors.
Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input. Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.
Decision-making
In contrast to an engagement-based ranking system employed by social media and other digital platforms, a bridging-based ranking optimizes for content that is unifying instead of polarizing. Examples include Polis and Remesh which have been used around the world to help find more consensus around specific political issues. Twitter has also used this approach for managing its community notes, which YouTube planned to pilot in 2024. Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.
Television
As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content. With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.
See also
Algorithmic radicalization
ACM Conference on Recommender Systems
Cold start
Collaborative filtering
Collective intelligence
Configurator
Enterprise bookmarking
Filter bubble
Information filtering system
Information explosion
Media monitoring service
Pattern recognition
Personalized marketing
Personalized search
Preference elicitation
Product finder
Rating site
Reputation management
Reputation system
References
Further reading
Books
Kim Falk (2019), Practical Recommender Systems, Manning Publications.
Seaver, Nick (2022). Computing Taste: Algorithms and the Makers of Music Recommendation. University of Chicago Press.
Scientific articles
Prem Melville, Raymond J. Mooney, and Ramadass Nagarajan. (2002) Content-Boosted Collaborative Filtering for Improved Recommendations. Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-2002), pp. 187–192, Edmonton, Canada, July 2002.
Information systems
Mass media monitoring
Social information processing | Recommender system | [
"Technology"
] | 7,672 | [
"Information systems",
"Recommender systems",
"Information technology"
] |
596,688 | https://en.wikipedia.org/wiki/Integrated%20Windows%20Authentication | Integrated Windows Authentication (IWA) is a term associated with Microsoft products that refers to the SPNEGO, Kerberos, and NTLMSSP authentication protocols with respect to SSPI functionality introduced with Microsoft Windows 2000 and included with later Windows NT-based operating systems. The term is used more commonly for the automatically authenticated connections between Microsoft Internet Information Services, Internet Explorer, and other Active Directory aware applications.
IWA is also known by several names like HTTP Negotiate authentication, NT Authentication, NTLM Authentication, Domain authentication, Windows Integrated Authentication, Windows NT Challenge/Response authentication, or simply Windows Authentication.
Overview
Integrated Windows Authentication uses the security features of Windows clients and servers. Unlike Basic Authentication or Digest Authentication, initially, it does not prompt users for a user name and password. The current Windows user information on the client computer is supplied by the web browser through a cryptographic exchange involving hashing with the Web server. If the authentication exchange initially fails to identify the user, the web browser will prompt the user for a Windows user account user name and password.
Integrated Windows Authentication itself is not a standard or an authentication protocol. When IWA is selected as an option of a program (e.g. within the Directory Security tab of the IIS site properties dialog) this implies that underlying security mechanisms should be used in a preferential order. If the Kerberos provider is functional and a Kerberos ticket can be obtained for the target, and any associated settings permit Kerberos authentication to occur (e.g. Intranet sites settings in Internet Explorer), the Kerberos 5 protocol will be attempted. Otherwise NTLMSSP authentication is attempted. Similarly, if Kerberos authentication is attempted, yet it fails, then NTLMSSP is attempted. IWA uses SPNEGO to allow initiators and acceptors to negotiate either Kerberos or NTLMSSP. Third party utilities have extended the Integrated Windows Authentication paradigm to UNIX, Linux and Mac systems.
Supported web browsers
Integrated Windows Authentication works with most modern web browsers, but does not work over some HTTP proxy servers. Therefore, it is best for use in intranets where all the clients are within a single domain. It may work with other web browsers if they have been configured to pass the user's logon credentials to the server that is requesting authentication. Where a proxy itself requires NTLM authentication, some applications like Java may not work because the protocol is not described in RFC-2069 for proxy authentication.
Internet Explorer 2 and later versions.
In Mozilla Firefox on Windows operating systems, the names of the domains/websites to which the authentication is to be passed can be entered (comma-delimited for multiple domains) in the "network.negotiate-auth.trusted-uris" (Kerberos) or "network.automatic-ntlm-auth.trusted-uris" (NTLM) preference on the about:config page. On Macintosh operating systems this works if you have a Kerberos ticket (use negotiate). Some websites may also require configuring the "network.negotiate-auth.delegation-uris".
Opera 9.01 and later versions can use NTLM/Negotiate, but will use Basic or Digest authentication if that is offered by the server.
Google Chrome works as of 8.0.
Safari works, once you have a Kerberos ticket.
Microsoft Edge 77 and later.
Supported mobile browsers
iOS natively supports Kerberos via Kerberos Single Sign-on extension. Configuring the extension enables Safari and Edge to use Kerberos.
Android has SPNEGO support in Chrome; Kerberos support can be added with a solution like Hypergate Authenticator.
See also
SSPI (Security Support Provider Interface)
NTLM (NT Lan Manager)
SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism)
GSSAPI (Generic Security Services Application Program Interface)
References
External links
Discussion of IWA in Microsoft IIS 6.0 Technical Reference
Microsoft Windows security technology
Internet Explorer
Computer access control | Integrated Windows Authentication | [
"Engineering"
] | 839 | [
"Cybersecurity engineering",
"Computer access control"
] |
596,706 | https://en.wikipedia.org/wiki/Gas%20chromatography | Gas chromatography (GC) is a common type of chromatography used in analytical chemistry for separating and analyzing compounds that can be vaporized without decomposition. Typical uses of GC include testing the purity of a particular substance, or separating the different components of a mixture. In preparative chromatography, GC can be used to prepare pure compounds from a mixture.
Gas chromatography is also sometimes known as vapor-phase chromatography (VPC), or gas–liquid partition chromatography (GLPC). These alternative names, as well as their respective abbreviations, are frequently used in scientific literature.
Gas chromatography is the process of separating compounds in a mixture by injecting a gaseous or liquid sample into a mobile phase, typically called the carrier gas, and passing the gas through a stationary phase. The mobile phase is usually an inert gas or an unreactive gas such as helium, argon, nitrogen or hydrogen. The stationary phase can be solid or liquid, although most GC systems today use a polymeric liquid stationary phase. The stationary phase is contained inside a separation column. Today, most GC columns are fused silica capillaries, typically with an inner diameter of 0.10–0.53 mm and a length of 15–60 m. The GC column is located inside an oven where the temperature of the gas can be controlled and the effluent coming off the column is monitored by a suitable detector.
Operating principle
A gas chromatograph is made of a narrow tube, known as the column, through which the vaporized sample passes, carried along by a continuous flow of inert or nonreactive gas. Components of the sample pass through the column at different rates, depending on their chemical and physical properties and the resulting interactions with the column lining or filling, called the stationary phase. The column is typically enclosed within a temperature controlled oven. As the chemicals exit the end of the column, they are detected and identified electronically.
History
Background
Chromatography dates to 1903 in the work of the Russian scientist, Mikhail Semenovich Tswett, who separated plant pigments via liquid column chromatography.
Invention
The invention of gas chromatography is generally attributed to Anthony T. James and Archer J.P. Martin. Their gas chromatograph used partition chromatography as the separating principle, rather than adsorption chromatography. The popularity of gas chromatography quickly rose after the development of the flame ionization detector.
Martin and another of his colleagues, Richard Synge, with whom he shared the 1952 Nobel Prize in Chemistry, had noted in an earlier paper that chromatography might also be used to separate gases. Synge pursued other work while Martin continued his work with James.
Gas adsorption chromatography precursors
German physical chemist Erika Cremer in 1947 together with Austrian graduate student Fritz Prior developed what could be considered the first gas chromatograph that consisted of a carrier gas, a column packed with silica gel, and a thermal conductivity detector. They exhibited the chromatograph at ACHEMA in Frankfurt, but nobody was interested in it.
N.C. Turner with the Burrell Corporation introduced a massive instrument in 1943 that used a charcoal column and mercury vapors. Stig Claesson of Uppsala University published in 1946 his work on a charcoal column that also used mercury.
Gerhard Hesse, while a professor at the University of Marburg/Lahn, decided to test the prevailing opinion among German chemists that molecules could not be separated in a moving gas stream. He set up a simple glass column filled with starch and successfully separated bromine and iodine using nitrogen as the carrier gas. He then built a system that flowed an inert gas through a glass condenser packed with silica gel and collected the eluted fractions.
Courtenay S.G. Phillips of Oxford University investigated separation in a charcoal column using a thermal conductivity detector. He consulted with Claesson and decided to use displacement as his separating principle. After learning about the results of James and Martin, he switched to partition chromatography.
Column technology
Early gas chromatography used packed columns, made of glass or metal tubing, 1–5 m long and 1–5 mm in diameter, and filled with particles. The resolution of packed columns was improved by the invention of the capillary column, in which the stationary phase is coated on the inner wall of the capillary.
Physical components
Autosamplers
The autosampler provides the means to introduce a sample automatically into the inlets. Manual insertion of the sample is possible but is no longer common. Automatic insertion provides better reproducibility and time-optimization. Different kinds of autosamplers exist. Autosamplers can be classified in relation to sample capacity (auto-injectors vs. autosamplers, where auto-injectors can handle a small number of samples), to robotic technologies (XYZ robot vs. rotating robot – the most common), or to analysis:
Liquid
Static head-space by syringe technology
Dynamic head-space by transfer-line technology
Solid phase microextraction (SPME)
Inlets
The column inlet (or injector) provides the means to introduce a sample into a continuous flow of carrier gas. The inlet is a piece of hardware attached to the column head.
Common inlet types are:
S/SL (split/splitless) injector – a sample is introduced into a heated small chamber via a syringe through a septum – the heat facilitates volatilization of the sample and sample matrix. The carrier gas then either sweeps the entirety (splitless mode) or a portion (split mode) of the sample into the column. In split mode, a part of the sample/carrier gas mixture in the injection chamber is exhausted through the split vent. Split injection is preferred when working with samples with high analyte concentrations (>0.1%) whereas splitless injection is best suited for trace analysis with low amounts of analytes (<0.01%). In splitless mode the split valve opens after a pre-set amount of time to purge heavier elements that would otherwise contaminate the system. This pre-set (splitless) time should be optimized: a shorter time (e.g., 0.2 min) ensures less tailing but a loss in response, while a longer time (e.g., 2 min) increases tailing but also signal.
On-column inlet – the sample is here introduced directly into the column in its entirety without heat, or at a temperature below the boiling point of the solvent. The low temperature condenses the sample into a narrow zone. The column and inlet can then be heated, releasing the sample into the gas phase. This ensures the lowest possible temperature for chromatography and keeps samples from decomposing above their boiling point.
PTV injector – Temperature-programmed sample introduction was first described by Vogt in 1979. Originally Vogt developed the technique as a method for the introduction of large sample volumes (up to 250 μL) in capillary GC. Vogt introduced the sample into the liner at a controlled injection rate. The temperature of the liner was chosen slightly below the boiling point of the solvent. The low-boiling solvent was continuously evaporated and vented through the split line. Based on this technique, Poy developed the programmed temperature vaporising injector (PTV). By introducing the sample at a low initial liner temperature many of the disadvantages of the classic hot injection techniques could be circumvented.
Gas source inlet or gas switching valve – gaseous samples in collection bottles are connected to what is most commonly a six-port switching valve. The carrier gas flow is not interrupted while a sample can be expanded into a previously evacuated sample loop. Upon switching, the contents of the sample loop are inserted into the carrier gas stream.
P/T (purge-and-trap) system – An inert gas is bubbled through an aqueous sample causing insoluble volatile chemicals to be purged from the matrix. The volatiles are 'trapped' on an absorbent column (known as a trap or concentrator) at ambient temperature. The trap is then heated and the volatiles are directed into the carrier gas stream. Samples requiring preconcentration or purification can be introduced via such a system, usually hooked up to the S/SL port.
The choice of carrier gas (mobile phase) is important. Hydrogen has a range of flow rates that are comparable to helium in efficiency. However, helium may be more efficient and provide the best separation if flow rates are optimized. Helium is non-flammable and works with a greater number of detectors and older instruments. Therefore, helium is the most common carrier gas used. However, the price of helium has gone up considerably over recent years, causing an increasing number of chromatographers to switch to hydrogen gas. Historical use, rather than rational consideration, may contribute to the continued preferential use of helium.
Detectors
Commonly used detectors are the flame ionization detector (FID) and the thermal conductivity detector (TCD). While TCDs are beneficial in that they are non-destructive, their relatively poor detection limits for most analytes inhibit widespread use. FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than TCD. FIDs cannot detect water or carbon dioxide, which makes them ideal for environmental organic analyte analysis. FID is two to three orders of magnitude more sensitive for analyte detection than TCD.
The TCD relies on the thermal conductivity of matter passing around a thin wire of tungsten-rhenium with a current traveling through it. In this setup, helium or nitrogen serves as the carrier gas because of their relatively high thermal conductivity, which keeps the filament cool and maintains uniform resistivity and electrical efficiency of the filament. When analyte molecules elute from the column, mixed with carrier gas, the thermal conductivity decreases while there is an increase in filament temperature and resistivity, resulting in fluctuations in voltage and ultimately causing a detector response. Detector sensitivity is proportional to filament current, while it is inversely proportional to the immediate environmental temperature of that detector as well as the flow rate of the carrier gas.
In a flame ionization detector (FID), electrodes are placed adjacent to a flame fueled by hydrogen/air near the exit of the column, and when carbon-containing compounds exit the column they are pyrolyzed by the flame. This detector works only for organic/hydrocarbon-containing compounds due to the ability of the carbons to form cations and electrons upon pyrolysis, which generates a current between the electrodes. The increase in current is translated and appears as a peak in a chromatogram. FIDs have low detection limits (a few picograms per second) but they are unable to generate ions from carbonyl-containing carbons. FID-compatible carrier gases include helium, hydrogen, nitrogen, and argon.
In FID, sometimes the stream is modified before entering the detector. A methanizer converts carbon monoxide and carbon dioxide into methane so that they can be detected. A different technology is the Polyarc, by Activated Research Inc, which converts all compounds to methane.
Alkali flame detector (AFD) or alkali flame ionization detector (AFID) has high sensitivity to nitrogen and phosphorus, similar to NPD. However, the alkaline metal ions are supplied with the hydrogen gas, rather than a bead above the flame. For this reason AFD does not suffer the "fatigue" of the NPD, but provides a constant sensitivity over long period of time. In addition, when alkali ions are not added to the flame, AFD operates like a standard FID. A catalytic combustion detector (CCD) measures combustible hydrocarbons and hydrogen. Discharge ionization detector (DID) uses a high-voltage electric discharge to produce ions.
Flame photometric detector (FPD) uses a photomultiplier tube to detect spectral lines of the compounds as they are burned in a flame. Compounds eluting off the column are carried into a hydrogen fueled flame which excites specific elements in the molecules, and the excited elements (P, S, halogens, some metals) emit light of specific characteristic wavelengths. The emitted light is filtered and detected by a photomultiplier tube. In particular, phosphorus emission is around 510–536 nm and sulfur emission is at 394 nm. With an atomic emission detector (AED), a sample eluting from a column enters a chamber which is energized by microwaves that induce a plasma. The plasma causes the analyte sample to decompose and certain elements generate an atomic emission spectrum. The atomic emission spectrum is diffracted by a diffraction grating and detected by a series of photomultiplier tubes or photodiodes.
Electron capture detector (ECD) uses a radioactive beta particle (electron) source to measure the degree of electron capture. ECDs are used for the detection of molecules containing electronegative/withdrawing elements and functional groups like halogens, carbonyls, nitriles, nitro groups, and organometallics. In this type of detector either nitrogen or 5% methane in argon is used as the mobile phase carrier gas. The carrier gas passes between two electrodes placed at the end of the column, and adjacent to the cathode (negative electrode) resides a radioactive foil such as 63Ni. The radioactive foil emits a beta particle (electron) which collides with and ionizes the carrier gas to generate more ions, resulting in a current. When analyte molecules with electronegative/withdrawing elements or functional groups pass through the detector, electrons are captured, which results in a decrease in current, generating a detector response.
Nitrogen–phosphorus detector (NPD), a form of thermionic detector where nitrogen and phosphorus alter the work function on a specially coated bead and a resulting current is measured.
Dry electrolytic conductivity detector (DELCD) uses an air phase and high temperature (v. Coulsen) to measure chlorinated compounds.
Mass spectrometer (MS), also called GC-MS; highly effective and sensitive, even in a small quantity of sample. This detector can be used to identify the analytes in chromatograms by their mass spectrum. Some GC-MS are connected to an NMR spectrometer which acts as a backup detector. This combination is known as GC-MS-NMR. Some GC-MS-NMR are connected to an infrared spectrophotometer which acts as a backup detector. This combination is known as GC-MS-NMR-IR. It must, however, be stressed this is very rare as most analyses needed can be concluded via purely GC-MS.
Vacuum ultraviolet (VUV) represents the most recent development in gas chromatography detectors. Most chemical species absorb and have unique gas phase absorption cross sections in the approximately 120–240 nm VUV wavelength range monitored. Where absorption cross sections are known for analytes, the VUV detector is capable of absolute determination (without calibration) of the number of molecules present in the flow cell in the absence of chemical interferences.
Olfactometric detector, also called GC-O, uses a human assessor to analyse the odour activity of compounds. With an odour port or a sniffing port, the quality of the odour, the intensity of the odour and the duration of the odour activity of a compound can be assessed.
Other detectors include the Hall electrolytic conductivity detector (ElCD), helium ionization detector (HID), infrared detector (IRD), photo-ionization detector (PID), pulsed discharge ionization detector (PDD), and thermionic ionization detector (TID).
Methods
The method is the collection of conditions in which the GC operates for a given analysis. Method development is the process of determining what conditions are adequate and/or ideal for the analysis required.
Conditions which can be varied to accommodate a required analysis include inlet temperature, detector temperature, column temperature and temperature program, carrier gas and carrier gas flow rates, the column's stationary phase, diameter and length, inlet type and flow rates, sample size and injection technique. Depending on the detector(s) (see below) installed on the GC, there may be a number of detector conditions that can also be varied. Some GCs also include valves which can change the route of sample and carrier flow. The timing of the opening and closing of these valves can be important to method development.
Carrier gas selection and flow rates
Typical carrier gases include helium, nitrogen, argon, and hydrogen. Which gas to use is usually determined by the detector being used, for example, a DID requires helium as the carrier gas. When analyzing gas samples the carrier is also selected based on the sample's matrix, for example, when analyzing a mixture in argon, an argon carrier is preferred because the argon in the sample does not show up on the chromatogram. Safety and availability can also influence carrier selection.
The purity of the carrier gas is also frequently determined by the detector, though the level of sensitivity needed can also play a significant role. Typically, purities of 99.995% or higher are used. The most common purity grades required by modern instruments for the majority of sensitivities are 5.0 grades, or 99.999% pure meaning that there is a total of 10 ppm of impurities in the carrier gas that could affect the results. The highest purity grades in common use are 6.0 grades, but the need for detection at very low levels in some forensic and environmental applications has driven the need for carrier gases at 7.0 grade purity and these are now commercially available. Trade names for typical purities include "Zero Grade", "Ultra-High Purity (UHP) Grade", "4.5 Grade" and "5.0 Grade".
The carrier gas linear velocity affects the analysis in the same way that temperature does (see above). The higher the linear velocity the faster the analysis, but the lower the separation between analytes. Selecting the linear velocity is therefore the same compromise between the level of separation and length of analysis as selecting the column temperature. The linear velocity will be implemented by means of the carrier gas flow rate, with regards to the inner diameter of the column.
With GCs made before the 1990s, carrier flow rate was controlled indirectly by controlling the carrier inlet pressure, or "column head pressure". The actual flow rate was measured at the outlet of the column or the detector with an electronic flow meter, or a bubble flow meter, and could be an involved, time consuming, and frustrating process. It was not possible to vary the pressure setting during the run, and thus the flow was essentially constant during the analysis. The relation between flow rate and inlet pressure is calculated with Poiseuille's equation for compressible fluids.
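For reference, a commonly quoted form of the Hagen–Poiseuille relation for compressible laminar flow through an open tube (a standard textbook result, stated here as background rather than taken from this article) is

$$F_o = \frac{\pi r^4 \left(p_i^2 - p_o^2\right)}{16\,\eta\,L\,p_o},$$

where $F_o$ is the volumetric flow rate measured at the outlet pressure $p_o$, $p_i$ is the inlet pressure, $r$ and $L$ are the column's inner radius and length, and $\eta$ is the carrier gas viscosity.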
Many modern GCs, however, electronically measure the flow rate, and electronically control the carrier gas pressure to set the flow rate. Consequently, carrier pressures and flow rates can be adjusted during the run, creating pressure/flow programs similar to temperature programs.
Stationary compound selection
The polarity of the solute is crucial for the choice of stationary compound, which in an optimal case would have a similar polarity as the solute. Common stationary phases in open tubular columns are cyanopropylphenyl dimethyl polysiloxane, carbowax polyethyleneglycol, biscyanopropyl cyanopropylphenyl polysiloxane and diphenyl dimethyl polysiloxane. For packed columns more options are available.
Inlet types and flow rates
The choice of inlet type and injection technique depends on if the sample is in liquid, gas, adsorbed, or solid form, and on whether a solvent matrix is present that has to be vaporized. Dissolved samples can be introduced directly onto the column via a COC injector, if the conditions are well known; if a solvent matrix has to be vaporized and partially removed, a S/SL injector is used (most common injection technique); gaseous samples (e.g., air cylinders) are usually injected using a gas switching valve system; adsorbed samples (e.g., on adsorbent tubes) are introduced using either an external (on-line or off-line) desorption apparatus such as a purge-and-trap system, or are desorbed in the injector (SPME applications).
Sample size and injection technique
Sample injection
The real chromatographic analysis starts with the introduction of the sample onto the column. The development of capillary gas chromatography resulted in many practical problems with the injection technique. The technique of on-column injection, often used with packed columns, is usually not possible with capillary columns. In the injection system in the capillary gas chromatograph the amount injected should not overload the column and the width of the injected plug should be small compared to the spreading due to the chromatographic process. Failure to comply with this latter requirement will reduce the separation capability of the column. As a general rule, the volume injected, Vinj, and the volume of the detector cell, Vdet, should be about 1/10 of the volume occupied by the portion of sample containing the molecules of interest (analytes) when they exit the column.
Some general requirements which a good injection technique should fulfill are that it should be possible to obtain the column's optimum separation efficiency, it should allow accurate and reproducible injections of small amounts of representative samples, it should induce no change in sample composition, it should not exhibit discrimination based on differences in boiling point, polarity, concentration or thermal/catalytic stability, and it should be applicable for trace analysis as well as for undiluted samples.
However, there are a number of problems inherent in the use of syringes for injection. Even the best syringes claim an accuracy of only 3%, and in unskilled hands, errors are much larger. The needle may cut small pieces of rubber from the septum as it injects sample through it. These can block the needle and prevent the syringe filling the next time it is used. It may not be obvious that this has happened. A fraction of the sample may get trapped in the rubber, to be released during subsequent injections. This can give rise to ghost peaks in the chromatogram. There may be selective loss of the more volatile components of the sample by evaporation from the tip of the needle.
Column selection
The choice of column depends on the sample and the analytes being measured. The main chemical attribute regarded when choosing a column is the polarity of the mixture, but functional groups can play a large part in column selection. The polarity of the sample must closely match the polarity of the column stationary phase to increase resolution and separation while reducing run time. The separation and run time also depend on the film thickness (of the stationary phase), the column diameter and the column length.
Column temperature and temperature program
The column(s) in a GC are contained in an oven, the temperature of which is precisely controlled electronically. (When discussing the "temperature of the column," an analyst is technically referring to the temperature of the column oven. The distinction, however, is not important and will not subsequently be made in this article.)
The rate at which a sample passes through the column is directly proportional to the temperature of the column. The higher the column temperature, the faster the sample moves through the column. However, the faster a sample moves through the column, the less it interacts with the stationary phase, and the less the analytes are separated.
In general, the column temperature is selected to compromise between the length of the analysis and the level of separation.
A method which holds the column at the same temperature for the entire analysis is called "isothermal". Most methods, however, increase the column temperature during the analysis; the initial temperature, the rate of temperature increase (the temperature "ramp"), and the final temperature are together called the temperature program.
A temperature program allows analytes that elute early in the analysis to separate adequately, while shortening the time it takes for late-eluting analytes to pass through the column.
Data reduction and analysis
Qualitative analysis
Generally, chromatographic data is presented as a graph of detector response (y-axis) against retention time (x-axis), which is called a chromatogram. This provides a spectrum of peaks for a sample representing the analytes present in a sample eluting from the column at different times. Retention time can be used to identify analytes if the method conditions are constant. Also, the pattern of peaks will be constant for a sample under constant conditions and can identify complex mixtures of analytes. However, in most modern applications, the GC is connected to a mass spectrometer or similar detector that is capable of identifying the analytes represented by the peaks.
Quantitative analysis
The area under a peak is proportional to the amount of analyte present in the chromatogram. By calculating the area of the peak using the mathematical function of integration, the concentration of an analyte in the original sample can be determined. Concentration can be calculated using a calibration curve created by finding the response for a series of concentrations of analyte, or by determining the relative response factor of an analyte. The relative response factor is the expected ratio of an analyte to an internal standard (or external standard) and is calculated by finding the response of a known amount of analyte and a constant amount of internal standard (a chemical added to the sample at a constant concentration, with a distinct retention time to the analyte).
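A minimal sketch of external-standard quantitation as just described (the calibration concentrations and peak areas below are made-up illustrative numbers):

```python
import numpy as np

# Hypothetical calibration data: known concentrations (ug/mL) and the
# corresponding integrated peak areas from the chromatograms
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([1520.0, 3105.0, 7480.0, 15200.0, 30350.0])

slope, intercept = np.polyfit(conc, area, 1)   # linear calibration curve

def concentration(peak_area):
    """Back-calculate the concentration of an unknown from its peak area."""
    return (peak_area - intercept) / slope

print(round(concentration(11000.0), 2))        # unknown sample
```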
In most modern GC-MS systems, computer software is used to draw and integrate peaks, and match MS spectra to library spectra.
Applications
In general, substances that vaporize below 300 °C (and therefore are stable up to that temperature) can be measured quantitatively. The samples are also required to be salt-free; they should not contain ions. Very minute amounts of a substance can be measured, but it is often required that the sample must be measured in comparison to a sample containing the pure, suspected substance known as a reference standard.
Various temperature programs can be used to make the readings more meaningful; for example to differentiate between substances that behave similarly during the GC process.
Professionals working with GC analyze the content of a chemical product, for example in assuring the quality of products in the chemical industry; or measuring chemicals in soil, air or water, such as soil gases. GC is very accurate if used properly and can measure picomoles of a substance in a 1 ml liquid sample, or parts-per-billion concentrations in gaseous samples.
In practical courses at colleges, students sometimes get acquainted with the GC by studying the contents of lavender oil or measuring the ethylene that is secreted by Nicotiana benthamiana plants after artificially injuring their leaves. Such GC setups analyse hydrocarbons (C2–C40+). In a typical experiment, a packed column is used to separate the light gases, which are then detected with a TCD. The hydrocarbons are separated using a capillary column and detected with a FID. A complication with light gas analyses that include H2 is that He, which is the most common and most sensitive inert carrier (sensitivity is proportional to molecular mass), has an almost identical thermal conductivity to hydrogen (it is the difference in thermal conductivity between two separate filaments in a Wheatstone Bridge type arrangement that shows when a component has been eluted). For this reason, dual TCD instruments with a separate channel for hydrogen that uses nitrogen as a carrier are common. Argon is often used when analysing gas phase chemistry reactions such as F-T synthesis so that a single carrier gas can be used rather than two separate ones. The sensitivity is reduced, but this is a trade off for simplicity in the gas supply.
Gas chromatography is used extensively in forensic science. Disciplines as diverse as solid drug dose (pre-consumption form) identification and quantification, arson investigation, paint chip analysis, and toxicology cases, employ GC to identify and quantify various biological specimens and crime-scene evidence.
See also
Analytical chemistry
Chromatography
Gas chromatography–mass spectrometry
Gas chromatography-olfactometry
High-performance liquid chromatography
Inverse gas chromatography
Proton transfer reaction mass spectrometry
Secondary electrospray ionization
Selected ion flow tube mass spectrometry
Standard addition
Thin layer chromatography
Unresolved complex mixture
References
External links
Chromatographic Columns in the Chemistry LibreTexts Library
Laboratory techniques | Gas chromatography | [
"Chemistry"
] | 6,004 | [
"Chromatography",
"Gas chromatography",
"nan"
] |
596,745 | https://en.wikipedia.org/wiki/Lipophilicity | Lipophilicity (from Greek λίπος "fat" and φίλος "friendly") is the ability of a chemical compound to dissolve in fats, oils, lipids, and non-polar solvents such as hexane or toluene. Such compounds are called lipophilic (translated as "fat-loving" or "fat-liking"). Such non-polar solvents are themselves lipophilic, and the adage "like dissolves like" generally holds true. Thus lipophilic substances tend to dissolve in other lipophilic substances, whereas hydrophilic ("water-loving") substances tend to dissolve in water and other hydrophilic substances.
Lipophilicity, hydrophobicity, and non-polarity may describe the same tendency towards participation in the London dispersion force, as the terms are often used interchangeably. However, the terms "lipophilic" and "hydrophobic" are not synonymous, as can be seen with silicones and fluorocarbons, which are hydrophobic but not lipophilic.
Surfactants
Hydrocarbon-based surfactants are compounds that are amphiphilic (or amphipathic), having a hydrophilic, water interactive "end", referred to as their "head group", and a lipophilic "end", usually a long chain hydrocarbon fragment, referred to as their "tail". They congregate at low energy surfaces, including the air-water interface (lowering surface tension) and the surfaces of the water-immiscible droplets found in oil/water emulsions (lowering interfacial tension). At these surfaces they naturally orient themselves with their head groups in water and their tails either sticking up and largely out of water (as at the air-water interface) or dissolved in the water-immiscible phase that the water is in contact with (e.g. as the emulsified oil droplet). In both these configurations the head groups strongly interact with water while the tails avoid all contact with water. Surfactant molecules also aggregate in water as micelles with their head groups sticking out and their tails bunched together. Micelles draw oily substances into their hydrophobic cores, explaining the basic action of soaps and detergents used for personal cleanliness and for laundering clothes. Micelles are also biologically important for the transport of fatty substances in the small intestine surface in the first step that leads to the absorption of the components of fats (largely fatty acids and 2-monoglycerides).
Cell membranes are bilayer structures principally formed from phospholipids, molecules which have a highly water interactive, ionic phosphate head groups attached to two long alkyl tails.
By contrast, fluorosurfactants are not amphiphilic or detergents because fluorocarbons are not lipophilic.
Oxybenzone, a common cosmetic ingredient often used in sunscreens, penetrates the skin particularly well because it is not very lipophilic. Anywhere from 0.4% to 8.7% of oxybenzone can be absorbed after one topical sunscreen application, as measured in urine excretions.
See also
Ionic partition diagram
ITIES
Lipophilic bacteria
Lipophobicity
Microemulsion
References
Chemical properties | Lipophilicity | [
"Chemistry"
] | 702 | [
"nan"
] |
596,816 | https://en.wikipedia.org/wiki/Particle-in-cell | In plasma physics, the particle-in-cell (PIC) method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles (or fluid elements) in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed simultaneously on Eulerian (stationary) mesh points.
PIC methods were already in use as early as 1955, even before the first Fortran compilers were available. The method gained popularity for plasma simulation in the late 1950s and early 1960s by Buneman, Dawson, Hockney, Birdsall, Morse and others. In plasma physics applications, the method amounts to following the trajectories of charged particles in self-consistent electromagnetic (or electrostatic) fields computed on a fixed mesh.
Technical aspects
For many types of problems, the classical PIC method invented by Buneman, Dawson, Hockney, Birdsall, Morse and others is relatively intuitive and straightforward to implement. This probably accounts for much of its success, particularly for plasma simulation, for which the method typically includes the following procedures:
Integration of the equations of motion.
Interpolation of charge and current source terms to the field mesh.
Computation of the fields on mesh points.
Interpolation of the fields from the mesh to the particle locations.
Models which include interactions of particles only through the average fields are called PM (particle-mesh). Those which include direct binary interactions are PP (particle-particle). Models with both types of interactions are called PP-PM or P3M.
Since the early days, it has been recognized that the PIC method is susceptible to error from so-called discrete particle noise.
This error is statistical in nature, and today it remains less-well understood than for traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes.
Modern geometric PIC algorithms are based on a very different theoretical framework. These algorithms use tools of discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators to guarantee gauge invariance and conservation of charge, energy-momentum, and, more importantly, the infinitely dimensional symplectic structure of the particle-field system.
These desired features are attributed to the fact that geometric PIC algorithms are built on the more fundamental field-theoretical framework and are directly linked to the perfect form, i.e., the variational principle of physics.
Basics of the PIC plasma simulation technique
Inside the plasma research community, systems of different species (electrons, ions, neutrals, molecules, dust particles, etc.) are investigated. The set of equations associated with PIC codes are therefore the Lorentz force as the equation of motion, solved in the so-called pusher or particle mover of the code, and Maxwell's equations determining the electric and magnetic fields, calculated in the (field) solver.
Super-particles
The real systems studied are often extremely large in terms of the number of particles they contain. In order to make simulations efficient or at all possible, so-called super-particles are used. A super-particle (or macroparticle) is a computational particle that represents many real particles; it may be millions of electrons or ions in the case of a plasma simulation, or, for instance, a vortex element in a fluid simulation. It is allowed to rescale the number of particles, because the acceleration from the Lorentz force depends only on the charge-to-mass ratio, so a super-particle will follow the same trajectory as a real particle would.
The number of real particles corresponding to a super-particle must be chosen such that sufficient statistics can be collected on the particle motion. If there is a significant difference between the density of different species in the system (between ions and neutrals, for instance), separate real to super-particle ratios can be used for them.
The particle mover
Even with super-particles, the number of simulated particles is usually very large (> 10^5), and often the particle mover is the most time consuming part of PIC, since it has to be done for each particle separately. Thus, the pusher is required to be of high accuracy and speed and much effort is spent on optimizing the different schemes.
The schemes used for the particle mover can be split into two categories, implicit and explicit solvers. While implicit solvers (e.g. implicit Euler scheme) calculate the particle velocity from the already updated fields, explicit solvers use only the old force from the previous time step, and are therefore simpler and faster, but require a smaller time step. In PIC simulation the leapfrog method is used, a second-order explicit method. Also the Boris algorithm is used, which separates the electric acceleration from the rotation around the magnetic field in the Newton–Lorentz equation.
For plasma applications, the leapfrog method takes the following form:
where the subscript refers to "old" quantities from the previous time step, to updated quantities from the next time step (i.e. ), and velocities are calculated in-between the usual time steps .
The equations of the Boris scheme that are substituted into the above equations are:
$$\mathbf{v}_{k+1/2} = \mathbf{u}' + q'\mathbf{E}_k,$$
$$\mathbf{u}' = \mathbf{u} + \left(\mathbf{u} + (\mathbf{u} \times \mathbf{h})\right) \times \mathbf{s},$$
$$\mathbf{u} = \mathbf{v}_{k-1/2} + q'\mathbf{E}_k,$$
with
$$\mathbf{h} = q'\mathbf{B}_k, \qquad \mathbf{s} = \frac{2\mathbf{h}}{1 + h^2},$$
and $q' = \Delta t \cdot q/(2m)$.
Because of its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. It was realized that the excellent long-term accuracy of the nonrelativistic Boris algorithm is due to the fact that it conserves phase-space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas. It has also been shown that one can improve on the relativistic Boris push to make it both volume-preserving and have a constant-velocity solution in crossed E and B fields.
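The velocity update above is compact enough to state directly in code. A minimal non-relativistic Boris push following the $\mathbf{u}$, $\mathbf{u}'$, $\mathbf{h}$, $\mathbf{s}$ notation of the scheme (Python with NumPy; SI units, and the fields are assumed already interpolated to the particle position):

```python
import numpy as np

def boris_push(v, E, B, q, m, dt):
    """Advance the velocity v (length-3 array) by one time step.

    Implements: half electric kick, magnetic rotation, half electric kick.
    """
    qp = dt * q / (2.0 * m)                          # q' = dt*q/(2m)
    u = v + qp * E                                   # u = v_{k-1/2} + q'E
    h = qp * B                                       # h = q'B
    s = 2.0 * h / (1.0 + np.dot(h, h))               # s = 2h/(1+h^2)
    u_prime = u + np.cross(u + np.cross(u, h), s)    # magnetic rotation
    return u_prime + qp * E                          # v_{k+1/2} = u' + q'E

# The leapfrog position update then reads x_{k+1} = x_k + dt * v_{k+1/2}.
v_new = boris_push(np.zeros(3), np.array([0.0, 0.0, 1e3]),
                   np.array([0.0, 0.0, 0.1]), -1.602e-19, 9.109e-31, 1e-12)
```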
The field solver
The most commonly used methods for solving Maxwell's equations (or more generally, partial differential equations (PDE)) belong to one of the following three categories:
Finite difference methods (FDM)
Finite element methods (FEM)
Spectral methods
With the FDM, the continuous domain is replaced with a discrete grid of points, on which the electric and magnetic fields are calculated. Derivatives are then approximated with differences between neighboring grid-point values and thus PDEs are turned into algebraic equations.
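As an illustration of the FDM approach, the sketch below solves the 1D electrostatic Poisson equation $d^2\phi/dx^2 = -\rho/\varepsilon_0$ with central differences, turning the PDE into a tridiagonal linear system (Python with NumPy; grounded boundaries, $\phi = 0$ at both ends, are an assumption of the sketch):

```python
import numpy as np

def solve_poisson_1d(rho, dx, eps0=8.854e-12):
    """Solve phi'' = -rho/eps0 on a uniform grid with phi = 0 at both ends.

    The second derivative at node i is approximated by
    (phi[i-1] - 2*phi[i] + phi[i+1]) / dx**2.
    """
    n = len(rho)
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1))                # tridiagonal Laplacian
    phi = np.linalg.solve(A, -rho * dx**2 / eps0)    # algebraic system A*phi = b
    E = -np.gradient(phi, dx)                        # E = -dphi/dx, also by differences
    return phi, E
```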
Using FEM, the continuous domain is divided into a discrete mesh of elements. The PDEs are treated as an eigenvalue problem and initially a trial solution is calculated using basis functions that are localized in each element. The final solution is then obtained by optimization until the required accuracy is reached.
Also spectral methods, such as the fast Fourier transform (FFT), transform the PDEs into an eigenvalue problem, but this time the basis functions are high order and defined globally over the whole domain. The domain itself is not discretized in this case, it remains continuous. Again, a trial solution is found by inserting the basis functions into the eigenvalue equation and then optimized to determine the best values of the initial trial parameters.
Particle and field weighting
The name "particle-in-cell" originates in the way that plasma macro-quantities (number density, current density, etc.) are assigned to simulation particles (i.e., the particle weighting). Particles can be situated anywhere on the continuous domain, but macro-quantities are calculated only on the mesh points, just as the fields are. To obtain the macro-quantities, one assumes that the particles have a given "shape" determined by the shape function
where is the coordinate of the particle and the observation point. Perhaps the easiest and most used choice for the shape function is the so-called cloud-in-cell (CIC) scheme, which is a first order (linear) weighting scheme. Whatever the scheme is, the shape function has to satisfy the following conditions:
space isotropy, charge conservation, and increasing accuracy (convergence) for higher-order terms.
The fields obtained from the field solver are determined only on the grid points and can't be used directly in the particle mover to calculate the force acting on particles, but have to be interpolated via the field weighting:
$$\mathbf{E}(\mathbf{x}) = \sum_i \mathbf{E}_i \, S(\mathbf{x}_i - \mathbf{x}),$$
where the subscript $i$ labels the grid point. To ensure that the forces acting on particles are self-consistently obtained, the way of calculating macro-quantities from particle positions on the grid points and interpolating fields from grid points to particle positions has to be consistent, too, since they both appear in Maxwell's equations. Above all, the field interpolation scheme should conserve momentum. This can be achieved by choosing the same weighting scheme for particles and fields and by ensuring the appropriate space symmetry (i.e. no self-force and fulfilling the action-reaction law) of the field solver at the same time.
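A minimal 1D cloud-in-cell scatter/gather pair illustrating this consistency requirement: the same linear weights are used both to deposit charge onto the grid and to interpolate the field back to the particles (Python with NumPy; periodic boundaries assumed):

```python
import numpy as np

def cic_deposit(x, q_macro, n_grid, dx):
    """Scatter each particle's charge to its two nearest grid points."""
    rho = np.zeros(n_grid)
    i = np.floor(x / dx).astype(int)                 # left grid node of each particle
    frac = x / dx - i                                # fractional distance to that node
    np.add.at(rho, i % n_grid, q_macro * (1.0 - frac) / dx)
    np.add.at(rho, (i + 1) % n_grid, q_macro * frac / dx)
    return rho

def cic_gather(x, field, dx):
    """Interpolate a grid field to particle positions with the same weights,
    which is what keeps the particle-field coupling momentum-conserving."""
    n = len(field)
    i = np.floor(x / dx).astype(int)
    frac = x / dx - i
    return field[i % n] * (1.0 - frac) + field[(i + 1) % n] * frac
```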
Collisions
As the field solver is required to be free of self-forces, the field generated by a particle inside a cell must decrease with decreasing distance from the particle, and hence inter-particle forces inside the cells are underestimated. This can be balanced with the aid of Coulomb collisions between charged particles. Simulating the interaction for every pair in a big system would be computationally too expensive, so several Monte Carlo methods have been developed instead. A widely used method is the binary collision model, in which particles are grouped according to their cell, then these particles are paired randomly, and finally the pairs are collided.
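A sketch of the grouping-and-pairing step of the binary collision model (Python with NumPy); the actual scattering of each pair, e.g. by Takizuka–Abe sampling, is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_for_collisions(cell_indices):
    """Randomly pair the particles of one cell for binary collisions.

    Returns an array of shape (n_pairs, 2) of particle indices; when the
    cell holds an odd number of particles, one is left unpaired.
    """
    shuffled = rng.permutation(cell_indices)
    n_pairs = len(shuffled) // 2
    return shuffled[:2 * n_pairs].reshape(n_pairs, 2)

pairs = pair_for_collisions(np.arange(9))   # 4 pairs, one particle left over
```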
In a real plasma, many other reactions may play a role, ranging from elastic collisions, such as collisions between charged and neutral particles, through inelastic collisions, such as electron-neutral ionization collisions, to chemical reactions, each of them requiring separate treatment. Most of the collision models handling charged-neutral collisions use either the direct Monte Carlo scheme, in which all particles carry information about their collision probability, or the null-collision scheme, which does not analyze all particles but uses the maximum collision probability for each charged species instead.
Accuracy and stability conditions
As in every simulation method, also in PIC, the time step and the grid size must be well chosen, so that the time and length scale phenomena of interest are properly resolved in the problem. In addition, time step and grid size affect the speed and accuracy of the code.
For an electrostatic plasma simulation using an explicit time integration scheme (e.g. leapfrog, which is most commonly used), two important conditions regarding the grid size and the time step should be fulfilled in order to ensure the stability of the solution:
$$\Delta x < 3.4\,\lambda_D, \qquad \Delta t \leq 2\,\omega_{pe}^{-1},$$
which can be derived considering the harmonic oscillations of a one-dimensional unmagnetized plasma. The latter condition is strictly required, but practical considerations related to energy conservation suggest using a much stricter constraint in which the factor 2 is replaced by a number one order of magnitude smaller. The use of $\Delta t = 0.1\,\omega_{pe}^{-1}$ is typical. Not surprisingly, the natural time scale in the plasma is given by the inverse plasma frequency $\omega_{pe}^{-1}$ and the length scale by the Debye length $\lambda_D$.
For an explicit electromagnetic plasma simulation, the time step must also satisfy the CFL condition:
$$\Delta t < \Delta x / c,$$
where $c$ is the speed of light.
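These constraints are straightforward to check for a given setup; a sketch in SI units, with illustrative (not prescriptive) input values:

```python
import numpy as np

def stability_report(n_e, T_e_eV, dx, dt):
    """Check an explicit PIC setup against the constraints quoted above."""
    eps0, e, m_e, c = 8.854e-12, 1.602e-19, 9.109e-31, 2.998e8
    w_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))              # electron plasma frequency
    lam_D = np.sqrt(eps0 * e * T_e_eV / (n_e * e**2))      # Debye length (T in eV)
    print(f"dx < 3.4*lambda_D : {dx < 3.4 * lam_D}")
    print(f"dt <= 0.1/w_pe    : {dt <= 0.1 / w_pe}")       # the 'typical' choice
    print(f"CFL dt < dx/c     : {dt < dx / c}")            # electromagnetic case

stability_report(n_e=1e18, T_e_eV=10.0, dx=1e-5, dt=1e-14)   # all three hold here
```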
Applications
Within plasma physics, PIC simulation has been used successfully to study laser-plasma interactions, electron acceleration and ion heating in the auroral ionosphere, magnetohydrodynamics, magnetic reconnection, as well as ion-temperature-gradient and other microinstabilities in tokamaks, furthermore vacuum discharges, and dusty plasmas.
Hybrid models may use the PIC method for the kinetic treatment of some species, while other species (that are Maxwellian) are simulated with a fluid model.
PIC simulations have also been applied outside of plasma physics to problems in solid and fluid mechanics.
Electromagnetic particle-in-cell computational applications
See also
Plasma modeling
Multiphase particle-in-cell method
References
Bibliography
External links
Beam, Plasma & Accelerator Simulation Toolkit (BLAST)
Particle-In-Cell and Kinetic Simulation Software Center (PICKSC), UCLA.
Open source 3D Particle-In-Cell code for spacecraft plasma interactions (user registration required).
Simple Particle-In-Cell code in MATLAB
Plasma Theory and Simulation Group (Berkeley) Contains links to freely available software.
Introduction to PIC codes (Univ. of Texas)
open-pic - 3D Hybrid Particle-In-Cell simulation of plasma dynamics
Numerical differential equations
Computational fluid dynamics
Mathematical modeling
Computational electromagnetics | Particle-in-cell | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,527 | [
"Computational electromagnetics",
"Mathematical modeling",
"Computational fluid dynamics",
"Applied mathematics",
"Computational physics",
"Fluid dynamics"
] |
596,833 | https://en.wikipedia.org/wiki/Tully%E2%80%93Fisher%20relation | In astronomy, the Tully–Fisher relation (TFR) is a widely verified empirical relationship between the mass or intrinsic luminosity of a spiral galaxy and its asymptotic rotation velocity or emission line width. Since the observed brightness of a galaxy is distance-dependent, the relationship can be used to estimate distances to galaxies from measurements of their rotational velocity.
History
The connection between rotational velocity measured spectroscopically and distance was first used in 1922 by Ernst Öpik to estimate the distance to the Andromeda Galaxy. In the 1970s, Balkowski et al. measured 13 galaxies but focused on using the data to distinguish galaxy shapes rather than to extract distances.
The relationship was first published in 1977 by astronomers R. Brent Tully and J. Richard Fisher. The luminosity is calculated by multiplying the galaxy's apparent brightness by $4\pi d^2$, where $d$ is its distance from Earth, and the spectral-line width is measured using long-slit spectroscopy.
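Schematically, the distance estimate inverts the flux–luminosity relation once the TFR supplies the luminosity from the measured line width. In the sketch below the power-law calibration constants are placeholders, not fitted values; real applications calibrate them against galaxies with independently known distances (Python, arbitrary but consistent units):

```python
import numpy as np

# Hypothetical calibration: L = 10**b * v**a.  The values of a and b here
# are illustrative only; the slope is typically near 4 in the baryonic form.
a, b = 4.0, 2.0

def tf_distance(v_rot, flux):
    """Estimate distance from rotation velocity and measured flux."""
    L = 10.0**b * v_rot**a                      # intrinsic luminosity from the line width
    return np.sqrt(L / (4.0 * np.pi * flux))    # invert flux = L / (4*pi*d**2)
```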
A series of collaborative catalogs of galaxy peculiar velocity values called Cosmicflows uses Tully–Fisher analysis; the Cosmicflows-4 catalog has reached 10,000 galaxies. Many values of the Hubble constant have been derived from Tully–Fisher analysis, starting with the first paper and continuing through 2024.
Subtypes
Several different forms of the TFR exist, depending on which precise measures of mass, luminosity or rotation velocity one takes it to relate. Tully and Fisher used optical luminosity, but subsequent work showed the relation to be tighter when defined using microwave to infrared (K band) radiation (a good proxy for stellar mass), and even tighter when luminosity is replaced by the galaxy's total stellar mass. The relation in terms of stellar mass is dubbed the "stellar mass Tully Fisher relation" (STFR), and its scatter only shows correlations with the galaxy's kinematic morphology, such that more dispersion-supported systems scatter below the relation. The tightest correlation is recovered when considering the total baryonic mass (the sum of its mass in stars and gas). This latter form of the relation is known as the baryonic Tully–Fisher relation (BTFR), and states that baryonic mass is proportional to velocity to the power of roughly 3.5–4.
The TFR can be used to estimate the distance to spiral galaxies by allowing the luminosity of a galaxy to be derived from its directly measurable line width. The distance can then be found by comparing the luminosity to the apparent brightness. Thus the TFR constitutes a rung of the cosmic distance ladder, where it is calibrated using more direct distance measurement techniques and used in turn to calibrate methods extending to larger distance.
In the dark matter paradigm, a galaxy's rotation velocity (and hence line width) is primarily determined by the mass of the dark matter halo in which it lives, making the TFR a manifestation of the connection between visible and dark matter mass. In Modified Newtonian dynamics (MOND), the BTFR (with power-law index exactly 4) is a direct consequence of the gravitational force law effective at low acceleration.
The analogues of the TFR for non-rotationally-supported galaxies, such as ellipticals, are known as the Faber–Jackson relation and the fundamental plane.
See also
Distance modulus
Standard candle
Cosmic distance ladder
Faber–Jackson relation
Fundamental plane
Dark matter
Standard model of cosmology
Modified Newtonian dynamics
Freeman law
References
External links
Scholarpedia article on the subject written by R. Brent Tully
Extragalactic astronomy
Physical cosmological concepts
Spiral galaxies | Tully–Fisher relation | [
"Physics",
"Astronomy"
] | 735 | [
"Physical cosmological concepts",
"Extragalactic astronomy",
"Concepts in astrophysics",
"Astronomical sub-disciplines"
] |
596,963 | https://en.wikipedia.org/wiki/Xanthan%20gum | Xanthan gum () is a polysaccharide with many industrial uses, including as a common food additive. It is an effective thickening agent and stabilizer that prevents ingredients from separating. It can be produced from simple sugars by fermentation and derives its name from the species of bacteria used, Xanthomonas campestris.
History
Xanthan gum was discovered by Allene Rosalind Jeanes and her research team at the United States Department of Agriculture, and brought into commercial production by CP Kelco under the trade name Kelzan in the early 1960s, remaining the only manufacturer in the US. It was approved for use in foods in 1968 and is accepted as a safe food additive in the US, Canada, European countries, and many other countries, with E number E415, and CAS number 11138-66-2.
Xanthan gum derives its name from the species of bacteria used during the fermentation process, Xanthomonas campestris.
Uses
Addition of 1% of xanthan gum can produce a significant increase in the viscosity of a liquid.
In foods, xanthan gum is a common ingredient in salad dressings and sauces. It helps to prevent oil separation by stabilizing the emulsion, although it is not an emulsifier. Xanthan gum also helps suspend solid particles, such as spices. Xanthan gum helps create the desired texture in many ice creams. Toothpaste often contains xanthan gum as a binder to keep the product uniform. Xanthan gum also helps thicken commercial egg substitutes made from egg whites, to replace the fat and emulsifiers found in yolks. It is also a preferred method of thickening liquids for those with swallowing disorders, since it does not change the color or flavor of foods or beverages at typical use levels. In gluten-free baking, xanthan gum is used to give the dough or batter the stickiness that would otherwise be achieved with gluten. In most foods, it is used at concentrations of 0.5% or less. Xanthan gum is used in a wide range of food products, such as sauces, dressings, meat and poultry products, bakery products, confectionery products, beverages, dairy products, and others.
In the oil industry, xanthan gum is used in large quantities to thicken drilling mud. These fluids carry the solids cut by the drilling bit to the surface. Xanthan gum provides improved "low end" rheology. When circulation stops, the solids remain suspended in the drilling fluid. The widespread use of horizontal drilling and the demand for good control of drilled solids has led to its expanded use. It has been added to concrete poured underwater, to increase its viscosity and prevent washout.
In cosmetics, xanthan gum is used to prepare water gels. It is also used in oil-in-water emulsions to enhance droplet coalescence. Xanthan gum is under preliminary research for its potential uses in tissue engineering to construct hydrogels and scaffolds supporting three-dimensional tissue formation. Furthermore, thiolated xanthan gum (see thiomers) has shown potential for drug delivery, since by the covalent attachment of thiol groups to this polysaccharide high mucoadhesive and permeation enhancing properties can be introduced.
Shear thinning
The viscosity of xanthan gum solutions decreases with higher shear rates. This is called shear thinning or pseudoplasticity. This means that a product subjected to shear, whether from mixing, shaking or chewing, will thin. This is similar to the behaviour of tomato ketchup. When the shear forces are removed, the food will thicken again. In salad dressing, the addition of xanthan gum makes it thick enough at rest in the bottle to keep the mixture fairly homogeneous, but the shear forces generated by shaking and pouring thins it, so it can be easily poured. When it exits the bottle, the shear forces are removed and it thickens again, so it clings to the salad. The rheology of xanthan aqua solutions become visco-elastic at higher concentrations of xanthan gum in water.
Concentrations used
The greater the concentration of xanthan gum in a liquid, the thicker the liquid will become. An emulsion can be formed with as little as 0.1% (by weight). Increasing the concentration of gum gives a thicker, more stable emulsion up to 1% xanthan gum. A teaspoon of xanthan gum weighs about 2.5 grams and brings one cup (250ml) of water to a 1% concentration.
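The dosing arithmetic is simple; a small helper (Python) that treats the percentage as grams of gum per 100 g of liquid and assumes the liquid has roughly the density of water, as the teaspoon example above does:

```python
def xanthan_grams(liquid_ml, target_pct=1.0, density_g_per_ml=1.0):
    """Grams of xanthan gum needed for a target weight concentration."""
    return liquid_ml * density_g_per_ml * target_pct / 100.0

print(xanthan_grams(250))   # 2.5 g for one cup of water at 1%, about a teaspoon
```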
To make a foam, 0.2–0.8% xanthan gum is typically used. Larger amounts result in larger bubbles and denser foam. Egg white powder (0.2–2.0%) with 0.1–0.4% xanthan gum yields bubbles similar to soap bubbles.
Safety
According to a 2017 safety review by a scientific panel of the European Food Safety Authority (EFSA), xanthan gum (European food additive number E 415) is extensively digested during intestinal fermentation, and causes no adverse effects, even at high intake amounts. The EFSA panel found no concern about genotoxicity from long-term consumption. The EFSA concluded that there is no safety concern for the general population when xanthan gum is consumed as a food additive.
Preparation
Xanthan gum is produced by the fermentation of glucose and sucrose. The medium is well-aerated and stirred, and the xanthan polymer is produced extracellularly into the medium. After one to four days, the polymer is precipitated from the medium by the addition of isopropyl alcohol, and the precipitate is dried and milled to give a powder that is readily soluble in water or brine.
It is composed of pentasaccharide repeat units, comprising glucose, mannose, and glucuronic acid in the molar ratio 2:2:1.
A strain of X. campestris that will grow on lactose has been developed – which allows it to be used to process whey, a waste product of cheese production. This can produce 30 g/L of xanthan gum for every 40 g/L of whey powder. Whey-derived xanthan gum is commonly used in many commercial products, such as shampoos and salad dressings.
Detail of the biosynthesis
Synthesis originates from glucose as substrate for synthesis of the sugar nucleotides precursors UDP-glucose, UDP-glucuronate, and GDP-mannose that are required for building the pentasaccharide repeat unit. This links the synthesis of xanthan to carbohydrate metabolism. The repeat units are built up at undecaprenylphosphate lipid carriers that are anchored in the cytoplasmic membrane.
Specific glycosyltransferases sequentially transfer the sugar moieties of the nucleotide sugar xanthan precursors to the lipid carriers. Acetyl and pyruvyl residues are added as non-carbohydrate decorations. Mature repeat units are polymerized and exported in a way resembling the Wzy-dependent polysaccharide synthesis mechanism of Enterobacteriaceae. Products of the gum gene cluster drive synthesis, polymerization, and export of the repeat unit.
References
Edible thickening agents
Food additives
Natural gums
Polysaccharides
E-number additives | Xanthan gum | [
"Chemistry"
] | 1,589 | [
"Carbohydrates",
"Polysaccharides"
] |
596,987 | https://en.wikipedia.org/wiki/Synchrocyclotron | A synchrocyclotron is a special type of cyclotron, patented by Edwin McMillan in 1952, in which the frequency of the driving RF electric field is varied to compensate for relativistic effects as the particles' velocity begins to approach the speed of light. This is in contrast to the classical cyclotron, where this frequency is constant.
There are two major differences between the synchrocyclotron and the classical cyclotron. In the synchrocyclotron, only one dee (hollow D-shaped sheet metal electrode) retains its classical shape, while the other pole is open (see patent sketch). Furthermore, the frequency of the oscillating electric field in a synchrocyclotron decreases continuously instead of being kept constant, so as to maintain cyclotron resonance at relativistic velocities. One terminal of the periodically varying electric potential is applied to the dee, and the other terminal is at ground potential. The protons or deuterons to be accelerated are made to move in circles of increasing radius. The acceleration of particles takes place as they enter or leave the dee. At the outer edge, the ion beam can be removed with the aid of an electrostatic deflector. The first synchrocyclotron produced 195 MeV deuterons and 390 MeV α-particles.
Differences from the classical cyclotron
In a classical cyclotron, the angular frequency of the electric field is given by
$$\omega = \frac{qB}{m},$$
where $\omega$ is the angular frequency of the electric field, $q$ is the charge on the particle, $B$ is the magnetic field, and $m$ is the mass of the particle. This makes the assumption that the particle is classical and does not experience relativistic phenomena such as length contraction. These effects start to become significant when the velocity $v$ of the particle becomes a non-negligible fraction of the speed of light $c$. To correct for this, the relativistic mass is used instead of the rest mass; thus, a factor of $\gamma$ multiplies the mass, such that
$$\omega = \frac{qB}{\gamma m},$$
where
$$\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}.$$
This is then the angular frequency of the field applied to the particles as they are accelerated around the synchrocyclotron.
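The required downward frequency sweep is easy to illustrate numerically; a sketch for a proton in an assumed 1.5 T field (Python, SI units):

```python
import math

C = 299_792_458.0                     # speed of light, m/s

def rf_angular_frequency(B, q, m0, v):
    """Driving frequency omega = q*B/(gamma*m0) for a particle at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return q * B / (gamma * m0)

q_p, m_p = 1.602e-19, 1.672e-27       # proton charge and rest mass
for beta in (0.01, 0.3, 0.6):         # the frequency drops as the proton speeds up
    print(f"beta = {beta}: omega = {rf_angular_frequency(1.5, q_p, m_p, beta * C):.3e} rad/s")
```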
Advantages
The chief advantage of the synchrocyclotron is that there is no need to restrict the number of revolutions executed by the ion before its exit. As such, the potential difference supplied between the dees can be much smaller.
The smaller potential difference needed across the gap has the following uses:
There is no need for a narrow gap between the dees as in the case of conventional cyclotron, because strong electric fields for producing large acceleration are not required. Thus only one dee can be used instead of two, the other end of the oscillating voltage supply being connected to earth.
The magnetic pole pieces can be brought closer, thus making it possible to increase greatly the magnetic flux density.
The frequency valve oscillator is able to function with much greater efficiency.
Disadvantages
The main drawback of this device is that, as a result of the variation in the frequency of the oscillating voltage supply, only a very small fraction of the ions leaving the source are captured in phase-stable orbits of maximum radius and energy with the result that the output beam current has a low duty cycle, and the average beam current is only a small fraction of the instantaneous beam current. Thus the machine produces high energy ions, though with comparatively low intensity.
The next development step of the cyclotron concept, the isochronous cyclotron, maintains a constant RF driving frequency and compensates for relativistic effects by increasing the magnetic field with radius. Isochronous cyclotrons are capable of producing much greater beam current than synchrocyclotrons. As a result, isochronous cyclotrons became more popular in the research field.
History
In 1945, Robert Lyster Thornton at Ernest Lawrence's Radiation Laboratory led the construction of the 730 MeV cyclotron. In 1946, he oversaw the conversion of the cyclotron to the new design made by McMillan, which would become the first synchrocyclotron, capable of producing 195 MeV deuterons and 390 MeV α-particles.
After the first synchrocyclotron was operational, the Office of Naval Research (ONR) funded two synchrocyclotron construction initiatives. The first funding was in 1946 for Carnegie Institute of Technology to build a 435-MeV synchrocyclotron led by Edward Creutz and to start its nuclear physics research program. The second initiative was in 1947 for University of Chicago to build a 450-MeV synchrocyclotron under the direction of Enrico Fermi.
In 1948, University of Rochester completed the construction of its 240-MeV synchrocyclotron, followed by a completion of 380-MeV synchrocyclotron at Columbia University in 1950.
In 1950 the 435-MeV synchrocyclotron at Carnegie Institute of Technology was operational, followed by 450-MeV synchrocyclotron of University of Chicago in 1951.
The construction of the 400-Mev synchrocyclotron at the University of Liverpool was completed in 1952 and by April 1954 it was operational. The Liverpool synchrocyclotron first demonstrated the extraction of a particle beam from such a machine, removing the constraint of having to fit experiments inside the synchrocyclotron.
At a UNESCO meeting in Paris in December 1951, there was a discussion on finding a solution to have a medium-energy accelerator for the soon-to-be-formed European Organization for Nuclear Research (CERN). The synchrocyclotron was proposed as a solution to bridge the gap before the 28-GeV Proton Synchrotron was completed. In 1952, Cornelis Bakker led the group to design and construct the synchrocyclotron named Synchro-Cyclotron (SC) at CERN. The design of the Synchro-Cyclotron started in 1953. The construction started in 1954 and it achieved 600 MeV proton acceleration in August 1957, with the experimental program starting in April 1958.
Current developments
Synchrocyclotrons are attractive for use in proton therapy because of the ability to make compact systems using high magnetic fields. Medical physics companies Ion Beam Applications and Mevion Medical Systems have developed superconducting synchrocyclotrons that can fit comfortably into hospitals.
References
Accelerator physics
Particle accelerators | Synchrocyclotron | [
"Physics"
] | 1,369 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
597,244 | https://en.wikipedia.org/wiki/Carbon%20star | A carbon star (C-type star) is typically an asymptotic giant branch star, a luminous red giant, whose atmosphere contains more carbon than oxygen. The two elements combine in the upper layers of the star, forming carbon monoxide, which consumes most of the oxygen in the atmosphere, leaving carbon atoms free to form other carbon compounds, giving the star a "sooty" atmosphere and a strikingly ruby red appearance. There are also some dwarf and supergiant carbon stars, with the more common giant stars sometimes being called classical carbon stars to distinguish them.
In most stars (such as the Sun), the atmosphere is richer in oxygen than carbon. Ordinary stars not exhibiting the characteristics of carbon stars but cool enough to form carbon monoxide are therefore called oxygen-rich stars.
Carbon stars have quite distinctive spectral characteristics, and they were first recognized by their spectra by Angelo Secchi in the 1860s, a pioneering time in astronomical spectroscopy.
Spectra
By definition carbon stars have dominant spectral Swan bands from the molecule C2. Many other carbon compounds may be present at high levels, such as CH, CN (cyanogen), C3 and SiC2. Carbon is formed in the core and circulated into its upper layers, dramatically changing the layers' composition. In addition to carbon, S-process elements such as barium, technetium, and zirconium are formed in the shell flashes and are "dredged up" to the surface.
When astronomers developed the spectral classification of the carbon stars, they had considerable difficulty when trying to correlate the spectra to the stars' effective temperatures. The trouble was with all the atmospheric carbon hiding the absorption lines normally used as temperature indicators for the stars.
Carbon stars also show a rich spectrum of molecular lines at millimeter wavelengths and submillimeter wavelengths. In the carbon star CW Leonis more than 50 different circumstellar molecules have been detected. This star is often used to search for new circumstellar molecules.
Secchi
Carbon stars were discovered already in the 1860s when spectral classification pioneer Angelo Secchi erected the Secchi class IV for the carbon stars, which in the late 1890s were reclassified as N class stars.
Harvard
Using this new Harvard classification, the N class was later enhanced by an R class for less deeply red stars sharing the characteristic carbon bands of the spectrum. Later correlation of this R to N scheme with conventional spectra showed that the R-N sequence ran approximately in parallel with circa G7 to M10 with regard to star temperature.
Morgan–Keenan C system
The later N classes correspond less well to their counterpart M types, because the Harvard classification was only partially based on temperature, but also on carbon abundance; so it soon became clear that this kind of carbon star classification was incomplete. Instead a new dual-number star class C was erected so as to deal with both temperature and carbon abundance. Such a spectrum, as measured for Y Canum Venaticorum, was determined to be C54, where 5 refers to temperature-dependent features and 4 to the strength of the C2 Swan bands in the spectrum. (C54 is very often alternatively written C5,4). This Morgan–Keenan C system classification replaced the older R-N classifications from 1960 to 1993.
The Revised Morgan–Keenan system
The two-dimensional Morgan–Keenan C classification failed to fulfill the creators' expectations:
it failed to correlate to temperature measurements based on infrared,
originally being two-dimensional it was soon enhanced by suffixes, CH, CN, j and other features making it impractical for en-masse analyses of foreign galaxies' carbon star populations,
and it gradually occurred that the old R and N stars actually were two distinct types of carbon stars, having real astrophysical significance.
A new revised Morgan–Keenan classification was published in 1993 by Philip Keenan, defining the classes: C-N, C-R and C-H. Later the classes C-J and C-Hd were added. This constitutes the established classification system used today.
Astrophysical mechanisms
Carbon stars can be explained by more than one astrophysical mechanism. Classical carbon stars are distinguished from non-classical ones on the grounds of mass, with classical carbon stars being the more massive.
In the classical carbon stars, those belonging to the modern spectral types C-R and C-N, the abundance of carbon is thought to be a product of helium fusion, specifically the triple-alpha process within a star, which giants reach near the end of their lives in the asymptotic giant branch (AGB). These fusion products have been brought to the stellar surface by episodes of convection (the so-called third dredge-up) after the carbon and other products were made. Normally this kind of AGB carbon star fuses hydrogen in a hydrogen burning shell, but in episodes separated by 104–105 years, the star transforms to burning helium in a shell, while the hydrogen fusion temporarily ceases. In this phase, the star's luminosity rises, and material from the interior of the star (notably carbon) moves up. Since the luminosity rises, the star expands so that the helium fusion ceases, and the hydrogen shell burning restarts. During these shell helium flashes, the mass loss from the star is significant, and after many shell helium flashes, an AGB star is transformed into a hot white dwarf and its atmosphere becomes material for a planetary nebula.
The non-classical kinds of carbon stars, belonging to the types C-J and C-H, are believed to be binary stars, where one star is observed to be a giant star (or occasionally a red dwarf) and the other a white dwarf. The star presently observed to be a giant star accreted carbon-rich material when it was still a main-sequence star from its companion (that is, the star that is now the white dwarf) when the latter was still a classical carbon star. That phase of stellar evolution is relatively brief, and most such stars ultimately end up as white dwarfs. These systems are now being observed a comparatively long time after the mass transfer event, so the extra carbon observed in the present red giant was not produced within that star. This scenario is also accepted as the origin of the barium stars, which are also characterized as having strong spectral features of carbon molecules and of barium (an s-process element). Sometimes the stars whose excess carbon came from this mass transfer are called "extrinsic" carbon stars to distinguish them from the "intrinsic" AGB stars which produce the carbon internally. Many of these extrinsic carbon stars are not luminous or cool enough to have made their own carbon, which was a puzzle until their binary nature was discovered.
The enigmatic hydrogen-deficient carbon stars (HdC), belonging to the spectral class C-Hd, seem to have some relation to R Coronae Borealis variables (RCB), but are not variable themselves and lack a certain infrared radiation typical of RCBs. Only five HdCs are known, and none is known to be binary, so the relation to the non-classical carbon stars is not known.
Other less convincing theories, such as CNO cycle unbalancing and core helium flash have also been proposed as mechanisms for carbon enrichment in the atmospheres of smaller carbon stars.
Other characteristics
Most classical carbon stars are variable stars of the long period variable types.
Observing carbon stars
Due to the insensitivity of night vision to red and the slow adaptation of the red-sensitive eye rods to the light of the stars, astronomers making magnitude estimates of red variable stars, especially carbon stars, have to know how to deal with the Purkinje effect in order not to underestimate the magnitude of the observed star.
Generation of interstellar dust
Owing to its low surface gravity, as much as half (or more) of the total mass of a carbon star may be lost by way of powerful stellar winds. The star's remnants, carbon-rich "dust" similar to graphite, therefore become part of the interstellar dust. This dust is believed to be a significant factor in providing the raw materials for the creation of subsequent generations of stars and their planetary systems. The material surrounding a carbon star may blanket it to the extent that the dust absorbs all visible light.
Silicon carbide outflow from carbon stars was accreted in the early solar nebula and survived in the matrices of relatively unaltered chondritic meteorites. This allows for direct isotopic analysis of the circumstellar environment of 1-3 M☉ carbon stars. Stellar outflow from carbon stars is the source of the majority of presolar silicon carbide found in meteorites.
Other classifications
Other types of carbon stars include:
CCS – Cool Carbon Star
CEMP – Carbon-Enhanced Metal-Poor
CEMP-no – Carbon-Enhanced Metal-Poor star with no enhancement of elements produced by the r-process or s-process nucleosynthesis
CEMP-r – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by r-process nucleosynthesis
CEMP-s – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by s-process nucleosynthesis
CEMP-r/s – Carbon-Enhanced Metal-Poor star with an enhancement of elements produced by both r-process and s-process nucleosynthesis
CGCS – Cool Galactic Carbon Star
Use as standard candles
Classical carbon stars are very luminous, especially in the near-infrared, so they can be detected in nearby galaxies. Because of the strong absorption features in their spectra, carbon stars are redder in the near-infrared than oxygen-rich stars are, and they can be identified by their photometric colors. While individual carbon stars do not all have the same luminosity, a large sample of carbon stars will have a luminosity probability density function (PDF) with nearly the same median value, in similar galaxies. So the median value of that function can be used as a standard candle for the determination of the distance to a galaxy. The shape of the PDF may vary depending upon the average metallicity of the AGB stars within a galaxy, so it is important to calibrate this distance indicator using several nearby galaxies for which the distances are known through other means.
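In outline, the method reduces to taking the median apparent magnitude of the carbon-star sample and applying the distance modulus; in the sketch below the calibrated median absolute magnitude is a placeholder, not a measured value (Python with NumPy):

```python
import numpy as np

def carbon_star_distance_mpc(apparent_mags, M_median=-8.0):
    """Distance from the median magnitude of a galaxy's carbon-star sample.

    M_median is the calibrated median absolute magnitude of the carbon-star
    luminosity function (-8.0 is illustrative).  Uses the distance modulus
    m - M = 5*log10(d_pc) - 5.
    """
    m_med = np.median(apparent_mags)
    d_pc = 10.0**((m_med - M_median + 5.0) / 5.0)
    return d_pc / 1e6                # parsecs -> megaparsecs
```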
See also
S-type star, similar, but not as extreme
Technetium star, another type of chemically peculiar star
Marc Aaronson, American astronomer and researcher of carbon stars
La Superba, one of the more well known carbon stars
LL Pegasi, which has so much soot in it that it has created a spiral trail of smoke extending light years into space
References
External links
List of 110 carbon stars. Includes HD number; secondary identification for most; position in right ascension and declination ; magnitude; spectrum; magnitude range (for variable stars); period (of variability cycle).
Star types | Carbon star | [
"Astronomy"
] | 2,225 | [
"Star types",
"Astronomical classification systems"
] |
597,280 | https://en.wikipedia.org/wiki/Robert%20J.%20Cenker | Robert Joseph "Bob" Cenker (born November 5, 1948) is an American aerospace and electrical engineer, aerospace systems consultant, and former astronaut. Cenker worked for 18 years at RCA Astro-Electronics, and its successor company GE Astro Space, on a variety of spacecraft projects. He spent most of his career working on commercial communications satellites, including the Satcom, Spacenet and GStar programs.
In January 1986, Cenker was a crew member on the twenty-fourth mission of NASA's Space Shuttle program, the seventh flight of Space Shuttle Columbia, designated as mission STS-61-C. Cenker served as a Payload Specialist, representing RCA Astro-Electronics. This mission was the final flight before the Challenger disaster, which caused the Space Shuttle program to be suspended until 1988, and impacted NASA's Payload Specialist program for even longer. As a result, Cenker's mission was called "The End of Innocence" for the Shuttle program. Following the completion of his Shuttle mission, Cenker returned to work in the commercial aerospace field. Since his flight, he has made numerous public appearances representing NASA and the Shuttle program, in the United States, as well as internationally.
Early life and education
Cenker was born on November 5, 1948, and raised in Menallen Township, Pennsylvania. He started his education at St. Fidelis College Seminary in Herman, Pennsylvania, leaving in 1962. In 1970 Cenker enrolled at Penn State University where he earned a Bachelor of Science degree in aerospace engineering. He continued his studies at Penn State and earned a Master of Science degree in 1973, also in aerospace engineering. Cenker earned a second Master of Science degree in electrical engineering from Rutgers University in 1977.
Pre-spaceflight career
Cenker worked for 18 years at RCA Astro-Electronics and its successor company GE Astro Space. Cenker worked on hardware design and systems design concerning satellite attitude control. He also worked on in-orbit operations, as well as spacecraft assembly, test, and pre-launch operations. He spent two years on the Navy navigation satellite program, but spent most of his career working on commercial communications satellites.
Cenker's positions included integration and test manager for the Satcom D and E spacecraft, where he was responsible for all launch site activities. He also served as spacecraft bus manager on the Spacenet/GStar programs. He was responsible for ensuring the spacecraft could interface with multiple rockets, including the Delta, Space Shuttle, and Ariane launch vehicles.
Spaceflight experience
As an incentive for a spacecraft owner to contract with NASA to use a Shuttle launch instead of an unmanned, commercial launch system, NASA permitted contracting companies to apply for a payload specialist seat on the same mission. When RCA contracted with NASA to launch Satcom Ku-1, RCA Astro-Electronics' manager of systems engineering for the Satcom-K program Bob Cenker, and his co-worker Gerard Magilton, were selected to train as payload specialists so that one of the pair could accompany Satcom Ku-1 into space. Cenker and Magilton trained with career astronauts as well as other payload and mission specialists, including those scheduled for the next scheduled flight, that of the Challenger mission, STS-51-L.
This flight of Columbia was originally scheduled to occur in August 1985, but the timeline slipped. In July 1985 the payload was finalized to include the RCA satellite, and Cenker was assigned to the mission, now designated as STS-61-C. Magilton was assigned as the back-up.
Prior to its successful launch, Columbia had several aborted launch attempts, including one on January 6 which was "one of the most hazardous in the Shuttle's operational history" to that point. As documented in crewmember Bill Nelson's book "Mission: An American Congressman's Voyage to Space", and as reported in Spaceflight Insider, "The launch attempt on Jan. 6, 1986 was halted at T-31 seconds. The weather was perfect for the scheduled launch at dawn, but a failure of a liquid oxygen drain valve prevented it to close properly. The valve was then closed manually, but not quickly enough to prevent a low temperature in one fuel line." However, Nelson says that what really happened was that "the valve did not close because it was not commanded to close", and that it was later determined that the Rogers Commission, investigating the series of mistakes that forced this second scrub, recognized that the problems were personnel-related, caused by fatigue from overwork: One potentially catastrophic human error occurred 4 minutes 55 seconds before the scheduled launch of mission 61-C on January 6, 1986. According to a Lockheed Space Operations Company incident report, 18,000 pounds of liquid oxygen were inadvertently drained from the Shuttle external fuel tank due to operator error. Fortunately, the liquid oxygen flow dropped the main engine inlet temperature below the acceptable limit causing a launch hold, but only 31 seconds before lift-off. As the report states, "Had the mission not been scrubbed, the ability of the orbiter to reach a defined orbit may have been significantly impacted."
There was another near-catastrophic launch abort three days later. Referring to the January 9 abort, pilot Charlie Bolden later stated that it "...would have been catastrophic, because the engine would have exploded had we launched." In all, it took a record eight attempts to get Columbia off the ground. Columbia finally launched and achieved orbit on January 12, 1986, with a full crew of seven. Along with Cenker, the crew included Robert L. "Hoot" Gibson, future NASA Administrator Charles F. Bolden, George D. Nelson, Steven A. Hawley, Franklin R. Chang-Diaz, and US Representative Bill Nelson. Cenker and his crewmates traveled over 2.5 million miles in 98 orbits aboard Columbia and logged over 146 hours in space.
During the six-day mission, January 12–18, Cenker performed a variety of physiological tests, operated a primary experiment – an infrared imaging camera – and assisted with the deployment of RCA Americom's Satcom Ku-1 satellite, the primary mission objective. Satcom Ku-1 was deployed nearly 10 hours into the mission, and Satcom later reached its designated geostationary orbital position at 85 degrees West longitude where it remained operational until April 1997, the last major commercial satellite deployed by the Space Shuttle program. In a 2014 video of the "Tell Me a Story" series titled "Close My Eyes & Drift Away", posted to the Kennedy Space Center Visitor Complex YouTube channel, Cenker tells a humorous story regarding a zero-g sleeping problem he faced on his mission.
The next Shuttle launch, ten days after the return of Columbia, resulted in the destruction of the Challenger with the loss of all aboard, including Cenker's counterpart from Hughes Aircraft, civilian crew member and Payload Specialist Greg Jarvis. Accordingly, commander Gibson later called the STS-61-C mission "The End of Innocence" for the Shuttle program.
Following the Challenger disaster, the Shuttle fleet was grounded until 1988. Even after Shuttle missions resumed, civilian payload specialists like Cenker were excluded until the payload specialist program was reinstated on December 2, 1990, when Samuel T. Durrance, an Applied Physics Laboratory astrophysicist and Ronald A. Parise, a Computer Sciences Corporation astronomer, flew aboard STS-35. By that time, RCA had been purchased by General Electric, and RCA Astro-Electronics became part of GE. Following two additional ownership transitions, the facility was closed in 1998. As a result, Cenker was the only RCA Astro-Electronics employee, and only employee in the history of the facility under all of its subsequent names, to ever fly in space.
NASA's Payload Specialist program has been criticized for giving limited Shuttle flight positions to civilian aerospace engineers such as Cenker and Greg Jarvis (killed aboard Challenger), politicians such as Bill Nelson, and other civilians such as Teacher in Space Christa McAuliffe (also killed aboard Challenger). Even the flight of former Mercury astronaut and US Senator John Glenn was questioned. The concern was that these people had replaced career astronauts in very limited flight opportunities, and some may have flown without fully understanding the level of danger involved in a Shuttle mission.
Post-spaceflight
Following the completion of his Shuttle mission, Cenker returned to work in the civilian aerospace field. Cenker's last two years with RCA Astro-Electronics and its successor GE Astro Space were spent as Manager of Payload Accommodations on an EOS spacecraft program. After leaving GE, Cenker served as a consultant for various aerospace companies regarding micro-gravity research, and spacecraft design, assembly and flight operations. Cenker supported systems engineering and systems architecture studies for various spacecraft projects, including smallsats, military communications satellites, and large, assembled-in-orbit platforms. His contributions included launch vehicle evaluation and systems engineering support for Motorola on Iridium, and launch readiness for the Globalstar constellation. Other efforts include systems engineering and operations support for Intelsat on Intelsat K and Intelsat VIII, for AT&T on Telstar 401 and 402, for Fairchild-Matra on SPAS III, for Martin Marietta on Astra 1B, BS-3N, ACTS, and for the Lockheed Martin Series 7000 communications satellites.
In 2017, Cenker's STS-61C crewmate former US Senator Bill Nelson spoke at a session of the US House of Representatives. In an address, titled "Mission to Mars and Space Shuttle Flight 30th Anniversary", he read into the Congressional Record the details of the mission of STS-61C, as well as the names and function of each crew member including Cenker.
In June 2017, Cenker traveled to Scotland where he and astronaut Doug Wheelock gave a series of talks to children in Fife schools as part of the Scottish Space School.
Cenker continues to make periodic public appearances representing NASA and the astronaut program, including at the Kennedy Space Center Visitor Complex in March 2017, January 2023, and April 2024.
Apollo 11 commemoration activities
Leading up to the 50th anniversary of the Apollo 11 mission, Cenker participated in several public events with other former NASA astronauts.
During an interview to discuss his scheduled appearance at The New Jersey Governor's School of Engineering & Technology at Rutgers University in July 2019, Cenker talked about his education at Rutgers, his work at RCA, his shuttle mission, his connection to the Challenger crew, his thoughts on the importance of the Apollo 11 mission, and of space travel in general. He concluded:
The Cradle of Aviation in Garden City, New York invited Cenker to participate in its "Moon Fest" planned for July 20, 2019, exactly fifty years after the Apollo 11 landing. It was announced that Cenker would join two fellow shuttle astronauts from New York, Bill Shepherd and Charlie Camarda, at the celebration.
Personal life and beliefs
Bob Cenker is married to Barbara Ann Cenker; they have two sons and a daughter.
In a July 2019 interview discussing the 50th anniversary of Apollo 11, Cenker commented that he believes that humans have an innate desire to explore, saying "It’s not learned... It’s in your genes". Discussing his religious beliefs, Cenker said "I'm a good, practicing Catholic. One of the guys I flew with was an agnostic. I think going into space reinforces what you believe when you went... [The agnostic astronaut] couldn’t grasp how one being could create all this. I came back thinking ‘God, you have to be there’".
Professional societies
Associate Fellow in the American Institute of Aeronautics and Astronautics (AIAA)
Life Member of the Penn State Alumni Association
Life Member of the Association of Space Explorers
Registered Professional Engineer in the state of New Jersey
Senior Member of the Institute of Electrical and Electronics Engineers (IEEE)
Sigma Gamma Tau
Tau Beta Pi
See also
1986 in spaceflight
List of human spaceflights
List of Space Shuttle missions
List of Space Shuttle crews
List of Shuttle payload specialists
Photo gallery
Notes
References
External links
STS-61C Press Kit
NASA: 61-C Mission Page
Space Shuttle Flight 24 (STS-61C) Post Flight Presentation on YouTube
1948 births
Living people
Aerospace engineers
People from Uniontown, Pennsylvania
Penn State College of Engineering alumni
Rutgers University alumni
NASA sponsored astronauts
Space Shuttle program astronauts
RCA people
General Electric employees
Lockheed Martin people
Martin Marietta people
Engineers from New Jersey
Senior members of the IEEE
Engineers from Pennsylvania | Robert J. Cenker | [
"Engineering"
] | 2,555 | [
"Aerospace engineers",
"Aerospace engineering"
] |
597,416 | https://en.wikipedia.org/wiki/Bar%20Lev%20Line | The Bar-Lev Line ( ; ) was a chain of fortifications built by Israel along the eastern bank of the Suez Canal shortly after the 1967 Arab–Israeli War, during which Egypt lost the entire Sinai Peninsula. It was considered impenetrable by the Israeli military until it was overrun in less than two hours during Egypt's Operation Badr, which sparked the 1973 Arab–Israeli War.
History
Six-Day War and War of Attrition
The Bar-Lev Line evolved from a group of rudimentary fortifications placed along the canal line. In response to Egyptian artillery bombardments during the War of Attrition, Israel developed the fortifications into an elaborate defense system along the Suez Canal, with the exception of the Great Bitter Lake (where a canal crossing was unlikely due to the width of the lake). The Bar-Lev Line was designed to defend against any major Egyptian assault across the canal, and was expected to function as a "graveyard for Egyptian troops".
Cost, construction, and materials
The line, costing around $300 million in 1973, was named after Israeli Chief of Staff Haim Bar-Lev. The line was built at the Suez Canal, a unique water barrier that Moshe Dayan described as "one of the best anti-tank ditches in the world." The line incorporated a massive, continuous sand wall lining the entire canal, supported by a concrete wall. The sand wall, which varied in height, was inclined at an angle of 45–65 degrees. The sand wall and its concrete support prevented any armored or amphibious units from landing on the east bank of the Suez Canal without prior engineering preparations. Israeli planners estimated it would take at least 24 hours, probably a full 48 hours, for the Egyptians to breach the sand wall and establish a bridge across the canal.
Immediately behind this sand wall was the front line of Israeli fortifications. After the War of Attrition, there were 22 forts, which incorporated 35 strongpoints. The forts were designed to be manned by a platoon. The strongpoints, which were built several stories into the sand, were spaced at intervals along the line, more closely at likely crossing points. The strongpoints incorporated trenches, minefields, barbed wire and a sand embankment. Major strongpoints had up to 26 bunkers with medium and heavy machine guns, 24 troop shelters, six mortar positions, four bunkers housing anti-aircraft weapons, and three firing positions for tanks. The strongpoints were surrounded by nearly fifteen circles of barbed wire and minefields. The bunkers and shelters provided protection against anything less than a 500 kg bomb, and offered luxuries to the defenders such as air conditioning. Behind the canal, there were prepared firing positions designed to be occupied by tanks assigned to the support of the strongpoints. Some of the names of the strongpoints were Tasa, Maftzach, Milano, Mezach, Chizayon, Mifreket, Orcal, Budapest (the largest), Nisan, Lituf, and Chashiva. In addition, there were eleven strongholds located behind the canal, built along sandy hills. Each stronghold was designed to hold a company of troops.
To take advantage of the water obstacle, the Israelis installed an underwater pipe system to pump flammable crude oil into the Suez Canal, thereby creating a sheet of flame. Some Israeli sources claim the system was unreliable and only a few of the taps were operational. Nevertheless, the Egyptians took this threat seriously and, on the eve of the war, during the late evening of 5 October, teams of Egyptian frogmen blocked the underwater openings with concrete.
Defensive plans
To support the Bar-Lev Line, Israel built a well-planned and elaborate system of roads. Three main roads ran north–south. The first was the Lexicon Road (Infantry Road), running along the canal, which allowed the Israelis to move between the fortifications and conduct patrols. The second was the Artillery Road, set back from the canal. Its name came from the twenty artillery and air defense positions located on it; it also linked armored concentration areas and logistical bases. The Lateral Road (Supply Road), farther from the canal, was meant to allow the concentration of Israeli operational reserves which, in case of an Egyptian offensive, would counterattack the main Egyptian assault. A number of other roads running east to west (Quantara Road, Hemingway Road, and Jerusalem Road) were designed to facilitate the movement of Israeli troops towards the canal.
The defense of the Sinai depended upon two plans, Dovecote (שׁוֹבָךְ יוֹנִים/Shovakh Yonim) and Rock (סֶלַע/Sela). In both plans, the Israeli General Staff expected the Bar-Lev Line to serve as a "stop line" or kav atzira—a defensive line that had to be held at all cost. As noted by an Israeli colonel shortly after the War of Attrition, "The line was created to provide military answers to two basic needs: first, to prevent the possibility of a major Egyptian assault on Sinai with the consequent creation of a bridgehead which could lead to all-out war; and, second, to reduce as much as possible the casualties among the defending troops."
Israeli planning was based on a 48-hour advance warning by intelligence services of an impending Egyptian attack. During these 48 hours, the Israeli Air Force (IAF) would assault enemy air defense systems, while Israeli forces deployed as planned. The Israelis expected an Egyptian attack would be defeated by armored brigades supported by the superior IAF.
Dovecote tasked a regular armored division to the defense of the Sinai. The division was supported by an additional tank battalion, twelve infantry companies and seventeen artillery batteries. This gave a total of over 300 tanks, 70 artillery guns and 18,000 troops. These forces, which represented the Sinai garrison, were tasked with the mission of defeating an Egyptian crossing at or near the canal line. It called for around 800 soldiers to man the forward fortifications on the canal line. Meanwhile, along Artillery Road, a brigade of 110 tanks was stationed with the objective of advancing and occupying the firing positions and tanks ramparts along the canal in case of an Egyptian attack. There were two additional armored brigades, one to reinforce the forward brigade, and the other to counterattack the main Egyptian attack.
Should the regular armored division prove incapable of repulsing an Egyptian attack, the Israeli army would activate Rock, mobilizing two reserve armored divisions with support elements; implementation of Rock signified a major war.
Israeli skepticism
Generals Ariel Sharon and Israel Tal objected to the line and argued that it would not succeed in fending off Egyptian attackers. Sharon said that it would pin down large military formations, which would be sitting ducks for deadly artillery attacks, but the line was completed in spring 1970.
Yom Kippur War
Egyptian breach
During the Yom Kippur War, the Egyptian army, led by Chief of staff Saad El Shazly, overran the Bar-Lev Line in less than two hours due to the element of surprise and overwhelming firepower. To deal with the massive earthen ramparts, the Egyptians used water cannons fashioned from hoses attached to dredging pumps in the canal. Other methods involving explosives, artillery, and bulldozers were too costly in time and required nearly ideal working conditions. In 1971, a young Egyptian officer, Baki Zaki Yousef, suggested a small, light, petrol-fueled pump as the answer to the crossing dilemma. The Egyptian military purchased 300 British-made pumps, five of which could blast 1,500 cubic meters of sand in three hours. In 1972, it acquired 150 more powerful German pumps driven by small gas turbines. A combination of two German or three British pumps would cut the breaching time down to two hours. These cannons pumped out powerful jets of water, creating 81 breaches in the line and removing three million cubic metres of packed dirt on the first day of the war.
The Egyptians assaulted the Bar-Lev Line with two field armies and forces from Port Said and the Red Sea Military District. The Second Field Army covered the area from north of Qantara to south of Deversoir, while the Third Field Army was responsible for the area from Bitter Lakes to south of Port Tawfiq.
The Egyptians began their simultaneous air and artillery attacks with 250 Egyptian Air Force planes attacking their assigned targets accurately in Sinai. Meanwhile, 2,000 artillery pieces opened massive fire against all the strong points along the Bar-Lev Line, a barrage that lasted 53 minutes and dropped 10,500 shells in the first minute alone, or 175 shells per second.
Within the first hour of the war, the Egyptian engineering corps tackled the sand barrier. Seventy engineer groups, each one responsible for opening a single passage, worked from wooden boats. With hoses attached to water pumps, they began attacking the sand obstacle. Many breaches occurred within two to three hours of the start of operations, according to schedule; however, engineers at several places experienced unexpected problems. The sand from the breached openings in the barrier was reduced to mud, which was one meter deep in some areas. This problem required that the engineers emplace floors of wood, rails, stone, sandbags, steel plates, or metal nets for the passage of heavy vehicles. The Third Army, in particular, had difficulty in its sector. There, the clay proved resistant to the high-pressure water and, consequently, the engineers experienced delays in their breaching. Engineers in the Second Army completed the erection of their bridges and ferries within nine hours, whereas the Third Army needed more than sixteen hours.
Israeli defense
Of the 441 Israeli soldiers in 16 forts on the Bar-Lev Line at the start of the war, 126 were killed and 161 captured. Only Budapest, to the north of the line near the Mediterranean city of Port Said, held out for the duration of the war, while all the others were overrun.
Criticism
In his book The Yom Kippur War: The Epic Encounter That Transformed the Middle East, historian Abraham Rabinovich posits that the Bar-Lev line was a blunder: too lightly manned to be an effective defensive line and too heavily manned to be an expendable tripwire. Moreover, it can be argued that the concept of the line ran counter to the strengths of Israeli battle tactics, which at their core relied on agile mobile forces moving rapidly through the battlefield rather than on fixed defenses.
See also
Maginot Line, a chain of French fortifications that was bypassed by the Germans during World War II
Closure of the Suez Canal (1967–1975), a phase of the Arab–Israeli conflict
References
Bibliography
Rabinovich, Abraham. The Yom Kippur War: The Epic Encounter That Transformed the Middle East.
Gawrych, George W. The 1973 Arab-Israeli War: The Albatross of Decisive Victory. Leavenworth Papers, ISSN 0195-3451.
Yom Kippur War
Fortifications in Israel
War of Attrition
Fortifications in Sinai
Historic defensive lines
Eponymous border lines | Bar Lev Line | [
"Engineering"
] | 2,265 | [
"Fortification lines",
"Historic defensive lines"
] |
597,564 | https://en.wikipedia.org/wiki/Just-noticeable%20difference | In the branch of experimental psychology focused on sense, sensation, and perception, which is called psychophysics, a just-noticeable difference or JND is the amount something must be changed in order for a difference to be noticeable, detectable at least half the time. This limen is also known as the difference limen, difference threshold, or least perceptible difference.
Quantification
For many sensory modalities, over a wide range of stimulus magnitudes sufficiently far from the upper and lower limits of perception, the 'JND' is a fixed proportion of the reference sensory level, and so the ratio of the JND/reference is roughly constant (that is, the JND is a constant proportion/percentage of the reference level). Measured in physical units, we have:

ΔI / I = k,

where I is the original intensity of the particular stimulation, ΔI is the addition to it required for the change to be perceived (the JND), and k is a constant. This rule was first discovered by Ernst Heinrich Weber (1795–1878), an anatomist and physiologist, in experiments on the thresholds of perception of lifted weights. A theoretical rationale (not universally accepted) was subsequently provided by Gustav Fechner, so the rule is therefore known either as the Weber Law or as the Weber–Fechner law; the constant k is called the Weber constant. It is true, at least to a good approximation, of many but not all sensory dimensions, for example the brightness of lights, and the intensity and the pitch of sounds. It is not true, however, for the wavelength of light. Stanley Smith Stevens argued that it would hold only for what he called prothetic sensory continua, where change of input takes the form of increase in intensity or something obviously analogous; it would not hold for metathetic continua, where change of input produces a qualitative rather than a quantitative change of the percept. Stevens developed his own law, called Stevens' Power Law, that raises the stimulus to a constant power while, like Weber, also multiplying it by a constant factor in order to achieve the perceived stimulus.
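As a quick illustration, a minimal Python sketch of the rule (the Weber constant k = 0.05 used here is an assumed value, not one measured for any particular modality):

def jnd_increment(intensity, k):
    # Weber's law: the just-noticeable change is a fixed fraction k of the reference.
    return k * intensity

# The absolute JND grows with intensity while the ratio delta_I / I stays constant:
for I in (10.0, 100.0, 1000.0):
    print(I, jnd_increment(I, 0.05))   # prints 0.5, 5.0 and 50.0 respectively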
The JND is a statistical, rather than an exact quantity: from trial to trial, the difference that a given person notices will vary somewhat, and it is therefore necessary to conduct many trials in order to determine the threshold. The JND usually reported is the difference that a person notices on 50% of trials. If a different proportion is used, this should be included in the description—for example one might report the value of the "75% JND".
Modern approaches to psychophysics, for example signal detection theory, imply that the observed JND, even in this statistical sense, is not an absolute quantity, but will depend on situational and motivational as well as perceptual factors. For example, when a researcher flashes a very dim light, a participant may report seeing it on some trials but not on others.
The JND formula has an objective interpretation (implied at the start of this entry) as the disparity between levels of the presented stimulus that is detected on 50% of occasions by a particular observed response, rather than what is subjectively "noticed" or as a difference in magnitudes of consciously experienced 'sensations'. This 50%-discriminated disparity can be used as a universal unit of measurement of the psychological distance of the level of a feature in an object or situation and an internal standard of comparison in memory, such as the 'template' for a category or the 'norm' of recognition. The JND-scaled distances from norm can be combined among observed and inferred psychophysical functions to generate diagnostics among hypothesised information-transforming (mental) processes mediating observed quantitative judgments.
Music production applications
In music production, a single change in a property of sound which is below the JND does not affect perception of the sound. For amplitude, the JND for humans is around 1 dB.
The JND for tone is dependent on the tone's frequency content. Below 500 Hz, the JND is about 3 Hz for sine waves; above 1000 Hz, the JND for sine waves is about 0.6% (about 10 cents).
The JND is typically tested by playing two tones in quick succession with the listener asked if there was a difference in their pitches. The JND becomes smaller if the two tones are played simultaneously as the listener is then able to discern beat frequencies. The total number of perceptible pitch steps in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000 Hz, is 120.
In speech perception
JND analysis occurs frequently in both music and speech, the two being related and overlapping in the analysis of speech prosody (i.e., speech melody). Although JND varies as a function of the frequency band being tested, it has been shown that the JND for the best performers at around 1 kHz is well below 1 Hz (i.e., less than a tenth of a percent). It is, however, important to be aware of the role played by critical bandwidth when performing this kind of analysis.
When analysing speech melody, rather than musical tones, accuracy decreases. This is not surprising given that speech does not stay at fixed intervals in the way that tones in music do. Johan 't Hart (1981) found that JND for speech averaged between 1 and 2 STs but concluded that "only differences of more than 3 semitones play a part in communicative situations".
Note that, given the logarithmic characteristics of Hz, for both music and speech perception, results should not be reported in Hz but either as percentages or in STs: 5 Hz between 20 and 25 Hz is very different from 5 Hz between 2000 and 2005 Hz, whereas an ~18.9% (3 semitone) increase is perceptually the same size difference regardless of whether one starts at 20 Hz or at 2000 Hz.
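A small Python sketch of this conversion (the equal-tempered relation 12·log2(f2/f1) gives the interval size in semitones):

import math

def semitones(f1, f2):
    # Interval size from f1 to f2 in equal-tempered semitones (12 per octave).
    return 12 * math.log2(f2 / f1)

print(semitones(20, 25))            # ~3.86 ST: 5 Hz is a large step at 20 Hz
print(semitones(2000, 2005))        # ~0.04 ST: the same 5 Hz is tiny at 2000 Hz
print(semitones(20, 20 * 1.189))    # ~3.0 ST: an ~18.9% increase, at any base frequency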
Marketing applications
Weber's law has important applications in marketing. Manufacturers and marketers endeavor to determine the relevant JND for their products for two very different reasons:
so that negative changes (e.g. reductions in product size or quality, or increase in product price) are not discernible to the public (i.e. remain below JND) and
so that product improvements (e.g. improved or updated packaging, larger size or lower price) are very apparent to consumers without being wastefully extravagant (i.e. they are at or just above the JND).
When it comes to product improvements, marketers very much want to meet or exceed the consumer's differential threshold; that is, they want consumers to readily perceive any improvements made in the original products. Marketers use the JND to determine the amount of improvement they should make in their products. Less than the JND is wasted effort because the improvement will not be perceived; more than the JND is again wasteful because it reduces the level of repeat sales. On the other hand, when it comes to price increases, less than the JND is desirable because consumers are unlikely to notice it.
Haptics applications
Weber's law is used in haptic devices and robotic applications. Exerting the proper amount of force on a human operator is a critical aspect of human–robot interaction and teleoperation scenarios, and it can greatly improve the user's performance in accomplishing a task.
See also
Absolute threshold
ABX test
Color difference
Limen
Minimal clinically important difference
Mutatis mutandis
Psychometric function
Sensor resolution
Visual perception
Weber–Fechner law
References
Citations
Sources
Perception
Psychophysics | Just-noticeable difference | [
"Physics"
] | 1,579 | [
"Psychophysics",
"Applied and interdisciplinary physics"
] |
597,584 | https://en.wikipedia.org/wiki/Tree%20traversal | In computer science, tree traversal (also known as tree search and walking the tree) is a form of graph traversal and refers to the process of visiting (e.g. retrieving, updating, or deleting) each node in a tree data structure, exactly once. Such traversals are classified by the order in which the nodes are visited. The following algorithms are described for a binary tree, but they may be generalized to other trees as well.
Types
Unlike linked lists, one-dimensional arrays and other linear data structures, which are canonically traversed in linear order, trees may be traversed in multiple ways. They may be traversed in depth-first or breadth-first order. There are three common ways to traverse them in depth-first order: in-order, pre-order and post-order. Beyond these basic traversals, various more complex or hybrid schemes are possible, such as depth-limited searches like iterative deepening depth-first search. The latter, as well as breadth-first search, can also be used to traverse infinite trees, see below.
Data structures for tree traversal
Traversing a tree involves iterating over all nodes in some manner. Because from a given node there is more than one possible next node (it is not a linear data structure), then, assuming sequential computation (not parallel), some nodes must be deferred—stored in some way for later visiting. This is often done via a stack (LIFO) or queue (FIFO). As a tree is a self-referential (recursively defined) data structure, traversal can be defined by recursion or, more subtly, corecursion, in a natural and clear fashion; in these cases the deferred nodes are stored implicitly in the call stack.
Depth-first search is easily implemented via a stack, including recursively (via the call stack), while breadth-first search is easily implemented via a queue, including corecursively.
Depth-first search
In depth-first search (DFS), the search tree is deepened as much as possible before going to the next sibling.
To traverse binary trees with depth-first search, perform the following operations at each node:
If the current node is empty then return.
Execute the following three operations in a certain order:
N: Visit the current node.
L: Recursively traverse the current node's left subtree.
R: Recursively traverse the current node's right subtree.
The trace of a traversal is called a sequentialisation of the tree. The traversal trace is a list of each visited node. No one sequentialisation according to pre-, in- or post-order describes the underlying tree uniquely. Given a tree with distinct elements, either pre-order or post-order paired with in-order is sufficient to describe the tree uniquely. However, pre-order with post-order leaves some ambiguity in the tree structure.
There are three positions in the traversal, relative to the node (in the figure: red, green, or blue), at which the visit of the node can take place. The choice of exactly one color determines exactly one visit of a node as described below. Visiting at all three colors results in a threefold visit of the same node, yielding the "all-order" sequentialisation.
Pre-order, NLR
Visit the current node (in the figure: position red).
Recursively traverse the current node's left subtree.
Recursively traverse the current node's right subtree.
The pre-order traversal is a topologically sorted one, because a parent node is processed before any of its child nodes.
Post-order, LRN
Recursively traverse the current node's left subtree.
Recursively traverse the current node's right subtree.
Visit the current node (in the figure: position blue).
Post-order traversal can be useful to get postfix expression of a binary expression tree.
In-order, LNR
Recursively traverse the current node's left subtree.
Visit the current node (in the figure: position green).
Recursively traverse the current node's right subtree.
In a binary search tree ordered such that in each node the key is greater than all keys in its left subtree and less than all keys in its right subtree, in-order traversal retrieves the keys in ascending sorted order.
Reverse pre-order, NRL
Visit the current node.
Recursively traverse the current node's right subtree.
Recursively traverse the current node's left subtree.
Reverse post-order, RLN
Recursively traverse the current node's right subtree.
Recursively traverse the current node's left subtree.
Visit the current node.
Reverse in-order, RNL
Recursively traverse the current node's right subtree.
Visit the current node.
Recursively traverse the current node's left subtree.
In a binary search tree ordered such that in each node the key is greater than all keys in its left subtree and less than all keys in its right subtree, reverse in-order traversal retrieves the keys in descending sorted order.
Arbitrary trees
To traverse arbitrary trees (not necessarily binary trees) with depth-first search, perform the following operations at each node:
If the current node is empty then return.
Visit the current node for pre-order traversal.
For each i from 1 to the current node's number of subtrees − 1, or from the latter to the former for reverse traversal, do:
Recursively traverse the current node's i-th subtree.
Visit the current node for in-order traversal.
Recursively traverse the current node's last subtree.
Visit the current node for post-order traversal.
Depending on the problem at hand, pre-order, post-order, and especially one of the number of subtrees − 1 in-order operations may be optional. Also, in practice more than one of pre-order, post-order, and in-order operations may be required. For example, when inserting into a ternary tree, a pre-order operation is performed by comparing items. A post-order operation may be needed afterwards to re-balance the tree.
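A generic sketch of the above procedure in Python (the children field and the callback names are illustrative assumptions):

def traverse(node, pre=None, mid=None, post=None):
    # Depth-first traversal of an arbitrary (n-ary) tree; pre, mid and post
    # are optional callbacks for the pre-order, in-order and post-order visits.
    if node is None:
        return
    if pre:
        pre(node)                        # pre-order visit
    children = node.children
    for i, child in enumerate(children):
        traverse(child, pre, mid, post)
        if mid and i < len(children) - 1:
            mid(node)                    # in-order visit between consecutive subtrees
    if post:
        post(node)                       # post-order visit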
Breadth-first search
In breadth-first search (BFS) or level-order search, the search tree is broadened as much as possible before going to the next depth.
Other types
There are also tree traversal algorithms that classify as neither depth-first search nor breadth-first search. One such algorithm is Monte Carlo tree search, which concentrates on analyzing the most promising moves, basing the expansion of the search tree on random sampling of the search space.
Applications
Pre-order traversal can be used to make a prefix expression (Polish notation) from expression trees: traverse the expression tree pre-orderly. For example, traversing the depicted arithmetic expression in pre-order yields "+ * A − B C + D E". In prefix notation, there is no need for any parentheses as long as each operator has a fixed number of operands. Pre-order traversal is also used to create a copy of the tree.
Post-order traversal can generate a postfix representation (Reverse Polish notation) of a binary tree. Traversing the depicted arithmetic expression in post-order yields "A B C − * D E + +"; the latter can easily be transformed into machine code to evaluate the expression by a stack machine. Post-order traversal is also used to delete the tree. Each node is freed after freeing its children.
In-order traversal is very commonly used on binary search trees because it returns values from the underlying set in order, according to the comparator that set up the binary search tree.
Implementations
Depth-first search implementation
Pre-order implementation
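A minimal sketch in Python, assuming nodes with left and right fields and a visit callback; the iterative version makes the implicit call stack explicit:

def preorder_recursive(node, visit):
    if node is None:
        return
    visit(node)                          # N: visit before either subtree
    preorder_recursive(node.left, visit)
    preorder_recursive(node.right, visit)

def preorder_iterative(root, visit):
    stack = [root] if root else []
    while stack:
        node = stack.pop()
        visit(node)
        if node.right:                   # push right first so left is processed first
            stack.append(node.right)
        if node.left:
            stack.append(node.left)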
Post-order implementation
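A corresponding Python sketch under the same assumptions; the iterative variant produces reverse post-order (NRL) with a stack and then reverses it:

def postorder_recursive(node, visit):
    if node is None:
        return
    postorder_recursive(node.left, visit)
    postorder_recursive(node.right, visit)
    visit(node)                          # N: visit after both subtrees

def postorder_iterative(root, visit):
    stack = [root] if root else []
    out = []
    while stack:
        node = stack.pop()
        out.append(node)                 # collects nodes in NRL order
        if node.left:
            stack.append(node.left)
        if node.right:
            stack.append(node.right)
    for node in reversed(out):           # reversed NRL is LRN, i.e. post-order
        visit(node)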
In-order implementation
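A Python sketch under the same assumptions; the iterative version descends to the leftmost node before each visit:

def inorder_recursive(node, visit):
    if node is None:
        return
    inorder_recursive(node.left, visit)
    visit(node)                          # N: visit between the two subtrees
    inorder_recursive(node.right, visit)

def inorder_iterative(root, visit):
    stack, node = [], root
    while stack or node:
        while node:                      # descend to the leftmost unvisited node
            stack.append(node)
            node = node.left
        node = stack.pop()
        visit(node)
        node = node.right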
Another variant of pre-order
If the tree is represented by an array (first index is 0), it is possible to calculate the index of the next element:
procedure bubbleUp(array, i, leaf)
    k ← 1
    i ← (i - 1)/2
    while (leaf + 1) % (k * 2) ≠ k
        i ← (i - 1)/2
        k ← 2 * k
    return i

procedure preorder(array)
    i ← 0
    while i ≠ array.size
        visit(array[i])
        if i = array.size - 1
            i ← array.size
        else if i < array.size/2
            i ← i * 2 + 1
        else
            leaf ← i - array.size/2
            parent ← bubbleUp(array, i, leaf)
            i ← parent * 2 + 2
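The same procedure translated into Python (a sketch; it assumes the array stores a perfect binary tree in level order):

def bubble_up(i, leaf):
    # Index of the ancestor whose right subtree follows this leaf in pre-order.
    k = 1
    i = (i - 1) // 2
    while (leaf + 1) % (k * 2) != k:
        i = (i - 1) // 2
        k = 2 * k
    return i

def preorder_array(array, visit):
    size = len(array)
    i = 0
    while i != size:
        visit(array[i])
        if i == size - 1:
            i = size                         # the last leaf is also last in pre-order
        elif i < size // 2:
            i = i * 2 + 1                    # internal node: descend to the left child
        else:
            leaf = i - size // 2
            i = bubble_up(i, leaf) * 2 + 2   # jump to the right child of the ancestor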
Advancing to the next or previous node
The node to be started with may have been found in the binary search tree bst by means of a standard search function, which is shown here in an implementation without parent pointers, i.e. it uses a stack for holding the ancestor pointers.
procedure search(bst, key)
    // returns a (node, stack)
    node ← bst.root
    stack ← empty stack
    while node ≠ null
        stack.push(node)
        if key = node.key
            return (node, stack)
        if key < node.key
            node ← node.left
        else
            node ← node.right
    return (null, empty stack)
The function inorderNext returns an in-order neighbor of node, either the in-order successor (for dir=1) or the in-order predecessor (for dir=0), and the updated stack, so that the binary search tree may be sequentially in-order-traversed and searched in the given direction dir further on.
procedure inorderNext(node, dir, stack)
    newnode ← node.child[dir]
    if newnode ≠ null
        do
            node ← newnode
            stack.push(node)
            newnode ← node.child[1-dir]
        until newnode = null
        return (node, stack)
    // node does not have a dir-child:
    do
        if stack.isEmpty()
            return (null, empty stack)
        oldnode ← node
        node ← stack.pop() // parent of oldnode
    until oldnode ≠ node.child[dir]
    // now oldnode = node.child[1-dir],
    // i.e. node = ancestor (and predecessor/successor) of original node
    return (node, stack)
Note that the function does not use keys, which means that the sequential structure is completely recorded by the binary search tree's edges. For traversals without change of direction, the (amortised) average complexity is O(1), because a full traversal takes 2n − 2 steps for a BST of size n: 1 step for edge up and 1 for edge down. The worst-case complexity is O(h) with h as the height of the tree.
All the above implementations require stack space proportional to the height of the tree which is a call stack for the recursive and a parent (ancestor) stack for the iterative ones. In a poorly balanced tree, this can be considerable. With the iterative implementations we can remove the stack requirement by maintaining parent pointers in each node, or by threading the tree (next section).
Morris in-order traversal using threading
A binary tree is threaded by making every left child pointer (that would otherwise be null) point to the in-order predecessor of the node (if it exists) and every right child pointer (that would otherwise be null) point to the in-order successor of the node (if it exists).
Advantages:
Avoids recursion, which uses a call stack and consumes memory and time.
The node keeps a record of its parent.
Disadvantages:
The tree is more complex.
We can make only one traversal at a time.
It is more prone to errors when both children are absent and both of a node's pointers point to its ancestors.
Morris traversal is an implementation of in-order traversal that uses threading:
Create links to the in-order successor.
Print the data using these links.
Revert the changes to restore original tree.
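A Python sketch of this traversal (nodes with left, right and value fields are assumed):

def morris_inorder(root):
    # In-order traversal in O(1) extra space by temporarily threading the tree.
    node = root
    while node is not None:
        if node.left is None:
            yield node.value
            node = node.right
        else:
            pred = node.left             # find the in-order predecessor of node
            while pred.right is not None and pred.right is not node:
                pred = pred.right
            if pred.right is None:
                pred.right = node        # create the thread to the successor
                node = node.left
            else:
                pred.right = None        # remove the thread, restoring the tree
                yield node.value
                node = node.right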
Breadth-first search
Listed below is pseudocode for a simple queue-based level-order traversal; it requires space proportional to the maximum number of nodes at a given depth, which can be as much as half the total number of nodes. A more space-efficient approach for this type of traversal can be implemented using an iterative deepening depth-first search.
procedure levelorder(node)
    queue ← empty queue
    queue.enqueue(node)
    while not queue.isEmpty()
        node ← queue.dequeue()
        visit(node)
        if node.left ≠ null
            queue.enqueue(node.left)
        if node.right ≠ null
            queue.enqueue(node.right)
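The same level-order traversal in idiomatic Python, using collections.deque as the queue (nodes with left and right fields are assumed):

from collections import deque

def levelorder(root, visit):
    queue = deque([root] if root else [])
    while queue:
        node = queue.popleft()           # FIFO: dequeue the oldest node
        visit(node)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)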
If the tree is represented by an array (first index is 0), it is sufficient to iterate through all elements:

procedure levelorder(array)
    for i from 0 to array.size - 1
        visit(array[i])
Infinite trees
While traversal is usually done for trees with a finite number of nodes (and hence finite depth and finite branching factor) it can also be done for infinite trees. This is of particular interest in functional programming (particularly with lazy evaluation), as infinite data structures can often be easily defined and worked with, though they are not (strictly) evaluated, as this would take infinite time. Some finite trees are too large to represent explicitly, such as the game tree for chess or go, and so it is useful to analyze them as if they were infinite.
A basic requirement for traversal is to visit every node eventually. For infinite trees, simple algorithms often fail this. For example, given a binary tree of infinite depth, a depth-first search will go down one side (by convention the left side) of the tree, never visiting the rest, and indeed an in-order or post-order traversal will never visit any nodes, as it has not reached a leaf (and in fact never will). By contrast, a breadth-first (level-order) traversal will traverse a binary tree of infinite depth without problem, and indeed will traverse any tree with bounded branching factor.
On the other hand, given a tree of depth 2, where the root has infinitely many children, and each of these children has two children, a depth-first search will visit all nodes, as once it exhausts the grandchildren (children of children of one node), it will move on to the next (assuming it is not post-order, in which case it never reaches the root). By contrast, a breadth-first search will never reach the grandchildren, as it seeks to exhaust the children first.
A more sophisticated analysis of running time can be given via infinite ordinal numbers; for example, the breadth-first search of the depth 2 tree above will take ω·2 steps: ω for the first level, and then another ω for the second level.
Thus, simple depth-first or breadth-first searches do not traverse every infinite tree, and are not efficient on very large trees. However, hybrid methods can traverse any (countably) infinite tree, essentially via a diagonal argument ("diagonal"—a combination of vertical and horizontal—corresponds to a combination of depth and breadth).
Concretely, given the infinitely branching tree of infinite depth, label the root (), the children of the root (1), (2), ..., the grandchildren (1, 1), (1, 2), ..., (2, 1), (2, 2), ..., and so on. The nodes are thus in a one-to-one correspondence with finite (possibly empty) sequences of positive numbers, which are countable and can be placed in order first by sum of entries, and then by lexicographic order within a given sum (only finitely many sequences sum to a given value, so all entries are reached; formally, there are 2^(n−1) compositions of a given natural number n ≥ 1), which gives a traversal. Explicitly:
()
(1)
(1, 1) (2)
(1, 1, 1) (1, 2) (2, 1) (3)
(1, 1, 1, 1) (1, 1, 2) (1, 2, 1) (1, 3) (2, 1, 1) (2, 2) (3, 1) (4)
etc.
This can be interpreted as mapping the infinite depth binary tree onto this tree and then applying breadth-first search: replace the "down" edges connecting a parent node to its second and later children with "right" edges from the first child to the second child, from the second child to the third child, etc. Thus at each step one can either go down (append a (, 1) to the end) or go right (add one to the last number) (except the root, which is extra and can only go down), which shows the correspondence between the infinite binary tree and the above numbering; the sum of the entries (minus one) corresponds to the distance from the root, which agrees with the 2^(n−1) nodes at depth n − 1 in the infinite binary tree (2 corresponds to binary).
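The enumeration can be sketched as a Python generator: compositions of each successive sum are produced in lexicographic order, matching the listing above (the function names are illustrative):

def compositions(total):
    # All ordered sequences of positive integers summing to total, lexicographically.
    if total == 0:
        yield ()
        return
    for first in range(1, total + 1):
        for rest in compositions(total - first):
            yield (first,) + rest

def nodes_up_to(max_sum):
    # Labels of the tree's nodes, ordered by sum of entries, then lexicographically.
    for s in range(max_sum + 1):
        yield from compositions(s)

# list(nodes_up_to(3)) == [(), (1,), (1, 1), (2,), (1, 1, 1), (1, 2), (2, 1), (3,)]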
References
Sources
Dale, Nell. Lilly, Susan D. "Pascal Plus Data Structures". D. C. Heath and Company. Lexington, MA. 1995. Fourth Edition.
Drozdek, Adam. "Data Structures and Algorithms in C++". Brook/Cole. Pacific Grove, CA. 2001. Second edition.
"Tree Transversal" (math.northwestern.edu)
External links
Storing Hierarchical Data in a Database with traversal examples in PHP
Managing Hierarchical Data in MySQL
Working with Graphs in MySQL
See tree traversal implemented in various programming language on Rosetta Code
Tree traversal without recursion
Tree Traversal Algorithms
Binary Tree Traversal
Tree Traversal In Data Structure
Trees (data structures)
Articles with example pseudocode
Graph algorithms
Recursion
Iteration in programming
| Tree traversal | [
"Mathematics"
] | 3,793 | [
"Mathematical logic",
"Recursion"
] |
597,645 | https://en.wikipedia.org/wiki/Northern%20leopard%20frog | Lithobates pipiens formerly Rana pipiens, commonly known as the northern leopard frog, is a species of leopard frog from the true frog family, native to parts of Canada and the United States. It is the state amphibian of Minnesota and Vermont.
Description
The northern leopard frog is a fairly large species of frog, reaching about 11 cm in snout-to-vent length. It varies from green to brown in dorsal color, with large, dark, circular spots on its back, sides, and legs. Each spot is normally bordered by a lighter ring. A pair of dorsolateral folds starting from the back of the eye runs parallel to each other down the back. These dorsolateral folds are often lighter or occasionally pinkish in colour. Also, a pale stripe runs from the nostril, under the eye and tympanum, terminating at the shoulder. The ventral surface is white or pale green. The iris is golden and toes are webbed.
Tadpoles are dark brown or grey, with light blotches on the underside. The tail is pale tan.
Color variations
The northern leopard frog has several different color variations, the most common two being the green and the brown morphs, with another morph known as the burnsi morph. Individuals with the burnsi morph coloration lack spots on their backs, but may or may not retain them on their legs. They can be bright green or brown and have yellow dorsal folds. Albinism also appears in this species, but is very rare. Blue individuals occur as well, though these are likewise quite rare.
Ecology and behavior
Northern leopard frogs have a wide range of habitats. They are found in permanent ponds, swamps, marshes, and slow-moving streams throughout forest, open, and urban areas. They normally inhabit water bodies with abundant aquatic vegetation. In the summer, they often abandon ponds and move to grassy areas and lawns. They are well adapted to cold and can be found at high elevations above mean sea level. Males make a short, snore-like call from water during spring and summer. The northern leopard frog breeds in the spring (March–June). Up to 6500 eggs are laid in water, and tadpoles complete development within the breeding pond. Tadpoles are light brown with black spots, and development takes 70–110 days, depending on conditions. Newly metamorphosed frogs resemble small adults.
This species was once quite common through parts of western Canada and the United States until declines started occurring during the 1970s. Although the definitive cause of this decline is unknown, habitat loss and fragmentation, environmental contaminants, introduced fish, drought, and disease have been proposed as mechanisms of decline and are likely preventing species' recovery in many areas. Many populations of northern leopard frogs have not yet recovered from these declines.
Northern leopard frogs are preyed upon by many different animals, such as snakes, raccoons, other frogs, and even humans. They do not produce distasteful skin secretions and rely on speed to evade predation.
They eat a wide variety of animals, including crickets, flies, worms, and smaller frogs. Using their large mouths, they can even swallow birds and garter snakes. In one case, a bat was recorded as prey of this frog. This species is similar to the pickerel frog (Lithobates palustris) and the southern leopard frog (Lithobates sphenocephalus).
Research
Medical
The northern leopard frog produces specific ribonucleases in its oocytes. Those enzymes are potential anticancer drugs. One such molecule, called ranpirnase (onconase), is in clinical trials as a treatment for pleural mesothelioma and lung tumors. Another, amphinase, has been described as a potential treatment for brain tumors.
Neuroscience
The northern leopard frog has been a preferred species for making discoveries about basic properties of neurons since the 1950s. The neuromuscular junction of the sciatic nerve fibers of the sartorius muscle of this frog has been the source of initial data about the nervous system.
Muscle physiology and biomechanics
The northern leopard frog is a popular species for in vitro experiments in muscle physiology and biomechanics due to the ease of accessibility for investigators in its native range and the ability of the sartorius muscle to stay alive in vitro for several hours. Furthermore, the reliance of the frog on two major modes of locomotion (jumping and swimming) allows for understanding how muscle properties contribute to organismal performance in each of these modes.
Range
Northern leopard frogs occur from Great Slave Lake and Hudson Bay, Canada, south to Kentucky and New Mexico, USA. It is also found in Panama, where it is endemic to the central cordillera and western Pacific lowlands, although this is most likely an undescribed species. They occupy grasslands, lakeshores, and marshes.
See also
Southern leopard frog
Plains leopard frog
Rio Grande leopard frog
Lowland leopard frog
Relict leopard frog
American bullfrog
Pickerel frog
References
Further reading
AmphibiaWeb, available at http://amphibiaweb.org/
Ankley GT, Tietge JE, DeFoe DL, Jensen KM, Holcombe GW, Durhan EJ, Diamond SA (1998). "Effects of ultraviolet light and methoprene on survival and development of Rana pipiens". Environmental Toxicology and Chemistry 17 (12): 2530–2542.
Schreber JCD von. (1782). "Beytrag zur Naturgeschichte der Frösche ". Der Naturforscher, Halle 18: 182-193. (Rana pipiens, new species). (in German).
External links
Northern Leopard Frog (Rana pipiens) — Natural Resources Canada.
AWD: Rana pipiens — animal diversity, University of Michigan.
BBC news: "Rana pipiens and the treatment of brain tumours."
Lithobates
Amphibians of the United States
Amphibians of Canada
Fauna of the Great Lakes region (North America)
Fauna of the Eastern United States
Fauna of the Plains-Midwest (United States)
Animal models
Amphibians described in 1782
Symbols of Minnesota
Symbols of Vermont | Northern leopard frog | [
"Biology"
] | 1,287 | [
"Model organisms",
"Animal models"
] |
597,727 | https://en.wikipedia.org/wiki/List%20of%20Internet%20phenomena | Internet phenomena are social and cultural phenomena specific to the Internet, such as Internet memes, which include popular catchphrases, images, viral videos, and jokes. When such fads and sensations occur online, they tend to grow rapidly and become more widespread because the instant communication facilitates word of mouth transmission.
This list focuses on internet phenomena that are accessible regardless of local internet regulations.
Advertising and products
Amazon Coat – an unnamed coat sold on the online store Amazon.com by the Chinese clothing brand Orolay, previously known for its home furnishings. It became a viral phenomenon from the period between December 2018 and the COVID-19 pandemic.
Beanie Babies – Cited as being the world's first Internet sensation in 1995.
Cerveza Cristal – A Chilean beer company that produced a series of advertisements during a Star Wars original trilogy broadcast in 2003. The commercials, titled The Force is with Cristal Beer, would air seamlessly with the scenes in the trilogy, such as a pair of hands like Obi Wan's opening a chest, revealing the beer. The advertisements were critically acclaimed in the country and became internationally viral on Twitter in March 2024.
Cooks Source infringement controversy – This publication drew backlash after it committed copyright infringement by using an online article without permission for commercial purposes. This backlash further increased due to Cooks Source's response which showed a misunderstanding of copyright and an increasing agitation to the original writer of the article.
Elf Yourself (2006) and Scrooge Yourself (2007) – Interactive websites created by Jason Zada and Evolution Bureau for OfficeMax's holiday season advertising campaign. Elf Yourself allows visitors to upload images of themselves or their friends, see them as dancing elves, and includes options to save or share the video. According to ClickZ, visiting the Elf Yourself site "has become an annual tradition that people look forward to". While not selling any one specific product, the two were created to raise consumer awareness of the sponsoring firm.
Flex Tape – An infomercial of the product Flex Tape. It became a meme after YouTuber JonTron made a video reviewing the infomercial.
FreeCreditReport.com – A series of TV commercials that were posted on the Internet; many spoofs of the commercials were made and posted on YouTube.
HeadOn – A June 2006 advertisement for a homeopathic product claimed to relieve headaches. Ads featured the tagline, "HeadOn. Apply directly to the forehead", stated three times in succession, accompanied by a video of a model using the product without ever directly stating the product's purpose. The ads were successively parodied on sites such as YouTube and rapper Lil Jon even made fun of it.
Kerfuś – A robot with a cat face used as a mascot for Carrefour. The robot went viral in Poland in 2022, where Kerfuś became the main character of many memes and erotic pictures.
Little Darth Vader – An advertisement by Volkswagen featuring young Max Page dressed in a Darth Vader costume running around his house trying to use "the Force". It was released on the Internet a few days prior to Super Bowl XLV in 2011, and quickly became popular. As of 2013 it was the most shared ad of all time.
LowerMyBills.com – Banner ads from this mortgage company feature endless loops of cowboys, women, aliens, and office workers dancing.
The Man Your Man Could Smell Like – A television commercial starring Isaiah Mustafa reciting a quick, deadpan monologue while shirtless about how "anything is possible" if men use Old Spice. It eventually led to a popular viral marketing campaign which had Mustafa responding to various Internet comments in short YouTube videos on Old Spice's YouTube channel.
"Mac Tonight/Moon Man" – A McDonald's commercial made to promote dinner sales. Starting in 2007, the character in the commercial, "Mac Tonight" was used in videos where he is depicted promoting violence against minorities and promoting the KKK with racist parodies of rap songs. The best-known parody, "Notorious KKK" (a parody of Hypnotize by The Notorious B.I.G.), has accumulated over 119,000 views on YTMND.
Nicole Kidman AMC Theatres commercial – In September 2021, AMC Theatres began airing a commercial starring actress Nicole Kidman in its theaters and on television. The ad, written by screenwriter Billy Ray, was intended to spur theater attendance following the COVID-19 pandemic by highlighting the "magic" of the movie theater experience. The commercial's grand style and the earnest melodrama of Kidman's monologue has led the commercial to be appreciated as an artifact of camp. The commercial has been the subject of internet memes, parodies, merchandise, and audience participation rituals.
"Nope, Chuck Testa" – A local commercial made for Ojai Valley Taxidermy, owned by Chuck Testa, suggesting that the stuffed creatures were alive until Testa appeared, saying "Nope, Chuck Testa!"; the ad soon went viral.
Potato Parcel – a web site that allows the user to send anonymous personalized messages on potatoes via the mail.
Pepsi MAX & Jeff Gordon Present: Test Drive – A short film where NASCAR driver Jeff Gordon poses as an average car buyer to prank a cars salesman. A sequel, Test Drive 2, was released the following year, with Gordon pranking a writer who had branded the original video as fake.
"Rivals" – A commercial for video game retailer EB Games that promoted Call of Duty: Advanced Warfare. The commercial drew criticism for its concept and the performances of its actors.
Shake Weight – Infomercial clips of the modified dumbbell went viral as a result of the product's sexually suggestive nature.
Vans (2016) – Featured in the "Damn Daniel" viral internet meme.
What Would You Do for a Klondike Bar? – A slogan at the end of commercials advertising the ice cream sandwich Klondike bar. People on YouTube and Facebook began posting videos depicting people in dangerous and absurdist situations attempting to reach a Klondike Bar in response to the slogan.
Whopper Whopper – A song by the American fast-food restaurant chain Burger King which serves as a jingle for the restaurant's signature burger, the Whopper.
Will It Blend? – The blender product Blendtec, claimed by its creator Tom Dickson to be the most powerful blender, is featured in a series of YouTube videos, "Will It Blend?" where numerous food and non-food items are used within the blender.
Xtranormal – A website allowing users to create videos by scripting the dialog and choosing from a menu of camera angles and predesigned CGI characters and scenes. Though originally designed to be used to ease storyboard development for filmmakers, the site quickly became popular after videos made with the tool, including "iPhone 4 vs HTC Evo", became viral.
Animation and comics
Animutations – Early Adobe Flash-based animations, pioneered by Neil Cicierega in 2001, typically featuring foreign language songs (primarily Japanese, such as "Yatta"), set to random pop-culture images. The form is said to have launched the use of Flash for inexpensive animations that are now more common on the Internet.
Arthur – A 1996 PBS educational series that became popular on the Internet in July 2016 through humorous stills, including a still of the title character's clenched fist.
Ate my balls – One of the earliest examples of an internet meme, which involved web pages depicting a particular celebrity, fictional character, or other subject's relish for eating testicles.
Axe Cop – Initially a web comic series with stories created by five-year-old Malachai Nicolle and drawn into comic form by his 29-year-old brother Ethan, the series gained viral popularity on the Internet due to the vividness and non-sequitur nature of Malachai's imagination, and has led to physical publication and a series of animated shorts in the 2012–2013 season for the Fox Television Network.
Badger Badger Badger – A hypnotic loop of animal calisthenics set to the chant of "badger, badger, badger", created by Jonti "Weebl" Picking.
Big Chungus – A still frame of the 1941 Merrie Melodies short Wabbit Twouble when Bugs Bunny mocks a fat Elmer Fudd. The meme originated from fictitious cover art for a video game titled Big Chungus (with "chungus" being a neologism associated with video game commentator James Stephanie Sterling), which featured a still from the scene, and was popularized by a Facebook post by a GameStop manager who alleged that a colleague's mother had inquired about purchasing the "game" as a gift. Warner Bros. later incorporated Big Chungus into its own video game Looney Tunes World of Mayhem.
Bongo Cat – Originated on Twitter on 7 May 2018, when a simple animated cat GIF, was edited for it to play the song "Athletic" from the Super Mario World soundtrack. This cat has since been edited to play various songs on bongos, and later other instruments.
"Caramelldansen" – A spoof from the Japanese visual novel opening Popotan that shows the two main characters doing a hip swing dance with their hands over their heads, imitating rabbit ears, while the background song plays the sped-up version of the song "Caramelldansen", sung by the Swedish music group Caramell. Also known as Caramelldansen Speedycake Remix or Uma uma dance in Japan, the song was parodied by artists and fans who then copy the animation and include characters from other anime performing the dance.
Charlie the Unicorn – A five-part series of videos involving the titular unicorn who is repeatedly hoodwinked by two other blue and pink unicorns, Lolz and Roffle, who take him on elaborate adventures to steal his belongings or cause him physical harm.
Dancing baby – A 3D-rendered dancing baby that first appeared in 1996 by the creators of Character Studio for 3D Studio MAX, and became something of a late 1990s cultural icon, in part due to its exposure on worldwide commercials, editorials about Character Studio, and the popular television series Ally McBeal.
The End of the World – A Flash-animated video by Jason Windsor in 2003 that depicts a situation when the entire world is nuked by rivalling countries.
Happy Tree Friends – A series of Flash cartoons featuring cartoon animals experiencing violent and gruesome accidents.
Homestar Runner – A Flash animated Internet cartoon by Mike Chapman, Craig Zobel, and Matt Chapman, created in 1996 and popularized in 2000. The cartoon contains many references to popular culture from the 1980s and 1990s, including video games, television, and popular music.
Joe Cartoon – Creator of interactive Flash animations Frog in a Blender and Gerbil in a Microwave, which were two of the first Flash cartoons to receive fame on the Internet.
Kung Fu Bear – an Internet meme involving an Asian black bear who skillfully twirls, throws and catches a long staff.
Loituma Girl (also known as Leekspin) – A looped Flash animation of an anime girl Orihime Inoue from the Bleach series twirling a leek, set to a scat singing section of the traditional Finnish folk song "Ievan Polkka", sung by the Finnish quartet Loituma on their 1995 debut album Things of Beauty. The band's popularity rose tremendously after the animation was posted in Russian LiveJournal in 2006. The song clip soon enjoyed overwhelming popularity as a ringtone.
"Loss" – A webcomic strip published on 2 June 2008, by Tim Buckley for his gaming-related webcomic Ctrl+Alt+Del. Set during a storyline in which the main character Ethan and his fiancée Lilah are expecting their first child, the strip – presented as a four-panel comic with no dialogue – shows Ethan entering a hospital, where he sees Lilah weeping in a hospital bed; she has suffered a miscarriage. It has received negative reception from critics and webcomic creators and been adapted and parodied many times.
Motu Patlu – An Indian cartoon aired on Nickelodeon (India), made widely popular by a Nick India ad celebrating Teacher's Day in India, which has been reposted under the title "D se Dab".
Nyan Cat – A YouTube video of an animated flying cat, set to an Utau song.
Polandball (more commonly known as Countryballs) – A user-generated Internet meme which originated on the /int/ board of German imageboard Krautchan.net in the latter half of 2009. The meme is manifested in a large number of online comics, where countries are presented as spherical personas that interact in often broken English, poking fun at national stereotypes and international relations, as well as historical conflicts.
Pusheen – An animated grey tabby cat, originally drawn as a character in the webcomic "Everyday Cute" by artists Clare Belton and Andrew Duff. Belton has since released a Pusheen book.
Rage comics – A large set of pre-drawn images including crudely drawn stick figures, clip art, and other artwork, typically assembled through website generators, to allow anyone to assemble a comic and post to various websites and boards. The New York Times reports that thousands of these are created daily. Typically these are drawn in response to a real-life event that has angered the comic's creator, hence the term "rage comics", but comics assembled for any other purpose are also made. Certain images from rage comics are known by specific titles, such as "trollface" (a widely grinning man), "forever alone" (a man crying to himself), or "rage guy" (a man shouting "FUUUUU...").
Salad Fingers – A Flash animation series surrounding a green man with severely elongated fingers in a desolate world populated mostly by deformed, functionally mute people.
Shut the fuck up, TERF – A crudely photoshopped image featuring Zombie Land Saga character Lily Hoshikawa, a trans girl, holding a gun with the caption "Shut the fuck up, TERF". The image was criticized as constituting a threat of violence, and presented in UK Parliament in May 2019 during a convening of the Human Rights Committee while questioning a Twitter employee on the subject of abuse. In a tweet in January 2023, J. K. Rowling likened the meme to early twentieth century anti-suffragist artwork.
Simpsonwave – A genre of videos where clips of the American animated sitcom The Simpsons are filtered with tinted, VHS-like effects and played over psychedelic vaporwave or chillwave tracks.
Skibidi Toilet – A series of viral YouTube animations made by animator Alexey Gerasimov using Source Filmmaker which depicts a war between skibidi toilets (disembodied heads inside moving toilets which can be killed by being flushed down) and a faction of people with cameras, TVs and loudspeakers for heads.
The Spirit of Christmas – Consists of two different animated short films made by Trey Parker and Matt Stone, which are precursors to the animated series South Park. To differentiate between the two homonymous shorts, the first short is often referred to as Jesus vs. Frosty (1992), and the second short as Jesus vs. Santa (1995). Fox executive Brian Graden sent copies of Jesus vs. Santa to several of his friends, and from there it was copied and distributed, including on the internet, where it became one of the first viral videos. They were created by animating construction paper cut-outs with stop motion, and features prototypes of the main characters of South Park.
Steamed Hams – Remixes of a segment of The Simpsons episode "22 Short Films About Springfield" involving Principal Skinner and Superintendent Chalmers, in which Skinner has invited Chalmers over to dinner, inadvertently sets his ham on fire, and covers it up by serving fast food hamburgers as "steamed hams".
"This is fine" – A two-panel comic drawn in 2013 by KC Green as part of the Gunshow webcomic, showing an anthropomorphic dog sitting in a room on fire, and saying "This is fine". The comic emerged as a meme in 2016, used in situations, as described by The New York Times, "halfway between a shrug and complete denial of reality". Numerous derivatives of the "This is fine" comic have been made.
"Tuxedo Winnie the Pooh" – A photoshopped image of Winnie the Pooh sitting in an armchair from the featurette Winnie the Pooh and the Honey Tree, which became popular on Reddit in 2019. The meme, which is also known as "A fellow man of culture", features Winnie the Pooh wearing a tuxedo and smiling.
The Ultimate Showdown of Ultimate Destiny – A lethal battle royale between many notable real and fictitious characters from popular culture. Set to a song of the same name, written and performed by Neil Cicierega under his musician alias, "Lemon Demon."
Ultra Instinct Shaggy – A character interpretation that the Scooby-Doo character Shaggy is immensely more powerful than he presents himself. The meme is usually presented as still frames of a behind-the-scenes interview of the 2002 live-action movie with subtitles implying that Shaggy is restraining his power to prevent catastrophe. Subsequently, Warner Bros. canonized the meme as part of a credits gag in the animated film Mortal Kombat Legends: Battle of the Realms, as well as including Shaggy as a fighter in the MultiVersus crossover fighting game.
Weebl and Bob – A series of Flash cartoons created by Jonti Picking featuring two egg-shaped characters that like pie and speak in a stylistic manner.
xkcd – A webcomic created by Randall Munroe, popularized on the Internet due to a high level of math-, science- and geek-related humor, with certain jokes being reflected in real life, such as using Wikipedia's "citation needed" tag on real-world signs or the addition of an audio preview for YouTube comments.
Challenges
Challenges generally feature Internet users recording themselves performing certain actions, and then distributing the resulting video through social media sites, often inspiring or daring other users to repeat the challenge.
Dance
Coffin Dance/Dancing Pallbearers – A group of Ghanaian pallbearers that respectfully dance during funeral processions were covered by the BBC in 2017 and gained some initial Internet popularity. In the wake of the COVID-19 pandemic, a popular TikTok video mashed the BBC footage with the EDM song "Astronomia" from Russian artist Tony Igy, creating a meme that appeared to spread as a morbidly humorous reminder about the dangers of COVID-19.
Dab – A dance move where a person drops their head into a bent, slanted arm, with the other arm out straight and parallel.
"Dancing Banana" – A banana dancing to the song "Peanut Butter Jelly Time" by the Buckwheat Boyz.
Hampster Dance – A page filled with hamsters dancing, linking to other animated pages. It spawned a fictional band complete with its own CD album release.
Harlem Shake – A video based on Harlem shake dance, originally created by YouTube personality Filthy Frank, and using an electronica version of the song by Baauer. In such videos, one person is dancing or acting strange among a room full of others going about routine business. After the drop in the song and a video cut, everyone starts dancing or acting strangely. The attempts to recreate the dance led to a viral spread on YouTube.
"Hit the Quan" – A viral dance challenge to the song "Hit the Quan" by American rapper iLoveMemphis. Rich Homie Quan originally performed this dance in his music video for his song "Flex (Ooh, Ooh, Ooh)". iLoveMemphis produced the "Hit The Quan" based around Rich Homie Quan's dance. iLoveMemphis' song launched the "Hit the Quan" viral dance challenge because of its convenient lyrics to dance to. "Hit the Quan" reached 20 on the Billboard Hot 100 chart because of the popularity of the dance. The dance challenge was very popular on social media platforms, especially Vine. Many celebrities participated in the popular dance challenge.
"Indian Thriller" – A viral scene from the Indian film Donga with added subtitles phonetically approximating the original lyrics as English sentences.
JK Wedding Entrance Dance – The wedding procession for Jill Peterson and Kevin Heinz of St. Paul, Minnesota, choreographed to the song "Forever" by Chris Brown. Popularized on YouTube with 1.75 million views in less than five days in 2009. The video was later imitated in an episode of The Office on NBC.
"Kiki Challenge" or "#DoThe Shiggy" – A viral dance challenge to the song "In My Feelings" by Drake. This challenge was started by a comedian named Shiggy on the night that Drake released the album Scorpion. Shiggy posted a video of himself on his Instagram account dancing along to part of the lyrics in what looks like in the middle of a neighborhood street. Shiggy commented #DoTheShiggy. Drake claims the success of the song was due to Shiggy's popular dance to his song. The dance challenge is often filmed with a twist of the original. The most popular twist of the dance is filmed from the passenger side of a moving vehicle through the open driver door where the would be driver is dancing moves along with the slowly moving car. This challenge received a lot of controversy due to the fact nobody was in control of the car. Performers have received fines and sometimes suffered injury. This viral dance challenge was performed by a number of professional athletes and celebrities. The dance challenge was performed by people in the U.S. and spread to the rest of the world.
Little Superstar – A video of Thavakalai, a short Indian actor, break-dancing to MC Miker G & DJ Sven's remix of the Madonna song "Holiday". The clip comes from a 1990 Tamil film Adhisaya Piravi, featuring actor Rajnikanth.
Running Man Challenge – A dance move where participants in a way resembling running to the 1996 R&B song "My Boo" by Ghost Town DJ's. First posted to Vine by two teenagers from New Jersey, the dance went viral in 2016 after two University of Maryland basketball players posted their rendition. The dance gets its name because it is an adaptation of the original running man dance move.
T-pose – A surrealist "dance move" that became popular in April 2018 modelled after the default pose (also known as a bind pose) that many 3D models in games, animations, and more take in their raw file form.
Techno Viking – A muscular Nordic raver dancing in a technoparade in Berlin.
"Thriller" by the CPDRC Dancing Inmates – A recreation of Michael Jackson's hit performed by prisoners at the Cebu Provincial Detention and Rehabilitation Center (CPDRC) in the Philippines. In January 2010, it was among the ten most popular videos on YouTube with over 20 million hits.
Triangle Dance Challenge – Three individuals place hands on each other's shoulders and jump to a different point on an invisible triangle. This gained popularity in 2019.
Email
Bill Gates Email Beta Test – An email chain-letter that first appeared in 1997 and still circulates. The message claims that America Online and Microsoft are conducting a beta test and for each person one forward the email to, they will receive a payment from Bill Gates of more than $200. Realistic contact information for a lawyer appears in the message.
Craig Shergold – A British former cancer patient who is most famous for receiving an estimated 350 million greeting cards, earning him a place in the Guinness Book of World Records in 1991 and 1992. Variations of the plea for greeting cards sent out on his behalf in 1989 are still being distributed through the Internet, although Shergold himself died in 2020, making the plea one of the most persistent urban legends.
Goodtimes virus – An infamous, fraudulent virus warning that first appeared in 1994. The email claimed that an email virus with the subject line "Good Times" was spreading, which would "send your CPU into a nth-complexity infinite binary loop", among other dire predictions.
Lighthouse and naval vessel urban legend – Purportedly an actual transcript of an increasingly heated radio conversation between a U.S. Navy ship and a Canadian who insists the naval vessel change a collision course, ending in the punchline. This urban legend first appeared on the Internet in its commonly quoted format in 1995, although versions of the story predate it by several decades. It continues to circulate; the Military Officers Association of America reported in 2011 that it is forwarded to them an average of three times a day. The Navy has a page specifically devoted to pointing out that many of the ships named weren't even in service at the time.
MAKE.MONEY.FAST – One of the first spam messages, spread primarily through Usenet, or even earlier BBS systems, in the late 1980s or early 1990s. The original email is attributed to an individual who used the name "Dave Rhodes", who may or may not have existed. The message is a classic pyramid scheme – one receives an email with a list of names and is asked to send $5 by postal mail to the person whose name is at the top of the list, add their own name to the bottom, and forward the updated list to a number of other people. A sketch of the mechanics appears after this list.
Neiman Marcus Cookie recipe – An email chain-letter dating back to the early 1990s, but originating as Xeroxlore, in which a person tells a story about being ripped off for over $200 for a cookie recipe from Neiman Marcus. The email claims the person is attempting to exact revenge by passing the recipe out for free.
Nigerian Scam/419 scam – An email scam, popularized by the ability to send millions of messages cheaply, in which the sender claims to be a high-ranking official of Nigeria with knowledge of a large sum of money or equivalent goods that they cannot claim but must divest themselves of; to do so, they claim to require a smaller sum of money up front before the fortune can be sent to the receiver. The scam has since mutated to involve any number of countries, high-ranking persons, barristers, or relationships to said people.
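The MAKE.MONEY.FAST entry above describes a simple list-rotation protocol, and a few lines of code make the pyramid arithmetic explicit. A minimal sketch in Python; the five-name list and the fan-out of ten recipients per forward are illustrative assumptions, since the figures vary between versions of the letter:

```python
# A toy model of the chain letter's mechanics under assumed parameters
# (5-name list, each recipient forwards to 10 people); not the original text.

def forward(names: list[str], sender: str) -> list[str]:
    """The top name has been paid $5; shift everyone up, append the sender."""
    return names[1:] + [sender]

chain = ["A", "B", "C", "D", "E"]
chain = forward(chain, "you")  # your name enters at the bottom
print(chain)                   # ['B', 'C', 'D', 'E', 'you']

# Your name climbs one slot per forward, so it only pays out after the
# letter propagates through several more rounds. If each copy is forwarded
# to 10 people, round n puts 10**n copies in circulation: the exponential
# recruitment that guarantees collapse for almost all participants.
for n in range(1, 6):
    print(f"round {n}: {10**n:,} copies in circulation")
```

The point of the sketch is the final loop rather than the rotation itself: payouts depend on recruitment growing tenfold per round, which no population can sustain for long.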
Film and television
The Babadook (2014) – An Australian psychological horror film that started trending on Twitter in June 2017 when the title character became an unofficial mascot for the LGBT community. Prior to that, rumors of the Babadook's sexuality began in October 2016, when some Netflix users reported seeing the film categorized as an LGBT movie on Netflix.
Barbenheimer (2023) – A portmanteau of Barbie and Oppenheimer. Barbenheimer began circulating ahead of the theatrical release of both films on 21 July 2023, with social media users creating and sharing memes noting the juxtaposition between the films.
Bee Movie (2007) – Sped-up or slowed-down clips of the film have become popular on YouTube. One upload by "Avoid at All Costs" exceeded 12 million views as of December 2016. Many of the edited videos in this trend were taken down for spam due to the volume of videos posted by some channels. From September 2013 onwards, a few Internet users posted the entirety of the Bee Movie script on sites like Tumblr and Facebook.
The Blair Witch Project (1999) – The film's producers used Internet marketing to create the impression that the documentary-style horror film featured real, as opposed to fictional events.
Bye, Felicia – A line from the 1995 film Friday originally uttered by Ice Cube's character to dismiss Angela Means' character, Felisha. The line became viral beginning in the 2010s.
Cloverfield (2008) – Paramount Pictures used a viral marketing campaign to promote this monster movie.
Dahmer – Monster: The Jeffrey Dahmer Story (2022) – An anthology thriller true crime series created by Ryan Murphy and Ian Brennan for Netflix. After its release, it went viral on Twitter and TikTok.
Dear Evan Hansen (2021) – A film adaptation of the stage musical of the same name that featured the then 27-year-old Ben Platt reprising his role as 17-year-old high schooler Evan Hansen, a casting decision that sparked widespread backlash from critics and the public, many of whom attributed it to nepotism. Two scenes from the film became internet memes the moment it was made available digitally as a result of the controversy: a close-up of Evan crying during the climax of "Words Fail", his expression wrenched and tortured, and the moment Evan runs off from Zoe Murphy (Kaitlyn Dever) in the hallways during their first meeting at school. Jameson Rich of The New York Times observed "The image of a crying Platt is already a much-iterated joke, and its thrust is, overwhelmingly, derisive. But being the target of the internet's scorn is not de facto a bad thing. When a meme circulates far enough, the underlying movie can gain what feels like cultural currency. The very fact that the images are not part of any intentional advertising actually lends them a note of authenticity. They are, in a perverse way, resonating on their own merit. Is there a better form of contemporary publicity?"
Downfall (2004) – A film depicting Adolf Hitler (portrayed by Swiss actor Bruno Ganz) during the final days of his life. Multiple scenes in which Hitler rants in German have been parodied innumerable times on the Internet, including the one in which Hitler learns that Felix Steiner has failed to carry out his orders and the one in which he learns that SS-Gruppenführer Hermann Fegelein has gone AWOL. The scene's English subtitles are typically replaced by mock subtitles to give the appearance that Hitler is ranting about modern, often trivial topics, sometimes even breaking the fourth wall by referencing the Internet meme itself. While the clips are frequently removed for copyright violations, the film's director, Oliver Hirschbiegel, has stated that he enjoys them, and claimed to have seen about 145 of them.
Figwit (abbreviated from "Frodo is great...who is that?") – A background elf character with only seconds of screen time and one line of dialog from The Lord of the Rings film trilogy played by Flight of the Conchords member Bret McKenzie, which became a fascination with a large number of fans. This ultimately led to McKenzie being brought back to play an elf in The Hobbit.
Goncharov – A nonexistent film invented by users on Tumblr. It is purported to be "the greatest mafia movie ever made," released in 1973. In 2020, a user posted a picture of a tag found on a pair of boots which featured details on the nonexistent film Goncharov in place of a brand label, which suggested it was "A film by Matteo JWHJ0715" and "presented" by Martin Scorsese. Users have inconsistently described the film as being directed by either Matteo JWHJ0715 or Scorsese. This label was speculated by several users to be a misprint of Gomorrah. Goncharov picked up traction again in late November 2022 when a user created a poster for the film that featured a lineup of actors and character names, ultimately sparking an elaborate fiction of the film's existence. Discussion of the film involved detailed critical analysis of the plot, themes, symbolism, and characters, as well as creation of gifs, fan art, and theme music, all presented as if the film were real. The meme's popularity caused it to become a trending topic on the Tumblr platform. A similar meme that emerged on TikTok nine months later—about a fictional 1980s horror film, Zepotha—drew comparisons to Goncharov.
LazyTown (2004) – A children's television program originating from Iceland, which became very popular after one of the primary actors, Stefán Karl Stefánsson, was diagnosed with cancer and a GoFundMe page was set up to support him. The song "We Are Number One" became a meme in October 2016, and many videos were created. It became one of the fastest-growing memes in history, with 250 videos uploaded in 5 days.
Les Misérables (2012) – Tom Hooper's film adaptation of the globally popular stage musical of the same name based on Victor Hugo's 1862 novel of the same name. In April 2022, a clip of the film's version of the "Do You Hear the People Sing?" musical sequence circulated on Twitter in protest of the lockdown during the 2022 Shanghai COVID-19 outbreak. The clip was ultimately blocked by the Chinese government to stop further protest.
The Lord of the Rings trilogy – Released between 2001 and 2003, just as meme culture was taking off, several moments from the films became part of the online culture, with, most notably, Sean Bean's character of Boromir stating "One does not simply walk into Mordor" as one of the most commonly referenced.
Marble Hornets – A documentary-style horror and suspense short film series, presented as an alternate reality experience based on the Slender Man tale. Marble Hornets was instrumental in codifying parts of the Slender Man mythos, but is not part of the inter-continuity crossover that includes many of the blogs and vlogs that followed it, although it does feature in other canons as either a chronicle of real events or a fictional series.
Marriage Story (2019) – Noah Baumbach's critically acclaimed drama about a warring couple going through a coast-to-coast divorce spawned multiple memes despite its serious tone. According to Wired, a meme of Adam Driver punching a wall during Charlie and Nicole's argument scene has contributed to "re-contextualizing Charlie and Nicole's fight into something light and silly". Driver punching a wall has been repurposed to represent general arguments over trivial matters in which a participant becomes angry and overreacts.
Mega Shark Versus Giant Octopus (2009) – The theatrical trailer released in mid-May 2009 became a viral hit, scoring over one million hits on MTV.com and another 300,000 hits on YouTube upon launch, prompting brisk pre-orders of the DVD.
Minions – The mischievous yellow creatures from the Despicable Me franchise have, since their introduction in 2010, become ubiquitous in certain corners of meme culture. Memes built from images of Minions have frequently been derided as bland or unintentionally absurd. In 2022, a phenomenon known as "Gentleminions" arose, in which young men and teenage boys arrived at screenings of Minions: The Rise of Gru in formal attire.
My Little Pony: Friendship Is Magic – Hasbro's 2010 animated series to revive its toy line was discovered by members of 4chan and subsequently spawned a large adult, mostly male fanbase calling themselves "bronies" and creating numerous Internet memes and mashups based on elements from the show.
Re-cut trailer – User-made trailers for established films, using scenes, voice-overs, and music, to alter the appearance of the film's true genre or meaning or to create a new, apparently seamless, film. Examples include casting the thriller-drama The Shining into a romantic comedy, or using footage from the respective films to create Robocop vs. Terminator.
The Nutshack (2007) – A Filipino-American adult animated television series that has been widely mocked for its obnoxious characters, bad writing and animation, and especially for its theme song.
Pingu – An animated Swiss children's television series. The show's animation style has spawned many memes. In particular, a meme in which Mozart's Requiem accompanies a viral video of Pingu the penguin saying "Noot Noot" gained popularity, using the choir symphony to depict feelings of terror and dread.
The Room (2003) – Written, produced, directed, and starring Tommy Wiseau, the low-budget independent film is considered one of the worst films ever made. Through social media and interest from comedians, however, it gained a large number of ironic fans and became a cult classic. It is a popular source for memes based on some of the film's poorly delivered lines, such as "You're tearing me apart, Lisa!" (a shoehorned reference to an iconic James Dean line in Rebel Without a Cause) and "Oh hi, Mark."
Saltburn (2023) – A black comedy psychological thriller film written, directed, and co-produced by Emerald Fennell. After its theatrical release, it became a streaming hit on Amazon Prime Video and went viral on TikTok.
Sharknado (2013) – A made-for-television film produced by The Asylum and aired on the SyFy network as a mockbuster of other disaster films, centered on the appearance of a tornado filled with sharks in downtown Los Angeles. Though similar to other films from the Asylum, elements of the film, such as low-budget effects and choice of actors, led to the film becoming a social media hit and leading to at least four additional sequels.
Shrek – A DreamWorks franchise with a large ironic internet fandom. The viral video "Shrek is Love, Shrek is Life" was based on a homoerotic story on 4chan depicting the titular ogre engaging in anal sex with a young boy.
Snakes on a Plane (2006) – Attracted attention a year before its planned release, and before any promotional material was released, due to the film's working title, its seemingly absurd premise, and the piquing of actor Samuel L. Jackson's interest to work on the film. Producers of the film responded to the Internet buzz by adding several scenes and dialogue imagined by the fans.
SpongeBob SquarePants – A Nickelodeon animated television series that has spawned various Internet memes. These memes include "Surprised Patrick", "Mr. Krabs Blur", "Caveman SpongeBob", "Handsome Squidward", and "Mocking SpongeBob". In 2019, Nickelodeon officially released merchandise based on the memes.
Star War: The Third Gathers: The Backstroke of the West – Around the time of release, a bootleg recording circulated on the internet via peer-to-peer sharing websites. It quickly became notorious for its notable use of Engrish, like the translation of Darth Vader's line "No!" rendered as "Do not want". About a decade after the release of the bootleg, a fandub matching its subtitles was posted on YouTube.
Steamed Hams – A clip from the season seven episode of The Simpsons, 22 Short Films About Springfield, gained popularity with many remixes and edits to the Skinner and The Superintendent segment.
Take This Lollipop (2011) – An interactive horror short film and Facebook app, written and directed by Jason Zada to personalize and underscore the dangers inherent in posting too much personal information about oneself on the Internet. Information gathered from a viewer's Facebook profile by the film's app, used once and then deleted, makes the film different for each viewer.
The Three Bears (1939) – An animated short film made by Terrytoons based on the story Goldilocks and the Three Bears. One of the scenes from the short depicting Papa Bear saying "Somebody toucha my spaghet!" in a stereotypically thick Italian accent became an internet meme in December 2017.
Treasure Island (1988) – A Russian animated film developed and distributed by Kievnauchfilm based on the novel of the same name by Robert Louis Stevenson. A loop of a scene from the film showing three characters in a walk cycle with Dr. Livesey showing a highly pronounced swagger, often overlaid with the phonk song, "Why Not" by Ghostface Playa, became an internet meme in August 2022.
A Very Brady Sequel (1996) – A moment where Marcia Brady says "Sure, Jan" became a popular internet meme during the mid-2010s, usually as a response gif. The original writers and actors responded to the meme during a 2021 interview with Vice.
West Side Story (2021) – A clip of the opening long take shot of "The Dance at the Gym" sequence from Steven Spielberg's 2021 film version of the musical was uploaded to Twitter on 25 February 2022, and went viral over the weekend, reaching 3 million views and over 32,000 likes. It led to many users sharing images and clips of their favorite scenes and shots from the film during that time, while praising Spielberg's direction and Janusz Kamiński's cinematography. This was further amplified by a Twitter thread by filmmaker Guillermo del Toro analyzing the camerawork and blocking on this particular shot.
Gaming
"All your base are belong to us" – Badly translated English from the opening cutscene of the European Mega Drive version of the 1989 arcade game Zero Wing. It has become a catchphrase, inspiring videos and other derivative works.
Angry Birds – A mobile game series made by Rovio Entertainment, released in December 2009 for the iOS and Nokia app stores, with a Google Play version following in October 2010. Since its release, the game has amassed a large following both on the internet and in media for its visuals and its simple-to-understand mechanic of launching a bird from a slingshot. The game has also seen many forms of merchandising, with 30% of Rovio Entertainment's revenue coming from merchandise sales in 2011. One of the brand's largest early endeavors was its first licensed theme park in Tampere, Finland, set to open on 1 May 2012.
Among Us – A game made by game studio Innersloth released on Steam in 2018. The game reached internet fame in 2020 due to Twitch streamers and YouTubers playing the game frequently. Still images from the game, phrases from the game like "Emergency Meeting" and "Dead body reported" as well as typical gameplay events have influenced internet memes. Other terms like "Sus", "Sussy", "Sussy Baka", "Amogus", and "When the imposter is sus" also became notable memes on social media platforms, later taking on a more ironic usage.
Arrow in the knee – City guards in The Elder Scrolls V: Skyrim would utter the line: "I used to be an adventurer like you, then I took an arrow in the knee". The latter part of this phrase quickly took off as a catchphrase and a snowclone in the form of "I used to X, but then I took an arrow in the knee" with numerous image macros and video parodies created.
Bowsette – A fan-made depiction of the Super Mario character Bowser using Toadette's Super Crown power-up from the Nintendo Switch title New Super Mario Bros. U Deluxe to transform into a lookalike of Princess Peach. The character became popular following a four-panel webcomic posted by a user on Twitter and DeviantArt in September 2018.
But can it run Crysis? – A question often asked by PC gaming and hardware enthusiasts. When released in 2007, Crysis was extremely taxing on computer hardware, with even the most advanced consumer graphics cards of the time unable to provide satisfactory frame rates when the game was played on its maximum graphical settings. As a result, this question is asked as a way of judging a certain computer's capability at gaming.
Can it run Doom? – A common joke question asked of any hardware that has a CPU, due to the vast number of ports the game has received. Examples of unconventional hardware that Doom has been ported to include a Canon Pixma printer, the VIC-20, the Touch Bar on the 2016 MacBook Pro, a smart fridge, an ATM, a billboard truck, and within the game itself.
Doomguy and Isabelle – The pairing of Isabelle from the Animal Crossing video game series and Doomguy from the Doom franchise due to the shared release date of Animal Crossing: New Horizons and Doom Eternal.
Elden Ring – A 2022 video game that spawned multiple memes, such as:
"Let me solo her" – The colloquial name for an Elden Ring player who specializes in fighting Malenia, one of the game's most difficult bosses, and whose character wears no armor but a jar as a helmet. The player became widely acclaimed within the game's online community after volunteering, through the game's player summoning feature, to fight Malenia on behalf of other players, defeating her at least four thousand times without assistance. Videos of the player's performances became popular and were widely shared on multiple social news websites. The player's exploits were acknowledged by the game's publisher and became the subject of fan labor, and "Let me solo her" was awarded PC Gamer's Player of the Year award for 2022.
"Maidenless" – a term used by multiple non-player characters to describe the player character. In its original context, it implies that the player character lacks a type of important ally, a "maiden", but it has been appropriated by the player community as a joke or insult, who uses it to imply that its recipient lacks a romantic partner.
Flappy Bird – A free-to-play casual mobile game released on the iOS App Store on 24 May 2013, and on Google Play on 30 January 2014, by indie mobile app developer Dong Nguyen. The game began rapidly rising in popularity in late-December 2013 to January 2014 with up to 50 million downloads by 5 February. On 9 February, Nguyen removed the game from the mobile app stores citing negative effects of the game's success on his health and its addictiveness to players. Following the game's removal from the app stores, numerous clones and derivatives of the game were released with varying similarities to the original game.
I Love Bees – An alternate reality game that was spread virally after a one-second mention inside a Halo 2 advertisement. Purported to be a website about honey bees that was infected and damaged by a strange artificial intelligence, done in a disjointed, chaotic style resembling a crashing computer. At its height, over 500,000 people were checking the website every time it updated.
Lamar Roasts Franklin – A cutscene in the 2013 action-adventure video game Grand Theft Auto V where Lamar Davis, portrayed by comedian Slink Johnson, berates Franklin Clinton, portrayed by actor and former rapper Shawn Fonteno, for Franklin's haircut and his relationship with his girlfriend, ending in Lamar uttering the word "nigga" in a condescending, sing-song voice and giving Franklin the middle finger, much to the latter's chagrin. The cutscene experienced a resurgence in popularity in late 2020 when parodies of the scene were uploaded on YouTube and other video hosting sites. It usually involves Lamar's character model being replaced with various popular culture icons such as Darth Vader, Vegeta, and Snow White among others, with Lamar's dialogue dubbed to account for the characters used. In 2021, Fonteno and Johnson reprised their roles as Franklin and Lamar respectively in a live-action re-enactment of the cutscene. Later that year, Fonteno and Johnson once again reprised their roles in The Contract DLC for Grand Theft Auto Online, complete with a homage to the original roast cutscene.
Leeroy Jenkins – A World of Warcraft player charges into a high-level dungeon with a distinctive cry of "Leeeeeeeerooooy... Jeeenkins!", ruining the meticulous attack plans of his group and getting them all killed.
Let's Play – Videos created by video game players that add their commentary and typically humorous reactions atop them playing through a video game. These videos have created a number of Internet celebrities who have made significant money through ad revenue sharing, such as PewDiePie who earned over $12 million from his videos in 2015.
Line Rider – A Flash game where the player draws lines that act as ramps and hills for a small rider on a sled.
Mafia City – A mobile game that has become infamous for its odd advertising involving a person drastically increasing their stats for doing various mob-related activities, and for the phrase "That's how mafia works".
Portal – The games in the Portal series introduced several Internet memes, including the phrase "the cake is a lie", and the space-obsessed "Space Core" character.
Press F to pay respects – A prompt in the PC version of Call of Duty: Advanced Warfare asking the player to press the F key, in response to which the player character approaches the coffin of a fallen comrade. The mechanic was repeatedly criticized and ridiculed as arbitrary, unnecessary, and uninteresting gameplay, as well as inappropriate to the solemn tone of the funeral the game otherwise intends to convey. The phrase has since become an Internet meme in its own right, sometimes used unironically: during the tribute stream for the Jacksonville Landing shooting, viewers posted a single letter "F" in the chat.
Roblox – A sandbox game that has spawned several memes, such as its "oof" sound.
QWOP – A browser-based game requiring the player to control a sprint runner by using the Q, W, O, and P keys to control the runner's legs. The game is notoriously difficult to control, typically leaving the runner character flailing about. The concept developed into memes based on the game, as well as describing real-life mishaps as attributable to QWOP.
Six Degrees of Kevin Bacon – A trivia/parlor game based around linking an actor to Kevin Bacon through a chain of co-starring actors in films, television, and other productions, with the hypothesis that no actor is more than six connections away from Bacon. It is similar to the theory of six degrees of separation, or the Erdős number in mathematics. The game was created in 1994, just at the start of the wider spread of Internet use, was populated further by the creation of movie database sites like IMDb, and has since become a board game and contributed to the field of network science. A code sketch of the underlying shortest-path idea appears after this list.
Sonic the Hedgehog – A video game series created by Sega that has spawned multiple memes, such as:
Sonic Real-Time Fandubs – The YouTube channel SnapCube has produced a series of improvisational comedy gag dubs of several Sonic titles, including Sonic Adventure 2, Sonic the Hedgehog (2006) and Shadow the Hedgehog, whose cutscenes are deliberately dubbed with new, inaccurate dialogue. The dubs have earned their own fandom and derivative works based on jokes from the series. The dub of the scene in Sonic Adventure 2 in which Doctor Eggman destroys half of the moon, featuring an expletive-filled rant from the actor, has spawned several memes.
Sanic – A purposely misdrawn Sonic that has been referenced by Sega themselves and used in merchandise.
"Ugandan Knuckles" – A meme that gained popularity through the social game VRChat, where players using a crude Knuckles model asked other players if they "knew da wae" ("know the way") and who their "queen" was, while clicking their tongues and spitting repeatedly.
Surprised Pikachu – An image of the Pokémon Pikachu with a blank look and an open mouth. It is used as a reaction image to show either shock or lack thereof.
Twitch Plays Pokémon – An "experiment" and channel created by an anonymous user on Twitch in February 2014. Logged-in viewers of the channel can enter commands in chat corresponding to the physical inputs used in the JRPG video game Pokémon Red. These are collected and parsed by a chat bot that uses the commands to control the main character in the game, which is then live-streamed from the channel. The stream attracted more than 80,000 simultaneous players and over 10 million views within a week of going live, creating a chaotic series of movements and actions within the game, a number of original memes, and derivative fan art. The combination has been called an entertainment hybrid of "a video game, live video and a participatory experience," and has inspired similar versions for other games.
U R MR GAY – A message allegedly hidden in the Super Mario Galaxy box art, which appears when each letter not decorated with a star is removed from the art. It was first noticed by a NeoGAF poster in September 2007. Video game journalists have debated as to whether the message was placed on purpose or was simply a humorous coincidence. In Super Mario Galaxy 2, an alleged response to the former's message can be inferred in the title by reading the letters that sparkle in the box art from bottom to top, spelling out "YA I M R U?"
Untitled Goose Game – A 2019 video game developed by Australian game studio House House, in which the player controls a goose causing mischief in an English village. An early teaser for the game in 2017 led to strong interest in the title, and on release, the game quickly became an Internet meme.
Wordle – A word-guessing game similar to Jotto and Mastermind, where the player has only six tries to guess a five-letter word each day, the game indicating whether letters are in the word and/or in the correct position. The game grew popular over a few weeks after the ability to share results with others via social media was added near the end of 2021. The game's popularity led to The New York Times Company acquiring the game from its creator Josh Wardle at the end of January 2022 for an undisclosed seven-figure sum.
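The guess-feedback mechanic Wordle popularized (exact matches score first, then letters present elsewhere, so duplicate letters are not double-counted) reduces to a small scoring function. A minimal sketch in Python, written from the commonly described rules rather than the game's actual source:

```python
# A minimal sketch of Wordle-style feedback, assuming the commonly
# described rules; not the game's actual implementation.
# "G" = right letter, right spot; "Y" = in the word, wrong spot; "." = absent.
from collections import Counter

def score(guess: str, answer: str) -> str:
    result = ["."] * len(guess)
    unmatched = Counter()  # answer letters not consumed by exact matches
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "G"
        else:
            unmatched[a] += 1
    for i, g in enumerate(guess):
        if result[i] == "." and unmatched[g] > 0:
            result[i] = "Y"  # present elsewhere; consume one copy
            unmatched[g] -= 1
    return "".join(result)

print(score("crane", "cigar"))  # GYY.. : C exact; R and A present elsewhere
```

The two-pass order matters: scoring the greens first is what stops a guess with repeated letters from claiming more copies of a letter than the answer contains.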
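The Six Degrees of Kevin Bacon entry above is, at heart, a shortest-path question on a graph whose nodes are actors and whose edges are shared credits. A minimal breadth-first search sketch in Python; the tiny co-star graph is invented for illustration and is not real filmography data:

```python
# Computing a "Bacon number" as breadth-first search over a co-star graph.
# The graph below is made-up example data, not real credits.
from collections import deque

costars = {
    "Kevin Bacon": ["Tom Hanks", "John Lithgow"],
    "Tom Hanks": ["Kevin Bacon", "Meg Ryan"],
    "John Lithgow": ["Kevin Bacon"],
    "Meg Ryan": ["Tom Hanks", "Billy Crystal"],
    "Billy Crystal": ["Meg Ryan"],
}

def bacon_number(actor: str):
    """Number of co-star links from Kevin Bacon, or None if unconnected."""
    queue = deque([("Kevin Bacon", 0)])
    seen = {"Kevin Bacon"}
    while queue:
        current, distance = queue.popleft()
        if current == actor:
            return distance
        for neighbour in costars.get(current, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, distance + 1))
    return None

print(bacon_number("Billy Crystal"))  # 3: Bacon -> Hanks -> Ryan -> Crystal
```

Run on the real co-star graph, the same search underlies the game's hypothesis that almost every actor sits within six links of Bacon.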
Images
Baby mugging and Baby suiting – MommyShorts blogger Ilana Wiles began posting pictures of babies in mugs, and later adult business suits, both of which led to numerous others doing the same.
Babylonokia – A clay tablet, shaped like a mobile phone designed by Karl Weingärtner. Fringe scientists and alternative archaeology proponents subsequently misrepresented a photograph of the artwork as showing an 800-year-old archaeological find; that story was popularised in a video on the YouTube channel Paranormal Crucible and led to the object being reported by some press sources as a mystery.
Bert is Evil – A satirical website that presented Bert of Sesame Street as the root of many evils. A juxtaposition of Bert and Osama bin Laden subsequently appeared on a real poster at a protest in Bangladesh.
Blinking white guy – An animated GIF of former Giant Bomb video producer Drew Scanlon blinking in surprise, originating from a 2013 video on the website, became an internet meme in 2017. Multiple outlets have noted the versatility of the GIF's use as a reaction.
Blue waffle – A hoax originating in 2010 claiming to show the effects of an unknown sexually transmitted disease affecting only women, causing severe vaginal infection with a blue discoloration. The disease has been confirmed as false. In Trenton, New Jersey, councilwoman Kathy McBride cited the image in a 2013 city council meeting, not realizing that it was a hoax.
#BreakTheInternet – The November 2014 issue of Paper included a cover image of Kim Kardashian in a partially nude pose, exposing her buttocks, taken by photographer Jean-Paul Goude. It was captioned "#breaktheinternet", as the magazine hoped to set a record in social media response. Several other photos from the shoot were also released, including one that mimicked the "Champagne Incident" photograph Goude had taken for his book Jungle Fever. Paper's campaign set a record for hits on its site, and the photographs became part of Internet memes.
Brian Peppers – In 2005, a photo surfaced of a man named Brian Peppers, noted for his appearance, which suggests Apert syndrome or Crouzon syndrome. Found on the Ohio sex offender registry website, the photo gained traction after being shared on website YTMND. Peppers died in 2012 at the age of 43.
Crasher Squirrel – A photograph by Melissa Brandts of a squirrel which popped up into a timer-delayed shot of Brandts and her husband while vacationing in Banff National Park, Canada, just as the camera went off. The image of the squirrel has since been added into numerous images on the Internet.
CSI: Miami Puts on Sunglasses – The cold opening for nearly all CSI: Miami episodes ended with star David Caruso as Horatio Caine, in the initial stages of an investigation, putting on his sunglasses and making a quip or pun related to the crime, before the show hard cut to the opening credits, played against the scream of "Yeah!" in The Who's "Won't Get Fooled Again". Image macros of Caruso putting on sunglasses, or similar images for other fictional characters, and the introductory scenes of the CSI: Miami opening became frequent, typically used as response to other puns made on user forums or with the puns and the following "YEAH!" incorporated into the image macro.
Cursed images – Images (usually photographs) that are perceived as odd or disturbing due to their content, poor quality or both.
Dat Boi – An animated GIF of a unicycling frog associated with the text "here come dat boi!" that began on Tumblr in 2015 before gaining popularity on Twitter in 2016.
DashCon Ball Pit – A convention held in July 2014 by users of Tumblr that "imploded" due to a number of financial difficulties and low turnout. During the convention, a portable ball pit was brought into a large empty room. When some premium panels were cancelled, the attendees were offered an extra hour in the ball pit as compensation. The implosion and absurdity of aspects like the ball pit quickly spread through social media.
DALL-E – A web-based program introduced in 2022 that uses artificial intelligence to construct an array of images from a text prompt. The resulting images, often shared across social media, can range from humorous, to uncanny, to near-perfect results.
Distracted boyfriend – A stock photograph taken in 2015 which went viral as an Internet meme in August 2017.
Dog shaming – Originating on Tumblr, these posts feature dogs photographed with signs explaining what antics they recently got up to.
Doge – Images of dogs, typically Shiba Inus, overlaid with simple phrases in deliberately broken English, usually set in the Comic Sans MS font, which gained popularity in late 2013. The meme saw an ironic resurgence towards the end of the decade, and was recognised by multiple media outlets as one of the most influential memes of the 2010s. The meme has also spawned Dogecoin, a form of cryptocurrency.
Don't talk to me or my son ever again – Images of a subject, be they product or individual, pictured with a smaller version of themself, captioned with the text "don't talk to me or my son ever again". Popular in 2016.
The Dress – An image of a dress posted to Tumblr that, due to how the photograph was taken, created an optical illusion where the dress would either appear white and gold, or blue and black. Within 48 hours, the post gained over 400,000 notes and was later featured on many different websites.
Ecce Homo / Ecce Mono / Potato Jesus – An attempt in August 2012 by a local woman to restore Elías García Martínez's aging fresco of Jesus in Borja, Spain led to a botched, amateurish, monkey-looking image, leading to several memes.
Every time you masturbate... God kills a kitten – An image featuring a kitten being chased by two Domos, and has the tagline "Please, think of the kittens".
First World problems – A stock image of a woman crying with superimposed text mocking people with trivial complaints compared to that of issues in the Third World.
Floppa – A collection of images portraying caracals, or one specific caracal named Goshe, Shlepa or, most commonly, Big Floppa. The images do not follow a single theme, but always feature Floppa as the centerpiece or as the personification of something.
Goatse.cx – A shock image of a distended anus.
Grogu – The popularity of the TV series The Mandalorian led to many memes of the "Baby Yoda" character.
Grumpy Cat – A cat named Tardar Sauce who appeared to have a permanent scowl on her face due to feline dwarfism, according to her owner. Pictures of the cat circulated the Internet, leading her to win the 2013 Webby for Meme of the Year, and her popularity led to her starring in a feature film. Tardar Sauce died on 14 May 2019.
Hide the Pain Harold – A Hungarian electrical engineer named András Arató became a meme after posing for stock photos on the websites iWiW and Dreamstime. He initially wasn't very happy with his popularity, but has grown to accept it, noting he had done similar things when he was younger, such as drawing on portraits of the Hungarian poet János Arany to make him look like a pirate. The meme depicts photos of Arató smiling, while viewers believe the smile masks serious sorrow and pain, hence the name "Hide the Pain Harold".
Homophobic dog – A series of images of a white dachshund accompanied by homophobic captions, such as "not too fond of gay people" and "let's hope it's just a phase". According to the dog's owners, a gay couple, most of those memes were made and shared by members of the LGBTQ community to mock homophobic people. A fake Washington Post headline describing the dog as "the new face of online homophobia" was criticized by Christina Pushaw, press secretary to Florida Governor Ron DeSantis, who was unaware that it was not a real article.
Hurricane Shark or Street Shark, a recurring hoax circulated after a variety of natural disasters, appearing to show a shark swimming in a flooded urban area, usually after a hurricane. Several images have been used, most often one of a freeway that first appeared during Hurricane Irene in 2011. However, a 2022 video of a shark or other large fish swimming in Hurricane Ian's floodwaters in Fort Myers, Florida, proved to be real, itself becoming part of the phenomenon and leading to phrases like "Hurricane Shark is real".
Instagram egg – A photograph of an egg posted to Instagram, which formerly held the record for the most-liked post both on the platform and on any social media site.
Islamic Rage Boy – A series of photos of Shakeel Bhat, a Muslim activist whose face became a personification of angry Islamism in the western media. The first photo dates back to his appearance in 2007 at a rally in Srinagar, the capital of Indian-administered Kashmir. Several other photos in other media outlets followed, and by November 2007, there were over one million hits for "Islamic Rage Boy" on Google and his face appeared on boxer shorts and bumper stickers.
Keep Calm and Carry On – A phrasal template or snowclone that was originally a motivational poster produced by the UK government in 1939 intended to raise public morale. It was rediscovered in 2000, became increasingly used during the 2009 global recession, and has spawned various parodies and imitations.
Listenbourg – An image of a photoshopped map of Europe with a red arrow pointing to the outline of a fictional country adjacent to Portugal and Spain.
Little Fatty – Starting in 2003, the face of Qian Zhijun, a student from Shanghai, was superimposed onto various other images.
Lolcat – A collection of humorous image macros featuring cats with misspelled phrases, such as "I Can Has Cheezburger?". The earliest versions of LOLcats appeared on 4chan, usually on Saturdays, which were designated "Caturday", as a day to post photos of cats.
Manul – A Russian meme that was introduced in 2008. It is typically an image macro with a picture of an unfriendly and stern-looking Pallas's cat (also known as a manul) accompanied by a caption in which the cat invites you to pet it.
McKayla is not impressed – A Tumblr blog that went viral after taking an image of McKayla Maroney, the American gymnast who won the silver medal in the vault at the 2012 Summer Olympics, on the medal podium with a disappointed look on her face, and photoshopping it into various "impressive" places and situations, e.g. on top of the Great Wall of China and standing next to Usain Bolt.
Nimoy Sunset Pie – A Tumblr blog that posted mashups combining American actor Leonard Nimoy, sunsets, and pie.
O RLY? – Originally a text phrase on Something Awful, and then an image macro done for 4chan. Based around a picture of a snowy owl.
Oolong – Photos featured on a popular Japanese website of a rabbit that is famous for its ability to balance a variety of objects on its head.
Pepe the Frog – A cartoon frog character from a 2005 web cartoon became widely used on 4chan in 2008, often with the phrase "feels good man". In 2015, the New Zealand government accepted proposals for a new national flag and a flag with Pepe, known as "Te Pepe", was submitted.
Seriously McDonalds – A photograph apparently showing racist policies introduced by McDonald's. The photograph, which is a hoax, went viral, especially on Twitter, in June 2011.
Spider-Man Pointing at Spider-Man – An image of the episode "Double Identity" of the 1967 TV series Spider-Man where the character Spider-Man and a criminal with the same costume point at each other. It is often used online when a person coincidentally acts or looks like another person. The meme was referenced in the post-credit scene of Spider-Man: Into the Spider-Verse and a real-life version with three Spider-Man actors – Tom Holland, Andrew Garfield and Tobey Maguire – was tweeted by Marvel to announce the release of Spider-Man: No Way Home on 4K UHD and Blu-ray.
Stonks – An image featuring Meme Man in a suit against an image of the stock market, used to highlight or satirize absurd topics related to finance or the economy.
Success Kid – An image of a baby clenching his fist with a determined look on his face.
Trash Doves – A sticker set of a purple bird for iOS, Facebook Messenger, Facebook comments, and other messaging apps created by Syd Weiler. The animated headbanging pigeon from the sticker set went viral first in Thailand and then globally on social media.
Tron Guy – Jay Maynard, a computer consultant, designed a Tron costume, complete with skin-tight spandex and light-up plastic armor, in 2003 for Penguicon 1.0 in Detroit, Michigan. The Internet phenomenon began when an article was posted to Slashdot, followed by Fark, including images of this costume.
Vancouver Riot Kiss – An image supposedly of a young couple lying on the ground kissing each other behind a group of rioters during the riots following the Vancouver Canucks' Stanley Cup loss to the Boston Bruins on 15 June 2011. The couple, later identified as Australian Scott Jones and local resident Alexandra Thomas, were not actually kissing; Jones was consoling Thomas after she had been knocked down by a police charge.
Wojak – Also known as "Feels Guy", a bald male character with a sad expression on his face, often used as a reaction image to represent feelings such as melancholy, regret or loneliness. It has been used to convey different feelings by means of memetic transformation and modification into many various unique forms, all with different meanings. Some represent specific ideas or roles in certain situations, such as the NPC meme, which mocks supposed groupthink and a lack of individuality among a group of people. It has also spawned many derived characters, all based on the original but used to represent different emotions.
Woman yelling at a cat – A screenshot of The Real Housewives of Beverly Hills cast members Taylor Armstrong and Kyle Richards, showing Armstrong shouting and pointing, paired with a photo of a confused-looking cat (identified as Smudge) sitting behind a table with food. The meme emerged in mid-2019, when Twitter users combined the two photos with captions that framed them as an argument between the angry woman and the cat.
Worst person you know – A satirical article by ClickHole with a picture of Josep Maria García.
Wood Sitting on a Bed – An image of a nude man sitting on a bed that gained notoriety at the beginning of the COVID-19 pandemic.
"You are not immune to propaganda." – A glitch art representation of Garfield, with the caption "You are not immune to propaganda" surrounding it.
What the fuck did I just read? – Two side-by-side portraits of English lexicographer Samuel Johnson which indicate bewilderment.
Music
The Most Mysterious Song on the Internet – A song recorded on an audio cassette off German radio in the early 1980s, the artist and song title of which remained unknown for many years, despite efforts by devoted internet sleuths who have attempted to identify the band. In November 2024, the song was finally identified as "Subways of Your Mind" by the German band FEX.
"Sigma Boy" – A song by Russian bloggers 11-year-old Betsy and 12-year-old Maria Yankovskaya. German TikToker Streichbruder (@simonbth1) started a trend in which he put the song on at full volume in public transport. It was part of a larger trend where bloggers go to a public place and blast silly songs that you would normally be ashamed of listening to in front of other people.
People
Krzysztof Kononowicz – Polish man who became a phenomenon of the Polish Internet in 2006 after appearing in the debate of candidates for the president of Białystok.
Meme Man – Fictional character often featured in surreal memes, depicted as a 3D render of a smooth, bald, and often disembodied and blue-eyed male head.
Salt Bae – Turkish chef and restaurateur Nusret Gökçe earned fame in 2017 for his camera-friendly approach to preparing and seasoning meat, including a video in which he sprinkles salt, sparkling in the sunlight, onto a steak. Gökçe's approach has been compared to dinner theater, in that his actual finished product is secondary to the performance.
Hide the Pain Harold – Hungarian model András István Arató became the subject of a meme in 2011, due to his seemingly fake smile as the model in stock images.
Politics
Arrest of Vladimir Putin – A viral video showing the mock arrest of Vladimir Putin and his trial.
Barack Obama vs. Mitt Romney – A fictitious rap battle between 2012 election candidates Barack Obama and Mitt Romney. As of October 2020, the video has over 150 million views.
Bernie or Hillary? – A political poster that compares the positions of Hillary Clinton and Bernie Sanders on certain issues. It was typically used by Sanders supporters to make fun of Clinton's attempts to seem relatable to the voter base while they perceived Sanders to be more knowledgeable and in-depth on the issues. However, some critiqued the meme by saying that it played into sexist stereotypes.
Joe Biden – There are numerous iterations of President Joe Biden as a meme. The portrayal of Biden in The Onion was popular on the Internet and influenced other memes about him, as well as his broader public image. After Donald Trump won the 2016 U.S. presidential election, images of Biden as the "Biden Bro" or "Prankster Joe Biden" began circulating online. In these memes, Biden was paired with Barack Obama and captioned with various fictional conversations planning pranks and jokes on the president-elect. Biden is portrayed as the immature prankster of the duo, with Obama as his exasperated straight man.
Bush shoeing incident – During a press conference in 2008, Muntadhar al-Zaidi threw both of his shoes at then-president George W. Bush. Afterwards, various Flash-based browser games and GIFs were created to poke fun at the incident.
Crush on Obama – A music video featuring Amber Lee Ettinger that circulated during the 2008 United States presidential election. The video and its sequels caught the attention of bloggers, mainstream media, and other candidates, and it achieved 12.5 million views on YouTube by 1 January 2009.
Dean scream – Former Governor of Vermont Howard Dean's concession speech following the 2004 New Hampshire Democratic primaries included Dean rattling off a list of states in escalating volume as crowd noise rose, resulting in increasingly distorted audio and culminating in an unusual "yeehaw" scream. It was one of the first political Internet memes.
Delete your account – A phrase used on Twitter to dismiss the opinions of opponents. On 9 June 2016, Hillary Clinton tweeted the phrase at Donald Trump; it became her most retweeted tweet of all time.
Don't Tase Me, Bro! – An incident at a campus talk by Senator John Kerry where a student yelled his now-infamous phrase while being restrained by police.
Eastwooding – After Clint Eastwood's speech at the 2012 Republican National Convention, in which he spoke to an empty chair representing President Barack Obama, photos were posted by users on the Internet of people talking to empty chairs, with various captions referring to the chair as either Obama or Eastwood.
"Epstein didn't kill himself" – A bait-and-switch joke originating on the app iFunny in October 2019, two months after his death in August. Many memes alleged involvement of Donald Trump, Hillary Clinton, or other notable figures. The meme saw mainstream popularity in late 2019, being unexpectedly snuck into cable news interviews by guests such as on FOX News and MSNBC. It was also referenced by Ricky Gervais at the 77th Golden Globe Awards due to the alleged connections between Epstein and people in the Hollywood film industry.
Forest raking – After U.S. President Donald Trump commented that Finland spent "a lot of time on raking and cleaning" its forest floor, Finns began circulating satirical images of themselves raking forests to stop wildfires.
Jesusland map – A map created shortly after the 2004 U.S. presidential election that satirizes the red/blue states scheme by dividing the United States and Canada into "The United States of Canada" and "Jesusland".
Kekistan – A fictional country created by 4chan members that has become a political meme and online movement used notably by the alt-right.
Ladies and Gentlemen, We Got Him – A quote said by American diplomat Paul Bremer during a 2003 press conference announcing the capture of Saddam Hussein. The scene, coupled with audio from the Breakbot song "Baby I'm Yours", began to be widely used with clips of people being apprehended or caught off-guard in some fashion, often in the context of FBI operations.
Miss Me Yet? – Billboards that appeared on American highways in early 2010 that featured George Bush asking "Miss me yet?". Inspired a series of themed merchandise from online agencies such as CafePress.
Mug shot of Donald Trump – The first mug shot ever taken of a current or former U.S. president, taken when Donald Trump was booked in August 2023.
Series of tubes – A phrase originally coined as an analogy by Senator Ted Stevens to describe the Internet in the context of opposing network neutrality. His statement was later remixed on YouTube and YTMND.
Strong – A political advertisement issued by Texas Governor Rick Perry's presidential campaign in December 2011 for the 2012 Republican Party presidential primaries. The video was widely parodied and became one of the most disliked videos on YouTube.
Ted Cruz–Zodiac meme – A mock conspiracy theory suggesting that American Senator and Presidential candidate Ted Cruz was the Zodiac Killer, an unidentified Californian serial killer of the late 1960s and early 1970s.
Thanks Obama – A sarcastic expression used by critics of President Barack Obama to blame personal troubles and inconveniences on public policies supported or enacted by the administration.
This Land – A Flash animation produced by JibJab featuring cartoon faces of George W. Bush and John Kerry singing a parody of "This Land Is Your Land" that spoofs the 2004 United States presidential election. The video became a viral hit viewed by over 100 million people, leading to the production of other JibJab hits, including Good to Be in D.C. and Big Box Mart.
"Running through fields of wheat" – In 2017, then UK Prime Minister Theresa May was asked by interviewer Julie Etchingham what the "naughtiest thing" she had done as a child was. May responded that she and her friend "used to run through the fields of wheat", something "the farmers weren't too pleased about". The statement became the subject of mockery and a meme.
Winnie the Pooh comparison to Xi Jinping – In 2013, a still image of Chinese leader Xi Jinping meeting with U.S. President Barack Obama was compared to an image of Winnie the Pooh and Tigger. As comparisons of Xi to Pooh persisted, the Chinese government tightened its censorship to suppress the trend. The comparisons are not limited to internet users in China; the phenomenon has also been reported in the Philippines.
Other phenomena
"And I oop" – A video of drag queen Jasmine Masters stopping a story to say the phrase "and I oop" after accidentally hitting himself in the testes.
April the Giraffe – A reticulated giraffe who had two of her live births streamed on the Internet to much fanfare.
"Banana for scale" – An internet meme that became popular for humorously measuring lengths of various objects. In this internet phenomenon, other objects juxtaposed with a banana are accompanied with the text "banana for scale".
Ben Drowned – A self-published three-part multimedia ARG web serial and web series inspired by creepypasta and The Legend of Zelda: Majora's Mask, created by Alexander D. Hall.
Binod – An internet fad which became popular in India in 2020. It originated from a YouTube comment consisting only of the word 'Binod', posted by a user of that name, and was followed by a video by Slayy Point mocking "Binod" and YouTube comment sections in general. People started spamming the word 'Binod' across social media, primarily in YouTube comments and stream chats. A number of organisations also posted memes, including Netflix India, Twitter and Tinder. Paytm temporarily changed its Twitter name to 'Binod'.
Brad's Wife – On 27 February 2017, Brad Byrd of Harrison County, Indiana posted on Cracker Barrel's Facebook page, asking them why they fired his wife, Nanette, after 11 years of service. The intense and serious nature of the post drew viral attention, and internet users began semi-sarcastically demanding answers, using hashtags such as #BradsWife and #JusticeForBradsWife. This meme was notable for being popular with baby boomers as well as younger internet users. After the post was about a week old, several corporations jumped on the viral bandwagon and began to publicly send job offers to Nanette Byrd.
Cats on the Internet – Images of cats are very popular on the Internet, and have seen extensive use in internet memes, as well as some cats becoming Internet celebrities.
Chuck Norris facts – Satirical factoids about martial artist and actor Chuck Norris that became popular culture after spreading through the Internet.
Creepypasta – Urban legends or scary stories circulating on the Internet, many times revolving around specific videos, pictures, or video games. The term "creepypasta" is a mutation of the term "copypasta": a short, readily available piece of text that is easily copied and pasted into a text field. "Copypasta" is derived from "copy/paste", and in its original sense commonly referred to presumably initially sincere text (e.g. a blog or forum post) perceived by the copy/paster as undesirable or otherwise preposterous, which was then copied and pasted to other sites as a form of trolling. In the pre-Internet era, such material regularly circulated as faxlore.
Dicks out for Harambe – A slogan that was popularized months after the death of Harambe, a gorilla in a Cincinnati zoo, which could be interpreted as telling individuals to expose their penises in public in honor of the gorilla (although the word "dicks" here is slang for guns). The line was notably uttered by actor Danny Trejo.
DignifAI – A 4chan-linked campaign to use AI tools to make women in photos look more modestly dressed. The trend is the opposite of deepfake pornography in that it is used to add clothes rather than remove them, and it has been used as a form of slut-shaming.
Dumb Ways to Die – A 2012 Metro Trains Melbourne safety campaign that became popular on the Internet in November 2012.
Elsagate – A controversy surrounding children's YouTube videos in the late 2010s and 2020s.
Florida Man – Crimes involving bizarre behavior, perpetrated by men from the state of Florida.
Freecycling – The exchange of unwanted goods via the Internet.
Gabe the Dog – Gabe was a miniature American Eskimo dog owned by YouTube user gravycp. In January 2013, gravycp uploaded a short video of Gabe barking. The footage itself never went viral though it was used in dozens of song remixes, some of which accrued up to half a million views.
Get stick bugged lol – A video clip of a stick insect swaying, used in a bait-and-switch meme similar to Rickrolling: an otherwise unrelated video unexpectedly transitions to the clip, revealing the stick insect alongside the caption "Get stick bugged LOL".
Get Out of My Car – An animated video created by Psychicpebbles, which uses the real audio of a man yelling at a woman to get out of his car.
Have You Seen This Man? – A viral website that emerged on the Internet in the late 2000s, claiming to gather data about a mysterious figure only known as This Man that appears in dreams of people who never saw him before.
Horse ebooks / Pronunciation Book – A five-year-long viral marketing alternate reality game for a larger art project developed by Synydyne. "Horse_ebooks" was a Twitter account that seemed to promote e-books, while "Pronunciation Book" was a YouTube channel that provided ways to pronounce English words. Both accounts engaged in non-sequiturs, leading some to believe they were run by automated services. Pronunciation Book shifted to pronouncing numerals in a countdown fashion in mid-2013, concluding in late September 2013 with the reveal of the connection to Horse_ebooks, of Synydyne's identity behind both accounts, and of their next art project.
Hou De Kharcha – A meme in Marathi.
Unregistered HyperCam 2 – The watermark displayed in the upper-left corner of footage recorded with free versions of the HyperCam 2 screen capture software developed by Hyperionics, Inc. The software was widely used for screen-recording YouTube videos from the late 2000s to the early 2010s, and was frequently used in the production of tutorial videos and Club Penguin gameplay. Videos with the watermark were often accompanied by "Trance" or "Dreamscape" by 009 Sound System.
I am lonely will anyone speak to me – A thread created on MovieCodec.com's forums, which has been described as the "Web's Top Hangout for Lonely Folk" by Wired magazine.
Johnny Johnny Yes Papa – A children's nursery rhyme series.
Ligma joke – A meme in which a fictitious disease, "ligma", is used to set up a crude punchline.
Most Awesomest Thing Ever – A defunct website that randomly paired two objects, celebrities, or activities and asked viewers to pick their favourite, with the ultimate goal of determining what viewers considered the "most awesomest". At the website's closure in 2022, teleportation was ranked number 1.
Netflix and chill – An English language slang term using an invitation to watch Netflix together as a euphemism for sex, either between partners or casually as a booty call. The phrase has been popularized through the Internet.
Omission of New Zealand from maps – New Zealand is often excluded from world maps, which has caught the attention of New Zealander users on the Internet.
One red paperclip – The story of a Canadian blogger who bartered his way from a red paperclip to a house in a year's time.
Planking – Also known as the Lying Down Game. An activity consisting of lying in a face down position, with palms touching the body's sides and toes touching the ground, sometimes in bizarre locations. Some compete to find the most unusual and original location in which to play.
Reality shifting – A mental phenomenon similar to lucid dreaming or maladaptive daydreaming that appeared on TikTok, in which practitioners believe they travel to alternate realities, usually fictional (for example the Wizarding World of the Harry Potter franchise).
Rickrolling – An Internet prank in which a video unexpectedly plays the music video for "Never Gonna Give You Up" by Rick Astley instead of what was advertised.
Savage Babies – also known as the Most Savage Babies in Human History, a meme popular in 2016 that uses clips from the Indian children's YouTube channel VideoGyan 3D Rhymes, namely their series of nursery rhymes "Zool Babies". The videos are heavily distorted and given edgy, ironic titles that exaggerate the meaning of the video, such as "Five Little Babies Dressed as Pilots" becoming "Savage Babies Cause 9/11".
SCP Foundation – A creative writing website that contains thousands of fictitious containment procedures for paranormal objects captured by the in-universe SCP Foundation, a secret organization tasked with securing and documenting objects that violate natural law and/or pose a threat to humanity's perception of normalcy and further existence. The website has inspired numerous spin-off works, including a stage play and video games such as SCP – Containment Breach.
Siren Head – A fictional cryptid which has an air raid siren as a head, created by horror artist Trevor Henderson. It has accumulated a fan following which has spawned numerous pieces of fan works and fan-made video games. Many video edits have depicted Siren Head playing various songs over a populated area. Siren Head has been erroneously recognized as an SCP, most notably when the character was briefly submitted to the SCP Foundation Wiki as SCP-6789; the entry was removed after Henderson and site users expressed intention to keep Siren Head independent of the SCP Foundation Wiki. Another entry, SCP-5987, was inspired by the character name and the controversy from the deleted entry.
Smash or Pass – A game in which players decide whether they would hypothetically "smash" (have sex with) someone or "pass" (choose not to).
Spiders Georg – A meme which imagines that the (untrue) statistic that the "average person eats 3 spiders a year" is the result of a statistical error caused by the incorporation of "Spiders Georg", a fictional character who resides in a cave and eats over ten thousand spiders every day, into the study from which this conclusion was drawn. The meme originated with a Tumblr post by user Max Lavergne, and has inspired many derivative works about the character. Variations of the meme have imagined other characters named "Georg" to explain other real or imagined statistics and beliefs.
Steak and Blowjob Day – A meme suggesting that a complementary holiday to Valentine's Day, primarily for men, takes place on 14 March each year.
Storm Area 51 – A joke event created on Facebook to "storm" the highly classified Area 51 military base, with over 1,700,000 people claiming to be attending and another 1,300,000 claiming they were "interested" in going. 1,500 people arrived in the vicinity of Area 51 the day of the event, 20 September 2019, only one of whom actually breached the boundary and was quickly escorted off the premises.
Slender Man or Slenderman – A creepypasta meme and fakelore urban legend created on 8 June 2009 by user Victor Surge on Something Awful, as part of a contest to edit photographs to contain "supernatural" entities and pass them off as legitimate on paranormal forums. The Slender Man gained prominence as a frightening malevolent entity: a tall, thin man wearing a suit, with a blank, white, featureless face. After the initial creation, numerous stories and videos were created by fans of the character. Slender Man was adapted into a video game in 2012 and became more widely known; a film adaptation was released in 2018 to negative reviews.
Surreal memes – A type of meme that is artistically bizarre in appearance and whose humor derives from its absurd style. Certain qualities and characters, such as Meme Man, Mr. Orange, and a minimalist style, are frequent markers of the genre.
The Million Dollar Homepage – A website conceived in 2005 by Alex Tew, a student from Wiltshire, England, to raise money for his university education. The home page consists of a million pixels arranged in a 1000 × 1000 pixel grid. The image-based links on it were sold for $1 per pixel in 10 × 10 blocks.
Three Wolf Moon – A t-shirt with many ironic reviews on Amazon.
Throwback Thursday – The trend of posting older, nostalgic photos on Thursdays under the hashtag #ThrowbackThursday or #TBT.
The Undertaker vs. Mankind – A copypasta in which the wrestling match is unexpectedly referenced at the end of a comment on an unrelated topic.
Vibe Check – Generally described as a spiritual evaluation of a person's mental and emotional state.
Vuvuzelas – The near-constant playing of the buzz-sounding vuvuzela instrument during games of the 2010 World Cup in South Africa led to numerous vuvuzela-based memes, including YouTube temporarily adding a vuvuzela effect that could be added to any video during the World Cup.
Willy's Chocolate Experience – An unlicensed event based on the Charlie and the Chocolate Factory franchise held in Glasgow, Scotland. Due to misleading AI-generated advertisements and its sparsely decorated warehouse location, images of the event went viral. Notable viral images include a dispirited woman dressed as an Oompa-Loompa and an original character called "The Unknown".
Yanny or Laurel – An audio illusion where individuals hear either the word "Yanny" or "Laurel".
YouTube poop – Video mashups in which users deconstruct and piece together video for psychedelic or absurdist effect.
See also
List of Internet phenomena in China
List of Internet phenomena in Pakistan
Cats and the Internet
Index of Internet-related articles
Internet culture
Internet meme
Know Your Meme
List of YouTubers
Outline of the Internet
Urban legends and myths
Usenet personality
Viral phenomenon
Notes
References
Phenomena
Urban legends
Lists of phenomena | List of Internet phenomena | [
"Technology"
] | 18,956 | [
"Computing-related lists",
"Internet-related lists"
] |
597,751 | https://en.wikipedia.org/wiki/Monica%20and%20Friends | Monica and Friends (Portuguese: Turma da Mônica), previously published as Monica's Gang in Anglophone territories and as Frizz and Friends in London, is a Brazilian comic book series and media franchise created by Mauricio de Sousa.
The series originated in a comic strip first published by the newspaper Folha da Manhã in 1959, in which the protagonists were Blu (Bidu) and Franklin (Franjinha); in the following years, however, the series was shaped towards its current identity with the introduction of new characters such as Monica (Mônica) and Jimmy Five (Cebolinha), who became the new protagonists. The stories revolve around a group of children who live in a fictional neighborhood in São Paulo known as Lemon Tree District (Bairro do Limoeiro), centered on a street of the same name, Lemon Tree Street (Rua do Limoeiro), where Monica and her many friends live. The setting was inspired by the neighborhood of Cambuí in Campinas and the city of Mogi das Cruzes, where Mauricio spent his childhood.
Although the title of the franchise mainly refers to the core group of children who live on Lemon Tree Street, it is also used as an umbrella title encompassing other works created by Mauricio throughout his career, such as Chuck Billy 'n' Folks, Tina's Pals, Lionel's Kingdom, Bug-a-Booo, The Cavern Clan, Bubbly the Astronaut, Horacio's World, The Tribe, and others, since stories from these series are frequently published in comics focused on characters such as Monica, Jimmy Five, Smudge, Maggy and Chuck Billy. Since 1970, the characters have appeared in comic books from publishers such as Abril (1970-1986), Globo (1987-2006) and Panini Comics (2007-present), totaling almost 2,000 issues published for each character.
The English title of the series was later changed to Monica and Friends. The characters and comics were subsequently adapted into, among other media, an animated television series as well as films, most of which are anthologies.
In 2008, a spin-off series, Monica Teen, was created in a manga style and features the characters as teenagers.
Monica is considered the most well-known comic book character in Brazil. In 2015 alone, the characters were used on three million products for over 150 companies. Nowadays the comics are sold in 40 countries in 14 languages.
Publication history
In 1959, Maurício de Sousa, then a reporter for Folha da Manhã, decided to enter the country's hotly contested comics field. That same year he created his first characters, Blu and Franklin, and decided that Blu would be the protagonist. Both were based on his own childhood: Franklin on Maurício himself and Blu on his pet dog Cuíca. Monica and Friends drew its main inspiration from American comics such as Peanuts and Little Lulu, which inspired recurring themes like the boys' club. The following year, 1960, the characters gained ground in the children's magazine Zaz Traz from publisher Editora Outubro, and later received their own comic, titled Bidu, from Editora Continental. However, the magazines were canceled that same year.
The characters then returned to newspaper strips; Jimmy Five, who had won great popularity in the earlier magazines, became the protagonist of his own newspaper strip alongside Blu and Franklin in 1961. Seeing potential in the character, Mauricio went on to create several supporting characters for the Jimmy Five strips, such as Smudge and Specs. After a while, Maurício received complaints about the lack of female characters in his comics. To address this, he created Monica in 1963, initially as a supporting character in the Jimmy Five strips (at first as Specs' little sister), based on his real daughter Mônica Sousa. Over time, the character's success and charisma made her the protagonist of the series, with Jimmy Five becoming her sidekick. Also in 1963, Mauricio began new comic-strip projects featuring characters unconnected to Monica and Jimmy Five, such as Zezinho and Hiroshi (now called Chuck Billy 'n' Folks), The Cavern Clan, Bug-a-Booo, The Tribe and Raposão (now called Lionel's Kingdom).
The characters returned to monthly comic books in 1970, published by Abril, initially under the title "Mônica e Sua Turma" (Monica and Her Gang), later changed to "Mônica", with "Turma da Mônica" (Monica's Gang) used only for merchandising. Many of the characters Mauricio had created for newspaper strips also began to appear in the Monica's Gang comics. Comics publishing in Brazil was highly competitive at the time, with many Brazilian artists trying to hold their own on the newsstands alongside American comics such as Donald Duck, José Carioca, Little Lulu, and many others. Even so, Monica's Gang maintained strong newsstand sales, and Jimmy Five received a solo comic book three years later. Sales were strong, and a contract was signed with the footballer Pelé to launch a character based on him, Pelezinho, who became a phenomenon among children at the time and a landmark in the history of Brazilian comics. Pelezinho was one of the few black characters Maurício had created, and his original design was unflattering to the black community, with a pale circle around the character's mouth to represent his lips, a device made infamous by blackface depictions of African people. In late 2013, however, Pelezinho's design was updated to reflect more modern sensibilities.
The staff of cartoonists grew, leading to the foundation of Estúdios Maurício de Sousa, which produces comics featuring the characters created by Mauricio de Sousa, as well as marketing projects and cartoons. The first attempts at a Monica and Friends cartoon came in the late 1960s through a deal with the food company Cica, which produced television commercials; these commercials originated the character Thunder, currently a main character in the Lionel's Kingdom stories. The first film, As Aventuras da Turma da Mônica, was only released in 1982, produced in partnership with Black & White & Color and distributed by Embrafilme.
Over the years, other characters gained their own comic books, such as Smudge (1982), Chuck Billy (1982) and Maggy (1989). Other media strengthened over the years, and Monica-branded merchandise was launched, including books, toys, records, CD-ROMs and video games. A Monica-themed amusement park, "Parque da Mônica" (Monica Park), was created in São Paulo in 1993. In 2008 the franchise received a manga-style comic entitled Monica Teen.
Characters
The Monica and Friends series has an extensive cast of main and secondary characters. Its main protagonists are Monica, Jimmy Five, Smudge and Maggy, each of whom has their own comic book; Milena was recently added as the fifth main character of the group. Characters from other series created by Mauricio de Sousa are also included in Monica and Friends, appearing in crossovers or references in several stories. The main setting of the stories is the fictional Lemon Tree Street (Rua do Limoeiro).
Most stories focus on the daily lives of the main characters and occasionally on the secondary characters; the humour usually relies on repetition, allusion, nonsense, paronomasia (wordplay), sarcasm and metalanguage. The stories with Monica and Jimmy Five revolve around the conflict between the two. Jimmy Five is a troublemaker and bully who always tries to taunt Monica or steal her stuffed bunny to tie knots in its ears (usually with Smudge or another boy as an accomplice), with Monica always getting her revenge by hitting him with her stuffed bunny, often leaving him bruised and with black eyes. Jimmy Five often devises plans against her involving various traps, sometimes using Franklin's inventions or talking Smudge into helping, but he always loses to Monica in the end.
Smudge's stories often focus on his penchant for dirt and mess and his fear of water: he has never bathed in his life, is constantly pressured by villains or his friends to take a shower, and always manages to stay dry and dirty. The stories with Maggy generally focus on her gluttony, with a superhuman ability to eat far more than a normal person without ever getting fat, sometimes stealing food from her friends.
Among the villains are Captain Fray, a supervillain with the power to control garbage and dirt, and Lord Raider, a space rabbit who first appeared in the films. The stories often break the fourth wall for comic effect.
Related works
Other works related to the series include:
Thunder - Based on the first character of Monica and Friends, it shows the character Blu, a sentient blue dog of the Schnauzer breed, working as an actor in a comics studio.
Chuck Billy 'n' Folks - Focuses on the comic daily life of Chuck Billy, a lazy farmer boy, and his friends who live on a farm in Brazil's countryside. He is the character who most often appears alongside Monica and Friends, and he has his own spin-off comic book series. Like Monica and Friends, the series received a manga-style spin-off comic in 2013 called Chico Bento Moço (Chuck Billy Young Man).
Bug-a-Booo - Focuses on comedy stories involving monsters from horror films (ghosts, vampires, werewolves, Frankenstein and even Death) in a cemetery.
Tina's Pals - Follows the daily lives of a group of college students as they date, attend college, and deal with other issues. In 2014, it received its own spin-off comic book series.
The Funnies - Focuses on the comic adventures of Bubbly, an astronaut, as he explores space.
Lionel's Kingdom - Focuses on the daily lives of a group of anthropomorphic animals of different species in a forest led by a lion king.
The Cavern Clan - Focuses on Pitheco, a caveman inventor living in prehistory who tries to revolutionize humanity with his inventions.
Horacio's World - Focuses on the life of a young philosopher, the dinosaur Horacio, and his friends.
The Tribe - Focuses on an Amazonian native, Tom-Tom, who lives in a tropical forest constantly threatened by white men.
Media
Publications
Monica and Friends and its related works are released in a number of different books. They were first published by Editora Abril (1970 to 1986), then by Editora Globo (1987 to 2006); since 2007, Panini Comics has handled the publications. There are comic books starring many characters; among the best known and best selling are those of Monica, Jimmy Five, Smudge, Maggy and Chuck Billy, plus almanacs republishing classic stories with various characters. In 2015, a smartphone app was released gathering more than 500 issues of the franchise's comics for download.
Television and film adaptations
The characters of Monica and Friends are also the protagonists of what can be considered the first Brazilian animated series. After the characters were introduced on television as advertising mascots in TV advertisements from the mid-1960s, complete stories began to be produced in 1976 and were distributed as film compilations during the 1980s and 1990s (initially released in cinemas in the 1980s, then direct-to-video in the 1990s).
In 1999, a series of shorts with the characters was produced for the children's programming of Rede Globo (which would air these episodes only between 2010 and 2014). Regular TV broadcasts of the series began only in 2004, after negotiations with Cartoon Network, which from that year aired new episodes exclusively on the channel; the series also remains on the schedules of sister channels Tooncast (currently airing in the Cartoon Cartoons block) and Boomerang.
In addition to the animations, the characters also starred in two animated feature films, which present longer stories instead of short compilations: "The Princess and the Robot" (1984) and "Monica's Gang in an Adventure in Time" (2007) - as well as two live-action TV and video specials: "Mônica e Cebolinha: No Mundo de Romeu e Julieta" (1979) and "A Rádio do Chico Bento" (1989).
Compilations
As Aventuras da Turma da Mônica (1982)
As Novas Aventuras da Turma da Mônica (1986)
Mônica e a Sereia do Rio (1987)
O Bicho-Papão (1987)
A Estrelinha Mágica (1988, direct-to-video)
Chico Bento, Óia a Onça! (1990, direct-to-video)
O Natal de Todos Nós (1992, direct-to-video)
Quadro a Quadro (1996, direct-to-video)
Videogibi: O Mônico (1997, direct-to-video)
Videogibi: O Plano Sangrento (1998, direct-to-video)
Videogibi: O Estranho Soro do Dr. X (1998, direct-to-video)
Videogibi: A Ilha Misteriosa (1999, direct-to-video)
Coleção Grandes Aventuras da Turma da Mônica (2002, direct-to-video re-releases including new episodes)
Cine Gibi: O Filme (2004)
Cine Gibi 2 (2005, direct-to-video)
Cine Gibi 3: Planos Infalíveis (2008, direct-to-video)
Cine Gibi 4: Meninos e Meninas (2009, direct-to-video)
Cine Gibi 5: Luzes, Câmera, Ação! (2010, direct-to-video)
Se Liga na Turma da Mônica - Volume 1 (2011, direct-to-video)
Se Liga na Turma da Mônica - Volume 2 (2012, direct-to-video)
Cine Gibi 6 - Hora do Banho (2013, direct-to-video)
Cine Gibi 7 - Bagunça Animal (2014, direct-to-video)
Cine Gibi 8 - Tá Brincando? (2015, direct-to-video)
Cine Gibi 9 - Vamos Fazer de Conta! (2016, direct-to-video)
Monica Toy
A series launched to promote the "Monica Toy" product line from Tok & Stok, which features an ultra-stylized version of the Monica characters in a style that evokes toy art and famous franchises like Hello Kitty, Pucca and Gogo's Crazy Bones. There are currently 9 seasons of mini-episodes, each about 30 seconds long, with 2D animation and a format designed for web delivery.
From the episode "Hiccups" (released on July 17, 2013), the series was renamed, with its title shortened to just "Monica Toy". On October 7, 2013, the series premiered on Cartoon Network in an interstitial format, debuting two episodes a week and airing at various times.
Turma da Mônica: Laços
In 2019, in cooperation with Paris Filmes, Globo Filmes and Paramount Pictures, Mauricio de Sousa Produções released the movie Turma da Mônica: Laços in Brazilian cinemas on June 27, directed by Daniel Rezende. The movie was based on the best-selling eponymous graphic novel, written and drawn by siblings Victor Cafaggi and Lu Cafaggi. This was the first Monica and Friends live-action movie in which the main characters are played by real-life children; the previous live-action productions were presented as theater performances using adults in articulated mascot costumes.
The plot of the movie revolves around the relationship of the friends. When Jimmy Five's dog, Fluffy, is abruptly abducted, the children must unite and eventually overcome their differences while facing new adventures on the way so they can bring their loyal friend back home.
The movie stars Giulia Benite as Monica, Kevin Vechiatto as Jimmy Five, Laura Rauseo as Maggy and Gabriel Moreira as Smudge. It was widely praised by Brazilian critics, who commended its balance of cartoonish situations and human conflicts. The movie achieved one of the highest Brazilian box office takings of 2019.
Turma da Mônica: Lições
In 2021, Turma da Mônica: Lições was released in Brazilian cinemas on December 30. It is the sequel to Turma da Mônica: Laços, based on the graphic novel of the same name written and drawn by siblings Victor Cafaggi and Lu Cafaggi. The film follows Monica, Jimmy Five, Maggy and Smudge, who forget to do their homework and run away from school (the 6th or 7th year of elementary school). Not everything goes as expected, and Mônica is transferred to another school, triggering the group's separation. The decision was made by the parents of the four friends, who think they are spending too much time together and that this could harm their responsibilities. Forced apart, they must find new friends and face their own fears and insecurities. The group then has to face the consequences of their decisions, mature, and discover the real value and meaning of the word friendship.
Turma da Mônica – A Série
MSP formed a partnership with Globoplay (Globo's streaming service) to produce a live-action series inspired by the Monica and Friends characters, a sequel to the films Turma da Mônica: Laços and Turma da Mônica: Lições. Titled Turma da Mônica – A Série, it premiered on July 21, 2022 with 8 episodes.
On November 27, 2023, it was renewed for a second season.
Vamos Brincar com a Turma da Mônica
In 2017, the Vamos Brincar com a Turma da Mônica series was announced during Comic Con Experience in São Paulo. In 2019, MSP announced that the series would launch on the Gloob and Gloobinho channels in 2020; the first episodes were ultimately made available on the now-defunct Giga Gloob app on October 12, 2022.
On July 11, 2023, after Giga Gloob was discontinued by Globo, the first 10 episodes were transferred and made available on Globoplay streaming. The second part, with 16 episodes, was released on the platform on October 12, the same day the series debuted on the Gloobinho channel.
Franjinha & Milena: Em Busca da Ciência
A spin-off of the franchise presented by Franklin and Milena, accompanied by the dog Blu. The series was filmed from July 6, 2022 until September of that year. On February 15, 2024, Max (formerly HBO Max) announced that the series would debut on February 27 on its platform and on the Discovery Kids channel.
Video games
In the 1990s, Westone developed three Monica and Friends-based titles for Tectoy for the Master System and Mega Drive, modified from installments in Westone's Wonder Boy series.
In the 1990s, MSP released three CD-ROMs with short stories complemented by minigames: Mônica Dentuça (1995), Cebolinha e Floquinho (1996) and A Roça do Chico Bento (1998). Two CDs for creating comic books with both Monica and Chuck Billy were also released.
In September 2010, Tec Toy announced that it would produce an exclusive game for the Zeebo video game console, Turma da Mônica - Vamos Brincar Nº1. The idea was to create a series of eight puzzle games, but the project was canceled months later with the discontinuation of the console.
In 2012, "Quero Ser da Turma da Mônica", a game where the user can create a digital avatar in the style of the characters, was released for iOS. In 2013 a game was released for iOS and Android called "Coelhadas da Mônica" (Bunny-Bashing Monica), a puzzle game like Angry Birds. And in 2014, the game "Jogo do Cascão" (The Smudge Game) was released as a 2D platform running game with multiple stages.
Monica Park
An amusement park themed around Mauricio de Sousa's characters is located at the Eldorado Shopping mall in São Paulo. It opened in January 1993 and features a number of attractions, such as the "Carrossel do Horácio" (Horacio's merry-go-round, a carousel featuring dinosaurs instead of horses), a 3-D cinema, and many more. Parques da Mônica in Curitiba and Rio de Janeiro were also created, in 1998 and 2001 respectively, but both later closed, in 2000 and 2005 respectively. Until the end of 2006, the park had its own comic book featuring adventures of Monica and her friends at the park; when Panini Comics started publishing Mauricio's works, this comic was replaced by Turma da Mônica (Monica and Friends).
The park was removed from the Eldorado Shopping Center in early 2010, when the mall's administrators reclaimed the space it occupied. Mauricio de Sousa had announced in July 2009 that the mall had requested the area back. The mall's administration said it was seeking more up-to-date alternatives in line with consumers' expectations.
In March 2015, it was announced that the park would be reopened at Shopping SP Market.
Merchandising
In the late 1960s, a line of dolls featuring the characters Monica, Jimmy Five, Thunder, Horacio and Lucinda was manufactured by the Trol company; this was the franchise's first merchandising. The character Thunder became the mascot of a tomato sauce brand from the food company Cica in the late 1960s. An extensive line of toys and other products featuring the characters has been manufactured since the 1970s by various toy companies and remains heavily sold in Brazilian stores to this day.
During the 1980s, the franchise also had its own store network: Lojinha da Mônica and Trenzinho da Mônica had branches in several Brazilian states, selling products related to the characters. In 2013, the network was relaunched in the form of an e-commerce portal.
Food products
In addition to Cica, Mauricio has formed long-lasting partnerships, licensing his characters for a series of food products such as alfajores, pasta and cookies. In 1993, a brand of chocolate bars was launched in partnership with Nestlé, with the characters printed in white chocolate on each bar. Although discontinued after the 1990s, the brand became memorable among many who grew up in that decade, to the point that many fans asked for it to return; a comeback was announced for the end of 2017, but it did not happen.
In 1994, Mauricio formed a partnership with the fruit company Fischer to sell small selected apples to children with his characters printed on the packaging. The brand became popular, to the point of selling 900 thousand tons of packages every month. According to Mauricio, the idea came when he visited a farm that produced smaller apples otherwise used for paste, fertilizer or animal feed, and he decided to market these apples to children.
International distribution
Monica and Friends and related works have been published in 40 countries in 14 languages, including Spanish, Greek and Japanese.
In Germany, the comics were published between 1975 and 1980 under the name Fratz und Freunde, and years later as Monica und Ihre Freunde. In the UK, the comics were published for a short time under the name Frizz and Friends. In Italy, some comic books and classic episodes of the cartoon were distributed on DVD in the 1990s, and the cartoon was broadcast on the Rai Due channel under the title La Banda di Monica. In Indonesia, the series has been published since 1997 under the title Monika dan kawan kawan, alongside the comic books of the characters Jimmi Lima (Jimmy Five) and Ciko Bento (Chuck Billy 'n' Folks). In China, the comics were distributed to schools in 2007, with the cartoon adapted years later; in 2011, one of the albums received a children's literature award.
There were plans for distribution in the United States and some Latin American countries, but they never came to fruition (with the exception of the cartoon itself, which was broadcast dubbed into Spanish in some Latin American countries); however, comics translated into English and Spanish are sold directly in Brazil. Episodes of the cartoon dubbed in English are available on YouTube.
References
External links
Manga adaptation
Brazilian comic strips
Comic book digests
1959 comics debuts
Brazilian comics adapted into films
Comics about parallel universes
Fiction about size change
Metafictional comics
Brazil-exclusive video games
Humor comics
Satirical comics
Gag-a-day comics
Comics adapted into animated series
Comics adapted into television series
Comics adapted into video games | Monica and Friends | [
"Physics",
"Mathematics"
] | 5,245 | [
"Fiction about size change",
"Quantity",
"Physical quantities",
"Size"
] |
597,785 | https://en.wikipedia.org/wiki/Adverse%20possession | Adverse possession in common law, and the related civil law concept of usucaption (also acquisitive prescription or prescriptive acquisition), are legal mechanisms under which a person who does not have legal title to a piece of property, usually real property, may acquire legal ownership based on continuous possession or occupation without the permission (licence) of its legal owner.
It is sometimes colloquially described as squatter's rights, a term associated with occupation without legal title during the westward expansion in North America, as occupying real property without permission is central to adverse possession. Some jurisdictions regulate squatting separately from adverse possession.
Description
In general, a property owner has the right to recover possession of their property from unauthorised possessors through legal action such as ejectment. However, courts in many legal systems recognize that once someone has occupied property without permission for a significant period of time without the property owner exercising their right to recover it, not only is the original owner prevented from exercising their right to eject, but a new title to the property "springs up" in the adverse possessor. In effect, the adverse possessor becomes the property's new owner. Over time, legislatures created statutes of limitations setting a time limit for how long owners have to recover possession of their property from adverse possessors. In the United States, for example, these limitation periods vary widely between individual states, ranging from as low as three years to as long as 40 years.
Although the elements of an adverse possession action differ by jurisdiction, a person claiming adverse possession in a common law system is usually required to prove non-permissive use of the property that is actual, open and notorious, exclusive, adverse and continuous for the statutory period. The possession by a person is not adverse during periods when they are in possession as a tenant or licensee of the legal owner.
Civil law jurisdictions may recognize a similar right of acquisitive prescription. For example, the French Code Civil, article 2258 et seq., recognizes that title may be acquired through thirty years of "continuous and uninterrupted possession which is peaceful, public, unequivocal, and as owner." It is related to the Roman law concept of usucaption, or usucapio.
In Denmark, the concept was first mentioned as "Hævd" in Jyske Lov in 1241, though it only regulated disputes between peasants and the church, with an asymmetric time limit of 30 years for the church and 40 years for the peasant. In 1475, the 40-year limit was ruled to apply between farmers as well. In 1547 (after the Reformation), a rule was passed changing this to 20 years for everyone. The rule was later adopted into the Danish Code, published in 1683, and this specific part is still in force today. The Norwegian Code of 1688 also contains a similar provision.
Personal property, traditionally known as chattel, may also be adversely possessed, but owing to the differences in the nature of real and chattel property, the rules governing such claims are rather more stringent, and favour the legal owner rather than the adverse possessor. Claims for adverse possession of chattel often involve works of art.
History
In Roman law, usucapio laws allowed someone who was in possession of a good without title to become the lawful proprietor if the original owner did not appear after some time (one or two years), unless the good was obtained illegally (by theft or force). Stemming from Roman law, its successor, the Napoleonic Code, generally recognizes two time periods for the acquisition of property: 30 years and some lesser time period, depending on the bona fides of the possessor and the location of the parties involved.
Parliament passed England's first general statute limiting the right to recover possession of land in 1623, the Limitation Act 1623 (21 Jas. 1. c. 16). At common law, if entitlement to possession of land was in dispute (originally only in what were known as real actions), the person claiming a right to possession was not allowed to allege that the land had come into their possession in the past (in older terminology that he had been 'put into seisin') at a time before the reign of Henry I. The law recognised a cutoff date going back into the past, before which date the law would not be interested. There was no requirement for a defendant to show any form of adverse possession. As time went on, the date was moved by statute—first to the reign of Henry II, and then to the reign of Richard I. No further changes were made of this kind. By the reign of Henry VIII the fact that there had been no changes to the cutoff date had become very inconvenient. A new approach was taken whereby the person claiming possession had to show possession of the land for a continuous period, a certain number of years (60, 50 or 30 depending on the kind of claim made) before the date of the claim. Later statutes have shortened the limitation period in most common law jurisdictions.
At traditional English common law, it was not possible to obtain title to property of the Crown by means of adverse possession. This principle was embodied by the Latin maxim nullum tempus occurrit regi ('no time runs against the king'). In the United States, this privilege was carried over to the federal and state governments; government land is immune from loss by adverse possession. Land with registered title in some Torrens title systems is also immune, for example, land that has been registered in the Hawaii Land Court system.
In the common law system of England and its historical colonies, local legislatures—such as Parliament in England or American state legislatures—generally create statutes of limitations that bar the owners from recovering the property after a certain number of years have passed.
England and Wales
Adverse possession is one of the most contentious methods of acquiring property, albeit one that has played a huge role in the history of English land. Historically, if someone possessed land for long enough, it was thought that this in itself justified acquisition of a good title. This meant that while English land was continually conquered, pillaged, and stolen by various factions, lords or barons throughout the Middle Ages, those who could show they possessed land long enough would not have their title questioned.
A more modern function has been that land which is disused or neglected by an owner may be converted into another's property if continual use is made. Squatting in England has been a way for land to be efficiently utilised, particularly in periods of economic decline. Before the Land Registration Act 2002, if a person had possessed land for 12 years, then at common law, the previous owner's right of action to eject the "adverse possessor" would expire. The common legal justification was that under the Limitation Act 1623 (21 Jas. 1. c. 16), just like a cause of action in contract or tort had to be used within a time limit, so did an action to recover land. This promoted the finality of litigation and the certainty of claims. Time would start running when someone took exclusive possession of land, or part of it, and intended to possess it adversely to the interests of the current owner. Provided the common law requirements of "possession" that was "adverse" were fulfilled, after 12 years, the owner would cease to be able to assert a claim. Different rules are in place for the limitation periods of adverse possession in unregistered land and registered land. However, in the Land Registration Act 2002 adverse possession of registered land became much harder.
In recent times, the Land Registry has made the process of claiming adverse possession and being awarded "title absolute" more difficult. Simply occupying or grazing the land will no longer justify the grant of title; instead, the person in adverse possession must demonstrate a commitment to own and use the land to the exclusion of all others.
Another significant limit on the principle, in the case of leases, is that adverse possession actions will only succeed against the leaseholder, and not the freeholder once the lease has expired.
Land Registration Act 2002
The Land Registration Act 2002 received royal assent on 26 February 2002. The rules for unregistered land remained as before. But under schedule 6 of the Land Registration Act 2002, paragraphs 1 to 5, after 10 years the adverse possessor is entitled to apply to the registrar to become the new registered owner. The registrar then contacts the registered title holder and notifies him of the application. If no proceedings are launched for two years to eject the adverse possessor, only then would the registrar transfer title. Prior to the Land Registration Act 2002, a land owner could simply lose title without being aware of it or notified. This was the rule because it indicated the owner had never paid sufficient attention to how the land was in fact being used, and therefore the former owner did not deserve to keep it. Before 2002, time was seen to cure everything. The rule's function was to ensure land was used efficiently.
Requirements
Before the considerable hurdle of giving a registered owner notice was introduced in the Land Registration Act 2002, the particular requirements of adverse possession were reasonably straightforward.
First, under schedule 1, paragraphs 1 and 8 of the Limitation Act 1980, the time when adverse possession began was when "possession" was taken. This had to be more than something temporary or transitory, such as simply storing goods on a land for a brief period. But "possession" did not require actual occupation. So in Powell v McFarlane, it was held to be "possession" when Mr Powell, from age 14, let his cows roam into Mr McFarlane's land. The intruder must also show that they were dealing with the land as an occupying owner might have done, and that no one else had done so.
The second requirement, however, was that there needed to be an intention to possess the land. Mr Powell lost his claim because simply letting his cows roam was an equivocal act: it was only later that there was evidence he intended to take possession, for instance by erecting signs on the land and parking a lorry. But this had not happened long enough for the 12-year time limit on McFarlane's claim to have expired. As a result, proving intention to possess is likely to rely heavily on the factual matrix of the case and the squatters' factual possession. In Clowes Developments (UK) Ltd v Walters and Others [2005] EWHC (Ch), the squatter cannot be found to have an intention to possess if they mistakenly believe that they are on the property with the permission of the title owner.
Third, possession is not considered "adverse" if the person is there with the owner's consent. For example, in BP Properties Ltd v Buckler, Dillon LJ held that Mrs Buckler could not claim adverse possession over land owned by BP because BP had told her she could stay rent free for life. Fourth, under the Limitation Act 1980 sections 29 and 30, the adverse possessor must not have acknowledged the title of the owner in any express way, or the clock starts running again. However, the courts have interpreted this requirement flexibly.
Human rights challenges
In JA Pye (Oxford) Ltd v Graham, Mr and Mrs Graham had been let a part of Mr Pye's land, and then the lease had expired. Mr Pye refused to renew a lease, on the basis that this might disturb getting planning permission. In fact the land remained unused, Mr Pye did nothing, while the Grahams continued to retain a key to the property and used it as part of their farm. At the end of the limitation period, they claimed the land was theirs. They had in fact offered to buy a licence from Mr Pye, but the House of Lords held that this did not amount to an acknowledgement of title that would deprive them of a claim. Having lost in the UK courts, Mr Pye took the case to the European Court of Human Rights, arguing that his business should receive £10 million in compensation because it was a breach of his right to "peaceful enjoyment of possessions" under Protocol 1, article 1 of the European Convention on Human Rights. The court in its Grand Chamber rejected this, holding that it was within a national government's margin of appreciation to determine the relevant property rules. The House of Lords in Ofulue v Bossert in 2009 confirmed this understanding.
Timing
For registered land, adverse possession claims completed before 13 October 2003 (the date the Land Registration Act 2002 came into force) are governed by section 75(1) and 75(2) of the Land Registration Act 1925. The limitation period remains the same (12 years) but instead of the original owner's title to the land being extinguished, the original owner holds the land on trust for the adverse possessor. The adverse possessor can then apply to be the new registered proprietor of the land.
For registered land, adverse possession claims completed after 13 October 2003 follow a different procedure. Where land is registered, the adverse possessor may henceforth apply to be registered as owner after 10 years of adverse possession and the Land Registry must give notice to the true owner of this application. This gives the landowner a statutory period of time (65 business days) to object to the adverse possession, object to the application on the ground that there has not actually been the necessary 10 years' adverse possession, and/or to serve a "counter-notice". If a counter-notice is served, then the application fails unless
it would be unconscionable because of an equity by estoppel for the registered proprietor to seek to dispossess the squatter and the squatter ought in the circumstances to be registered as proprietor, or
the squatter is for some other reason entitled to be registered as proprietor, or
the squatter has been in adverse possession of land adjacent to their own under the mistaken but reasonable belief that they are the owner of it, the exact line of the boundary with this adjacent land has not been determined and the estate to which the application relates was registered more than a year prior to the date of the application.
Otherwise, the squatter becomes the registered proprietor according to the land registry. If the true owner is unable to evict the squatter in the two years following the first [unsuccessful] application, the squatter can apply again after this period and be successful despite the opposition of the owner. The process effectively prevents the removal of a landowner's right to property without their knowledge, while ensuring squatters have a fair way of exercising their rights.
Where a tenant adversely possesses land, there is a presumption that they are doing so in a way that will benefit the landlord at the end of their term. If the land does not belong to their landlord, the land will become part of both the tenancy and the reversion. If the land does belong to their landlord, it would seem that it will be gained by the tenant but only for the period of their term.
Since September 2012, squatting in a residential building is a criminal offence, but this does not prevent title being claimed by reason of adverse possession even if the claimant is committing a criminal offence. This was confirmed in Best v Chief Land Registrar, where it was held that criminal and land law should be kept separate.
United States
Requirements
The party seeking title by adverse possession may be called the disseisor, meaning one who dispossesses the true owner of the property. Although the elements of an adverse possession claim may differ between states, adverse possession requires at a minimum five basic conditions to be met to perfect the title of the disseisor: the disseisor must actually occupy the property, exclusively, in a manner that is open and notorious, continuously for the statutory period, and use it as if it were their own in a manner expected for the type of property. Some states impose additional requirements. Many states have enacted statutes regulating the rules of adverse possession. Some states require a hostility requirement to secure adverse possession. While most states take an objective approach to the hostility requirement, some states require a showing of good faith. Good faith means that claimants must demonstrate that they had some basis to believe that they actually owned the property at issue. Four states east of the Mississippi that require good faith in some form are Georgia, Illinois, New York, and Wisconsin.
Additional requirements
In addition to the basic elements of an adverse possession case, state law may require one or more additional elements to be proved by a person claiming adverse possession. Depending upon the state, additional requirements may include:
Color of title, claim of title, or claim of right. Color of title and claim of title involve a legal document that appears (incorrectly) to give the disseisor title. In some jurisdictions the mere intent to take the land as one's own may constitute "claim of right", with no documentation required. Other cases have determined that a claim of right exists if the person believes they have rightful claim to the property, even if that belief is mistaken. A negative example would be a timber thief who sneaks onto a property, cuts timber not visible from the road, and hauls the logs away at night. Their actions, though they demonstrate actual possession, also demonstrate knowledge of guilt, as opposed to claim of right.
Good faith (in a minority of states) or bad faith (sometimes called the "Maine Doctrine" although it is now abolished in Maine)
Improvement, cultivation, or enclosure
Payment of property taxes. This may be required by statute, such as in California, or just a contributing element to a court's determination of possession. Both payment by the disseisor and by the true owner are relevant.
Dispossession of land owned by a governmental entity: Generally, a disseisor cannot dispossess land legally owned by a government entity even if all other elements of adverse possession are met. One exception is when the government entity is acting like a business rather than a government entity.
Consequences
A disseisor will be committing a civil trespass on the property he has taken and the owner of the property could cause him to be evicted by an action in trespass ("ejectment") or by bringing an action for possession. All common law jurisdictions require that an ejectment action be brought within a specified time, after which the true owner is assumed to have acquiesced. The effect of a failure by the true landowner to evict the adverse possessor depends on the jurisdiction, but will eventually result in title by adverse possession.
In 2008, due to the volume of adverse possession and boundary dispute cases throughout New York City, the New York State Legislature amended and limited the ability of land to be acquired by adverse possession. Prior to the 2008 amendment, to acquire property by adverse possession, all that was required was a showing that the possession constituted an actual invasion of or infringement upon the owner's rights. Approximately eight years after the 2008 amendment, on 30 June 2016, the New York State Appellate Division, First Department (i.e., the appellate court covering the territory of Manhattan) determined, in Hudson Square Hotel, the legal questions concerning the scope of rights acquired by adverse possession and how the First Department would treat claims of adverse possession where title had vested prior to 2008. The Court specifically held that title to the adversely possessed property vested when the plaintiff "satisfie[d] the requirement of the statute in effect at the relevant time." In other words, if title had vested at some time "after" the 2008 amendment, a plaintiff would have to satisfy the adverse possession standards as amended by the New York State Legislature in 2008; however, if title vested at some time "before" the 2008 amendment, a plaintiff would have lawfully acquired title to the disputed area by satisfying the pre-amendment standard for adverse possession. Hudson Square Hotel also resolved two often highly litigated issues in adverse possession cases where the air rights are more valuable than the underlying land itself: (a) "where" (i.e., in three-dimensional physical space) an encroachment is required in order for it to have any relevant operative effect or consequences under the law of adverse possession, and (b) "what" property rights are acquired as a result of title to the ground floor area (i.e., the land) vesting with the plaintiff. In Hudson Square Hotel, the defendant argued that the plaintiff had acquired title only to the underlying land, not the air rights, because the plaintiff had never encroached above the two-story building. This argument was motivated, in part, by the fact that the zoning laws at the time permitted the owner of the land to build (i.e., develop) up to six times the square footage of the ground floor area. For example, if the disputed area was 1,000 square feet, there would be 6,000 square feet of buildable square footage to potentially be won or lost by adverse possession. The Court clarified, "It is the encroachment on the land ... that allows title to pass to the adverse possessor." In other words, the plaintiff did not need to encroach upon all six stories in order to adversely possess the air rights above the land. The Court also held, "With title to land come air rights." In other words, by acquiring title to the land (i.e., the ground floor area), the plaintiff also acquired ownership of the more valuable air rights derivative of title to the underlying land.
In other jurisdictions, the disseisor acquires merely an equitable title; the landowner is considered to be a trustee of the property for the disseisor.
Adverse possession extends only to the property actually possessed. If the original owner had a title to a greater area (or volume) of property, the disseisor does not obtain all of it. The exception to this is when the disseisor enters the land under a color of title to an entire parcel, their continuous and actual possession of a small part of that parcel will perfect their title to the entire parcel defined in their color of title. Thus a disseisor need not build a dwelling on, or farm on, every portion of a large tract in order to prove possession, as long as their title does correctly describe the entire parcel.
In some jurisdictions, a person who has successfully obtained title to property by adverse possession may (optionally) bring an action in land court to "quiet title" of record in their name on some or all of the former owner's property. Such action will make it simpler to convey the interest to others in a definitive manner, and also serves as notice that there is a new owner of record, which may be a prerequisite to benefits such as equity loans or judicial standing as an abutter. Even if such action is not taken, the title is legally considered to belong to the new titleholder, with most of the benefits and duties, including paying property taxes to avoid losing title to the tax collector. The effects of having a stranger to the title paying taxes on property may vary from one jurisdiction to another. (Many jurisdictions have accepted tax payment for the same parcel from two different parties without raising an objection or notifying either party that the other had also paid.)
Adverse possession does not typically work against property owned by the public.
The process of adverse possession would require a thorough analysis if private property is taken by eminent domain, after which control is given to a private corporation (such as a railroad), and then abandoned.
Where land is registered under a Torrens title registration system or similar, special rules apply. It may be that the land cannot be affected by adverse possession (as was the case in England and Wales from 1875 to 1926, and as is still the case in the state of Minnesota) or that special rules apply.
Adverse possession may also apply to territorial rights. In the United States, Georgia lost an island in the Savannah River to South Carolina in 1990, when South Carolina had used fill from dredging to attach the island to its own shore. Since Georgia knew of this yet did nothing about it, the U.S. Supreme Court (which has original jurisdiction in such matters) granted this land to South Carolina, although the Treaty of Beaufort (1787) explicitly specified that the river's islands belonged to Georgia.
Louisiana
Louisiana, which is a civil law state, adopts the legal doctrine of acquisitive prescription. It is derived from French law and governs the right of a person to gain possession of immovable property (a home). Pursuant to Civil Code Article 742, there are two ways that a squatter can gain possession of an immovable property: (1) peaceable and uninterrupted possession... for ten years in good faith and by just title; [or] (2) uninterrupted possession for thirty years without title or good faith.
Squatter's rights
Most cases of adverse possession deal with boundary line disputes between two parties who hold clear title to their property. The term "squatter's rights" has no precise and fixed legal meaning. In some jurisdictions the term refers to temporary rights available to squatters that prevent them, in some circumstances, from being removed from property without due process. For example, in England and Wales reference is usually to section 6 of the Criminal Law Act 1977. In the United States, no ownership rights are created by mere possession, and a squatter may only take possession through adverse possession if the squatter can prove all elements of an adverse possession claim for the jurisdiction in which the property is located.
As with any adverse possession claim, if a squatter abandons the property for a period, or if the rightful owner effectively removes the squatter's access even temporarily during the statutory period, or gives their permission, the "clock" usually stops. For example, if the required period in a given jurisdiction is twenty years and the squatter is removed after only 15 years, the squatter loses the benefit of that 15-year possession (i.e., the clock is reset at zero). If that squatter later retakes possession of the property, that squatter must, to acquire title, remain on the property for a full 20 years after the date on which the squatter retook possession. In this example, the squatter would have held the property for 35 years (the original 15 years plus the later 20 years) to acquire title.
Depending on the jurisdiction, one squatter may or may not pass along continuous possession to another squatter, known as "tacking". Tacking is defined as "The joining of consecutive periods of possession by different persons to treat the periods as one continuous period; esp., the adding of one's own period of land possession to that of a prior possessor to establish continuous adverse possession for the statutory period." There are three types of privity: privity of contract, privity of possession, and privity of estate. One of the three types of privity is required in order for one adverse possessor to "tack" their time onto another adverse possessor in order to complete the statutory time period. One way tacking occurs is when the conveyance of the property from one adverse possessor to another is founded upon a written document (usually an erroneous deed), indicating "color of title." A lawful owner may also restart the clock at zero by giving temporary permission for the occupation of the property, thus defeating the necessary "continuous and hostile" element. Evidence that a squatter paid rent to the owner would defeat adverse possession for that period.
England and Wales
Although "squatting" is a criminal offence in England and Wales under Section 144 of the Legal Aid, Sentencing and Punishment of Offenders Act 2012 (LASPO), the Court of Appeal has clarified that Section 144 will not bar a person who wants to claim adverse possession, based on the rule of ex turpi causa, from relying on illegal squatting as an act demonstrating possession of the property.
Under Section 17 of the Limitation Act 1980, the old title holder's estate is extinguished and a new one is created for the successful squatter. Section 144 of LASPO also inserted an immediate power of entry to the premises into Section 17 of the Police and Criminal Evidence Act 1984.
Finally, schedule 6 (para 7) of the Land Registration Act 2002 allows the successful adverse possessor to be registered as the new proprietor.
Theory and scholarship
Utilitarianism
Scholars have identified four utilitarian policies which justify adverse possession.
The first is that it exists to cure potential or actual defects in real estate titles by putting a statute of limitations on possible litigation over ownership and possession. Because of the doctrine of adverse possession, a landowner can be secure in title to their land. Otherwise, long-lost heirs of any former owner, possessor or lien holder of centuries past could come forward with a legal claim on the property.
The second theory is that adverse possession is "a useful method for curing minor title defects". For example, someone may have had the intention to sell all of a parcel of land but mistakenly excluded a portion of it on the title. Thus, adverse possession allows the purchaser of the land to maintain ownership of the parcel which they believed was theirs from the impressions given by the seller.
The third theory is that adverse possession encourages and rewards productive use of land. Essentially, "by vesting title in the industrious settler—rather than the absentee landowner—adverse possession promote[s] rapid development".
The fourth theory is that the adverse possessor places a high personal value on the land while the real title holder has effectively abandoned it, thus on principles of personhood and efficiency it makes sense to allow the change in title.
Copyright scholarship
Some legal scholars have proposed the extension of the concept of adverse possession to intellectual property law, in particular to reconcile intellectual property and antitrust law or to unify copyright law and property law.
See also
Adverse possession in Australia
Appropriation (economics)
Informal settlement
Title (property)
Pedis possessio ("possession of the foot"): ownership by first occupier
Squatting: unauthorized use of abandoned property with no implied ownership or right to use
Usucaption: acquisitive prescription ownership after defined period of unauthorized occupation
Uti possidetis
Preemption Act of 1841: acquisitive prescription law in USA
Rights of way in England and Wales
Usufruct: acquired right to use but not own property
Revised statute 2477: highways & paths law in USA
Easement: for related adverse rights
Lien: right of others to hold or sell to solve debt
List of real estate topics
Possession is nine-tenths of the law
Property law
Property rights
Deed: legal document defining interest or ownership
Adverse abandonment
Notes
Citations
Works cited
Further reading
O Jones, 'Out with the Owners: The Eurasian Sequels to J A Pye (Oxford) Ltd v. United Kingdom' (2008) 27 Civil Justice Quarterly 260–276
External links
Real property law
Common law legal terminology
English property law
United States property case law
Time in government | Adverse possession | [
"Physics"
] | 6,364 | [
"Spacetime",
"Physical quantities",
"Time in government",
"Time"
] |
597,792 | https://en.wikipedia.org/wiki/Sex%20segregation | Sex segregation, sex separation, sex partition, gender segregation, gender separation, or gender partition is the physical, legal, or cultural separation of people according to their gender or biological sex at any age. Sex segregation can simply refer to the physical and spatial separation by sex without any connotation of illegal discrimination. In other circumstances, sex segregation can be controversial. Depending on the circumstances, it can be a violation of capabilities and human rights and can create economic inefficiencies; on the other hand, some supporters argue that it is central to certain religious laws and social and cultural histories and traditions.
Definitions
The term "sex" in "sex segregation" refers to the biological distinctions between men and women, used in contrast to "gender". The term "segregation" refers to separation of the sexes, which can be enforced by rules, laws, and policies, or be a de facto outcome in which people are separated by sex. Even as a de facto outcome, sex segregation taken as a whole can be caused by societal pressures, historical practices, socialized preferences and “fundamental biological differences”. Sex segregation can refer to literal physical and spatial separation by sex. The term is also used for the exclusion of one sex from participation in an occupation, institution, or group. Sex segregation can be complete or partial, as when members of one sex predominate within, but do not exclusively constitute, a group or organization.
In the United States some scholars use the term sex separation and not sex segregation.
The term gender apartheid (or sexual apartheid) also has been applied to segregation of people by gender, implying that it is sexual discrimination. If sex segregation is a form of sex discrimination, its effects have important consequences for gender equality and equity.
Types
Sex segregation can occur in both public and private contexts, and be classified in many ways. Legal and gender studies scholar David S. Cohen offers one taxonomy in categorizing sex segregation as mandatory, administrative, permissive, or voluntary. Mandatory and administrative sex segregation are required and enforced by governments in public environments, while permissive and voluntary sex segregation are stances chosen by public or private institutions, but within the capacity of the law.
Mandatory
Mandatory sex segregation is legally required and enforces separation based on sex. Examples include separation of men and women in prisons, law enforcement, military service, public toilets, and housing. These mandatory rules can be nuanced, as in military service, where sexes are often separated in laws about conscription, in housing, and in regulations on which sexes can participate in certain roles, like frontline infantry. Mandatory sex segregation also includes less obvious cases of separation, as when men and women are required to have same-sex attendants for body searches. Mandatory sex segregation can thus dictate parameters for employment in sex segregated spaces, including medical and care work contexts, and can be a form of occupational segregation. For example, a government may mandate that clinics hire female nurses to care for female patients.
Administrative
Administrative sex segregation involves public and government institutions segregating by sex in their operating capacity, rather than as the result of a formal mandate. Examples of administrative sex segregation include sex segregation in government sponsored medical research, sports leagues, public hospitals with shared rooms, rehabilitation programs, and some public education facilities. Administrative sex segregation can occur in these environments simply through the provisioning of sex-segregated public toilets, despite limited explicit legal requirements to do so.
Permissive
Permissive sex segregation is segregation which is explicitly permitted by law, i.e. affirmatively authorized, but not necessarily legally required or encouraged. Permissive sex segregation exempts certain things from anti-sex-discrimination laws, often allowing for, among others, segregation of religious and military schools, undergraduate schools that have traditionally admitted based on sex, health clubs, athletic teams, social fraternities and sororities, choirs and choruses, voluntary youth service organizations such as the Girl Scouts and Boy Scouts, father/son and mother/daughter activities, and sex-exclusive beauty pageants and scholarships.
Voluntary
Sex segregation that is neither legally mandated, nor enacted in an administrative capacity, nor explicitly permitted by law, is recognized as voluntary sex segregation. Voluntary sex segregation refers to lack of explicit legal prescriptions; it does not necessarily indicate the free choice of either the segregated or the segregators, and it may be imposed by social and cultural norms. Voluntary sex segregation takes place in numerous national professional and interest-based membership organizations, local and larger clubs, professional sports teams, private recreational facilities, religious institutions, performing arts, and more.
Theoretical explanations
Within feminist theory and feminist legal theory, there are six main theoretical approaches that can be considered and used to analyze the causes and effects of sex segregation. They include libertarianism, equal treatment, difference feminism, anti-subordination, critical race feminism, and anti-essentialism.
Libertarianism
Libertarian feminist theory stems from ideologies similar to libertarian political theory; that legal and governmental institutions should not regulate choices and should allow people's free will to govern their life trajectories. Libertarianism takes a free market approach to sex segregation saying that women have a natural right and are the most informed to make decisions for themselves but rejects special protections specifically for women. Autonomy is central to libertarianism, so theorists believe that the government should not interfere with decision making or be concerned with reasoning behind such decisions since men and women culturally and naturally make different and often diverging choices. Policies and laws enforced by the government should not act to change any inherent differences between the sexes.
Libertarianism most directly relates to voluntary sex segregation as it maintains that the government should not regulate private institutions or entities' segregation by sex and should not regulate how individuals privately group themselves. Libertarian feminist David Bernstein argues that while sex segregation can cause harm, guarding the freedom of choice for men and women is more important than preventing such sex segregation, since methods of prevention can often cause more harm than good for both sexes. Women's health clubs are an example of how sex segregation benefits women, since desegregation would interfere with women's ability to exercise free from the distraction of men and 'ogling', while offering no direct benefit in exchange for allowing men a membership. Additionally, libertarians would allow for permissive sex segregation since it allows people to choose how to organize their interactions and relationships with others.
Libertarian feminists acknowledge that there is legal precedence for sex segregation laws, but argue for such parameters to ensure equal treatment of similarly situated men and women. As such, libertarianism could allow or reject specific forms of sex segregation created to account for natural or biological differences between the sexes.
Equal treatment
Equal treatment theory or formal equity often works in tandem with libertarianism in that equal treatment theorists believe governments should treat men and women similarly when their situations are similar. In countries whose governments have taken to legislation eliminating sex segregation, equal treatment theory is most frequently used as support for such rules and regulation. For example, equal treatment theory was adopted by the many feminists during the United States' feminist movement in the 1970s. This utilization of equal treatment theory led to the adoption of intermediate scrutiny as a standard for sex discrimination on the basis that men and women should be treated equally when in similar situations. While equal treatment theory provides a sound framework for equality, application is quite tricky, as many critics question the standards by which men and women should be treated similarly or differently. In this manner, libertarianism and equal treatment theory provide good foundations for their agendas in sex segregation, but conceptually do not prevent it, leaving room for mandatory and administrative sex segregation to remain as long as separation is based on celebrated differences between men and women. Some forms of mandatory and administrative segregation may perpetuate sex segregation by depicting a difference between male and female employees where there is no such difference, as in combat exclusion policies.
Difference feminism
Difference feminism arose from libertarianism and equal treatment theories' perceived failure to create systemic equality for women. Difference feminism celebrates biological, psychological, moral differences between men and women, accusing laws concerning sex segregation of diluting these important differences. Difference feminists believe that such laws not only ignore these important differences, but also can exclude participation of women in the world. Difference feminism's goal is to bring about a consciousness of women's femininity and to cause the revaluation of women's attributes in a more respectful, reverent manner.
Difference feminism and equal treatment theory are quite contrasting feminist theories of sex segregation. Difference feminism often justifies sex segregation through women's and men's differences while equal treatment theory does not support separation because of differences in sex. Difference feminism, however, argues against segregation that stems from societal and "old-fashioned" differences between men and women, but believes that segregation that takes women's differences into account and promotes equality is acceptable, even going so far as to say that some forms of sex segregation are necessary to ensure equality, such as athletics and education, and policies such as Title IX.
Anti-subordination
Anti-subordination feminist theory examines sex segregation of power to determine whether women are treated or placed subordinately compared to men in a law or situation. The theory focuses on male dominance and female subordination and promotes destroying a sex-based hierarchy in legal and social institutions and preventing future hierarchies from arising. Anti-subordination also supports laws that promote the status of women even if they lower men's status as a consequence. Controversial applications of anti-subordination that can either perpetuate the subordination of women or create the subordination of men include sex segregation in education and in the military.
Critical race feminism
Critical race feminism developed due to the lack of racial inclusivity of feminist theories and lack of gender inclusivity of racial theories. This theory is more global than the others, attempting to take into account the intersectionality of gender and race. Critical race feminism demands that theorists reexamine surface-level segregation and focus on how sex segregation stems from different histories, causing different effects based on race, especially for women of color. This segregation is evident in many racially divided countries, especially in the relationship between the end of race-segregated schools and sex segregation. Critical race feminism critiques other theories' failure to take into account their different applications once race, class, sexual orientation, or other identity factors are included in a segregated situation. It creates the need to examine mandatory and administrative sex segregation to determine whether or if they sustain racial stereotypes, particularly towards women of color. Additionally, critical race feminists wonder whether permissive and voluntary sex segregation are socially acceptable manners by which to separate races and sexes or whether they maintain and perpetuate inequalities. Critical race feminism is a form of anti-essentialism (below).
Anti-essentialism
Anti-essentialists maintain that sex and gender categories are limiting and fail to include the unlimited variety of difference in human identity, and that they impose identities rather than simply note differences. Theorists believe that there is variation in what it means to be a man and what it means to be a woman, and that by promoting the differences through sex segregation, people are confined to categories, limiting freedom. Anti-essentialists examine how society imposes specific identities within the sex dichotomy and how sex and gender hierarchies are subsequently created, perpetuated, and normalized. This theory requires a specific disentanglement between sex and gender. Anti-essentialists believe that there should not be a fixed idea of what constitutes masculinity or femininity, but that individual characteristics should be fluid so as to eliminate sex- and gender-based stereotypes. No specific types of sex segregation are outwardly promoted or supported by anti-essentialists: mandatory and administrative sex segregation reinforce power struggles between the sexes and genders, while permissive or voluntary forms of sex segregation allow institutions and society to sort individuals into categories with differential access to power, so anti-essentialists support the government's elimination of the permissions that allow such institutions and norms to persist.
Contemporary policy examples
Sex segregation is a global phenomenon manifested differently in varying localities. Sex segregation and integration considered harmless or normal in one country can be considered radical or illegal in others. At the same time, many laws and policies promoting segregation or desegregation recur across multiple national contexts. Safety and privacy concerns, traditional values and cultural norms, and belief that sex segregation can produce positive educational and overall social outcomes all shape public policy regarding sex segregation.
Safety and privacy
Some sex segregation occurs for reasons of safety and privacy. Worldwide, laws often mandate sex segregation in public toilets, changing rooms, showers, and similar spaces, based on a common perceived need for privacy. This type of segregation policy can protect against sexual harassment and sexual abuse. To combat groping, street harassment, and eve teasing of women in crowded public places, some countries have also designated women only spaces. For example, sex-segregated buses, women-only passenger cars, and compartments on trains have been introduced in Mexico, Japan, the Philippines, the UAE and other countries to reduce sexual harassment. Some places in Germany, Korea, and China all have women's parking spaces, often for related safety issues. Many more countries, including Canada, the United States, Italy, Japan, and the United Kingdom also grant parking privileges to pregnant women, for safety or access reasons.
Sex segregation rooted in safety considerations can furthermore extend beyond the physical to the psychological and emotional as well. A refuge for battered mothers or wives may refuse to admit men, even those who are themselves the victims of domestic violence, both to exclude those who might commit or threaten violence to women and because women who have been subjected to abuse by a male might feel threatened by the presence of any man. Women's health clinics and women's resource centers, whether in Africa or North America, are further examples of spaces where sex segregation may facilitate private and highly personal decisions. Women-only banks may be similarly intended to provide autonomy to women's decision making.
Religious and cultural ideas
Sex segregation can also be motivated by religious or cultural ideas about men and women. Such cultural assumptions may even exist in the aforementioned policies enacted under the pretenses of safety or privacy concerns. Gender separation and segregation in Judaism and Islam reflect religiously motivated sex segregation. In Buddhism, Christianity, and Hinduism, monastic orders, prayer spaces, and leadership roles have also been segregated by sex.
From a policy perspective, theocracies and countries with a state religion have sometimes made sex segregation laws based partially in religious traditions. Even when not legally enforced, such traditions can be reinforced by social institutions and in turn result in sex segregation. In the South Asian context, one institution conducive to sex segregation, sometimes but not always rooted in national law, is purdah.
During the Taiping Rebellion (1851–64) against the Qing dynasty, areas controlled by the Taiping Heavenly Kingdom had strict sex separation enforced. Even married couples were not allowed to live together until 1858.
The Muslim world and the Middle East have been particularly scrutinized by scholars analyzing sex segregation resulting from the consequences of Sharia, the moral and religious code of Islam that, in the strictest version, Muslims hold to be the perfect law created by God. Saudi Arabia has been called an epicenter of sex segregation, stemming partially from its conservative Sunni Islamic practices and partially from its monarchy's legal constraints. Sex segregation in Saudi Arabia is not inherent to the country's culture, but was promoted in the 1980s and 1990s by the government, the Sahwa movement, and conservative and religious behavioral enforcers (i.e. police, government officers, etc.).
Israel has also been noted both for its military draft of both sexes and its sex-segregated Mehadrin bus lines.
Education and socialization
Sex segregation is sometimes pursued through policy because it is thought to produce better educational outcomes. In some parts of the world, especially in Europe, where education is available to girls as well as boys, educational establishments were frequently single-gender. Such single-sex schools are still found in many countries, including but not limited to, Australia, the United Kingdom, and the United States.
In the United States in particular, two federal laws give public and private entities permission to segregate based on sex: Title VII of the Civil Rights Act of 1964 and Title IX of the Educational Amendments of 1972. These laws permit sex segregation of contact sports, choruses, sex education, and in areas such as math and reading, within public schools.
Studies have analyzed whether single-sex or co-ed schools produce better educational outcomes. Teachers and school environments tend to be more conducive to girls' learning habits and participation rates improve in single-sex schools. In developing countries, single-sex education provides women and girls an opportunity to increase female education and future labor force participation. Girls in single-sex schools outperform their counterparts in co-educational schools in math, average class scores for girls are higher, girls in single-sex math and science classes are more likely to continue to take math and science classes in higher education, and in case studies, boys and girls have reported that single-sex classes and single-sex teachers create a better environment for learning for both sexes.
Critics of single-sex schools and classes claim that single-sex schooling is inherently unequal and that its physical separation contributes to gender bias on an academic and social basis. Single-sex schooling also allegedly limits the socialization between sexes that co-educational schools provide. Coeducational school settings have been shown to foster less anxiety, have happier classrooms, and enable students to participate in a simulated social environment with the tools to maneuver, network, and succeed in the world outside of school. Even in co-ed schools, certain classes, such as sex education, are sometimes segregated on the basis of sex. Parallel education occurs in some schools, when administrators decide to segregate students only in core subjects.
Segregation by specialization is also evident in higher education and actually increases with economic development of a country. Cambodia, Laos, Morocco, and Namibia are countries with the least amount of gender segregation in tertiary studies while Croatia, Finland, Japan, and Lithuania have the most.
Sport
Even outside of educational settings, women's sports are segregated from men's sports.
Desegregation
Desegregation policies often seek to prevent sex discrimination, or alleviate occupational segregation. These policies encourage women and men to participate in environments typically predominated by the opposite sex. Examples include government quotas, gender-specific scholarships, co-ed recreational leagues, or programming designed to change social norms.
In China, deputies to the National People's Congress and members of the Chinese People's Political Consultative Conference National Committee proposed that the public should be more attentive to widespread instances of occupational segregation in China. Employers often reject women applicants specifically or impose sex requirements in order to apply. The Labour Contract Law of the People's Republic of China and the Law of the People's Republic of China on the Protection of Rights and Interests of Women state that no employer can refuse to employ women based on sex or raise application standards for women specifically, but they currently provide no clear sanctions for those who do segregate based on sex.
China has also begun to encourage women in rural villages to take up positions of management in their committees. Specifically, China's Village Committee Organization Law mandates that women should make up one third or more of the members of village committees. The Dunhuang Women's Federation of Dunhuang City, in China's Gansu Province, provided training for their village's women in order to build political knowledge.
In March 2013 in the European Union, a resolution was passed to invest in training and professional development for women, promote women-run businesses, and include women on company boards. In Israel, the Minister of Religious Services, Yaakov Margi of the Shas party, has recently supported removal of signs at cemeteries segregating women and men for eulogies and funerals, prohibiting women from taking part in the services. The Minister agreed with the academic and politician, Yesh Atid MK Aliza Lavie, who questioned him about segregation policies enacted by rabbis and burial officials, that governmental opposition to sex segregation was necessary to combat these practices, which are not supported by Jewish or Israeli law.
In other cases, sex segregation in one arena can be pursued to enable sex desegregation in another. For example, separation of boys and girls for early math and science education may be part of an effort to increase the representation of women in engineering or women in science.
Sometimes, countries will also argue that segregation in other nations violates human rights. For example, the United Nations and Western countries have encouraged kings of Saudi Arabia to end the country's strict segregation of institutions such as schools, government institutions, hospitals, and other public spaces in order to secure women's rights in Saudi Arabia. Even though the removal of certain religious and government heads has made way for liberal agendas to promote desegregation, the public largely still subscribes to the idea of a segregated society, while institutions and the government itself still technically remain under the control of Wahhabism. Reform is small in size, since there is no constitution to back up policy changes concerning sex segregation. The Saudi people refer to this segregation as khilwa, and violation of the separation is punishable by law. This separation is tangibly manifested in the recently erected wall in places that employ both men and women, a feat made possible by a law passed in 2011 allowing Saudi women to work in lingerie shops in order to lower female unemployment rates. The public views the 1.6 meter wall favorably, saying that it will lead to fewer instances of harassment by men visiting the expatriate women in the shops. The Luthan hotel in Saudi Arabia was the country's first women-only hotel, acting more as a vacation spot for women than a mandated segregated institution. Upon entering the hotel, women are allowed to remove their headscarves and abayas, and the hotel employs only women, calling their bellhops the world's first bellgirls and providing opportunities for Saudi women in IT and engineering jobs which, outside the Luthan, are quite scarce.
In prisons
Sex segregation is very prevalent in the administration of prisons.
Radical feminist Catharine MacKinnon says that the policy is in place for ease of management and not for protecting women, exemplified by the fact that women's prisons put women who have been convicted of rape or murder in the same wards as women who have been convicted of prostitution, or of killing their batterers.
Significance
In human development
For most children, sex segregation manifests itself early in the socialization process via group socialization theory, where children are expected to behave in certain sex-typed manners and to form sex-based in-groups and out-groups. In pre-school classrooms, for example, making gender more salient to children has been shown to lead to stronger gender stereotypes and inter-group biases between sex groups. These tendencies were also manifested in decreased playtime with children of the opposite sex, a kind of early, selective sex segregation based on preconceived social norms. While segregating by sex specifically for playtime has not been linked to any long-lasting effects on women's rights compared to men's, these different manners of socialization often lead to communication and power struggles between men and women and to differential life decisions by each sex based on these long-established gendered identities.
In elementary and secondary education, sex segregation sometimes yields and perpetuates gender bias in the form of treatment by teachers and peers that reinforces traditional gender roles: underrepresentation of girls in upper-level math, science, and computer classes; fewer opportunities for girls to learn and solve problems; girls receiving less attention compared to the boys in their classes; and significantly different performance levels between boys and girls in reading and math classes. In some elementary schools, teachers seat students in alternating boy-girl order. Sex segregation in educational settings can also lead to negative outcomes for boys: boys in co-educational classrooms have academic scores higher than boys in single-sex classrooms, while girls in single-sex classrooms have academic scores higher than girls in co-educational classrooms. Boys benefit academically from a coeducational environment while girls benefit from a single-sex environment, so critics and proponents of both types of education argue that either single-sex or coeducational classrooms create a comparative disadvantage for one sex. Athletic participation and physical education are examples where appeals to differences in biological sex may encourage segregation within education systems. These differences can impact access to competition, gender identity construction, and external as well as internalized perceptions of capabilities, especially among young girls.
Separation of public toilets by sex is very common around the world. In certain settings the sex separation can be critical to ensure the safety of females, in particular schoolgirls, from male abuse. At the same time, sex segregated public toilets may promote a gender binary that excludes transgender people. Unisex public toilets can be a suitable alternative and/or addition to sex-segregated toilets in many cases.
A special case presents with choirs and choruses, especially in the tradition of the choir school which uses ensemble and musical discipline to ground academic discipline. Male and female voices are distinctive both solo and in ensemble, and segregated singing has an evolved and established aesthetic. Male voices, unlike female voices, break in early adolescence, and accommodating this break in an educational program is challenging in a coed environment. Coeducation tends to stigmatize males, as is often the case in expressive arts, unlike athletics.
In economies
Physical sex separation is evident at the tertiary level, both between types of institutions and between fields of study or majors, which are highly gendered, as are later life decisions such as work/care-work conflicts. Men tend to occupy engineering, manufacturing, science, and construction fields while women dominate education, humanities and arts, social sciences, business, law, and health and welfare fields. However, important life decisions such as occupations can yield other instances of sex segregation by impacting occupational sex imbalances and further male and female socialization. Vicki Schultz (1990) indicates that although Title VII of the Civil Rights Act of 1964 prohibits sex discrimination in employment and promised working women change, "most women continue to work in low paying, low status, and traditionally female jobs." Schultz (1990) states that "employers have argued that women lack interest in male-dominated jobs, which are highly rewarded and nontraditional for women." According to Schultz, the courts have accepted this argument, subsequently not holding employers liable. Schultz contends that "the courts have failed to recognize the role of employers in shaping women's work aspirations" (Schultz, 1990:1750, 1756). Schultz states that the judicial framework that has been established by the courts "has created an unduly narrow definition of sex discrimination and an overly restrictive role for the law in dismantling sex segregation" (Schultz, 1990:1757). Schultz concludes by saying, "courts can acknowledge their own constructive power and use it to help create a work world in which women are empowered to choose the more highly rewarded occupations that Title VII has long promised" (Schultz, 1990:1843). Even at psychological levels, socialized preferences for or against sex segregation can also have significant effects. In one study, women in male-dominated work settings were the most satisfied psychologically with their jobs, while women in occupational settings with only 15-30% men were less satisfied due to favored treatment of the male minority in such a segregated atmosphere. Stark segregation by occupation can lead to a sexual division of labor, influencing the access and control men and women have over inputs and outputs required for labor. Additionally, occupational sex segregation has certain health and safety hazards for each sex, since employment conditions, type of work, and contract and domestic responsibilities vary for types of employment. In many areas of work, women tend to dominate the production line jobs while men occupy managerial and technical jobs. These types of workplace factors and interactions between work and family have been cited by social stratification research as key causes of social inequality. Family roles are especially influential in predicting significant differences in earnings between married couples. Men benefit financially from family roles such as husband and father, while women's incomes are lowered when becoming a wife and mother.
Other gender disparities via sex segregation between men and women include differential asset ownership, house and care work responsibilities, and agency in public and private spheres for each sex. These segregations have persisted because of governmental policy, blocked access for a sex, and/or the existence of sex-based societal gender roles and norms. Perpetuation of gender segregation, especially in economic spheres, creates market and institutional failures. For example, women often occupy jobs with flexible working environments in order to take on care work as well as job responsibilities, but since part-time, flexible hourly jobs pay less and have lower levels of benefits, the concentration of women in these lower income jobs lowers incentives to participate in the same market work as their male counterparts, perpetuating occupational gender lines in societies and within households. Schultz's (1990) article indicates that "working-class women have made it a priority to end job segregation for they want opportunities that enable them to support them and their families" (Schultz, 1990:1755). Additionally, economic development in countries is positively correlated with female workers in wage employment occupations and negatively correlated with female workers in unpaid or part-time work, self-employment, or entrepreneurship, job sectors often occupied by women in developing countries. Many critics of sex segregation see globalization processes as having the potential to promote systemic equality among the sexes.
In fiction
Some literary works of social science fiction that consider sex segregation, within the broader treatment of gender, sex and sexuality in speculative fiction, are the novels Swastika Night and The Handmaid's Tale (the latter later adapted into a TV series).
See also
Age segregation
Athos, a Greek peninsula where women are not allowed
Discrimination against non-binary people
Discrimination against transgender men
Discrimination against transgender women
Feminist separatism
Gender apartheid
Gender-blind
Gender inequality
Gender polarization
Geographical segregation
Mixed-sex sports
Occupational inequality
Occupational sexism
Okinoshima, a Japanese island where women are not allowed
Racial segregation
Religious segregation
School segregation
Separate spheres
Sex differences in humans
Sex segregation in Iran
Sex segregation in Saudi Arabia
Sexism
Transgender discrimination
Transgender inequality
Transmisogyny
Transphobia
Unisex changing rooms
Unisex public toilet
Women and children first
Women-only space
Sex segregation in Afghanistan
References
External links
RAWA – Revolutionary Association of the Women of Afghanistan. Documenting Taliban atrocities against women
Gender Apartheid an essay on the topic from Third World Women's Health
Stop Gender Apartheid in Afghanistan an anti-Taliban pamphlet from the Feminist Majority Foundation
Taking the Gender Apartheid Tour in Saudi Arabia
Saudi Arabia's Apartheid by Colbert King, Washington Post December 22, 2001
Against Sexual Apartheid in Iran Interview with Azar Majedi
Sexual Apartheid in Iran by Mahin Hassibi
Sexism
Islam-related controversies
Women's rights in religious movements
Sex
Gender and society
Culture of Saudi Arabia
Culture of Iran
Single-gender worlds | Sex segregation | [
"Biology"
] | 6,319 | [
"Sex"
] |
597,793 | https://en.wikipedia.org/wiki/Troglitazone | Troglitazone is an antidiabetic and anti-inflammatory drug, and a member of the drug class of the thiazolidinediones. It was prescribed for people with diabetes mellitus type 2.
It was patented in 1983 and approved for medical use in 1997. It was subsequently withdrawn.
Mechanism of action
Troglitazone, like the other thiazolidinediones (pioglitazone and rosiglitazone), works by activating peroxisome proliferator-activated receptors (PPARs).
Troglitazone is a ligand to both PPARα and – more strongly – PPARγ. Troglitazone also contains an α-tocopherol moiety, potentially giving it vitamin E-like activity in addition to its PPAR activation. It has been shown to reduce inflammation: troglitazone use was associated with a decrease of nuclear factor kappa-B (NF-κB) and a concomitant increase in its inhibitor (IκB). NF-κB is an important cellular transcription regulator for the immune response.
History
Troglitazone was developed by Daiichi Sankyo (Japan). In the United States, it was introduced and manufactured by Parke-Davis in the late 1990s but turned out to be associated with an idiosyncratic reaction leading to drug-induced hepatitis. The Food and Drug Administration (FDA) medical officer assigned to evaluate troglitazone, John Gueriguian, did not recommend its approval due to potentially high liver toxicity; Parke-Davis complained to the FDA, and Gueriguian was subsequently removed from his post. A panel of experts approved it in January 1997. Once the prevalence of adverse liver effects became known, troglitazone was withdrawn from the British market in December 1997, from the United States market in 2000, and from the Japanese market soon afterwards. It did not get approval in the rest of Europe.
Troglitazone was developed as the first anti-diabetic drug having a mechanism of action involving the enhancement of insulin resistance. At the time, it was widely believed that such drugs, by addressing the primary metabolic defect associated with Type 2 diabetes, would have numerous benefits including avoiding the risk of hypoglycemia associated with insulin and earlier oral antidiabetic drugs. It was further believed that reducing insulin resistance would potentially reduce the very high rate of cardiovascular disease that is associated with diabetes.
Parke-Davis/Warner Lambert submitted the diabetes drug Rezulin for FDA review on July 31, 1996. The medical officer assigned to the review, Dr. John L. Gueriguian, cited Rezulin's potential to harm the liver and the heart, and he questioned its viability in lowering blood sugar for patients with adult-onset diabetes, recommending against the drug's approval. After complaints from the drugmaker, Gueriguian was removed on November 4, 1996, and his review was purged by the FDA. Gueriguian and the company had a single meeting at which Gueriguian used "intemperate" language; the company said its objections were based on inappropriate remarks made by Gueriguian. Parke-Davis said at the advisory committee that the risk of liver toxicity was comparable to placebo and that additional data of other studies confirmed this. According to Peter Gøtzsche, when the company provided these additional data one week after approval, they showed a substantially greater risk for liver toxicity.
The FDA approved the drug on January 29, 1997, and it appeared in pharmacies in late March. At the time, Dr. Solomon Sobel, a director at the FDA overseeing diabetes drugs, said in a New York Times interview that adverse effects of troglitazone appeared to be rare and relatively mild.
Glaxo Wellcome received approval from the British Medicines Control Agency (MCA) to market troglitazone, as Romozin, in July 1997. After reports of sudden liver failure in patients receiving the drug, Parke-Davis and the FDA added warnings to the drug label requiring monthly monitoring of liver enzyme levels. Glaxo Wellcome removed troglitazone from the market in Britain on December 1, 1997. Glaxo Wellcome had licensed the drug from Sankyo Company of Japan and had sold it in Britain from October 1, 1997.
On May 17, 1998, a 55-year-old patient named Audrey LaRue Jones died of acute liver failure after taking troglitazone. Importantly, she had been monitored closely by physicians at the National Institutes of Health (NIH) as a participant in the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) diabetes prevention study. This called into question the efficacy of the monitoring strategy. The NIH responded on June 4 by dropping troglitazone from the study. Dr. David J. Graham, an FDA epidemiologist charged with evaluating the drug, warned on March 26, 1999 of the dangers of using it and concluded that patient monitoring was not effective in protecting against liver failure. He estimated that the drug could be linked to over 430 liver failures and that patients incurred 1,200 times greater risk of liver failure when taking Rezulin. Dr. Janet B. McGill, an endocrinologist who had assisted in Warner-Lambert's early clinical testing of Rezulin, wrote in a March 1, 2000 letter to Sen. Edward M. Kennedy (D-Mass.): "I believe that the company... deliberately omitted reports of liver toxicity and misrepresented serious adverse events experienced by patients in their clinical studies."
On March 21, 2000, the FDA withdrew the drug from the market. Dr. Robert I. Misbin, an FDA medical officer, wrote in a March 3, 2000 letter to Senator John Ashcroft of strong evidence that Rezulin could not be used safely. He was later threatened by the FDA with dismissal. By that time, the drug had been linked to 63 liver-failure deaths and had generated sales of more than $2.1 billion for Warner-Lambert. The drug cost $1,400 a year per patient in 1998. Pfizer, which had acquired Warner-Lambert in February 2000, reported the withdrawal of Rezulin cost $136 million.
Mechanisms of hepatotoxicity
Since the withdrawal in 2000, mechanisms of troglitazone hepatotoxicity have been extensively studied using a variety of in vivo, in vitro, and computational methods. These studies have suggested that the hepatotoxicity of troglitazone results from a combination of metabolic and nonmetabolic factors. The nonmetabolic toxicity is a complex function of drug-protein interactions in the liver and biliary system. Initially, the metabolic toxicity was largely associated with reactive metabolite formation from the thiazolidinedione and chromane rings of troglitazone. Moreover, the formation of quinone and o-quinone methide reactive metabolites was proposed to occur by metabolic oxidation of the hydroxy group (OH group) of the chromane ring. Detailed quantum chemical analysis of the metabolic pathways for troglitazone has shown that the quinone reactive metabolite is generated by oxidation of the OH group, but the o-quinone methide reactive metabolite is formed by the oxidation of the methyl groups (CH3 groups) ortho to the OH group of the chromane ring. This understanding has recently been used in the design of novel troglitazone derivatives with antiproliferative activity in breast cancer cell lines.
Lawsuits
In 2009, Pfizer resolved all but three of 35,000 claims over its withdrawn diabetes drug Rezulin for a total of about $750 million. Pfizer, which acquired rival Wyeth for almost $64 billion, paid about $500 million to settle Rezulin cases consolidated in federal court in New York, according to court filings. The company also paid as much as $250 million to resolve state-court suits. In 2004, it set aside $955 million to end Rezulin cases.
References
External links
3β-Hydroxysteroid dehydrogenase inhibitors
Aromatase inhibitors
Chromanes
CYP3A4 inducers
Drugs developed by Pfizer
CYP17A1 inhibitors
Hepatotoxins
Phenol ethers
Thiazolidinediones
Withdrawn drugs | Troglitazone | [
"Chemistry"
] | 1,746 | [
"Drug safety",
"Withdrawn drugs"
] |
597,837 | https://en.wikipedia.org/wiki/Sato%E2%80%93Tate%20conjecture | In mathematics, the Sato–Tate conjecture is a statistical statement about the family of elliptic curves Ep obtained from an elliptic curve E over the rational numbers by reduction modulo almost all prime numbers p. Mikio Sato and John Tate independently posed the conjecture around 1960.
If Np denotes the number of points on the elliptic curve Ep defined over the finite field with p elements, the conjecture gives an answer to the distribution of the second-order term for Np. By Hasse's theorem on elliptic curves,

$$\frac{N_p}{p} = 1 + O\!\left(\frac{1}{\sqrt{p}}\right)$$

as $p \to \infty$, and the point of the conjecture is to predict how the O-term varies.
The original conjecture and its generalization to all totally real fields was proved by Laurent Clozel, Michael Harris, Nicholas Shepherd-Barron, and Richard Taylor under mild assumptions in 2008, and completed by Thomas Barnet-Lamb, David Geraghty, Harris, and Taylor in 2011. Several generalizations to other algebraic varieties and fields are open.
Statement
Let E be an elliptic curve defined over the rational numbers without complex multiplication. For a prime number p, define $\theta_p \in [0, \pi]$ as the solution to the equation

$$p + 1 - N_p = 2\sqrt{p}\,\cos\theta_p.$$

Then, for every two real numbers $\alpha$ and $\beta$ for which $0 \le \alpha < \beta \le \pi$,

$$\lim_{N \to \infty} \frac{\#\{p \le N : \alpha \le \theta_p \le \beta\}}{\#\{p \le N\}} = \frac{2}{\pi}\int_\alpha^\beta \sin^2\theta \,\mathrm{d}\theta.$$
Details
By Hasse's theorem on elliptic curves, the ratio

$$\frac{p + 1 - N_p}{2\sqrt{p}}$$

is between -1 and 1. Thus it can be expressed as cos θ for an angle θ; in geometric terms there are two eigenvalues accounting for the remainder, and with the denominator as given they are complex conjugate and of absolute value 1. The Sato–Tate conjecture, when E does not have complex multiplication, states that the probability measure of θ is proportional to

$$\sin^2\theta \,\mathrm{d}\theta.$$
This is due to Mikio Sato and John Tate, who posed it independently around 1960 and published somewhat later.
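The distribution can be seen emerging numerically by brute-force point counting. Below is a minimal Python sketch; the curve y² = x³ + x + 1 is an arbitrary choice (it has no complex multiplication, and its discriminant −496 gives bad reduction only at 2 and 31), and the prime bound is kept small so the naive count stays fast.

```python
from math import acos, sqrt, pi, sin

def primes_up_to(n):
    """Primes up to n by trial division (adequate at this scale)."""
    ps = []
    for m in range(2, n + 1):
        if all(m % p for p in ps if p * p <= m):
            ps.append(m)
    return ps

def count_points(a, b, p):
    """#E(F_p) for y^2 = x^3 + a*x + b, including the point at infinity."""
    sq = [0] * p
    for y in range(p):
        sq[y * y % p] += 1
    return 1 + sum(sq[(x * x * x + a * x + b) % p] for x in range(p))

a, b = 1, 1      # y^2 = x^3 + x + 1; discriminant -496, no CM
bad = {2, 31}    # primes of bad reduction for this curve
thetas = []
for p in primes_up_to(3000):
    if p in bad:
        continue
    ap = p + 1 - count_points(a, b, p)
    # Hasse guarantees |ap| <= 2*sqrt(p); clamp only against float error.
    thetas.append(acos(max(-1.0, min(1.0, ap / (2 * sqrt(p))))))

# Empirical share of theta in [pi/4, pi/2] versus the Sato-Tate
# prediction (2/pi) * integral of sin^2(t) dt over the same interval.
lo, hi = pi / 4, pi / 2
empirical = sum(lo <= t <= hi for t in thetas) / len(thetas)
predicted = (2 / pi) * ((hi - lo) / 2 - (sin(2 * hi) - sin(2 * lo)) / 4)
print(f"empirical {empirical:.3f} vs predicted {predicted:.3f}")
```

With a few hundred primes the empirical share already sits near the predicted value of roughly 0.409, and it drifts closer as the prime bound grows.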
Proof
In 2008, Clozel, Harris, Shepherd-Barron, and Taylor published a proof of the Sato–Tate conjecture for elliptic curves over totally real fields satisfying a certain condition, namely having multiplicative reduction at some prime, in a series of three joint papers.
Further results are conditional on improved forms of the Arthur–Selberg trace formula. Harris has a conditional proof of a result for the product of two elliptic curves (not isogenous) following from such a hypothetical trace formula. In 2011, Barnet-Lamb, Geraghty, Harris, and Taylor proved a generalized version of the Sato–Tate conjecture for an arbitrary non-CM holomorphic modular form of weight greater than or equal to two, by improving the potential modularity results of previous papers. The prior issues involved with the trace formula were solved by Michael Harris and Sug Woo Shin.
In 2015, Richard Taylor was awarded the Breakthrough Prize in Mathematics "for numerous breakthrough results in (...) the Sato–Tate conjecture."
Generalisations
There are generalisations, involving the distribution of Frobenius elements in Galois groups involved in the Galois representations on étale cohomology. In particular there is a conjectural theory for curves of genus n > 1.
Under the random matrix model developed by Nick Katz and Peter Sarnak, there is a conjectural correspondence between (unitarized) characteristic polynomials of Frobenius elements and conjugacy classes in the compact Lie group USp(2n) = Sp(n). The Haar measure on USp(2n) then gives the conjectured distribution, and the classical case is USp(2) = SU(2).
Refinements
There are also more refined statements. The Lang–Trotter conjecture (1976) of Serge Lang and Hale Trotter states the asymptotic number of primes p with a given value of ap, the trace of Frobenius that appears in the formula. For the typical case (no complex multiplication, trace ≠ 0) their formula states that the number of p up to X is asymptotically

$$c\,\frac{\sqrt{X}}{\log X}$$
with a specified constant c. Neal Koblitz (1988) provided detailed conjectures for the case of a prime number q of points on Ep, motivated by elliptic curve cryptography.
In 1999, Chantal David and Francesco Pappalardi proved an averaged version of the Lang–Trotter conjecture.
See also
Wigner semicircle distribution
References
External links
Report on Barry Mazur giving context
Michael Harris notes, with statement (PDF)
La Conjecture de Sato–Tate [d'après Clozel, Harris, Shepherd-Barron, Taylor], Bourbaki seminar June 2007 by Henri Carayol (PDF)
Video introducing Elliptic curves and its relation to Sato-Tate conjecture, Imperial College London, 2014 (Last 15 minutes)
Elliptic curves
Finite fields
Conjectures | Sato–Tate conjecture | [
"Mathematics"
] | 945 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
597,844 | https://en.wikipedia.org/wiki/Levitation%20%28physics%29 | Levitation (from Latin , ) is the process by which an object is held aloft in a stable position, without mechanical support via any physical contact.
Levitation is accomplished by providing an upward force that counteracts the pull of gravity (in relation to gravity on earth), plus a smaller stabilizing force that pushes the object toward a home position whenever it is a small distance away from that home position. The force can be a fundamental force such as magnetic or electrostatic, or it can be a reactive force such as optical, buoyant, aerodynamic, or hydrodynamic.
Levitation excludes floating at the surface of a liquid because the liquid provides direct mechanical support. Levitation excludes hovering flight by insects, hummingbirds, helicopters, rockets, and balloons because the object provides its own counter-gravity force.
Physics
Levitation (on Earth or any planetoid) requires an upward force that cancels out the weight of the object, so that the object does not fall (accelerate downward) or rise (accelerate upward). For positional stability, any small displacement of the levitating object must result in a small change in force in the opposite direction. The small changes in force can be accomplished by gradient field(s) or by active regulation. If the object is disturbed, it might oscillate around its final position, but its motion eventually decreases to zero due to damping effects. (In a turbulent flow, the object might oscillate indefinitely.)
Levitation techniques are useful tools in physics research. For example, levitation methods are useful for high-temperature melt property studies because they eliminate the problem of reaction with containers and allow deep undercooling of melts. The containerless conditions may be obtained by opposing gravity with a levitation force instead of allowing an entire experiment to freefall.
Magnetic levitation
Magnetic levitation is the most commonly seen and used form of levitation. This form of levitation occurs when an object is suspended using magnetic fields.
Diamagnetic materials are commonly used for demonstration purposes. In this case the returning force appears from the interaction with the screening currents. For example, a superconducting sample, which can be considered either as a perfect diamagnet or an ideally hard superconductor, easily levitates in an ambient external magnetic field. Cooled with liquid nitrogen, the superconductor becomes strongly diamagnetic and levitates on top of a magnet. In a powerful magnetic field utilizing diamagnetic levitation, even small live animals have been levitated.
It is possible to levitate pyrolytic graphite by placing thin squares of it above four cube magnets with the north poles forming one diagonal and south poles forming the other diagonal. Researchers have even successfully levitated (non-magnetic) liquid droplets surrounded by paramagnetic fluids. This process of inverse magnetic levitation is usually referred to as the magneto-Archimedes effect.
Magnetic levitation is in development for use for transportation systems. For example, the Maglev includes trains that are levitated by a large number of magnets. Due to the lack of friction on the guide rails, they are faster, quieter, and smoother than wheeled mass transit systems.
Electrodynamic suspension uses AC magnetic fields.
Electrostatic levitation
In electrostatic levitation an electric field is used to counteract gravitational force. Some spiders shoot silk into the air to ride Earth's electric field.
Aerodynamic levitation
In aerodynamic levitation, the levitation is achieved by floating the object on a stream of gas, either produced by the object or acting on the object. For example, a ping pong ball can be levitated with the stream of air from a vacuum cleaner set on "blow" - exploiting the Coandă effect which keeps it stable in the airstream. With enough thrust, very large objects can be levitated using this method.
Gas film levitation
This technique enables the levitation of an object against gravitational force by floating it on a thin gas film formed by gas flow through a porous membrane. Using this technique, high temperature melts can be kept clean from contamination and be supercooled. A common example in general usage includes air hockey, where the puck is lifted by a thin layer of air. Hovercraft also use this technique, producing a large region of high-pressure air underneath them.
Acoustic levitation
Acoustic levitation uses sound waves to provide a levitating force.
Optical levitation
Optical levitation is a technique in which a material is levitated against the downward force of gravity by an upward force stemming from photon momentum transfer (radiation pressure).
Buoyant levitation
Gases at high pressure can have a density exceeding that of some solids. Thus they can be used to levitate solid objects through buoyancy. Noble gases are preferred for their non-reactivity. Xenon is the densest non-radioactive noble gas, at 5.894 g/L. Xenon has been used to levitate polyethylene, at a pressure of 154 atm.
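As a rough consistency check on those figures, ideal-gas scaling of the 1 atm density gives the pressure needed to match polyethylene's density; real xenon is noticeably non-ideal at such pressures, which is why the quoted 154 atm is a little lower. A sketch (the polyethylene density is an assumed typical value):
rho_xe_1atm = 5.894    # g/L at 1 atm, from the text
rho_pe = 940.0         # g/L, typical polyethylene density (assumed)
print(f"ideal-gas estimate: {rho_pe / rho_xe_1atm:.0f} atm")   # ~159 atm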
Casimir force
Scientists have discovered a way of levitating ultra small objects by manipulating the Casimir force, which normally causes objects to stick together due to forces predicted by quantum field theory. This is, however, only possible for micro-objects.
Uses
Maglev trains
Magnetic levitation is used to suspend trains without touching the track. This permits very high speeds, and greatly reduces the maintenance requirements for tracks and vehicles, as little wear occurs. This also means there is no friction, so the only force acting against it is air resistance.
Animal levitation
Scientists have levitated frogs, grasshoppers, and mice by means of powerful electromagnets utilizing superconductors, producing diamagnetic repulsion of body water. The mice acted confused at first, but adapted to the levitation after approximately four hours, suffering no immediate ill effects.
See also
Levitation (illusion)
Levitation based inertial sensing
Anti-gravity
Flight
Leidenfrost effect
Telekinesis
Weightlessness
References
External links
Diamagnetic Levitation (YouTube)
Superconducting Levitation Demos
Gravity | Levitation (physics) | [
"Physics"
] | 1,260 | [
"Physical phenomena",
"Motion (physics)",
"Levitation"
] |
597,998 | https://en.wikipedia.org/wiki/Multinomial%20theorem | In mathematics, the multinomial theorem describes how to expand a power of a sum in terms of powers of the terms in that sum. It is the generalization of the binomial theorem from binomials to multinomials.
Theorem
For any positive integer $m$ and any non-negative integer $n$, the multinomial theorem describes how a sum with $m$ terms expands when raised to the $n$th power:
$$(x_1 + x_2 + \cdots + x_m)^n = \sum_{k_1 + k_2 + \cdots + k_m = n} \binom{n}{k_1, k_2, \ldots, k_m} x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m}$$
where
$$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!}$$
is a multinomial coefficient. The sum is taken over all combinations of nonnegative integer indices $k_1$ through $k_m$ such that the sum of all $k_i$ is $n$. That is, for each term in the expansion, the exponents of the $x_i$ must add up to $n$.
In the case $m = 2$, this statement reduces to that of the binomial theorem.
Example
The third power of the trinomial $a + b + c$ is given by
$$(a + b + c)^3 = a^3 + b^3 + c^3 + 3a^2b + 3a^2c + 3ab^2 + 3b^2c + 3ac^2 + 3bc^2 + 6abc.$$
This can be computed by hand using the distributive property of multiplication over addition and combining like terms, but it can also be done (perhaps more easily) with the multinomial theorem. It is possible to "read off" the multinomial coefficients from the terms by using the multinomial coefficient formula. For example, the term $a^2 b$ has coefficient $\binom{3}{2,1,0} = \frac{3!}{2!\,1!\,0!} = 3$, the term $abc$ has coefficient $\binom{3}{1,1,1} = \frac{3!}{1!\,1!\,1!} = 6$, and so on.
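A short Python sketch that computes these coefficients programmatically (the helper function name multinomial is ours, not a standard-library API):
from math import factorial
from itertools import product

def multinomial(n, ks):
    # n! / (k1! k2! ... km!), assuming sum(ks) == n
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# Coefficients of (a + b + c)^3: one per exponent triple (i, j, k) with i + j + k = 3
for i, j, k in product(range(4), repeat=3):
    if i + j + k == 3:
        print(f"a^{i} b^{j} c^{k}: {multinomial(3, (i, j, k))}")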
Alternate expression
The statement of the theorem can be written concisely using multiindices:
$$(x_1 + \cdots + x_m)^n = \sum_{|\alpha| = n} \binom{n}{\alpha} x^\alpha$$
where
$$\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_m), \qquad |\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_m$$
and
$$x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_m^{\alpha_m}, \qquad \binom{n}{\alpha} = \frac{n!}{\alpha_1!\, \alpha_2! \cdots \alpha_m!}.$$
Proof
This proof of the multinomial theorem uses the binomial theorem and induction on $m$.
First, for $m = 1$, both sides equal $x_1^n$ since there is only one term $k_1 = n$ in the sum. For the induction step, suppose the multinomial theorem holds for $m - 1$. Then
$$(x_1 + \cdots + x_{m-1} + x_m)^n = \big(x_1 + \cdots + x_{m-2} + (x_{m-1} + x_m)\big)^n = \sum_{k_1 + \cdots + k_{m-2} + K = n} \binom{n}{k_1, \ldots, k_{m-2}, K}\, x_1^{k_1} \cdots x_{m-2}^{k_{m-2}} (x_{m-1} + x_m)^K$$
by the induction hypothesis. Applying the binomial theorem to the last factor,
$$= \sum_{k_1 + \cdots + k_{m-2} + K = n} \binom{n}{k_1, \ldots, k_{m-2}, K}\, x_1^{k_1} \cdots x_{m-2}^{k_{m-2}} \sum_{k_{m-1} + k_m = K} \binom{K}{k_{m-1}, k_m} x_{m-1}^{k_{m-1}} x_m^{k_m} = \sum_{k_1 + \cdots + k_m = n} \binom{n}{k_1, \ldots, k_m}\, x_1^{k_1} \cdots x_m^{k_m},$$
which completes the induction. The last step follows because
$$\binom{n}{k_1, \ldots, k_{m-2}, K} \binom{K}{k_{m-1}, k_m} = \binom{n}{k_1, \ldots, k_m} \quad \text{with } K = k_{m-1} + k_m,$$
as can easily be seen by writing the three coefficients using factorials as follows:
$$\frac{n!}{k_1! \cdots k_{m-2}!\, K!} \cdot \frac{K!}{k_{m-1}!\, k_m!} = \frac{n!}{k_1! \cdots k_m!}.$$
Multinomial coefficients
The numbers
$$\binom{n}{k_1, k_2, \ldots, k_m}$$
appearing in the theorem are the multinomial coefficients. They can be expressed in numerous ways, including as a product of binomial coefficients or of factorials:
$$\binom{n}{k_1, k_2, \ldots, k_m} = \frac{n!}{k_1!\, k_2! \cdots k_m!} = \binom{k_1}{k_1} \binom{k_1 + k_2}{k_2} \cdots \binom{k_1 + k_2 + \cdots + k_m}{k_m}.$$
Sum of all multinomial coefficients
The substitution of $x_i = 1$ for all $i$ into the multinomial theorem
$$\sum_{k_1 + k_2 + \cdots + k_m = n} \binom{n}{k_1, k_2, \ldots, k_m} x_1^{k_1} x_2^{k_2} \cdots x_m^{k_m} = (x_1 + x_2 + \cdots + x_m)^n$$
gives immediately that
$$\sum_{k_1 + k_2 + \cdots + k_m = n} \binom{n}{k_1, k_2, \ldots, k_m} = m^n.$$
Number of multinomial coefficients
The number of terms in a multinomial sum, $\#_{n,m}$, is equal to the number of monomials of degree $n$ on the variables $x_1, \ldots, x_m$:
$$\#_{n,m} = \binom{n + m - 1}{m - 1}.$$
The count can be performed easily using the method of stars and bars.
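The count can also be verified by direct enumeration for small cases; a brief Python sketch:
from math import comb
from itertools import product

n, m = 3, 3   # degree and number of variables
terms = sum(1 for ks in product(range(n + 1), repeat=m) if sum(ks) == n)
print(terms, comb(n + m - 1, m - 1))   # both print 10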
Valuation of multinomial coefficients
The largest power of a prime that divides a multinomial coefficient may be computed using a generalization of Kummer's theorem.
Asymptotics
By Stirling's approximation, or equivalently the log-gamma function's asymptotic expansion,
$$\binom{kn}{n, n, \ldots, n} \sim \frac{\sqrt{k}\; k^{kn}}{(2\pi n)^{(k-1)/2}},$$
so for example,
$$\binom{2n}{n, n} \sim \frac{4^n}{\sqrt{\pi n}}.$$
Interpretations
Ways to put objects into bins
The multinomial coefficients have a direct combinatorial interpretation, as the number of ways of depositing $n$ distinct objects into $m$ distinct bins, with $k_1$ objects in the first bin, $k_2$ objects in the second bin, and so on.
Number of ways to select according to a distribution
In statistical mechanics and combinatorics, if one has a number distribution of labels, then the multinomial coefficients naturally arise from the binomial coefficients. Given a number distribution $\{n_i\}$ on a set of $N$ total items, $n_i$ represents the number of items to be given the label $i$. (In statistical mechanics $i$ is the label of the energy state.)
The number of arrangements is found by
Choosing $n_1$ of the total $N$ to be labeled 1. This can be done $\binom{N}{n_1}$ ways.
From the remaining $N - n_1$ items choose $n_2$ to label 2. This can be done $\binom{N - n_1}{n_2}$ ways.
From the remaining $N - n_1 - n_2$ items choose $n_3$ to label 3. Again, this can be done $\binom{N - n_1 - n_2}{n_3}$ ways.
Multiplying the number of choices at each step results in:
$$\binom{N}{n_1} \binom{N - n_1}{n_2} \binom{N - n_1 - n_2}{n_3} \cdots = \frac{N!}{n_1!\,(N - n_1)!} \cdot \frac{(N - n_1)!}{n_2!\,(N - n_1 - n_2)!} \cdot \frac{(N - n_1 - n_2)!}{n_3!\,(N - n_1 - n_2 - n_3)!} \cdots$$
Cancellation results in the formula given above.
Number of unique permutations of words
The multinomial coefficient
$$\binom{n}{k_1, k_2, \ldots, k_m}$$
is also the number of distinct ways to permute a multiset of $n$ elements, where $k_i$ is the multiplicity of the $i$th element. For example, the number of distinct permutations of the letters of the word MISSISSIPPI, which has 1 M, 4 Is, 4 Ss, and 2 Ps, is
$$\binom{11}{1, 4, 4, 2} = \frac{11!}{1!\, 4!\, 4!\, 2!} = 34650.$$
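Enumerating all 11! arrangements of MISSISSIPPI directly would be slow, but the factorial formula is easy to compute and can be brute-force checked on a shorter word; a Python sketch:
from math import factorial
from itertools import permutations
from collections import Counter

def multiset_permutations(word):
    # len(word)! divided by the factorial of each letter's multiplicity
    out = factorial(len(word))
    for count in Counter(word).values():
        out //= factorial(count)
    return out

print(multiset_permutations("MISSISSIPPI"))   # 34650
print(multiset_permutations("BANANA"))        # 60
print(len(set(permutations("BANANA"))))       # 60, brute-force agreement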
Generalized Pascal's triangle
One can use the multinomial theorem to generalize Pascal's triangle or Pascal's pyramid to Pascal's simplex. This provides a quick way to generate a lookup table for multinomial coefficients.
See also
Multinomial distribution
Stars and bars (combinatorics)
References
Combinatorics
Factorial and binomial topics
Articles containing proofs
Theorems about polynomials | Multinomial theorem | [
"Mathematics"
] | 912 | [
"Discrete mathematics",
"Factorial and binomial topics",
"Theorems in algebra",
"Combinatorics",
"Theorems about polynomials",
"Articles containing proofs"
] |
598,134 | https://en.wikipedia.org/wiki/Join%20%28Unix%29 | join is a command in Unix and Unix-like operating systems that merges the lines of two sorted text files based on the presence of a common field. It is similar to the join operator used in relational databases but operating on text files.
Overview
The join command takes as input two text files and several options. If no command-line argument is given, this command looks for a pair of lines from the two files having the same first field (a sequence of non-space characters), and outputs a line composed of the first field followed by the rest of the two lines.
The program arguments specify which character to be used in place of space to separate the fields of the line, which field to use when looking for matching lines, and whether to output lines that do not match. The output can be stored to another file rather than printed using redirection.
As an example, the two following files list the known fathers and the mothers of some people. Both files have been sorted on the join field — this is a requirement of the program. The fathers file:
george jim
kumar gunaware
The mothers file:
albert martha
george sophie
The join of these two files (with no argument) would produce:
george jim sophie
Indeed, only "george" is common as a first word of both files.
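If the two files above are saved as fathers.txt and mothers.txt (illustrative names, not part of the original example), the -a option of join also prints lines that found no match in the other file:
join -a 1 -a 2 fathers.txt mothers.txt
albert martha
george jim sophie
kumar gunaware
Here -a 1 and -a 2 keep the unpairable lines from the first and second file respectively, behaving like a full outer join in SQL terms.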
History
join is intended to be a relational database operator. It is part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification.
The version of join bundled in GNU coreutils was written by Mike Haertel. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
See also
Textutils
Join (SQL)
Relational algebra
List of Unix commands
References
External links
join command
Unix text processing utilities
Unix SUS2008 utilities
Plan 9 commands | Join (Unix) | [
"Technology"
] | 387 | [
"Computing commands",
"Plan 9 commands"
] |
598,143 | https://en.wikipedia.org/wiki/Millennium%20Technology%20Prize | The Millennium Technology Prize () is one of the world's largest technology prizes. It is awarded once every two years by Technology Academy Finland, an independent foundation established by Finnish industries, academic institutions, and the state of Finland. The patron of the prize is the President of Finland. The Millennium Technology Prize is Finland's tribute to innovations for a better life. The aims of the prize are to promote technological research and Finland as a high-tech Nordic welfare state. The prize was inaugurated in 2004.
The Prize
The idea of the prize came originally from the Finnish academician Pekka Jauho, with American real estate investor and philanthropist Arthur J Collingsworth encouraging its establishment. The Prize celebrates innovations that have a favorable and sustainable impact on the quality of life and well-being of people. The innovations also must have been applied in practice and stimulate further research and development. The Millennium Technology Prize is a technology award, whereas the Nobel Prize is a science award. Furthermore, the Nobel Prize is awarded for basic research, but the Millennium Technology Prize may be given to a recently conceived innovation which is still being developed. The Millennium Technology Prize is not intended as a reward for lifetime achievement.
The Millennium Technology Prize is awarded by Technology Academy Finland (formerly Millennium Prize Foundation and Finnish Technology Award Foundation), established in 2002 by eight Finnish organisations supporting technological development and innovation. The prize sum is 1 million euros (~US$1.3 million). The Millennium Technology Prize is awarded every second year and its patron is the president of Finland.
Universities, research institutes, national scientific and engineering academies, high-tech companies, and other organizations around the world are eligible to nominate individuals or groups for the award. Nominations are accepted from any field except military technology. In accordance with the rules of the Technology Academy Finland, a proposal concerning the winner of the Millennium Technology Prize is made to the board of the foundation by the eight-member international selection committee, and the final decision on the prize winner is made by the board of the foundation.
International Selection Committee (ISC)
Current members of the selection committee include:
Päivi Törmä, Professor at Aalto University and Chairman of ISC 2017-2024.
Nigel Brandon, Dean of the Faculty of Engineering at Imperial College London
Thierry E. Klein, President of Bell Labs Solutions Research at Nokia Bell Labs
Ilan Spillinger, Executive Vice President and Corporate CTO of imec.
Sirpa Jalkanen, Professor of Immunology at University of Turku
Cecilia Tortajada, Professor in Practice in Environmental Innovation (Interdisciplinary Studies) in the University of Glasgow
Past committee members include:
Jonathan Knowles, Visiting Professor, FIMM, HiLIFE, University of Helsinki, Finland
Tero Ojanperä, Chairman of Silo.AI company
Jarl-Thure Eriksson, Chancellor of Åbo Akademi University and former Rector of Tampere University of Technology (Finland)
Eva-Mari Aro, Professor in Molecular Plant Biology at University of Turku (Finland)
Jaakko Astola, Professor of Signal Processing at Tampere University of Technology (Finland)
Craig R. Barrett, Retired CEO/Chairman of the Board of Intel Corporation (United States)
Riitta Hari, Director of both the multidisciplinary Brain Research Unit of the Low Temperature Laboratory at Aalto University and the national Centre of Excellence on Systems Neuroscience and Neuroimaging Research (Finland)
Konrad Osterwalder, Former Rector of the United Nations University and Under-Secretary-General of the United Nations (Switzerland)
Ayao Tsuge, President of the Japan Federation of Engineering Society and President of Japan International Science and Technology Exchange Center (Japan)
Laureates
*The 2020 ceremony was postponed due to the COVID-19 pandemic, and was held on 18 May 2021.
See also
Harvey Prize
IMU Abacus Medal
Japan Prize
Kyoto Prize
Nobel Prize
Schock Prize
Shaw Prize
Tang Prize
ACM Turing Award
IET Faraday Medal
IEEE Medal of Honor
Queen Elizabeth Prize for Engineering
List of engineering awards
References
External links
The Millennium Technology Prize - Official site
The Millennium Technology Prize Youtube Channel
Technology Academy Finland - Official site
International awards
Invention awards
Science and technology awards
Science and technology in Finland | Millennium Technology Prize | [
"Technology"
] | 857 | [
"Science and technology awards",
"Invention awards",
"International science and technology awards"
] |
598,272 | https://en.wikipedia.org/wiki/Leblanc%20process | The Leblanc process was an early industrial process for making soda ash (sodium carbonate) used throughout the 19th century, named after its inventor, Nicolas Leblanc. It involved two stages: making sodium sulfate from sodium chloride, followed by reacting the sodium sulfate with coal and calcium carbonate to make sodium carbonate. The process gradually became obsolete after the development of the Solvay process.
Background
Soda ash (sodium carbonate) and potash (potassium carbonate), collectively termed alkali, are vital chemicals in the glass, textile, soap, and paper industries. The traditional source of alkali in western Europe had been potash obtained from wood ashes. However, by the 13th century, deforestation had rendered this means of production uneconomical, and alkali had to be imported. Potash was imported from North America, Scandinavia, and Russia, where large forests still stood. Soda ash was imported from Spain and the Canary Islands, where it was produced from the ashes of glasswort plants (called barilla ashes in Spain), or imported from Syria. The soda ash from glasswort plant ashes was mainly a mixture of sodium carbonate and potassium carbonate. In addition in Egypt, naturally occurring sodium carbonate, the mineral natron, was mined from dry lakebeds. In Britain, the only local source of alkali was from kelp, which washed ashore in Scotland and Ireland.
In 1783, King Louis XVI of France and the French Academy of Sciences offered a prize of 2400 livres for a method to produce alkali from sea salt (sodium chloride). In 1791, Nicolas Leblanc, physician to Louis Philip II, Duke of Orléans, patented a solution. That same year he built the first Leblanc plant for the Duke at Saint-Denis, and this began to produce 320 tons of soda per year. He was denied his prize money because of the French Revolution.
For more recent history, see industrial history below.
Chemistry
In the first step, sodium chloride is treated with sulfuric acid in the Mannheim process. This reaction produces sodium sulfate (called the salt cake) and hydrogen chloride:
2 NaCl + H2SO4 → Na2SO4 + 2 HCl
This chemical reaction had been discovered in 1772 by the Swedish chemist Carl Wilhelm Scheele. Leblanc's contribution was the second step, in which a mixture of the salt cake and crushed limestone (calcium carbonate) was reduced by heating with coal. This conversion entails two parts. First is the carbothermic reaction whereby the coal, a source of carbon, reduces the sulfate to sulfide:
Na2SO4 + 2 C → Na2S + 2 CO2
In the second stage, is the reaction to produce sodium carbonate and calcium sulfide. This mixture is called black ash.
Na2S + CaCO3 → Na2CO3 + CaS
The soda ash is extracted from the black ash with water. Evaporation of this extract yields solid sodium carbonate. This extraction process was termed lixiviation.
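The overall mass balance implied by the equations above can be checked with a short calculation; the sketch below (Python) uses rounded molar masses and assumes an idealized 100% yield, so it is an upper bound rather than plant data:
M_NaCl, M_Na2CO3, M_HCl = 58.44, 105.99, 36.46   # g/mol, rounded molar masses
tons_salt = 1.0
soda = tons_salt / M_NaCl * 0.5 * M_Na2CO3   # 2 NaCl -> 1 Na2CO3 overall
hcl = tons_salt / M_NaCl * M_HCl             # 2 NaCl -> 2 HCl in the first step
print(f"per ton of salt: {soda:.2f} t soda ash, {hcl:.2f} t HCl (theoretical)")
# ~0.91 t soda ash and ~0.62 t HCl, consistent with the 5.5 : 8 HCl-to-soda
# ratio quoted under "Pollution issues" below.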
In response to the Alkali Act, the noxious calcium sulfide was converted into calcium carbonate:
CaS + H2O + CO2 → CaCO3 + H2S
The hydrogen sulfide can be used as a sulfur source for the lead chamber process to produce the sulfuric acid used in the first step of the Leblanc process.
Likewise, by 1874 the Deacon process was invented, oxidizing the hydrochloric acid over a copper catalyst:
4 HCl + O2 → 2 Cl2 + 2 H2O
The chlorine would be sold for bleach in paper and textile manufacturing. Eventually, the chlorine sales became the purpose of the Leblanc process. The inexpensive chlorine was a contributor to the development of the chloralkali process.
Process detail
The sodium chloride is initially mixed with concentrated sulfuric acid and the mixture exposed to low heat. The hydrogen chloride gas bubbles off; before gas absorption towers were introduced, it was simply discarded to the atmosphere. This continues until all that is left is a fused mass. This mass still contains enough chloride to contaminate the later stages of the process. The mass is then exposed to direct flame, which evaporates nearly all of the remaining chloride.
The coal used in the next step must be low in nitrogen to avoid the formation of cyanide. The calcium carbonate, in the form of limestone or chalk, should be low in magnesia and silica. The weight ratio of the charge is 2:2:1 of salt cake, calcium carbonate, and carbon respectively. It is fired in a reverberatory furnace at about 1000 °C. Sometimes the reverberatory furnace rotated and thus was called a "revolver".
The black-ash product of firing must be lixiviated right away to prevent oxidation of sulfides back to sulfate. In the lixiviation process, the black-ash is completely covered in water, again to prevent oxidation. To optimize the leaching of soluble material, the lixiviation is done in cascaded stages. That is, pure water is used on the black-ash that has already been through prior stages. The liquor from that stage is used to leach an earlier stage of the black-ash, and so on.
The final liquor is treated by blowing carbon dioxide through it. This precipitates dissolved calcium and other impurities. It also volatilizes the sulfide, which is carried off as H2S gas. Any residual sulfide can be subsequently precipitated by adding zinc hydroxide. The liquor is separated from the precipitate and evaporated using waste heat from the reverberatory furnace. The resulting ash is then redissolved into concentrated solution in hot water. Solids that fail to dissolve are separated. The solution is then cooled to recrystallize nearly pure sodium carbonate decahydrate.
Industrial history
Leblanc established the first Leblanc process plant in 1791 in St. Denis. However, French Revolutionaries seized the plant, along with the rest of Louis Philip's estate, in 1794, and publicized Leblanc's trade secrets. Napoleon I returned the plant to Leblanc in 1801, but lacking the funds to repair it and compete against other soda works that had been established in the meantime, Leblanc committed suicide in 1806.
By the early 19th century, French soda ash producers were making 10,000 - 15,000 tons annually. However, it was in Britain that the Leblanc process became most widely practiced. The first British soda works using the Leblanc process was built by the Losh family of iron founders at the Losh, Wilson and Bell works in Walker on the River Tyne in 1816, but steep British tariffs on salt production hindered the economics of the Leblanc process and kept such operations on a small scale until 1824. Following the repeal of the salt tariff, the British soda industry grew dramatically. The Bonnington Chemical Works was possibly the earliest production, and the chemical works established by James Muspratt in Liverpool and Flint, and by Charles Tennant near Glasgow became some of the largest in the world. Muspratt's Liverpool works enjoyed proximity and transport links to the Cheshire salt mines, the St Helens coalfields and the North Wales and Derbyshire limestone quarries. By 1852, annual soda production had reached 140,000 tons in Britain and 45,000 tons in France. By the 1870s, the British soda output of 200,000 tons annually exceeded that of all other nations in the world combined.
Obsolescence
In 1861, the Belgian chemist Ernest Solvay developed a more direct process for producing soda ash from salt and limestone through the use of ammonia. The only waste product of this Solvay process was calcium chloride, and so it was both more economical and less polluting than the Leblanc method. From the late 1870s, Solvay-based soda works on the European continent provided stiff competition in their home markets to the Leblanc-based British soda industry. Additionally the Brunner Mond Solvay plant which opened in 1874 at Winnington near Northwich provided fierce competition nationally. Leblanc producers were unable to compete with Solvay soda ash, and their soda ash production was effectively an adjunct to their still profitable production of chlorine, bleaching powder etc. (The unwanted by-products had become the profitable products). The development of electrolytic methods of chlorine production removed that source of profits as well, and there followed a decline moderated only by "gentlemen's agreements" with Solvay producers. By 1900, 90% of the world's soda production was through the Solvay method; on the North American continent, soda ash has instead been obtained by mining trona since its discovery in 1938, which caused the closure of the last North American Solvay plant in 1986.
The last Leblanc-based soda ash plant in the West closed in the early 1920s, but when during WWII Nationalist China had to evacuate its industry to the inland rural areas, the difficulties in importing and maintaining complex equipment forced them to temporarily re-establish the Leblanc process.
However, the Solvay process does not work for the manufacture of potassium carbonate, because it relies on the low solubility of the corresponding bicarbonate. Thus, the Leblanc process continued in limited use for K2CO3 manufacture until much later.
Pollution issues
The Leblanc process plants were quite damaging to the local environment. The process of generating salt cake from salt and sulfuric acid released hydrochloric acid gas, and because this acid was industrially useless in the early 19th century, it was simply vented into the atmosphere. Also, an insoluble smelly solid waste was produced. For every 8 tons of soda ash, the process produced 5.5 tons of hydrogen chloride and 7 tons of calcium sulfide waste. This solid waste (known as galligu) had no economic value, and was piled in heaps and spread on fields near the soda works, where it weathered to release hydrogen sulfide, the toxic gas responsible for the odor of rotten eggs.
Because of their noxious emissions, Leblanc soda works became targets of lawsuits and legislation. An 1839 suit against soda works alleged, "the gas from these manufactories is of such a deleterious nature as to blight everything within its influence, and is alike baneful to health and property. The herbage of the fields in their vicinity is scorched, the gardens neither yield fruit nor vegetables; many flourishing trees have lately become rotten naked sticks. Cattle and poultry droop and pine away. It tarnishes the furniture in our houses, and when we are exposed to it, which is of frequent occurrence, we are afflicted with coughs and pains in the head ... all of which we attribute to the Alkali works."
In 1863, the British Parliament passed the Alkali Act 1863, the first of several Alkali Acts, the first modern air pollution legislation. This act allowed that no more than 5% of the hydrochloric acid produced by alkali plants could be vented to the atmosphere. To comply with the legislation, soda works passed the escaping hydrogen chloride gas up through a tower packed with charcoal, where it was absorbed by water flowing in the other direction. The chemical works usually dumped the resulting hydrochloric acid solution into nearby bodies of water, killing fish and other aquatic life.
The Leblanc process also meant very unpleasant working conditions for the operators. It originally required careful operation and frequent operator interventions (some involving heavy manual labour) into processes giving off hot noxious chemicals.
Sometimes, workmen cleaning the reaction products out of the reverberatory furnace wore cloth mouth-and-nose gags to keep dust and aerosols out of the lungs.
This improved somewhat later as processes were more heavily mechanised to improve economics and uniformity of product.
By the 1880s, methods for converting the hydrochloric acid to chlorine gas for the manufacture of bleaching powder and for reclaiming the sulfur in the calcium sulfide waste had been discovered, but the Leblanc process remained more wasteful and more polluting than the Solvay process. The same is true when it is compared with the later electrolytical processes which eventually replaced it for chlorine production.
Biodiversity
There is a strong case for arguing that Leblanc process waste is the most endangered habitat in the UK, since the waste weathers down to calcium carbonate and produces a haven for plants that thrive in lime-rich soils, known as calcicoles. Only four such sites have survived the new millennium; three are protected as local nature reserves of which the largest, at Nob End near Bolton, is an SSSI and Local Nature Reserve - largely for its sparse orchid-calcicole flora, most unusual in an area with acid soils. This alkaline island contains within it an acid island, where acid boiler slag was deposited, which now shows up as a zone dominated by heather, Calluna vulgaris.
References
External links
T. Howard Deighton. The struggle for supremacy : being a series of chapters in the history of the leblanc alkali industry in Great Britain: 32:Wealth from Waste "...this abominable refuse tipped by millions of tons upon the banks of the .. Tyne"
http://cavemanchemistry.com/oldcave/projects/chloralkali/index.html
See also
Sodium carbonate
Alkali Act
Deacon Process
Soda-lime glass
Chemical processes
Industrial Revolution
French inventions
History of the chemical industry
Name reactions
Industrial processes | Leblanc process | [
"Chemistry"
] | 2,764 | [
"History of the chemical industry",
"Name reactions",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
598,302 | https://en.wikipedia.org/wiki/Pathophysiology | Pathophysiology (or physiopathology) is a branch of study, at the intersection of pathology and physiology, concerning disordered physiological processes that cause, result from, or are otherwise associated with a disease or injury. Pathology is the medical discipline that describes conditions typically observed during a disease state, whereas physiology is the biological discipline that describes processes or mechanisms operating within an organism. Pathology describes the abnormal or undesired condition (symptoms of a disease), whereas pathophysiology seeks to explain the functional changes that are occurring within an individual due to a disease or pathologic state.
Etymology
The term pathophysiology comes from the Ancient Greek πάθος (pathos, "suffering") and φυσιολογία (physiologia, "natural philosophy").
History
Early Developments
The origins of pathophysiology as a distinct field date back to the late 18th century. The first known lectures on the subject were delivered by Professor Hecker at the University of Erfurt in 1790, and in 1791, he published the first textbook on pathophysiology, Grundriss der Physiologia pathologica, spanning 770 pages. Hecker also established the first academic journal in the field, Magazin für die pathologische Anatomie und Physiologie, in 1796. The French physician Jean François Fernel had earlier suggested in 1542 that a distinct branch of physiology should study the functions of diseased organisms, an idea developed further in 1617, when the term "pathologic physiology" was first coined in a medical text.
Nineteenth century
Reductionism
In Germany in the 1830s, Johannes Müller led the establishment of physiology research autonomous from medical research. In 1843, the Berlin Physical Society was founded in part to purge biology and medicine of vitalism, and in 1847 Hermann von Helmholtz, who joined the Society in 1845, published the paper "On the conservation of energy", highly influential to reduce physiology's research foundation to physical sciences. In the late 1850s, German anatomical pathologist Rudolf Virchow, a former student of Müller, directed focus to the cell, establishing cytology as the focus of physiological research. He also recognized pathophysiology as a distinct discipline, arguing that it should rely on clinical observation and experimentation rather than purely anatomical pathology. Virchow’s influence extended to his student Julius Cohnheim, who pioneered experimental pathology and the usage of intravital microscopy, further advancing the study of pathophysiology.
Germ theory
By 1863, motivated by Louis Pasteur's report on fermentation to butyric acid, fellow Frenchman Casimir Davaine identified a microorganism as the crucial causal agent of the cattle disease anthrax, but its routinely vanishing from blood left other scientists inferring it a mere byproduct of putrefaction. In 1876, upon Ferdinand Cohn's report of a tiny spore stage of a bacterial species, the fellow German Robert Koch isolated Davaine's bacterides in pure culture —a pivotal step that would establish bacteriology as a distinct discipline— identified a spore stage, applied Jakob Henle's postulates, and confirmed Davaine's conclusion, a major feat for experimental pathology. Pasteur and colleagues followed up with ecological investigations confirming its role in the natural environment via spores in soil.
Also, as to sepsis, Davaine had injected rabbits with a highly diluted, tiny amount of putrid blood, duplicated disease, and used the term ferment of putrefaction, but it was unclear whether this referred as did Pasteur's term ferment to a microorganism or, as it did for many others, to a chemical. In 1878, Koch published Aetiology of Traumatic Infective Diseases, unlike any previous work, where in 80 pages Koch, as noted by a historian, "was able to show, in a manner practically conclusive, that a number of diseases, differing clinically, anatomically, and in aetiology, can be produced experimentally by the injection of putrid materials into animals." Koch used bacteriology and the new staining methods with aniline dyes to identify particular microorganisms for each. Germ theory of disease crystallized the concept of cause—presumably identifiable by scientific investigation.
Scientific medicine
The American physician William Welch trained in German pathology from 1876 to 1878, including under Cohnheim, and opened America's first scientific laboratory —a pathology laboratory— at Bellevue Hospital in New York City in 1878. Welch's course drew enrollment from students at other medical schools, which responded by opening their own pathology laboratories. Once appointed by Daniel Coit Gilman, upon advice by John Shaw Billings, as founding dean of the medical school of the newly forming Johns Hopkins University that Gilman, as its first president, was planning, Welch traveled again to Germany for training in Koch's bacteriology in 1883. Welch returned to America but moved to Baltimore, eager to overhaul American medicine, while blending Virchow's anatomical pathology, Cohnheim's experimental pathology, and Koch's bacteriology. Hopkins medical school, led by the "Four Horsemen" —Welch, William Osler, Howard Kelly, and William Halsted— opened at last in 1893 as America's first medical school devoted to teaching German scientific medicine, so called.
Twentieth century
Biomedicine
The first biomedical institutes, Pasteur Institute and Berlin Institute for Infectious Diseases, whose first directors were Pasteur and Koch, were founded in 1888 and 1891, respectively. America's first biomedical institute, The Rockefeller Institute for Medical Research, was founded in 1901 with Welch, nicknamed "dean of American medicine", as its scientific director, who appointed his former Hopkins student Simon Flexner as director of pathology and bacteriology laboratories. By way of World War I and World War II, Rockefeller Institute became the globe's leader in biomedical research.
Molecular paradigm
The 1918 pandemic triggered a frenzied search for its cause, although most deaths were via lobar pneumonia, already attributed to pneumococcal invasion. In London, Fred Griffith, a pathologist with the Ministry of Health, reported in 1928 pneumococcal transformation from virulent to avirulent and between antigenic types —nearly a switch in species— challenging pneumonia's specific causation. The laboratory of Rockefeller Institute's Oswald Avery, America's leading pneumococcal expert, was so troubled by the report that they refused to attempt repetition.
When Avery was away on summer vacation, Martin Dawson, British-Canadian, convinced that anything from England must be correct, repeated Griffith's results, then achieved transformation in vitro, too, opening it to precise investigation. Having returned, Avery kept a photo of Griffith on his desk while his researchers followed the trail. In 1944, Avery, Colin MacLeod, and Maclyn McCarty reported the transformation factor as DNA, widely doubted amid estimations that something must act with it. At the time of Griffith's report, it was unrecognized that bacteria even had genes.
The first genetics, Mendelian genetics, began at 1900, yet inheritance of Mendelian traits was localized to chromosomes by 1903, thus chromosomal genetics. Biochemistry emerged in the same decade. In the 1940s, most scientists viewed the cell as a "sack of chemicals" —a membrane containing only loose molecules in chaotic motion— and the only especial cell structures as chromosomes, which bacteria lack as such. Chromosomal DNA was presumed too simple, so genes were sought in chromosomal proteins. Yet in 1953, American biologist James Watson, British physicist Francis Crick, and British chemist Rosalind Franklin inferred DNA's molecular structure —a double helix— and conjectured it to spell a code. In the early 1960s, Crick helped crack a genetic code in DNA, thus establishing molecular genetics.
In the late 1930s, Rockefeller Foundation had spearheaded and funded the molecular biology research program —seeking fundamental explanation of organisms and life— led largely by physicist Max Delbrück at Caltech and Vanderbilt University. Yet the reality of organelles in cells was controversial amid unclear visualization with conventional light microscopy. Around 1940, largely via cancer research at Rockefeller Institute, cell biology emerged as a new discipline filling the vast gap between cytology and biochemistry by applying new technology —ultracentrifuge and electron microscope— to identify and deconstruct cell structures, functions, and mechanisms. The two new sciences interlaced, cell and molecular biology.
Mindful of Griffith and Avery, Joshua Lederberg confirmed bacterial conjugation —reported decades earlier but controversial— and was awarded the 1958 Nobel Prize in Physiology or Medicine. At Cold Spring Harbor Laboratory in Long Island, New York, Delbrück and Salvador Luria led the Phage Group —hosting Watson— discovering details of cell physiology by tracking changes to bacteria upon infection with their viruses, the process transduction. Lederberg led the opening of a genetics department at Stanford University's medical school, and facilitated greater communication between biologists and medical departments.
Disease mechanisms
In the 1950s, research on rheumatic fever, a complication of streptococcal infections, revealed it was mediated by the host's own immune response, stirring investigation by pathologist Lewis Thomas that led to identification of enzymes, released by the innate immune cells called macrophages, that degrade host tissue. In the late 1970s, as president of Memorial Sloan–Kettering Cancer Center, Thomas collaborated with Lederberg, soon to become president of Rockefeller University, to redirect the funding focus of the US National Institutes of Health toward basic research into the mechanisms operating during disease processes, of which medical scientists at the time were all but wholly ignorant, as biologists had scarcely taken interest in disease mechanisms. Thomas became a patron saint for American basic researchers.
Examples
Parkinson's disease
The pathophysiology of Parkinson's disease is death of dopaminergic neurons as a result of changes in biological activity in the brain with respect to Parkinson's disease (PD). There are several proposed mechanisms for neuronal death in PD; however, not all of them are well understood. Five proposed major mechanisms for neuronal death in Parkinson's Disease include protein aggregation in Lewy bodies, disruption of autophagy, changes in cell metabolism or mitochondrial function, neuroinflammation, and blood–brain barrier (BBB) breakdown resulting in vascular leakiness.
Heart failure
The pathophysiology of heart failure is a reduction in the efficiency of the heart muscle, through damage or overloading. As such, it can be caused by a wide number of conditions, including myocardial infarction (in which the heart muscle is starved of oxygen and dies), hypertension (which increases the force of contraction needed to pump blood) and amyloidosis (in which misfolded proteins are deposited in the heart muscle, causing it to stiffen). Over time these increases in workload will produce changes to the heart itself.
Multiple sclerosis
The pathophysiology of multiple sclerosis is that of an inflammatory demyelinating disease of the CNS in which activated immune cells invade the central nervous system and cause inflammation, neurodegeneration and tissue damage. The underlying condition that produces this behaviour is currently unknown. Current research in neuropathology, neuroimmunology, neurobiology, and neuroimaging, together with clinical neurology, provides support for the notion that MS is not a single disease but rather a spectrum of conditions.
Hypertension
The pathophysiology of hypertension is that of a chronic disease characterized by elevation of blood pressure. Hypertension can be classified by cause as either essential (also known as primary or idiopathic) or secondary. About 90–95% of hypertension is essential hypertension.
HIV/AIDS
The pathophysiology of HIV/AIDS involves, upon acquisition of the virus, that the virus replicates inside and kills T helper cells, which are required for almost all adaptive immune responses. There is an initial period of influenza-like illness, and then a latent, asymptomatic phase. When the CD4 lymphocyte count falls below 200 cells/µL of blood, the HIV host has progressed to AIDS, a condition characterized by deficiency in cell-mediated immunity and the resulting increased susceptibility to opportunistic infections and certain forms of cancer.
Spider bites
The pathophysiology of spider bites is due to the effect of its venom. A spider envenomation occurs whenever a spider injects venom into the skin. Not all spider bites inject venom – a bite with no venom is known as a dry bite – and the amount of venom injected can vary based on the type of spider and the circumstances of the encounter. The mechanical injury from a spider bite is not a serious concern for humans.
Obesity
The pathophysiology of obesity encompasses many possible mechanisms involved in its development and maintenance.
This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. These investigators postulated that leptin was a satiety factor. In the ob/ob mouse, mutations in the leptin gene resulted in the obese phenotype, opening the possibility of leptin therapy for human obesity. However, soon thereafter J. F. Caro's laboratory could not detect any mutations in the leptin gene in humans with obesity. On the contrary, leptin expression was increased, suggesting the possibility of leptin resistance in human obesity.
See also
Pathogenesis
References
Pathology
Physiology | Pathophysiology | [
"Biology"
] | 2,821 | [
"Pathology",
"Physiology"
] |
598,345 | https://en.wikipedia.org/wiki/Cell%20site | A cell site, cell phone tower, cell base tower, or cellular base station is a cellular-enabled mobile device site where antennas and electronic communications equipment are placed (typically on a radio mast, tower, or other raised structure) to create a cell, or adjacent cells, in a cellular network. The raised structure typically supports antennas and one or more sets of transmitters/receivers (transceivers), digital signal processors, control electronics, a GPS receiver for timing (for CDMA2000/IS-95 or GSM systems), primary and backup electrical power sources, and sheltering.
Multiple cellular providers often save money by mounting their antennas on a common shared mast; since separate systems use different frequencies, antennas can be located close together without interfering with each other. Some provider companies operate multiple cellular networks and similarly use colocated base stations for two or more cellular networks, (CDMA2000 or GSM, for example).
Cell sites are sometimes required to be inconspicuous; they may be blended with the surrounding area or mounted on buildings or advertising towers. Preserved treescapes can often hide cell towers inside an artificial or preserved tree. These installations are generally referred to as concealed cell sites or stealth cell sites.
Overview
A cellular network is a network of handheld mobile phones (cell phones) in which each phone communicates with the telephone network by radio waves through a local antenna at a cellular base station (cell site). The coverage area in which service is provided is divided into a mosaic of small geographical areas called "cells", each served by a separate low power multichannel transceiver and antenna at a base station. All the cell phones within a cell communicate with the system through that cell's antenna, on separate frequency channels assigned by the base station from a common pool of frequencies used by the system.
The purpose of cellular organization is to conserve radio bandwidth by frequency reuse; the low power radio signals used within each cell do not travel far beyond the cell, so the radio channels can be reused in geographically separated cells. When a mobile user moves from one cell to another, their phone is automatically "handed off" to the new cell's antenna, and assigned a new set of frequencies, and subsequently communicates with this antenna. This background handoff process is imperceptible to the user and can occur in the middle of a phone call without any service interruption. Each cell phone has an automated full duplex digital transceiver and communicates with the cell antenna over two digital radio channels in the UHF or microwave band, one for each direction of the bidirectional conversation, plus a control channel which handles registering the phone with the network, dialing, and the handoff process.
Typically a cell tower is located at the edge of one or more cells and covers multiple cells using directional antennas. A common geometry is to locate the cell site at the intersection of three adjacent cells, with three antennas at 120° angles each covering one cell. The type of antenna used for cellular base stations (vertical white rectangles in pictures), called a sector antenna, usually consists of a vertical collinear array of dipoles. It has a flat fan-shaped radiation pattern, which is tilted slightly down to cover the cell area without radiating at higher angles into further off cells which reuse the same frequencies. The elevation angle of the antenna must be carefully adjusted, so the beam covers the entire cell without radiating too far. In modern sector antennas beam tilt can usually be adjusted electronically, to avoid the necessity of a lineman climbing the tower to mechanically tilt the antenna when adjustment is needed.
Operation
Range
The working range of a cell site (the range within which mobile devices connect reliably to the cell site) is not a fixed figure. It will depend on a number of factors, including, but not limited to:
Height of antenna over surrounding terrain (Line-of-sight propagation).
The frequency of signal in use.
The transmitter's rated power.
The required uplink/downlink data rate of the subscriber's device
The directional characteristics of the site antenna array.
Reflection and absorption of radio energy by buildings or vegetation.
It may also be limited by local geographical or regulatory factors and weather conditions.
In addition there are timing limitations in some technologies (e.g., even in free space, GSM would be limited to 150 km, with 180 km being possible with special equipment)
Generally, in areas where there are enough cell sites to cover a wide area, the range of each one will be set to:
Ensure there is enough overlap for "handover" to/from other sites (moving the signal for a mobile device from one cell site to another, for those technologies that can handle it - e.g. making a GSM phone call while in a car or train).
Ensure that the overlap area is not too large, to minimize interference problems with other sites.
In practice, cell sites are grouped in areas of high population density, with the most potential users. Cell phone traffic through a single site is limited by the base station's capacity; there is a finite number of calls or amount of data traffic that a base station can handle at once. This capacity limitation is commonly the factor that determines the spacing of cell mast sites. In suburban areas, masts are commonly spaced 1–2 miles (2–3 km) apart, and in dense urban areas, masts may be as close as 400–800 m apart.
The maximum range of a mast (where it is not limited by interference with other masts nearby) depends on the same considerations. In any case the limiting factor is the ability of a low-powered personal cell phone to transmit back to the mast. As a rough guide, based on a tall mast and flat terrain, it may be possible to get between 50 and 70 km (30–45 mi). When the terrain is hilly, the maximum distance can vary from as little as 5 km (3.1 mi) due to encroachment of intermediate objects into the wide center Fresnel zone of the signal. Depending on terrain and other circumstances, a GSM tower can replace long runs of cabling for fixed wireless networks. In addition, some technologies, such as GSM, have an additional absolute maximum range of 35 km (22 mi), which is imposed by technical limitations. CDMA and IDEN have no such limit defined by timing.
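The GSM figure follows from its timing-advance mechanism: the handset's transmission is advanced in up to 63 steps of one bit period each to compensate for the round-trip delay. A short Python sketch of the arithmetic (standard GSM parameters):
c = 299_792_458          # speed of light, m/s
bit_period = 48 / 13e6   # one GSM bit period, about 3.69 microseconds
max_ta = 63              # largest timing-advance value
# Halve the distance because the delay covers the round trip.
print(f"{max_ta * bit_period * c / 2 / 1000:.1f} km")   # ~34.9 km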
Practical example of range
3G/4G/5G (FR1) mobile base station tower: it is technically possible to cover up to 50–150 km (macrocell).
5G (FR2) mobile base station: the distance between 5G base stations is about 250–300 m, due to the use of millimetre waves.
Channel reuse
The concept of "maximum" range is misleading in a cellular network. Cellular networks are designed to support many conversations with a limited number of radio channels (slices of radio frequency spectrum necessary to make one conversation) that are licensed to an operator of a cellular service. To overcome this limitation, it is necessary to repeat and reuse the same channels at different locations. Just as a car radio changes from one local station to a completely different local station with the same frequency when traveling to another city, the same radio channel gets reused on a cell mast only a few miles away. To do this, the signal of a cell mast is intentionally kept at low power and in many cases tilted downward to limit its reach. This allows covering an area small enough not to have to support more conversations than the available channels can carry. Due to the sectorized arrangement of antennas on a tower, it is possible to vary the strength and angle for each sector depending on the coverage from other towers in the area.
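For a classical hexagonal cell layout, the minimum distance at which the same channel can be reused is commonly estimated as D = R·sqrt(3N) for cell radius R and cluster size N; this formula is a standard cellular-engineering result rather than something stated in this article. A sketch:
from math import sqrt
R = 2.0                  # cell radius in km (assumed value)
for N in (3, 4, 7, 12):  # common cluster sizes
    print(f"cluster size N={N}: reuse distance {R * sqrt(3 * N):.1f} km")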
Signal limiting factor
A cell phone may not work at times because it is too far from a mast, or because the phone is in a location where cell phone signals are attenuated by thick building walls, hills, or other structures. The signals do not need a clear line of sight but greater radio interference will degrade or eliminate reception. When many people try to use the cell mast at the same time, e.g. during a traffic jam or a sports event, then there will be a signal on the phone display but it is blocked from starting a new connection. The other limiting factor for cell phones is the ability to send a signal from its low powered battery to the cell site. Some cell phones perform better than others under low power or low battery, typically due to the ability to send a good signal from the phone to the mast.
The base station controller (a central computer that specializes in making phone connections) and the intelligence of the cell phone keeps track of and allows the phone to switch from one mast to the next during conversation. As the user moves towards a mast it picks the strongest signal and releases the mast from which the signal has become weaker; that channel on that mast becomes available to another user.
Geolocation
Cellular geolocation is less precise than geolocation by GNSS (e.g. GPS), but it is available to devices that do not have GPS receivers and in places where GNSS is not available. The precision of this system varies and is highest where advanced forward link methods are possible and is lowest where only a single cell site can be reached, in which case the location is only known to be within the coverage of that site.
An advanced forward link method is possible where a device is within range of at least three cell sites and where the carrier has implemented timing system use.
Another method, using angle of arrival (AoA), works when the device is in range of at least two cell sites and produces intermediate precision. Assisted GPS uses both satellite and cell phone signals.
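A minimal two-dimensional sketch of the multi-site idea: given range estimates to three sites, subtracting the first circle's equation from the other two leaves a linear system for the handset position. The coordinates and ranges below are made-up values, not a carrier's actual method:
# Cell sites (km) and measured ranges (km) to each; the true position is (2, 1).
sites = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
d = [2.2361, 2.2361, 2.8284]
(x0, y0), (x1, y1), (x2, y2) = sites
a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
b1 = d[0]**2 - d[1]**2 + x1**2 + y1**2 - x0**2 - y0**2
b2 = d[0]**2 - d[2]**2 + x2**2 + y2**2 - x0**2 - y0**2
det = a11 * a22 - a12 * a21          # solve the 2x2 system by Cramer's rule
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det
print(f"estimated position: ({x:.2f}, {y:.2f})")   # close to (2.00, 1.00)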
In the United States, for emergency calling service using location data (locally called "Enhanced 911"), it was required that at least 95% of cellular phones in use on 31 December 2005 support such service. Many carriers missed this deadline and were fined by the Federal Communications Commission.
Radio power and health
According to the U.S. Federal Communications Commission: "Measurement data obtained from various sources have consistently indicated that 'worst-case' ground-level power densities near typical cellular towers are on the order of 1 μW/cm2 (or 10 mW/m2) or less (usually significantly less)."
Cell phones, cell towers, wi-fi, smart meters, digital enhanced cordless telecommunications phones, cordless phones, baby monitors, and other wireless devices all emit non-ionizing radio frequencies, which the World Health Organization (WHO) has classified as a "potential" carcinogen. According to the U.S. National Cancer Institute, "No mechanism by which ELF-EMFs or radiofrequency radiation could cause cancer has been identified."
According to the U.S. Food and Drug Administration, "Scientific consensus shows that non-ionizing radiation is not a carcinogen and, at or below the radio frequency exposure limits set by the FCC, non-ionizing radiation has not been shown to cause any harm to people."
Temporary sites
Although cell antennas are normally attached to permanent structures, carriers also maintain fleets of vehicles, called cells-on-wheels (COWs), that serve as temporary cell sites. A generator may be included for use where network electrical power is not available, and the system may have a wireless backhaul link allowing use where a wired link is not available.
COWs are also used at permanent cell sites—as temporary replacements for damaged equipment, during planned outages, and to augment capacity such as during conventions.
Employment
Cell site workers are called tower climbers or transmission tower workers. Transmission tower workers often work at significant heights, performing installation, maintenance and repair work for cellular phone and other wireless communications companies.
Spy agency setup
According to leaked documents, the NSA sells a $40,000 "active GSM base station" to be used as a tool to mimic a mobile phone tower and thus monitor cell phones.
In November 2014, The Wall Street Journal reported that the Technical Operations Group of the U.S. Marshals utilizes spy devices, known as "dirtboxes", to mimic powerful cell tower signals. Such devices are designed to cause mobile phones to switch over to the tower, as it is the strongest signal within reach. The devices are placed on airplanes to effectively create a "dragnet", gathering data about phones as the planes travel above populated areas.
Off-grid systems
An off-grid cell site is not connected to the public electrical grid. Usually the system is off-the-grid because of difficult access or lack of infrastructure. Fuel cell or other backup power systems are added to critical cell sites to provide day-to-day and emergency power. Traditionally, sites have used internal-combustion-engine-driven generator sets; however, being less efficient than public power, they increase operating expense and are a source of pollution (atmospheric, acoustic, etc.), and some sites are in areas protected for environmental and landscape conservation.
Renewable sources, such as solar power and wind power may be available where cell sites are placed. The first off-grid mast in the UK was installed in 2022 in Eglwyswrw, Wales. This can reduce the cost of fuel to the cell site or telecom tower by up to 75%. They can be backed up by a fuel generator system which allows the cell site to work when the renewable sources are not enough. One such energy production system consists of:
Solar power generator
Wind generator
Electrochemical generator fuel cells
Electrical energy from intermittent sources is stored in secondary batteries which are usually designed to have an average of two days of self-sufficiency, also known as autonomy, to allow time for maintenance personnel to arrive at site when a repair is needed.
The renewable energy systems supply electrical power when available. The fuel cells are activated only when the natural sources are not enough to supply the energy the system needs. The emergency power supply (the fuel cells) is designed to last an average of ten days. In this way the structure is completely self-sufficient: this enables the maintenance team to pay only few visits to the site, since it is usually hard to get to.
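The two-day autonomy figure converts directly into battery capacity; a Python sketch with assumed load and depth-of-discharge values (illustrative only, not vendor data):
site_load_kw = 1.5         # average site power draw, kW (assumed)
autonomy_h = 48            # two days of self-sufficiency
usable_fraction = 0.8      # depth-of-discharge limit (assumed)
print(f"{site_load_kw * autonomy_h / usable_fraction:.0f} kWh battery bank")  # 90 kWh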
Camouflage
There is often local opposition to new masts for reasons of safety and appearance. The latter is sometimes tackled by disguising the mast as something else, such as a flag pole, street lamp, or a tree (e.g. palm trees, pine trees, cypress) or rooftop structures or urban features such as chimneys or panels.
These concealed cell sites can distinguish themselves by foliage shape and bark type. The foliage of all these antennas is composed of leaves made of plastic material accurately designed, taking into consideration quantity, shape and array suitable to completely conceal the antennas and all accessory parts in a natural manner. The materials used guarantee absolute radio-electric transparency and resistance to UVA rays. Nicknames include "monopalm" for a monopole disguised as a palm tree or "Pseudopinus telephoneyensis" for a mast disguised as a pine tree. In monopoles, the directional antennas are sometimes hidden in a plastic housing near the top of the pole so that the crossbars can be eliminated.
Rooftop structures such as concealment chimneys or panels, 6 to 12 meters high, may conceal one or more mobile telephone operators on the same station. Roofmask panels can be fixed to existing rooftop structures, restyling them quickly and cheaply.
Miniature
Researchers at Alcatel-Lucent have developed a cell site called lightRadio that fits in the palm of a hand and is about the size of a Rubik's Cube. It is capable of relaying 2G, 3G, and 4G signals, is more energy efficient, and delivers broadband more efficiently than conventional cell sites. Such units could be used in densely populated urban areas to make room for more radio capacity.
Water tower cellular
Cellular companies sign leases with local governments to place cellular antennas on water towers.
See also
Cellular network
Node B
OpenBTS
Mobile phone radiation and health
Telecom infrastructure sharing
Base transceiver station
Remote radio head
Radio masts and towers
Mobile cell sites
Distributed antenna system
Telecommunications lease
Title 47 of the Code of Federal Regulations
In re Application of the United States for Historical Cell Site Data
Tower climber
References
External links
Maps of All Towers Across the United Kingdom
FCC: Universal Licensing Information
FCC: Information On Human Exposure To Radio frequency Fields From Cellular and PCS Radio Transmitters
Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) Base Station Survey 2007- 2011
Telecommunications infrastructure
Radio masts and towers
Mass media technology
Telecommunications | Cell site | [
"Technology"
] | 3,344 | [
"Information and communications technology",
"Mass media technology",
"Telecommunications"
] |
598,358 | https://en.wikipedia.org/wiki/Eastern%20Continental%20Divide | The Eastern Continental Divide, Eastern Divide or Appalachian Divide is a hydrological divide in eastern North America that separates the easterly Atlantic Seaboard watershed from the westerly Gulf of Mexico watershed. It is one of six continental hydrological divides of North America which define several drainage basins, each of which drains to a particular body of water.
The divide nearly spans the United States from south of Lake Ontario through the Florida peninsula, and consists of raised terrain including the Appalachian Mountains to the north, the southern Piedmont Plateau and lowland ridges in the Atlantic Coastal Plain to the south.
Course
Northern portion
The divide's northern portion winds through the middle of the Appalachian Mountains, either through the interior of the Allegheny Plateau or along the Allegheny Mountains. In this portion, the western drainage of the divide flows into the watersheds of the Allegheny River, Monongahela River, and New River, all tributaries of the Ohio River. The eastern drainage flows into the watersheds of the Susquehanna River, Potomac River, and James River, all of which flow into Chesapeake Bay before entering the Atlantic Ocean.
At its northern terminus, the Eastern Continental Divide originates at Triple Divide Peak in Ulysses Township, Pennsylvania, about south of the New York-Pennsylvania border, where it diverges from the St. Lawrence Divide. This point divides the eastern United States into three watersheds: those of the Genesee River flowing into Lake Ontario and then the St. Lawrence River to the north; Pine Creek into the Susquehanna River as part of the Atlantic seaboard watershed to the east; and the Allegheny River into the Ohio, the Mississippi, and the Gulf of Mexico to the west.
From north to south, the divide passes through the broader Allegheny Plateau region, following the boundary between the Allegheny River and Susquehanna River watersheds through most of Pennsylvania. At Blue Knob near Altoona, the Divide begins to follow Allegheny Mountain and then Little Savage Mountain. A few miles before the state border, the Divide begins to separate the Youghiogheny River and Potomac River watersheds.
In Maryland, the Divide runs significantly west of the Allegheny Front, following Backbone Mountain, and passing near the source of the North Branch Potomac River at the Fairfax Stone. The Divide then passes through a plateau of the Allegheny Mountains of West Virginia, passing between the north end of the Canaan Valley in the Cheat River watershed, and the Mount Storm Lake basin in the Potomac River watershed. The Divide then rejoins the Allegheny Front.
A significant portion of the Divide forms part of the border between West Virginia and Virginia along Allegheny Mountain and then Peters Mountain, separating the Greenbrier River and James River watersheds. It then makes a dramatic arc to the east around the Sinking Creek valley, and then follows the hill crest east of Blacksburg, Virginia.
Central portion
The divide's central portion generally follows the easternmost ridge of the Blue Ridge Mountains and thus of the Appalachian Mountains as a whole, which takes the form of a high escarpment. In this portion, the western drainage of the divide flows into the watersheds of the New River and Tennessee River, both tributaries of the Ohio River. The eastern drainage flows into the watersheds of the Roanoke River, Pee Dee River, and Santee River.
The divide initially separates the headwaters of the New River from that of the Roanoke River. Just before the Divide passes into North Carolina, it begins to separate the New River and Yadkin River watersheds. It then separates upper tributaries of the Tennessee River from those of the Santee River. Its high point is on Grandfather Mountain at 5,946 feet (1,812 m); although Mount Mitchell is the highest point in the Appalachian Mountains, it is not on the Divide, but 4 miles west of it.
Southern portion
Past the southern end of the Appalachian Mountains, the divide's southern portion winds through the lowlands of Georgia and Florida. In this portion, the western drainage of the divide flows into the watersheds of the Apalachicola River, Suwannee River, Withlacoochee River, and Peace River, all of which drain directly to the Gulf of Mexico without reaching the Ohio River first. The eastern drainage flows into the watersheds of the Savannah River, Altamaha River, Satilla River, St. Marys River, and St. Johns River.
In Georgia, the Divide generally separates the Apalachicola River watershed in the west from the Savannah River and Altamaha River watersheds to the east, passing through the Atlanta metropolitan area and extending past the southern end of the Appalachian Mountains southeasterly across the Georgia plateau. In southern Georgia, it separates the Suwannee River and Satilla River watersheds.
In Florida, the Divide generally follows the western edge of the St. Marys River and then St. Johns River, meandering into the low country of Northern Florida until it reaches Central Florida. The west side of the divide continues to be the Suwannee River and then the Withlacoochee River watersheds.
The southern terminus of the Eastern Continental Divide is at the triple divide between the St. Johns, Peace, and Kissimmee River watersheds, which is in Haines City, Florida on the Lake Wales Ridge. Because the Kissimmee River flows into Lake Okeechobee, whose distributaries reach both the Gulf of Mexico and the Atlantic Ocean via low swampland covered by a network of diverging canals and natural waterways, its watershed's land is not clearly divisible between the two watersheds.
Weather
Because the divide is at or in proximity to the highest terrain, air is forced upwards regardless of wind direction. This process of orographic enhancement leads to higher precipitation than surrounding areas. In winter, the divide is often much snowier than surrounding areas, due to orographic enhancement and cooler temperatures with elevation.
History
Prior to about 1760, north of Spanish Florida, the Appalachian Divide represented the boundary between British and French colonial possessions in North America.
The Royal Proclamation of 1763 separated settled lands of the Thirteen Colonies from lands north and west of it designated the Indian Reserve; the proclamation border ran along the Appalachian Divide but extended beyond its Pennsylvania-New York terminus north into New England.
The exact route of the ECD shifts over time due to erosion, tectonic activity, construction projects, and other factors.
See also
Continental Divide of the Americas
Great Basin Divide
Laurentian Divide
Saint Lawrence River Divide
Notes
References
Geographic coordinate lists
Geography of the United States
Drainage divides
Drainage basins of North America
Appalachian Mountains
Hydrography | Eastern Continental Divide | [
"Environmental_science"
] | 1,317 | [
"Hydrography",
"Hydrology"
] |
598,373 | https://en.wikipedia.org/wiki/Electropolishing | Electropolishing, also known as electrochemical polishing, anodic polishing, or electrolytic polishing (especially in the metallography field), is an electrochemical process that removes material from a metallic workpiece, reducing the surface roughness by levelling micro-peaks and valleys, improving the surface finish. Electropolishing is often compared to, but distinctly different from, electrochemical machining. It is used to polish, passivate, and deburr metal parts. It is often described as the reverse of electroplating. It may be used in lieu of abrasive fine polishing in microstructural preparation.
Mechanism
Typically, the work-piece is immersed in a temperature-controlled bath of electrolyte and serves as the anode; it is connected to the positive terminal of a DC power supply, the negative terminal being attached to the cathode. A current passes from the anode, where metal on the surface is oxidised and dissolved in the electrolyte, to the cathode. At the cathode, a reduction reaction occurs, which normally produces hydrogen. Electrolytes used for electropolishing are most often concentrated acid solutions such as mixtures of sulfuric acid and phosphoric acid. Other electropolishing electrolytes reported in the literature include mixtures of perchloric acid with acetic anhydride (which has caused fatal explosions), and methanolic solutions of sulfuric acid.
To electropolish a rough surface, the protruding parts of a surface profile must dissolve faster than the recesses. This process, referred to as anodic leveling, can be subject to incorrect analysis when measuring the surface topography. Anodic dissolution under electropolishing conditions deburrs metal objects due to increased current density on corners and burrs.
Most importantly, successful electropolishing should operate on the diffusion-limited constant-current plateau, which is identified by recording the dependence of current on voltage (the polarisation curve) under constant temperature and stirring conditions.
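As a back-of-the-envelope illustration of how the anodic current translates into material removal, the sketch below applies Faraday's law of electrolysis; the assumed 100% current efficiency, dissolution of iron as Fe2+, and the example current density and time are simplifying assumptions rather than process data from the text.

F = 96485.0  # Faraday constant, C/mol

def removal_depth_um(current_density_a_cm2, time_s, molar_mass_g_mol,
                     electrons_per_atom, density_g_cm3, efficiency=1.0):
    """Depth of metal removed (micrometres), assuming uniform dissolution."""
    charge = current_density_a_cm2 * time_s * efficiency   # coulombs per cm^2
    moles = charge / (electrons_per_atom * F)              # mol of metal per cm^2
    depth_cm = moles * molar_mass_g_mol / density_g_cm3    # (g/cm^2) / (g/cm^3)
    return depth_cm * 1e4

# Example: iron dissolving as Fe2+ at 0.1 A/cm^2 for 5 minutes.
print(f"{removal_depth_um(0.1, 300, 55.85, 2, 7.87):.1f} micrometres")  # ~11.0

At such current densities a few minutes of polishing removes on the order of ten micrometres, consistent with the typical removal depths quoted for stainless steel below.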
Applications
Due to its ease of operation and its usefulness in polishing irregularly-shaped objects, electropolishing has become a common process in the production of semiconductors.
As electropolishing can also be used to sterilize workpieces, the process plays an essential role in the food, medical, and pharmaceutical industries.
It is commonly used in the post-production of large metal pieces such as those used in drums of washing machines, bodies of ocean vessels and aircraft, and automobiles.
While nearly any metal may be electropolished, the most-commonly polished metals are 300- and 400-series stainless steel, aluminum, copper, titanium, and nickel- and copper-alloys.
Ultra-high vacuum (UHV) components are typically electropolished in order to have a smoother surface for improved vacuum pressures, out-gassing rates, and pumping speed.
Electropolishing is commonly used to prepare thin metal samples for transmission electron microscopy and atom probe tomography because the process does not mechanically deform surface layers like mechanical polishing does.
Standards
ISO.15730:2000 Metallic and other Inorganic Coatings - Electropolishing as a Means of Smoothing and Passivating Stainless Steel
ASME BPE Standards for Electropolishing Bioprocessing Equipment
SEMI F19, Electropolishing Specifications for Semiconductor Applications
ASTM B 912-02 (2008), Passivation of Stainless Steels Using Electropolishing
ASTM E1558, Standard Guide for Electrolytic Polishing of Metallographic Specimens
Benefits
The results are aesthetically pleasing.
Creates a clean, smooth surface that is easier to sterilise.
Can polish areas that are inaccessible by other polishing methods.
Removes a small amount of material (typically 20–40 micrometres in depth in the case of stainless steel) from the surface of the parts, while also removing small burrs and high spots. It can be used to reduce the size of parts when necessary.
On stainless steel, it preferentially removes iron from the surface and enriches the chromium/nickel content, providing the most superior form of passivation for stainless steel.
Electropolishing can be used on a wide range of metals including stainless steel, aluminum, copper, brass and titanium.
See also
Corrosion
Electrochemistry
Electroetching
Electroplating
Passivation (chemistry)
Polishing (metalworking)
Stainless steel
Surface finishing
References
Chemical processes
Metallurgical processes
Metalworking | Electropolishing | [
"Chemistry",
"Materials_science"
] | 901 | [
"Metallurgical processes",
"Metallurgy",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
598,460 | https://en.wikipedia.org/wiki/Vampire%20Hunter%20D | is a series of novels written by Japanese author Hideyuki Kikuchi and illustrated by Yoshitaka Amano since 1983.
As of February 2024, 55 novels have been published in the main series, with some novels comprising as many as four volumes. They have sold over 17 million copies worldwide, making Vampire Hunter D one of the best-selling book series in history. The series has also spawned anime, audio drama, manga, comic adaptations, a video game, as well as a short story collection, art books, and a supplemental guide book.
Premise
Vampire hunter D wanders through a far-future post-nuclear war Earth that combines elements of pulp genres: western, science fiction, horror and Lovecraftian horror, dark fantasy, folklore, and occult science. The planet, once terrified by the elegant but cruel vampires known as the Nobility, ancient demons, mutants, and their technological creations, is now slowly returning to a semblance of order and human control—thanks in part to the decadence that brought about the downfall of the vampire race, to the continued stubbornness of frontier dwellers, and to the rise of a caste of independent hunters-for-hire who eliminate supernatural threats.
Some time in 1999, a nuclear war occurred. The Nobility were vampires that planned for a possible nuclear war and sequestered all that was needed to rebuild civilization in their shelters. They use their science combined with magic to restore the world in their image. Nearly all magical creatures on Earth are genetically engineered by the Nobility to possess seemingly supernatural abilities, regeneration, and biological immortality, with a very small number being demons, gods, aliens, and extradimensional beings who survived the holocaust. Despite their technology being advanced enough to create a blood substitute as food, they still prefer to feed on humans. As such, they created a civilization where vampires and humans coexisted, eventually developing the planet into parklands and cities. The society eventually stagnates when vampire technology perfects scientific prophecy, which determines they are at their zenith of existence and doomed to fall, overthrown by humans. The human race was transformed at this time, with fear for the vampires being woven into their genetics, and the inability to remember vampire weaknesses such as garlic and crucifixes.
Unlike vampires from traditional lore, the Nobility can reproduce sexually, although their offspring will permanently cease aging after reaching physical maturity, having inherited their vampire parent's immortality.
Main characters
D
D is a dhampir, the half-breed child of a vampire father and human mother (actually genetically engineered by the ruler and first of the vampires, called the Sacred Ancestor, using his own DNA and that of humans in an experiment to create a vampire without vampires' typical weaknesses, such as the sun and having to drink blood), the ideal vampire hunter. He is renowned for his consummate skill and unearthly grace but feared and despised for his mixed lineage: born of both races but belonging to neither. Often underestimated by his opponents, D possesses surprising power and resourcefulness, having most of the strengths of the Nobility and only mild levels of their common weaknesses. It has been seen in both movies that his power is not only physical but extends into the magical realm as well. His supernatural powers make him one of the strongest beings in the world, perhaps second only to his father. However, D prefers his physical abilities, only using his magic in times of great need. Unlike most dhampirs, D is able to live as a "normal" human; however, he is marked by his unearthly beauty and exceptionally potent aura and thus is rarely accepted by human settlements. In terms of weaknesses, he is randomly susceptible to sun-sickness, a severe type of sunstroke, about once every five years (far less often than most dhampirs). D also recovers from it at a rate far greater than other dhampirs: usually it takes several days to recover from sun-sickness, longer if the dhampir is exceedingly powerful, but D has recovered in a matter of hours (approximately 1–6), despite being one of the strongest dhampirs alive. Otherwise, D does not appear to suffer from the other vampiric weaknesses usual to dhampirs, being able to physically restrain opponents with his aura and having godlike reflexes surpassing even those of Nobles.
His symbiotic left hand states, in Vampire Hunter D: Bloodlust: "The Hierarchy of us became impatient with the heartless David and impaled The Lord on 'The Sword'". Whether "The Sword" refers to D's own sword is debatable. It is important to note here, however, that the movie differs sharply from the book it takes its story from, Vampire Hunter D: Demon Deathchase, and future entries in the novel series do not differentiate between Dracula, the Vampire King, the Sacred Ancestor, and D's father, proposing that they are one and the same.
D rides a cybernetic horse with mechanical legs and other enhancements, and wields a crescent longsword resembling the scimitar design found in many of Yoshitaka Amano's works of art, but with a hefty length similar to that of a Japanese nodachi. D always wears a mystical blue pendant; it prevents many of the automatic defenses (such as laser fields and small nuclear blasters) created by the Nobility in past millennia from working properly and allows him to enter their sealed castles. He also uses wooden needles in the novels and game, which he can throw with super speed. He protects his milk-white face from the noonday sun with long black hair, flowing black clothing and cape, and the shadow of a wide-brimmed hat. D is described as appearing as a youth between 17 and 18 years old, though his actual age is unknown (in the novel Pale Fallen Angel parts I and II, it is made known that he is at least 5,000 years old, and later it is revealed that he is over 10,000 years old). His beauty is mesmerizing, often unintentionally wooing women and sometimes flustering men.
Very little is known of D's past, and the identity of his parents is ambiguous, but his innate power suggests that his father is an extraordinarily powerful vampire. Regarding D's birth, some Nobles whisper dark rumors about their vampire progenitor, the Sacred Ancestor known as Count Dracula, bedding a human woman called "Mina the Fair" (implied to be Mina Harker). Dracula conducted bizarre crossbreeding experiments involving himself and countless human women or even other vampires. The only successful product of the experiments was D. D, wanting nothing to do with his father save for killing him, refuses to go by his true name, instead shortening it to its first letter. In Twin Shadowed Knight, D has a twin who goes unnamed. The twin states that he and D were born from the same woman in exactly the same conditions.
Left Hand
D is the host for a sentient symbiote, Left Hand, a wisecracking homunculus with a human face residing in his left palm, who can suck in massive amounts of matter through a wind void or vacuum tunnel. Left Hand enjoys needling the poker-faced D, but only appears as needed, rarely witnessed or heard by anyone other than D, yet aware of many of D's thoughts and actions. At all other times, D's left hand appears normal. Besides providing a contrast to D's reserved demeanor, Left Hand is incredibly useful, possessing many mysterious powers such as psychometry, inducing sleep, hypnotizing others, determining the medical condition of a victim, absorbing matter, energy, and even souls and magic, healing and reviving D and himself, and the ability to size up the supernatural powers or prowess of an enemy, even beyond D's keen senses. After absorbing four elements, Left Hand can use them to generate powerful attacks, regenerate D and himself, and to transform D into full vampire state rivaling Sacred Ancestor. Left Hand can store absorbed objects in a pocket dimension inside his stomach, and later spit them out.
In the first and second novels, Left Hand can also revive D when his physical condition is suffering, by consuming the four elements and converting the resulting energy into life force. This ability even saved D from the usually fatal for vampires stake through the heart he received from Rei-Ginsei in the first novel. Left Hand has its own mind and will, and acts as D's guide and sole permanent companion, providing a reservoir of knowledge pertaining to the lost Noble culture. So far, Left Hand's origins are unknown, and it is unclear how they came to be joined. However, some of its nature is revealed in the third book, which features a similar creature; it is implied he was one of the Barbarois (mutant/demon hybrids) who served in the personal retinue of the Sacred Ancestor, and was experimented on by him to increase his powers over other Barbarois.
Sacred Ancestor
The Sacred Ancestor's role in the novels is very mixed, appearing both as the bane and savior to isolated towns, and deified as a legendary god-king to the vampires, many of whom have never even met him in person. D quotes the Sacred Ancestor's precepts ("Transient guests are we"—implied to refer to the Nobility) in the first novel. The Sacred Ancestor appears both as a lawgiver honored for his intelligence, who showed some interest in preserving humans, and as a ruthless scientist (in the second novel), conducting hybrid breeding experiments with humans in order to perpetuate his own dwindling species. D appears to have encountered his alleged father on at least one occasion, as when at times D reaches a place where the imprint of the Sacred Ancestor's power remains, D remembers the Sacred Ancestor telling him that "You are my only success." Like D, the Sacred Ancestor is portrayed as a mysterious and handsome young wanderer who deals with both life and death. However, in the English dub of the anime, D states that the Sacred Ancestor respected humanity and did not feed on innocent people.
Production
In the postscript of the first Vampire Hunter D novel, Hideyuki Kikuchi cited Hammer Films' Horror of Dracula (1958) as his first inspiration in horror. He also praised horror manga artist Osamu Kishimoto for his distinct style, which he described as "a gothic mood in the Western tradition". Kikuchi is famous for writing his manuscripts by hand. Having written the Vampire Hunter D series for 40 years, the author has admitted that he does not remember all of its mythology, only that of a couple of volumes, and that one might therefore find some contradictions in it. Kikuchi explained that the two points he is always careful of are not to "mix up the characteristics of D" and "not to lose sight of the purpose of the journey". When asked if the Sacred Ancestor will ever be given a larger role, Kikuchi said the character "will come out someday" and "When that happens, please expect that to be your warning that Vampire Hunter D is speeding toward an ending". The author has previously said that he has "a final conflict for D" in mind, but that final conflict will not be the entire ending of the series.
Publication history
Beginning in January 1983, Kikuchi has written 41 Vampire Hunter D novels spanning 53 volumes as of April 2023. All of the publications in the series were published by Asahi Sonorama until the branch went out of business in September 2007. The release of D – Throng of Heretics in October 2007 under the Asahi Bunko – Sonorama Selection label marked the transition to the new publisher, Asahi Shimbun Publishing, a division of Asahi Sonorama's parent company. From December 2007 through January 2008, Asahi Shimbun Publishing reprinted the complete Vampire Hunter catalogue under the Sonorama Selection label.
On May 11, 2005, the first official English translation was released under DH Press, translated by Kevin Leahy. As of 2020, 24 novels have been published in English, spanning 29 volumes. In 2021, Dark Horse began releasing the series in an omnibus format. The first volume, featuring the first three novels, was released on October 27, 2021. In December 2021, Dark Horse Comics in partnership with GraphicAudio began publishing dramatized audiobook adaptations of the Vampire Hunter D novel series featuring a full English voice cast, soundtrack, and sound effects.
In January 2011, Hideyuki Kikuchi published the first spinoff set in the Vampire Hunter universe, a series of prequels titled , illustrated by Ayami Kojima, artist and character designer for the Castlevania series of video games. It takes place over 5,000 years before Vampire Hunter D and focuses on expanding the history of the Nobility, following the exploits of the vampire warrior Lord Greylancer. In 2013, Viz Media's Haikasoru imprint released the first official English translation of the prequel series, retitled Noble V: Greylancer, translated by Takami Nieda with newly commissioned cover artwork by Vincent Chong.
Adaptations
1985 animated film
Vampire Hunter D remains a cult classic among English-speaking audiences. Billed by the Japanese producers as a "dark future science-fiction romance", Vampire Hunter D is set in the year 12,090 AD, in a post-nuclear holocaust world where vampires, mutants and demons "slither through a world of darkness" (in the words of the film's opening introduction).
1988–1990 audio dramas
Asahi Sonorama created audio drama adaptations of three of the novels, in five parts:
Raiser of Gales "D" (January 1988) (the book it was based on was published May 1984)
D – Demon Deathchase (June 1988)
D – Mysterious Journey to the North Sea I: To the North Sea (March 1990)
D – Mysterious Journey to the North Sea II: Summer at Last (May 1990)
D – Mysterious Journey to the North Sea III: When Winter Comes Again (June 1990).
Most of the voice cast for the original OVA reprised their roles. Originally released on cassette tape, in 2005 they were re-released as a special edition, five-disc Vampire Hunter D Audio Drama Box, including a small supplemental booklet with a new short story by Kikuchi and an "art cloth" with an illustration by Amano.
1999 video game
A video game based on Vampire Hunter D Bloodlust was also made for the PlayStation game console, titled Vampire Hunter D. It is a survival horror game, but also similar to a standard adventure title. The player can see D from different pre-rendered angles throughout the game, and allow D to attack enemies with his sword. D can also use magic, Left Hand's abilities, and items. The story of the game is similar to that of Vampire Hunter D Bloodlust, although it takes place entirely within the castle as D fights all the enemies. Only two of the Barbarois mutants appear as enemies. There are three endings, one of which is similar to the end of the anime.
2000 animated film
The second film, Vampire Hunter D: Bloodlust garnered respect for its advanced animation techniques, detailed art style and character designs, voice acting originally recorded in English (English voice casting/direction by Jack Fletcher), and its sophisticated orchestral soundtrack composed, arranged and conducted by Marco D'Ambrosio. Its art style closely mirrored that of the illustrator and original character designer of the first movie, Yoshitaka Amano.
The storyline features a larger cast than the first film. The second Vampire Hunter D movie (known as Vampire Hunter D: Bloodlust outside of Japan) is based on the third of Hideyuki Kikuchi's Vampire Hunter D novels (Demon Deathchase in English). Unlike the first film, which was released in 1985, this movie is rated NC-16 in Singapore, M in Australia, 15 in the UK, R13 in New Zealand and R for violence/gore in the USA (except for the Blu-ray release, which is unrated).
2007 manga adaptation
In November 2007, the first volume of Saiko Takaki's manga adaptation of Hideyuki Kikuchi's series was published simultaneously in the U.S., Japan, and Europe. The project, overseen by Digital Manga Publishing and Hideyuki Kikuchi, aimed to adapt the entire catalogue of Vampire Hunter D novels into manga form; however, it concluded after the eighth volume.
2022 comic book series
On June 30, 2016, a Kickstarter crowdfunding campaign for a five-issue Vampire Hunter D comic book series titled Vampire Hunter D: Message from Mars was announced. Published by Stranger Comics with supervision from series creator Hideyuki Kikuchi and support from the creative teams at Unified Pictures and Digital Frontier, Message from Mars is an adaptation of the 2005 short story Message from Cecile and acts as a prequel to the then-in-development animated series. The series is written by Brandon M. Easton and illustrated by Michael Broussard, with visual development by Christopher Shy. The campaign's stretch goals also include an official Vampire Hunter D Pathfinder Roleplaying Game supplement written by F. Wesley Schneider. The campaign reached its $25,000 funding goal on July 1, 2016, and its initial $50,000 stretch goal on July 7, 2016. The campaign concluded on August 9, 2016, with 1,736 backers pledging a total of $107,025, reaching four out of five stretch goals.
Following the first issue, the series was placed on temporary hiatus due to a serious medical emergency in Broussard's family, resuming production in early 2018 with new artists Ryan Benjamin and Richard Friend. As of December 2020 all production work for the five-issue run was complete, but publication plans were placed on an indefinite hiatus due to the ongoing COVID-19 pandemic. In December 2021 the completion of the project was announced via Kickstarter update, with limited publication planned to begin in 2022 following a recovery from pandemic conditions, to be followed by a wide retail and digital release in the future. The graphic novel Kickstarter campaign was launched on January 26, 2022, offering a limited hardcover collector's edition, variant cover single issue editions, and a digital edition. The campaign achieved its initial $30,000 funding goal within 90 minutes and surpassed $100,000 within the first day, concluding on February 19, 2022, with 4,095 backers pledging a total of $445,205.
Development of animated series
In June 2015, a new animated series tentatively titled Vampire Hunter D: Resurrection was announced, produced by Unified Pictures and Digital Frontier. The series would be produced by Kurt Rauer and Scott McLean and directed by Yoichi Mori, with Bloodlust director Yoshiaki Kawajiri acting as supervising director and series creator Hideyuki Kikuchi providing editorial supervision. The series was then in pre-production, being developed as an hour-long serial drama intended for broadcast on a major American cable network or on-demand provider, with Japanese distribution to follow. As of June 2016 the series was still in pre-production, with plans to begin shipping the project to distributors by the end of the year. Given the abundance of source material, the plan at the time was to produce as many as seven seasons, without revisiting the source material that was adapted into the first two films. In February 2018 it was announced that the pilot episode would be written by Brandon M. Easton, writer for the Message from Mars comic book series. The first draft of the pilot was completed in October 2018. Pre-production on the series was put on hold in early 2020 as a result of the COVID-19 pandemic.
Other media
In July 2008, Devils Due Publishing announced that it had acquired the rights to publish an English-language Vampire Hunter D comic book mini-series titled Vampire Hunter D: American Wasteland, to be written by Jimmy Palmiotti and pencilled by Tim Seeley, however the project was cancelled in 2009. Intended to infuse the standard Vampire Hunter D formula and mythos with more Western sensibilities, it would have told an original story about D departing the Frontier to embark on a journey to a new land still ruled by the vampiric Nobility.
In 2010, it was reported in Japanese horror magazine Rue Morgue that Hideyuki Kikuchi was in talks with one of the producers for Capcom's Resident Evil video game series to develop a live-action Vampire Hunter D adaptation.
Reception
By 2008, the Vampire Hunter D novels had sold over 17 million copies worldwide, making it one of the best-selling book series in history. Theron Martin of Anime News Network called the first novel "a competent vampire-hunting story with enough strong points to balance out its weaknesses" and gave it a B rating. He praised the setting as a wholly credible world ruled by vampires and grounded in science fiction, rather than fantasy or the supernatural. However, he called the plotting "fairly rudimentary" and a standard tale of a hero and heroine struggling against colorful opposition coming from different directions, where even the few twists are hardly unique. While Martin praised the characters of Doris and Rei-Ginsei, he criticized D and Count Magnus Lee as having weak characterizations. His colleague Rebecca Silverman also strongly praised the world and setting of the first three Vampire Hunter D novels, finding it to clearly be a post-apocalyptic "version of our reality" that in many ways is just as much a character as D. She wrote that the books are "practically dripping with atmosphere" as the story's descriptions are florid and detailed. In her four out of five stars review, Silverman did note that the series could sometimes feel melodramatic and criticized most of the heroines as outdated. Reviewing the first three novels for Anime UK News, Ian Wolf gave the series an 8/10 rating and wrote "Vampire Hunter D is an entertaining read, mixing elements of many different genres to create something very different from what is often available." He noted that D's Left Hand adds some comedy to the overall dark tone of the series.
See also
Vampire literature
Vampire film
Notes
References
External links
Dark Horse
Digital Manga Publishing
Haikasoru
Asahi Sonorama – Japanese publisher of the Vampire Hunter D series books and audio dramas.
Hideyuki Kikuchi Official Fan Club (Japanese)
The Vampire Hunter D Archives
Book series introduced in 1983
Japanese serial novels
Biopunk
Transhumanism in fiction
Demons in popular culture
Novels about magic
Fiction about artificial intelligence
Fiction about wormholes
Mutants in fiction
Fiction about cyborgs
Fiction about robots
Fiction about genetic engineering
Fiction about nanotechnology
Science fiction novel series
Science fiction horror
Science fantasy
Dystopian fiction
Lovecraftian horror
Horror fiction
Dark fantasy
Gothic fiction
Fantasy novel series
Fictional half-vampires
Fictional vampire hunters
Fictional bounty hunters
Fiction about the Solar System
Fiction about immortality
Fiction set in the 7th millennium or beyond
Apocalyptic fiction
Post-apocalyptic fiction
Post-apocalyptic literature
Vampire novels
Novels adapted into comics
Japanese novels adapted into films
Novels adapted into video games
Retrofuturism
Anime productions suspended due to the COVID-19 pandemic
Comic books suspended due to the COVID-19 pandemic
Science fantasy novels
Horror novels | Vampire Hunter D | [
"Materials_science",
"Engineering",
"Biology"
] | 4,791 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |
598,500 | https://en.wikipedia.org/wiki/Algebraic%20K-theory | Algebraic K-theory is a subject area in mathematics with connections to geometry, topology, ring theory, and number theory. Geometric, algebraic, and arithmetic objects are assigned objects called K-groups. These are groups in the sense of abstract algebra. They contain detailed information about the original object but are notoriously difficult to compute; for example, an important outstanding problem is to compute the K-groups of the integers.
K-theory was discovered in the late 1950s by Alexander Grothendieck in his study of intersection theory on algebraic varieties. In the modern language, Grothendieck defined only K0, the zeroth K-group, but even this single group has plenty of applications, such as the Grothendieck–Riemann–Roch theorem. Intersection theory is still a motivating force in the development of (higher) algebraic K-theory through its links with motivic cohomology and specifically Chow groups. The subject also includes classical number-theoretic topics like quadratic reciprocity and embeddings of number fields into the real numbers and complex numbers, as well as more modern concerns like the construction of higher regulators and special values of L-functions.
The lower K-groups were discovered first, in the sense that adequate descriptions of these groups in terms of other algebraic structures were found. For example, if F is a field, then K0(F) is isomorphic to the integers Z and is closely related to the notion of vector space dimension. For a commutative ring R, the group K0(R) is related to the Picard group of R, and when R is the ring of integers in a number field, this generalizes the classical construction of the class group. The group K1(R) is closely related to the group of units R×, and if R is a field, it is exactly the group of units. For a number field F, the group K2(F) is related to class field theory, the Hilbert symbol, and the solvability of quadratic equations over completions. In contrast, finding the correct definition of the higher K-groups of rings was a difficult achievement of Daniel Quillen, and many of the basic facts about the higher K-groups of algebraic varieties were not known until the work of Robert Thomason.
History
The history of K-theory was detailed by Charles Weibel.
The Grothendieck group K0
In the 19th century, Bernhard Riemann and his student Gustav Roch proved what is now known as the Riemann–Roch theorem. If X is a Riemann surface, then the sets of meromorphic functions and meromorphic differential forms on X form vector spaces. A line bundle on X determines subspaces of these vector spaces, and if X is projective, then these subspaces are finite dimensional. The Riemann–Roch theorem states that the difference in dimensions between these subspaces is equal to the degree of the line bundle (a measure of twistedness) plus one minus the genus of X. In the mid-20th century, the Riemann–Roch theorem was generalized by Friedrich Hirzebruch to all algebraic varieties. In Hirzebruch's formulation, the Hirzebruch–Riemann–Roch theorem, the theorem became a statement about Euler characteristics: The Euler characteristic of a vector bundle on an algebraic variety (which is the alternating sum of the dimensions of its cohomology groups) equals the Euler characteristic of the trivial bundle plus a correction factor coming from characteristic classes of the vector bundle. This is a generalization because on a projective Riemann surface, the Euler characteristic of a line bundle equals the difference in dimensions mentioned previously, the Euler characteristic of the trivial bundle is one minus the genus, and the only nontrivial characteristic class is the degree.
The subject of K-theory takes its name from a 1957 construction of Alexander Grothendieck which appeared in the Grothendieck–Riemann–Roch theorem, his generalization of Hirzebruch's theorem. Let X be a smooth algebraic variety. To each vector bundle on X, Grothendieck associates an invariant, its class. The set of all classes on X was called K(X) from the German Klasse. By definition, K(X) is a quotient of the free abelian group on isomorphism classes of vector bundles on X, and so it is an abelian group. If the basis element corresponding to a vector bundle V is denoted [V], then for each short exact sequence of vector bundles:
0 → V′ → V → V″ → 0,
Grothendieck imposed the relation [V] = [V′] + [V″]. These generators and relations define K(X), and they imply that it is the universal way to assign invariants to vector bundles in a way compatible with exact sequences.
Grothendieck took the perspective that the Riemann–Roch theorem is a statement about morphisms of varieties, not the varieties themselves. He proved that there is a homomorphism from K(X) to the Chow groups of X coming from the Chern character and Todd class of X. Additionally, he proved that a proper morphism to a smooth variety Y determines a homomorphism called the pushforward. This gives two ways of determining an element in the Chow group of Y from a vector bundle on X: Starting from X, one can first compute the pushforward in K-theory and then apply the Chern character and Todd class of Y, or one can first apply the Chern character and Todd class of X and then compute the pushforward for Chow groups. The Grothendieck–Riemann–Roch theorem says that these are equal. When Y is a point, a vector bundle is a vector space, the class of a vector space is its dimension, and the Grothendieck–Riemann–Roch theorem specializes to Hirzebruch's theorem.
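In modern notation, the statement Grothendieck proved can be written as follows; this is the standard formulation of the Grothendieck–Riemann–Roch theorem rather than a quotation from his work. For a proper morphism f : X → Y of smooth varieties and a class α in K(X),

\[ \operatorname{ch}(f_{!}\alpha)\cdot\operatorname{td}(T_Y) \;=\; f_{*}\bigl(\operatorname{ch}(\alpha)\cdot\operatorname{td}(T_X)\bigr) \quad\text{in } \operatorname{CH}(Y)\otimes\mathbf{Q}, \]

where f! is the pushforward in K-theory, f* the pushforward on Chow groups, ch the Chern character, and td the Todd class of the tangent bundle.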
The group K(X) is now known as K0(X). Upon replacing vector bundles by projective modules, K0 also became defined for non-commutative rings, where it had applications to group representations. Atiyah and Hirzebruch quickly transported Grothendieck's construction to topology and used it to define topological K-theory. Topological K-theory was one of the first examples of an extraordinary cohomology theory: It associates to each topological space X (satisfying some mild technical constraints) a sequence of groups Kn(X) which satisfy all the Eilenberg–Steenrod axioms except the normalization axiom. The setting of algebraic varieties, however, is much more rigid, and the flexible constructions used in topology were not available. While the group K0 seemed to satisfy the necessary properties to be the beginning of a cohomology theory of algebraic varieties and of non-commutative rings, there was no clear definition of the higher Kn(X). Even as such definitions were developed, technical issues surrounding restriction and gluing usually forced Kn to be defined only for rings, not for varieties.
K0, K1, and K2
A group closely related to K1 for group rings was earlier introduced by J.H.C. Whitehead. Henri Poincaré had attempted to define the Betti numbers of a manifold in terms of a triangulation. His methods, however, had a serious gap: Poincaré could not prove that two triangulations of a manifold always yielded the same Betti numbers. It was clearly true that Betti numbers were unchanged by subdividing the triangulation, and therefore it was clear that any two triangulations that shared a common subdivision had the same Betti numbers. What was not known was that any two triangulations admitted a common subdivision. This hypothesis became a conjecture known as the Hauptvermutung (roughly "main conjecture"). The fact that triangulations were stable under subdivision led J.H.C. Whitehead to introduce the notion of simple homotopy type. A simple homotopy equivalence is defined in terms of adding simplices or cells to a simplicial complex or cell complex in such a way that each additional simplex or cell deformation retracts into a subdivision of the old space. Part of the motivation for this definition is that a subdivision of a triangulation is simple homotopy equivalent to the original triangulation, and therefore two triangulations that share a common subdivision must be simple homotopy equivalent. Whitehead proved that simple homotopy equivalence is a finer invariant than homotopy equivalence by introducing an invariant called the torsion. The torsion of a homotopy equivalence takes values in a group now called the Whitehead group and denoted Wh(π), where π is the fundamental group of the target complex. Whitehead found examples of non-trivial torsion and thereby proved that some homotopy equivalences were not simple. The Whitehead group was later discovered to be a quotient of K1(Zπ), where Zπ is the integral group ring of π. Later John Milnor used Reidemeister torsion, an invariant related to Whitehead torsion, to disprove the Hauptvermutung.
The first adequate definition of K1 of a ring was made by Hyman Bass and Stephen Schanuel. In topological K-theory, K1 is defined using vector bundles on a suspension of the space. All such vector bundles come from the clutching construction, where two trivial vector bundles on two halves of a space are glued along a common strip of the space. This gluing data is expressed using the general linear group, but elements of that group coming from elementary matrices (matrices corresponding to elementary row or column operations) define equivalent gluings. Motivated by this, the Bass–Schanuel definition of K1 of a ring R is K1(R) = GL(R)/E(R), where GL(R) is the infinite general linear group (the union of all GLn(R)) and E(R) is the subgroup of elementary matrices. They also provided a definition of K0 of a homomorphism of rings and proved that K0 and K1 could be fit together into an exact sequence similar to the relative homology exact sequence.
Work in K-theory from this period culminated in Bass' book Algebraic K-theory. In addition to providing a coherent exposition of the results then known, Bass improved many of the statements of the theorems. Of particular note is that Bass, building on his earlier work with Murthy, provided the first proof of what is now known as the fundamental theorem of algebraic K-theory. This is a four-term exact sequence relating K0 of a ring R to K1 of R, the polynomial ring R[t], and the localization R[t, t−1]. Bass recognized that this theorem provided a description of K0 entirely in terms of K1. By applying this description recursively, he produced negative K-groups K−n(R). In independent work, Max Karoubi gave another definition of negative K-groups for certain categories and proved that his definitions yielded the same groups as those of Bass.
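For reference, the fundamental theorem is usually written today as the exact sequence

\[ 0 \to K_1(R) \to K_1(R[t]) \oplus K_1(R[t^{-1}]) \to K_1(R[t,t^{-1}]) \to K_0(R) \to 0, \]

and the negative K-groups are then defined recursively by

\[ K_{-n}(R) \;=\; \operatorname{coker}\Bigl( K_{-n+1}(R[t]) \oplus K_{-n+1}(R[t^{-1}]) \to K_{-n+1}(R[t,t^{-1}]) \Bigr); \]

this is the standard modern formulation rather than a quotation from Bass's book.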
The next major development in the subject came with the definition of K2. Steinberg studied the universal central extensions of a Chevalley group over a field and gave an explicit presentation of this group in terms of generators and relations. In the case of the group En(k) of elementary matrices, the universal central extension is now written Stn(k) and called the Steinberg group. In the spring of 1967, John Milnor defined K2(R) to be the kernel of the homomorphism St(R) → GL(R). The group K2 further extended some of the exact sequences known for K1 and K0, and it had striking applications to number theory. Hideya Matsumoto's 1968 thesis showed that for a field F, K2(F) was isomorphic to:

\[ F^{\times} \otimes_{\mathbf{Z}} F^{\times} \,/\, \langle a \otimes (1-a) \mid a \neq 0, 1 \rangle. \]
This relation is also satisfied by the Hilbert symbol, which expresses the solvability of quadratic equations over local fields. In particular, John Tate was able to prove that K2(Q) is essentially structured around the law of quadratic reciprocity.
Higher K-groups
In the late 1960s and early 1970s, several definitions of higher K-theory were proposed. Swan and Gersten both produced definitions of Kn for all n, and Gersten proved that his and Swan's theories were equivalent, but the two theories were not known to satisfy all the expected properties. Nobile and Villamayor also proposed a definition of higher K-groups. Karoubi and Villamayor defined well-behaved K-groups for all n, but their equivalent of K1 was sometimes a proper quotient of the Bass–Schanuel K1. Their K-groups are now called KVn and are related to homotopy-invariant modifications of K-theory.
Inspired in part by Matsumoto's theorem, Milnor made a definition of the higher K-groups of a field. He referred to his definition as "purely ad hoc", and it neither appeared to generalize to all rings nor did it appear to be the correct definition of the higher K-theory of fields. Much later, it was discovered by Nesterenko and Suslin and by Totaro that Milnor K-theory is actually a direct summand of the true K-theory of the field. Specifically, K-groups have a filtration called the weight filtration, and the Milnor K-theory of a field is the highest weight-graded piece of the K-theory. Additionally, Thomason discovered that there is no analog of Milnor K-theory for a general variety.
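In the now-standard formulation (not Milnor's original notation), Milnor K-theory is the quotient of the tensor algebra of the unit group by the Steinberg relation:

\[ K^{M}_{*}(F) \;=\; T_{\mathbf{Z}}(F^{\times}) \,/\, \langle a \otimes (1-a) \mid a \in F^{\times},\ a \neq 1 \rangle, \]

so that K^M_0(F) = Z, K^M_1(F) = F×, and K^M_2(F) coincides with Matsumoto's description of K2(F).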
The first definition of higher K-theory to be widely accepted was Daniel Quillen's. As part of Quillen's work on the Adams conjecture in topology, he had constructed maps from the classifying spaces BGL(Fq) to the homotopy fiber of ψq − 1, where ψq is the qth Adams operation acting on the classifying space BU. This map is acyclic, and after modifying BGL(Fq) slightly to produce a new space BGL(Fq)+, the map became a homotopy equivalence. This modification was called the plus construction. The Adams operations had been known to be related to Chern classes and to K-theory since the work of Grothendieck, and so Quillen was led to define the K-theory of R as the homotopy groups of BGL(R)+. Not only did this recover K1 and K2, the relation of K-theory to the Adams operations allowed Quillen to compute the K-groups of finite fields.
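Quillen's computation for a finite field Fq, in its standard statement, reads

\[ K_0(\mathbf{F}_q) = \mathbf{Z}, \qquad K_{2i-1}(\mathbf{F}_q) = \mathbf{Z}/(q^{i}-1), \qquad K_{2i}(\mathbf{F}_q) = 0 \quad (i \geq 1), \]

one of the few complete computations of the higher K-groups of a ring.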
The classifying space BGL is connected, so Quillen's definition failed to give the correct value for K0. Additionally, it did not give any negative K-groups. Since K0 had a known and accepted definition it was possible to sidestep this difficulty, but it remained technically awkward. Conceptually, the problem was that the definition sprung from GL, which was classically the source of K1. Because GL knows only about gluing vector bundles, not about the vector bundles themselves, it was impossible for it to describe K0.
Inspired by conversations with Quillen, Segal soon introduced another approach to constructing algebraic K-theory under the name of Γ-objects. Segal's approach is a homotopy analog of Grothendieck's construction of K0. Where Grothendieck worked with isomorphism classes of bundles, Segal worked with the bundles themselves and used isomorphisms of the bundles as part of his data. This results in a spectrum whose homotopy groups are the higher K-groups (including K0). However, Segal's approach was only able to impose relations for split exact sequences, not general exact sequences. In the category of projective modules over a ring, every short exact sequence splits, and so Γ-objects could be used to define the K-theory of a ring. However, there are non-split short exact sequences in the category of vector bundles on a variety and in the category of all modules over a ring, so Segal's approach did not apply to all cases of interest.
In the spring of 1972, Quillen found another approach to the construction of higher K-theory which was to prove enormously successful. This new definition began with an exact category, a category satisfying certain formal properties similar to, but slightly weaker than, the properties satisfied by a category of modules or vector bundles. From this he constructed an auxiliary category using a new device called his "Q-construction." Like Segal's Γ-objects, the Q-construction has its roots in Grothendieck's definition of K0. Unlike Grothendieck's definition, however, the Q-construction builds a category, not an abelian group, and unlike Segal's Γ-objects, the Q-construction works directly with short exact sequences. If C is an abelian category, then QC is a category with the same objects as C but whose morphisms are defined in terms of short exact sequences in C. The K-groups of the exact category are the homotopy groups of ΩBQC, the loop space of the geometric realization (taking the loop space corrects the indexing). Quillen additionally proved his "+ = Q theorem": that his two definitions of K-theory agreed with each other. This yielded the correct K0 and led to simpler proofs, but still did not yield any negative K-groups.
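In summary, the two definitions can be written (in the now-standard notation, not quoted from Quillen) as

\[ K_n(R) = \pi_n\bigl(BGL(R)^{+}\bigr) \ \ (n \geq 1), \qquad K_n(\mathcal{C}) = \pi_n\bigl(\Omega BQ\mathcal{C}\bigr) = \pi_{n+1}\bigl(BQ\mathcal{C}\bigr) \ \ (n \geq 0), \]

and the "+ = Q" theorem asserts that when C is the category of finitely generated projective R-modules, the two agree.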
All abelian categories are exact categories, but not all exact categories are abelian. Because Quillen was able to work in this more general situation, he was able to use exact categories as tools in his proofs. This technique allowed him to prove many of the basic theorems of algebraic K-theory. Additionally, it was possible to prove that the earlier definitions of Swan and Gersten were equivalent to Quillen's under certain conditions.
K-theory now appeared to be a homology theory for rings and a cohomology theory for varieties. However, many of its basic theorems carried the hypothesis that the ring or variety in question was regular. One of the basic expected relations was a long exact sequence (called the "localization sequence") relating the K-theory of a variety X and an open subset U. Quillen was unable to prove the existence of the localization sequence in full generality. He was, however, able to prove its existence for a related theory called G-theory (or sometimes K′-theory). G-theory had been defined early in the development of the subject by Grothendieck. Grothendieck defined G0(X) for a variety X to be the free abelian group on isomorphism classes of coherent sheaves on X, modulo relations coming from exact sequences of coherent sheaves. In the categorical framework adopted by later authors, the K-theory of a variety is the K-theory of its category of vector bundles, while its G-theory is the K-theory of its category of coherent sheaves. Not only could Quillen prove the existence of a localization exact sequence for G-theory, he could prove that for a regular ring or variety, K-theory equaled G-theory, and therefore K-theory of regular varieties had a localization exact sequence. Since this sequence was fundamental to many of the facts in the subject, regularity hypotheses pervaded early work on higher K-theory.
Applications of algebraic K-theory in topology
The earliest application of algebraic K-theory to topology was Whitehead's construction of Whitehead torsion. A closely related construction was found by C. T. C. Wall in 1963. Wall found that a space X dominated by a finite complex has a generalized Euler characteristic taking values in a quotient of K0(Zπ), where π is the fundamental group of the space. This invariant is called Wall's finiteness obstruction because X is homotopy equivalent to a finite complex if and only if the invariant vanishes. Laurent Siebenmann in his thesis found an invariant similar to Wall's that gives an obstruction to an open manifold being the interior of a compact manifold with boundary. If two manifolds with boundary M and N have isomorphic interiors (in TOP, PL, or DIFF as appropriate), then the isomorphism between them defines an h-cobordism between M and N.
Whitehead torsion was eventually reinterpreted in a more directly K-theoretic way. This reinterpretation happened through the study of h-cobordisms. Two n-dimensional manifolds M and N are h-cobordant if there exists an (n + 1)-dimensional manifold with boundary W whose boundary is the disjoint union of M and N and for which the inclusions of M and N into W are homotopy equivalences (in the categories TOP, PL, or DIFF). Stephen Smale's h-cobordism theorem asserted that if n ≥ 5, W is compact, and M, N, and W are simply connected, then W is isomorphic to the cylinder M × [0, 1] (in TOP, PL, or DIFF as appropriate). This theorem proved the Poincaré conjecture for n ≥ 5.
If M and N are not assumed to be simply connected, then an h-cobordism need not be a cylinder. The s-cobordism theorem, due independently to Mazur, Stallings, and Barden, explains the general situation: An h-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion vanishes. This generalizes the h-cobordism theorem because the simple connectedness hypotheses imply that the relevant Whitehead group is trivial. In fact the s-cobordism theorem implies that there is a bijective correspondence between isomorphism classes of h-cobordisms and elements of the Whitehead group.
An obvious question associated with the existence of h-cobordisms is their uniqueness. The natural notion of equivalence is isotopy. Jean Cerf proved that for simply connected smooth manifolds M of dimension at least 5, isotopy of h-cobordisms is the same as a weaker notion called pseudo-isotopy. Hatcher and Wagoner studied the components of the space of pseudo-isotopies and related it to a quotient of K2(Zπ).
The proper context for the s-cobordism theorem is the classifying space of h-cobordisms. If M is a CAT manifold, then HCAT(M) is a space that classifies bundles of h-cobordisms on M. The s-cobordism theorem can be reinterpreted as the statement that the set of connected components of this space is the Whitehead group of π1(M). This space contains strictly more information than the Whitehead group; for example, the connected component of the trivial cobordism describes the possible cylinders on M and in particular is the obstruction to the uniqueness of a homotopy between a manifold and M × [0, 1]. Consideration of these questions led Waldhausen to introduce his algebraic K-theory of spaces. The algebraic K-theory of M is a space A(M) which is defined so that it plays essentially the same role for higher K-groups as K1(Zπ1(M)) does for M. In particular, Waldhausen showed that there is a map from A(M) to a space Wh(M) which generalizes the map K1(Zπ1(M)) → Wh(π1(M)) and whose homotopy fiber is a homology theory.
In order to fully develop A-theory, Waldhausen made significant technical advances in the foundations of K-theory. Waldhausen introduced Waldhausen categories, and for a Waldhausen category C he introduced a simplicial category S⋅C (the S is for Segal) defined in terms of chains of cofibrations in C. This freed the foundations of K-theory from the need to invoke analogs of exact sequences.
Algebraic topology and algebraic geometry in algebraic K-theory
Quillen suggested to his student Kenneth Brown that it might be possible to create a theory of sheaves of spectra of which K-theory would provide an example. The sheaf of K-theory spectra would, to each open subset of a variety, associate the K-theory of that open subset. Brown developed such a theory for his thesis. Simultaneously, Gersten had the same idea. At a Seattle conference in autumn of 1972, they together discovered a spectral sequence converging from the sheaf cohomology of $\mathcal{K}_n$, the sheaf of Kn-groups on X, to the K-group of the total space. This is now called the Brown–Gersten spectral sequence.
Spencer Bloch, influenced by Gersten's work on sheaves of K-groups, proved that on a regular surface, the cohomology group $H^2(X, \mathcal{K}_2)$ is isomorphic to the Chow group CH2(X) of codimension 2 cycles on X. Inspired by this, Gersten conjectured that for a regular local ring R with fraction field F, Kn(R) injects into Kn(F) for all n. Soon Quillen proved that this is true when R contains a field, and using this he proved that
$$H^p(X, \mathcal{K}_p) \cong \operatorname{CH}^p(X)$$
for all p. This is known as Bloch's formula. While progress has been made on Gersten's conjecture since then, the general case remains open.
Lichtenbaum conjectured that special values of the zeta function of a number field could be expressed in terms of the K-groups of the ring of integers of the field. These special values were known to be related to the étale cohomology of the ring of integers. Quillen therefore generalized Lichtenbaum's conjecture, predicting the existence of a spectral sequence like the Atiyah–Hirzebruch spectral sequence in topological K-theory. Quillen's proposed spectral sequence would start from the étale cohomology of a ring R and, in high enough degrees and after completing at a prime ℓ invertible in R, abut to the ℓ-adic completion of the K-theory of R. In the case studied by Lichtenbaum, the spectral sequence would degenerate, yielding Lichtenbaum's conjecture.
The necessity of localizing at a prime ℓ suggested to Browder that there should be a variant of K-theory with finite coefficients. He introduced K-theory groups Kn(R; Z/ℓ) which were Z/ℓ-vector spaces, and he found an analog of the Bott element in topological K-theory. Soulé used this theory to construct "étale Chern classes", an analog of topological Chern classes which took elements of algebraic K-theory to classes in étale cohomology. Unlike algebraic K-theory, étale cohomology is highly computable, so étale Chern classes provided an effective tool for detecting the existence of elements in K-theory. William G. Dwyer and Eric Friedlander then invented an analog of K-theory for the étale topology called étale K-theory. For varieties defined over the complex numbers, étale K-theory is isomorphic to topological K-theory. Moreover, étale K-theory admitted a spectral sequence similar to the one conjectured by Quillen. Thomason proved around 1980 that after inverting the Bott element, algebraic K-theory with finite coefficients became isomorphic to étale K-theory.
Throughout the 1970s and early 1980s, K-theory on singular varieties still lacked adequate foundations. While it was believed that Quillen's K-theory gave the correct groups, it was not known that these groups had all of the envisaged properties. For this, algebraic K-theory had to be reformulated. This was done by Thomason in a lengthy monograph which he co-credited to his late friend Thomas Trobaugh, who he said gave him a key idea in a dream. Thomason combined Waldhausen's construction of K-theory with the foundations of intersection theory described in volume six of Grothendieck's Séminaire de Géométrie Algébrique du Bois Marie. There, K0 was described in terms of complexes of sheaves on algebraic varieties. Thomason discovered that if one worked with complexes of sheaves in the derived category, there was a simple description of when a complex of sheaves could be extended from an open subset of a variety to the whole variety. By applying Waldhausen's construction of K-theory to derived categories, Thomason was able to prove that algebraic K-theory had all the expected properties of a cohomology theory.
In 1976, R. Keith Dennis discovered an entirely novel technique for computing K-theory based on Hochschild homology. This was based around the existence of the Dennis trace map, a homomorphism from K-theory to Hochschild homology. While the Dennis trace map seemed to be successful for calculations of K-theory with finite coefficients, it was less successful for rational calculations. Goodwillie, motivated by his "calculus of functors", conjectured the existence of a theory intermediate to K-theory and Hochschild homology. He called this theory topological Hochschild homology because its ground ring should be the sphere spectrum (considered as a ring whose operations are defined only up to homotopy). In the mid-1980s, Bökstedt gave a definition of topological Hochschild homology that satisfied nearly all of Goodwillie's conjectural properties, and this made possible further computations of K-groups. Bökstedt's version of the Dennis trace map was a transformation of spectra $K \to \mathrm{THH}$. This transformation factored through the fixed points of a circle action on THH, which suggested a relationship with cyclic homology. In the course of proving an algebraic K-theory analog of the Novikov conjecture, Bökstedt, Hsiang, and Madsen introduced topological cyclic homology, which bore the same relationship to topological Hochschild homology as cyclic homology did to Hochschild homology.
The Dennis trace map to topological Hochschild homology factors through topological cyclic homology, providing an even more detailed tool for calculations. In 1996, Dundas, Goodwillie, and McCarthy proved that topological cyclic homology has in a precise sense the same local structure as algebraic K-theory, so that if a calculation in K-theory or topological cyclic homology is possible, then many other "nearby" calculations follow.
Lower K-groups
The lower K-groups were discovered first, and given various ad hoc descriptions, which remain useful. Throughout, let A be a ring.
K0
The functor K0 takes a ring A to the Grothendieck group of the set of isomorphism classes of its finitely generated projective modules, regarded as a monoid under direct sum. Any ring homomorphism A → B gives a map K0(A) → K0(B) by mapping (the class of) a projective A-module M to M ⊗A B, making K0 a covariant functor.
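To make the Grothendieck group construction explicit, the following is a standard presentation by generators and relations (a routine unwinding of the definition above, not taken from any particular reference):

```latex
% One generator per isomorphism class of finitely generated projective module,
% one relation per direct sum.
K_0(A) \;=\; \big\langle\, [P] : P \ \text{finitely generated projective over}\ A \,\big\rangle
\;\big/\; \big( [P \oplus Q] = [P] + [Q] \big)
```

Every element of K0(A) is therefore a formal difference [P] − [Q], and [P] = [Q] holds in K0(A) exactly when P and Q are stably isomorphic, that is, P ⊕ An ≅ Q ⊕ An for some n.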
If the ring A is commutative, we can define a subgroup of K0(A) as the set
$$\widetilde{K}_0(A) = \bigcap_{\mathfrak{p} \in \operatorname{Spec}(A)} \ker(\dim_{\mathfrak{p}}),$$
where
$$\dim_{\mathfrak{p}} \colon K_0(A) \to \mathbf{Z}$$
is the map sending every (class of a) finitely generated projective A-module M to the rank of the free $A_{\mathfrak{p}}$-module $M_{\mathfrak{p}}$ (this module is indeed free, as any finitely generated projective module over a local ring is free). This subgroup is known as the reduced zeroth K-theory of A.
If B is a ring without an identity element, we can extend the definition of K0 as follows. Let A = B ⊕ Z be the extension of B to a ring with unity obtained by adjoining an identity element (0,1). There is a short exact sequence B → A → Z and we define K0(B) to be the kernel of the corresponding map K0(A) → K0(Z) = Z.
Examples
(Projective) modules over a field k are vector spaces and K0(k) is isomorphic to Z, by dimension.
Finitely generated projective modules over a local ring A are free and so in this case once again K0(A) is isomorphic to Z, by rank.
For A a Dedekind domain, K0(A) = Pic(A) ⊕ Z, where Pic(A) is the Picard group of A,
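As a concrete instance of the Dedekind domain case in the last item, consider the standard textbook example A = Z[√−5]; the facts used here (class number 2, with the class group generated by a non-principal ideal) are classical:

```latex
% Pic(Z[sqrt(-5)]) is cyclic of order 2, generated by the class of the
% non-principal ideal (2, 1 + sqrt(-5)).
K_0\big(\mathbf{Z}[\sqrt{-5}\,]\big) \;\cong\; \operatorname{Pic}\big(\mathbf{Z}[\sqrt{-5}\,]\big) \oplus \mathbf{Z}
\;\cong\; \mathbf{Z}/2 \oplus \mathbf{Z}
```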
An algebro-geometric variant of this construction is applied to the category of algebraic varieties; it associates with a given algebraic variety X the Grothendieck K-group of the category of locally free sheaves (or coherent sheaves) on X. Given a compact topological space X, the topological K-theory Ktop(X) of (real) vector bundles over X coincides with K0 of the ring of continuous real-valued functions on X.
Relative K0
Let I be an ideal of A and define the "double" to be the following subring of the Cartesian product A×A:
$$D(A,I) = \{ (x, y) \in A \times A : x - y \in I \}.$$
The relative K-group is defined in terms of the "double":
$$K_0(A,I) = \ker\big( K_0(D(A,I)) \to K_0(A) \big),$$
where the map is induced by projection along the first factor.
The relative K0(A,I) is isomorphic to K0(I), regarding I as a ring without identity. The independence from A is an analogue of the Excision theorem in homology.
K0 as a ring
If A is a commutative ring, then the tensor product of projective modules is again projective, and so tensor product induces a multiplication turning K0 into a commutative ring with the class [A] as identity. The exterior product similarly induces a λ-ring structure.
The Picard group embeds as a subgroup of the group of units K0(A)∗.
K1
Hyman Bass provided this definition, which generalizes the group of units of a ring: K1(A) is the abelianization of the infinite general linear group:
$$K_1(A) = \operatorname{GL}(A)^{\mathrm{ab}} = \operatorname{GL}(A) / [\operatorname{GL}(A), \operatorname{GL}(A)].$$
Here
$$\operatorname{GL}(A) = \varinjlim_n \operatorname{GL}(n, A)$$
is the direct limit of the GL(n), which embeds in GL(n + 1) as the upper left block matrix, and $[\operatorname{GL}(A), \operatorname{GL}(A)]$ is its commutator subgroup. Define an elementary matrix to be one which is the sum of an identity matrix and a single off-diagonal element (this is a subset of the elementary matrices used in linear algebra). Then Whitehead's lemma states that the group E(A) generated by elementary matrices equals the commutator subgroup [GL(A), GL(A)]. Indeed, the group GL(A)/E(A) was first defined and studied by Whitehead, and is called the Whitehead group of the ring A.
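In symbols (standard notation, spelled out here for convenience), the objects in the preceding paragraph are:

```latex
% Elementary matrices, the group they generate, and Whitehead's lemma.
e_{ij}(a) = I + a\,E_{ij} \ \ (i \neq j), \qquad
E(A) = \langle\, e_{ij}(a) \,\rangle, \qquad
E(A) = [\operatorname{GL}(A), \operatorname{GL}(A)]
```

The block-matrix identity that drives the proof of Whitehead's lemma is

```latex
\begin{pmatrix} g & 0 \\ 0 & g^{-1} \end{pmatrix} \in E(A)
\qquad \text{for every } g \in \operatorname{GL}(n, A)
```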
Relative K1
The relative K-group is defined in terms of the "double":
$$K_1(A,I) = \ker\big( K_1(D(A,I)) \to K_1(A) \big).$$
There is a natural exact sequence
$$K_1(A,I) \to K_1(A) \to K_1(A/I) \to K_0(A,I) \to K_0(A) \to K_0(A/I).$$
Commutative rings and fields
For A a commutative ring, one can define a determinant det: GL(A) → A* to the group of units of A, which vanishes on E(A) and thus descends to a map det: K1(A) → A*. As E(A) ◅ SL(A), one can also define the special Whitehead group SK1(A) := SL(A)/E(A). This map splits via the map A* → GL(1, A) → K1(A) (unit in the upper left corner), and hence is onto, and has the special Whitehead group as kernel, yielding the split short exact sequence:
$$1 \to SK_1(A) \to K_1(A) \to A^* \to 1,$$
which is a quotient of the usual split short exact sequence defining the special linear group, namely
$$1 \to \operatorname{SL}(A) \to \operatorname{GL}(A) \to A^* \to 1.$$
The determinant is split by including the group of units A* = GL1(A) into the general linear group GL(A), so K1(A) splits as the direct sum of the group of units and the special Whitehead group: K1(A) ≅ A* ⊕ SK1(A).
When A is a Euclidean domain (e.g. a field, or the integers) SK1(A) vanishes, and the determinant map is an isomorphism from K1(A) to A∗. This is false in general for PIDs, thus providing one of the rare mathematical features of Euclidean domains that do not generalize to all PIDs. An explicit PID such that SK1 is nonzero was given by Ischebeck in 1980 and by Grayson in 1981. If A is a Dedekind domain whose quotient field is an algebraic number field (a finite extension of the rationals), then a theorem of Bass, Milnor, and Serre shows that SK1(A) vanishes.
The vanishing of SK1 can be interpreted as saying that K1 is generated by the image of GL1 in GL. When this fails, one can ask whether K1 is generated by the image of GL2. For a Dedekind domain, this is the case: indeed, K1 is generated by the images of GL1 and SL2 in GL. The subgroup of SK1 generated by SL2 may be studied by Mennicke symbols. For Dedekind domains with all quotients by maximal ideals finite, SK1 is a torsion group.
For a non-commutative ring, the determinant cannot in general be defined, but the map GL(A) → K1(A) is a generalisation of the determinant.
Central simple algebras
In the case of a central simple algebra A over a field F, the reduced norm provides a generalisation of the determinant giving a map K1(A) → F∗ and SK1(A) may be defined as the kernel. Wang's theorem states that if A has prime degree then SK1(A) is trivial, and this may be extended to square-free degree. Wang also showed that SK1(A) is trivial for any central simple algebra over a number field, but Platonov has given examples of algebras of degree prime squared for which SK1(A) is non-trivial.
K2
John Milnor found the right definition of K2: it is the center of the Steinberg group St(A) of A.
It can also be defined as the kernel of the map
$$\operatorname{St}(A) \to \operatorname{GL}(A),$$
or as the Schur multiplier of the group of elementary matrices.
For a field, K2 is determined by Steinberg symbols: this leads to Matsumoto's theorem.
One can compute that K2 is zero for any finite field. The computation of K2(Q) is complicated: Tate proved
$$K_2(\mathbf{Q}) \cong \mathbf{Z}/2 \oplus \bigoplus_{p\ \text{odd prime}} \mathbf{F}_p^{\times}$$
and remarked that the proof followed Gauss's first proof of the Law of Quadratic Reciprocity.
For non-Archimedean local fields, the group K2(F) is the direct sum of a finite cyclic group of order m, say, and a divisible group K2(F)m.
We have K2(Z) = Z/2, and in general K2 is finite for the ring of integers of a number field.
We further have K2(Z/n) = Z/2 if n is divisible by 4, and otherwise zero.
Matsumoto's theorem
Matsumoto's theorem states that for a field k, the second K-group is given by
$$K_2(k) = k^{\times} \otimes_{\mathbf{Z}} k^{\times} \,\big/\, \langle a \otimes (1-a) : a \neq 0, 1 \rangle.$$
Matsumoto's original theorem is even more general: For any root system, it gives a presentation for the unstable K-theory. This presentation is different from the one given here only for symplectic root systems. For non-symplectic root systems, the unstable second K-group with respect to the root system is exactly the stable K-group for GL(A). Unstable second K-groups (in this context) are defined by taking the kernel of the universal central extension of the Chevalley group of universal type for a given root system. This construction yields the kernel of the Steinberg extension for the root systems An (n > 1) and, in the limit, stable second K-groups.
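Unwinding this presentation, K2(k) is generated by Steinberg symbols {a, b} subject only to bimultiplicativity and the Steinberg relation; these identities are standard consequences of the theorem, recorded here for reference:

```latex
\{a_1 a_2, b\} = \{a_1, b\}\,\{a_2, b\}, \qquad
\{a, b_1 b_2\} = \{a, b_1\}\,\{a, b_2\}, \qquad
\{a, 1 - a\} = 1 \ \ (a \neq 0, 1)
```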
Long exact sequences
If A is a Dedekind domain with field of fractions F then there is a long exact sequence
$$\cdots \to K_2(F) \to \bigoplus_{\mathfrak{p}} K_1(A/\mathfrak{p}) \to K_1(A) \to K_1(F) \to \bigoplus_{\mathfrak{p}} K_0(A/\mathfrak{p}) \to K_0(A) \to K_0(F) \to 0,$$
where p runs over all prime ideals of A.
There is also an extension of the exact sequence for relative K1 and K0:
$$K_2(A) \to K_2(A/I) \to K_1(A,I) \to K_1(A) \to K_1(A/I).$$
Pairing
There is a pairing on K1 with values in K2. Given commuting matrices X and Y over A, take elements x and y in the Steinberg group with X, Y as images. The commutator $x y x^{-1} y^{-1}$ is an element of K2. The map is not always surjective.
Milnor K-theory
The above expression for K2 of a field k led Milnor to the following definition of "higher" K-groups by
$$K^M_*(k) := T^*(k^{\times}) \,\big/\, \big( a \otimes (1-a) \big),$$
thus as graded parts of a quotient of the tensor algebra of the multiplicative group k× by the two-sided ideal generated by the
$$a \otimes (1-a), \qquad a \neq 0, 1.$$
For n = 0, 1, 2 these coincide with the K-groups defined below, but for n ≥ 3 they differ in general. For example, we have $K^M_n(\mathbf{F}_q) = 0$ for n ≥ 2,
but $K_n(\mathbf{F}_q)$ is nonzero for odd n (see below).
The tensor product on the tensor algebra induces a product $K^M_m \times K^M_n \to K^M_{m+n}$ making $K^M_*(k)$ a graded ring which is graded-commutative.
The images of elements $a_1 \otimes \cdots \otimes a_n$ in $K^M_n(k)$ are termed symbols, denoted $\{a_1, \ldots, a_n\}$. For integer m invertible in k there is a map
$$\partial \colon k^{\times} \to H^1(k, \mu_m),$$
where $\mu_m$ denotes the group of m-th roots of unity in some separable extension of k. This extends to
$$\partial^n \colon K^M_n(k) \to H^n(k, \mu_m^{\otimes n})$$
satisfying the defining relations of the Milnor K-group. Hence $\partial^n$ may be regarded as a map on $K^M_n(k)/m$, called the Galois symbol map.
The relation between étale (or Galois) cohomology of the field and Milnor K-theory modulo 2 is the Milnor conjecture, proven by Vladimir Voevodsky. The analogous statement for odd primes is the Bloch-Kato conjecture, proved by Voevodsky, Rost, and others.
Higher K-theory
The accepted definitions of higher K-groups were given by Quillen (1973), after a few years during which several incompatible definitions were suggested. The object of the program was to find definitions of K(R) and K(R,I) in terms of classifying spaces so that
R ⇒ K(R) and (R,I) ⇒ K(R,I) are functors into a homotopy category of spaces and the long exact sequence for relative K-groups arises as the long exact homotopy sequence of a fibration K(R,I) → K(R) → K(R/I).
Quillen gave two constructions, the "plus-construction" and the "Q-construction", the latter subsequently modified in different ways. The two constructions yield the same K-groups.
The +-construction
One possible definition of higher algebraic K-theory of rings was given by Quillen:
$$K_n(R) = \pi_n\big( BGL(R)^+ \big).$$
Here πn is a homotopy group, GL(R) is the direct limit of the general linear groups over R for the size of the matrix tending to infinity, B is the classifying space construction of homotopy theory, and the + is Quillen's plus construction. He originally found this idea while studying the group cohomology of $\operatorname{GL}_n(\mathbf{F}_q)$ and noted that some of his calculations were related to $K(\mathbf{F}_q)$.
This definition only holds for n > 0, so one often defines the higher algebraic K-theory via
$$K_n(R) = \pi_n\big( K_0(R) \times BGL(R)^+ \big).$$
Since BGL(R)+ is path connected and K0(R) discrete, this definition doesn't differ in higher degrees and also holds for n = 0.
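In low degrees the plus-construction recovers the classical groups defined earlier; the following identifications are standard consequences of the construction, stated here without proof:

```latex
\pi_1\big(BGL(R)^+\big) = \operatorname{GL}(R)/E(R) = K_1(R), \qquad
\pi_2\big(BGL(R)^+\big) \cong H_2\big(E(R); \mathbf{Z}\big) = K_2(R)
```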
The Q-construction
The Q-construction gives the same results as the +-construction, but it applies in more general situations. Moreover, the definition is more direct in the sense that the K-groups defined via the Q-construction are functorial by definition. This fact is not automatic in the plus-construction.
Suppose $\mathcal{P}$ is an exact category; associated to $\mathcal{P}$ a new category $Q\mathcal{P}$ is defined, objects of which are those of $\mathcal{P}$ and morphisms from M′ to M″ are isomorphism classes of diagrams
$$M' \longleftarrow N \longrightarrow M'',$$
where the first arrow is an admissible epimorphism and the second arrow is an admissible monomorphism. Note the morphisms in $Q\mathcal{P}$ are analogous to the definitions of morphisms in the category of motives, where morphisms are given as correspondences
$$X \longleftarrow Z \longrightarrow Y$$
such that the arrow on the left is a covering map (hence surjective) and the arrow on the right is injective. This category can then be turned into a topological space using the classifying space construction $B(Q\mathcal{P})$, which is defined to be the geometric realisation of the nerve of $Q\mathcal{P}$. Then, the i-th K-group of the exact category $\mathcal{P}$ is defined as
$$K_i(\mathcal{P}) = \pi_{i+1}\big( B(Q\mathcal{P}), 0 \big)$$
with a fixed zero-object $0$. Note the classifying space of a groupoid moves the homotopy groups up one degree, hence the shift in degrees for $K_i$ being $\pi_{i+1}$ of a space.
This definition coincides with the above definition of K0(P). If P is the category of finitely generated projective R-modules, this definition agrees with the above $BGL^+$ definition of Kn(R) for all n.
More generally, for a scheme X, the higher K-groups of X are defined to be the K-groups of (the exact category of) locally free coherent sheaves on X.
The following variant of this is also used: instead of finitely generated projective (= locally free) modules, take finitely generated modules. The resulting K-groups are usually written Gn(R). When R is a noetherian regular ring, then G- and K-theory coincide. Indeed, the global dimension of regular rings is finite, i.e. any finitely generated module has a finite projective resolution P* → M, and a simple argument shows that the canonical map K0(R) → G0(R) is an isomorphism, with $[M] = \sum_n (-1)^n [P_n]$. This isomorphism extends to the higher K-groups, too.
The S-construction
A third construction of K-theory groups is the S-construction, due to Waldhausen. It applies to categories with cofibrations (also called Waldhausen categories). This is a more general concept than exact categories.
Examples
While the Quillen algebraic K-theory has provided deep insight into various aspects of algebraic geometry and topology, the K-groups have proved particularly difficult to compute except in a few isolated but interesting cases. (See also: K-groups of a field.)
Algebraic K-groups of finite fields
The first and one of the most important calculations of the higher algebraic K-groups of a ring were made by Quillen himself for the case of finite fields:
If Fq is the finite field with q elements, then:
K0(Fq) = Z,
K2i(Fq) = 0 for i ≥ 1,
K2i−1(Fq) = Z/(q^i − 1)Z for i ≥ 1.
Quillen's computation has since been reproved using different methods.
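For instance, taking q = 5 in the formulas above gives:

```latex
K_1(\mathbf{F}_5) = \mathbf{Z}/4, \qquad
K_3(\mathbf{F}_5) = \mathbf{Z}/24, \qquad
K_5(\mathbf{F}_5) = \mathbf{Z}/124
```

since 5^1 − 1 = 4, 5^2 − 1 = 24, and 5^3 − 1 = 124, while all the positive even K-groups of F5 vanish.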
Algebraic K-groups of rings of integers
Quillen proved that if A is the ring of algebraic integers in an algebraic number field F (a finite extension of the rationals), then the algebraic K-groups of A are finitely generated. Armand Borel used this to calculate Ki(A) and Ki(F) modulo torsion. For example, for the integers Z, Borel proved that (modulo torsion)
Ki(Z)/tors. = 0 for positive i unless i = 4k + 1 with k positive,
K4k+1(Z)/tors. = Z for positive k.
The torsion subgroups of K2i+1(Z), and the orders of the finite groups K4k+2(Z) have recently been determined, but whether the latter groups are cyclic, and whether the groups K4k(Z) vanish depends upon Vandiver's conjecture about the class groups of cyclotomic integers. See Quillen–Lichtenbaum conjecture for more details.
Applications and open questions
Algebraic K-groups are used in conjectures on special values of L-functions and the formulation of a non-commutative main conjecture of Iwasawa theory and in construction of higher regulators.
Parshin's conjecture concerns the higher algebraic K-groups for smooth varieties over finite fields, and states that in this case the groups vanish up to torsion.
Another fundamental conjecture due to Hyman Bass (Bass' conjecture) says that all of the groups Gn(A) are finitely generated when A is a finitely generated Z-algebra. (The groups Gn(A) are the K-groups of the category of finitely generated A-modules.)
See also
Additive K-theory
Bloch's formula
Fundamental theorem of algebraic K-theory
Basic theorems in algebraic K-theory
K-theory
K-theory of a category
K-group of a field
K-theory spectrum
Redshift conjecture
Topological K-theory
Rigidity (K-theory)
Notes
References
Further reading
Pedagogical references
Higher Algebraic K-Theory: an overview
Historical references
Bökstedt, M., Topological Hochschild homology. Preprint, Bielefeld, 1986.
Bökstedt, M., Hsiang, W. C., Madsen, I., The cyclotomic trace and algebraic K-theory of spaces. Invent. Math., 111(3) (1993), 465–539.
Brown, K., Gersten, S., Algebraic K-theory as generalized sheaf cohomology, Algebraic K-theory I, Lecture Notes in Math., vol. 341, Springer-Verlag, 1973, pp. 266–292.
Dennis, R. K., Higher algebraic K-theory and Hochschild homology, unpublished preprint (1976).
Grothendieck, Alexander, Classes de faisceaux et théorème de Riemann–Roch, mimeographed notes, Princeton 1957.
Milnor, J., Introduction to Algebraic K-theory, Princeton Univ. Press, 1971.
Nobile, A., Villamayor, O., Sur la K-théorie algébrique, Annales Scientifiques de l'École Normale Supérieure, 4e série, 1, no. 3, 1968, 581–616.
Quillen, Daniel, Cohomology of groups, Proc. ICM Nice 1970, vol. 2, Gauthier-Villars, Paris, 1971, 47–52.
Quillen, Daniel, Higher algebraic K-theory I, Algebraic K-theory I, Lecture Notes in Math., vol. 341, Springer Verlag, 1973, 85–147.
Quillen, Daniel, Higher algebraic K-theory, Proc. Intern. Congress Math., Vancouver, 1974, vol. I, Canad. Math. Soc., 1975, pp. 171–176.
Siebenmann, Larry, The Obstruction to Finding a Boundary for an Open Manifold of Dimension Greater than Five, Thesis, Princeton University (1965).
Steinberg, R., Générateurs, relations et revêtements de groupes algébriques, Colloq. Théorie des Groupes Algébriques, Gauthier-Villars, Paris, 1962, pp. 113–127. (French)
Swan, Richard, Nonabelian homological algebra and K-theory, Proc. Sympos. Pure Math., vol. XVII, 1970, pp. 88–123.
Thomason, R. W., Algebraic K-theory and étale cohomology, Ann. Scient. Ec. Norm. Sup. 18, 4e serie (1985), 437–552; erratum 22 (1989), 675–677.
Thomason, R. W., Le principe de scindage et l'inexistence d'une K-théorie de Milnor globale, Topology 31, no. 3, 1992, 571–588.
Waldhausen, F., Algebraic K-theory of topological spaces. I, in Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 1, pp. 35–60, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978.
Waldhausen, F., Algebraic K-theory of spaces, in Algebraic and geometric topology (New Brunswick, N.J., 1983), Lecture Notes in Mathematics, vol. 1126 (1985), 318–419.
External links
K theory preprint archive
Algebraic geometry | Algebraic K-theory | [
"Mathematics"
] | 10,973 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
598,536 | https://en.wikipedia.org/wiki/Einstein%20ring | An Einstein ring, also known as an Einstein–Chwolson ring or Chwolson ring (named for Orest Chwolson), is created when light from a galaxy or star passes by a massive object en route to the Earth. Due to gravitational lensing, the light is diverted, making it seem to come from different places. If source, lens, and observer are all in perfect alignment (syzygy), the light appears as a ring.
Introduction
Gravitational lensing is predicted by Albert Einstein's theory of general relativity. Instead of light from a source traveling in a straight line (in three dimensions), it is bent by the presence of a massive body, which distorts spacetime. An Einstein ring is a special case of gravitational lensing, caused by the exact alignment of the source, lens, and observer. This results in symmetry around the lens, causing a ring-like structure.
The size of an Einstein ring is given by the Einstein radius. In radians, it is
$$\theta_E = \sqrt{\frac{4GM}{c^2} \, \frac{d_{LS}}{d_L \, d_S}},$$
where
G is the gravitational constant,
M is the mass of the lens,
c is the speed of light,
dL is the angular diameter distance to the lens,
dS is the angular diameter distance to the source, and
dLS is the angular diameter distance between the lens and the source.
Note that, over cosmological distances, $d_{LS} \neq d_S - d_L$ in general.
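As a quick order-of-magnitude check of the formula above, the following minimal C++ sketch evaluates θE for a galaxy-scale lens. All numerical inputs (the lens mass and the three angular diameter distances) are illustrative assumptions, not measurements of any particular system.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Physical constants (SI units).
    const double G    = 6.674e-11;   // gravitational constant, m^3 kg^-1 s^-2
    const double c    = 2.998e8;     // speed of light, m/s
    const double Msun = 1.989e30;    // solar mass, kg
    const double Mpc  = 3.086e22;    // megaparsec, m
    const double PI   = 3.14159265358979323846;

    // Illustrative assumptions for a galaxy-scale lens.
    const double M   = 1.0e12 * Msun;  // assumed lens mass
    const double dL  = 1000.0 * Mpc;   // assumed distance to the lens
    const double dS  = 2000.0 * Mpc;   // assumed distance to the source
    const double dLS = 1200.0 * Mpc;   // assumed lens-source distance
                                       // (not dS - dL in general, as noted above)

    // Einstein radius in radians, then converted to arcseconds.
    const double thetaE = std::sqrt((4.0 * G * M / (c * c)) * dLS / (dL * dS));
    const double arcsec = thetaE * (180.0 / PI) * 3600.0;

    std::printf("Einstein radius: %.3e rad = %.2f arcsec\n", thetaE, arcsec);
    return 0;
}
```

With these assumed inputs the result is a couple of arcseconds, consistent with the arcsecond-scale rings described below.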
History
The bending of light by a gravitational body was predicted by Albert Einstein in 1912, a few years before the publication of general relativity in 1916 (Renn et al. 1997). The ring effect was first mentioned in the academic literature by Orest Khvolson in a short article in 1924, in which he mentioned the “halo effect” of gravitation when the source, lens, and observer are in near-perfect alignment. Einstein remarked upon this effect in 1936 in a paper prompted by a letter by a Czech engineer, R W Mandl, but stated that "there is no hope of observing this phenomenon directly. First, we shall scarcely ever approach closely enough to such a central line. Second, the angle β will defy the resolving power of our instruments."
(In this statement, β is the Einstein angle currently denoted by $\theta_E$, as in the expression above.) However, Einstein was only considering the chance of observing Einstein rings produced by stars, which is low – the chance of observing those produced by larger lenses such as galaxies or black holes is higher since the angular size of an Einstein ring increases with the mass of the lens.
The first complete Einstein ring, designated B1938+666, was discovered by collaboration between astronomers at the University of Manchester and NASA's Hubble Space Telescope in 1998.
There have apparently not been any observations of a star forming an Einstein ring with another star, but there is a 45% chance of this happening in early May 2028, when Alpha Centauri A passes between us and a distant red star.
Known Einstein rings
Hundreds of gravitational lenses are currently known. About half a dozen of them are partial Einstein rings with diameters up to an arcsecond, although as either the mass distribution of the lenses is not perfectly axially symmetrical, or the source, lens, and observer are not perfectly aligned, we have yet to see a perfect Einstein ring. Most rings have been discovered in the radio range. The degree of completeness needed for an image seen through a gravitational lens to qualify as an Einstein ring is yet to be defined.
The first Einstein ring was discovered by Hewitt et al. (1988), who observed the radio source MG1131+0456 using the Very Large Array. This observation saw a quasar lensed by a nearer galaxy into two separate but very similar images of the same object, the images stretched round the lens into an almost complete ring. These dual images are another possible effect of the source, lens, and observer not being perfectly aligned.
The first complete Einstein ring to be discovered was B1938+666, which was found by King et al. (1998) via optical follow-up with the Hubble Space Telescope of a gravitational lens imaged with MERLIN. The galaxy causing the lens at B1938+666 is an ancient elliptical galaxy, and the image we see through the lens is a dark dwarf satellite galaxy, which we would otherwise not be able to see with current technology.
In 2005, the combined power of the Sloan Digital Sky Survey (SDSS) with the Hubble Space Telescope was used in the Sloan Lens ACS (SLACS) Survey to find 19 new gravitational lenses, 8 of which showed Einstein rings, these are the 8 shown in the adjacent image. As of 2009, this survey has found 85 confirmed gravitational lenses but there is not yet a number for how many show Einstein rings. This survey is responsible for most of the recent discoveries of Einstein rings in the optical range, following are some examples which were found:
FOR J0332-3557, discovered by Remi Cabanac et al. in 2005, notable for its high redshift, which allows us to use it to make observations about the early universe.
The "Cosmic Horseshoe" is a partial Einstein ring which was observed through the gravitational lens of LRG 3-757, a distinctively large Luminous Red Galaxy. It was discovered in 2007 by V. Belokurov et al.
SDSSJ0946+1006, the "double Einstein ring", was discovered by Raphael Gavazzi and Tommaso Treu in 2008, notable for the presence of multiple rings observed through the same gravitational lens, the significance of which is explained in the next section on extra rings.
Another example is the radio/X-ray Einstein ring around PKS 1830-211, which is unusually strong in radio. It was discovered in X-ray by Varsha Gupta et al. at the Chandra X-ray Observatory. It is also notable for being the first case of a quasar being lensed by an almost face-on spiral galaxy.
Galaxy MG1654+1346 features a radio ring. The image in the ring is that of a quasar radio lobe, discovered in 1989 by G. Langston et al.
In June 2023, a team of astronomers led by Justin Spilker announced their discovery of an Einstein ring of a distant galaxy rich in organic molecules (aromatic hydrocarbons).
Extra rings
Using the Hubble Space Telescope, a double ring has been found by Raphael Gavazzi of the STScI and Tommaso Treu of the University of California, Santa Barbara. This arises from the light from three galaxies at distances of 3, 6, and 11 billion light years. Such rings help in understanding the distribution of dark matter, dark energy, the nature of distant galaxies, and the curvature of the universe. The odds of finding such a double ring around a massive galaxy are 1 in 10,000. Sampling 50 suitable double rings would provide astronomers with a more accurate measurement of the dark matter content of the universe and the equation of state of the dark energy to within 10 percent precision.
Simulation
Below in the Gallery section is a simulation depicting a zoom on a Schwarzschild black hole in the plane of the Milky Way between us and the centre of the galaxy. The first Einstein ring is the most distorted region of the picture and shows the galactic disc. The zoom then reveals a series of 4 extra rings, increasingly thinner and closer to the black hole shadow. They are multiple images of the galactic disk. The first and third correspond to points which are behind the black hole (from the observer's position) and correspond here to the bright yellow region of the galactic disc (close to the galactic center), whereas the second and fourth correspond to images of objects which are behind the observer, which appear bluer, since the corresponding part of the galactic disc is thinner and hence dimmer here.
Gallery
See also
Einstein Cross
Einstein radius
SN Refsdal
References
Further reading
Effects of gravity
Ring
Optical phenomena
Gravitational lensing | Einstein ring | [
"Physics"
] | 1,603 | [
"Optical phenomena",
"Physical phenomena"
] |
598,663 | https://en.wikipedia.org/wiki/WECT%20tower | The WECT Tower was a 2,000 ft (609.6 m) tall mast used as an antenna for TV broadcasting, including the analog television signal of WECT channel 6. It was built in 1969 and was situated along NC 53 south of White Lake in Colly Township in Bladen County, North Carolina, United States. Before demolition, the WECT Tower was, along with several other masts, the seventh-tallest man-made structure ever created, and was not only the tallest structure in North Carolina but also the tallest in the United States east of the Mississippi River.
On September 8, 2008, WECT ceased regular transmission of its analog signal from the Bladen County tower, relying instead on its newer digital transmitter in Winnabow. Following the switch, the analog signal remained on air until the end of September as a "Nightlight", broadcasting an instructional video explaining installation of converters and UHF antennas. Many viewers who had received WECT's former VHF analog signal could no longer receive the station digitally at all, due to the shift to a UHF channel and a vastly smaller coverage area.
WECT continued to utilize the former analog tower for electronic news-gathering purposes before donating the tower and site to the Green Beret Foundation in 2011. On September 20, 2012 at 12:47 PM, the tower was demolished with explosives so that it could be scrapped. Proceeds from the sale of the land and of the tower's scrap metal went to the foundation.
See also
WECT
List of masts
List of tallest structures
List of towers, Table of masts
References
External links
Skyscraper Page diagrams
Towers completed in 1969
Buildings and structures in Bladen County, North Carolina
Towers in North Carolina
Radio masts and towers in the United States
Demolished buildings and structures in North Carolina
Buildings and structures demolished by controlled implosion
1969 establishments in North Carolina
2012 disestablishments in North Carolina
Buildings and structures demolished in 2012
Former radio masts and towers | WECT tower | [
"Engineering"
] | 402 | [
"Buildings and structures demolished by controlled implosion",
"Architecture"
] |
598,672 | https://en.wikipedia.org/wiki/Frank%E2%80%93Starling%20law | The Frank–Starling law of the heart (also known as Starling's law and the Frank–Starling mechanism) represents the relationship between stroke volume and end diastolic volume. The law states that the stroke volume of the heart increases in response to an increase in the volume of blood in the ventricles, before contraction (the end diastolic volume), when all other factors remain constant. As a larger volume of blood flows into the ventricle, the blood stretches cardiac muscle, leading to an increase in the force of contraction. The Frank-Starling mechanism allows the cardiac output to be synchronized with the venous return, arterial blood supply and humoral length, without depending upon external regulation to make alterations. The physiological importance of the mechanism lies mainly in maintaining left and right ventricular output equality.
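In the notation standard in cardiovascular physiology (general background definitions, not specific to this article's sources), stroke volume SV, end-diastolic volume EDV, and end-systolic volume ESV are related by:

```latex
% Stroke volume as the difference of end-diastolic and end-systolic volume,
% and the ejection fraction derived from it.
SV = EDV - ESV, \qquad EF = \frac{SV}{EDV}
```

The law states that, with all other factors held constant, SV rises as EDV rises.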
Physiology
The Frank-Starling mechanism occurs as the result of the length-tension relationship observed in striated muscle, including for example skeletal muscles, arthropod muscle and cardiac (heart) muscle. As striated muscle is stretched, active tension is created by altering the overlap of thick and thin filaments. The greatest isometric active tension is developed when a muscle is at its optimal length. In most relaxed skeletal muscle fibers, passive elastic properties maintain muscle fiber length near optimal, usually determined by the fixed distance between the attachment points of tendons to the bones (or the exoskeleton of arthropods) at either end of the muscle. In contrast, the relaxed sarcomere length of cardiac muscle cells, in a resting ventricle, is lower than the optimal length for contraction. There is no bone to fix sarcomere length in the heart (of any animal), so sarcomere length is very variable and depends directly upon blood filling and thereby expanding the heart chambers. In the human heart, maximal force is generated with an initial sarcomere length of 2.2 micrometers, a length which is rarely exceeded in a normal heart. Initial lengths larger or smaller than this optimal value will decrease the force the muscle can achieve. For longer sarcomere lengths, this is the result of there being less overlap of the thin and thick filaments; for shorter sarcomere lengths, the cause is the decreased sensitivity for calcium by the myofilaments. An increase in filling of the ventricle increases the load experienced by each cardiac muscle cell, stretching its sarcomeres toward their optimal length.
Stretching of the sarcomeres augments cardiac muscle contraction by increasing the calcium sensitivity of the myofibrils, causing a greater number of actin-myosin cross-bridges to form within the muscle. Specifically, the sensitivity of troponin for binding Ca2+ increases and there is an increased release of Ca2+ from the sarcoplasmic reticulum. In addition, stretch of cardiac myocytes increases the releasability of Ca2+ from the internal store, the sarcoplasmic reticulum, as shown by an increase in Ca2+ spark rate upon axial stretch of single cardiac myocytes. Finally, the spacing between thick and thin filaments is thought to decrease when a cardiac muscle is stretched, allowing an increased number of cross-bridges to form. The force that any single cardiac muscle cell generates is related to the sarcomere length at the time of muscle cell activation by calcium. The stretch on the individual cell, caused by ventricular filling, determines the sarcomere length of the fibres. Therefore, the force (pressure) generated by the cardiac muscle fibres is related to the end-diastolic volume of the left and right ventricles, as determined by the complexities of the force–sarcomere length relationship.
Due to the intrinsic property of myocardium that is responsible for the Frank-Starling mechanism, the heart can automatically accommodate an increase in venous return, at any heart rate. The mechanism is of functional importance because it serves to adapt left ventricular output to right ventricular output. If this mechanism did not exist and the right and left cardiac outputs were not equivalent, blood would accumulate in the pulmonary circulation (were the right ventricle producing more output than the left) or the systemic circulation (were the left ventricle producing more output than the right).
Clinical examples
Premature ventricular contraction
Premature ventricular contraction causes early emptying of the left ventricle (LV) into the aorta. Since the next ventricular contraction occurs at its regular time, the filling time for the LV increases, causing an increased LV end-diastolic volume. Due to the Frank–Starling mechanism, the next ventricular contraction is more forceful, leading to the ejection of the larger than normal volume of blood, and bringing the LV end-systolic volume back to baseline.
Diastolic dysfunction – heart failure
Diastolic dysfunction is associated with a reduced compliance, or increased stiffness, of the ventricle wall. This reduced compliance results in an inadequate filling of the ventricle and a decrease in the end-diastolic volume. The decreased end-diastolic volume then leads to a reduction in stroke volume because of the Frank-Starling mechanism.
History
The Frank–Starling law is named after the two physiologists, Otto Frank and Ernest Henry Starling. However, neither Frank nor Starling was the first to describe the relationship between the end-diastolic volume and the regulation of cardiac output. The first formulation of the law was theorized by the Italian physiologist Dario Maestrini, who on December 13, 1914, started the first of 19 experiments that led him to formulate the "legge del cuore" ("law of the heart").
Otto Frank's contributions are derived from his 1895 experiments on frog hearts. In order to relate the work of the heart to skeletal muscle mechanics, Frank observed changes in diastolic pressure with varying volumes of the frog ventricle. His data was analyzed on a pressure-volume diagram, which resulted in his description of peak isovolumic pressure and its effects on ventricular volume.
Starling experimented on intact mammalian hearts, such as from dogs, to understand why variations in arterial pressure, heart rate, and temperature do not affect the relatively constant cardiac output. More than 30 years before the development of the sliding filament model of muscle contraction and the understanding of the relationship between active tension and sarcomere length, Starling hypothesized in 1914, "the mechanical energy set free in the passage from the resting to the active state is a function of the length of the fiber." Starling used a volume-pressure diagram to construct a length-tension diagram from his data. Starling's data and associated diagrams, provided evidence that the length of the muscle fibers, and resulting tension, altered the systolic pressure.
See also
Starling equation
Total peripheral resistance
References
Cardiovascular physiology
Mathematics in medicine | Frank–Starling law | [
"Mathematics"
] | 1,436 | [
"Applied mathematics",
"Mathematics in medicine"
] |
598,704 | https://en.wikipedia.org/wiki/BD%20%28company%29 | Becton, Dickinson and Company (BD; also Becton Dickinson or Becton) is an American multinational medical technology company that manufactures and sells medical devices, instrument systems, and reagents. BD also provides consulting and analytics services in certain areas.
BD is ranked #211 in the 2024 Fortune 500 list based on its revenues for the fiscal year ending September 30, 2023.
History
The company was founded in 1897 in New York City by Maxwell Becton and Fairleigh S. Dickinson. It later moved its headquarters to New Jersey.
In 2004, BD agreed to pay out US$100 million to settle allegations from competitor Retractable Technologies that it had engaged in anti-competitive behavior to prevent the distribution of Retractable's syringes, which are designed to prevent needlestick injury. The lawsuit touched off a series of legal conflicts between the companies. Retractable would accuse BD of patent infringement after BD released a retractable needle of its own. Later Retractable would claim BD was falsely advertising its own brand of retractable needle as the “world’s sharpest needle”.
In October 2014, the company agreed to acquire CareFusion for a price of US$12.2 billion in cash and stock.
In April 2017, Becton, Dickinson and Company announced it would acquire C. R. Bard. The transaction was completed later that year, and C. R. Bard became a wholly owned subsidiary of BD, rebranded as Bard.
In 2024, BD announced it would acquire Edwards Lifesciences' critical care unit for $4.2 billion.
Finances
For the fiscal year 2017, Becton Dickinson reported earnings of US$1.030 billion, with an annual revenue of US$12.093 billion, an increase of 10.5% over the previous fiscal cycle. Becton Dickinson's shares traded at over US$192 per share, and its market capitalization was valued at over US$63 billion in November 2018.
Business segments
Currently there are three business segments.
BD Medical
In certain places, BD Medical also offers consulting and analytics related services. BD Medical's Consulting services are primarily targeted at hospitals, healthcare systems and networks of healthcare providers.
BD Life Sciences
Business units include Biosciences and Integrated Diagnostic Solutions.
Offerings include preanalytical solutions for sample management; immunology research, including flow cytometry and multiomics tools; microbiology and molecular diagnostics; lab automation and informatics; and differentiated reagents and assays.
BD Interventional
The company's line of plastic conical screwtop test tubes, known as 'Falcon tubes', is popular, and the name is sometimes used generically for such tubes.
Environmental and social track record
As of February 2010, BD was ranked 18th in the EPA Fortune 500 List of Green Power Purchasers. BD was also listed among the top 100 companies in Newsweek's 2009 Green Rankings ranking of the 500 largest American corporations based on environmental performance, policies, and reputation. BD placed third in the health care sector and 83rd overall. In addition, BD has been a component of the Dow Jones Sustainability World Index and the Dow Jones Sustainability North America Index for four and five consecutive years, respectively.
Pfitzer et al. (2013) identify BD's development of a needleless injection system as an example of leading businesses' role in creating shared value.
Health and safety issues
In April 2016, the Occupational Safety and Health Administration fined BD US$112,700 for safety violations. They found repeat and serious violations of health and safety law that had resulted in two employees having partial finger amputations.
In 2020, C.R. Bard, Inc. and its parent company BD were fined US$60 million for failing to adequately inform patients about health risks related to their transvaginal mesh devices.
Recalls
2007 Discardit II incident in Poland
In mid-2007, the firm's Discardit II series of syringes numbered 0607186 was withdrawn from hospitals and other medical services around Poland, about half a year after traces of dark dust were discovered in some syringes alleged to have been from this series. The newspaper Dziennik Online claimed that other series, such as 06022444, 0603266, and 0607297, were also suspected of being contaminated. BD recalled and tested the syringes in question, and revealed sterile particulates in 0.013 percent of the products.
2010 Q-Syte Luer and IV Catheter partial recall
In February 2010 BD announced a voluntary product recall of certain lots of BD Q-Syte Luer Access Devices and BD Nexiva Closed IV Catheter Systems. BD stated that the use of the affected devices may cause an air embolism or leakage of blood and/or therapy, which may result in serious injury or death. The approximately 2.8 million BD Q-Syte and 2.9 million BD Nexiva units containing 5 million BD Q-Syte devices that were recalled were distributed in the United States, Asia, Canada, Europe, Mexico, the Middle East, South Africa, and South America. The recall was initiated on Oct. 28, 2009 after BD received complaints of problems due to air entry through a part of the device. BD stated that the cause of the problem was manufacturing deviation and claimed that it corrected the problem. BD announced that it notified customers about the recall by letter and has been working with the U.S. Food and Drug Administration and worldwide health agencies to coordinate recall activities.
2021 IV Giving Sets
In March 2021 BD announced a recall of infusion sets for CC, GP, VP, GW/GW800, SE, and IVAC 590 Alaris Pumps and gravity infusion sets and connectors following the news that a supplier falsified sterilisation documents going back ten years.
See also
Becton, Dickinson and Company headquarters
List of biotech and pharmaceutical companies in the New York metropolitan area
References
External links
Companies listed on the New York Stock Exchange
Companies in the S&P 500 Dividend Aristocrats
Medical technology companies of the United States
Companies based in Bergen County, New Jersey
1897 establishments in New Jersey
Health care companies established in 1897
Franklin Lakes, New Jersey
Life sciences industry
Technology companies established in 1897
Health care companies based in New Jersey
American companies established in 1897 | BD (company) | [
"Biology"
] | 1,327 | [
"Life sciences industry"
] |
598,708 | https://en.wikipedia.org/wiki/Facultative%20anaerobic%20organism | A facultative anaerobic organism is an organism that makes ATP by aerobic respiration if oxygen is present, but is capable of switching to fermentation if oxygen is absent.
Some examples of facultatively anaerobic bacteria are Staphylococcus spp., Escherichia coli, Salmonella, Listeria spp., Shewanella oneidensis and Yersinia pestis. Certain eukaryotes are also facultative anaerobes, including fungi such as Saccharomyces cerevisiae and many aquatic invertebrates such as nereid polychaetes.
In mutants of Salmonella typhimurium that became either obligate aerobes or obligate anaerobes, varying levels of chromatin-remodeling proteins have been observed. The obligate aerobes were later found to have a defective DNA gyrase subunit A gene (gyrA), while obligate anaerobes were defective in topoisomerase I (topI). This indicates that topoisomerase I and its associated relaxation of chromosomal DNA is required for transcription of genes required for aerobic growth, while the opposite is true for DNA gyrase. Additionally, in Escherichia coli K-12 it has been noted that phosphofructokinase (PFK) exists as a dimer under aerobic conditions and as a tetramer under anaerobic conditions. Given PFK's role in glycolysis, this has implications for the effect of oxygen on the glucose metabolism of E. coli K-12 in relation to the mechanism of the Pasteur effect.
A core network of transcription factors (TFs) that includes the major oxygen-responsive regulators ArcA and FNR may control the adaptation of Escherichia coli to changes in oxygen availability. Activities of these two regulators are indicative of spatial effects that may affect gene expression in the microaerobic range. It has also been observed that these oxygen-sensitive proteins are protected within the cytoplasm by oxygen consumers within the cell membrane, known as terminal oxidases.
Functions
Facultative anaerobes are able to grow in both the presence and absence of oxygen due to the expression of both aerobic and anaerobic respiratory chains using either oxygen or an alternative electron acceptor. For example, in the absence of oxygen, E. coli can use fumarate, nitrate, nitrite, dimethyl sulfoxide, or trimethylamine oxide as an electron acceptor. This flexibility allows facultative anaerobes to survive in a number of environments, and in environments with frequently changing conditions.
Several species of protists use a facultative anaerobic metabolism to enhance their ATP production, and some can produce dihydrogen through this process.
As pathogens
Since facultative anaerobes can grow in both the presence and absence of oxygen, they can survive in many different environments, adapt easily to changing conditions, and thus have a selective advantage over other bacteria. As a result, most life-threatening pathogens are facultative anaerobes.
The ability of facultative anaerobic pathogens to survive without oxygen is important since their infection is shown to reduce oxygen levels in their host's gut tissue. Moreover, the ability of facultative anaerobes to limit oxygen levels at infection sites is beneficial to them and other bacteria, as dioxygen can form reactive oxygen species (ROS). These species are toxic to bacteria and can damage their DNA, among other constituents.
See also
Aerobic respiration
Anaerobic respiration
Fermentation
Obligate aerobe
Obligate anaerobe
Microaerophile
References
External links
Facultative Anaerobic Bacteria
Obligate Anaerobic Bacteria
Anaerobic Bacteria in the decomposition (stabilization) of organic matter.
Anaerobic respiration
Cellular respiration | Facultative anaerobic organism | [
"Chemistry",
"Biology"
] | 829 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
598,801 | https://en.wikipedia.org/wiki/Asmara-Massawa%20Cableway | The Asmara-Massawa Cableway was a cableway (or "ropeway") built in Italian Eritrea before World War II. The Eritrean Ropeway, completed in 1937, ran 71.8 km from the south end of Asmara to the city-port of Massawa.
History
The cableway was built by the Italian engineering firm Ceretti and Tanfani S.A. in Eritrea. It connected the port of Massawa with the city of Italian Asmara and ran a distance of nearly 72 kilometres. It also moved food, supplies and war materials for the Imperial Italian Army, which had also conquered Ethiopia in 1936. In August 1936 the first section of 26.6 km was opened from Ghinda to Godaif, a suburb of Asmara.
With the capacity to transport 30 tons of material every hour in each direction, from the seaport of Massawa up to Asmara at 2,326 meters above sea level, the cableway was the longest of its kind in the world when inaugurated in 1937. The line was divided into almost 30 sections, was powered by diesel engines, and carried freight in 1,540 small transport gondolas. In southern Eritrea there was another small ropeway.
During their eleven-year military administration (1941–1952) of the former Italian colony, the British dismantled the installations. They removed the diesel engines, the steel cables, and other equipment as war reparations. The iron towers that remained were scrapped in the 1980s.
See also
Eritrean Railway
Italian Eritrea
Notes
External links
Extensive article by Mike Metras
Facsimile of La Teleferica Massaua-Asmara cableway brochure, translated by Mike Metras, Dave Engstrom, and Renato Guadino
History of Asmara
Massawa
Transport in Eritrea
Vertical transport devices
Italian Eritrea
Italian East Africa
Aerial tramways | Asmara-Massawa Cableway | [
"Technology"
] | 371 | [
"Vertical transport devices",
"Transport systems"
] |
598,811 | https://en.wikipedia.org/wiki/Alkaliphile | Alkaliphiles are a class of extremophilic microbes capable of survival in alkaline (pH roughly 8.5–11) environments, growing optimally around a pH of 10. These bacteria can be further categorized as obligate alkaliphiles (those that require high pH to survive), facultative alkaliphiles (those able to survive in high pH, but also grow under normal conditions) and haloalkaliphiles (those that require high salt content to survive).
Background information
Microbial growth in alkaline conditions presents several complications to normal biochemical activity and reproduction, as high pH is detrimental to normal cellular processes. For example, alkalinity can lead to denaturation of DNA, instability of the plasma membrane and inactivation of cytosolic enzymes, as well as other unfavorable physiological changes. Thus, to adequately circumvent these obstacles, alkaliphiles must either possess specific cellular machinery that works best in the alkaline range, or they must have methods of acidifying the cytosol in relation to the extracellular environment. Experiments aimed at deciding between these possibilities have demonstrated that alkaliphilic enzymes possess relatively normal pH optima. The determination that these enzymes function most efficiently near physiologically neutral pH ranges (about 7.5–8.5) was one of the primary steps in elucidating how alkaliphiles survive intensely basic environments. Since the cytosolic pH must remain nearly neutral, alkaliphiles must have one or more mechanisms of acidifying the cytosol when in the presence of a highly alkaline environment.
Mechanisms of cytosolic acidification
Alkaliphiles maintain cytosolic acidification through both passive and active means. In passive acidification, it has been proposed that cell walls contain acidic polymers composed of residues such as galacturonic acid, gluconic acid, glutamic acid, aspartic acid, and phosphoric acid. Together, these residues form an acidic matrix that helps protect the plasma membrane from alkaline conditions by preventing the entry of hydroxide ions and allowing for the uptake of sodium and hydronium ions. In addition, the peptidoglycan in alkaliphilic B. subtilis has been observed to contain higher levels of hexosamines and amino acids as compared to its neutrophilic counterpart. When alkaliphiles lose these acidic residues through induced mutations, their ability to grow in alkaline conditions is severely hindered. However, it is generally agreed that passive methods of cytosolic acidification are not sufficient to maintain an internal pH 2–2.3 units below the external pH; there must also be active forms of acidification. The best characterized method of active acidification is the Na+/H+ antiporter. In this model, H+ ions are first extruded through the electron transport chain in respiring cells, and to some extent through an ATPase in fermentative cells. This proton extrusion establishes a proton gradient that drives electrogenic antiporters, which expel intracellular Na+ in exchange for a greater number of H+ ions, leading to a net accumulation of internal protons. This proton accumulation lowers the cytosolic pH. The expelled Na+ can be used for solute symport, which is necessary for cellular processes. It has been noted that Na+/H+ antiport is required for alkaliphilic growth, whereas either K+/H+ antiporters or Na+/H+ antiporters can be utilized by neutrophilic bacteria. If Na+/H+ antiporters are disabled through mutation or other means, the bacteria are rendered neutrophilic. The sodium required for this antiport system is the reason some alkaliphiles can grow only in saline environments.
Differences in alkaliphilic ATP production
In addition to the method of proton extrusion discussed above, the general method of cellular respiration is believed to differ in obligate alkaliphiles as compared to neutrophiles. Generally, ATP production operates by establishing a proton gradient (greater H+ concentration outside the membrane) and a transmembrane electrical potential (with a positive charge outside the membrane). Since alkaliphiles have a reversed pH gradient, it would seem that ATP production, which relies on a strong proton-motive force, should be severely reduced. The opposite, however, is true. It has been proposed that while the pH gradient is reversed, the transmembrane electrical potential is greatly increased, so that each translocated proton driven through an ATPase yields a greater amount of ATP. Research in this area is ongoing.
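The size of the problem can be made concrete with the standard chemiosmotic relation for the proton-motive force, Δp = Δψ − (2.303RT/F)·ΔpH, with ΔpH = pH_in − pH_out. A minimal sketch follows, using illustrative round-number values rather than measurements from any particular organism:
# Proton-motive force (PMF) in millivolts: delta_p = delta_psi - Z * delta_pH,
# where Z = 2.303*R*T/F (about 59 mV at 25 C) and delta_pH = pH_in - pH_out.
Z_MV = 59.0
def pmf_mv(delta_psi_mv: float, ph_in: float, ph_out: float) -> float:
    """Total proton-motive force; more negative means more driving force."""
    return delta_psi_mv - Z_MV * (ph_in - ph_out)
# Neutrophile: the pH term adds to the membrane potential.
print(pmf_mv(-120.0, ph_in=7.8, ph_out=7.0))   # about -167 mV
# Alkaliphile: the reversed pH gradient opposes the membrane potential,
# leaving a much smaller total PMF despite a larger delta_psi.
print(pmf_mv(-180.0, ph_in=8.3, ph_out=10.5))  # about -50 mV
The puzzle described above is that alkaliphiles synthesize ATP robustly despite this apparently diminished total driving force.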
Applications and future research
Alkaliphiles promise several interesting uses for biotechnology and future research. Alkaliphilic methods of regulating pH and producing ATP are of interest to the scientific community, but perhaps the greatest interest in alkaliphiles lies in their enzymes (alkaline proteases, starch-degrading enzymes, cellulases, lipases, xylanases, pectinases, and chitinases) and their metabolites, including 2-phenylamine, carotenoids, siderophores, cholic acid derivatives, and organic acids. It is hoped that further research into alkaliphilic enzymes will allow scientists to harness them for use under basic conditions. Research aimed at discovering alkaliphile-produced antibiotics has shown some success, but has been hampered by the fact that some products produced at high pH are unstable and unusable at physiological pH.
Examples
Examples of alkaliphiles include Halorhodospira halochloris, Natronomonas pharaonis, and Thiohalospira alkaliphila.
See also
Acidophile
Acidophobe
Extremophile
Neutrophile
References
Biochemical reactions
Extremophiles | Alkaliphile | [
"Chemistry",
"Biology",
"Environmental_science"
] | 1,279 | [
"Biochemical reactions",
"Organisms by adaptation",
"Extremophiles",
"Alkaliphiles",
"Bacteria",
"Biochemistry",
"Environmental microbiology"
] |
598,913 | https://en.wikipedia.org/wiki/Run-time%20type%20information | In computer programming, run-time type information or run-time type identification (RTTI) is a feature of some programming languages (such as C++, Object Pascal, and Ada) that exposes information about an object's data type at runtime. Run-time type information may be available for all types or only to types that explicitly have it (as is the case with Ada). Run-time type information is a specialization of a more general concept called type introspection.
In the original C++ design, Bjarne Stroustrup did not include run-time type information, because he thought this mechanism was often misused.
Overview
In C++, RTTI can be used to do safe typecasts using the dynamic_cast<> operator, and to manipulate type information at runtime using the typeid operator and std::type_info class. In Object Pascal, RTTI can be used to perform safe type casts with the as operator, test the class to which an object belongs with the is operator, and manipulate type information at run time with classes contained in the RTTI unit (i.e. classes: TRttiContext, TRttiInstanceType, etc.). In Ada, objects of tagged types also store a type tag, which permits the identification of the type of these object at runtime. The in operator can be used to test, at runtime, if an object is of a specific type and may be safely converted to it.
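For comparison, dynamically typed languages build equivalent run-time type queries into the language itself. The following Python sketch is purely illustrative of the same two operations (type identification and a checked downcast); it is not part of the C++, Object Pascal, or Ada mechanisms described above:
class Person:
    pass
class Employee(Person):
    pass
obj: Person = Employee()
# Rough analogue of C++ typeid / the Object Pascal class reference:
print(type(obj).__name__)  # prints "Employee", the dynamic type
# Rough analogue of a checked downcast (dynamic_cast, or Pascal's 'is'):
if isinstance(obj, Employee):
    print("obj can safely be treated as an Employee")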
RTTI is available only for classes that are polymorphic, which means they have at least one virtual method. In practice, this is not much of a limitation, because base classes intended for polymorphic use should declare a virtual destructor anyway, so that objects of derived classes perform proper cleanup when deleted through a base pointer.
Some compilers have flags to disable RTTI. Using these flags may reduce the overall size of the application, making them especially useful when targeting systems with a limited amount of memory.
C++ – typeid
The typeid reserved word (keyword) is used to determine the class of an object at runtime. It returns a reference to a std::type_info object, which exists until the end of the program. The use of typeid in a non-polymorphic context is often preferred over dynamic_cast<class_type> in situations where just the class information is needed, because typeid is always a constant-time procedure, whereas dynamic_cast may need to traverse the class derivation lattice of its argument at runtime. Some aspects of the returned object are implementation-defined, such as std::type_info::name(), and cannot be relied on to be consistent across compilers.
Objects of class std::bad_typeid are thrown when the expression for typeid is the result of applying the unary * operator on a null pointer. Whether an exception is thrown for other null reference arguments is implementation-dependent. In other words, for the exception to be guaranteed, the expression must take the form typeid(*p) where p is any expression resulting in a null pointer.
Example
#include <iostream>
#include <typeinfo>
class Person {
public:
virtual ~Person() = default;
};
class Employee : public Person {};
int main() {
Person person;
Employee employee;
Person* ptr = &employee;
Person& ref = employee;
// The string returned by typeid::name is implementation-defined.
std::cout << typeid(person).name()
<< std::endl; // Person (statically known at compile-time).
std::cout << typeid(employee).name()
<< std::endl; // Employee (statically known at compile-time).
std::cout << typeid(ptr).name()
<< std::endl; // Person* (statically known at compile-time).
std::cout << typeid(*ptr).name()
<< std::endl; // Employee (looked up dynamically at run-time
// because it is the dereference of a
// pointer to a polymorphic class).
std::cout << typeid(ref).name()
<< std::endl; // Employee (references can also be polymorphic)
Person* p = nullptr;
try {
typeid(*p); // Not undefined behavior; throws std::bad_typeid.
} catch (...) { }
Person& p_ref = *p; // Undefined behavior: dereferencing null
typeid(p_ref); // does not meet requirements to throw std::bad_typeid
// because the expression for typeid is not the result
// of applying the unary * operator.
}
Output (exact output varies by system and compiler):
Person
Employee
Person*
Employee
Employee
C++ – dynamic_cast and Java cast
The dynamic_cast operator in C++ is used for downcasting a reference or pointer to a more specific type in the class hierarchy. Unlike the static_cast, the target of the dynamic_cast must be a pointer or reference to class. Unlike static_cast and C-style typecast (where type check occurs while compiling), a type safety check is performed at runtime. If the types are not compatible, an exception will be thrown (when dealing with references) or a null pointer will be returned (when dealing with pointers).
A Java typecast behaves similarly; if the object being cast is not actually an instance of the target type, and cannot be converted to one by a language-defined method, an instance of java.lang.ClassCastException will be thrown.
Example
Suppose some function takes an object of type A as its argument, and wishes to perform some additional operation if the object passed is an instance of B, a subclass of A. This can be done using dynamic_cast as follows.
#include <array>
#include <iostream>
#include <memory>
#include <typeinfo>
using namespace std;
class A {
public:
// Since RTTI is included in the virtual method table there should be at
// least one virtual function.
virtual ~A() = default;
void MethodSpecificToA() {
cout << "Method specific for A was invoked" << endl;
}
};
class B: public A {
public:
void MethodSpecificToB() {
cout << "Method specific for B was invoked" << endl;
}
};
void MyFunction(A& my_a) {
try {
// Cast will be successful only for B type objects.
B& my_b = dynamic_cast<B&>(my_a);
my_b.MethodSpecificToB();
} catch (const bad_cast& e) {
cerr << " Exception " << e.what() << " thrown." << endl;
cerr << " Object is not of type B" << endl;
}
}
int main() {
array<unique_ptr<A>, 3> array_of_a; // Array of pointers to base class A.
array_of_a[0] = make_unique<B>(); // Pointer to B object.
array_of_a[1] = make_unique<B>(); // Pointer to B object.
array_of_a[2] = make_unique<A>(); // Pointer to A object.
for (int i = 0; i < 3; ++i)
MyFunction(*array_of_a[i]);
}
Console output:
Method specific for B was invoked
Method specific for B was invoked
Exception std::bad_cast thrown.
Object is not of type B
A similar version of MyFunction can be written with pointers instead of references:
void MyFunction(A* my_a) {
B* my_b = dynamic_cast<B*>(my_a);
if (my_b != nullptr)
my_b->MethodSpecificToB();
else
std::cerr << " Object is not B type" << std::endl;
}
Object Pascal, Delphi
In Object Pascal and Delphi, the operator is is used to check the type of a class at runtime. It tests the belonging of an object to a given class, including classes of individual ancestors present in the inheritance hierarchy tree (e.g. Button1 is a TButton class that has ancestors: TWinControl → TControl → TComponent → TPersistent → TObject, where the latter is the ancestor of all classes). The operator as is used when an object needs to be treated at run time as if it belonged to an ancestor class.
The RTTI unit is used to manipulate object type information at run time. This unit contains a set of classes that allow you to: get information about an object's class and its ancestors, properties, methods and events, change property values and call methods. The following example shows the use of the RTTI unit to obtain information about the class to which an object belongs, to create it, and to call its method. The example assumes that the TSubject class has been declared in a unit named SubjectUnit.
uses
RTTI, SubjectUnit;
procedure WithoutReflection;
var
MySubject: TSubject;
begin
MySubject := TSubject.Create;
try
MySubject.Hello;
finally
MySubject.Free;
end;
end;
procedure WithReflection;
var
RttiContext: TRttiContext;
RttiType: TRttiInstanceType;
Subject: TObject;
begin
RttiType := RttiContext.FindType('SubjectUnit.TSubject') as TRttiInstanceType;
Subject := RttiType.GetMethod('Create').Invoke(RttiType.MetaclassType, []).AsObject;
try
RttiType.GetMethod('Hello').Invoke(Subject, []);
finally
Subject.Free;
end;
end;
See also
Type inference
Type introspection
typeof
Reflection (computer science)
Template (C++)
References
External links
dynamic_cast operator at IBM Mac OS X Compilers
dynamic_cast operator at MSDN
C++
Class (computer programming)
Data types
Programming language comparisons
Articles with example C++ code
Articles with example Java code
Articles with example Pascal code | Run-time type information | [
"Technology"
] | 2,298 | [
"Programming language comparisons",
"Computing comparisons"
] |
598,949 | https://en.wikipedia.org/wiki/Integrated%20circuit%20layout | In integrated circuit design, integrated circuit (IC) layout, also known as IC mask layout or mask design, is the representation of an integrated circuit in terms of planar geometric shapes which correspond to the patterns of metal, oxide, or semiconductor layers that make up the components of the integrated circuit. Originally the overall process was called tapeout because early ICs were laid out with black crepe tape on mylar media for photographic imaging; the term is sometimes erroneously believed to refer to magnetic data, but the photographic process greatly predated the use of magnetic media here.
When using a standard process—where the interaction of the many chemical, thermal, and photographic variables is known and carefully controlled—the behaviour of the final integrated circuit depends largely on the positions and interconnections of the geometric shapes. Using a computer-aided layout tool, the layout engineer—or layout technician—places and connects all of the components that make up the chip such that they meet certain criteria—typically: performance, size, density, and manufacturability. This practice is often subdivided between two primary layout disciplines: analog and digital.
The generated layout must pass a series of checks in a process known as physical verification. The most common checks in this verification process are
Design rule checking (DRC; a toy sketch follows this list),
Layout versus schematic (LVS),
parasitic extraction,
antenna rule checking, and
electrical rule checking (ERC).
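As a toy illustration of design rule checking, the sketch below implements one of the simplest rules, a per-layer minimum feature width over axis-aligned rectangles. The layer names and rule values are invented for illustration; production rule decks also cover spacing, enclosure, density, and many other checks.
from dataclasses import dataclass
@dataclass
class Rect:
    layer: str
    x0: float
    y0: float
    x1: float
    y1: float
# Hypothetical rule table: minimum feature width per layer, in micrometres.
MIN_WIDTH_UM = {"metal1": 0.10, "poly": 0.05}
def drc_min_width(shapes):
    """Return a violation message for every rectangle narrower than its
    layer's minimum width (checked in both x and y)."""
    violations = []
    for s in shapes:
        width = min(s.x1 - s.x0, s.y1 - s.y0)
        limit = MIN_WIDTH_UM.get(s.layer)
        if limit is not None and width < limit:
            violations.append(
                f"{s.layer} rect at ({s.x0}, {s.y0}): width {width:.3f} um "
                f"< min {limit:.3f} um")
    return violations
layout = [Rect("metal1", 0, 0, 5.0, 0.08),  # too narrow: 0.08 < 0.10
          Rect("poly", 0, 0, 1.0, 0.06)]    # passes: 0.06 >= 0.05
for v in drc_min_width(layout):
    print(v)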
When all verification is complete, layout post processing is applied where the data is also translated into an industry-standard format, typically GDSII, and sent to a semiconductor foundry. The milestone completion of the layout process of sending this data to the foundry is now colloquially called "tapeout". The foundry converts the data into mask data and uses it to generate the photomasks used in a photolithographic process of semiconductor device fabrication.
In the early, simpler days of IC design, layout was done by hand using opaque tapes and films, a practice inherited from the early days of printed circuit board (PCB) design and the origin of the term "tape-out".
Modern IC layout is done with the aid of IC layout editor software, mostly automatically using EDA tools, including place and route tools or schematic-driven layout tools.
Typically this involves a library of standard cells.
The manual operation of choosing and positioning the geometric shapes is informally known as "polygon pushing".
See also
Interconnects (integrated circuits)
Physical design (electronics)
Printed circuit board
Integrated circuit design
Floorplan (microelectronics)
References
Further reading
Clein, D. (2000). CMOS IC Layout. Newnes.
Hastings, A. (2005). The Art of Analog Layout. Prentice Hall.
Saint, Ch. and J. (2002). IC Layout Basics. McGraw-Hill.
Electronic design
Electronic design automation
Integrated circuits | Integrated circuit layout | [
"Technology",
"Engineering"
] | 569 | [
"Computer engineering",
"Electronic design",
"Electronic engineering",
"Design",
"Integrated circuits"
] |
598,971 | https://en.wikipedia.org/wiki/Fisher%20information | In mathematical statistics, the Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information.
The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized and explored by the statistician Sir Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test.
In Bayesian statistics, the Fisher information plays a role in the derivation of non-informative prior distributions according to Jeffreys' rule. It also appears as the large-sample covariance of the posterior distribution, provided that the prior is sufficiently smooth (a result known as Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families). The same result is used when approximating the posterior with Laplace's approximation, where the Fisher information appears as the covariance of the fitted Gaussian.
Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information. The level of the maximum depends upon the nature of the system constraints.
Definition
The Fisher information is a way of measuring the amount of information that an observable random variable $X$ carries about an unknown parameter $\theta$ upon which the probability of $X$ depends. Let $f(X; \theta)$ be the probability density function (or probability mass function) for $X$ conditioned on the value of $\theta$. It describes the probability that we observe a given outcome of $X$, given a known value of $\theta$. If $f$ is sharply peaked with respect to changes in $\theta$, it is easy to indicate the "correct" value of $\theta$ from the data, or equivalently, the data provides a lot of information about the parameter $\theta$. If $f$ is flat and spread-out, then it would take many samples of $X$ to estimate the actual "true" value of $\theta$ that would be obtained using the entire population being sampled. This suggests studying some kind of variance with respect to $\theta$.
Formally, the partial derivative with respect to $\theta$ of the natural logarithm of the likelihood function is called the score. Under certain regularity conditions, if $\theta$ is the true parameter (i.e. $X$ is actually distributed as $f(X; \theta)$), it can be shown that the expected value (the first moment) of the score, evaluated at the true parameter value $\theta$, is 0:
$$\operatorname{E}\left[\left.\frac{\partial}{\partial\theta} \log f(X;\theta)\,\right|\,\theta\right] = 0.$$
The Fisher information is defined to be the variance of the score:
$$\mathcal{I}(\theta) = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^{2}\,\right|\,\theta\right] = \int \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^{2} f(x;\theta)\,dx.$$
Note that $\mathcal{I}(\theta) \geq 0$. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable X has been averaged out.
If $\log f(x;\theta)$ is twice differentiable with respect to $\theta$, and under certain additional regularity conditions, then the Fisher information may also be written as
$$\mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^{2}}{\partial\theta^{2}} \log f(X;\theta)\,\right|\,\theta\right],$$
since
$$\frac{\partial^{2}}{\partial\theta^{2}} \log f(X;\theta) = \frac{\frac{\partial^{2}}{\partial\theta^{2}} f(X;\theta)}{f(X;\theta)} - \left(\frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X;\theta)}\right)^{2}$$
and
$$\operatorname{E}\left[\left.\frac{\frac{\partial^{2}}{\partial\theta^{2}} f(X;\theta)}{f(X;\theta)}\,\right|\,\theta\right] = \frac{\partial^{2}}{\partial\theta^{2}} \int f(x;\theta)\,dx = 0.$$
Thus, the Fisher information may be seen as the curvature of the support curve (the graph of the log-likelihood). Near the maximum likelihood estimate, low Fisher information therefore indicates that the maximum appears "blunt", that is, the maximum is shallow and there are many nearby values with a similar log-likelihood. Conversely, high Fisher information indicates that the maximum is sharp.
Regularity conditions
The regularity conditions are as follows:
The partial derivative of f(X; θ) with respect to θ exists almost everywhere. (It can fail to exist on a null set, as long as this set does not depend on θ.)
The integral of f(X; θ) can be differentiated under the integral sign with respect to θ.
The support of f(X; θ) does not depend on θ.
If θ is a vector then the regularity conditions must hold for every component of θ. It is easy to find an example of a density that does not satisfy the regularity conditions: The density of a Uniform(0, θ) variable fails to satisfy conditions 1 and 3. In this case, even though the Fisher information can be computed from the definition, it will not have the properties it is typically assumed to have.
In terms of likelihood
Because the likelihood of $\theta$ given $X$ is always proportional to the probability $f(X;\theta)$, their logarithms necessarily differ by a constant that is independent of $\theta$, and the derivatives of these logarithms with respect to $\theta$ are necessarily equal. Thus one can substitute the log-likelihood $l(\theta; X)$ for $\log f(X;\theta)$ in the definitions of Fisher information.
Samples of any size
The value X can represent a single sample drawn from a single distribution or can represent a collection of samples drawn from a collection of distributions. If there are n samples and the corresponding n distributions are statistically independent then the Fisher information will necessarily be the sum of the single-sample Fisher information values, one for each single sample from its distribution. In particular, if the n distributions are independent and identically distributed then the Fisher information will necessarily be n times the Fisher information of a single sample from the common distribution. Stated in other words, the Fisher Information of i.i.d. observations of a sample of size n from a population is equal to the product of n and the Fisher Information of a single observation from the same population.
Informal derivation of the Cramér–Rao bound
The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of $\theta$. The following informal derivation of the Cramér–Rao bound illustrates one use of the Fisher information.
Informally, we begin by considering an unbiased estimator $\hat\theta(X)$. Mathematically, "unbiased" means that
$$\operatorname{E}\left[\left.\hat\theta(X) - \theta\,\right|\,\theta\right] = \int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = 0 \quad\text{regardless of the value of } \theta.$$
This expression is zero independent of $\theta$, so its partial derivative with respect to $\theta$ must also be zero. By the product rule, this partial derivative is also equal to
$$0 = \frac{\partial}{\partial\theta} \int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = \int \left(\hat\theta(x) - \theta\right) \frac{\partial f}{\partial\theta}\,dx - \int f\,dx.$$
For each $\theta$, the likelihood function is a probability density function, and therefore $\int f\,dx = 1$. By using the chain rule on the partial derivative of $\log f$ and then dividing and multiplying by $f(x;\theta)$, one can verify that
$$\frac{\partial f}{\partial\theta} = f\,\frac{\partial \log f}{\partial\theta}.$$
Using these two facts in the above, we get
$$\int \left(\hat\theta - \theta\right) f\,\frac{\partial \log f}{\partial\theta}\,dx = 1.$$
Factoring the integrand gives
$$\int \left(\left(\hat\theta - \theta\right)\sqrt{f}\right) \left(\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right) dx = 1.$$
Squaring the expression in the integral, the Cauchy–Schwarz inequality yields
$$1 = \left(\int \left[\left(\hat\theta - \theta\right)\sqrt{f}\right] \cdot \left[\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right] dx\right)^{2} \leq \left[\int \left(\hat\theta - \theta\right)^{2} f\,dx\right] \cdot \left[\int \left(\frac{\partial \log f}{\partial\theta}\right)^{2} f\,dx\right].$$
The second bracketed factor is defined to be the Fisher information, while the first bracketed factor is the expected mean-squared error of the estimator $\hat\theta$. By rearranging, the inequality tells us that
$$\operatorname{Var}\left(\hat\theta\right) \geq \frac{1}{\mathcal{I}(\theta)}.$$
In other words, the precision to which we can estimate $\theta$ is fundamentally limited by the Fisher information of the likelihood function.
Alternatively, the same conclusion can be obtained directly from the Cauchy–Schwarz inequality for random variables, $\left|\operatorname{Cov}(A,B)\right|^{2} \leq \operatorname{Var}(A)\,\operatorname{Var}(B)$, applied to the random variables $\hat\theta(X)$ and $\frac{\partial}{\partial\theta}\log f(X;\theta)$, and observing that for unbiased estimators we have
$$\operatorname{Cov}\left(\hat\theta(X),\, \frac{\partial}{\partial\theta}\log f(X;\theta)\right) = 1.$$
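As a quick numerical sanity check of the bound (a sketch; sample sizes and the seed are arbitrary): for n Bernoulli trials with success probability θ, whose single-trial Fisher information is derived in the next section to be 1/(θ(1−θ)), the sample mean is an unbiased estimator and in this case attains the bound θ(1−θ)/n exactly.
import random
random.seed(0)
theta, n, trials = 0.3, 100, 20_000
# The sample mean is the unbiased maximum-likelihood estimator here.
estimates = []
for _ in range(trials):
    x = [1 if random.random() < theta else 0 for _ in range(n)]
    estimates.append(sum(x) / n)
mean = sum(estimates) / trials
var = sum((e - mean) ** 2 for e in estimates) / trials
cramer_rao = theta * (1 - theta) / n  # 1 / (n * I(theta))
print(f"empirical variance of the estimator: {var:.6f}")
print(f"Cramer-Rao lower bound:              {cramer_rao:.6f}")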
Examples
Single-parameter Bernoulli experiment
A Bernoulli trial is a random variable with two possible outcomes, 0 and 1, with 1 having a probability of $\theta$. The outcome can be thought of as determined by the toss of a biased coin, with the probability of heads (1) being $\theta$ and the probability of tails (0) being $1 - \theta$.
Let X be a Bernoulli trial of one sample from the distribution. The Fisher information contained in X may be calculated to be:
$$\mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^{2}}{\partial\theta^{2}} \log\left(\theta^{X} (1-\theta)^{1-X}\right)\,\right|\,\theta\right] = \operatorname{E}\left[\left.\frac{X}{\theta^{2}} + \frac{1-X}{(1-\theta)^{2}}\,\right|\,\theta\right] = \frac{1}{\theta(1-\theta)}.$$
Because Fisher information is additive, the Fisher information contained in n independent Bernoulli trials is therefore
$$\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)}.$$
If $x_i$ is one of the $2^{n}$ possible outcome sequences of $n$ independent Bernoulli trials and $x_{ij}$ is the $j$th outcome of the $i$th sequence, then the probability of $x_i$ is given by:
$$p(x_i, \theta) = \theta^{\sum_j x_{ij}} (1-\theta)^{\,n - \sum_j x_{ij}}.$$
The mean of the $i$th sequence is
$$\mu_i = \frac{1}{n} \sum_{j} x_{ij}.$$
The expected value of the mean of a sequence is:
$$\operatorname{E}(\mu) = \sum_{i} p(x_i, \theta)\,\mu_i = \theta,$$
where the sum is over all possible trial outcomes. The expected value of the square of the means is:
$$\operatorname{E}(\mu^{2}) = \sum_{i} p(x_i, \theta)\,\mu_i^{2} = \frac{\theta(1-\theta) + n\theta^{2}}{n},$$
so the variance in the value of the mean is:
$$\operatorname{E}(\mu^{2}) - \operatorname{E}(\mu)^{2} = \frac{\theta(1-\theta)}{n}.$$
It is seen that the Fisher information is the reciprocal of the variance of the mean number of successes in n Bernoulli trials. This is generally true. In this case, the Cramér–Rao bound is an equality.
Estimate θ from X ∼ Bern (√θ)
As another toy example consider a random variable $X$ with possible outcomes 0 and 1, with probabilities $1 - \sqrt{\theta}$ and $\sqrt{\theta}$, respectively, for some $\theta \in [0, 1]$. Our goal is estimating $\theta$ from observations of $X$.
The Fisher information reads in this case
$$\mathcal{I}(\theta) = \frac{1}{4\,\theta^{3/2}\left(1 - \sqrt{\theta}\right)}.$$
This expression can also be derived directly from the reparametrization formula given below. More generally, for any sufficiently regular function $g$ such that $0 < g(\theta) < 1$, the Fisher information to retrieve $\theta$ from $X \sim \operatorname{Bern}(g(\theta))$ is similarly computed to be
$$\mathcal{I}(\theta) = \frac{g'(\theta)^{2}}{g(\theta)\left(1 - g(\theta)\right)}.$$
Matrix form
When there are N parameters, so that $\theta$ is an $N \times 1$ vector $\theta = [\theta_1, \ldots, \theta_N]^{\mathsf{T}}$, the Fisher information takes the form of an $N \times N$ matrix. This matrix is called the Fisher information matrix (FIM) and has typical element
$$[\mathcal{I}(\theta)]_{i,j} = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta_i} \log f(X;\theta)\right) \left(\frac{\partial}{\partial\theta_j} \log f(X;\theta)\right)\,\right|\,\theta\right].$$
The FIM is a positive semidefinite matrix. If it is positive definite, then it defines a Riemannian metric on the N-dimensional parameter space. The topic information geometry uses this to connect Fisher information to differential geometry, and in that context, this metric is known as the Fisher information metric.
Under certain regularity conditions, the Fisher information matrix may also be written as
$$[\mathcal{I}(\theta)]_{i,j} = -\operatorname{E}\left[\left.\frac{\partial^{2}}{\partial\theta_i\,\partial\theta_j} \log f(X;\theta)\,\right|\,\theta\right].$$
The result is interesting in several ways:
It can be derived as the Hessian of the relative entropy.
It can be used as a Riemannian metric for defining Fisher-Rao geometry when it is positive-definite.
It can be understood as a metric induced from the Euclidean metric, after appropriate change of variable.
In its complex-valued form, it is the Fubini–Study metric.
It is the key part of the proof of Wilks' theorem, which allows confidence region estimates for maximum likelihood estimation (for those conditions for which it applies) without needing the Likelihood Principle.
In cases where the analytical calculations of the FIM above are difficult, it is possible to form an average of easy Monte Carlo estimates of the Hessian of the negative log-likelihood function as an estimate of the FIM. The estimates may be based on values of the negative log-likelihood function or the gradient of the negative log-likelihood function; no analytical calculation of the Hessian of the negative log-likelihood function is needed.
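As a minimal sketch of this idea for a scalar parameter, the snippet below averages the per-sample Hessian of the negative log-likelihood for a Bernoulli model, so the estimate can be checked against the exact value 1/(θ(1−θ)). For brevity the per-sample Hessian is written analytically; in the setting described above it would instead come from finite differences of the negative log-likelihood or its gradient. Sample counts and the seed are arbitrary.
import random
random.seed(1)
theta, n_samples = 0.3, 200_000
def neg_loglik_hessian(x, t):
    # -d^2/dt^2 log f(x; t) for the Bernoulli pmf f(x; t) = t^x (1 - t)^(1 - x)
    return x / t**2 + (1 - x) / (1 - t)**2
draws = (1 if random.random() < theta else 0 for _ in range(n_samples))
fim_estimate = sum(neg_loglik_hessian(x, theta) for x in draws) / n_samples
print(f"Monte Carlo estimate:        {fim_estimate:.4f}")
print(f"exact 1/(theta*(1 - theta)): {1 / (theta * (1 - theta)):.4f}")  # 4.7619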
Information orthogonal parameters
We say that two parameter component vectors θ1 and θ2 are information orthogonal if the Fisher information matrix is block diagonal, with these components in separate blocks. Orthogonal parameters are easy to deal with in the sense that their maximum likelihood estimates are asymptotically uncorrelated. When considering how to analyse a statistical model, the modeller is advised to invest some time searching for an orthogonal parametrization of the model, in particular when the parameter of interest is one-dimensional, but the nuisance parameter can have any dimension.
Singular statistical model
If the Fisher information matrix is positive definite for all $\theta$, then the corresponding statistical model is said to be regular; otherwise, the statistical model is said to be singular. Examples of singular statistical models include the following: normal mixtures, binomial mixtures, multinomial mixtures, Bayesian networks, neural networks, radial basis functions, hidden Markov models, stochastic context-free grammars, reduced rank regressions, Boltzmann machines.
In machine learning, if a statistical model is devised so that it extracts hidden structure from a random phenomenon, then it naturally becomes singular.
Multivariate normal distribution
The FIM for an N-variate multivariate normal distribution, $X \sim N(\mu(\theta), \Sigma(\theta))$, has a special form. Let the K-dimensional vector of parameters be $\theta = [\theta_1, \ldots, \theta_K]^{\mathsf{T}}$ and the vector of random normal variables be $X = [X_1, \ldots, X_N]^{\mathsf{T}}$. Assume that the mean values of these random variables are $\mu(\theta) = [\mu_1(\theta), \ldots, \mu_N(\theta)]^{\mathsf{T}}$, and let $\Sigma(\theta)$ be the covariance matrix. Then, for $1 \le m, n \le K$, the (m, n) entry of the FIM is:
$$\mathcal{I}_{m,n} = \frac{\partial \mu^{\mathsf{T}}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n} + \frac{1}{2} \operatorname{tr}\left(\Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n}\right),$$
where $(\cdot)^{\mathsf{T}}$ denotes the transpose of a vector, $\operatorname{tr}(\cdot)$ denotes the trace of a square matrix, and:
$$\frac{\partial \mu}{\partial \theta_m} = \left[\frac{\partial \mu_1}{\partial \theta_m}, \ldots, \frac{\partial \mu_N}{\partial \theta_m}\right]^{\mathsf{T}}, \qquad \frac{\partial \Sigma}{\partial \theta_m} = \left(\frac{\partial \Sigma_{i,j}}{\partial \theta_m}\right)_{i,j}.$$
Note that a special, but very common, case is the one where $\Sigma(\theta) = \Sigma$, a constant. Then
$$\mathcal{I}_{m,n} = \frac{\partial \mu^{\mathsf{T}}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n}.$$
In this case the Fisher information matrix may be identified with the coefficient matrix of the normal equations of least squares estimation theory.
Another special case occurs when the mean and covariance depend on two different vector parameters, say, β and θ. This is especially popular in the analysis of spatial data, which often uses a linear model with correlated residuals. In this case the FIM is block diagonal,
$$\mathcal{I}(\beta, \theta) = \begin{pmatrix} \mathcal{I}(\beta) & 0 \\ 0 & \mathcal{I}(\theta) \end{pmatrix},$$
where
$$[\mathcal{I}(\beta)]_{m,n} = \frac{\partial \mu^{\mathsf{T}}}{\partial \beta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \beta_n}, \qquad [\mathcal{I}(\theta)]_{m,n} = \frac{1}{2} \operatorname{tr}\left(\Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n}\right).$$
Properties
Chain rule
Similar to the entropy or mutual information, the Fisher information also possesses a chain rule decomposition. In particular, if X and Y are jointly distributed random variables, it follows that:
$$\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_{Y \mid X}(\theta),$$
where $\mathcal{I}_{Y \mid X}(\theta) = \operatorname{E}_X\left[\mathcal{I}_{Y \mid X = x}(\theta)\right]$ and $\mathcal{I}_{Y \mid X = x}(\theta)$ is the Fisher information of Y relative to $\theta$ calculated with respect to the conditional density of Y given a specific value X = x.
As a special case, if the two random variables are independent, the information yielded by the two random variables is the sum of the information from each random variable separately:
$$\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta).$$
Consequently, the information in a random sample of n independent and identically distributed observations is n times the information in a sample of size 1.
f-divergence
Given a convex function $f: [0, \infty) \to (-\infty, \infty]$ that is finite for all $x > 0$, with $f(1) = 0$ and $f(0) = \lim_{t \to 0^{+}} f(t)$ (which could be infinite), it defines an f-divergence $D_f$. If $f$ is strictly convex at 1, then locally at $\theta$ the Fisher information matrix is a metric, in the sense that
$$D_f\left(P_{\theta + \delta} \,\|\, P_\theta\right) = \frac{f''(1)}{2}\,\delta^{\mathsf{T}} \mathcal{I}(\theta)\,\delta + o\left(\|\delta\|^{2}\right),$$
where $P_\theta$ is the distribution parametrized by $\theta$; that is, the distribution with pdf $f(x; \theta)$.
In this form, it is clear that the Fisher information matrix is a Riemannian metric, and varies correctly under a change of variables. (see section on Reparameterization.)
Sufficient statistic
The information provided by a sufficient statistic is the same as that of the sample X. This may be seen by using Neyman's factorization criterion for a sufficient statistic. If T(X) is sufficient for θ, then
$$f(X; \theta) = g(T(X), \theta)\, h(X)$$
for some functions g and h. The independence of h(X) from θ implies
$$\frac{\partial}{\partial\theta} \log f(X; \theta) = \frac{\partial}{\partial\theta} \log g(T(X), \theta),$$
and the equality of information then follows from the definition of Fisher information. More generally, if $T = t(X)$ is a statistic, then
$$\mathcal{I}_T(\theta) \leq \mathcal{I}_X(\theta)$$
with equality if and only if T is a sufficient statistic.
Reparameterization
The Fisher information depends on the parametrization of the problem. If θ and η are two scalar parametrizations of an estimation problem, and θ is a continuously differentiable function of η, then
$$\mathcal{I}_\eta(\eta) = \mathcal{I}_\theta(\theta(\eta)) \left(\frac{d\theta}{d\eta}\right)^{2},$$
where $\mathcal{I}_\eta$ and $\mathcal{I}_\theta$ are the Fisher information measures of η and θ, respectively.
In the vector case, suppose $\boldsymbol{\theta}$ and $\boldsymbol{\eta}$ are k-vectors which parametrize an estimation problem, and suppose that $\boldsymbol{\theta}$ is a continuously differentiable function of $\boldsymbol{\eta}$; then,
$$\mathcal{I}_{\boldsymbol{\eta}}(\boldsymbol{\eta}) = \boldsymbol{J}^{\mathsf{T}}\, \mathcal{I}_{\boldsymbol{\theta}}(\boldsymbol{\theta}(\boldsymbol{\eta}))\, \boldsymbol{J},$$
where the (i, j)th element of the k × k Jacobian matrix $\boldsymbol{J}$ is defined by
$$J_{ij} = \frac{\partial \theta_i}{\partial \eta_j},$$
and where $\boldsymbol{J}^{\mathsf{T}}$ is the matrix transpose of $\boldsymbol{J}$.
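As a quick worked instance of the scalar formula (a standard computation, included here for illustration), take a Bernoulli parameter $\theta$ and its log-odds reparametrization:
$$\eta = \log\frac{\theta}{1-\theta}, \qquad \frac{d\theta}{d\eta} = \theta(1-\theta), \qquad \mathcal{I}_\eta(\eta) = \frac{1}{\theta(1-\theta)}\big(\theta(1-\theta)\big)^{2} = \theta(1-\theta).$$
The information, which diverges as $\theta$ approaches 0 or 1 in the original parametrization, stays bounded (by 1/4) in the log-odds parametrization.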
In information geometry, this is seen as a change of coordinates on a Riemannian manifold, and the intrinsic properties of curvature are unchanged under different parametrizations. In general, the Fisher information matrix provides a Riemannian metric (more precisely, the Fisher–Rao metric) for the manifold of thermodynamic states, and can be used as an information-geometric complexity measure for a classification of phase transitions, e.g., the scalar curvature of the thermodynamic metric tensor diverges at (and only at) a phase transition point.
In the thermodynamic context, the Fisher information matrix is directly related to the rate of change in the corresponding order parameters. In particular, such relations identify second-order phase transitions via divergences of individual elements of the Fisher information matrix.
Isoperimetric inequality
The Fisher information matrix plays a role in an inequality like the isoperimetric inequality. Of all probability distributions with a given entropy, the one whose Fisher information matrix has the smallest trace is the Gaussian distribution. This is like how, of all bounded sets with a given volume, the sphere has the smallest surface area.
The proof involves taking a multivariate random variable $X$ with density function $f$ and adding a location parameter to form a family of densities $\{f(x - \theta) \mid \theta \in \mathbb{R}^{n}\}$. Then, by analogy with the Minkowski–Steiner formula, the "surface area" of $X$ is defined to be
$$S(X) = \lim_{\epsilon \to 0} \frac{e^{H(X + \epsilon Z)} - e^{H(X)}}{\epsilon},$$
where $Z$ is a Gaussian variable with identity covariance matrix. The name "surface area" is apt because the entropy power $e^{H(X)}$ is the volume of the "effective support set," so $S(X)$ is the "derivative" of the volume of the effective support set, much like the Minkowski–Steiner formula. The remainder of the proof uses the entropy power inequality, which is like the Brunn–Minkowski inequality. The trace of the Fisher information matrix is then found to be a factor of $S(X)$.
Applications
Optimal design of experiments
Fisher information is widely used in optimal experimental design. Because of the reciprocity of estimator-variance and Fisher information, minimizing the variance corresponds to maximizing the information.
When the linear (or linearized) statistical model has several parameters, the mean of the parameter estimator is a vector and its variance is a matrix. The inverse of the variance matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized.
Traditionally, statisticians have evaluated estimators and designs by considering some summary statistic of the covariance matrix (of an unbiased estimator), usually with positive real values (like the determinant or matrix trace). Working with positive real numbers brings several advantages: If the estimator of a single parameter has a positive variance, then the variance and the Fisher information are both positive real numbers; hence they are members of the convex cone of nonnegative real numbers (whose nonzero members have reciprocals in this same cone).
For several parameters, the covariance matrices and information matrices are elements of the convex cone of nonnegative-definite symmetric matrices in a partially ordered vector space, under the Loewner (Löwner) order. This cone is closed under matrix addition and inversion, as well as under the multiplication of positive real numbers and matrices. An exposition of matrix theory and Loewner order appears in Pukelsheim.
The traditional optimality criteria are the information matrix's invariants, in the sense of invariant theory; algebraically, the traditional optimality criteria are functionals of the eigenvalues of the (Fisher) information matrix (see optimal design).
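One such criterion, D-optimality, maximizes the determinant of the information matrix. A minimal sketch for a straight-line model y = b0 + b1·x with unit noise variance, where the information matrix is XᵀX; the two candidate designs are invented for illustration.
# D-optimality: compare designs by det(X^T X) for the model y = b0 + b1*x.
def info_matrix(xs):
    # Rows of X are (1, x); accumulate the entries of X^T X directly.
    n, sx = len(xs), sum(xs)
    sxx = sum(x * x for x in xs)
    return [[n, sx], [sx, sxx]]
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]
clustered = [0.4, 0.5, 0.5, 0.6]  # points bunched mid-range
spread = [-1.0, -1.0, 1.0, 1.0]   # points pushed to the extremes
for name, xs in (("clustered", clustered), ("spread", spread)):
    print(name, det2(info_matrix(xs)))  # 0.08 vs 16.0
# The spread design wins: a larger determinant means a smaller
# generalized variance for the fitted coefficients.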
Jeffreys prior in Bayesian statistics
In Bayesian statistics, the Fisher information is used to calculate the Jeffreys prior, which is a standard, non-informative prior for continuous distribution parameters.
Computational neuroscience
The Fisher information has been used to find bounds on the accuracy of neural codes. In that case, X is typically the joint responses of many neurons representing a low dimensional variable θ (such as a stimulus parameter). In particular the role of correlations in the noise of the neural responses has been studied.
Epidemiology
Fisher information was used to study how informative different data sources are for estimation of the reproduction number of SARS-CoV-2.
Derivation of physical laws
Fisher information plays a central role in a controversial principle put forward by Frieden as the basis of physical laws, a claim that has been disputed.
Machine learning
The Fisher information is used in machine learning techniques such as elastic weight consolidation, which reduces catastrophic forgetting in artificial neural networks.
Fisher information can be used as an alternative to the Hessian of the loss function in second-order gradient descent network training.
Color discrimination
Using a Fisher information metric, da Fonseca et al. investigated the degree to which MacAdam ellipses (color discrimination ellipses) can be derived from the response functions of the retinal photoreceptors.
Relation to relative entropy
Fisher information is related to relative entropy. The relative entropy, or Kullback–Leibler divergence, between two distributions $p$ and $q$ can be written as
$$KL(p : q) = \int p(x) \log\frac{p(x)}{q(x)}\,dx.$$
Now, consider a family of probability distributions $f(x; \theta)$ parametrized by $\theta$. Then the Kullback–Leibler divergence between two distributions in the family can be written as
$$D(\theta, \theta') = KL\big(f(\cdot; \theta) : f(\cdot; \theta')\big) = \int f(x; \theta) \log\frac{f(x; \theta)}{f(x; \theta')}\,dx.$$
If $\theta$ is fixed, then the relative entropy between two distributions of the same family is minimized at $\theta' = \theta$. For $\theta'$ close to $\theta$, one may expand the previous expression in a series up to second order:
$$D(\theta, \theta') \approx \frac{1}{2} (\theta' - \theta)^{\mathsf{T}} \left[\frac{\partial^{2}}{\partial \theta_i'\, \partial \theta_j'} D(\theta, \theta')\right]_{\theta' = \theta} (\theta' - \theta).$$
But the second order derivative can be written as
$$\left[\frac{\partial^{2}}{\partial \theta_i'\, \partial \theta_j'} D(\theta, \theta')\right]_{\theta' = \theta} = [\mathcal{I}(\theta)]_{i,j}.$$
Thus the Fisher information represents the curvature of the relative entropy of a conditional distribution with respect to its parameters.
History
The Fisher information was discussed by several early statisticians, notably F. Y. Edgeworth. For example, Savage says: "In it [Fisher information], he [Fisher] was to some extent anticipated (Edgeworth 1908–9 esp. 502, 507–8, 662, 677–8, 82–5 and references he [Edgeworth] cites including Pearson and Filon 1898 [. . .])." There are a number of early historical sources and a number of reviews of this early work.
See also
Efficiency (statistics)
Observed information
Fisher information metric
Formation matrix
Information geometry
Jeffreys prior
Cramér–Rao bound
Minimum Fisher information
Quantum Fisher information
Other measures employed in information theory:
Entropy (information theory)
Kullback–Leibler divergence
Self-information
Notes
References
Estimation theory
Information theory
Design of experiments
Ronald Fisher | Fisher information | [
"Mathematics",
"Technology",
"Engineering"
] | 4,316 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
598,999 | https://en.wikipedia.org/wiki/Stipe%20%28botany%29 | In botany, a stipe is a stalk that supports some other structure. The precise meaning is different depending on which taxonomic group is being described.
In the case of ferns, the stipe is only the petiole from the rootstock to the beginning of the leaf tissue, or lamina. The continuation of the structure within the lamina is then termed a rachis.
In flowering plants, the term is often used in reference to a stalk that sometimes supports a flower's ovary. In orchids, the stipe or caudicle is the stalk-like support of the pollinia. It is a non-viscid band or strap connecting the pollinia with the viscidium (the viscid part of the rostellum or beak).
A stipe is also a structure found in organisms that are studied by botanists but that are no longer classified as plants. It may be the stem-like part of the thallus of a mushroom or a seaweed, and is particularly common among brown algae such as kelp. The stipe of a kelp often contains a central region of cells that, like the phloem of vascular plants, serves to transport nutrients within the alga.
See also
Rachis
Stipe (mycology), the stalk supporting the fruiting body of some fungi.
References
Plant anatomy
Plant morphology
Orchid morphology | Stipe (botany) | [
"Biology"
] | 281 | [
"Plant morphology",
"Plants"
] |
599,001 | https://en.wikipedia.org/wiki/Holdfast%20%28biology%29 | A holdfast is a root-like structure that anchors aquatic sessile organisms, such as seaweed, other sessile algae, stalked crinoids, benthic cnidarians, and sponges, to the substrate.
Holdfasts vary in shape and form depending on both the species and the substrate type. The holdfasts of organisms that live in muddy substrates often have complex tangles of root-like growths. These projections are called haptera and similar structures of the same name are found on lichens. The holdfasts of organisms that live in sandy substrates are bulb-like and very flexible, such as those of sea pens, thus permitting the organism to pull the entire body into the substrate when the holdfast is contracted. The holdfasts of organisms that live on smooth surfaces (such as the surface of a boulder) have flattened bases which adhere to the surface. The organism derives no nutrition from this intimate contact with the substrate, as the process of liberating nutrients from the substrate requires enzymatically eroding the substrate away, thereby increasing the risk of the organism falling off the substrate.
The claw-like holdfasts of kelps and other algae differ from the roots of land plants, in that they have no absorbent function, instead serving only as an anchor.
References
Plant morphology
"Biology"
] | 279 | [
"Plant morphology",
"Algae stubs",
"Algae",
"Plants"
] |
599,009 | https://en.wikipedia.org/wiki/Brass%20knuckles | Brass knuckles (also referred to as brass knucks, knuckledusters, iron fist and paperweight, among other names) are a melee weapon used primarily in hand-to-hand combat. They are fitted and designed to be worn around the knuckles of the human hand. Despite their name, they are often made from other metals, plastics or carbon fibers and not necessarily brass.
Designed to preserve and concentrate a punch's force by directing it toward a harder and smaller contact area, they result in increased tissue disruption, including an increased likelihood of fracturing the intended target's bones on impact. The extended and rounded palm grip also spreads the counter-force across the attacker's palm, which would otherwise have been absorbed primarily by the attacker's fingers. This reduces the likelihood of damage to the attacker's fingers.
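The concentration effect is simple pressure arithmetic, P = F/A. A back-of-the-envelope sketch follows; every number in it is invented for illustration:
# Same force over a smaller, rigid contact area -> higher peak pressure.
force_n = 2000.0            # illustrative peak punch force, newtons
areas_m2 = {
    "bare fist": 25e-4,      # ~25 cm^2 of knuckle and finger contact
    "brass knuckles": 5e-4,  # ~5 cm^2 of metal ridge contact
}
for label, area in areas_m2.items():
    print(f"{label}: {force_n / area / 1e6:.1f} MPa")
# One fifth the area gives five times the pressure at the point of impact.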
The weapon has been controversial for its easy concealability and is illegal to own and use in a number of countries.
History and variations
Cast iron, brass, lead, and wood knuckles were made in the United States during the American Civil War (1861–1865). Soldiers would often buy cast iron or brass knuckles. If they could not buy them, they would carve their own from wood, or cast them at camp by melting lead bullets and using a mold in the dirt.
Some brass knuckles have rounded rings, which increase the impact of blows from moderate to severe damage. Other instruments (not generally considered to be "brass knuckles" or "metal knuckles" per se) may have spikes, sharp points and cutting edges. These devices come in many variations and are called by a variety of names, including "knuckle knives."
By the late 19th century, knuckledusters were incorporated into various kinds of pistols such as the Apache revolver used by criminals in France in the late 19th to early 20th centuries. During World War I the US Army issued two different knuckle knives, the US model 1917 and US model 1918 Mark I trench knives. Knuckles and knuckle knives were also being made in England at the time and purchased privately by British soldiers. It was advised not to polish brass knuckles as allowing the brass to darken would act as camouflage on the battlefield.
By World War II, knuckles and knuckle knives were quite popular with both American and British soldiers. The Model 1918 trench knives were reissued to American paratroopers. A notable knuckle knife still in use is the Cuchillo de Paracaidista, issued to Argentinian paratroopers. Current-issue models have an emergency blade in the crossguard.
Legality and distribution
Brass knuckles are illegal in several countries, including: Hong Kong, Austria, Belgium, Canada, Denmark, Bosnia, Croatia, Estonia, Cyprus, Finland, France, Germany, Greece, Hungary, Israel, Ireland, Malaysia, the Netherlands, Norway, Poland, Portugal, Russia, Spain, Turkey, Sweden, Singapore, Taiwan, Ukraine, the United Arab Emirates and the United Kingdom.
Import of brass knuckles into Australia is illegal unless a government permit is obtained; permits are available for only limited purposes, such as police and government use, or use in film productions. They are prohibited weapons in the state of New South Wales.
In Brazil, brass knuckles are legal and freely sold. They are called soco inglês, which means 'English punch', or soqueira, which means 'puncher'.
In Canada, brass knuckles (Canadian French poing américain, which literally means 'American fist'), or any similar devices made of metal, are listed as prohibited weapons; possession of such a weapon is a criminal offence under the Criminal Code. Plastic knuckles have been determined to be legal in Canada.
In France, brass knuckles are illegal. They can be bought as a "collectable" (provided one is over 18), but it is forbidden to carry or use one, whatever the circumstance, including self-defense. The French term is coup de poing américain, which literally means 'American punch'.
In Russia, brass knuckles were illegal to purchase or own during Imperial times and are still forbidden according to Article 6 of the 1996 Federal Law on Weapons. They are called kastet (from French casse-tête, literally 'head breaker').
In Serbia, brass knuckles are legal to purchase and own (for people over 16 years old) but are not legal to carry in public. They are called bokser, literally 'boxer'.
In Taiwan, according to the Law of the Republic of China, possession and sales of brass knuckles are illegal. Under the regulation, brass knuckles are considered weapons. Without the permission of the central regulatory agency, it is against the law to manufacture, sell, transport, transfer, rent, or have them in any collection or on display.
In China, brass knuckles are completely legal under the law of the People's Republic of China. According to Article 32 of the "Public Security Administration Punishment Law of the People's Republic of China", citizens can legally own them for self-defense, but they are prohibited items in certain places. For example, brass knuckles are not allowed to be carried when travelling on the subway, buses, trains, or other public transport. In ancient China, brass knuckles were popular, and were used regularly as a concealed weapon or self-defense tool.
In the United States, brass knuckles are not prohibited at the federal level, but various state, county and city laws, and the District of Columbia, regulate or prohibit their purchase and/or possession. Brass knuckles are prohibited in 21 states. Some state laws require purchasers to be 18 or older. Most states have statutes regulating the carrying of weapons, and some specifically prohibit brass knuckles or "metal knuckles". Brass knuckles can readily be purchased online or, where legal, at flea markets, swap meets, gun shows, and at specialty stores. Some companies manufacture belt buckles or novelty paperweights that function as brass knuckles. Brass knuckles made of plastic, rather than metal, have been marketed as "undetectable by airport metal detectors". Some states that ban metal knuckles also ban plastic knuckles. For example, New York's criminal statutes list both "metal knuckles" and "plastic knuckles" as prohibited weapons, but do not define either.
See also
Bagh nakh
Cestus (boxing)
Gauntlet (glove)
Mark I trench knife
Tekkō
Vajra-mushti
Weighted-knuckle glove
References
Brass
Metallic objects | Brass knuckles | [
"Physics"
] | 1,290 | [
"Metallic objects",
"Physical objects",
"Matter"
] |
599,040 | https://en.wikipedia.org/wiki/Propyl%20group | In organic chemistry, a propyl group is a three-carbon alkyl substituent with chemical formula −CH2CH2CH3 (−C3H7) for the linear form. This substituent form is obtained by removing one hydrogen atom attached to the terminal carbon of propane. A propyl substituent is often represented in organic chemistry with the symbol Pr (not to be confused with the element praseodymium).
An isomeric form of propyl is obtained by moving the point of attachment from a terminal carbon atom to the central carbon atom, named isopropyl or 1-methylethyl. To maintain four substituents on each carbon atom, one hydrogen atom has to be moved from the middle carbon atom to the carbon atom which served as attachment point in the n-propyl variant, written as (CH3)2CH−.
Linear propyl is sometimes termed normal and hence written with a prefix n- (i.e., n-propyl), as the absence of the prefix n- does not indicate which attachment point is chosen, i.e. absence of prefix does not automatically exclude the possibility of it being the branched version (i.e. i-propyl or isopropyl).
In addition, there is a third, cyclic, form called cyclopropyl, or c-propyl. It is not isomeric with the other two forms, having a different chemical formula (−C3H5 vs −C3H7), not just a different connectivity of the atoms.
Examples
n-Propyl acetate is an ester which has the n-propyl group attached to the oxygen atom of the acetate group.
Other examples
Isopropyl alcohol
Isopropylamine
References
Alkyl groups | Propyl group | [
"Chemistry"
] | 345 | [
"Substituents",
"Alkyl groups"
] |
599,215 | https://en.wikipedia.org/wiki/Synchrotron | A synchrotron is a particular type of cyclic particle accelerator, descended from the cyclotron, in which the accelerating particle beam travels around a fixed closed-loop path. The strength of the magnetic field which bends the particle beam into its closed path increases with time during the accelerating process, being synchronized to the increasing kinetic energy of the particles.
The synchrotron is one of the first accelerator concepts to enable the construction of large-scale facilities, since bending, beam focusing and acceleration can be separated into different components. The most powerful modern particle accelerators use versions of the synchrotron design. The largest synchrotron-type accelerator, also the largest particle accelerator in the world, is the Large Hadron Collider (LHC) near Geneva, Switzerland, built in 2008 by the European Organization for Nuclear Research (CERN). It can accelerate beams of protons to an energy of 7 teraelectronvolts (TeV, i.e. 7×10^12 eV).
The synchrotron principle was invented by Vladimir Veksler in 1944. Edwin McMillan constructed the first electron synchrotron in 1945, arriving at the idea independently, having missed Veksler's publication (which was only available in a Soviet journal, although in English). The first proton synchrotron was designed by Sir Marcus Oliphant and built in 1952.
Types
Large synchrotrons usually have a linear accelerator (linac) to give the particles an initial acceleration, and a lower energy synchrotron which is sometimes called a booster to increase the energy of the particles before they are injected into the high energy synchrotron ring. Several specialized types of synchrotron machines are used today:
A collider is a type in which, instead of the particles striking a stationary target, particles traveling in two countercirculating rings collide head-on, making higher-energy collisions possible.
A storage ring is a special type of synchrotron in which the kinetic energy of the particles is kept constant.
A synchrotron light source is a combination of different electron accelerator types, including a storage ring in which the desired electromagnetic radiation is generated. This radiation is then used in experimental stations located on different beamlines. Synchrotron light sources in their entirety are sometimes called "synchrotrons", although this is technically incorrect.
Principle of operation
The synchrotron evolved from the cyclotron, the first cyclic particle accelerator. While a classical cyclotron uses both a constant guiding magnetic field and a constant-frequency electromagnetic field (and is working in classical approximation), its successor, the isochronous cyclotron, works by local variations of the guiding magnetic field, adapting to the increasing relativistic mass of particles during acceleration.
In a synchrotron, this adaptation is done by variation of the magnetic field strength in time, rather than in space. For particles that are not close to the speed of light, the frequency of the applied electromagnetic field may also change to follow their non-constant circulation time. By increasing these parameters accordingly as the particles gain energy, their circulation path can be held constant as they are accelerated. This allows the vacuum chamber for the particles to be a large thin torus, rather than a disk as in previous, compact accelerator designs. Also, the thin profile of the vacuum chamber allowed for a more efficient use of magnetic fields than in a cyclotron, enabling the cost-effective construction of larger synchrotrons.
While the first synchrotrons and storage rings like the Cosmotron and ADA strictly used the toroid shape, the strong focusing principle independently discovered by Ernest Courant et al. and Nicholas Christofilos allowed the complete separation of the accelerator into components with specialized functions along the particle path, shaping the path into a round-cornered polygon. Some important components are given by radio frequency cavities for direct acceleration, dipole magnets (bending magnets) for deflection of particles (to close the path), and quadrupole / sextupole magnets for beam focusing.
The combination of time-dependent guiding magnetic fields and the strong focusing principle enabled the design and operation of modern large-scale accelerator facilities like colliders and synchrotron light sources. The straight sections along the closed path in such facilities are not only required for radio frequency cavities, but also for particle detectors (in colliders) and photon generation devices such as wigglers and undulators (in third generation synchrotron light sources).
The maximum energy that a cyclic accelerator can impart is typically limited by the maximum strength of the magnetic fields and the minimum radius (maximum curvature) of the particle path. Thus one method for increasing the energy limit is to use superconducting magnets, these not being limited by magnetic saturation. Electron/positron accelerators may also be limited by the emission of synchrotron radiation, resulting in a partial loss of the particle beam's kinetic energy. The limiting beam energy is reached when the energy lost to the lateral acceleration required to maintain the beam path in a circle equals the energy added each cycle.
More powerful accelerators are built by using large radius paths and by using more numerous and more powerful microwave cavities. Lighter particles (such as electrons) lose a larger fraction of their energy when deflected. Practically speaking, the energy of electron/positron accelerators is limited by this radiation loss, while this does not play a significant role in the dynamics of proton or ion accelerators. The energy of such accelerators is limited strictly by the strength of magnets and by the cost.
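Two of these constraints can be put into numbers with textbook formulas: the bending radius required for a given momentum and field, ρ = p/(qB), and the energy an ultra-relativistic electron loses to synchrotron radiation per turn, U0[GeV] ≈ 8.85×10⁻⁵ · E[GeV]⁴ / ρ[m]. The sketch below applies them; the machine parameters are published round numbers, and the results should be read as order-of-magnitude illustrations.
# Bending radius rho = p / (q B) for an ultra-relativistic proton.
e = 1.602e-19        # elementary charge, C
c = 2.998e8          # speed of light, m/s
E_proton_eV = 7e12   # LHC design beam energy, 7 TeV
B_tesla = 8.33       # nominal LHC dipole field
p_si = E_proton_eV * e / c        # p ~ E/c when E >> rest energy
rho = p_si / (e * B_tesla)
print(f"required bending radius: {rho:.0f} m")  # about 2800 m
# Electron energy loss per turn, U0[GeV] ~ 8.85e-5 * E[GeV]^4 / rho[m]:
E_e_GeV, rho_m = 100.0, 3100.0    # roughly LEP-like numbers
U0 = 8.85e-5 * E_e_GeV**4 / rho_m
print(f"radiated per turn: {U0:.2f} GeV")  # about 2.9 GeV, which is why
                                           # electron rings hit this limit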
Injection procedure
Unlike in a cyclotron, synchrotrons are unable to accelerate particles from zero kinetic energy; one of the obvious reasons for this is that its closed particle path would be cut by a device that emits particles. Thus, schemes were developed to inject pre-accelerated particle beams into a synchrotron. The pre-acceleration can be realized by a chain of other accelerator structures like a linac, a microtron or another synchrotron; all of these in turn need to be fed by a particle source comprising a simple high voltage power supply, typically a Cockcroft-Walton generator.
Starting from an appropriate initial value determined by the injection energy, the field strength of the dipole magnets is then increased. If the high energy particles are emitted at the end of the acceleration procedure, e.g. to a target or to another accelerator, the field strength is again decreased to injection level, starting a new injection cycle. Depending on the method of magnet control used, the time interval for one cycle can vary substantially between different installations.
In large-scale facilities
One of the early large synchrotrons, now retired, is the Bevatron, constructed in 1950 at the Lawrence Berkeley Laboratory. The name of this proton accelerator comes from its power, in the range of 6.3 GeV (then called BeV for billion electron volts; the name predates the adoption of the SI prefix giga-). A number of transuranium elements, unseen in the natural world, were first created with this machine. This site is also the location of one of the first large bubble chambers used to examine the results of the atomic collisions produced here.
Another early large synchrotron is the Cosmotron built at Brookhaven National Laboratory which reached 3.3 GeV in 1953.
Among the few synchrotrons around the world, 16 are located in the United States. Many of them belong to national laboratories; few are located in universities.
As part of colliders
Until August 2008, the highest energy collider in the world was the Tevatron, at the Fermi National Accelerator Laboratory, in the United States. It accelerated protons and antiprotons to slightly less than 1 TeV of kinetic energy and collided them together. The Large Hadron Collider (LHC), which has been built at the European Laboratory for High Energy Physics (CERN), has roughly seven times this energy (so proton-proton collisions occur at roughly 14 TeV). It is housed in the 27 km tunnel which formerly housed the Large Electron Positron (LEP) collider, so it will maintain the claim as the largest scientific device ever built. The LHC will also accelerate heavy ions (such as lead) up to an energy of 1.15 PeV.
The largest device of this type seriously proposed was the Superconducting Super Collider (SSC), which was to be built in the United States. This design, like others, used superconducting magnets which allow more intense magnetic fields to be created without the limitations of core saturation. While construction was begun, the project was cancelled in 1994, citing excessive budget overruns — this was due to naïve cost estimation and economic management issues rather than any basic engineering flaws. It can also be argued that the end of the Cold War resulted in a change of scientific funding priorities that contributed to its ultimate cancellation. However, the tunnel built for its placement still remains, although empty.
While there is still potential for yet more powerful proton and heavy particle cyclic accelerators, it appears that the next step up in electron beam energy must avoid losses due to synchrotron radiation. This will require a return to the linear accelerator, but with devices significantly longer than those currently in use. There is at present a major effort to design and build the International Linear Collider (ILC), which will consist of two opposing linear accelerators, one for electrons and one for positrons. These will collide at a total center of mass energy of 0.5 TeV.
As part of synchrotron light sources
Synchrotron radiation also has a wide range of applications (see synchrotron light) and many 2nd and 3rd generation synchrotrons have been built especially to harness it. The largest of those 3rd generation synchrotron light sources are the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, the Advanced Photon Source (APS) near Chicago, United States, and SPring-8 in Japan, accelerating electrons up to 6, 7 and 8 GeV, respectively.
Synchrotrons which are useful for cutting edge research are large machines, costing tens or hundreds of millions of dollars to construct, and each beamline (there may be 20 to 50 at a large synchrotron) costs another two or three million dollars on average. These installations are mostly built by the science funding agencies of governments of developed countries, or by collaborations between several countries in a region, and operated as infrastructure facilities available to scientists from universities and research organisations throughout the country, region, or world. More compact models, however, have been developed, such as the Compact Light Source.
Applications
Life sciences: protein and large-molecule crystallography
LIGA based microfabrication
Drug discovery and research
X-ray lithography
X-ray microtomography
Analysing chemicals to determine their composition
Observing the reaction of living cells to drugs
Inorganic material crystallography and microanalysis
Fluorescence studies
Semiconductor material analysis and structural studies
Geological material analysis
Medical imaging
Particle therapy to treat some forms of cancer
Radiometry: calibration of detectors and radiometric standards
See also
List of synchrotron radiation facilities
Synchrotron radiation
Cyclotron radiation
Computed X-ray tomography
Energy amplifier
Superconducting radio frequency
Coherent diffraction imaging
References
External links
ESRF (European Synchrotron Radiation Facility)
National Synchrotron Radiation Research Center (NSRRC) in Taiwan
Elettra Sincrotrone Trieste - Elettra and Fermi lightsources
Canadian Light Source
Australian Synchrotron
French synchrotron Soleil
Diamond UK Synchrotron
Lightsources.org
IAEA database of electron synchrotron and storage rings
CERN Large Hadron Collider
Synchrotron Light Sources of the World
A Miniature Synchrotron: room-size synchrotron offers scientists a new way to perform high-quality x-ray experiments in their own labs, Technology Review, February 4, 2008
Brazilian Synchrotron Light Laboratory
Podcast interview with a scientist at the European Synchrotron Radiation Facility
Indian SRS
Spanish ALBA Light Source
The tabletop synchrotron MIRRORCLE
SOLARIS synchrotron in Poland
Accelerator physics
Synchrotron-related techniques
Particle accelerators | Synchrotron | [
"Physics"
] | 2,567 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
599,222 | https://en.wikipedia.org/wiki/Recto%20and%20verso | Recto is the "right" or "front" side and verso is the "left" or "back" side when text is written or printed on a leaf of paper () in a bound item such as a codex, book, broadsheet, or pamphlet.
In double-sided printing, each leaf has two pages – front and back. In modern books, the physical sheets of paper are stacked and folded in half, producing two leaves and four pages for each sheet. For example, the outer sheet in a 16-page book will have one leaf with pages 1 (recto) and 2 (verso), and another leaf with pages 15 (recto) and 16 (verso). Pages 1 and 16, for example, are printed on the same side of the physical sheet of paper, combining recto and verso sides of different leaves. The number of pages in a book using this binding technique must thus be a multiple of four, and the number of leaves must be a multiple of two, but unused pages are typically left unnumbered and uncounted. A sheet folded in this manner is known as a folio, a word also used for a book or pamphlet made with this technique.
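The folding arithmetic above can be written out directly. The sketch below computes, for a single folded signature, which pages share each side of each physical sheet; it reproduces the pairing described above (pages 1 and 16 printed together, pages 2 and 15 together) and assumes a page count that is a multiple of four.

```python
# Which pages print together on the sheets of a folded signature,
# following the recto/verso pairing described above.

def imposition(page_count: int):
    """Return (sheet, side A pages, side B pages) tuples for a booklet
    made from page_count pages folded as a single signature."""
    if page_count % 4 != 0:
        raise ValueError("page count must be a multiple of four")
    sheets = []
    for s in range(1, page_count // 4 + 1):
        side_a = (page_count - 2 * s + 2, 2 * s - 1)  # e.g. (16, 1)
        side_b = (2 * s, page_count - 2 * s + 1)      # e.g. (2, 15)
        sheets.append((s, side_a, side_b))
    return sheets

for sheet, a, b in imposition(16):
    print(f"sheet {sheet}: side A pages {a}, side B pages {b}")
```

Note that every recto page number produced this way is odd, matching the book-publishing convention described later in this article.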
Looseleaf paper consists of unbound leaves. Sometimes single-sided or blank leaves are used for numbering or counting and abbreviated "l." instead of "p." for the number of pages.
Etymology
The terms are shortened from Latin rēctō foliō and versō foliō (which translate as "on the right side of the leaf" and "on the back side of the leaf"). The two opposite pages themselves are called folium rēctum and folium versum in Latin, and the ablatives rēctō and versō already imply that the text on the page (and not the physical page itself) is referred to.
Usage
In codicology, each physical sheet (folium, abbreviated fol. or f.) of a manuscript is numbered, and the sides are referred to as recto and verso, abbreviated as r and v respectively. Editions of manuscripts will thus mark the position of text in the original manuscript in the form fol. 1r, sometimes with the r and v in superscript, as in 1r, or with a superscript o indicating the ablative rēctō, versō, as in 1ro. This terminology has been standard since the beginnings of modern codicology in the 17th century.
In 2011, Martyn Lyons argued that the term rēctus ("right, correct, proper") for the front side of the leaf derives from the use of papyrus in late antiquity, as a different grain ran across each side, and only one side was suitable to be written on, so that usually papyrus would carry writing only on the "correct", smooth side (and just in exceptional cases would there be writing on the reverse side of the leaf).
The terms "recto" and "verso" are also used in the codicology of manuscripts written in right-to-left scripts, like Syriac, Arabic and Hebrew. However, as these scripts are written in the other direction to the scripts witnessed in European codices, the recto page is to the left while the verso is to the right. The reading order of each folio remains first verso, then recto, regardless of writing direction.
The terms are carried over into printing; recto-verso printing is the norm for printed books, and it was an important advantage of the printing press over the much older Asian woodblock printing method, which printed by rubbing from behind the page being printed and so could only print on one side of a piece of paper. The distinction between recto and verso can be convenient in the annotation of scholarly books, particularly in bilingual edition translations.
The "recto" and "verso" terms can also be employed for the front and back of a one-sheet artwork, particularly in drawing. A drawing is a sheet with drawings on both sides, for example in a sketchbook—although usually in these cases there is no obvious primary side. Some works are planned to exploit being on two sides of the same piece of paper, but usually the works are not intended to be considered together. Paper was relatively expensive in the past; good drawing paper still is much more expensive than normal paper.
By book publishing convention, the first page of a book, and sometimes of each section and chapter of a book, is a recto page, and hence all recto pages will have odd numbers and all verso pages will have even numbers.
In many early printed books or incunables, and still in some 16th-century books, it is the folia ("leaves"), rather than the pages, that are numbered. Thus, each folium carries a consecutive number on its recto side, while on the verso side there is no number. This was also very common in, for example, internal company reports in the 20th century, before double-sided printers became commonplace in offices.
See also
Book design
Obverse and reverse in coins
Page spread
Page (paper)
References
External links
Printing
Book design
Book terminology
Papyrus
Codicology | Recto and verso | [
"Engineering"
] | 1,026 | [
"Book design",
"Design"
] |
599,261 | https://en.wikipedia.org/wiki/American%20Society%20of%20Civil%20Engineers | The American Society of Civil Engineers (ASCE) is a tax-exempt professional body founded in 1852 to represent members of the civil engineering profession worldwide. Headquartered in Reston, Virginia, it is the oldest national engineering society in the United States. Its constitution was based on the older Boston Society of Civil Engineers from 1848.
ASCE is dedicated to the advancement of the science and profession of civil engineering and the enhancement of human welfare through the activities of society members. It has more than 143,000 members in 177 countries. Its mission is to provide essential value to members, their careers, partners, and the public; facilitate the advancement of technology; encourage and provide the tools for lifelong learning; promote professionalism and the profession; develop and support civil engineers.
History
The first serious and documented attempts to organize civil engineers as a professional society in the newly created United States were in the early 19th century. In 1828, John Kilbourn of Ohio managed a short-lived "Civil Engineering Journal". Editorializing about the incorporation of the Institution of Civil Engineers in Great Britain that same year, Kilbourn suggested that the American corps of engineers could constitute an American society of civil engineers. Later, in 1834, an American trade periodical, the "American Railroad Journal", advocated for a similar national organization of civil engineers.
Institution of American Civil Engineers
On December 17, 1838, a petition started circulating asking civil engineers to meet in 1839 in Baltimore, Maryland, to organize a permanent society of civil engineers. Prior to that, thirteen notable civil engineers, largely identifiable as being from New York, Pennsylvania, or Maryland, met in Philadelphia. This group presented the Franklin Institute of Philadelphia with a formal proposal that an Institution of American Civil Engineers be established "as an adjunct of the Franklin". Some of them were:
Benjamin Wright. In 1969, the American Society of Civil Engineers declared Wright to be the 'Father of American Civil Engineering'.
William Strickland
Pennsylvanians Edward Miller and Solomon W. Roberts, the latter being Chief Engineer for the Allegheny Portage Railroad, the first crossing of the Allegheny Mountains (1831–1834)
Forty engineers actually appeared at the February 1839 meeting in Baltimore, including J. Edgar Thomson (future Chief Engineer and later President of the Pennsylvania Railroad), Wright, Roberts, Edward Miller, the Maryland engineers Isaac Trimble and Benjamin Henry Latrobe II, and attendees from as far as Massachusetts, Illinois, and Louisiana. Subsequently, a group met again in Philadelphia, led by its Secretary, Edward Miller, to take steps to formalize the society; participants now included such other notable engineers as:
John B. Jervis
Claudius Crozet
William Gibbs McNeill
George Washington Whistler
Walter Gwynn
J. Edgar Thomson
Sylvester Welch, brother of future ASCE president Ashbel Welch
Jonathan Knight
Benjamin Henry Latrobe II
Moncure Robinson.
Miller drafted a proposed constitution that defined the society's purpose as "the collection and diffusion of professional knowledge, the advancement of mechanical philosophy, and the elevation of the character and standing of the Civil Engineers of the United States." Membership in the new society was restricted to engineers, and "architects and eminent machinists were to be admitted only as Associates."
The proposed constitution failed, and no further attempts were made at the time to form another society. Miller later ascribed the failure to the difficulty of assembling members, given the limited means of travel in the country at the time. One of the other difficulties members would have had to contend with was the requirement to produce each year one previously unpublished paper or "...present a scientific book, map, plan or model, not already in the possession of the Society, under the penalty of $10."
In that same period, the editor of the American Railroad Journal commented that the effort had failed in part due to certain jealousies arising from the proposed affiliation with the Franklin Institute. That journal continued the discussion on forming an engineers' organization from 1839 through 1843, serving its own interests by advocating the journal as a replacement for a professional society, but to no avail.
The American Society of Civil Engineers and Architects
During the 1840s, professional organizations continued to develop and organize in the United States. The organizers' motives were largely to "improve common standards, foster research, and disseminate knowledge through meetings and publications." Unlike earlier associations such as the American Philosophical Society, these newer associations were not seeking to limit membership as much as pursue "more specialized interests." Examples of this surge in new professional organizations in America were the American Statistical Association (1839), American Ethnological Society (1842), American Medical Association (1847), American Association for the Advancement of Science, (1848) and National Education Association (1852).
During this same period of association incorporations in the 1840s, attempts were again made at organizing an American engineering association. They succeeded first with the Boston Society of Civil Engineers, organized in 1848, and then in October 1852, with an effort to organize a Society of Civil Engineers and Architects in New York. Led by Alfred W. Craven, Chief Engineer of the Croton Aqueduct and future ASCE president, the meeting resolved to incorporate the society under the name "American Society of Civil Engineers and Architects". Membership eligibility was restricted to "civil, geological, mining and mechanical Engineers, architects, and other persons who, by profession, are interested in the advancement of science." James Laurie was elected the society's first president. At an early meeting of the Board of Direction in 1852, instructions were given for the incorporation of the "American Society of Civil Engineers and Architects", but the proper steps were never taken, and therefore this name never legally belonged to the association. The ASCE held its first meetings at the Croton Aqueduct Department building in City Hall Park, Manhattan. The meetings only went through 1855, and with the advent of the American Civil War, the society suspended its activities.
Late 19th century
The next meeting was more than twelve years later in 1867. A number of the original founders, such as James Laurie, J.W. Adams, C. W. Copeland, and W. H. Talcott, were at this meeting and were dedicated to the objective of resuscitating the society. They also planned to put the society on a more permanent footing and elect fifty-four new members. With success in that effort, the young engineering society passed a resolution noting that its preservation was mainly due to the persevering efforts of its first president, James Laurie. The address of President James Pugh Kirkwood, delivered at that meeting in 1867, was the first publication of the society, appearing in Volume 1 of "Transactions", bearing the date of 1872.
On March 4, 1868, by a vote of 17 to 4, the name was changed to "American Society of Civil Engineers", but it was not until April 17, 1877, that the lack of incorporation was discovered and the proper steps taken to remedy the defect. The society was then chartered and incorporated in New York state.
The reconvened ASCE met at the Chamber of Commerce of the State of New York until 1875 when the society moved to 4 East 23rd Street. The ASCE moved again in 1877 to 104 East 20th Street and in 1881 to 127 East 23rd Street. The ASCE commissioned a new headquarters at 220 West 57th Street in 1895. The building was completed in 1897 and served as the society's headquarters until 1917 when the ASCE moved to the Engineering Societies' Building.
20th century
Nora Stanton Barney was among the first women in the United States to earn a civil engineering degree, graduating from Cornell University in 1905. In the same year, she was accepted as a junior member of the organization and began work for the New York City Board of Water Supply. She was the first female member of ASCE, admitted as a junior member, but was denied advancement to associate member in 1916 because of her gender. In 2015, she was posthumously advanced to ASCE Fellow status.
In 1999, the ASCE elected the top-ten "civil engineering achievements that had the greatest positive impact on life in the 20th century" in "broad categories". Monuments of the Millennium were a "combination of technical engineering achievement, courage and inspiration, and a dramatic influence on the development of [their] communities".
The achievements and monuments that best exemplified them included:
Airport design and development: the Kansai International Airport in Osaka, Japan
Dams: the Hoover Dam on the Colorado River in the United States
Interstate Highway System: "the system overall"
Long-span bridges: the Golden Gate Bridge in San Francisco, California
Rail transportation: the Eurotunnel rail system connecting the UK and France
Sanitary landfills and solid waste disposal: "sanitary waste disposal advances overall"
Skyscrapers: the Empire State Building in Midtown Manhattan, New York City
Wastewater treatment: the Chicago wastewater system
Water supply and distribution: the California State Water Project
Water transportation: the Panama Canal
Overview
ASCE's mission is to deliver essential value to "its members, their careers, our partners, and the public" as well as enable "the advancement of technology, encourage and provide the tools for lifelong learning, promote professionalism and the profession." The society also seeks to "develop and support civil engineer leaders, and advocate infrastructure and environmental stewardship." As a tax-exempt organization in the United States (Section 501(c)(3)), the society is required to report its program service accomplishments and related expenses and revenues.
Publications
ASCE states that dissemination of technical and professional information to the civil engineering profession is a major goal of the society. This is accomplished through a variety of publications and information products, including 35 technical and professional journals, among them:
ASCE Journal of Structural Engineering
Journal of Environmental Engineering
Journal of Hydraulic Engineering
Journal of Hydrologic Engineering
Journal of Transportation Engineering, Part A: Systems
Journal of Transportation Engineering, Part B: Pavements
Journal of Water Resources Planning and Management
Civil Engineering, the society's monthly magazine
They also publish an online bibliographic database, conference proceedings, standards, manuals of practice, and technical reports.
The ASCE Library contains 470+ E-books and standards, some with chapter-level access and no restrictive DRM, and 600+ online proceedings.
Conferences, meetings, and education
Each year, more than 55,000 engineers earn continuing education units (CEUs) and/or professional development hours (PDHs) by participating in ASCE's continuing education programs. ASCE hosts more than 15 annual and specialty conferences, over 200 continuing education seminars and more than 300 live web seminars. Meetings include "...committees, task forces, focus groups, workshops and seminars designed to bring together civil engineering experts either from specific fields or those with a broad range of experience and skills. These meetings deal with specific topics and issues facing civil engineers such as America's failing infrastructure, sustainability, earthquakes, and bridge collapses."
Engineering programs
The engineering programs division directly advances the science of engineering by delivering technical content for ASCE's publications, conferences and continuing education programs. It consists of eight discipline-specific institutes, four technical divisions, and six technical councils. The work is accomplished by over 600 technical committees with editorial responsibility for 28 of ASCE's 33 journals. On an annual basis, the division conducts more than twelve congresses and specialty conferences. As a founding society of ANSI and an accredited standards development organization, ASCE committees use an established and audited process to produce consensus standards under a program supervised by the society's Codes and Standards Committee.
Civil Engineering Certification Inc. (CEC), affiliated with ASCE, has been established to support specialty certification academies for civil engineering specialties and is accredited by the Council of Engineering and Scientific Specialty Boards (CESB). CEC also handles safety certification for state, municipal, and federal buildings, formerly the province of the now-defunct Building Security Council. The Committee on Critical Infrastructure (CCI) provides vision and guidance on ASCE activities related to critical infrastructure resilience, including planning, design, construction, O&M, and event mitigation, response and recovery.
Certification is the recognition of attaining advanced knowledge and skills in a specialty area of civil engineering. ASCE offers certifications for engineers who demonstrate advanced knowledge and skills in their area of engineering.
American Academy of Water Resources Engineers (AAWRE)
Academy of Geo-Professionals (AGP)
Academy of Coastal, Ocean, Port & Navigation Engineers (ACOPNE)
Institutes
ASCE also has nine full-service institutes created to serve working professionals working within specialized fields of civil engineering:
Architectural Engineering Institute (AEI)
Coasts, Oceans, Ports and Rivers Institute (COPRI)
Construction Institute (CI)
Engineering Mechanics Institute (EMI)
Environmental and Water Resources Institute (EWRI)
Geo-Institute (G-I)
Transportation and Development Institute (T&DI)
Structural Engineering Institute (SEI)
Utility Engineering & Surveying Institute (UESI)
Advocacy
To advance its policy mission, ASCE "...identifies legislation to improve the nation's infrastructure, and advance the profession of engineering." Specifically, ASCE lobbies on legislation at the federal, state, and local levels. In 2015, ASCE's lobbying at the federal level focused primarily upon:
Reauthorization of the federal surface transportation programs, such as the Moving Ahead for Progress in the 21st Century Act (MAP-21)
Reauthorization of the Brownfields Revitalization and Environmental Restoration Act
Reauthorization of the National Dam Safety Program and creation of a national levee safety program under the National Levee Safety Act of 2007 (WRDA Title IX, Section 9000)
Reauthorization of the Clean Water State Revolving Fund program
Reauthorization of the Drinking Water State Revolving Fund program
Water Resources Development Act
Funding for STEM education programs
Reauthorization of the 1977 National Earthquake Hazards Reduction Program
Reauthorization of the National Windstorm Impact Reduction Act
Safe Building Code Incentive Act
Appropriations for federal programs relating to civil engineering, including surface transportation, aviation, water resources, environment, education, homeland security, and research and development.
Lobbying at the state and local level focused primarily upon licensure of civil engineers, procurement of engineering services, continuing education, and the financing of infrastructure improvements as well as lobbying at the state level to raise the minimum requirements for licensure as a professional engineer as part of ASCE's Raise the Bar (RTB) and Civil Engineering Body of Knowledge (CEBoK) initiatives.
For 2018, ASCE identified Federal advocacy priorities as follows:
Civil engineering education (higher education)
Clean water, drinking water and wastewater issues
Natural hazards mitigation & infrastructure security
Qualifications-Based Selection for engineering services
Research and Development Funding
Science, technology, engineering and math (STEM) education & support (K-12)
Sustainability, implicitly sustainable engineering
Transportation infrastructure
The State advocacy priorities in 2018 are as follows:
Licensing
Natural Hazards Impact Mitigation
Science, Technology, Engineering and Math (STEM) education & support (K-12)
State support for civil engineering higher education
Sustainability, implicitly sustainable engineering
Tort reform & indemnification for pro bono services
Transportation infrastructure financing
Strategic issues and initiatives
To promote the society's objectives and address key issues facing the civil engineering profession, ASCE developed three strategic initiatives: Sustainable Infrastructure, the ASCE Grand Challenge, and Raise the Bar.
Awards and designations
ASCE honors civil engineers through many Society Awards including the Norman medal (1874), Wellington prize (1921), Huber Civil Engineering Research Prize, the Outstanding Projects and Leaders (OPAL) awards in the categories of construction, design, education, government and management, the Outstanding Civil Engineering Achievement (OCEA) for projects, the Henry L. Michel Award for Industry Advancement of Research and the Charles Pankow Award for innovation, 12 scholarships and fellowships for student members.
Created in 1968 by ASCE's Sanitary Engineering Division, the Wesley W. Horner award is named after former ASCE President Wesley W. Horner, and given to a recently peer reviewed published paper in the fields of hydrology, urban drainage, or sewerage. Special consideration is given to private practice engineering work that is recognized as a valuable contribution to the field of environmental engineering.
The Lifetime Achievement Award has been presented annually since 1999 and recognizes five different individual leaders. One award is present in each category of design, construction, government, education, and management.
Walter L. Huber Civil Engineering Research Prize
In July 1946, the Board of Direction authorized annual awards on recommendation by the society's Committee on Research to stimulate research in civil engineering. In October 1964, Mrs. Alberta Reed Huber endowed these prizes in honor of her husband, Walter L. Huber, past president, ASCE. The Huber Prize is considered the highest level mid-career research prize in civil engineering and is awarded for outstanding achievements and contributions in research with respect to all disciplines of civil engineering.
LTPP International Data Analysis Contest Award
The LTPP International Data Analysis Contest is an annual data analysis contest held by the ASCE in collaboration with the Federal Highway Administration (FHWA). Participants are expected to use the LTPP data.
ASCE Foundation
The ASCE Foundation is a charitable foundation established in 1994 to support and promote civil engineering programs that "... enhance quality of life, promote the profession, advance technical practices, and prepare civil engineers for tomorrow." It is incorporated separately from the ASCE, although it has a close relationship to it and all the foundation's personnel are employees of ASCE. The foundation board of directors has seven persons and its bylaws require that four of the seven directors must be ASCE officers as well and the ASCE executive director and chief financial officer must also be ASCE employees. The foundation's support is most often to ASCE's charitable, educational and scientific programs. The foundation's largest program is supporting three strategic areas; lifelong learning and leadership, advocacy for infrastructure investment and the role of civil engineers in sustainable practices. In 2014, this foundation's support in these areas was almost US$4 million.
Criticisms and historical controversies
Controversies in New Orleans levee investigations
Press release of expert review panel 2007
ASCE provides peer reviews at the request of public agencies and projects as a "means to improve the management and quality of [public agency] services and thus better protect the public health and safety with which they are entrusted".
After the 2005 levee failures in Greater New Orleans, the commander of the U.S. Army Corps of Engineers (Lt Gen Carl Strock P.E., M.ASCE) requested that ASCE create an expert review panel to peer review the corps-sponsored Interagency Performance Evaluation Task Force, the body commissioned by the corps to assess the performance of the hurricane protection system in metro New Orleans. Lawrence Roth, deputy executive director of the ASCE led the ERP development, served as the panel's chief of staff and facilitated its interaction with IPET.
The expert panel's role was to provide an independent technical review of the IPET's activities and findings, as stated at a National Research Council meeting in New Orleans: "an independent review panel ensure[s] that the outcome is a robust, credible and defensible performance evaluation". On February 12, 2007, Lt. Gen. Strock awarded all expert review panel members Outstanding Civilian Service Medals.
On June 1, 2007, the ASCE issued its expert review panel report, and an accompanying press release. The press release was considered controversial because it contained information not present in the report, conflicted with the report, and minimized the Army Corps' involvement in the catastrophe: "Even without breaching, Hurricane Katrina's rainfall and surge overtopping would have caused extensive and severe flooding—and the worst loss of life and property loss ever experienced in New Orleans." The report stated that had levees and pump stations not failed, "far less property loss would have occurred and nearly two-thirds of deaths could have been avoided." The ASCE administration was criticized by the Times-Picayune for an attempt to minimize and understate the role of the Army Corps in the flooding.
Ethics complaint
In October 2007, Raymond Seed, a University of California-Berkeley civil engineering professor and ASCE member, submitted a 42-page ethics complaint to the ASCE alleging that the corps of engineers, with ASCE's help, sought to minimize the corps' mistakes in the flooding, intimidate anyone who tried to intervene, and delay the final results until the public's attention had turned elsewhere. The corps acknowledged receiving a copy of the letter and refused to comment until the ASCE's Committee on Professional Conduct (CPC) had commented on the complaint. It took over a year for the ASCE to announce the results of the CPC. The ASCE self-study panel did not file charges of ethical misconduct, blaming errors in the June press release on "staff" rather than on review panel members.
Review panels to examine alleged ethics breaches
On November 14, 2007, ASCE announced that U.S. Congressman Sherwood Boehlert, R‑N.Y. (ret), would lead an independent task force of outside experts to review how ASCE participated in engineering studies of national significance. ASCE President David Mongan said the review was to address criticism of ASCE's role in assisting the Army Corps of Engineers-sponsored investigation of Katrina failures. Mongan assured citizens of metro New Orleans, in a letter to the Times-Picayune, that ASCE took "this matter very seriously and that appropriate actions are being taken".
The panel recommended in results released on September 12, 2008, that ASCE should immediately take steps to remove the potential for conflict of interest in its participation in post-disaster engineering studies. The most important recommendations were that peer review funds over $1 million should come from a separate source, like the National Institute of Standards and Technology (NIST), that ASCE should facilitate but not control the assessment teams, and that information to the public and press should be disseminated not under the extremely tight controls that Ray Seed and his team experienced. It concluded that ASCE should draw up an ethics policy to eliminate questions of possible conflicts of interest.
On April 6, 2009, an internal probe with the ASCE issued a report that ordered a retraction of the ASCE's June 1, 2007, press release. The panel determined that the press release had "inadvertently conveyed a misleading impression regarding the role of engineering failures in the devastation of New Orleans", that it incorrectly said that surge levels along Mississippi's coastline were higher than water levels caused by a tsunami in the Indian Ocean in 2004, and that it had incorrectly repeated estimates of deaths and property damage in New Orleans that might have occurred if levees and floodwalls had not been breached.
Grassroots group spoof of ASCE–USACE relationship
On November 5, 2007, New Orleans–based grassroots group Levees.org led by Sandy Rosenthal criticized the ASCE's close relationship with the United States Army Corps of Engineers in a spoof online public service announcement. On November 12, 2007, the ASCE asked Levees.org to remove the video from the internet, threatening the organization with legal action if it did not comply. On November 13, the Times-Picayune posted the video on its website. Flanked by lawyers with Adams and Reese in the presence of extensive media coverage, the group ignored the threat and posted the video to YouTube citing Louisiana's Anti-SLAPP statute, a "strategic lawsuit against public participation", which allows courts to weed out lawsuits designed to chill public participation on matters of public significance. In a response for comment, ASCE President Mongan replied, "Since the video has already been widely reposted by other organizations, moving forward, we feel our time and expertise are best utilized working to help protect the residents of New Orleans from future storms and flooding."
USACE grant money for disinformation, 2008
In March 2008, Levees.org announced that records obtained under the Freedom of Information Act revealed that as early as October 2005, the Army Corps of Engineers had directed and later paid the ASCE more than $1.1 million for its peer review (Grant Number: W912HZ-06-1-0001). The grant also paid for a series of misleading ASCE presentations attempting to shift blame away from the corps and onto local levee officials. Members of the ASCE are forbidden from making false or exaggerated statements and also from making statements for an interested party unless this is disclosed. Levees.org claimed the records showed how the external peer review would be done in four phases: Phase 1 was research and analysis on the performance of the levees, floodwalls and other important structures. Phase 2 was provision of information on the current system to prevent future flooding. Phase 3 was provision of information to evaluate alternative approaches to flood protection. Phase 4 was transfer of information and knowledge gained to a broader audience within the Corps and its consultancy community to communicate lessons learned. The group claimed that these records were proof that ASCE's routine PowerPoint presentations from 2007 and 2008 were a public relations campaign to repair the corps' reputation. ASCE officials responded that ASCE paid for the PowerPoint presentations itself and had not used USACE grant money for that purpose.
See also
ASCE Library – online database of civil engineering journals, proceedings, e-books, and standards published by the society
List of Historic Civil Engineering Landmarks – landmarks designated by the ASCE
References
External links
"Centennial of Engineering" A 3¢ commemorative US postage stamp issued in 1952
Civil engineering professional associations
Civil engineering
Organizations established in 1852
1852 establishments in the United States | American Society of Civil Engineers | [
"Engineering"
] | 5,202 | [
"American Society of Civil Engineers",
"Civil engineering organizations",
"Construction",
"Civil engineering",
"Civil engineering professional associations"
] |
599,309 | https://en.wikipedia.org/wiki/List%20of%20Historic%20Civil%20Engineering%20Landmarks |
The following is a list of Historic Civil Engineering Landmarks as designated by the American Society of Civil Engineers since it began the program in 1964. The designation is granted to projects, structures, and sites in the United States (National Historic Civil Engineering Landmarks) and the rest of the world (International Historic Civil Engineering Landmarks).
As of 2024, there are 235 designated Historic Civil Engineering Landmarks in the United States and 61 internationally, totaling 296 landmarks worldwide. Sections or chapters of the American Society of Civil Engineers may also designate state or local landmarks within their areas; those landmarks are not listed here.
See also
List of Historic Mechanical Engineering Landmarks
References
External links
American Society of Civil Engineers Historic Landmarks
Historic Civil Engineering Landmarks
Civil engineering
American Society of Civil Engineers
Civil engineering | List of Historic Civil Engineering Landmarks | [
"Engineering"
] | 150 | [
"American Society of Civil Engineers",
"Historic Civil Engineering Landmarks",
"Construction",
"Civil engineering organizations",
"Civil engineering"
] |
599,325 | https://en.wikipedia.org/wiki/Fact%20sheet | A factsheet or fact sheet, also called fact file, is a single-page document containing essential information about a product, substance, service or other topic. Factsheets are frequently used to provide information to an end user, consumer or member of the public in concise, simple language. They generally contain key safety points, operating instructions or basic information about a topic depending on the purpose of the fact sheet.
Typical contents
Factsheets frequently make use of elements such as lists, tables and diagrams to convey meaning quickly and effectively. The language and content of a factsheet depend on its target audience; a factsheet aimed at professional engineers may use more technical language than one aimed at an end-user.
History
Factsheets were traditionally printed and physically distributed, often included in the packaging of a product. Many manufacturers now provide digital factsheets as well as or instead of paper-and-ink documents.
Examples
The World Health Organization provides fact sheets on a wide range of health issues
The US National Aeronautics and Space Administration provides planetary fact sheets.
The US conservation organization Defenders of Wildlife provides fact sheets about animals.
The World Factbook, a collection from the US Central Intelligence Agency of tabular factsheets on various countries.
The Federal Republic of Germany has published a fact sheet on the unique dual vocational training system.
See also
Brochure
Executive summary
Infobox
Infographic
One sheet
Press release
Public relations
References
Data publishing
Information
Technical communication
News media
Publishing | Fact sheet | [
"Technology"
] | 294 | [
"Data",
"Data publishing"
] |
599,432 | https://en.wikipedia.org/wiki/Halligan%20bar | A Halligan bar (also known as a Halligan tool or Hooligan tool) is a forcible entry tool used by firefighters.
History
The Halligan bar was designed by New York City Fire Department (FDNY) First Deputy Chief Hugh Halligan in 1948 and was named after him.
"Created by Hugh Halligan, allegedly modeled on a burglar's tool found in the rubble of a bank fire during overhaul operations." — New York City Fire Museum
That same year, blacksmith Peter Clarke made the first prototype of the tool.
"Due to a dispute between the Department and Halligan, the tool was not purchased by the FDNY until the patent expired and the Department was able to buy comparable tools from other vendors. Nonetheless it was widely used; firefighters purchased their own "Halligans" out-of-pocket, a tribute to its effectiveness and dependability. The FDNY now issues a modified Halligan Tool called the "PRO-BAR," manufactured by Fire Hooks Unlimited, for use as the primary forcible entry tool." — New York City Fire Museum
Despite its popularity among FDNY ladder companies, the department initially refrained from purchasing the tool to avoid the appearance of a conflict of interest. However, the Boston Fire Department was the first major customer of the Halligan bar, purchasing one for every fire company in the city. This led to widespread adoption of the tool, first in North America and eventually worldwide. The Halligan bar has become the most versatile hand tool for fireground tasks over the past seven decades.
Design
Based on the earlier Kelly tool, the Halligan is a multipurpose tool for prying, twisting, punching, or striking. It consists of a claw (or fork), a blade (wedge or adze), and a tapered pick, which is especially useful in quickly breaching many types of locked doors.
One variant of the Halligan has a heavy sliding collar on the shaft. Once the prying end of the tool is wedged into position, the sliding "hammer" is used to force the wedge, allowing for proper seating before prying. The adze end is also assisted by using the sliding hammer to generate forced traction on a hooked cylinder. Another variant has an end that resembles a lever-type can opener, used for making large holes for access or ventilation in sheet metal.
The Halligan is available in a number of lengths and in various materials, including titanium, beryllium copper, and stainless steel, and may be fitted with carrying straps or rings. The 18-inch Halligan is often referred to as an officer's tool.
A Halligan bar and a flathead axe can be joined (and partially interlocked, head-to-toe) to form what is known as a married set, set of irons or simply the irons. This combination of tools is most common within the fire service. However, the Halligan may also be combined with a Halligan hook or sledgehammer as an alternative.
Uses
Doors and locks
Either the adze end or fork end of the tool can be used to break through the latch of a swinging door by forcing the tool between the door and door jamb and prying the two apart, striking it with a sledgehammer or a flat-head axe.
The firefighter holding the Halligan can use a "baseball bat swing" to sink the pick into the door frame near the door handle and then force the door by applying pressure to the adze.
Another option is to use the Halligan to pry the door off the top hinges. The pick and adze (only when properly used) provide protection to the arms, hands, and body of the holder during forcible entry operation.
The pick can be placed into the shackle (or eye) of a padlock or hasp and twisted or pried to break it free.
Using a K-tool and the adze end, a lock cylinder can easily be pulled.
Vehicles
The Halligan can be used to make a purchase point on a car hood to cut the battery.
The Halligan can also be used for vehicle extrication, among other things.
The tool can be used to pry open the hood of a car when it is jammed from an accident.
The Halligan can be used to knock down a wall in a house to get to another area.
The point can be used to break glass on a car or building for access or ventilation.
It can also be driven into a roof to provide a foothold for firefighters engaged in vertical ventilation.
The fork end is routinely used to shut off gas meter valves.
The Halligan can be used as a step to get up on a window that is at head level.
The Halligan can be tied to a rope and act as an anchor in the window frame, for improvised bailout.
See also
Pulaski (tool)
References
Ryan, Gregory. Firefighter's Escape Implement. Patent. 2007.
Further reading
Essentials of Fire Fighting; Hall, Richard and Adams, Barbara, eds.; 4th Ed., 1998: Board of Regents, Oklahoma State University–Stillwater.
FDNY Forcible Entry Reference Guide, Techniques and Procedures
External links
Firefighter tools
Hand tools | Halligan bar | [
"Engineering"
] | 1,068 | [
"Human–machine interaction",
"Hand tools"
] |
599,463 | https://en.wikipedia.org/wiki/Isoniazid | Isoniazid, also known as isonicotinic acid hydrazide (INH), is an antibiotic used for the treatment of tuberculosis. For active tuberculosis, it is often used together with rifampicin, pyrazinamide, and either streptomycin or ethambutol. For latent tuberculosis, it is often used alone. It may also be used for atypical types of mycobacteria, such as M. avium, M. kansasii, and M. xenopi. It is usually taken by mouth, but may be used by injection into muscle.
History
Its first synthesis was described in 1912. A. Kachugin developed the drug as a tuberculosis treatment under the name Tubazid in 1949. Three pharmaceutical companies unsuccessfully attempted to patent the drug at the same time, the most prominent one being Roche, which launched its version, Rimifon, in 1952.
The drug was first tested at Many Farms, a Navajo community in Arizona, due to the Navajo reservation's tuberculosis problem and because the population had not previously been treated with streptomycin, the main tuberculosis treatment at the time. The research was led by Walsh McDermott, an infectious disease researcher with an interest in public health, who had previously taken isoniazid to treat his own tuberculosis.
Isoniazid and a related drug, iproniazid, were among the first drugs to be referred to as antidepressants. Psychiatric use stopped in 1961 following reports of hepatotoxicity. Use against tuberculosis continued, as isoniazid's effectiveness against the disease outweighs its risks.
It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies isoniazid as critically important for human medicine. Isoniazid is available as a generic medication.
Medical uses
Tuberculosis
Isoniazid is often used to treat latent and active tuberculosis infections. In persons with isoniazid-sensitive Mycobacterium tuberculosis infection, drug regimens based on isoniazid are usually effective when persons adhere to the prescribed treatment. However, in persons with isoniazid-resistant Mycobacterium tuberculosis infection, drug regimens based on isoniazid have a high rate of failure.
Isoniazid has been approved as prophylactic therapy for the following populations:
People with HIV infection and a PPD (purified protein derivative) reaction of at least 5 mm induration
Contacts of people with tuberculosis and who have a PPD reaction at least 5 mm induration
People whose PPD reactions convert from negative to positive in a two-year period – at least 10 mm induration for those up to 35 years of age, and at least 15 mm induration for those at least 35 years old
People with pulmonary damage on their chest X-ray that is likely to be due to healed tuberculosis and also have a PPD reaction at least 5 mm induration
Injection drug users whose HIV status is negative who have a PPD reaction at least 10 mm induration
People with a PPD of greater than or equal to 10 mm induration who are foreign-born from high prevalence geographical regions, low-income populations, and patients residing in long-term facilities
Isoniazid can be used alone or in combination with rifampicin for treatment of latent tuberculosis, or as part of a four-drug regimen for treatment of active tuberculosis. The drug regimen typically requires daily or weekly oral administration for a period of three to nine months, often under directly observed therapy (DOT) supervision.
Non-tuberculous mycobacteria
Isoniazid was widely used in the treatment of Mycobacterium avium complex as part of a regimen including rifampicin and ethambutol. Evidence suggests that isoniazid prevents mycolic acid synthesis in M. avium complex as in M. tuberculosis and although this is not bactericidal to M. avium complex, it greatly potentiates the effect of rifampicin. The introduction of macrolides led to this use greatly decreasing. However, since rifampicin is broadly underdosed in M. avium complex treatment, this effect may be worth re-investigating.
Special populations
It is recommended that women with active tuberculosis who are pregnant or breastfeeding take isoniazid. Preventive therapy should be delayed until after giving birth. Nursing mothers excrete a relatively low and non-toxic concentration of INH in breast milk, and their babies are at low risk for side effects. Both pregnant women and infants being breastfed by mothers taking INH should take vitamin B6 in its pyridoxine form to minimize the risk of peripheral nerve damage.
Vitamin B6 is used to prevent isoniazid-induced B6 deficiency and neuropathy in people with a risk factor, such as pregnancy, lactation, HIV infection, alcoholism, diabetes, kidney failure, or malnutrition.
People with liver dysfunction are at a higher risk for hepatitis caused by INH, and may need a lower dose.
Levels of liver enzymes in the bloodstream should be frequently checked in daily alcohol drinkers, pregnant women, IV drug users, people over 35, and those who have chronic liver disease, severe kidney dysfunction, peripheral neuropathy, or HIV infection since they are more likely to develop hepatitis from INH.
Side effects
Up to 20% of people taking isoniazid experience peripheral neuropathy when taking daily doses of 6 mg/kg of body weight or higher. Gastrointestinal reactions include nausea and vomiting. Aplastic anemia, thrombocytopenia, and agranulocytosis due to lack of production of red blood cells, platelets, and white blood cells by the bone marrow respectively, can also occur. Hypersensitivity reactions are also common and can present with a maculopapular rash and fever. Gynecomastia may occur.
Asymptomatic elevation of serum liver enzyme concentrations occurs in 10% to 20% of people taking INH, and liver enzyme concentrations usually return to normal even when treatment is continued. Isoniazid has a boxed warning for severe and sometimes fatal hepatitis, which is age-dependent at a rate of 0.3% in people 21 to 35 years old and over 2% in those over age 50. Symptoms suggestive of liver toxicity include nausea, vomiting, abdominal pain, dark urine, right upper quadrant pain, and loss of appetite. Black and Hispanic women are at higher risk for isoniazid-induced hepatotoxicity. When it happens, isoniazid-induced liver toxicity has been shown to occur in 50% of patients within the first 2 months of therapy.
Some recommend that liver function should be monitored carefully in all people receiving it, but others recommend monitoring only in certain populations.
Headache, poor concentration, weight gain, poor memory, insomnia, and depression have all been associated with isoniazid use. All patients and healthcare workers should be aware of these serious side effects, especially if suicidal ideation or behavior are suspected.
Isoniazid is associated with pyridoxine (vitamin B6) deficiency because of its similar structure. Isoniazid is also associated with increased excretion of pyridoxine. Pyridoxal phosphate (a derivative of pyridoxine) is required for δ-aminolevulinic acid synthase, the enzyme responsible for the rate-limiting step in heme synthesis. Therefore, isoniazid-induced pyridoxine deficiency causes insufficient heme formation in early red blood cells, leading to sideroblastic anemia.
Isoniazid was found to significantly elevate the in vivo concentration of GABA and homocarnosine in a single subject via magnetic resonance spectroscopy.
Drug interactions
People taking isoniazid and acetaminophen are at risk of acetaminophen toxicity. Isoniazid is thought to induce a liver enzyme which causes a larger amount of acetaminophen to be metabolized to a toxic form.
Isoniazid decreases the metabolism of carbamazepine, thus slowing down its clearance from the body. People taking carbamazepine should have their carbamazepine levels monitored and, if necessary, have their dose adjusted accordingly.
It is possible that isoniazid may decrease the serum levels of ketoconazole after long-term treatment. This is seen with the simultaneous use of rifampin, isoniazid, and ketoconazole.
Isoniazid may increase the amount of phenytoin in the body. The doses of phenytoin may need to be adjusted when given with isoniazid.
Isoniazid may increase the plasma levels of theophylline. There are some cases of theophylline slowing down isoniazid elimination. Both theophylline and isoniazid levels should be monitored.
Valproate levels may increase when taken with isoniazid. Valproate levels should be monitored and its dose adjusted if necessary.
Mechanism of action
Isoniazid is a prodrug that inhibits the formation of the mycobacterial cell wall. Isoniazid must be activated by KatG, a bacterial catalase-peroxidase enzyme in Mycobacterium tuberculosis. KatG catalyzes the formation of the isonicotinic acyl radical, which spontaneously couples with NADH to form the nicotinoyl-NAD adduct. This complex binds tightly to the enoyl-acyl carrier protein reductase InhA, thereby blocking the natural enoyl-AcpM substrate and the action of fatty acid synthase. This process inhibits the synthesis of mycolic acids, which are required components of the mycobacterial cell wall. A range of radicals are produced by KatG activation of isoniazid, including nitric oxide, which has also been shown to be important in the action of another antimycobacterial prodrug pretomanid.
Isoniazid is bactericidal to rapidly dividing mycobacteria, but is bacteriostatic if the mycobacteria are slow-growing. It inhibits the cytochrome P450 system and hence acts as a source of free radicals.
Isoniazid is a mild non-selective monoamine oxidase inhibitor (MAO-I). It inhibits diamine oxidase more strongly. These two actions are possible explanations for its antidepressant action as well as its ability to cause mania.
Metabolism
Isoniazid reaches therapeutic concentrations in serum, cerebrospinal fluid, and within caseous granulomas. It is metabolized in the liver via acetylation into acetylhydrazine. Two forms of the acetylating enzyme (N-acetyltransferase) are responsible, so some patients metabolize the drug more quickly than others. Hence, the half-life is bimodal, with "slow acetylators" and "fast acetylators". A graph of number of people versus half-life shows peaks at one and three hours. The height of the peaks depends on the ethnicities of the people being tested. The metabolites are excreted in the urine. Doses do not usually have to be adjusted in case of renal failure.
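The practical consequence of the bimodal half-life can be illustrated with a simple first-order elimination model. In the sketch below, the half-lives of roughly one and three hours correspond to the two peaks mentioned above; the starting serum concentration is an illustrative assumption, not a dosing recommendation.

```python
import math

# Sketch of first-order isoniazid elimination for "fast" and "slow"
# acetylators. Half-lives (~1 h and ~3 h) follow the two peaks described
# above; the initial concentration is illustrative only.

def concentration(c0_mg_per_l: float, half_life_h: float, t_h: float) -> float:
    """Serum concentration after t hours, assuming first-order decay."""
    k = math.log(2) / half_life_h  # elimination rate constant, 1/h
    return c0_mg_per_l * math.exp(-k * t_h)

c0 = 5.0  # assumed peak serum concentration, mg/L
for t in (0, 2, 4, 8):
    fast = concentration(c0, half_life_h=1.0, t_h=t)
    slow = concentration(c0, half_life_h=3.0, t_h=t)
    print(f"t = {t} h: fast acetylator {fast:.2f} mg/L, slow {slow:.2f} mg/L")
```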
Preparation
Isoniazid is an isonicotinic acid derivative. It is manufactured from 4-cyanopyridine and hydrazine hydrate. In another method, isoniazid was claimed to have been made from citric acid as the starting material.
It can in theory be made from methyl isonicotinate, which is labelled a semiochemical.
Brand names
Hydra, Hyzyd, Isovit, Laniazid, Nydrazid, Rimifon, and Stanozide.
Other uses
Chromatography
Isonicotinic acid hydrazide is also used in chromatography to differentiate between various degrees of conjugation in organic compounds bearing the ketone functional group. The test works by forming a hydrazone, which can be detected by its bathochromic shift.
Dogs
Isoniazid may be used for dogs, but there have been concerns it can cause seizures.
References
Further reading
External links
Anti-tuberculosis drugs
Antidepressants
CYP3A4 inhibitors
Disulfiram-like drugs
GABA transaminase inhibitors
Hepatotoxins
Hydrazides
4-Pyridyl compounds
Prodrugs
Vitamin B6 antagonists
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Isoniazid | [
"Chemistry"
] | 2,630 | [
"Chemicals in medicine",
"Prodrugs"
] |
599,523 | https://en.wikipedia.org/wiki/Oil%20rig | An oil rig is any kind of apparatus constructed for oil drilling.
Kinds of oil rig include:
Drilling rig, an apparatus for on-land oil drilling
Drillship, a floating apparatus for offshore oil drilling
Oil platform, an apparatus for offshore oil drilling
Oil well, a boring from which oil is extracted
Petroleum engineering | Oil rig | [
"Engineering"
] | 63 | [
"Petroleum engineering",
"Energy engineering"
] |
599,563 | https://en.wikipedia.org/wiki/Voltage-controlled%20oscillator | A voltage-controlled oscillator (VCO) is an electronic oscillator whose oscillation frequency is controlled by a voltage input. The applied input voltage determines the instantaneous oscillation frequency. Consequently, a VCO can be used for frequency modulation (FM) or phase modulation (PM) by applying a modulating signal to the control input. A VCO is also an integral part of a phase-locked loop. VCOs are used in synthesizers to generate a waveform whose pitch can be adjusted by a voltage determined by a musical keyboard or other input.
A voltage-to-frequency converter (VFC) is a special type of VCO designed to be very linear in frequency control over a wide range of input control voltages.
Types
VCOs can be generally categorized into two groups based on the type of waveform produced.
Linear or harmonic oscillators generate a sinusoidal waveform. Harmonic oscillators in electronics usually consist of a resonator with an amplifier that replaces the resonator losses (to prevent the amplitude from decaying) and isolates the resonator from the output (so the load does not affect the resonator). Some examples of harmonic oscillators are LC oscillators and crystal oscillators.
Relaxation oscillators can generate a sawtooth or triangular waveform. They are commonly used in integrated circuits (ICs). They can provide a wide range of operational frequencies with a minimal number of external components.
Frequency control
A voltage-controlled capacitor is one method of making an LC oscillator vary its frequency in response to a control voltage. Any reverse-biased semiconductor diode displays a measure of voltage-dependent capacitance and can be used to change the frequency of an oscillator by varying a control voltage applied to the diode. Special-purpose variable-capacitance varactor diodes are available with well-characterized wide-ranging values of capacitance. A varactor is used to change the capacitance (and hence the frequency) of an LC tank. A varactor can also change loading on a crystal resonator and pull its resonant frequency.
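A rough picture of varactor tuning can be obtained from the standard junction-capacitance model C(V) = C0/(1 + V/φ)^γ together with the LC resonance formula f = 1/(2π√(LC)). In the sketch below, all component values (L, C0, the junction potential φ, and the grading coefficient γ) are illustrative assumptions rather than data for any specific diode.

```python
import math

# Sketch of how a reverse-biased varactor tunes an LC tank. The
# junction-capacitance model and all component values are assumed,
# illustrative numbers.

L_H = 1e-6     # tank inductance, henries
C0_F = 30e-12  # varactor capacitance at zero bias, farads
PHI_V = 0.7    # junction potential, volts
GAMMA = 0.5    # grading coefficient (abrupt junction)

def varactor_c(v_reverse: float) -> float:
    """Junction capacitance at a given reverse bias voltage."""
    return C0_F / (1.0 + v_reverse / PHI_V) ** GAMMA

def tank_frequency_hz(v_reverse: float) -> float:
    """Resonant frequency of the LC tank with the biased varactor."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_H * varactor_c(v_reverse)))

for v in (0.0, 1.0, 4.0, 9.0):
    print(f"V = {v:4.1f} V -> C = {varactor_c(v) * 1e12:5.1f} pF, "
          f"f = {tank_frequency_hz(v) / 1e6:6.2f} MHz")
```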
The same effect occurs with bipolar transistors, as described by Donald E. Thomas at Bell Labs in 1954: with a tank circuit connected to the collector and the modulating audio signal applied between the emitter and the base, a single-transistor FM transmitter is created. Thomas worked with a point-contact transistor, but the effect also works in junction transistors; applications include wireless microphones such as that patented by Raymond A. Litke in 1964.
For low-frequency VCOs, other methods of varying the frequency (such as altering the charging rate of a capacitor by means of a voltage-controlled current source) are used (see function generator).
The frequency of a ring oscillator is controlled by varying either the supply voltage, the current available to each inverter stage, or the capacitive loading on each stage.
Phase-domain equations
VCOs are used in analog applications such as frequency modulation and frequency-shift keying. The functional relationship between the control voltage and the output frequency for a VCO (especially those used at radio frequency) may not be linear, but over small ranges, the relationship is approximately linear, and linear control theory can be used. A voltage-to-frequency converter (VFC) is a special type of VCO designed to be very linear over a wide range of input voltages.
Modeling for VCOs is often not concerned with the amplitude or shape (sinewave, triangle wave, sawtooth) but rather its instantaneous phase. In effect, the focus is not on the time-domain signal but rather the argument of the sine function (the phase). Consequently, modeling is often done in the phase domain.
The instantaneous frequency of a VCO is often modeled as a linear relationship with its instantaneous control voltage:

f(t) = f_0 + K_0 · v_in(t)

The output phase of the oscillator is the integral of the instantaneous frequency:

φ(t) = 2π ∫ f(τ) dτ, integrating from 0 to t

where:
f(t) is the instantaneous frequency of the oscillator at time t (not the waveform amplitude)
f_0 is the quiescent frequency of the oscillator (not the waveform amplitude)
K_0 is called the oscillator sensitivity, or gain; its units are hertz per volt
φ(t) is the VCO's output phase
v_in(t) is the time-domain control input or tuning voltage of the VCO
For analyzing a control system, the Laplace transforms of the above signals are useful.
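A minimal numerical sketch of this phase-domain model, integrating the instantaneous frequency to obtain phase (all names and parameter values here are illustrative, not from any standard):

```python
import math

def vco_phase(v_in, f0, k0, dt):
    """Integrate f(t) = f0 + k0*v_in(t) to obtain phase in radians."""
    phi, phases = 0.0, []
    for v in v_in:
        f_inst = f0 + k0 * v                 # instantaneous frequency in Hz
        phi += 2.0 * math.pi * f_inst * dt   # accumulated phase in radians
        phases.append(phi)
    return phases

# Illustrative: 1 kHz quiescent frequency, 100 Hz/V gain, ramp control voltage.
dt = 1e-5                                    # 10 us time step
control = [0.01 * k for k in range(1000)]    # 0 V to ~10 V ramp
phases = vco_phase(control, f0=1000.0, k0=100.0, dt=dt)
waveform = [math.sin(p) for p in phases]     # the oscillator output samples
```

The rising control voltage makes the phase accumulate faster and faster, i.e. the output frequency sweeps upward, which is exactly the linear model above.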
Design and circuits
Tuning range, tuning gain and phase noise are the important characteristics of a VCO. Generally, low phase noise is preferred in a VCO. Tuning gain and noise present in the control signal affect the phase noise; high noise or high tuning gain imply more phase noise. Other important elements that determine the phase noise are sources of flicker noise (1/f noise) in the circuit, the output power level, and the loaded Q factor of the resonator (see Leeson's equation). The low-frequency flicker noise affects the phase noise because the flicker noise is heterodyned to the oscillator output frequency due to the non-linear transfer function of active devices. The effect of flicker noise can be reduced with negative feedback that linearizes the transfer function (for example, emitter degeneration).
VCOs generally have lower Q factor compared to similar fixed-frequency oscillators, and so suffer more jitter. The jitter can be made low enough for many applications (such as driving an ASIC), in which case VCOs enjoy the advantages of having no off-chip components (expensive) or on-chip inductors (low yields on generic CMOS processes).
LC oscillators
Commonly used VCO circuits are the Clapp and Colpitts oscillators. Of the two, the Colpitts oscillator is the more widely used; the two designs are very similar in configuration.
Crystal oscillators
A voltage-controlled crystal oscillator (VCXO) is used for fine adjustment of the operating frequency. The frequency of a voltage-controlled crystal oscillator can be varied a few tens of parts per million (ppm) over a control voltage range of typically 0 to 3 volts, because the high Q factor of the crystals allows frequency control over only a small range of frequencies.
A temperature-compensated voltage-controlled crystal oscillator (TCVCXO) incorporates components that partially correct the dependence on temperature of the resonant frequency of the crystal. A smaller range of voltage control then suffices to stabilize the oscillator frequency in applications where temperature varies, such as heat buildup inside a transmitter.
Placing the oscillator in a crystal oven at a constant but higher-than-ambient temperature is another way to stabilize oscillator frequency. High stability crystal oscillator references often place the crystal in an oven and use a voltage input for fine control. The temperature is selected to be the turnover temperature: the temperature where small changes do not affect the resonance. The control voltage can be used to occasionally adjust the reference frequency to a NIST source. Sophisticated designs may also adjust the control voltage over time to compensate for crystal aging.
Clock generators
A clock generator is an oscillator that provides a timing signal to synchronize operations in digital circuits. VCXO clock generators are used in many areas such as digital TV, modems, transmitters and computers. Design parameters for a VCXO clock generator are tuning voltage range, center frequency, frequency tuning range and the timing jitter of the output signal. Jitter is a form of phase noise that must be minimised in applications such as radio receivers, transmitters and measuring equipment.
When a wider selection of clock frequencies is needed the VCXO output can be passed through digital divider circuits to obtain lower frequencies or be fed to a phase-locked loop (PLL). ICs containing both a VCXO (for external crystal) and a PLL are available. A typical application is to provide clock frequencies in a range from 12 kHz to 96 kHz to an audio digital-to-analog converter.
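Integer dividers make the arithmetic here trivial; a tiny sketch, assuming a hypothetical 24.576 MHz VCXO master clock (a value commonly used for audio, equal to 256 × 96 kHz, taken here as an illustrative assumption):

```python
# Hypothetical master clock and divider chain for the 12-96 kHz audio range.
master_hz = 24_576_000
for divider in (256, 512, 1024, 2048):
    print(f"master / {divider} = {master_hz // divider} Hz")
# -> 96000, 48000, 24000, 12000 Hz
```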
Frequency synthesizers
A frequency synthesizer generates precise and adjustable frequencies based on a stable single-frequency clock. A digitally controlled oscillator based on a frequency synthesizer may serve as a digital alternative to analog voltage controlled oscillator circuits.
Applications
VCOs are used in function generators, phase-locked loops including frequency synthesizers used in communication equipment and the production of electronic music, to generate variable tones in synthesizers.
Function generators are low-frequency oscillators which feature multiple waveforms, typically sine, square, and triangle waves. Monolithic function generators are voltage-controlled.
Analog phase-locked loops typically contain VCOs. High-frequency VCOs are usually used in phase-locked loops for radio receivers. Phase noise is the most important specification in this application.
Audio-frequency VCOs are used in analog music synthesizers. For these, sweep range, linearity, and distortion are often the most important specifications. Audio-frequency VCOs for use in musical contexts were largely superseded in the 1980s by their digital counterparts, digitally controlled oscillators (DCOs), due to their output stability in the face of temperature changes during operation. Since the 1990s, musical software has become the dominant sound-generating method.
Voltage-to-frequency converters are voltage-controlled oscillators with a highly linear relation between applied voltage and frequency. They are used to convert a slow analog signal (such as from a temperature transducer) to a signal suitable for transmission over a long distance, since the frequency will not drift or be affected by noise. Oscillators in this application may have sine or square wave outputs.
Where the oscillator drives equipment that may generate radio-frequency interference, adding a varying voltage to its control input, called dithering, can disperse the interference spectrum to make it less objectionable (see spread spectrum clock).
See also
Low-frequency oscillation (LFO)
Modular synthesizer
Numerically-controlled oscillator (NCO)
Variable-frequency oscillator (VFO)
Variable-gain amplifier
Voltage-controlled filter (VCF)
References
External links
Designing VCOs and Buffers Using the UPA family of Dual Transistors
Electronic oscillators
Synthesizer electronics
Radio electronics
Electronic design | Voltage-controlled oscillator | [
"Engineering"
] | 2,153 | [
"Electronic design",
"Radio electronics",
"Electronic engineering",
"Design"
] |
599,673 | https://en.wikipedia.org/wiki/New%20Civil%20Engineer | New Civil Engineer is the monthly magazine for members of the Institution of Civil Engineers (ICE), the UK chartered body that oversees the practice of civil engineering in the UK. First published in May 1972, it is today published by Metropolis. Under its previous publisher, Ascential, who, as Emap, acquired the title and editorial control from the ICE in 1995, the ICE regularly discussed the magazine's content through an editorial advisory board and a supervisory board.
Available in print and online after the appropriate subscription has been taken out (it is free for members of the ICE), the magazine is aimed at professionals in the civil engineering industry. It contains industry news and analysis, letters from subscribers, a directory of companies, with listings arranged by companies’ areas of work, and an appointments section. It also occasionally has details of university courses and graduate positions.
In 2013 it had a net circulation of more than 50,000 per issue. Two years later, this had dropped to 42,805, of which some 39,000 related to copies distributed to ICE members. Previously printed on a weekly basis the magazine switched to a monthly format in December 2015.
New Civil Engineer was a co-founder of the British Construction Industry Awards.
In January 2017, Ascential announced its intention to sell 13 titles including New Civil Engineer; the 13 "heritage titles" were to be "hived off into a separate business while buyers are sought." The brands were purchased by Metropolis International Ltd (owner of the Property Week title since 2013) in a £23.5m cash deal, announced on 1 June 2017.
Jacqueline Whitelaw was the magazine's deputy editor from 1998 to 2009.
References
External links
1972 establishments in the United Kingdom
Ascential
Business magazines published in the United Kingdom
Engineering magazines
Civil engineering journals
Magazines established in 1972
Magazines published in London
Monthly magazines published in the United Kingdom
Science and technology magazines published in the United Kingdom
Professional and trade magazines published in the United Kingdom | New Civil Engineer | [
"Engineering"
] | 394 | [
"Civil engineering journals",
"Civil engineering"
] |
599,674 | https://en.wikipedia.org/wiki/Antheridium | An antheridium is a haploid structure or organ producing and containing male gametes (called antherozoids or sperm). The plural form is antheridia, and a structure containing one or more antheridia is called an androecium. The androecium is also the collective term for the stamens of flowering plants.
Antheridia are present in the gametophyte phase of cryptogams like bryophytes and ferns. Many algae and some fungi, for example, ascomycetes and water moulds, also have antheridia during their reproductive stages. In gymnosperms and angiosperms, the male gametophytes have been reduced to pollen grains, and in most of these, the antheridia have been reduced to a single generative cell within the pollen grain. During pollination, this generative cell divides and gives rise to sperm cells.
The female counterpart to the antheridium in cryptogams is the archegonium, and in flowering plants is the gynoecium.
An antheridium typically consists of sterile cells and spermatogenous tissue. The sterile cells may form a central support structure or surround the spermatogenous tissue as a protective jacket. The spermatogenous cells give rise to spermatids via mitotic cell division. In some bryophytes, the antheridium is borne on an antheridiophore, a stalk-like structure that carries the antheridium at its apex.
Gallery
See also
Hornworts have antheridia, in some cases arranged within androecia.
Microsporangia produce spores that give rise to male gametophytes.
References
Further reading
C.Michael Hogan. 2010. Fern. Encyclopedia of Earth. National council for Science and the Environment. Washington, DC
Plant anatomy
Plant reproduction
"Biology"
] | 431 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction"
] |
599,709 | https://en.wikipedia.org/wiki/Karyogamy | Karyogamy is the final step in the process of fusing together two haploid eukaryotic cells, and refers specifically to the fusion of the two nuclei. Before karyogamy, each haploid cell has one complete copy of the organism's genome. In order for karyogamy to occur, the cell membrane and cytoplasm of each cell must fuse with the other in a process known as plasmogamy. Once within the joined cell membrane, the nuclei are referred to as pronuclei. Once the cell membranes, cytoplasm, and pronuclei fuse, the resulting single cell is diploid, containing two copies of the genome. This diploid cell, called a zygote or zygospore can then enter meiosis (a process of chromosome duplication, recombination, and division, to produce four new haploid cells), or continue to divide by mitosis. Mammalian fertilization uses a comparable process to combine haploid sperm and egg cells (gametes) to create a diploid fertilized egg.
The term karyogamy comes from the Greek karyo- (from κάρυον karyon) 'nut' and γάμος gamos 'marriage'.
Importance in haploid organisms
Haploid organisms such as fungi, yeast, and algae can have complex cell cycles, in which the choice between sexual or asexual reproduction is fluid, and often influenced by the environment. Some organisms, in addition to their usual haploid state, can also exist as diploid for a short time, allowing genetic recombination to occur. Karyogamy can occur within either mode of reproduction: during the sexual cycle or in somatic (non-reproductive) cells.
Thus, karyogamy is the key step in bringing together two sets of different genetic material which can recombine during meiosis. In haploid organisms that lack sexual cycles, karyogamy can also be an important source of genetic variation during the process of forming somatic diploid cells. Formation of somatic diploids circumvents the process of gamete formation during the sexual reproduction cycle and instead creates variation within the somatic cells of an already developed organism, such as a fungus.
Role in sexual reproduction
The role of karyogamy in sexual reproduction can be demonstrated most simply by single-celled haploid organisms such as the algae of genus Chlamydomonas or the yeast Saccharomyces cerevisiae. Such organisms exist normally in a haploid state, containing only one set of chromosomes per cell. However, the mechanism remains largely the same among all haploid eukaryotes.
When subjected to environmental stress, such as nitrogen starvation in the case of Chlamydomonas, cells are induced to form gametes. Gamete formation in single-celled haploid organisms such as yeast is called sporulation, resulting in many cellular changes that increase resistance to stress. Gamete formation in multicellular fungi occurs in the gametangia, an organ specialized for such a process, usually by meiosis. When opposite mating types meet, they are induced to leave the vegetative cycle and enter the mating cycle. In yeast, there are two mating types, a and α. In fungi, there can be two, four, or even up to 10,000 mating types, depending on the species. Mate recognition in the simplest eukaryotes is achieved through pheromone signaling, which induces shmoo formation (a projection of the cell) and begins the process of microtubule organization and migration. Pheromones used in mating type recognition are often peptides, but sometimes trisporic acid or other molecules, recognized by cellular receptors on the opposite cell. Notably, pheromone signaling is absent in higher fungi such as mushrooms.
The cell membranes and cytoplasm of these haploid cells then fuse together in a process known as plasmogamy. This results in a single cell with two nuclei, known as pronuclei. The pronuclei then fuse together in a well regulated process known as karyogamy. This creates a diploid cell known as a zygote, or a zygospore, which can then enter meiosis, a process of chromosome duplication, recombination, and cell division, to create four new haploid gamete cells. One possible advantage of sexual reproduction is that it results in more genetic variability, providing the opportunity for adaptation through natural selection. Another advantage is efficient recombinational repair of DNA damages during meiosis. Thus, karyogamy is the key step in bringing together a variety of genetic material in order to ensure recombination in meiosis.
The Amoebozoa is a large group of mostly single-celled species that have recently been determined to have the machinery for karyogamy and meiosis. Since the Amoeboza branched off early from the eukaryotic family tree, this finding suggests that karyogamy and meiosis were present early in eukaryotic evolution.
Cellular mechanisms
Pronuclear migration
The ultimate goal of karyogamy is fusion of the two haploid nuclei. The first step in this process is the movement of the two pronuclei toward each other, which occurs directly after plasmogamy. Each pronucleus has a spindle pole body that is embedded in the nuclear envelope and serves as an attachment point for microtubules. Microtubules, an important fiber-like component of the cytoskeleton, emerge at the spindle pole body. The attachment point to the spindle pole body marks the minus end, and the plus end extends into the cytoplasm. The plus end has normal roles in mitotic division, but during nuclear congression, the plus ends are redirected. The microtubule plus ends attach to the opposite pronucleus, resulting in the pulling of the two pronuclei toward each other.
Microtubule movement is mediated by a family of motor proteins known as kinesins, such as Kar3 in yeast. Accessory proteins, such as Spc72 in yeast, act as a glue, connecting the motor protein, spindle pole body and microtubule in a structure known as the half-bridge. Other proteins, such as Kar9 and Bim1 in yeast, attach to the plus end of the microtubules. They are activated by pheromone signals to attach to the shmoo tip. A shmoo is a projection of the cellular membrane which is the site of initial cell fusion in plasmogamy. After plasmogamy, the microtubule plus ends continue to grow towards the opposite pronucleus. It is thought that the growing plus end of the microtubule attaches directly to the motor protein of the opposite pronucleus, triggering a reorganization of the proteins at the half-bridge. The force necessary for migration occurs directly in response to this interaction.
Two models of nuclear congression have been proposed: the sliding cross-bridge, and the plus end model. In the sliding cross-bridge model, the microtubules run antiparallel to each other for the entire distance between the two pronuclei, forming cross-links to each other, and each attaching to the opposite nucleus at the plus end. This is the favored model. The alternative model proposes that the plus ends contact each other midway between the two pronuclei and only overlap slightly. In either model, it is believed that microtubule shortening occurs at the plus end and requires Kar3p (in yeast), a member of a family of kinesin-like proteins.
Microtubule organization in the cytoskeleton has been shown to be essential for proper nuclear congression during karyogamy. Defective microtubule organization causes total failure of karyogamy, but does not totally interrupt meiosis and spore production in yeast. The failure occurs because the process of nuclear congression cannot occur without functional microtubules. Thus, the pronuclei do not approach close enough to each other to fuse together, and their genetic material remains separated.
Pronuclear fusion (karyogamy)
Merging of the nuclear envelopes of the pronuclei occurs in three steps: fusion of the outer membrane, fusion of the inner membrane, and fusion of the spindle pole bodies. In yeast, several members of the Kar family of proteins, as well as a protamine, are required for the fusion of nuclear membranes. The protamine Prm3 is located on the outer surface of each nuclear membrane, and is required for the fusion of the outer membrane. The exact mechanism is not known. Kar5, a kinesin-like protein, is necessary to expand the distance between the outer and inner membranes in a phenomenon known as bridge expansion. Kar8 and Kar2 are thought to be necessary to the fusing of the inner membranes.
As described above, the reorganization of accessory and motor proteins during pronuclear migration also serves to orient the spindle pole bodies in the correct direction for efficient nuclear congression. Nuclear congression can still take place without this pre-orientation of spindle pole bodies, but it is slower. Ultimately the two pronuclei combine the contents of their nucleoplasms and form a single envelope around the result.
Role in somatic diploids
Although fungi are normally haploid, diploid cells can arise by two mechanisms. The first is a failure of the mitotic spindle during regular cell division, and does not involve karyogamy. The resulting cell can only be genetically homozygous since it is produced from one haploid cell. The second mechanism, involving karyogamy of somatic cells, can produce heterozygous diploids if the two nuclei differ in genetic information. The formation of somatic diploids is generally rare, and is thought to occur because of a mutation in the karyogamy repressor gene (KR).
There are, however, a few fungi that exist mostly in the diploid state. One example is Candida albicans, a fungus that lives in the gastrointestinal tracts of many warm blooded animals, including humans. Although usually innocuous, C. albicans can turn pathogenic and is a particular problem in immunosuppressed patients. Unlike with most other fungi, diploid cells of different mating types fuse to create tetraploid cells which subsequently return to the diploid state by losing chromosomes.
Similarities to and differences from mammalian fertilization
Mammals, including humans, also combine genetic material from two sources - father and mother - in fertilization. This process is similar to karyogamy. As with karyogamy, microtubules play an important part in fertilization and are necessary for the joining of the sperm and egg (oocyte) DNA. Drugs such as griseofulvin that interfere with microtubules prevent the fusion of the sperm and egg pronuclei. The gene KAR2 which plays a large role in karyogamy has a mammalian analog called Bib/GRP78. In both cases, genetic material is combined to create a diploid cell that has greater genetic diversity than either original source. Instead of fusing in the same way as lower eukaryotes do in karyogamy, the sperm nucleus vesiculates and its DNA decondenses. The sperm centriole acts as a microtubule organizing center and forms an aster which extends throughout the egg until contacting the egg's nucleus. The two pronuclei migrate toward each other and then fuse to form a diploid cell.
See also
Sexual reproduction
Polyploid
Fungi
References
Cell biology | Karyogamy | [
"Biology"
] | 2,479 | [
"Cell biology"
] |
17,371,248 | https://en.wikipedia.org/wiki/Domain-specific%20learning | Domain-specific learning theories of development hold that we have many independent, specialised knowledge structures (domains), rather than one cohesive knowledge structure. Thus, training in one domain may not impact another independent domain. Domain-general views instead suggest that children possess a "general developmental function" where skills are interrelated through a single cognitive system. Therefore, whereas domain-general theories would propose that acquisition of language and mathematical skill are developed by the same broad set of cognitive skills, domain-specific theories would propose that they are genetically, neurologically and computationally independent.
Domain specificity has been supported by a variety of theorists. An early supporter was Jerry Fodor, who argued that the mind functions partly, by innate, domain-specific mental modules. In Modularity of Mind, Fodor proposed the Hypothesis of Modest Modularity, stating that input systems such as perception and language are modular, whereas central systems such as belief fixation and practical reasoning are not. By contrast, evolutionary psychologists have supported the Massive Modularity Hypothesis, arguing that the mind is not just partially, but completely modular, composed of domain-specific modules genetically shaped by selection pressures to carry out innate and complex functions. Core knowledge theorists such as Elizabeth Spelke hold that knowledge can be separated into a few, highly specialised, domain-specific bodies.
Domain-specific learning mechanisms
Language
The Poverty of the Stimulus (PoS) argument proposed by Noam Chomsky takes a nativist view of language acquisition, suggesting that innate, domain-specific knowledge structures help us to navigate tough linguistic environments. This runs contrary to empiricist views that learning and knowledge derive from our sensory experiences. The PoS argument maintains that there is a mismatch between the linguistic knowledge that we acquire and how much information is available to us in the environment.
Chomsky believed that children cannot be empiricist learners of language, because many linguistic principles are neither simple nor natural to acquire. Therefore, a sufficient linguistic environment would be required to facilitate a full understanding of language. However, the data needed to grasp these linguistic principles is not always available due to different environmental conditions. Despite this, all normal children are still able to formulate an accurate representation of grammar which led Chomsky to theorise that children must have an innate, domain-specific capacity for language.
Further support for nativism
1. Biological time-clock
Evidence shows that children go through similar stages of language development at similar times, leading many linguists to advocate for an innate and pre-determined linguistic schedule.
2. Predictability of error
Children explore a diverse range of grammars in their environment as they develop. Under empiric learning, this would likely cause them to make all kinds of unpredictable linguistic errors. However, children make errors that exhibit regularity. When expressing verbs in past tense form, they often overgeneralise irregular forms such as came and saw into comed and seed to match "regular" forms such as loved and worked. The way children deal with environmental irregularity has therefore led to the proposition of a domain-specific language hypothesis space.
3. Specific Language Impairment (SLI)
A dissociation between intelligence and linguistic functioning has been shown in people with SLI. Evidence has also indicated that people with Grammatical-SLI experience grammar-only deficits. The case of SLI may therefore indicate an independent linguistic system.
Criticisms
Many critics have argued against the convincingness of the PoS argument, stating that Chomsky's theory is vague, incoherent and untestable. Therefore, debate still remains about the extent to which language learning is an innate, domain-specific process.
Socialisation
Socialisation is integral to a child's ability to acquire the necessary skills to function in a social environment. It has commonly been viewed as a product of domain-general learning, with the same organisational principles applying to child development regardless of setting, task or developmental stage. Objections have therefore been raised about its unitary approach and lack of consideration for variation across contexts.
Instead, researchers have proposed a socialisation process involving five domains, stating that different parent-child relationships serve different functions, rely on different ways to bring about behavioural change, and have different outcomes.
1. Protection
Primarily responsible for giving the child a sense of security through adopting a comforting parenting style. This results in children being able to better manage stress knowing that support will be available to them.
2. Reciprocity
Mutual compliance forms this relationship where parent and child fulfil each other's desires and treat each other as equals. The reciprocity domain nurtures a child's tendency to reciprocate, which can predict pro-social behaviour.
3. Control
The control domain involves parents who modify their child's behaviour through exerting the necessary amount of authority to achieve the socialisation agent's goals. Consequently, outcomes involving a child's ability to suppress conflicting desires to make the correct moral and principled judgements are typical.
4. Guided learning
Aims to effectively guide children's learning through strategies and feedback to help acquire the target knowledge and skills.
5. Group participation
Parents attempt to encourage shared identity for the child through promoting routines and rituals that reflect group norms. Successful outcomes involve children conforming to and adopting group values that build on their notions of social identity.
Further research
Although these five domains have demonstrated initial evidence for input and output differences in socialisation, additional research is required as the taxonomy of domains remains disputed.
Opposition to domain-specific learning
Although some arguments have supported domain-specific learning, there still remains debate about how we truly learn and develop.
Support for domain-general learning include theories from Jean Piaget and Charles Spearman. Piaget argued that developments in domain-general cognitive architecture drives learning and conceptual change in his theory of cognitive development. Similarly, Spearman proposed an underlying, domain-general g-factor (general intelligence) to explain one's performance on all types of mental tests.
However, research has also introduced the possibility of a combination of domain-specific and domain-general learning mechanisms. In the mathematical field, it has been hypothesised that both mechanisms are at work and target arithmetic skills differently. It has further been suggested that the magnitude of each mechanism in determining mathematical achievement varies across grades. Therefore, research is needed to better understand our learning across a wide range of fields.
See also
Developmental psychology
Domain-general learning
Learning
Wason selection task
References
Developmental psychology | Domain-specific learning | [
"Biology"
] | 1,304 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
17,372,184 | https://en.wikipedia.org/wiki/Reference%20designator | A reference designator unambiguously identifies the location of a component within an electrical schematic or on a printed circuit board. The reference designator usually consists of one or two letters followed by a number, e.g. C3, D1, R4, U15. The number is sometimes followed by a letter, indicating that components are grouped or matched with each other, e.g. R17A, R17B. The IEEE 315 standard contains a list of Class Designation Letters to use for electrical and electronic assemblies. For example, the letter R is a reference prefix for the resistors of an assembly, C for capacitors, K for relays.
Industrial electrical installations often use reference designators according to IEC 81346.
History
IEEE 200-1975 or "Standard Reference Designations for Electrical and Electronics Parts and Equipments" is a standard that was used to define referencing naming systems for collections of electronic equipment. IEEE 200 was ratified in 1975. The IEEE renewed the standard in the 1990s, but withdrew it from active support shortly thereafter. This document also has an ANSI document number, ANSI Y32.16-1975.
This standard codified information from, among other sources, a United States military standard MIL-STD-16 which dates back to at least the 1950s in American industry.
To replace IEEE 200-1975, ASME, a standards body for mechanical engineers, initiated the new standard ASME Y14.44-2008. This standard, along with IEEE 315-1975, provide the electrical designer with guidance on how to properly reference and annotate everything from a single circuit board to a collection of complete enclosures.
Definition
ASME Y14.44-2008 and IEEE 315-1975 define how to reference and annotate components of electronic devices.
It breaks down a system into units, and then any number of sub-assemblies. The unit is the highest level of demarcation in a system and is always a numeral. Subsequent demarcations are called assemblies and always have the Class Letter "A" as a prefix followed by a sequential number starting with 1. Any number of sub-assemblies may be defined until finally reaching the component. Note that IEEE 315-1975 defines separate class designation letters for separable assemblies (class designation 'A') and inseparable assemblies (class designation 'U'). Inseparable assemblies—i.e., "items which are ordinarily replaced as a single item of supply"—are typically treated as components in this referencing scheme.
Examples:
1A12A2R3 - Unit 1, Assembly 12, Sub-assembly 2, Resistor 3
1A12A2U3 - Unit 1, Assembly 12, Sub-assembly 2, Inseparable Assembly 3
Especially valuable is the method of referencing and annotating cables plus their connectors within and outside assemblies.
Examples:
1A1A44J5 - Unit 1, Assembly 1, Sub-Assembly 44, Jack 5 (J5 is a connector on a box referenced as A44)
1A1A45J333 - Unit 1, Assembly 1, Sub-Assembly 45, Jack 333 (J333 is a connector on a box referenced as A45)
A cable connecting these two might be:
1A1W35 - In the assembly A1 is a cable called W35.
Connectors on this cable would be designated:
1A1W35P1
1A1W35P2
ASME Y14.44-2008 continues the convention of Plug P and Jack J when assigning references for electrical connectors in assemblies where a J (or jack) is the more fixed and P (or plug) is the less fixed of a connector pair, without regard to the gender of the connector contacts.
The construction of reference designators is covered by IEEE 200-1975/ANSI Y32.16-1975 (replaced by ASME Y14.44-2008) and IEEE 315-1975.
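The unit/assembly/component grammar in the examples above is regular enough to parse mechanically. A minimal sketch (it covers only the patterns shown in the examples, not every construction allowed by ASME Y14.44 or IEEE 315):

```python
import re

TOKEN = re.compile(r"([A-Z]+)(\d+)")

def parse(designator):
    """Split a designator such as '1A12A2R3' into its leading unit number
    and a list of (class letters, number) tokens. Covers only the example
    patterns above, not the full ASME Y14.44 / IEEE 315 grammar."""
    m = re.match(r"\d+", designator)
    unit = int(m.group()) if m else None
    rest = designator[m.end():] if m else designator
    return unit, TOKEN.findall(rest)

print(parse("1A12A2R3"))  # (1, [('A', '12'), ('A', '2'), ('R', '3')])
print(parse("1A1W35P2"))  # (1, [('A', '1'), ('W', '35'), ('P', '2')])
```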
Designators
The table below lists designators commonly used, and does not necessarily comply with standards. For modern use, designators are often simplified to shorter forms, because they require less space on silkscreens.
Other designators
See also
Circuit diagram
Electronic symbol
References
Further reading
AS 1103.2-1982 - "Diagrams charts and tables for electrotechnology, Part 2: Item Designation" (Superseded by AS 3702-1989.)
AS 3702-1989 - "Item designation in electrotechnology". (Equivalent to IEC 60750 Edition 1.0, 1983.)
IEC 113 (Superseded by IEC 750, i.e. IEC 60750.)
IEC 750-1983 (AS 3702 is equivalent, but provides extra information.)
IEEE 315-1975 / ANSI Standard Y32.2. Annex F: "Cross reference list of Class Designation Letters" compares IEC 113-2:1971 to the IEEE/ANSI standard.
AS 1102 and IEC 60617 for "Graphical Symbols for Electrotechnology".
Electronic engineering | Reference designator | [
"Technology",
"Engineering"
] | 1,019 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering"
] |
17,373,006 | https://en.wikipedia.org/wiki/Information%20Systems%20for%20Crisis%20Response%20and%20Management | The Information Systems for Crisis Response and Management (ISCRAM) Community is an international community of researchers, practitioners and policy makers involved in or concerned about the design, development, deployment, use and evaluation of information systems for crisis response and management. The ISCRAM Community has been co-founded by Bartel Van de Walle (Tilburg University, the Netherlands), Benny Carlé (SCK-CEN Nuclear Research Center, Belgium), and Murray Turoff (New Jersey Institute of Technology).
ISCRAM Conferences
ISCRAM conferences have been held annually since 2004. Since 2005, the conference has alternated between Europe and the United States / Canada.
At the conference, the Mike Meleshkin best PhD student paper award is given to the best paper written and presented by a PhD student. Past awardees are Sebastian Henke (University of Münster, Germany), Jonas Landgren (Viktoria Institute, Sweden), Jiri Trnka (Linkoping University, Sweden), Manuel Llavador (Polytechnic University of Valencia, Spain), Valentin Bertsch (Karlsruhe University, Germany), Thomas Foulquier (Université de Sherbrooke, Canada), and the PhD students in crisis informatics at the University of Colorado at Boulder (USA).
ISCRAM-CHINA and Summer School
Since 2005, an annual conference has also been held in China, at Harbin Engineering University, with Dr. Song Yan as conference chair. The 2008 meeting was held jointly with the GI4D meeting on August 4–6, 2008.
The Summer School for PhD students took place in the Netherlands at Tilburg University in June 2006 and 2007.
ISCRAM Journal
The International Journal of Information Systems for Crisis Response and Management (IJISCRAM) is a journal which started in January 2009. Co-Editors-in-Chief are Murray Jennex (San Diego State University) and Bartel Van de Walle (Tilburg University, the Netherlands).
The mission of the International Journal of Information Systems for Crisis Response and Management (IJISCRAM) is to provide an outlet for innovative research in the area of information systems for crisis response and management. Research is expected to be rigorous but can utilize any accepted methodology and may be qualitative or quantitative in nature. The journal will provide a comprehensive cross disciplinary forum for advancing the understanding of the organizational, technical, human, and cognitive issues associated with the use of information systems in responding and managing crises of all kinds. The goal of the journal is to publish high quality empirical and theoretical research covering all aspects of information systems for crisis response and management. Full-length research manuscripts, insightful research and practice notes, and case studies will be considered for publication.
Notes
References
Peace IT! issue 2008/1, published by CMI Finland, had a report on ISCRAM2008
External links
ISCRAM Community web site
ISCRAM journal website
ISCRAM overview slideshow
Information systems | Information Systems for Crisis Response and Management | [
"Technology"
] | 597 | [
"Information systems",
"Information technology"
] |
17,373,539 | https://en.wikipedia.org/wiki/Freshman%27s%20dream | The freshman's dream is a name given to the erroneous equation , where is a real number (usually a positive integer greater than 1) and are non-zero real numbers. Beginning students commonly make this error in computing the power of a sum of real numbers, falsely assuming powers distribute over sums. When n = 2, it is easy to see why this is incorrect: (x + y)2 can be correctly computed as x2 + 2xy + y2 using distributivity (commonly known by students in the United States as the FOIL method). For larger positive integer values of n, the correct result is given by the binomial theorem.
The name "freshman's dream" also sometimes refers to the theorem that says that for a prime number p, if x and y are members of a commutative ring of characteristic p, then
(x + y)^p = x^p + y^p. In this more exotic type of arithmetic, the "mistake" actually gives the correct result, since p divides all the binomial coefficients apart from the first and the last, making all the intermediate terms equal to zero.
The identity is also actually true in the context of tropical geometry, where multiplication is replaced with addition, and addition is replaced with minimum.
Examples
(1 + 4)^2 = 5^2 = 25, but 1^2 + 4^2 = 17.
√(x + y) does not equal √x + √y. For example, √(9 + 16) = √25 = 5, which does not equal √9 + √16 = 7. In this example, the error is being committed with the exponent n = 1/2.
Prime characteristic
When p is a prime number and x and y are members of a commutative ring of characteristic p, then (x + y)^p = x^p + y^p. This can be seen by examining the prime factors of the binomial coefficients: the nth binomial coefficient is

\binom{p}{n} = \frac{p!}{n!\,(p-n)!}

The numerator is p factorial (p!), which is divisible by p. However, when 0 < n < p, both n! and (p − n)! are coprime with p since all the factors are less than p and p is prime. Since a binomial coefficient is always an integer, the nth binomial coefficient is divisible by p and hence equal to 0 in the ring. We are left with the zeroth and pth coefficients, which both equal 1, yielding the desired equation.
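A quick computational check of this, using nothing beyond Python's built-in modular arithmetic (the specific numbers are arbitrary illustrations):

```python
from math import comb

def freshmans_dream_holds(x, y, n):
    """Check whether (x + y)**n == x**n + y**n holds in the ring Z/nZ."""
    return pow(x + y, n, n) == (pow(x, n, n) + pow(y, n, n)) % n

print(freshmans_dream_holds(1, 1, 7))  # True: 7 is prime
print(freshmans_dream_holds(3, 5, 7))  # True: 7 is prime
print(freshmans_dream_holds(1, 1, 4))  # False: 4 is composite
# (a composite modulus can still satisfy the identity for particular x and y)

# All intermediate binomial coefficients vanish modulo the prime:
print([comb(7, k) % 7 for k in range(8)])  # [1, 0, 0, 0, 0, 0, 0, 1]
```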
Thus in characteristic p the freshman's dream is a valid identity. This result demonstrates that exponentiation by p produces an endomorphism, known as the Frobenius endomorphism of the ring.
The demand that the characteristic p be a prime number is central to the truth of the freshman's dream. A related theorem states that if p is prime then (x + 1)^p = x^p + 1 in the polynomial ring Z_p[x]. This theorem is a key fact in modern primality testing.
History and alternate names
In 1938, Harold Willard Gleason published a poem titled «"Dark and Bloody Ground---" (The Freshman's Dream)» in The New York Sun on September 6, which was subsequently reprinted in various other newspapers and magazines. It consists of 2 stanzas, each containing 8 lines with alternating indentation; it has an ABCB rhyming scheme. Words and phrases that hint that it might be related to this concept include: "Algebra", "Wild corollaries twine", "surds", "of plus and minus sign", "binomial", "quadratic", "parenthesis", "exponents", "in terms of x and y", "remove the brackets, radicals, and do so with discretion", and "factor cubes".
The history of the term "freshman's dream" is somewhat unclear. In a 1940 article on modular fields, Saunders Mac Lane quotes Stephen Kleene's remark that a knowledge of (a + b)^2 = a^2 + b^2 in a field of characteristic 2 would corrupt freshman students of algebra. This may be the first connection between "freshman" and binomial expansion in fields of positive characteristic. Since then, authors of undergraduate algebra texts took note of the common error. The first actual attestation of the phrase "freshman's dream" seems to be in Hungerford's graduate algebra textbook (1974), where he states that the name is "due to" Vincent O. McBrien. Alternative terms include "freshman exponentiation", used in Fraleigh (1998). The term "freshman's dream" itself, in non-mathematical contexts, is recorded since the 19th century.
Since the expansion of (x + y)^n is correctly given by the binomial theorem, the freshman's dream is also known as the "child's binomial theorem" or "schoolboy binomial theorem".
See also
Pons asinorum
Primality test
Sophomore's dream
Frobenius endomorphism
References
Algebra education
Mathematical fallacies
Theorems in ring theory
Prime numbers | Freshman's dream | [
"Mathematics"
] | 961 | [
"Algebra education",
"Algebra",
"Prime numbers",
"Mathematical objects",
"Mathematical fallacies",
"Numbers",
"Number theory"
] |
17,374,931 | https://en.wikipedia.org/wiki/Partial%20word | In computer science and the study of combinatorics on words, a partial word is a string that may contain a number of "do not know" or "do not care" symbols i.e. placeholders in the string where the symbol value is not known or not specified. More formally, a partial word is a partial function where is some finite alphabet. If u(k) is not defined for some then the unknown element at place k in the string is called a "hole". In regular expressions (following the POSIX standard) a hole is represented by the metacharacter ".". For example, aab.ab.b is a partial word of length 8 over the alphabet A ={a,b} in which the fourth and seventh characters are holes.
Algorithms
Several algorithms have been developed for the problem of "string matching with don't cares", in which the input is a long text and a shorter partial word and the goal is to find all strings in the text that match the given partial word.
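A naive sketch of the matching problem (the published algorithms for string matching with don't cares are asymptotically faster, e.g. FFT-based; this only illustrates the task):

```python
HOLE = "."

def matches(text_window, pattern):
    """A pattern with holes matches a window if every non-hole position agrees."""
    return all(p == HOLE or p == t for p, t in zip(pattern, text_window))

def find_all(text, pattern):
    """Naive O(len(text) * len(pattern)) search for all match positions."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1) if matches(text[i:i + m], pattern)]

print(find_all("aababbabab", "ab.b"))  # [1, 6]
```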
Applications
Two partial words are said to be compatible when they have the same length and when every position that is a non-wildcard in both of them has the same character in both. If one forms an undirected graph with a vertex for each partial word in a collection of partial words, and an edge for each compatible pair, then the cliques of this graph come from sets of partial words that all match at least one common string. This graph-theoretical interpretation of compatibility of partial words plays a key role in the proof of hardness of approximation of the clique problem, in which a collection of partial words representing successful runs of a probabilistically checkable proof verifier has a large clique if and only if there exists a valid proof of an underlying NP-complete problem.
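The compatibility relation itself is easy to state in code; a minimal sketch:

```python
def compatible(u, v):
    """Two equal-length partial words are compatible if every position that
    is a non-hole in both carries the same character."""
    return len(u) == len(v) and all(
        a == "." or b == "." or a == b for a, b in zip(u, v)
    )

print(compatible("aab.", "a.bb"))  # True: both match the string "aabb"
print(compatible("aab.", "abb."))  # False: they disagree at position 2
```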
The faces (subcubes) of an n-dimensional hypercube can be described by partial words of length n over a binary alphabet, whose symbols are the Cartesian coordinates of the hypercube vertices (e.g., 0 or 1 for a unit cube). The dimension of a subcube, in this representation, equals the number of don't-care symbols it contains. The same representation may also be used to describe the implicants of Boolean functions.
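To make the hypercube correspondence concrete, expanding each hole over {0, 1} enumerates the vertices of the face that the partial word names:

```python
from itertools import product

def subcube_vertices(partial_word):
    """Expand each '.' over {0, 1}; the results are the 2**k vertices of a
    k-dimensional face, where k is the number of holes."""
    holes = [i for i, c in enumerate(partial_word) if c == "."]
    chars = list(partial_word)
    vertices = []
    for bits in product("01", repeat=len(holes)):
        for i, b in zip(holes, bits):
            chars[i] = b
        vertices.append("".join(chars))
    return vertices

print(subcube_vertices("0.1."))  # ['0010', '0011', '0110', '0111']
# Two holes -> a 2-dimensional face of the 4-cube, with 4 vertices.
```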
Related concepts
Partial words may be generalized to parameter words, in which some of the "do not know" symbols are marked as being equal to each other. A partial word is a special case of a parameter word in which each do not know symbol may be substituted by a character independently of all of the other ones.
References
Combinatorics on words
String (computer science) | Partial word | [
"Mathematics",
"Technology"
] | 547 | [
"Sequences and series",
"String (computer science)",
"Mathematical structures",
"Combinatorics",
"Computer science",
"Combinatorics on words"
] |
17,375,166 | https://en.wikipedia.org/wiki/Double%20%28manifold%29 | In the subject of manifold theory in mathematics, if is a topological manifold with boundary, its double is obtained by gluing two copies of together along their common boundary. Precisely, the double is where for all .
If M has a smooth structure, then its double can be endowed with a smooth structure thanks to a collar neighbourhood.
Although the concept makes sense for any manifold, and even for some non-manifold sets such as the Alexander horned sphere, the notion of double tends to be used primarily in the context that ∂M is non-empty and M is compact.
Doubles bound
Given a manifold M, the double of M is the boundary of M × [0, 1]. This gives doubles a special role in cobordism.
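A short worked computation showing why this holds (writing DM for the double):

```latex
\[
  \partial\bigl(M \times [0,1]\bigr)
    = \bigl(M \times \{0,1\}\bigr) \,\cup\, \bigl(\partial M \times [0,1]\bigr)
\]
```

The right-hand side is two copies of M glued along ∂M through the collar ∂M × [0, 1], which is precisely the double DM; hence DM bounds M × [0, 1] and every double is null-cobordant.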
Examples
The n-sphere is the double of the n-ball. In this context, the two balls would be the upper and lower hemispheres respectively. More generally, if M is closed, the double of M × D^k is M × S^k. Even more generally, the double of a disc bundle over a manifold is a sphere bundle over the same manifold. More concretely, the double of the Möbius strip is the Klein bottle.
If M is a closed, oriented manifold and if M′ is obtained from M by removing an open ball, then the connected sum M # (−M) is the double of M′, where −M denotes M with the opposite orientation.
The double of a Mazur manifold is a homotopy 4-sphere.
References
Differential topology
Manifolds | Double (manifold) | [
"Mathematics"
] | 272 | [
"Space (mathematics)",
"Topology stubs",
"Topological spaces",
"Topology",
"Differential topology",
"Manifolds"
] |
17,375,183 | https://en.wikipedia.org/wiki/Floodplain%20restoration | Floodplain restoration is the process of fully or partially restoring a river's floodplain to its original conditions before having been affected by the construction of levees (dikes) and the draining of wetlands and marshes.
The objectives of restoring floodplains include the reduction of the incidence of floods, the provision of habitats for aquatic species, the improvement of water quality and the increased recharge of groundwater.
Description
Types/methods
Anthropogenic impacts on floodplains mostly target the lateral connectivity between rivers and their floodplains, so many restoration methods focus on removing human-made structures that disrupt connectivity. One type of floodplain restoration is the levee setback, along with full or partial dam removal, which allows rivers to migrate within a space closer to the natural floodplain. Another method is a "beaded approach", which allows small portions of a floodplain to be restored to natural habitat and functions. The removal of levees and/or weirs can allow river channels to reconnect to their floodplain. Modifying riverside embankments through the creation of overflow sills and artificial openings at inflow channels can help increase channel connectivity to the floodplain. Restoring drained or degraded wetlands can also help increase floodplain connectivity.
Potential benefits
Floodplain restoration can restore previously lost or degraded ecosystem services. These ecosystem services can be categorized as supporting, regulating, provisioning, and cultural services. Restoring floodplains can help regulate flood events and mitigate flood-related damage. Floodplain restoration can also increase biodiversity by creating new or restoring degraded habitat and encouraging growth of native species. Methods of wetland restoration in the floodplain can help improve water quality. Reconnecting rivers to their floodplains promotes carbon storage in soil and regulates processes within soil.
Challenges
There are several issues that may arise when planning and/or implementing floodplain restoration projects. Since floodplain restoration involves a wide range of partnerships and stakeholders, a lack of communication between parties and differences in ideas or priorities for restoration goals can constrain restoration projects. There is also the potential for a higher value to be placed on immediate flood defense and current land-use practices than on the ecological or environmental benefits, which can stall or prevent floodplain restoration. It is also important to include the socio-economic aspects of floodplain restoration, since projects that do not consider these aspects face further constraints. Restoration efforts also need to be properly and continuously monitored to determine their effectiveness and benefits.
Examples of existing projects
Africa
Waza-Logone Restoration
Asia and The Pacific
Tarim River, China case study focuses on the cultural, socio-economic, and environmental aspects of the basin to plan for restoration projects.
Mekong Delta, Vietnam restoration to aid with coastal protection.
Four Major Rivers Restoration Project in South Korea to restore the Han, Nakdong, Geum, and Yeongsan rivers.
Europe
One of the drivers for floodplain restoration is the EU Water Framework Directive. Early floodplain restoration schemes were undertaken in the mid-1990s in the Rheinvorland-Süd on the Upper Rhine, the Bourret on the Garonne, and as part of the Long Eau project in England. Ongoing schemes in 2007 include Lenzen on the Elbe, La Basse on the Seine and the Parrett Catchment Project in England. On the Elbe River near Lenzen (Brandenburg), 420 hectares of floodplain were restored in order to prevent a recurrence of the Elbe floods of 2002. A total of 20 floodplain restoration projects on the Elbe River were envisaged after the 2002 floods, but only two have been implemented as of 2009 according to the environmental group BUND.
Upper Danube River, Germany Restoration project.
Latin America and the Caribbean
Chubut River Restoration Project
North America
Floodplain restoration in the United States is driven by The Clean Water Act (1972), The Endangered Species Act (1973), and various state level legislations.
In the catchment area of the Chesapeake Bay in Maryland
Emiquon Preserve on the Illinois River
Baraboo River in Wisconsin.
Upper Sandy Creek a tributary in the Cape Fear River in North Carolina.
Efforts on the Oldman River and St Mary River, in Alberta to restore the flow regime to encourage vegetation growth.
See also
Ecological restoration
Riparian zone restoration
Stream restoration
References
Ecological restoration
Flood control
Stormwater management
Water and the environment
Water resources management
Floodplains | Floodplain restoration | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 910 | [
"Ecological restoration",
"Water treatment",
"Stormwater management",
"Water pollution",
"Flood control",
"Environmental engineering"
] |
17,375,601 | https://en.wikipedia.org/wiki/Draining%20and%20development%20of%20the%20Everglades | A national push for expansion and progress toward the latter part of the 19th century stimulated interest in draining the Everglades, a region of tropical wetlands in southern Florida, for agricultural use. According to historians, "From the middle of the nineteenth century to the middle of the twentieth century, the United States went through a period in which wetland removal was not questioned. Indeed, it was considered the proper thing to do."
A pattern of political and financial motivation, and a lack of understanding of the geography and ecology of the Everglades have plagued the history of drainage projects. The Everglades are a part of a massive watershed that originates near Orlando and drains into Lake Okeechobee, a vast and shallow lake. As the lake exceeds its capacity in the wet season, the water forms a flat and very wide river, about 100 miles (160 km) long and 60 miles (97 km) wide. As the land from Lake Okeechobee slopes gradually to Florida Bay, water flows at a rate of half a mile (0.8 km) a day. Before human activity in the Everglades, the system comprised the lower third of the Florida peninsula. The first attempt to drain the region was made by real estate developer Hamilton Disston in 1881. Disston's sponsored canals were unsuccessful, but the land he purchased for them stimulated economic and population growth that attracted railway developer Henry Flagler. Flagler built a railroad along the east coast of Florida and eventually to Key West; towns grew and farmland was cultivated along the rail line.
During his 1904 campaign to be elected governor, Napoleon Bonaparte Broward promised to drain the Everglades, and his later projects were more effective than Disston's. Broward's promises sparked a land boom facilitated by blatant errors in an engineer's report, pressure from real estate developers, and the burgeoning tourist industry throughout south Florida. The increased population brought hunters who went unchecked and had a devastating impact on the numbers of wading birds (hunted for their plumes), alligators, and other Everglades animals.
Severe hurricanes in 1926 and 1928 caused catastrophic damage and flooding from Lake Okeechobee that prompted the Army Corps of Engineers to build a dike around the lake. Further floods in 1947 prompted an unprecedented construction of canals throughout southern Florida. Following another population boom after World War II, and the creation of the Central and Southern Florida Flood Control Project, the Everglades was divided into sections separated by canals and water control devices that delivered water to agricultural and newly developed urban areas. However, in the late 1960s, following a proposal to construct a massive airport next to Everglades National Park, national attention turned from developing the land to restoring the Everglades.
Exploration
American involvement in the Everglades began during the Second Seminole War (1836–42), a costly and very unpopular conflict. The United States spent between $30 million and $40 million and lost between 1,500 and 3,000 lives. The U.S. military drove the Seminoles into the Everglades and were charged with the task of finding them, defeating them, and moving them to Oklahoma Indian territory. Almost 4,000 Seminoles were killed in the war or were removed. The U.S. military was completely unprepared for the conditions they found in the Everglades. They tore their clothes on sawgrass, ruined their boots on the uneven limestone floor, and were plagued by mosquitoes. Soldiers' legs, feet, and arms were cut open on the sawgrass and gangrene infection set in, taking many lives and limbs. Many died of mosquito-borne illness. After slogging through mud, one private died in his tracks of exhaustion in 1842. General Thomas Jesup admitted the military was overwhelmed by the terrain when he wrote to the Secretary of War in 1838, trying to dissuade him from prolonging the war.
Opinion about the value of Florida to the Union was mixed: some thought it a useless land of swamps and horrible animals, while others thought it a gift from God for national prosperity. In 1838 comments in The Army and Navy Chronicle supported future development of southern Florida: [The] climate [is] most delightful; but, from want of actual observation, [it] could not speak so confidently of the soil, although, from the appearance of the surrounding vegetation, a portion of it, at least, must be rich. Whenever the aborigines shall be forced from their fastnesses, as eventually they must be, the enterprising spirit of our countrymen will very soon discover the sections best adapted to cultivation, and the now barren or unproductive everglades will be made to blossom like a garden. It is the general impression that these everglades are uninhabitable during the summer months, by reason of their being overflowed by the abundant rains of the season; but if it should prove that these inundations are caused or increased by obstructions to the natural courses of the rivers, as outlets to the numerous lakes, American industry will remove these obstructions.
The military penetration of southern Florida offered the opportunity to map a poorly understood part of the country. As late as 1823, official reports doubted the existence of a large inland lake, until the military met the Seminoles at the Battle of Lake Okeechobee in 1837. To avenge repeated surprise attacks on himself and ammunition stores, Colonel William Harney led an expedition into the Everglades in 1840, to hunt for a chief named Chekika. With Harney were 90 soldiers in 16 canoes. One soldier's account of the trip in the St. Augustine News was the first printed description of the Everglades available to the general public. The anonymous writer described the hunt for Chekika and the terrain they were crossing: "No country that I have ever heard of bears any resemblance to it; it seems like a vast sea filled with grass and green trees, and expressly intended as a retreat for the rascally Indian, from which the white man would never seek to drive them".
The final blame for the military stalemate was determined to lie not in military preparation, supplies, leadership, or superior tactics by the Seminoles, but in Florida's impenetrable terrain. An army surgeon wrote: "It is in fact a most hideous region to live in, a perfect paradise for Indians, alligators, serpents, frogs, and every other kind of loathsome reptile." The land seemed to inspire extreme reactions of wonder or hatred. In 1870, an author described the mangrove forests as a "waste of nature's grandest exhibition to have these carnivals of splendid vegetation occurring in isolated places where it is but seldom they are seen." A band of hunters, naturalists, and collectors ventured through in 1885, taking along with them the 17-year-old grandson of an early resident of Miami. The landscape unnerved the young man shortly after he entered the Shark River: "The place looked wild and lonely. About three o'clock it seemed to get on Henry's nerves and we saw him crying, he would not tell us why, he was just plain scared."
In 1897, Hugh L. Willoughby spent eight days canoeing with a party from the mouth of the Harney River to the Miami River. He wrote about his observations and sent them back to the New Orleans Times-Democrat. Willoughby described the water as healthy and wholesome, with numerous springs, and 10,000 alligators "more or less" in Lake Okeechobee. The party encountered thousands of birds near the Shark River, "killing hundreds, but they continued to return". Willoughby pointed out that much of the rest of the country had been mapped and explored except for this part of Florida, writing, "(w)e have a tract of land one hundred and thirty miles long and seventy miles wide that is as much unknown to the white man as the heart of Africa."
Drainage
As early as 1837, a visitor to the Everglades suggested the value of the land without the water:Could it be drained by deepening the natural outlets? Would it not open to cultivation immense tracts of rich vegetable soil? Could the waterpower, obtained by draining, be improved to any useful purpose? Would such draining render the country unhealthy? ... Many queries like these passed through our minds. They can only be solved by a thorough examination of the whole country. Could the waters be lowered ten feet, it would probably drain six hundred thousand acres; should this prove to be a rich soil, as would seem probable, what a field it would open for tropical productions! What facilities for commerce!
Territorial representative David Levy proposed a resolution that was passed in Congress in 1842: "that the Secretary of War be directed to place before this House such information as can be obtained in relation to the practicability and probable expense of draining the everglades of Florida." From this directive Secretary of the Treasury Robert J. Walker requested Thomas Buckingham Smith from St. Augustine to consult those with experience in the Everglades on the feasibility of draining them, saying that he had been told two or three canals to the Gulf of Mexico would be sufficient. Smith asked officers who had served in the Seminole Wars to respond, and many favored the idea, promoting the land as a future agricultural asset to the South. A few disagreed, such as Captain John Sprague, who wrote he "never supposed the country would excite an inquiry, other than as a hiding place for Indians, and had it occurred to me that so great an undertaking, one so utterly impracticable, as draining the Ever Glades was to be discussed, I should not have destroyed the scratch of pen upon a subject so fruitful, and which cannot be understood but by those who have waded the water belly deep and examined carefully the western coast by land and by water."
Nevertheless, Smith returned a report to the Secretary of the Treasury asking for $500,000 to do the job. The report is the first published study on the topic of the Everglades, and concluded with the statement: The Ever Glades are now suitable only for the haunt of noxious vermin or the resort of pestilent reptiles. The statesman whose exertions shall cause the millions of acres they contain, now worse than worthless, to teem with the products of agricultural industry; that man who thus adds to the resources of his country ... will merit a high place in public favor, not only with his own generation, but with posterity. He will have created a State! Smith suggested cutting through the rim of the Everglades (known today as the Atlantic Coastal Ridge), connecting the heads of rivers to the coastline so that of water would be drained from the area. The result, Smith hoped, would yield farmland suitable for corn, sugar, rice, cotton, and tobacco.
In 1850 Congress passed a law that gave several states wetlands within their state boundaries. The Swamp Land Act of 1850 ensured that the state would be responsible for funding the attempts at developing wetlands into farmlands. Florida quickly formed a committee to consolidate grants to pay for such attempts, though attention and funds were diverted owing to the Civil War and Reconstruction. Not until after 1877 did attention return to the Everglades.
Hamilton Disston's canals
After the Civil War, an agency named the Internal Improvement Fund (IIF), charged with using grant money to improve Florida's infrastructure through canals, rail lines, and roads, was eager to be rid of the debt incurred by the Civil War. IIF trustees found a Pennsylvania real estate developer named Hamilton Disston who was interested in implementing plans to drain the land for agriculture. Disston was persuaded to buy of land for $1 million in 1881. The New York Times declared it the largest purchase of land ever by any individual. Disston began building canals near St. Cloud to lower the basin of the Caloosahatchee and Kissimmee Rivers. His workers and engineers faced conditions similar to those of the soldiers during the Seminole Wars; it was harrowing, backbreaking labor in dangerous conditions. The canals seemed at first to work in lowering the water levels in the wetlands surrounding the rivers. Another dredged waterway between the Gulf of Mexico and Lake Okeechobee was built, opening the region to steamboat traffic.
Disston's engineers focused on Lake Okeechobee as well. As one colleague put it, "Okeechobee is the point to attack"; the canals were to be "equal or greater than the inflow from the Kissimmee valley, which is the source of all the evil." Disston sponsored the digging of a canal long from Lake Okeechobee towards Miami, but it was abandoned when the rock proved denser than the engineers had expected. Though the canals lowered the groundwater, their capacity was inadequate for the wet season. A report that evaluated the failure of the project concluded: "The reduction of the waters is simply a question of sufficient capacity in the canals which may be dug for their relief".
Though Disston's canals did not drain, his purchase primed the economy of Florida. It made news and attracted tourists and land buyers alike. Within four years property values doubled, and the population increased significantly. One newcomer was the inventor Thomas Edison, who bought a home in Fort Myers. Disston opened real estate offices throughout the United States and Europe, and sold tracts of land for $5 an acre, establishing towns on the west coast and in central Florida. English tourists in particular were targeted and responded in large numbers. Florida passed its first water laws to "build drains, ditches, or water courses upon petition of two or more landowners" in 1893.
Henry Flagler's railroads
Due to Disston's purchase, the IIF was able to sponsor railroad projects, and the opportunity presented itself when oil tycoon Henry Flagler became enchanted with St. Augustine during a vacation. He built the opulent Ponce de Leon Hotel in St. Augustine in 1888, and began buying land and building rail lines along the east coast of Florida, first from Jacksonville to Daytona, then as far south as Palm Beach in 1893. Flagler's establishment of "the Styx", a settlement for hotel and rail line workers across the river from the barrier island containing Palm Beach, became West Palm Beach. Along the way he built resort hotels, transforming territorial outposts into tourist destinations and the land bordering the rail lines into citrus farms.
The winter of 1894–1895 produced a bitter frost that killed citrus trees as far south as Palm Beach. Miami resident Julia Tuttle sent Flagler a pristine orange blossom and an invitation to visit Miami, to persuade him to build the railroad farther south. Although he had earlier turned her down several times, Flagler finally agreed, and by 1896 the rail line had been extended to Biscayne Bay. Three months after the first train arrived, the residents of Miami, 512 in all, voted to incorporate the town. Flagler publicized Miami as a "Magic City" throughout the United States and it became a prime destination for the extremely wealthy after the Royal Palm Hotel was opened.
Broward's "Empire of the Everglades"
Despite the sale of land to Disston and the skyrocketing price of land, by the turn of the 20th century the IIF was bankrupt due to mismanagement. Legal battles ensued between the State of Florida and the railroad owners about who owned the rights to sell reclaimed land in the Everglades. During the 1904 gubernatorial campaign, the strongest candidate, Napoleon Bonaparte Broward, made draining the Everglades a major part of his platform. He called the future of south Florida the "Empire of the Everglades" and compared its potential to that of Holland and Egypt: "It would indeed be a commentary on the intelligence and energy of the State of Florida to confess that so simple an engineering feat as the drainage of a body of land above the sea was above their power", he wrote to voters. Soon after his election, he fulfilled his promise to "drain that abominable pestilence-ridden swamp" and pushed the Florida legislature to form a group of commissioners to oversee reclamation of flooded lands. They began by taxing counties that would be affected by the drainage attempts, at 5 cents an acre, and formed the Everglades Drainage District in 1907.
Broward asked James O. Wright—an engineer on loan to the State of Florida from the USDA's Bureau of Drainage Investigations—to draw up plans for drainage in 1906. Two dredges were built by 1908, but had cut only of canals. The project quickly ran out of money, so Broward sold real estate developer Richard "Dicky" J. Bolles a million dollars' worth of land in the Everglades, before the engineer's report had been submitted. Abstracts from Wright's report were given to the IIF stating that eight canals would be enough to drain at a cost of a dollar an acre. The abstracts were released to real estate developers who used them in their advertisements, and Wright and the USDA were pressed by the real estate industry to publicize the report as quickly as possible. Wright's supervisor noted errors in the report, as well as undue enthusiasm for draining, and delayed its release in 1910. Different unofficial versions of the report circulated—some that had been altered by real estate interests—and a version hastily put together by Senator Duncan U. Fletcher called U.S. Senate Document 89 included early unrevised statements, causing a frenzy of speculation.
Wright's initial report concluded that drainage would not be difficult. Building canals would be more cost effective than constructing a dike around Lake Okeechobee. The soil would be fertile after drainage, the climate would not be adversely affected, and the enormous lake would be able to irrigate farmland in the dry season. Wright based his conclusions on 15 years of weather data since the recording of precipitation began in the 1890s. His calculations concentrated on the towns of Jupiter and Kissimmee. Since weather data had not been recorded for any area within the Everglades, none was included in the report. Furthermore, the heaviest year of rain on record, Wright assumed, was atypical, and he urged that canals should not be constructed to bear that amount of water due to the expense. Wright's calculations for what canals should be able to hold were off by 55 percent. His most fundamental mistake, however, was designing the canals for a maximum rainfall of of water a day, based on flawed data for July and August rainfall, despite available data that indicated torrential downpours of and had occurred in 24-hour periods.
Though a few voices expressed skepticism of the report's conclusions—notably Frank Stoneman, editor of the Miami Evening Record and the later Miami Morning News-Record (predecessors of the Miami Herald)—the report was hailed as impeccable, coming from a branch of the U.S. government. In 1912 Florida appointed Wright to oversee the drainage, and the real estate industry energetically misrepresented this mid-level engineer as the world's foremost authority on wetlands drainage, in charge of the U.S. Bureau of Reclamation. However, the U.S. House of Representatives investigated Wright since no report had officially been published despite the money paid for it. Wright eventually retired when it was discovered that his colleagues disagreed with his conclusions and refused to approve the report's publication. One testified at the hearings: "I regard Mr. Wright as absolutely and completely incompetent for any engineering work".
Governor Broward ran for the U.S. Senate in 1908 but lost. Broward and his predecessor, William Jennings, were paid by Richard Bolles to tour the state to promote drainage. Broward was elected to the Senate in 1910, but died before he could take office. He was eulogized across Florida for his leadership and progressive inspiration. Rapidly growing Fort Lauderdale paid him tribute by naming Broward County after him (the town's original plan had been to name it Everglades County). Land in the Everglades was being sold for $15 an acre a month after Broward died. Meanwhile, Henry Flagler continued to build railway stations at towns as soon as the populations warranted them. News of the Panama Canal inspired him to connect his rail line to the closest deep water port. Biscayne Bay was too shallow, so Flagler sent railway scouts to explore the possibility of building the line through to the tip of mainland Florida. The scouts reported that not enough land was present to build through the Everglades, so Flagler instead changed the plan to build to Key West in 1912.
Boom and plume harvesting
Real estate companies continued to advertise and sell land along newly dug canals. In April 1912—the end of the dry season—reporters from all over the U.S. were given a tour of what had recently been drained, and they returned to their papers and raved about the progress. Land developers sold 20,000 lots in a few months. But as news about the Wright report continued to be negative, land values plummeted, and sales decreased. Developers were sued and arrested for mail fraud when people who had spent their life savings to buy land arrived in south Florida expecting to find a dry parcel of land to build upon and instead found it completely underwater. Advertisements promised land that would yield crops in eight weeks, but for many it took at least as long just to clear. Some burned off the sawgrass or other vegetation only to discover that the underlying peat continued to burn. Animals and tractors used for plowing got mired in the muck and were useless. When the muck dried, it turned to a fine black powder and created dust storms. Settlers encountered rodents, skinks, and biting insects, and faced dangers from mosquitoes, poisonous snakes and alligators. Though at first crops sprouted quickly and lushly, they just as quickly wilted and died, seemingly without reason. It was discovered later that the peat and muck lacked copper and other trace elements. The USDA released a pamphlet in 1915 that declared land along the New River Canal would be too costly to keep drained and fertilized; people in Ft. Lauderdale responded by collecting all of the pamphlets and burning them.
With the increasing population in towns near the Everglades came hunting opportunities. Even decades earlier, Harriet Beecher Stowe had been horrified at the hunting by visitors, and she wrote the first conservation publication for Florida in 1877: "[t]he decks of boats are crowded with men, whose only feeling amid our magnificent forests, seems to be a wild desire to shoot something and who fire at every living thing on shore." Otters and raccoons were the most widely hunted for their skins. Otter pelts could fetch between $8 and $15 each. Raccoons, more plentiful, only warranted 75 cents each in 1915. Hunting often went unchecked; on a single trip, one Lake Okeechobee hunter killed 250 alligators and 172 otters.
Wading birds were a particular target. Their feathers were used in women's hats from the late 19th century until the 1920s. In 1886, five million birds were estimated to have been killed for their feathers. They were usually shot in the spring, when their feathers were colored for mating and nesting. Aigrettes, as the plumes were called in the millinery business, sold in 1915 for $32 an ounce, also the price of gold. Millinery was a $17-million-a-year industry that motivated plume harvesters to lie in wait at the nests of egrets and other large birds during the nesting season, shoot the parents with small-bore rifles, and leave the chicks to starve. Many hunters refused to participate after watching the gruesome results of a plume hunt. Still, plumes from Everglades wading birds could be found in Havana, New York City, London, and Paris. A dealer in New York paid at least 60 hunters to provide him with "almost anything that wore feathers, but particularly the Herons, Spoonbills, and showy birds". Hunters could collect plumes from a hundred birds on a good day.
Plume harvesting became a dangerous business. The Audubon Society became concerned with the amount of hunting being done in rookeries in the mangrove forests. In 1902, they hired a warden, Guy Bradley, to watch the rookeries around Cuthbert Lake. Bradley had lived in Flamingo within the Everglades, and was murdered in 1905 by one of his neighbors after he tried to prevent him from hunting. Protection of birds was the reason for establishing the first wildlife refuge when President Theodore Roosevelt set Pelican Island as a sanctuary in 1903.
In the 1920s, after birds were protected and alligators hunted nearly to extinction, Prohibition created a living for those willing to smuggle alcohol into the U.S. from Cuba. Rum-runners used the vast Everglades as a hiding spot: there were never enough law enforcement officers to patrol it. The advent of the fishing industry, the arrival of the railroad, and the discovery of the benefits of adding copper to Okeechobee muck soon created unprecedented numbers of residents in new towns like Moore Haven, Clewiston, and Belle Glade. By 1921, 2,000 people lived in 16 new towns around Lake Okeechobee. Sugarcane became the primary crop grown in south Florida and it began to be mass-produced. Miami experienced a second real estate boom that earned a developer in Coral Gables $150 million and saw undeveloped land north of Miami sell for $30,600 an acre. Miami became cosmopolitan and experienced a renaissance of architecture and culture. Hollywood movie stars vacationed in the area and industrialists built lavish homes. Miami's population multiplied fivefold, and Ft. Lauderdale and Palm Beach grew many times over as well. In 1925, Miami newspapers published editions weighing over , most of it real estate advertising. Waterfront property was the most highly valued. Mangrove trees were cut down and replaced with palm trees to improve the view. Acres of south Florida slash pine were taken down, some for lumber, but the wood was found to be dense and it split apart when nails were driven into it. It was also termite-resistant, but homes were needed quickly. Most of the pine forests in Dade County were cleared for development.
Hurricanes
The canals proposed by Wright were unsuccessful in making the lands south of Lake Okeechobee fulfill the promises made by real estate developers to local farmers. The winter of 1922 was unseasonably wet and the region was underwater. The town of Moore Haven received of rain in six weeks in 1924. Engineers were pressured to regulate the water flow, not only for farmers but also for commercial fishers, who often requested conflicting water levels in the lake. Fred Elliot, who was in charge of building the canals after James Wright retired, commented: "A man on one side of the canal wants it raised for his particular use and a man on the other side wants it lowered for his particular use".
1926 Miami Hurricane
The 1920s brought several favorable conditions that helped the land and population boom, one of which was an absence of any severe storms. The last severe hurricane, in 1906, had struck the Florida Keys. Many homes were constructed hastily and poorly as a result of this lull in storms. However, on September 18, 1926, a storm that became known as the 1926 Miami Hurricane struck with winds over , and caused massive devastation. The storm surge was as high as in some places. Henry Flagler's opulent Royal Palm Hotel was destroyed along with many other hotels and buildings. Most people who died did so when they ran out into the street in disbelief while the eye of the hurricane passed over, not knowing the wind was coming in from the other direction. "The lull lasted 35 minutes, and during that time the streets of the city became crowded with people", wrote Richard Gray, the local weather chief. "As a result, many lives were lost during the second phase of the storm." In Miami alone, 115 people were counted dead—although the true figure may have been as high as 175, because death totals were racially segregated. More than 25,000 people were homeless in the city. The town of Moore Haven, bordering Lake Okeechobee, was hardest hit. A levee built of muck collapsed, drowning almost 400 of the town's entire 1,200 residents. The tops of Lake Okeechobee levees were only above the lake itself and the engineers were aware of the danger. Two days before the hurricane, an engineer predicted, "[i]f we have a blow, even a gale, Moore Haven is going under water". The engineer lost his wife and daughter in the flood.
The City of Miami responded to the hurricane by downplaying its effects and turning down aid. The Miami Herald declared two weeks after the storm that almost everything in the city had returned to normal. The governor supported the efforts to minimize the appearance of the destruction by refusing to call a special legislative session to appropriate emergency funds for relief. As a result, the American Red Cross was able to collect only $3 million of $5 million needed. The 1926 hurricane effectively ended the land boom in Miami, despite the attempts at hiding the effects. It also forced drainage commissioners to re-evaluate the effectiveness of the canals. A $20 million plan to build a dike around Lake Okeechobee, to be paid by property taxes, was turned down after a skeptical constituency sued to stop it; more than $14 million had been spent on canals and they were ineffective in taking away excess water or delivering it when needed.
1928 Okeechobee Hurricane
The weather was unremarkable for two years. In 1928, construction was completed on the Tamiami Trail, named because it was the only road spanning between Tampa and Miami. The builders attempted to construct the road several times before they blasted the muck down to the limestone, filled it with rock and paved over it. Hard rains in the summer caused Lake Okeechobee to rise several feet; this was noticed by a local newspaper editor who demanded it be lowered. However, on September 16, 1928, came a massive storm, now known as the 1928 Okeechobee Hurricane. Thousands drowned when Lake Okeechobee breached its levees; the range of estimates of the dead spanned from 1,770 (according to the Red Cross) to 3,000 or more. Many were swept away and never recovered. The majority of the dead were black migrant workers who had recently settled in or near Belle Glade. The catastrophe made national news, and although the governor again refused aid, after he toured the area and counted 126 bodies still unburied or uncollected a week after the storm, he activated the National Guard to assist in the cleanup, and declared in a telegram: "Without exaggeration, the situation in the storm area beggars description".
Herbert Hoover Dike
The focus of government agencies quickly shifted to the control of floods rather than drainage. The Okeechobee Flood Control District, financed by both state and federal funds, was created in 1929. President Herbert Hoover toured the towns affected by the 1928 Okeechobee Hurricane and, an engineer himself, ordered the Army Corps of Engineers to assist the communities surrounding the lake. Between 1930 and 1937, a dike long was built around the southern edge of the lake, and a shorter one around the northern edge. It was tall and thick on the lake side, thick on the top, and thick toward land. Control of the Hoover Dike and the waters of Lake Okeechobee were delegated to federal powers: the United States declared legal limits of the lake to be and .
A massive canal wide and deep was also dug through the Caloosahatchee River; when the lake rose too high, the excess water left through the canal to the Gulf of Mexico. Exotic trees were planted along the north shore levee: Australian pines, Australian oaks, willows, and bamboo. More than $20 million was spent on the entire project. Sugarcane production soared after the dike and canal were built. The populations of the small towns surrounding the lake jumped from 3,000 to 9,000 after World War II.
Drought
The effects of the Hoover Dike were seen immediately. An extended drought occurred in the 1930s, and with the wall preventing water leaving Lake Okeechobee and canals and ditches removing other water, the Everglades became parched. Peat turned to dust, and salty ocean water entered Miami's wells. When the city brought in an expert to investigate, he discovered that the water in the Everglades was the area's groundwater—here, it appeared on the surface. Draining the Everglades removed this groundwater, which was replaced by ocean water seeping into the area's wells. In 1939, of Everglades burned, and the black clouds of peat and sawgrass fires hung over Miami. Underground peat fires burned roots of trees and plants without burning the plants in some places. Scientists who took soil samples before draining had not taken into account that the organic composition of peat and muck in the Everglades was mixed with bacteria that added little to the process of decomposition underwater because they were not mixed with oxygen. As soon as the water was drained and oxygen mixed with the soil, the bacteria began to break down the soil. In some places, homes had to be moved on to stilts and of topsoil was lost.
Conservation attempts
Conservationists concerned about the Everglades have been a vocal minority ever since Miami was a young city. South Florida's first and perhaps most enthusiastic naturalist was Charles Torrey Simpson, who retired from the Smithsonian Institution to Miami in 1905 when he was 53. Nicknamed "the Sage of Biscayne Bay", Simpson wrote several books about tropical plant life around Miami. His backyard contained a tropical hardwood hammock, which he estimated he showed to about 50,000 people. Though he tended to avoid controversy regarding development, in Ornamental Gardening in Florida he wrote, "Mankind everywhere has an insane desire to waste and destroy the good and beautiful things this nature has lavished upon him".
Although the idea of protecting a portion of the Everglades arose in 1905, a crystallized effort was formed in 1928 when Miami landscape designer Ernest F. Coe established the Everglades Tropical National Park Association. It had enough support to be declared a national park by Congress in 1934, but there was not enough money during the Great Depression to buy the proposed land for the park. It took another 13 years for it to be dedicated on December 6, 1947.
One month before the dedication of the park, the former editor of The Miami Herald and freelance writer Marjory Stoneman Douglas published her first book, The Everglades: River of Grass. After researching the region for five years, she described the history and ecology of the south of Florida in great detail, characterizing the Everglades as a river instead of a stagnant swamp. Douglas later wrote, "My colleague Art Marshall said that with [the words "River of Grass"] I changed everybody's knowledge and educated the world as to what the Everglades meant". The last chapter was titled "The Eleventh Hour" and warned that the Everglades were approaching death, although the course could be reversed. Its first printing sold out a month after its release.
Flood control
Coinciding with the dedication of Everglades National Park, 1947 in south Florida saw two hurricanes and a wet season responsible for of rain, ending the decade-long drought. Although there were no human casualties, cattle and deer were drowned and standing water was left in suburban areas for months. Agricultural interests lost about $59 million. The embattled head of the Everglades Drainage District carried a gun for protection after being threatened.
Central and Southern Florida Flood Control Project
In 1948 Congress approved the Central and Southern Florida Project for Flood Control and Other Purposes (C&SF) and consolidated the Everglades Drainage District and the Okeechobee Flood Control District under the new project. The C&SF used four methods in flood management: levees, water storage areas, canal improvements, and large pumps to assist gravity. Between 1952 and 1954 in cooperation with the state of Florida it built a levee long between the eastern Everglades and suburbs from Palm Beach to Homestead, and blocked the flow of water into populated areas. Between 1954 and 1963 it divided the Everglades into basins. In the northern Everglades were Water Conservation Areas (WCAs), and the Everglades Agricultural Area (EAA) bordering to the south of Lake Okeechobee. In the southern Everglades was Everglades National Park. Levees and pumping stations bordered each WCA, which released water in drier times and, in times of flood, removed it and pumped it to the ocean or the Gulf of Mexico. The WCAs took up about 37 percent of the original Everglades.
During the 1950s and 1960s the South Florida metropolitan area grew four times as fast as the rest of the nation. Between 1940 and 1965, 6 million people moved to south Florida: 1,000 people moved to Miami every week. Urban development between the mid-1950s and the late 1960s quadrupled. Much of the water reclaimed from the Everglades was sent to newly developed areas. With metropolitan growth came urban problems associated with rapid expansion: traffic jams; school overcrowding; crime; overloaded sewage treatment plants; and, for the first time in south Florida's urban history, water shortages in times of drought.
The C&SF constructed over of canals, and hundreds of pumping stations and levees within three decades. It produced a film, Waters of Destiny, characterized by author Michael Grunwald as propaganda, that likened nature to a villainous, shrieking force of rage and declared the C&SF's mission was to tame nature and make the Everglades useful. Everglades National Park management and Marjory Stoneman Douglas initially supported the C&SF, as it promised to maintain the Everglades and manage the water responsibly. However, an early report by the project reflected local attitudes about the Everglades as a priority to people in nearby developed areas: "The aesthetic appeal of the Park can never be as strong as the demands of home and livelihood. The manatee and the orchid mean something to people in an abstract way, but the former cannot line their purse, nor the latter fill their empty bellies."
Establishment of the C&SF made Everglades National Park completely dependent upon another political entity for its survival. One of the C&SF's projects was Levee 29, laid along the Tamiami Trail on the northern border of the park. Levee 29 featured four flood control gates that controlled all the water entering Everglades National Park; before construction, water flowed in through open drain pipes. The period from 1962 to 1965 was one of drought for the Everglades, and Levee 29 remained closed to allow the Biscayne Aquifer—the fresh water source for South Florida—to stay filled. Animals began to cross Tamiami Trail for the water held in WCA 3, and many were killed by cars. Biologists estimate the population of alligators in Everglades National Park was halved; otters nearly became extinct. The populations of wading birds had been reduced by 90 percent from the 1940s. When park management and the U.S. Department of the Interior asked the C&SF for assistance, the C&SF offered to build a levee along the southern border of Everglades National Park to retain waters that historically flowed through the mangroves and into Florida Bay. Though the C&SF refused to send the park more water, they constructed Canal 67, bordering the east side of the park and carrying excess water from Lake Okeechobee to the Atlantic.
Everglades Agricultural Area
The C&SF established the Everglades Agricultural Area—27 percent of the Everglades before development. In the late 1920s, agricultural experiments indicated that adding large amounts of manganese sulfate to Everglades muck produced profitable vegetable harvests. Adding of the compound was more cost effective than adding of manure. The primary cash crop in the EAA is sugarcane, though sod, beans, lettuce, celery, and rice are also grown. Sugarcane became a more consolidated industry than any other crop; in 1940 the coalition of farms was renamed U.S. Sugar, which produced 86 percent of Everglades sugar. During the 1930s the sugarcane farmers' coalition came under investigation for labor practices that bordered on slavery. Potential employees—primarily young black men—were lured from all over the U.S. by the promise of jobs, but they were held financially responsible for training, transportation, room and board and other costs. Quitting while debts were owed was punishable with jail time. By 1942, U.S. Sugar was indicted for peonage in federal court, though the charges were eventually dismissed on a technicality. U.S. Sugar benefited significantly from the U.S. embargo on Cuban goods beginning in the early 1960s. In 1958, before the Castro regime, of sugarcane were harvested in Florida; by the 1964–1965 season, were harvested. From 1959 to 1962 the region went from two sugar mills to six, one of which in Belle Glade set several world records for sugar production.
Fields in the EAA are typically , on two sides bordered by canals that are connected to larger ones by which water is pumped in or out depending on the needs of the crops. The water level for sugarcane is ideally maintained at below the surface soil, and after the cane is harvested, the stalks are burned. Vegetables require more fertilizer than sugarcane, though the fields may resemble the historic hydrology of the Everglades by being flooded in the wet season. Sugarcane, however, requires water in the dry season. The fertilizers used on vegetables, along with high concentrations of nitrogen and phosphorus that are the by-product of decayed soil necessary for sugarcane production, were pumped into WCAs south of the EAA, predominantly to Everglades National Park. The introduction of large amounts of these let exotic plants take hold in the Everglades. One of the defining characteristics of natural Everglades ecology is its ability to support itself in a nutrient-poor environment, and the introduction of fertilizers began to change this ecology.
Turning point
A turning point for development in the Everglades came in 1969 when a replacement airport was proposed as Miami International Airport outgrew its capacities. Developers began acquiring land, paying $180 an acre in 1968, and the Dade County Port Authority (DCPA) bought land in the Big Cypress Swamp without consulting the C&SF, management of Everglades National Park or the Department of the Interior. Park management learned of the official purchase and agreement to build the jetport from The Miami Herald the day it was announced. The DCPA bulldozed the land it had bought, and laid a single runway it declared was for training pilots. The new jetport was planned to be larger than O'Hare, Dulles, JFK, and LAX airports combined; the location chosen was north of the Everglades National Park, within WCA 3. The deputy director of the DCPA declared: "This is going to be one of the great population centers of America. We will do our best to meet our responsibilities and the responsibilities of all men to exercise dominion over the land, sea, and air above us as the higher order of man intends."
The C&SF brought the jetport proposal to national attention by mailing letters about it to 100 conservation groups in the U.S. Initial local press reaction condemned conservation groups who immediately opposed the project. Business Week reported real estate prices jumped from $200 to $800 an acre surrounding the planned location, and Life wrote of the expectations of the commercial interests in the area. The U.S. Geological Survey's study of the environmental impact of the jetport started, "Development of the proposed jetport and its attendant facilities ... will inexorably destroy the south Florida ecosystem and thus the Everglades National Park". The jetport was intended to support a community of a million people and employ 60,000. The DCPA director was reported in Time saying, "I'm more interested in people than alligators. This is the ideal place as far as aviation is concerned."
When studies indicated the proposed jetport would create of raw sewage a day and of jet engine pollutants a year, the national media snapped to attention. Science magazine wrote, in a series on environmental protection highlighting the jetport project, "Environmental scientists have become increasingly aware that, without careful planning, development of a region and the conservation of its natural resources do not go hand in hand". The New York Times called it a "blueprint for disaster", and Wisconsin senator Gaylord Nelson wrote to President Richard Nixon voicing his opposition: "It is a test of whether or not we are really committed in this country to protecting our environment." Governor Claude Kirk withdrew his support for the project, and the 78-year-old Marjory Stoneman Douglas was persuaded to go on tour to give hundreds of speeches against it. She established Friends of the Everglades and encouraged more than 3,000 members to join. Initially the U.S. Department of Transportation pledged funds to support the jetport, but after pressure, Nixon overruled the department. He instead established Big Cypress National Preserve, announcing it in the Special Message to the Congress Outlining the 1972 Environmental Program. Following the jetport proposition, restoration of the Everglades became not only a statewide priority, but an international one as well. In the 1970s the Everglades were declared an International Biosphere Reserve and a World Heritage Site by UNESCO, and a Wetland of International Importance by the Ramsar Convention, making it one of only three locations on Earth that have appeared on all three lists.
See also
Environmental issues in Florida
Indigenous people of the Everglades region
Seminole
History of Miami, Florida
Restoration of the Everglades
Swamp Land Act of 1850
Clean Water Act (1972)
North American Wetlands Conservation Act (1989)
List of canals in the United States#Irrigation, industrial and drainage canals
Notes and references
Bibliography
Barnett, Cynthia (2007). Mirage: Florida and the Vanishing Water of the Eastern U.S.. Ann Arbor: University of Michigan Press.
Carter, W. Hodding (2004). Stolen Water: Saving the Everglades from its Friends, Foes, and Florida. Atria Books.
Caulfield, Patricia (1970) Everglades. New York: Sierra Club / Ballantine Books.
Douglas, Marjory (1947). The Everglades: River of Grass. R. Bemis Publishing, Ltd.
Douglas, Marjory; Rothchild, John (1987). Marjory Stoneman Douglas: Voice of the River. Pineapple Press.
Grunwald, Michael (2006). The Swamp: The Everglades, Florida, and the Politics of Paradise. New York: Simon & Schuster.
Lodge, Thomas E. (1994). The Everglades Handbook: Understanding the Ecosystem. CRC Press.
McCally, David (1999). The Everglades: An Environmental History. Gainesville: University Press of Florida. Available as an etext; Boulder, Colo.: NetLibrary, 2001.
Tebeau, Charlton (1968). Man in the Everglades: 2000 Years of Human History in the Everglades National Park. Coral Gables: University of Miami Press.
External links
U.S. Geological Survey information on the Everglades Agricultural Area
Everglades Timeline
Frank Stoneman and the Florida Everglades During the Early 20th Century
Everglades
Hydraulic engineering
Environmental issues in Florida
History of sugar
Sugar industry of Florida | Draining and development of the Everglades | [
"Physics",
"Engineering",
"Environmental_science"
] | 9,841 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
17,376,281 | https://en.wikipedia.org/wiki/Drucker%E2%80%93Prager%20yield%20criterion | The Drucker–Prager yield criterion is a pressure-dependent model for determining whether a material has failed or undergone plastic yielding. The criterion was introduced to deal with the plastic deformation of soils. It and its many variants have been applied to rock, concrete, polymers, foams, and other pressure-dependent materials.
The Drucker–Prager yield criterion has the form

$\sqrt{J_2} = A + B\,I_1$

where $I_1$ is the first invariant of the Cauchy stress and $J_2$ is the second invariant of the deviatoric part of the Cauchy stress. The constants $A$ and $B$ are determined from experiments.
In terms of the equivalent stress (or von Mises stress) and the hydrostatic (or mean) stress, the Drucker–Prager criterion can be expressed as

$\sigma_e = a + b\,\sigma_m$

where $\sigma_e = \sqrt{3\,J_2}$ is the equivalent stress, $\sigma_m = I_1/3$ is the hydrostatic stress, and

$a = \sqrt{3}\,A ~;~~ b = 3\sqrt{3}\,B$

are material constants. The Drucker–Prager yield criterion expressed in Haigh–Westergaard coordinates $(\xi = I_1/\sqrt{3},\ \rho = \sqrt{2\,J_2})$ is

$\tfrac{1}{\sqrt{2}}\,\rho = A + \sqrt{3}\,B\,\xi$
The Drucker–Prager yield surface is a smooth version of the Mohr–Coulomb yield surface.
Expressions for A and B
The Drucker–Prager model can be written in terms of the principal stresses as

$\sqrt{\tfrac{1}{6}\left[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]} = A + B\,(\sigma_1+\sigma_2+\sigma_3)$

If $\sigma_t$ is the yield stress in uniaxial tension, the Drucker–Prager criterion implies

$\frac{\sigma_t}{\sqrt{3}} = A + B\,\sigma_t$

If $\sigma_c$ is the yield stress in uniaxial compression, the Drucker–Prager criterion implies

$\frac{\sigma_c}{\sqrt{3}} = A - B\,\sigma_c$

Solving these two equations gives

$A = \frac{2\,\sigma_c\,\sigma_t}{\sqrt{3}\,(\sigma_c+\sigma_t)} ~;~~ B = \frac{\sigma_t-\sigma_c}{\sqrt{3}\,(\sigma_c+\sigma_t)}$
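As a quick numerical check of these expressions, the sketch below (a minimal illustration with assumed yield stresses, using the tension-positive convention above; the helper functions are not from any particular library) fits $A$ and $B$ and verifies that both uniaxial states lie on the resulting yield surface:

```python
import numpy as np

def drucker_prager_constants(sigma_t, sigma_c):
    """Fit A and B from the uniaxial tensile and compressive yield stresses."""
    A = 2.0 * sigma_c * sigma_t / (np.sqrt(3.0) * (sigma_c + sigma_t))
    B = (sigma_t - sigma_c) / (np.sqrt(3.0) * (sigma_c + sigma_t))
    return A, B

def yield_function(principal_stresses, A, B):
    """f = sqrt(J2) - A - B*I1; f < 0 is elastic, f = 0 is yielding."""
    s1, s2, s3 = principal_stresses
    I1 = s1 + s2 + s3
    J2 = ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 6.0
    return np.sqrt(J2) - A - B * I1

A, B = drucker_prager_constants(sigma_t=30.0, sigma_c=40.0)
print(yield_function((30.0, 0.0, 0.0), A, B))    # ~0: yields in uniaxial tension
print(yield_function((-40.0, 0.0, 0.0), A, B))   # ~0: yields in uniaxial compression
print((1 - np.sqrt(3) * B) / (1 + np.sqrt(3) * B))  # asymmetry ratio sigma_c/sigma_t
```

The last line anticipates the uniaxial asymmetry ratio discussed in the next subsection.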
Uniaxial asymmetry ratio
Different uniaxial yield stresses in tension and in compression are predicted by the Drucker–Prager model. The uniaxial asymmetry ratio for the Drucker–Prager model is

$\beta = \frac{\sigma_c}{\sigma_t} = \frac{1 - \sqrt{3}\,B}{1 + \sqrt{3}\,B}$
Expressions in terms of cohesion and friction angle
Since the Drucker–Prager yield surface is a smooth version of the Mohr–Coulomb yield surface, it is often expressed in terms of the cohesion ($c$) and the angle of internal friction ($\phi$) that are used to describe the Mohr–Coulomb yield surface. If we assume that the Drucker–Prager yield surface circumscribes the Mohr–Coulomb yield surface then the expressions for $A$ and $B$ are

$A = \frac{6\,c\,\cos\phi}{\sqrt{3}\,(3-\sin\phi)} ~;~~ B = -\frac{2\,\sin\phi}{\sqrt{3}\,(3-\sin\phi)}$

If the Drucker–Prager yield surface middle circumscribes the Mohr–Coulomb yield surface then

$A = \frac{6\,c\,\cos\phi}{\sqrt{3}\,(3+\sin\phi)} ~;~~ B = -\frac{2\,\sin\phi}{\sqrt{3}\,(3+\sin\phi)}$

If the Drucker–Prager yield surface inscribes the Mohr–Coulomb yield surface then

$A = \frac{3\,c\,\cos\phi}{\sqrt{9+3\,\sin^2\phi}} ~;~~ B = -\frac{\sin\phi}{\sqrt{9+3\,\sin^2\phi}}$

(The signs of $B$ follow the tension-positive convention used above; if compressive stresses are taken as positive, $B$ changes sign.)
{| class="toccolours collapsible collapsed" width="90%" style="text-align:left"
!Derivation of expressions for $A$ and $B$ in terms of $c$ and $\phi$
|-
|The expression for the Mohr–Coulomb yield criterion in Haigh–Westergaard space (with tensile stresses positive and the Lode angle $\theta \in [0, \pi/3]$ measured from the tensile meridian) is

$\left[\sqrt{3}\,\sin\left(\theta+\tfrac{\pi}{3}\right) + \sin\phi\,\cos\left(\theta+\tfrac{\pi}{3}\right)\right]\rho + \sqrt{2}\,\xi\,\sin\phi = \sqrt{6}\,c\,\cos\phi$

If we assume that the Drucker–Prager yield surface circumscribes the Mohr–Coulomb yield surface such that the two surfaces coincide at $\theta = \tfrac{\pi}{3}$ (the compressive meridian), then at those points the Mohr–Coulomb yield surface can be expressed as

$\tfrac{1}{2}\,(3-\sin\phi)\,\rho + \sqrt{2}\,\xi\,\sin\phi = \sqrt{6}\,c\,\cos\phi$

or,

$\tfrac{1}{\sqrt{2}}\,\rho = \frac{6\,c\,\cos\phi}{\sqrt{3}\,(3-\sin\phi)} - \frac{2\,\sin\phi}{\sqrt{3}\,(3-\sin\phi)}\,\sqrt{3}\,\xi \qquad (1.1)$

The Drucker–Prager yield criterion expressed in Haigh–Westergaard coordinates is

$\tfrac{1}{\sqrt{2}}\,\rho = A + \sqrt{3}\,B\,\xi \qquad (1.2)$

Comparing equations (1.1) and (1.2), we have

$A = \frac{6\,c\,\cos\phi}{\sqrt{3}\,(3-\sin\phi)} ~;~~ B = -\frac{2\,\sin\phi}{\sqrt{3}\,(3-\sin\phi)}$

These are the expressions for $A$ and $B$ in terms of $c$ and $\phi$.

On the other hand, if the Drucker–Prager surface inscribes the Mohr–Coulomb surface, then matching the Drucker–Prager circle to the flat faces of the Mohr–Coulomb cross-section (tangency, rather than passage through the corners) gives

$A = \frac{3\,c\,\cos\phi}{\sqrt{9+3\,\sin^2\phi}} ~;~~ B = -\frac{\sin\phi}{\sqrt{9+3\,\sin^2\phi}}$
|}
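These conversions are straightforward to implement. The sketch below is a minimal helper (tension-positive convention, hypothetical soil parameters; not a routine from any particular library) covering the three fits:

```python
import numpy as np

def dp_from_mohr_coulomb(c, phi, fit="circumscribe"):
    """Drucker-Prager constants from cohesion c and friction angle phi (radians).

    Tension-positive convention: sqrt(J2) = A + B*I1, so B <= 0 for phi >= 0.
    """
    s, co = np.sin(phi), np.cos(phi)
    if fit == "circumscribe":      # matches the compressive meridian
        A = 6 * c * co / (np.sqrt(3) * (3 - s))
        B = -2 * s / (np.sqrt(3) * (3 - s))
    elif fit == "middle":          # matches the tensile meridian
        A = 6 * c * co / (np.sqrt(3) * (3 + s))
        B = -2 * s / (np.sqrt(3) * (3 + s))
    elif fit == "inscribe":        # touches the faces of the Mohr-Coulomb hexagon
        A = 3 * c * co / np.sqrt(9 + 3 * s**2)
        B = -s / np.sqrt(9 + 3 * s**2)
    else:
        raise ValueError(fit)
    return A, B

c, phi = 10.0, np.radians(30)      # hypothetical soil: c = 10 kPa, phi = 30 degrees
for fit in ("circumscribe", "middle", "inscribe"):
    print(fit, dp_from_mohr_coulomb(c, phi, fit))
```

The circumscribing fit gives the largest cone and the inscribing fit the smallest, so the choice of fit is itself a modelling decision.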
Drucker–Prager model for polymers
The Drucker–Prager model has been used to model polymers such as polyoxymethylene and polypropylene. For polyoxymethylene the yield stress is a linear function of the pressure. However, polypropylene shows a quadratic pressure-dependence of the yield stress.
Drucker–Prager model for foams
For foams, the GAZT model uses
where $\sigma_y$ is a critical stress for failure in tension or compression, $\rho$ is the density of the foam, and $\rho_s$ is the density of the base material.
Extensions of the isotropic Drucker–Prager model
The Drucker–Prager criterion can also be expressed in an alternative form that is quadratic in the stress invariants,

$f := \alpha_1\,I_1^2 + \alpha_2\,J_2 - 1 \le 0$

where $\alpha_1$ and $\alpha_2$ are material parameters; the yield surface is then an ellipsoid rather than a cone in stress space.
Deshpande–Fleck yield criterion or isotropic foam yield criterion
The Deshpande–Fleck yield criterion for foams has the form given in the above equation and is usually written in terms of the equivalent and mean stresses as

$\hat{\sigma}^2 \equiv \frac{1}{1+(\beta/3)^2}\left[\sigma_e^2 + \beta^2\,\sigma_m^2\right] \le \sigma_y^2$

where $\beta$ is a parameter that determines the shape of the yield surface, and $\sigma_y$ is the yield stress in tension or compression.
Anisotropic Drucker–Prager yield criterion
An anisotropic form of the Drucker–Prager yield criterion is the Liu–Huang–Stout yield criterion. This yield criterion is an extension of the generalized Hill yield criterion and has the form
The coefficients are
where
where $\sigma_{c1}, \sigma_{c2}, \sigma_{c3}$ are the uniaxial yield stresses in compression in the three principal directions of anisotropy, $\sigma_{t1}, \sigma_{t2}, \sigma_{t3}$ are the uniaxial yield stresses in tension, and $\tau_{y12}, \tau_{y23}, \tau_{y31}$ are the yield stresses in pure shear. It has been assumed in the above that the quantities $\sigma_{t1}, \sigma_{t2}, \sigma_{t3}$ are positive and $\sigma_{c1}, \sigma_{c2}, \sigma_{c3}$ are negative.
The Drucker yield criterion
The Drucker–Prager criterion should not be confused with the earlier Drucker criterion which is independent of the pressure ($I_1$). The Drucker yield criterion has the form

$f := J_2^3 - \alpha\,J_3^2 - k^2 \le 0$

where $J_2$ is the second invariant of the deviatoric stress, $J_3$ is the third invariant of the deviatoric stress, $\alpha$ is a constant that lies between −27/8 and 9/4 (for the yield surface to be convex), and $k$ is a constant that varies with the value of $\alpha$. For yielding in uniaxial tension at a stress $\sigma_y$, $k^2 = \tfrac{\sigma_y^6}{27}\left(1 - \tfrac{4\alpha}{27}\right)$, where $\sigma_y$ is the yield stress in uniaxial tension.
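As a sanity check of this calibration, the sketch below (an illustrative helper with an arbitrary stress value, not a library routine) evaluates the Drucker yield function for a uniaxial state at the yield stress, which should lie on the yield surface:

```python
import numpy as np

def drucker_yield(stress, sigma_y, alpha):
    """Drucker criterion: f = J2**3 - alpha*J3**2 - k**2, with yield when f >= 0."""
    assert -27.0 / 8.0 <= alpha <= 9.0 / 4.0, "alpha outside convexity bounds"
    s = stress - np.trace(stress) / 3.0 * np.eye(3)        # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)                          # second deviatoric invariant
    J3 = np.linalg.det(s)                                  # third deviatoric invariant
    k2 = (sigma_y**6 / 27.0) * (1.0 - 4.0 * alpha / 27.0)  # calibrated to uniaxial tension
    return J2**3 - alpha * J3**2 - k2

# Uniaxial tension at the yield stress should sit on the yield surface (f ~ 0):
sigma = np.diag([250.0, 0.0, 0.0])
print(drucker_yield(sigma, sigma_y=250.0, alpha=1.0))
```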
Anisotropic Drucker Criterion
An anisotropic version of the Drucker yield criterion is the Cazacu–Barlat (CZ) yield criterion which has the form
where $J_2^0$ and $J_3^0$ are generalized, anisotropic forms of the second and third invariants of the deviatoric stress, defined in terms of the stress components and a set of material coefficients.
Cazacu–Barlat yield criterion for plane stress
For thin sheet metals, the state of stress can be approximated as plane stress. In that case the Cazacu–Barlat yield criterion reduces to its two-dimensional version with
For thin sheets of metals and alloys, the parameters of the Cazacu–Barlat yield criterion are
See also
Yield surface
Yield (engineering)
Plasticity (physics)
Material failure theory
Daniel C. Drucker
William Prager
References
Plasticity (physics)
Soil mechanics
Solid mechanics
Yield criteria | Drucker–Prager yield criterion | [
"Physics",
"Materials_science"
] | 1,294 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Deformation (mechanics)",
"Soil mechanics",
"Plasticity (physics)",
"Mechanics"
] |
17,376,728 | https://en.wikipedia.org/wiki/Renewable%20Energy%20Sources%20and%20Climate%20Change%20Mitigation | The United Nations Intergovernmental Panel on Climate Change (IPCC) published a special report on Renewable Energy Sources and Climate Change Mitigation (SRREN) on May 9, 2011. The report developed under the leadership of Ottmar Edenhofer evaluates the global potential for using renewable energy to mitigate climate change. This IPCC special report provides broader coverage of renewable energy than was included in the IPCC's 2007 climate change assessment report, as well as stronger renewable energy policy coverage.
There is a clear trend toward greater use of renewable energy sources, in part to avert the crisis that would follow the exhaustion of oil and gas. Renewable energy can contribute to "social and economic development, energy access, secure energy supply, climate change mitigation, and the reduction of negative environmental and health impacts". Under favourable circumstances, cost savings in comparison to non-renewable energy use exist.
History
The IPCC previously examined both renewable energy and energy efficiency in its fourth assessment report, published in 2007, but members decided that renewable energy commercialization merited additional in-depth coverage because of its importance in reducing carbon emissions.
The outline of the IPCC WG III's Special Report on Renewable Energy Sources and Climate Change Mitigation (SRREN) was approved at the IPCC Plenary in Budapest in April, 2008. The final report was approved at the 11th session of the IPCC Working Group III, May 2011, in Abu Dhabi. The SRREN addresses the information needs of policy makers, private sector and civil society in a comprehensive way and will provide valuable information for further IPCC publications, including the upcoming IPCC 5th Assessment Report. The SRREN was released for publication on May 9, 2011.
The Special Report "aims to provide a better understanding and broader information on the mitigation potential of renewable energy sources: technological feasibility, economic potential and market status, economic and environmental costs&benefits, impacts on energy security, co-benefits in achieving sustainable development, opportunities and synergies, options and constraints for integration into the energy supply systems and in the societies".
Main findings
Renewable energy can contribute to "social and economic development, energy access, secure energy supply, climate change mitigation, and the reduction of negative environmental and health impacts".
In the report, the IPCC said "as infrastructure and energy systems develop, in spite of the complexities, there are few, if any, fundamental technological limits to integrating a portfolio of renewable energy technologies to meet a majority share of total energy demand in locations where suitable renewable resources exist or can be supplied". Under favourable circumstances, cost savings in comparison to non-renewable energy use exist. IPCC scenarios "generally indicate that growth in renewable energy will be widespread around the world". The IPCC said that if governments were supportive, and the full range of renewable technologies were deployed, renewable energy could account for almost 80% of the world's energy supply within four decades. Rajendra Pachauri, chairman of the IPCC, said the necessary investment in renewables would cost only about 1% of global GDP annually. This approach could keep greenhouse gas concentrations to less than 450 parts per million, the safe level beyond which climate change becomes catastrophic and irreversible.
See also
IPCC 4th Assessment Report (AR4)
IPCC Fifth Assessment Report (AR5)
IPCC Summary for Policymakers
IRENA
Renewable energy commercialization
REN21
List of books about renewable energy
References
External links
Report (in English)
IPCC SRREN: Full Report (In German)
Intergovernmental Panel on Climate Change
Climate change books
Energy development
Environmental non-fiction books
Environmental reports
Technology in society
Sustainability books
Economics and climate change
Books about energy issues
2011 non-fiction books
Renewable energy
Emissions reduction
Cambridge University Press books | Renewable Energy Sources and Climate Change Mitigation | [
"Chemistry"
] | 765 | [
"Greenhouse gases",
"Emissions reduction"
] |
17,377,095 | https://en.wikipedia.org/wiki/Photo-oxidation%20of%20polymers | In polymer chemistry, photo-oxidation (sometimes: oxidative photodegradation) is the degradation of a polymer surface due to the combined action of light and oxygen. It is the most significant factor in the weathering of plastics. Photo-oxidation causes the polymer chains to break (chain scission), resulting in the material becoming increasingly brittle. This leads to mechanical failure and, at an advanced stage, the formation of microplastics. In textiles, the process is called phototendering.
Technologies have been developed to both accelerate and inhibit this process. For example, plastic building components like doors, window frames and gutters are expected to last for decades, requiring the use of advanced UV-polymer stabilizers. Conversely, single-use plastics can be treated with biodegradable additives to accelerate their fragmentation.
Many pigments and dyes can similarly have effects due to their ability to absorb UV-energy.
Susceptible polymers
Susceptibility to photo-oxidation varies depending on the chemical structure of the polymer. Some materials have excellent stability, such as fluoropolymers, polyimides, silicones and certain acrylate polymers. However, global polymer production is dominated by a range of commodity plastics which account for the majority of plastic waste. Of these polyethylene terephthalate (PET) has only moderate UV resistance and the others, which include polystyrene, polyvinyl chloride (PVC) and polyolefins like polypropylene (PP) and polyethylene (PE) are all highly susceptible.
Photo-oxidation is a form of photodegradation and begins with formation of free radicals on the polymer chain, which then react with oxygen in chain reactions. For many polymers the general autoxidation mechanism is a reasonable approximation of the underlying chemistry. The process is autocatalytic, generating increasing numbers of radicals and reactive oxygen species. These reactions result in changes to the molecular weight (and molecular weight distribution) of the polymer and as a consequence the material becomes more brittle. The process can be divided into four stages:
Initiation: the process of generating the initial free radical.
Propagation: the conversion of one active species to another.
Chain branching: steps which end with more than one active species being produced. The photolysis of hydroperoxides is the main example.
Termination: steps in which active species are removed, for instance by radical disproportionation.
Polyolefins
Polyolefins such as polyethylene and polypropylene are susceptible to photo-oxidation and around 70% of light stabilizers produced world-wide are used in their protection, despite them representing only around 50% of global plastic production. Aliphatic hydrocarbons can only absorb high energy UV-rays with a wavelength below ~250 nm; however, the Earth's atmosphere and ozone layer screen out such rays, with the normal minimum wavelength being 280–290 nm.
The bulk of the polymer is therefore photo-inert and degradation is instead attributed to the presence of various impurities, which are introduced during the manufacturing or processing stages. These include hydroperoxide and carbonyl groups, as well as metal salts such as catalyst residues.
All of these species act as photoinitiators.
The organic hydroperoxide and carbonyl groups are able to absorb UV light above 290 nm whereupon they undergo photolysis to generate radicals. Metal impurities act as photocatalysts, although such reactions can be complex. It has also been suggested that polymer-O2 charge-transfer complexes are involved. Initiation generates radical-carbons on the polymer chain, sometimes called macroradicals (P•).
Chain initiation
Polymer → P• + P•
Chain propagation
P• + O2 → POO•
POO• + PH → POOH + P•
Chain branching
POOH -> PO\bullet +\ OH\bullet
{PH} + OH\bullet -> P\bullet +\ H2O
PO\bullet -> Chain\ scission\ reactions
Termination
POO\bullet +\ POO\bullet -> cross\ linking\ reaction\ to\ non-radical\ product
POO\bullet +\ P\bullet -> cross\ linking\ reaction\ to\ non-radical\ product
P\bullet +\ P\bullet -> cross\ linking\ reaction\ to\ non-radical\ product
Classically the carbon-centred macroradicals (P•) rapidly react with oxygen to form hydroperoxyl radicals (POO•), which in turn abstract an H atom from the polymer chain to give a hydroperoxide (POOH) and a fresh macroradical. Hydroperoxides readily undergo photolysis to give an alkoxyl macroradical (PO•) and a hydroxyl radical (HO•), both of which may go on to form new polymer radicals via hydrogen abstraction. Non-classical alternatives to these steps have been proposed. The alkoxyl radical may also undergo beta scission, generating a ketone and a macroradical. This is considered to be the main cause of chain breaking in polypropylene.
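The autocatalytic character of this scheme can be made concrete with a toy kinetic model. The sketch below is not a quantitative simulation: the rate constants are arbitrary illustrative values, and the mechanism is reduced to a single radical pool R and a hydroperoxide pool H whose photolysis feeds two radicals back into the chain.

```python
# Toy kinetic sketch of the autocatalytic chain: propagating radicals R turn
# polymer into hydroperoxide H, and photolysis of H returns two radicals
# (chain branching) while radicals are lost by bimolecular termination.
# All rate constants are arbitrary illustrative values.
k_init = 1e-6  # slow direct initiation by impurities
k_prop = 1.0   # hydroperoxide formation per unit radical concentration
k_phot = 0.05  # photolysis of POOH, each event yielding two radicals
k_term = 10.0  # bimolecular radical-radical termination

R, H, dt = 0.0, 0.0, 0.01
for step in range(1, 40001):  # simple forward-Euler integration
    dR = k_init + 2.0 * k_phot * H - k_term * R * R
    dH = k_prop * R - k_phot * H
    R += dR * dt
    H += dH * dt
    if step in (1000, 2500, 5000, 10000, 20000, 40000):
        print(f"t = {step * dt:5.0f}  [POOH] = {H:.5f}")
# [POOH] grows slowly at first, accelerates as branching feeds back, and
# finally saturates when termination balances radical production.
```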
Secondary hydroperoxides can also undergo an intramolecular reaction to give a ketone group, although this is limited to polyethylene.
The ketones generated by these processes are themselves photo-active, although much more weakly. At ambient temperatures they undergo Type II Norrish reactions with chain scission. They may also absorb UV-energy, which they can then transfer to O2, causing it to enter its highly reactive singlet state. Singlet oxygen is a potent oxidising agent and can go on to cause further degradation.
Polystyrene
For polystyrene the complete mechanism of photo-oxidation is still a matter of debate, as different pathways may operate concurrently and vary according to the wavelength of the incident light.
Regardless, there is agreement on the major steps.
Pure polystyrene should not be able to absorb light with a wavelength above ~280 nm, and initiation is explained through photo-labile impurities (hydroperoxides) and charge transfer complexes, all of which are able to absorb normal sunlight. Charge-transfer complexes of oxygen and polystyrene phenyl groups absorb light to form singlet oxygen, which acts as a radical initiator. Carbonyl impurities in the polymer (c.f. acetophenone) also absorb light in the near ultraviolet range (300 to 400 nm), forming excited ketones able to abstract hydrogen atoms directly from the polymer. Hydroperoxides undergo photolysis to form hydroxyl and alkoxyl radicals.
These initiation steps generate macroradicals at tertiary sites, as these are more stabilised. The propagation steps are essentially identical to those seen for polyolefins; with oxidation, hydrogen abstraction and photolysis leading to beta scission reactions and increasing numbers of radicals.
These steps account for the majority of chain-breaking, however in a minor pathway the hydroperoxide reacts directly with polymer to form a ketone group (acetophenone) and a terminal alkene without the formation of additional radicals.
Polystyrene is observed to yellow during photo-oxidation, which is attributed to the formation of polyenes from these terminal alkenes.
Polyvinyl chloride (PVC)
Pure organochlorides like polyvinyl chloride (PVC) do not absorb any light above 220 nm. The initiation of photo-oxidation is instead caused by various irregularities in the polymer chain, such as structural defects as well as hydroperoxides, carbonyl groups, and double bonds.
Hydroperoxides formed during processing are initially the most important initiators; however, their concentration decreases during photo-oxidation whereas carbonyl concentration increases, so carbonyls may become the primary initiator over time.
Propagation steps involve the hydroperoxyl radical, which can abstract hydrogen from both hydrocarbon (-CH2-) and organochloride (-CH2Cl-) sites in the polymer at comparable rates. Radicals formed at hydrocarbon sites rapidly convert to alkenes with loss of radical chlorine. This forms allylic hydrogens which are more susceptible to hydrogen abstraction, leading to the formation of polyenes in zipper-like reactions.
When the polyenes contain at least eight conjugated double bonds they become coloured, leading to yellowing and eventual browning of the material. This is offset slightly by longer polyenes being photobleached by atmospheric oxygen; however, PVC does eventually discolour unless polymer stabilisers are present. Reactions at organochloride sites proceed via the usual hydroperoxyl and hydroperoxide species before photolysis yields the α-chloro-alkoxyl radical. This species can undergo various reactions to give carbonyls, peroxide cross-links and beta scission products.
Poly(ethylene terephthalate) - (PET)
Unlike most other commodity plastics, polyethylene terephthalate (PET) is able to absorb the near ultraviolet rays in sunlight. Absorption begins at 360 nm, becoming stronger below 320 nm and very significant below 300 nm. Despite this, PET has better resistance to photo-oxidation than other commodity plastics; this is due to the poor quantum yield of the absorption. The degradation chemistry is complicated due to simultaneous photodissociation (i.e. not involving oxygen) and photo-oxidation reactions of both the aromatic and aliphatic parts of the molecule. Chain scission is the dominant process, with chain branching and the formation of coloured impurities being less common. Carbon monoxide, carbon dioxide, and carboxylic acids are the main products.
The photo-oxidation of other linear polyesters such as polybutylene terephthalate and polyethylene naphthalate proceeds similarly.
Photodissociation involves the formation of an excited terephthalic acid unit which undergoes Norrish reactions. The type I reaction dominates, causing chain scission at the carbonyl unit to give a range of products.
Type II Norrish reactions are less common but give rise to acetaldehyde by way of vinyl alcohol esters. This has an exceedingly low odour and taste threshold and can cause an off-taste in bottled water.
Radicals formed by photolysis may initiate the photo-oxidation in PET. Photo-oxidation of the aromatic terephthalic acid core results in its step-wise oxidation to 2,5-dihydroxyterephthalic acid. The photo-oxidation process at aliphatic sites is similar to that seen for polyolefins, with the formation of hydroperoxide species eventually leading to beta-scission of the polymer chain.
Secondary factors
Environment
Perhaps surprisingly, the effect of temperature is often greater than the effect of UV exposure. This can be seen in terms of the Arrhenius equation, which shows that reaction rates have an exponential dependence on temperature. By comparison, the dependence of degradation rate on UV exposure and the availability of oxygen is broadly linear. As the oceans are cooler than land, plastic pollution in the marine environment degrades more slowly. Materials buried in landfill do not degrade by photo-oxidation at all, though they may gradually decay by other processes.
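To make the comparison concrete, the Arrhenius factor can be evaluated at a representative land temperature and sea temperature. The activation energy used below is an assumed, generic value for polymer oxidation, not a measured constant for any specific material.

```python
# Sketch comparing Arrhenius factors k ~ A * exp(-Ea / (R * T)) at a warm land
# temperature (30 C) and a cooler sea temperature (10 C). The activation
# energy Ea is an assumed, generic value, not a measured constant.
import math

R = 8.314    # gas constant, J/(mol K)
Ea = 50e3    # assumed activation energy, J/mol

def arrhenius_factor(temp_kelvin: float) -> float:
    return math.exp(-Ea / (R * temp_kelvin))

ratio = arrhenius_factor(303.15) / arrhenius_factor(283.15)
print(f"Degradation at 30 C proceeds ~{ratio:.1f}x faster than at 10 C")
```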
Mechanical stress can affect the rate of photo-oxidation and may also accelerate the physical breakup of plastic objects. Stress can be caused by mechanical load (tensile and shear stresses) or even by temperature cycling, particularly in composite systems consisting of materials with differing temperature coefficients of expansion. Similarly, sudden rainfall can cause thermal stress.
Effects of dyes and other additives
Dyes and pigments are used in polymer materials to provide colour; however, they can also affect the rate of photo-oxidation. Many absorb UV rays and in so doing protect the polymer, but absorption can cause the dyes to enter an excited state where they may attack the polymer or transfer energy to O2 to form damaging singlet oxygen. Cu-phthalocyanine is an example: it strongly absorbs UV light, yet excited Cu-phthalocyanine may act as a photoinitiator by abstracting hydrogen atoms from the polymer. Its interactions may become even more complicated when other additives are present.
Fillers such as carbon black can screen out UV light, effectively stabilising the polymer, whereas flame retardants tend to cause increased levels of photo-oxidation.
Additives to enhance degradation
Biodegradable additives may be added to polymers to accelerate their degradation. In the case of photo-oxidation OXO-biodegradation additives are used. These are transition metal salts such as iron (Fe), manganese (Mn), and cobalt (Co). Fe complexes increase the rate of photooxidation by promoting the homolysis of hydroperoxides via Fenton reactions.
The use of such additives has been controversial due to concerns that treated plastics do not fully biodegrade and instead result in the accelerated formation of microplastics. Oxo-plastics would be difficult to distinguish from untreated plastic, but their inclusion during plastic recycling can create a destabilised product with fewer potential uses, potentially jeopardising the business case for recycling any plastic. OXO-biodegradation additives were banned in the EU in 2019.
Prevention
UV attack by sunlight can be ameliorated or prevented by adding anti-UV polymer stabilizers, usually prior to shaping the product by injection moulding. UV stabilizers in plastics usually act by absorbing the UV radiation preferentially, and dissipating the energy as low-level heat. The chemicals used are similar to those in sunscreen products, which protect skin from UV attack. They are used frequently in plastics, including cosmetics and films. Different UV stabilizers are utilized depending upon the substrate, intended functional life, and sensitivity to UV degradation. UV stabilizers, such as benzophenones, work by absorbing the UV radiation and preventing the formation of free radicals. Depending upon substitution, the UV absorption spectrum is changed to match the application. Concentrations normally range from 0.05% to 2%, with some applications up to 5%.
Frequently, glass can be a better alternative to polymers when it comes to UV degradation. Most of the commonly used glass types are highly resistant to UV radiation. Explosion protection lamps for oil rigs for example can be made either from polymer or glass. Here, the UV radiation and rough weathers belabor the polymer so much, that the material has to be replaced frequently.
Poly(ethylene-naphthalate) (PEN) can be protected by applying a zinc oxide coating, which acts as protective film reducing the diffusion of oxygen. Zinc oxide can also be used on polycarbonate (PC) to decrease the oxidation and photo-yellowing rate caused by solar radiation.
Analysis
Weather testing of polymers
The photo-oxidation of polymers can be investigated by either natural or accelerated weather testing. Such testing is important in determining the expected service-life of plastic items as well as the fate of waste plastic.
In natural weather testing, polymer samples are directly exposed to open weather for a continuous period of time, while accelerated weather testing uses a specialized test chamber which simulates weathering by sending a controlled amount of UV light and water at a sample. A test chamber may be advantageous in that the exact weathering conditions can be controlled, and the UV or moisture conditions can be made more intense than in natural weathering. Thus, degradation is accelerated and the test is less time-consuming.
Through weather testing, the impact of photooxidative processes on the mechanical properties and lifetimes of polymer samples can be determined. For example, the tensile behavior can be elucidated through measuring the stress–strain curve for a specimen. This stress–strain curve is created by applying a tensile stress (which is measured as the force per area applied to a sample face) and measuring the corresponding strain (the fractional change in length). Stress is usually applied until the material fractures, and from this stress–strain curve, mechanical properties such as the Young's modulus can be determined. Overall, weathering weakens the sample, and as it becomes more brittle, it fractures more easily. This is observed as a decrease in the yield strain, fracture strain, and toughness, as well as an increase in the Young's modulus and break stress (the stress at which the material fractures).
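As an illustration of how a modulus is extracted from such a measurement, the snippet below fits the initial linear region of a synthetic stress–strain data set; the numbers are invented for illustration only.

```python
# Sketch: estimating Young's modulus as the slope of the initial linear
# (elastic) region of a stress-strain curve. The data points are synthetic
# values invented for illustration, not measurements of any real polymer.
import numpy as np

strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])  # dimensionless
stress = np.array([0.0, 2.1, 4.0, 6.2, 7.9, 10.1])             # MPa

# Least-squares fit of a straight line through the small-strain region.
modulus, intercept = np.polyfit(strain, stress, 1)
print(f"Young's modulus ~ {modulus:.0f} MPa (~{modulus / 1000:.2f} GPa)")
```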
Aside from measuring the impact of degradation on mechanical properties, the degradation rate of plastic samples can also be quantified by measuring the change in mass of a sample over time, as microplastic fragments can break off from the bulk material as degradation progresses and the material becomes more brittle through chain-scission. Thus, the percentage change in mass is often measured in experiments to quantify degradation.
Mathematical models can also be created to predict the change in mass of a polymer sample over the weathering process. Because mass loss occurs at the surface of the polymer sample, the degradation rate is dependent on surface area. Thus, a model for the dependence of degradation on surface area can be made by assuming that the rate of change in mass resulting from degradation is directly proportional to the surface area SA of the specimen:
dm/dt = −kd ρ SA
Here, ρ is the density and kd is known as the specific surface degradation rate (SSDR), which changes depending on the polymer sample's chemical composition and weathering environment. Furthermore, for a microplastic sample, SA is often approximated as the surface area of a cylinder or sphere. Such an equation can be solved to determine the mass of a polymer sample as a function of time.
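A minimal sketch of this model for a spherical particle is shown below. Because dm/dt = −kd ρ SA implies dr/dt = −kd for a sphere, the radius shrinks linearly with time; the density and SSDR values are assumed, illustrative figures.

```python
# Sketch of the surface-area degradation model dm/dt = -k_d * rho * SA for a
# spherical microplastic particle. For a sphere this reduces to dr/dt = -k_d,
# so the radius shrinks linearly. The density and SSDR values are assumed,
# illustrative figures, not measurements.
import math

rho = 950.0                        # assumed polymer density, kg/m^3
k_d = 10e-6 / (365.25 * 86400)     # assumed SSDR: 10 micrometres/year, in m/s
r0 = 100e-6                        # initial particle radius: 100 micrometres

def mass(r: float) -> float:
    """Mass of a sphere of radius r (kg)."""
    return rho * (4.0 / 3.0) * math.pi * r ** 3

for years in range(0, 11, 2):
    t = years * 365.25 * 86400     # elapsed time in seconds
    r = max(r0 - k_d * t, 0.0)     # linear radius loss, floored at zero
    print(f"{years:2d} yr: {100.0 * mass(r) / mass(r0):5.1f}% of initial mass")
```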
Detection
Degradation can be detected before serious cracks are seen in a product by using infrared spectroscopy, which is able to detect chemical species formed by photo-oxidation. In particular, peroxy-species and carbonyl groups have distinct absorption bands.
In the example shown at left, carbonyl groups were easily detected by IR spectroscopy from a cast thin film. The product was a road cone made by rotational moulding in LDPE, which had cracked prematurely in service. Many similar cones also failed because an anti-UV additive had not been used during processing. Other plastic products which failed included polypropylene mancabs used at roadworks which cracked after service of only a few months.
The effects of degradation can also be characterized through scanning electron microscopy (SEM). For example, through SEM, defects like cracks and pits can be directly visualized, as shown at right. These samples were exposed to 840 hours of exposure to UV light and moisture using a test chamber. Crack formation is often associated with degradation, such that materials that do not display significant cracking behavior, such as HDPE in the right example, are more likely to be stable against photooxidation compared to other materials like LDPE and PP. However, some plastics that have undergone photooxidation may also appear smoother in an SEM image, with some defects like grooves having disappeared afterwards. This is seen in polystyrene in the right example.
See also
Forensic polymer engineering
Photodegradation
Polymer degradation
Stress corrosion cracking
Thermal degradation of polymers
References
Polymers
Materials degradation
Plastics and the environment
Ultraviolet radiation | Photo-oxidation of polymers | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,974 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Materials science",
"Ultraviolet radiation",
"Polymer chemistry",
"Polymers",
"Materials degradation"
] |
17,377,231 | https://en.wikipedia.org/wiki/Thunderbolt-2000 | The Thunderbolt-2000 (RT/LT-2000) is a wheeled MLRS system produced by the National Chung-Shan Institute of Science and Technology (NCSIST). It is in service with the Republic of China armed forces and was created with the intention of attacking enemy forces as they disembark from the sea.
Overview
The prototype of the Thunderbolt-2000 weapon system was originally placed on the chassis of an M977 Heavy Expanded Mobility Tactical Truck. The production model utilises MAN HX81 8x8 wheeled trucks instead, with the first order batch comprising 57 launchers and 54 ammunition carriers/reloaders, or locally produced versions of the same vehicle. The LT-2000 was scheduled to enter service with all 3 main army groups in Taiwan from 2010, with each army group's artillery corps receiving 1 battalion of RT/LT-2000, featuring 3 companies/batteries with 6 RT/LT-2000 launchers each. The original CSIST LT-2000 prototype battery is in service with Kinmen Command, deployed there since mid-2000.
History
The system made its public debut in 1997 during the Han Kuang Exercise. The production platform is the MAN HX81 8x8 wheeled truck, with 57 launchers and 54 ammunition carriers/reloaders ordered.
Munitions
The LT/RT-2000 uses three types of munitions: Mk15 (60 rounds, 3 pods of 20 rounds each, 15 km range), Mk30 (27 rounds, 3 pods of 9 rounds each, 30 km range) and Mk45 (12 rounds, 2 pods of 6 rounds each, 45 km range). The Mk15 is the 117 mm rocket used by the Kung Feng VI and carries 6,400 steel balls of 6.4 mm size. The Mk30 rocket is somewhat larger at 180 mm caliber and can carry either 267 M77 Dual Purpose Improved Conventional Munition (DPICM) bomblets or 18,300 steel balls of 8 mm size, with a range of 30 km. The Mk45 is larger still at 227 mm caliber and can carry either 518 M77 bomblets or 25,000 steel balls of 8 mm size, with a range of 45 km. Other types of munitions, including FAE bomblets, are also being developed by CSIST and the ROC (Taiwan) Army.
Extended range munitions
Due to increasing threats, NCSIST has developed more advanced versions of the original family of rockets with increased ranges. As of 2019, NCSIST had tested improved rockets out to 63 nautical miles and believed that even longer ranges were readily achievable. Testing of more advanced rockets began soon after. With a range of 200–300 km, the new munitions are able to reach China from Taiwan's main island.
See also
References
Rocket artillery
Self-propelled artillery of the Republic of China
Modular rocket launchers
Multiple rocket launchers
Military vehicles introduced in the 1990s
National Chung-Shan Institute of Science and Technology | Thunderbolt-2000 | [
"Engineering"
] | 606 | [
"Modular design",
"Modular rocket launchers"
] |
17,378,052 | https://en.wikipedia.org/wiki/Noisy%20channel%20model | The noisy channel model is a framework used in spell checkers, question answering, speech recognition, and machine translation. In this model, the goal is to find the intended word given a word where the letters have been scrambled in some manner.
In spell-checking
See Chapter B of.
Given an alphabet Σ, let Σ* be the set of all finite strings over Σ. Let the dictionary D of valid words be some subset of Σ*, i.e.,
D ⊆ Σ*.
The noisy channel is the matrix
Γws = Pr(s | w),
where w ∈ D is the intended word and s ∈ Σ* is the scrambled word that was actually received.
The goal of the noisy channel model is to find the intended word given the scrambled word that was received. The decision function is a function that, given a scrambled word, returns
the intended word.
Methods of constructing a decision function include the maximum likelihood rule, the maximum a posteriori rule, and the minimum distance rule.
In some cases, it may be better to accept the scrambled word as the intended
word rather than attempt to find an intended word in the dictionary. For
example, the word schönfinkeling may not be in the dictionary, but might
in fact be the intended word.
Example
Consider the English alphabet Σ = {a, b, c, ..., z}. Some subset D ⊆ Σ* makes up the dictionary of valid English words.
There are several mistakes that may occur while typing, including:
Missing letters, e.g., leter instead of letter
Accidental letter additions, e.g., misstake instead of mistake
Swapping letters, e.g., recieved instead of received
Replacing letters, e.g., fimite instead of finite
To construct the noisy channel matrix Γ, we must consider the probability of each mistake, given the intended word (Pr(s | w) for all s ∈ Σ* and w ∈ D). These probabilities may be gathered, for example, by considering the Damerau–Levenshtein distance between s and w or by comparing the draft of an essay with one that has been manually edited for spelling.
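A toy noisy-channel spell corrector along these lines might look as follows. The three-word dictionary, the unigram priors, and the channel model (which simply decays with Levenshtein distance rather than using a full Damerau–Levenshtein confusion matrix) are all invented for illustration.

```python
# Toy noisy-channel spell corrector. The dictionary, unigram priors Pr(w) and
# the channel model Pr(s|w) (a simple decay with Levenshtein distance) are
# all invented for illustration.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via a single-row dynamic programme."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

priors = {"letter": 0.5, "later": 0.3, "litter": 0.2}  # toy Pr(w)

def correct(scrambled: str) -> str:
    # Maximum a posteriori rule: argmax_w Pr(s|w) * Pr(w), with an assumed
    # channel model Pr(s|w) = 0.1 ** edit_distance(s, w).
    return max(priors, key=lambda w: priors[w] * 0.1 ** edit_distance(scrambled, w))

print(correct("leter"))  # -> "letter"
```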
In machine translation
See chapter 1, and chapter 25 of.
Suppose we want to translate a foreign language to English. We could model Pr(E | F) directly: the probability that we have English sentence E given foreign sentence F, and then pick the most likely one, Ê = argmax_E Pr(E | F). However, by Bayes' law, we have the equivalent equation:
Ê = argmax_E Pr(E | F) = argmax_E Pr(F | E) Pr(E)
The benefit of the noisy-channel model is in terms of data: if collecting a parallel corpus is costly, then we would have only a small parallel corpus, so we can only train a moderately good English-to-foreign translation model, and a moderately good foreign-to-English translation model. However, we can collect a large corpus in the foreign language only, and a large corpus in the English language only, to train two good language models. Combining these four models, we immediately get a good English-to-foreign translator and a good foreign-to-English translator.
The cost of the noisy-channel model is that using Bayesian inference is more costly than using a translation model directly. Instead of reading out the most likely translation directly from Pr(E | F), it would have to read out predictions from both the translation model and the language model, multiply them, and search for the highest number.
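The decoding arithmetic can be sketched directly: given candidate English sentences with hypothetical translation-model scores Pr(F|E) and language-model scores Pr(E), the decoder returns the argmax of their product. All numbers below are placeholders, not output of a real system.

```python
# Noisy-channel decoding sketch: pick the English candidate E maximising
# Pr(F|E) * Pr(E). Candidates and probabilities are invented placeholders,
# not output of any real translation or language model.
candidates = {
    # English candidate E:   (Pr(F|E), Pr(E))
    "the cat sat":           (0.20, 0.30),
    "the cat sitted":        (0.25, 0.01),  # scores well under the TM, bad English
    "a feline was seated":   (0.05, 0.10),
}

best = max(candidates, key=lambda e: candidates[e][0] * candidates[e][1])
print(best)  # -> "the cat sat": both an adequate translation and fluent English
```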
In speech recognition
Speech recognition can be thought of as translating from a sound-language to a text-language. Consequently, we have
T̂ = argmax_T Pr(S | T) Pr(T)
where Pr(S | T) is the probability that a speech sound S is produced if the speaker is intending to say text T. Intuitively, this equation states that the most likely text is a text that's both a likely text in the language, and produces the speech sound with high probability.
The utility of the noisy-channel model is not in capacity. Theoretically, any noisy-channel model can be replicated by a direct model. However, the noisy-channel model factors the model into two parts which are appropriate for the situation, and consequently it is generally more well-behaved.
When a human speaks, it does not produce the sound directly, but first produces the text it wants to speak in the language centers of the brain, then the text is translated into sound by the motor cortex, vocal cords, and other parts of the body. The noisy-channel model matches this model of the human, and so it is appropriate. This is justified in the practical success of noisy-channel model in speech recognition.
Example
Consider the sound-language sentence (written in IPA for English) S = aɪ wʊd laɪk wʌn tuː. There are three possible texts:
I would like one to.
I would like one too.
I would like one two.
that are equally likely, in the sense that Pr(S | T1) = Pr(S | T2) = Pr(S | T3). With a good English language model, we would have Pr(T2) > Pr(T1) > Pr(T3), since the second sentence is grammatical, the first is not quite, but close to a grammatical one (such as "I would like one to [go]."), while the third one is far from grammatical.
Consequently, the noisy-channel model would output T2 = "I would like one too." as the best transcription.
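Written out as code, the rescoring in this example reduces to a language-model argmax, since the acoustic likelihood Pr(S|T) is the same for all three candidates; the prior values are invented for illustration.

```python
# Rescoring the three equally likely transcriptions from the example above:
# Pr(S|T) is identical for all three, so the language-model prior Pr(T)
# decides. The prior values are invented for illustration.
p_sound_given_text = 1.0  # shared acoustic likelihood; cancels in the argmax
priors = {
    "I would like one to.":  0.10,  # nearly grammatical ("one to [go]")
    "I would like one too.": 0.80,  # fully grammatical
    "I would like one two.": 0.01,  # far from grammatical
}

best = max(priors, key=lambda t: p_sound_given_text * priors[t])
print(best)  # -> "I would like one too."
```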
See also
Coding theory
References
Automatic identification and data capture
Computational linguistics
Statistical natural language processing | Noisy channel model | [
"Technology"
] | 992 | [
"Natural language and computing",
"Computational linguistics",
"Data",
"Automatic identification and data capture"
] |
17,378,120 | https://en.wikipedia.org/wiki/Sabato%20Institute%20of%20Technology | Sabato Institute of Technology is an academic institution that belongs partially to the National University of General San Martín and partially to Argentina's National Atomic Energy Commission. It is named after Jorge Alberto Sabato, an Argentine physicist and technologist distinguished in the field of metallurgy.
Sabato Institute teaches mainly Materials Science related courses, at undergraduate and postgraduate levels. It is one of the three institutes managed by Argentina's National Atomic Energy Commission (CNEA), as well as Balseiro Institute and Dan Beninson Institute of Nuclear Technology. The International Atomic Energy Agency (IAEA) designated the CNEA in 2018 as a "Collaborative Center" in Latin America. The CNEA, through its training institutes, assumed the commitment to provide assistance to the IAEA Activities Program, thus contributing to the promotion of the peaceful uses of nuclear energy.
The Institute has adequate scientific equipment, libraries, laboratories and a high academic level of its teaching staff, made up of researchers from the National Atomic Energy Commission, the National Scientific and Technical Research Council, the National Institute of Industrial Technology, the National Agricultural Technology Institute and industry specialists. Sabato Institute students have full access to the "Eduardo J. Savino" Information Center.
Course offerings
Undergraduate
Bachelor's degree (B.Eng.) in Materials Engineering. The institute admits students who have completed at least two years of university studies (corresponding to basic knowledge in calculus, algebra, physics and chemistry) and who undergo an admission exam and an interview with the authorities.
Postgraduate
Master's degree (M.Sc.) in Materials Science and Technology.
Doctor's degree (Ph.D.) in Science and Technology, Materials mention.
Doctor's degree (Ph.D.) in Science and Technology, Physics mention.
Double doctor's degree (Ph.D.) in Astrophysics, along with the Karlsruhe Institute of Technology.
Specialization in nondestructive testing.
Other
Diploma in Materials Science for Nuclear Industry. The institute admits students who have completed secondary education.
Lab Zero course for senior year high school students.
Scholarships
Sabato Institute offers scholarships for the Engineering program via Argentina's National Atomic Energy Commission or via private companies such as Techint. The scholarships are intended to guarantee exclusive dedication to study. In the same way, it offers scholarships for the Master's.
Notable people
Professors
Dr. José Rodolfo Galvele (1937–2011), chemist and corrosion specialist; founder of the Corrosion Department at National Atomic Energy Commission; in 1981 and 1987 he received the Corrosion Science TP Hoar Award; in 1999 he received the NACE Whitney Award and the UK Institute of Corrosion Evans Award.
Dr. José Victorio Ovejero-García (1941 – 2021), physicist, metallurgist and hydrogen damage specialist; developer of the Hydrogen Microprint Technique, a method to visualize hydrogen trapped in steels; in 1989 and 1991 he received an award from the Argentine Institute of Iron and Steel Industry.
Deans
1993 – 2007: José Rodolfo Galvele
2007 – 2019: Ana María Monti
since 2019: Ricardo Mario Carranza
See also
National Atomic Energy Commission
National University of General San Martin
Jorge Alberto Sabato
Balseiro Institute
References
External links
Sabato Institute website
"Eduardo J. Savino" Information Center website
National Atomic Energy Commission website
National University of General San Martín website
Educational institutions established in 1993
Engineering universities and colleges in Argentina
Education in Argentina
Materials science institutes
1993 establishments in Argentina | Sabato Institute of Technology | [
"Materials_science"
] | 711 | [
"Materials science organizations",
"Materials science institutes"
] |
17,378,755 | https://en.wikipedia.org/wiki/Glutaconyl-CoA | Glutaconyl-CoA is an intermediate in the metabolism of lysine. It is an organic compound containing a coenzyme substructure, which classifies it as a fatty ester lipid molecule. Being a lipid makes the molecule hydrophobic, which makes it insoluble in water. The molecule has a molecular formula of , and a molecular weight 879.62 grams per mole.
Glutaconyl-CoA is postulated to be the main toxin in glutaric aciduria type 1. In certain fermentative bacteria, glutaconyl-CoA decarboxylation is catalyzed by a Na+-dependent decarboxylase () and is coupled with Na+ ion translocation, which creates a sodium-motive force as an alternate energy source for these organisms.
See also
Glutaconate CoA-transferase
Glutaconyl-CoA decarboxylase
References
Thioesters of coenzyme A | Glutaconyl-CoA | [
"Chemistry"
] | 210 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
17,378,861 | https://en.wikipedia.org/wiki/Isobutyryl-CoA | Isobutyryl-coenzyme A is a starting material for many natural products derived from Poly-Ketide Synthase (PKS) assembly lines, as well as PKS-NRPS hybrid assembly lines. These products can often be used as antibiotics. Notably, it is also an intermediate in the metabolism of the amino acid Valine, and structurally similar to intermediates in the catabolism of other small amino acids.
See also
Isobutyryl-CoA mutase
Isobutyryl-coenzyme A dehydrogenase deficiency
Thioesters of coenzyme A | Isobutyryl-CoA | [
"Chemistry"
] | 124 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
17,378,970 | https://en.wikipedia.org/wiki/2-Methylbutyryl-CoA | 2-Methylbutyryl-CoA is an intermediate in the metabolism of isoleucine.
See also
2-Methylbutyryl-CoA dehydrogenase deficiency
References
Thioesters of coenzyme A | 2-Methylbutyryl-CoA | [
"Chemistry"
] | 47 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
17,379,844 | https://en.wikipedia.org/wiki/Hinduja%20Foundries | Hinduja Foundries Ltd (HFL) is a part of the $12 billion Hinduja Group. Hinduja Foundries is India’s largest casting maker. Hinduja has three facilities in Chennai and Hyderabad which put together manufacturer’s 100,000 MT of castings in the form of cylinder blocks, heads, housings, manifolds, brake drums etc., made of aluminum, cast iron and SG iron.
Hinduja Foundries has two plants: the parent plant in Ennore, Chennai, and another in Sriperumbudur.
History
Hinduja Foundries was established in 1959 as Ennore Foundries. It was named after Ennore, the fishermen's hamlet situated approximately 15 km north of Chennai where it was founded. Initially promoted by British Leyland, Ennore Foundries began commercial production in 1961. Since then the castings manufactured at its plant have been supplied to automobile industries across India.
Hinduja Foundries was established to cater to the needs of Ashok Leyland and became the largest casting maker in India. Ennore Foundries acquired Ductron Castings in Hyderabad and set up a greenfield foundry in Sriperumbudur.
Hinduja is the largest automotive jobbing foundry in India, with a production capacity of nearly 100,000 MT of grey iron castings and 3,000 MT of aluminum gravity die-castings.
Products
Products from Hinduja Foundries range from 10 kg to 300 kg in grey iron and 0.5 to 16.5 kg in aluminum gravity die castings. Product ranges include cylinder blocks, cylinder heads, flywheels, flywheel housings, transmission casings, clutch plates, brake drums, intake manifolds and clutch housings for HCV, LCV and car segments.
Holdings
LRLIH Limited, UK (earlier a part of the British Leyland Group, UK) was acquired by the Hindujas and Iveco Limited in 1987. Iveco is a part of the FIAT Group of Italy.
Hinduja Group was founded by Shri Paramnand Deepchand Hinduja in 1914 in Mumbai. Today the Hinduja Group is a conglomerate with a presence in 25 countries that employs over 25,000 personnel worldwide. Its businesses include transport, energy, information technology, agri business, project development, banking and finance, and trading.
The company's promoters and their shareholding details are: LRLIH Limited, UK – 59.09%; Ashok Leyland – 21.01%; and public and financial institutions – 19.90%.
References
External links
Hinduja Foundries website
Hinduja Group website
Ashok Leyland website
Hinduja Foundries plans investments worth 3.5 bln rupees. Reuters India. Tue 6 May 2008 6:25pm IST
Hinduja Foundries Is Back In Dividend List. Hinduja Foundries Limited, formerly known as Ennore Foundries Limited, will pay equity dividend after a gap of 10 years. News Post India. Tuesday 6 May 2008
Hinduja Foundries New Facility at Sriperumbudur Commences Production
Indian companies established in 1959
Manufacturing companies based in Chennai
Foundries
Hinduja Group
Metal companies of India
1959 establishments in Madras State | Hinduja Foundries | [
"Chemistry"
] | 636 | [
"Foundries",
"Metallurgical facilities"
] |
17,380,942 | https://en.wikipedia.org/wiki/Source%20rock | In petroleum geology, source rock is rock which has generated hydrocarbons or which could generate hydrocarbons. Source rocks are one of the necessary elements of a working petroleum system. They are organic-rich sediments that may have been deposited in a variety of environments including deep water marine, lacustrine and deltaic. Oil shale can be regarded as an organic-rich but immature source rock from which little or no oil has been generated and expelled. Subsurface source rock mapping methodologies make it possible to identify likely zones of petroleum occurrence in sedimentary basins as well as shale gas plays.
Types of source rocks
Source rocks are classified from the types of kerogen that they contain, which in turn governs the type of hydrocarbons that will be generated:
Type I source rocks are formed from algal remains deposited under anoxic conditions in deep lakes: they tend to generate waxy crude oils when submitted to thermal stress during deep burial.
Type II source rocks are formed from marine planktonic and bacterial remains preserved under anoxic conditions in marine environments: they produce both oil and gas when thermally cracked during deep burial.
Type III source rocks are formed from terrestrial plant material that has been decomposed by bacteria and fungi under oxic or sub-oxic conditions: they tend to generate mostly gas with associated light oils when thermally cracked during deep burial. Most coals and coaly shales are generally Type III source rocks.
Maturation and expulsion
With increasing burial by later sediments and increase in temperature, the kerogen within the rock begins to break down. This thermal degradation or cracking releases shorter chain hydrocarbons from the original large and complex molecules occurring in the kerogen.
The hydrocarbons generated from thermally mature source rock are first expelled, along with other pore fluids, due to the effects of internal source rock over-pressuring caused by hydrocarbon generation as well as by compaction. Once released into porous and permeable carrier beds or into fault planes, oil and gas then move upwards towards the surface in an overall buoyancy-driven process known as secondary migration.
Mapping source rocks in sedimentary basins
Areas underlain by thermally mature generative source rocks in a sedimentary basin are called generative basins or depressions or else hydrocarbon kitchens. Mapping those regional oil and gas generative "hydrocarbon kitchens" is feasible by integrating the existing source rock data into seismic depth maps that structurally follow the source horizon(s). It has been statistically observed at a world scale that zones of high success ratios in finding oil and gas generally correlate in most basin types (such as intracratonic or rift basins) with the mapped "generative depressions". Cases of long distance oil migration into shallow traps away from the "generative depressions" are usually found in foreland basins.
Besides pointing to zones of high petroleum potential within a sedimentary basin, subsurface mapping of a source rock's degree of thermal maturity is also the basic tool to identify and broadly delineate shale gas plays.
World class source rocks
Certain source rocks are referred to as "world class", meaning that they are not only of very high quality but are also thick and of wide geographical distribution. Examples include:
Middle Devonian to lower Mississippian widespread marine anoxic oil and gas source beds in the Mid-Continent and Appalachia areas of North America: (e.g. the Bakken Formation of the Williston Basin, the Antrim Shale of the Michigan Basin, the Marcellus Shale of the Appalachian Basin).
Kimmeridge Clay – This upper Jurassic marine mudstone or its stratigraphic equivalents generated most of the oil found in the North Sea and the Norwegian Sea.
La Luna Formation – This Late Cretaceous (mostly Turonian) formation generated most of the oil in northwestern Venezuela.
Late Carboniferous coals – Coals of this age generated most of the gas in the southern North Sea, the Netherlands Basin and the northwest German Basin.
Hanifa Formation – This upper Jurassic laminated carbonate-rich unit has sourced the oil in the giant Ghawar field in Saudi Arabia.
See also
Basin modelling
References
External links
Australian Source Rock and Fluid Atlas
Source rock publications
Petroleum geology | Source rock | [
"Chemistry"
] | 853 | [
"Petroleum",
"Petroleum geology"
] |
17,381,118 | https://en.wikipedia.org/wiki/Comb%20space | In mathematics, particularly topology, a comb space is a particular subspace of that resembles a comb. The comb space has properties that serve as a number of counterexamples. The topologist's sine curve has similar properties to the comb space. The deleted comb space is a variation on the comb space.
Formal definition
Consider R2 with its standard topology and let K be the set {1/n : n a positive integer}. The set C defined by:
C = ({0} × [0,1]) ∪ (K × [0,1]) ∪ ([0,1] × {0}),
considered as a subspace of R2 equipped with the subspace topology, is known as the comb space. The deleted comb space, D, is defined by:
D = {(0,1)} ∪ (K × [0,1]) ∪ ([0,1] × {0}).
This is the comb space with the line segment {0} × (0,1) deleted.
Topological properties
The comb space and the deleted comb space have some interesting topological properties mostly related to the notion of connectedness.
The comb space, C, is path connected and contractible, but not locally contractible, locally path connected, or locally connected.
The deleted comb space, D, is connected:
Let E be the comb space without the segment {0} × (0,1], i.e., E = (K × [0,1]) ∪ ([0,1] × {0}). E is also path connected and the closure of E is the comb space. As E ⊆ D ⊆ Ē (the closure of E), and E is connected, the deleted comb space is also connected.
The deleted comb space is not path connected since there is no path from (0,1) to (0,0):
Suppose there is a path from p = (0, 1) to the point (0, 0) in D. Let f : [0, 1] → D be this path. We shall prove that f −1{p} is both open and closed in [0, 1], contradicting the connectedness of this set. Clearly we have f −1{p} is closed in [0, 1] by the continuity of f. To prove that f −1{p} is open, we proceed as follows: Choose a neighbourhood V (open in R2) about p that doesn’t intersect the x–axis. Suppose x is an arbitrary point in f −1{p}. Clearly, f(x) = p. Then since f −1(V) is open, there is a basis element U containing x such that f(U) is a subset of V. We assert that f(U) = {p}, which will mean that U is an open subset of f −1{p} containing x. Since x was arbitrary, f −1{p} will then be open. We know that U is connected since it is a basis element for the order topology on [0, 1]. Therefore, f(U) is connected. Suppose f(U) contains a point s other than p. Then s = (1/n, z) must belong to D. Choose r such that 1/(n + 1) < r < 1/n. Since f(U) does not intersect the x-axis, the sets A = (−∞, r) × R and B = (r, +∞) × R will form a separation of f(U), contradicting the connectedness of f(U). Therefore, f −1{p} is both open and closed in [0, 1]. This is a contradiction.
The comb space is homotopy equivalent to a point, but does not admit a strong deformation retract onto a point for any choice of basepoint that lies in the segment {0} × (0,1].
See also
Connected space
Hedgehog space
Infinite broom
List of topologies
Locally connected space
Order topology
Topologist's sine curve
References
Topological spaces
Trees (topology) | Comb space | [
"Mathematics"
] | 715 | [
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Trees (topology)"
] |
17,382,375 | https://en.wikipedia.org/wiki/Iron-based%20superconductor | Iron-based superconductors (FeSC) are iron-containing chemical compounds whose superconducting properties were discovered in 2006.
In 2008, led by the recently discovered iron pnictide compounds (originally known as oxypnictides), they were in the first stages of experimentation and implementation. (Previously, most high-temperature superconductors were cuprates, based on layers of copper and oxygen sandwiched between other substances such as La, Ba and Hg.)
This new type of superconductor is based instead on conducting layers of iron and a pnictide (chemical elements in group 15 of the periodic table, here typically arsenic (As) and phosphorus (P)) and seems to show promise as the next generation of high temperature superconductors.
Much of the interest is because the new compounds are very different from the cuprates and may help lead to a theory of non-BCS-theory superconductivity.
More recently these have been called the ferropnictides. The first ones found belong to the group of oxypnictides. Some of the compounds have been known since 1995,
and their semiconductive properties have been known and patented since 2006.
It has also been found that some iron chalcogenides superconduct. Undoped β-FeSe is the simplest iron-based superconductor, yet it shows diverse properties. It has a critical temperature (Tc) of 8 K at normal pressure, and 36.7 K under high pressure and by means of intercalation. The combination of both intercalation and higher pressure results in re-emerging superconductivity at Tc of up to 48 K (see, and references therein).
A subset of iron-based superconductors with properties similar to the oxypnictides, known as the 122 iron arsenides, attracted attention in 2008 due to their relative ease of synthesis.
The oxypnictides such as LaOFeAs are often referred to as the '1111' pnictides.
Iron pnictide superconductors crystallize into the [FeAs] layered structure alternating with spacer or charge reservoir block.
The compounds can thus be classified into "1111" system RFeAsO (R: the rare earth element) including LaFeAsO, SmFeAsO, PrFeAsO, etc.; "122" type BaFe2As2, SrFe2As2 or CaFe2As2; "111" type LiFeAs, NaFeAs, and LiFeP. Doping or applied pressure will transform the compounds into superconductors.
Compounds such as Sr2ScFePO3 discovered in 2009 are referred to as the '42622' family, as FePSr2ScO3. Noteworthy is the synthesis of (Ca4Al2O6−y)(Fe2Pn2) (or Al-42622(Pn); Pn = As and P) using high-pressure synthesis technique. Al-42622(Pn) exhibit superconductivity for both Pn = As and P with the transition temperatures of 28.3 K and 17.1 K, respectively. The a-lattice parameters of Al-42622(Pn) (a = 3.713 Å and 3.692 Å for Pn = As and P, respectively) are smallest among the iron-pnictide superconductors. Correspondingly, Al-42622(As) has the smallest As–Fe–As bond angle (102.1°) and the largest As distance from the Fe planes (1.5 Å). High-pressure technique also yields (Ca3Al2O5−y)(Fe2Pn2) (Pn = As and P), the first reported iron-based superconductors with the perovskite-based '32522' structure. The transition temperature (Tc) is 30.2 K for Pn = As and 16.6 K for Pn = P. The emergence of superconductivity is ascribed to the small tetragonal a-axis lattice constant of these materials. From these results, an empirical relationship was established between the a-axis lattice constant and Tc in iron-based superconductors.
In 2009, it was shown that undoped iron pnictides had a magnetic quantum critical point deriving from competition between electronic localization and itinerancy.
Phase diagrams
Similarly to superconducting cuprates, the properties of iron based superconductors change dramatically with doping. Parent compounds of FeSC are usually metals (unlike the cuprates) but, similarly to cuprates, are ordered antiferromagnetically, an order often termed a spin-density wave (SDW). The superconductivity (SC) emerges upon either hole or electron doping. In general, the phase diagram is similar to the cuprates.
Superconductivity at high temperature
Superconducting transition temperatures are listed in the tables (some at high pressure). BaFe1.8Co0.2As2 is predicted to have an upper critical field of 43 tesla from the measured coherence length of 2.8 nm.
In 2011, Japanese scientists discovered that immersing iron-based compounds in hot alcoholic beverages such as red wine increased the compounds' superconductivity. Earlier reports indicated that excess Fe is the cause of the bicollinear antiferromagnetic order and does not favour superconductivity. Further investigation revealed that weak acid has the ability to deintercalate the excess Fe from the interlayer sites. Therefore, weak acid annealing suppresses the antiferromagnetic correlation by deintercalating the excess Fe, and hence superconductivity is achieved.
There is an empirical correlation of the transition temperature with electronic band structure: the Tc maximum is observed when some of the Fermi surface stays in proximity to a Lifshitz topological transition. A similar correlation has since been reported for high-Tc cuprates, which indicates a possible similarity of the superconductivity mechanisms in these two families of high temperature superconductors.
Thin films
The critical temperature is increased further in thin-films of iron chalcogenides on suitable substrates. In 2015, a Tc of around 105–111 K was observed in thin films of iron selenide grown on strontium titanate.
See also
Charge-transfer complex
Color superconductivity in quarks
Kondo effect
Magnetic sail
National Superconducting Cyclotron Laboratory
Spallation Neutron Source
Superconducting radio frequency
Superfluid film
Timeline of low-temperature technology
References
Superconductors
Iron compounds | Iron-based superconductor | [
"Chemistry",
"Materials_science"
] | 1,400 | [
"Superconductivity",
"Superconductors"
] |
17,382,394 | https://en.wikipedia.org/wiki/ThinkPad%20UltraBay | UltraBay is originally IBM's name for the swappable drive bay in the ThinkPad range of laptop computers. When the ThinkPad product line was sold to Lenovo, the concept and the name stayed. It is also used in some of Lenovo's own IdeaPad Y Series laptops.
Introduced with the ThinkPad 750 series in 1995, this technology has gone through redesigns with almost every new generation of ThinkPad, which may lead to confusion. The following table gives an overview of the different UltraBay types, in which models they occurred and which drives are available for them. Note that the optical drive bay in G series and R40e series ThinkPads is not an UltraBay in that the drives are fixed and not removable. It is however, mechanically, an UltraBay 2000-device without the surrounding "caddy".
On the media side, different UltraBays relate to the form factor of the drives they accept; some machines can accept UltraBay devices up to 12.5 mm thick, whereas others are limited to devices no more than 9.5 mm thick.
The IdeaPad Y400 and Y500 laptops have an UltraBay slot which can be swapped for another hard drive, another fan or another Nvidia GT650M (or GT750M) GPU which will work in SLI with the system's primary video card for increased graphics performance. Existing orders for the UltraBay Y500 DVD Burner (no built in optical drive) were cancelled in early June, 2013.
Starting in 2014, Lenovo changed the design of the ThinkPad bay adapter and dropped the "UltraBay" terminology from use. What remained (in the ThinkPad W540 product) was an option for a removable Serial ATA (SATA) "Caddy" accessory which, with a screw driver, allowed the optical drive to be replaced with a second 2.5 inch SATA storage device. Battery expansion in the caddy bay was no longer offered, and earlier hot-swap functionality was essentially rendered difficult if not impossible.
Nomenclature
See also
Disk enclosure
Caddy (hardware)
References
Computer peripherals
UltraBay
IBM laptops
ThinkPad
Computer-related introductions in 1995 | ThinkPad UltraBay | [
"Technology"
] | 449 | [
"Mobile computer stubs",
"Computer peripherals",
"Mobile technology stubs",
"Components"
] |