Dataset schema (column: type, observed range):
id: int64, 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, lengths 0 to 30
173,186
https://en.wikipedia.org/wiki/Roll-to-roll%20processing
In the field of electronic devices, roll-to-roll processing, also known as web processing, reel-to-reel processing or R2R, is the process of creating electronic devices on a roll of flexible plastic, metal foil, or flexible glass. In other fields predating this use, it can refer to any process of applying a coating, printing, or performing other processes starting with a roll of a flexible material and re-reeling it after processing to create an output roll. These processes, and others such as sheeting, can be grouped together under the general term converting. When the rolls of material have been coated, laminated or printed, they can subsequently be slit to their finished size on a slitter rewinder. In electronic devices Large circuits made with thin-film transistors and other devices can be patterned onto these large substrates, which can be up to a few metres wide and long. Some of the devices can be patterned directly, much like an inkjet printer deposits ink. For most semiconductors, however, the devices must be patterned using photolithography techniques. Roll-to-roll processing of large-area electronic devices reduces manufacturing cost. Most notable are solar cells, which are still prohibitively expensive for most markets due to the high cost per unit area of traditional bulk (mono- or polycrystalline) silicon manufacturing. Other applications could arise which take advantage of the flexible nature of the substrates, such as electronics embedded into clothing, large-area flexible displays, and roll-up portable displays. LED (Light Emitting Diode) Inorganic LED - Flexible LED strips are commonly made in 25, 50, 100 m, or even longer lengths using a roll-to-roll process. Long neon-style LED tubes use such flexible strips sealed in a PVC or silicone diffusing encapsulation. Organic LED (OLED) - Roll-to-roll processing is being adopted for the OLED screens of foldable phones. Thin-film cells A crucial issue for a roll-to-roll thin-film cell production system is the deposition rate of the microcrystalline layer, and this can be tackled using four approaches: very high frequency plasma-enhanced chemical vapour deposition (VHF-PECVD), microwave (MW)-PECVD, hot wire chemical vapour deposition (hot-wire CVD), and the use of ultrasonic nozzles in an in-line process. In electrochemical devices Roll-to-roll processing has been used in the manufacture of electrochemical devices such as batteries, supercapacitors, fuel cells, and water electrolyzers. Here, roll-to-roll processing is used for electrode manufacturing and is key to reducing manufacturing cost through the stable production of electrodes on various film substrates such as metal foils, membranes, diffusion media, and separators. See also Amorphous silicon Low cost solar cell Printed electronics Roll slitting Rolling (metalworking) Thin film solar cell Web manufacturing Tape automated bonding, TAB References Electronics manufacturing Semiconductors
Roll-to-roll processing
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
621
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Electronics manufacturing", "Solid state engineering", "Matter" ]
173,196
https://en.wikipedia.org/wiki/Spin%20network
In physics, a spin network is a type of diagram which can be used to represent states and interactions between particles and fields in quantum mechanics. From a mathematical perspective, the diagrams are a concise way to represent multilinear functions and functions between representations of matrix groups. The diagrammatic notation can thus greatly simplify calculations. Roger Penrose described spin networks in 1971. Spin networks have since been applied to the theory of quantum gravity by Carlo Rovelli, Lee Smolin, Jorge Pullin, Rodolfo Gambini and others. Spin networks can also be used to construct a particular functional on the space of connections which is invariant under local gauge transformations. Definition Penrose's definition A spin network, as described in Penrose (1971), is a kind of diagram in which each line segment represents the world line of a "unit" (either an elementary particle or a compound system of particles). Three line segments join at each vertex. A vertex may be interpreted as an event in which either a single unit splits into two or two units collide and join into a single unit. Diagrams whose line segments are all joined at vertices are called closed spin networks. Time may be viewed as going in one direction, such as from the bottom to the top of the diagram, but for closed spin networks the direction of time is irrelevant to calculations. Each line segment is labelled with an integer called a spin number. A unit with spin number n is called an n-unit and has angular momentum nħ/2, where ħ is the reduced Planck constant. For bosons, such as photons and gluons, n is an even number. For fermions, such as electrons and quarks, n is odd. Given any closed spin network, a non-negative integer can be calculated which is called the norm of the spin network. Norms can be used to calculate the probabilities of various spin values. A network whose norm is zero has zero probability of occurrence. The rules for calculating norms and probabilities are beyond the scope of this article. However, they imply that for a spin network to have nonzero norm, two requirements must be met at each vertex. Suppose a vertex joins three units with spin numbers a, b, and c. Then, these requirements are stated as: Triangle inequality: a ≤ b + c and b ≤ a + c and c ≤ a + b. Fermion conservation: a + b + c must be an even number. For example, a = 3, b = 4, c = 6 is impossible since 3 + 4 + 6 = 13 is odd, and a = 3, b = 4, c = 9 is impossible since 9 > 3 + 4. However, a = 3, b = 4, c = 5 is possible since 3 + 4 + 5 = 12 is even, and the triangle inequality is satisfied. Some conventions use labellings by half-integers, with the condition that the sum a + b + c must be a whole number. Formal approach to definition Formally, a spin network may be defined as a (directed) graph whose edges are associated with irreducible representations of a compact Lie group and whose vertices are associated with intertwiners of the edge representations adjacent to it. Properties A spin network, immersed into a manifold, can be used to define a functional on the space of connections on this manifold. One computes holonomies of the connection along every link (closed path) of the graph, determines representation matrices corresponding to every link, multiplies all matrices and intertwiners together, and contracts indices in a prescribed way. A remarkable feature of the resulting functional is that it is invariant under local gauge transformations. 
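To make the admissibility conditions above concrete, here is a minimal illustrative check (not part of the original article; the function name is ours) that encodes the two vertex rules for integer spin labels and reproduces the worked examples from the text:

def vertex_is_admissible(a: int, b: int, c: int) -> bool:
    """Return True if integer spin numbers a, b, c may meet at a trivalent vertex."""
    triangle = a <= b + c and b <= a + c and c <= a + b  # triangle inequality
    even_sum = (a + b + c) % 2 == 0                      # fermion conservation
    return triangle and even_sum

# Examples from the text:
assert not vertex_is_admissible(3, 4, 6)  # 3 + 4 + 6 = 13 is odd
assert not vertex_is_admissible(3, 4, 9)  # 9 > 3 + 4 violates the triangle inequality
assert vertex_is_admissible(3, 4, 5)      # even sum, triangle inequality satisfied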
Usage in physics In the context of loop quantum gravity In loop quantum gravity (LQG), a spin network represents a "quantum state" of the gravitational field on a 3-dimensional hypersurface. The set of all possible spin networks (or, more accurately, "s-knots", that is, equivalence classes of spin networks under diffeomorphisms) is countable; it constitutes a basis of the LQG Hilbert space. One of the key results of loop quantum gravity is the quantization of areas: the operator of the area A of a two-dimensional surface Σ should have a discrete spectrum. Every spin network is an eigenstate of each such operator, and the area eigenvalue equals $A_{\Sigma} = 8\pi \ell_{\mathrm{PL}}^{2}\gamma \sum_{i} \sqrt{j_{i}(j_{i}+1)}$, where the sum goes over all intersections i of Σ with the spin network. In this formula, $\ell_{\mathrm{PL}}$ is the Planck length, $\gamma$ is the Immirzi parameter and $j_i = 0, 1/2, 1, 3/2, \ldots$ is the spin associated with the link i of the spin network. The two-dimensional area is therefore "concentrated" in the intersections with the spin network. According to this formula, the lowest possible non-zero eigenvalue of the area operator corresponds to a link that carries the spin-1/2 representation. Assuming an Immirzi parameter on the order of 1, this gives the smallest possible measurable area of ~10^−66 cm². The formula for area eigenvalues becomes somewhat more complicated if the surface is allowed to pass through the vertices, as with anomalous diffusion models. Also, the eigenvalues of the area operator A are constrained by ladder symmetry. Similar quantization applies to the volume operator. The volume of a 3D submanifold that contains part of a spin network is given by a sum of contributions from each node inside it. One can think of every node in a spin network as an elementary "quantum of volume" and of every link as a "quantum of area" surrounding this volume. More general gauge theories Similar constructions can be made for general gauge theories with a compact Lie group G and a connection form. This is actually an exact duality over a lattice. Over a manifold, however, assumptions like diffeomorphism invariance are needed to make the duality exact (smearing Wilson loops is tricky). Later, it was generalized by Robert Oeckl to representations of quantum groups in 2 and 3 dimensions using the Tannaka–Krein duality. Michael A. Levin and Xiao-Gang Wen have also defined string-nets using tensor categories that are objects very similar to spin networks. However, the exact connection with spin networks is not yet clear. String-net condensation produces topologically ordered states in condensed matter. Usage in mathematics In mathematics, spin networks have been used to study skein modules and character varieties, which correspond to spaces of connections. See also Spin connection Spin structure Character variety Penrose graphical notation Spin foam String-net Trace diagram Tensor network References Further reading Early papers I. B. Levinson, "Sum of Wigner coefficients and their graphical representation," Proceed. Phys-Tech Inst. Acad Sci. Lithuanian SSR 2, 17-30 (1956) (see the Euclidean high temperature (strong coupling) section) (see the sections on Abelian gauge theories) Modern papers Xiao-Gang Wen, "Quantum Field Theory of Many-body Systems – from the Origin of Sound to an Origin of Light and Fermions". (Dubbed string-nets here.) Books G. E. Stedman, Diagram Techniques in Group Theory, Cambridge University Press, 1990. Predrag Cvitanović, Group Theory: Birdtracks, Lie's, and Exceptional Groups, Princeton University Press, 2008.
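As a worked special case (added here for clarity; it is not spelled out in the article), the smallest non-zero area eigenvalue quoted above follows from taking a single intersection carrying j = 1/2 in the spectrum formula:

$$A_{\min} = 8\pi\gamma\,\ell_{\mathrm{PL}}^{2}\sqrt{\tfrac{1}{2}\left(\tfrac{1}{2}+1\right)} = 4\sqrt{3}\,\pi\,\gamma\,\ell_{\mathrm{PL}}^{2} \approx 21.8\,\gamma\,\ell_{\mathrm{PL}}^{2},$$

i.e. a fixed numerical multiple of the Immirzi parameter times the Planck area, which for an Immirzi parameter of order one is of the order of the Planck area, consistent with the figure cited above.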
Diagrams Quantum field theory Loop quantum gravity Mathematical physics Diagram algebras
Spin network
[ "Physics", "Mathematics" ]
1,534
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical physics" ]
173,225
https://en.wikipedia.org/wiki/Alan%20Lomax
Alan Lomax (January 31, 1915 – July 19, 2002) was an American ethnomusicologist, best known for his numerous field recordings of folk music of the 20th century. He was a musician, folklorist, archivist, writer, scholar, political activist, oral historian, and film-maker. Lomax produced recordings, concerts, and radio shows in the US and in England, which played an important role in preserving folk music traditions in both countries, and helped start both the American and British folk revivals of the 1940s, 1950s, and early 1960s. Collecting material first with his father, folklorist and collector John Lomax, and later alone and with others, Lomax recorded thousands of songs and interviews for the Archive of American Folk Song, of which he was the director, at the Library of Congress on aluminum and acetate discs. After 1942, when Congress terminated the Library of Congress's funding for folk song collecting, Lomax continued to collect independently in Britain, Ireland, the Caribbean, Italy, Spain, and the United States, using the latest recording technology, assembling an enormous collection of American and international culture. In March 2004, the material captured and produced without Library of Congress funding was acquired by the Library, which "brings the entire seventy years of Alan Lomax's work together under one roof at the Library of Congress, where it has found a permanent home." With the start of the Cold War, Lomax continued to advocate for a public role for folklore, even as academic folklorists turned inward. He devoted much of the latter part of his life to advocating what he called Cultural Equity, which he sought to put on a solid theoretical foundation through his Cantometrics research (which included a prototype Cantometrics-based educational program, the Global Jukebox). In the 1970s and 1980s, Lomax advised the Smithsonian Institution's Folklife Festival and produced a series of films about folk music, American Patchwork, which aired on PBS in 1991. In his late 70s, Lomax completed the long-deferred memoir The Land Where the Blues Began (1993), linking the birth of the blues to debt peonage, segregation, and forced labor in the American South. Lomax's greatest legacy is in preserving and publishing recordings of musicians in many folk and blues traditions around the US and Europe. The artists Lomax is credited with discovering and bringing to a wider audience include blues guitarist Robert Johnson, protest singer Woody Guthrie, folk artist Pete Seeger, country musician Burl Ives, Scottish Gaelic singer Flora MacNeil, and country blues singers Lead Belly and Muddy Waters, among many others. "Alan scraped by the whole time, and left with no money," said Don Fleming, director of Lomax's Association for Cultural Equity. "He did it out of the passion he had for it, and found ways to fund projects that were closest to his heart". Biography Early life Lomax was born in Austin, Texas in 1915, the third of four children born to Bess Brown and pioneering folklorist and author John A. Lomax. Two of his siblings also developed significant careers studying folklore: Bess Lomax Hawes and John Lomax Jr. The elder Lomax, a former professor of English at Texas A&M University and a celebrated authority on Texas folklore and cowboy songs, had worked as an administrator, and later Secretary of the Alumni Society, of the University of Texas. Due to childhood asthma, chronic ear infections, and generally frail health, Lomax had mostly been home-schooled in elementary school. 
In Dallas, he entered the Terrill School for Boys (a tiny prep school that later became St. Mark's School of Texas). Lomax excelled at Terrill and then transferred to the Choate School (now Choate Rosemary Hall) in Connecticut for a year, graduating eighth in his class at age 15 in 1930. Owing to his mother's declining health, however, rather than going to Harvard University as his father had wished, Lomax matriculated at the University of Texas at Austin. A roommate, future anthropologist Walter Goldschmidt, recalled Lomax as "frighteningly smart, probably classifiable as a genius", though Goldschmidt remembered Lomax exploding one night while studying: "Damn it! The hardest thing I've had to learn is that I'm not a genius." At the University of Texas, Lomax read Nietzsche and developed an interest in philosophy. He joined and wrote a few columns for the school paper, The Daily Texan, but resigned when it refused to publish an editorial he had written on birth control. At this time he also began collecting "race" records and taking his dates to black-owned night clubs, at the risk of expulsion. During the spring term his mother died, and his youngest sister Bess, age 10, was sent to live with an aunt. Although the Great Depression was rapidly causing his family's resources to plummet, Harvard came up with enough financial aid for the 16-year-old Lomax to spend his second year there. He enrolled in philosophy and physics and also pursued a long-distance informal reading course in Plato and the Pre-Socratics with University of Texas professor Albert P. Brogan. He also became involved in radical politics and came down with pneumonia. His grades suffered, diminishing his financial aid prospects. Lomax, now 17, therefore took a break from studying to join his father's folk song collecting field trips for the Library of Congress, co-authoring American Ballads and Folk Songs (1934) and Negro Folk Songs as Sung by Lead Belly (1936). His first field collecting without his father was done with Zora Neale Hurston and Mary Elizabeth Barnicle in the summer of 1935. He returned to the University of Texas that fall and was awarded a BA in Philosophy, summa cum laude, and membership in Phi Beta Kappa in May 1936. Lack of money prevented him from immediately attending graduate school at the University of Chicago, as he desired, but he later corresponded with and pursued graduate studies with Melville J. Herskovits at Columbia University and with Ray Birdwhistell at the University of Pennsylvania. Alan Lomax married Elizabeth Harold Goodman, then a student at the University of Texas, in February 1937. They were married for 12 years and had a daughter, Anne (later known as Anna). Elizabeth assisted him in recording in Haiti, Alabama, Appalachia, and Mississippi. Elizabeth also wrote radio scripts of folk operas featuring American music that were broadcast over the BBC Home Service as part of the war effort. During the 1950s, after she and Lomax divorced, she conducted lengthy interviews for Lomax with folk music personalities, including Vera Ward Hall and the Reverend Gary Davis. Lomax also did important field work with Elizabeth Barnicle and Zora Neale Hurston in Florida and the Bahamas (1935); with John Wesley Work III and Lewis Jones in Mississippi (1941 and 1942); with folksingers Robin Roberts and Jean Ritchie in Ireland (1950); with his second wife Antoinette Marchand in the Caribbean (1961); with Shirley Collins in Great Britain and the Southeastern U.S. 
(1959); with Joan Halifax in Morocco; and with his daughter. All those who assisted and worked with him were accurately credited on the resultant Library of Congress and other recordings, as well as in his many books, films, and publications. Assistant in charge as well as commercial records and radio broadcasts From 1937 to 1942, Lomax was Assistant in Charge of the Archive of Folk Song of the Library of Congress to which he and his father and numerous collaborators contributed more than ten thousand field recordings. A pioneering oral historian, Lomax recorded substantial interviews with many folk and jazz musicians, including Woody Guthrie, Lead Belly, Jelly Roll Morton and other jazz pioneers, and Big Bill Broonzy. On one of his trips in 1941, he went to Clarksdale, Mississippi, hoping to record the music of Robert Johnson. When he arrived, he was told by locals that Johnson had died but that another local man, Muddy Waters, might be willing to record his music for Lomax. Using recording equipment that filled the trunk of his car, Lomax recorded Waters' music; it is said that hearing Lomax's recording was the motivation that Waters needed to leave his farm job in Mississippi to pursue a career as a blues musician, first in Memphis and later in Chicago. As part of this work, Lomax traveled through Michigan and Wisconsin in 1938 to record and document the traditional music of that region. Over four hundred recordings from this collection are now available at the Library of Congress. "He traveled in a 1935 Plymouth sedan, toting a Presto instantaneous disc recorder and a movie camera. And when he returned nearly three months later, having driven thousands of miles on barely paved roads, it was with a cache of 250 discs and 8 reels of film, documents of the incredible range of ethnic diversity, expressive traditions, and occupational folklife in Michigan." In late 1939, Lomax hosted two series on CBS's nationally broadcast American School of the Air, called American Folk Song and Wellsprings of Music, both music appreciation courses that aired daily in the schools and were supposed to highlight links between American folk and classical orchestral music. As host, Lomax sang and presented other performers, including Burl Ives, Woody Guthrie, Lead Belly, Pete Seeger, Josh White, and the Golden Gate Quartet. The individual programs reached ten million students in 200,000 U.S. classrooms and were also broadcast in Canada, Hawaii, and Alaska, but both Lomax and his father felt that the concept of the shows, which portrayed folk music as mere raw material for orchestral music, was deeply flawed and failed to do justice to vernacular culture. In 1940, under Lomax's supervision, RCA made two groundbreaking suites of commercial folk music recordings: Woody Guthrie's Dust Bowl Ballads and Lead Belly's The Midnight Special and Other Southern Prison Songs. Though they did not sell especially well when released, Lomax's biographer John Szwed calls these "some of the first concept albums". In 1940, Lomax and his close friend Nicholas Ray wrote and produced the 15-minute program Back Where I Came From, which aired three nights per week on CBS and featured folk tales, proverbs, prose, and sermons, as well as songs, organized thematically. Its racially integrated cast included Burl Ives, Lead Belly, Josh White, Sonny Terry, and Brownie McGhee. In February 1941, Lomax spoke and gave a demonstration of his program along with talks by Nelson A. 
Rockefeller from the Pan American Union, and the president of the American Museum of Natural History, at a global conference in Mexico of a thousand broadcasters CBS had sponsored to launch its worldwide programming initiative. Mrs. Roosevelt invited Lomax to Hyde Park. Despite its success and high visibility, Back Where I Come From never picked up a commercial sponsor. The show ran for only twenty-one weeks before it was suddenly canceled in February 1941. On hearing the news, Woody Guthrie wrote Lomax from California, "Too honest again, I suppose? Maybe not purty enough. O well, this country's a getting to where it can't hear its own voice. Someday the deal will change." Lomax himself wrote that in all his work he had tried to capture "the seemingly incoherent diversity of American folk song as an expression of its democratic, inter-racial, international character, as a function of its inchoate and turbulent many-sided development." On December 8, 1941, as "Assistant in Charge at the Library of Congress", he sent telegrams to fieldworkers in ten different localities across the United States, asking them to collect reactions of ordinary Americans to the bombing of Pearl Harbor and the subsequent declaration of war by the United States. A second series of interviews, called "Dear Mr. President", was recorded in January and February 1942. While serving in the United States Army in World War II, Lomax produced and hosted numerous radio programs in connection with the war effort. The 1944 "ballad opera", The Martins and the Coys, broadcast in Britain (but not the USA) by the BBC, featuring Burl Ives, Woody Guthrie, Will Geer, Sonny Terry, Pete Seeger, and Fiddlin' Arthur Smith, among others, was released on Rounder Records in 2000. In the late 1940s, Lomax produced a series of commercial folk music albums for Decca Records and organized a series of concerts at New York's Town Hall and Carnegie Hall, featuring blues, calypso, and flamenco music. He also hosted a radio show, Your Ballad Man, in 1949 that was broadcast nationwide on the Mutual Radio Network and featured a highly eclectic program, such as gamelan music; Django Reinhardt; klezmer music; Sidney Bechet; Wild Bill Davison; jazzy pop songs by Maxine Sullivan and Jo Stafford; readings of the poetry of Carl Sandburg; hillbilly music with electric guitars; and Finnish brass bands. He also was a key participant in the V.D. Radio Project in 1949, creating a number of "ballad dramas" featuring country and gospel superstars, including Roy Acuff, Woody Guthrie, Hank Williams, and Sister Rosetta Tharpe (among others), that aimed to convince men and women suffering from syphilis to seek treatment. Move to Europe and later life In December 1949 a newspaper printed a story, "Red Convictions Scare 'Travelers, that mentioned a dinner given by the Civil Rights Association to honor five lawyers who had defended people accused of being Communists. The article mentioned Alan Lomax as one of the sponsors of the dinner, along with C. B. Baldwin, campaign manager for Henry A. Wallace in 1948; music critic Olin Downes of The New York Times; and W.E.B. Du Bois, all of whom it accused of being members of Communist front groups. The following June, Red Channels, a pamphlet edited by former F.B.I. agents which became the basis for the entertainment industry blacklist of the 1950s, listed Lomax as an artist or broadcast journalist sympathetic to Communism. 
(Others listed included Aaron Copland, Leonard Bernstein, Yip Harburg, Lena Horne, Langston Hughes, Burl Ives, Dorothy Parker, Pete Seeger, and Josh White.) That summer, Congress was debating the McCarran Act, which required the registration and fingerprinting of all "subversives" in the United States, restrictions of their right to travel, and detention in case of "emergencies", while the House Un-American Activities Committee was broadening its hearings. Feeling sure that the Act would pass and realizing that his career in broadcasting was in jeopardy, Lomax, who was newly divorced and already had an agreement with Goddard Lieberson of Columbia Records to record in Europe, hastened to renew his passport, cancel his speaking engagements, and plan for his departure, telling his agent he hoped to return in January "if things cleared up". He set sail on September 24, 1950, on board the steamer . Sure enough, in October, FBI agents were interviewing Lomax's friends and acquaintances. Lomax never told his family exactly why he went to Europe, only that he was developing a library of world folk music for Columbia. Nor did he allow anyone to say he was forced to leave. In a letter to the editor of a British newspaper, Lomax took a writer to task for describing him as a "victim of witch-hunting," insisting that he was in the UK only to work on his Columbia Project. Lomax spent the 1950s based in London, from where he edited the 18-volume Columbia World Library of Folk and Primitive Music, an anthology issued on newly invented LP records. He spent seven months in Spain, where, in addition to recording three thousand items from most of the regions of Spain, he made copious notes and took hundreds of photos of "not only singers and musicians but anything that interested him – empty streets, old buildings, and country roads", bringing to these photos, "a concern for form and composition that went beyond the ethnographic to the artistic". He drew a parallel between photography and field recording: Recording folk songs works like a candid cameraman. I hold the mike, use my hand for shading volume. It's a big problem in Spain because there is so much emotional excitement, noise all around. Empathy is most important in field work. It's necessary to put your hand on the artist while he sings. They have to react to you. Even if they're mad at you, it's better than nothing. When Columbia Records producer George Avakian gave jazz arranger Gil Evans a copy of the Spanish World Library LP, Miles Davis and Evans were "struck by the beauty of pieces such as the 'Saeta', recorded in Seville, and a panpiper's tune ('Alborada de Vigo') from Galicia, and worked them into the 1960 album Sketches of Spain." For the Scottish, English, and Irish volumes, he worked with the BBC and folklorists Peter Douglas Kennedy, Scots poet Hamish Henderson, and with the Irish folklorist Séamus Ennis, recording among others, Margaret Barry and the songs in Irish of Elizabeth Cronin; Scots ballad singer Jeannie Robertson; and Harry Cox of Norfolk, England, and interviewing some of these performers at length about their lives. In 1953 a young David Attenborough commissioned Lomax to host six 20-minute episodes of the BBC TV series The Song Hunter, which featured performances by a wide range of traditional musicians from all over Britain and Ireland, as well as Lomax himself. 
In 1957, Lomax hosted a folk music show on BBC's Home Service titled A Ballad Hunter and organized a skiffle group, Alan Lomax and the Ramblers (who included Ewan MacColl, Peggy Seeger, and Shirley Collins), which appeared on British television. His ballad opera Big Rock Candy Mountain premiered December 1955 at Joan Littlewood's Theatre Workshop and featured Ramblin' Jack Elliot. In Scotland, Lomax is credited with being an inspiration for the School of Scottish Studies, founded in 1951, the year of his first visit there. Lomax and Diego Carpitella's survey of Italian folk music for the Columbia World Library, conducted in 1953 and 1954, with the cooperation of the BBC and the Accademia Nazionale di Santa Cecilia in Rome, helped capture a snapshot of a multitude of important traditional folk styles shortly before they disappeared. The pair amassed one of the most representative folk song collections of any culture. From Lomax's Spanish and Italian recordings emerged one of the first theories explaining the types of folk singing that predominate in particular areas, a theory that incorporates work style, the environment, and the degrees of social and sexual freedom. Return to the United States Upon his return to New York in 1959, Lomax produced a concert, Folksong '59, in Carnegie Hall, featuring Arkansas singer Jimmy Driftwood; the Selah Jubilee Singers and Drexel Singers (gospel groups); Muddy Waters and Memphis Slim (blues); Earl Taylor and the Stoney Mountain Boys (bluegrass); Pete Seeger, Mike Seeger (urban folk revival); and The Cadillacs (a rock and roll group). The occasion marked the first time rock and roll and bluegrass were performed on the Carnegie Hall Stage. "The time has come for Americans not to be ashamed of what we go for, musically, from primitive ballads to rock 'n' roll songs", Lomax told the audience. According to Izzy Young, the audience booed when he told them to lay down their prejudices and listen to rock 'n' roll. In Young's opinion, "Lomax put on what is probably the turning point in American folk music...At that concert, the point he was trying to make was that Negro and white music were mixing, and rock and roll was that thing." Alan Lomax had met 20-year-old English folk singer Shirley Collins while living in London. The two were romantically involved and lived together for some years. When Lomax obtained a contract from Atlantic Records to re-record some of the American musicians first recorded in the 1940s, using improved equipment, Collins accompanied him. Their folk song collecting trip to the Southern states, known colloquially as the Southern Journey, lasted from July to November 1959 and resulted in many hours of recordings, featuring performers such as Almeda Riddle, Hobart Smith, Wade Ward, Charlie Higgins and Bessie Jones and culminated in the discovery of Fred McDowell. Recordings from this trip were issued under the title Sounds of the South and some were also featured in the Coen brothers' 2000 film O Brother, Where Art Thou?. Lomax wished to marry Collins but when the recording trip was over, she returned to England and married Austin John Marshall. In an interview in The Guardian newspaper, Collins expressed irritation that The Land Where The Blues Began, Lomax's 1993 account of the journey, barely mentioned her. "All it said was, 'Shirley Collins was along for the trip'. It made me hopping mad. I wasn't just 'along for the trip'. I was part of the recording process, I made notes, I drafted contracts, I was involved in every part". 
Collins addressed the perceived omission in her memoir, America Over the Water, published in 2004. Lomax married Antoinette Marchand on August 26, 1961. They separated the following year and divorced in 1967. In 1962, Lomax and singer and Civil Rights Activist Guy Carawan, music director at the Highlander Folk School in Monteagle, Tennessee, produced the album, Freedom in the Air: Albany Georgia, 1961–62, on Vanguard Records for the Student Nonviolent Coordinating Committee. Lomax was a consultant to Carl Sagan for the Voyager Golden Record sent into space on the 1977 Voyager Spacecraft to represent the music of the earth. Music he helped choose included the blues, jazz, and rock 'n' roll of Blind Willie Johnson, Louis Armstrong, and Chuck Berry; Andean panpipes and Navajo chants; Azerbaijani mugham performed by two balaban players, a Sicilian sulfur miner's lament; polyphonic vocal music from the Mbuti Pygmies of Zaire, and the Georgians of the Caucasus; and a shepherdess song from Bulgaria by Valya Balkanska; in addition to Bach, Mozart, and Beethoven, and more. Sagan later wrote that it was Lomax "who was a persistent and vigorous advocate for including ethnic music even at the expense of Western classical music. He brought pieces so compelling and beautiful that we gave in to his suggestions more often than I would have thought possible. There was, for example, no room for Debussy among our selections because Azerbaijanis play bagpipe-sounding instruments [balaban] and Peruvians play panpipes and such exquisite pieces had been recorded by ethnomusicologists known to Lomax." Death Alan Lomax died in Safety Harbor, Florida on July 19, 2002 at the age of 87. Cultural equity As a member of the Popular Front and People's Songs in the 1940s, Alan Lomax promoted what was then known as "One World" and today is called multiculturalism. In the late forties he produced a series of concerts at Town Hall and Carnegie Hall that presented flamenco guitar and calypso, along with country blues, Appalachian music, Andean music, and jazz. His radio shows of the 1940s and 1950s explored musics of all the world's peoples. Lomax recognized that folklore (like all forms of creativity) occurs at the local and not the national level and flourishes not in isolation but in fruitful interplay with other cultures. He was dismayed that mass communications appeared to be crushing local cultural expressions and languages. In 1950 he echoed anthropologist Bronisław Malinowski (1884–1942), who believed the role of the ethnologist should be that of advocate for primitive man (as indigenous people were then called), when he urged folklorists to similarly advocate for the folk. Some, such as Richard Dorson, objected that scholars shouldn't act as cultural arbiters, but Lomax believed it was unethical to stand idly by as the magnificent variety of the world's cultures and languages was "grayed out" by centralized commercial entertainment and educational systems. Although he acknowledged potential problems with intervention, he urged that folklorists with their special training actively assist communities in safeguarding and revitalizing their own local traditions. Similar ideas had been put into practice by Benjamin Botkin, Harold W. Thompson, and Louis C. Jones, who believed that folklore studied by folklorists should be returned to its home communities to enable it to thrive anew. They have been realized in the annual (since 1967) Smithsonian Folk Festival on the Mall in Washington, D.C. 
(for which Lomax served as a consultant), in national and regional initiatives by public folklorists and local activists in helping communities gain recognition for their oral traditions and lifeways both in their home communities and in the world at large; and in the National Heritage Awards, concerts, and fellowships given by the NEA and various State governments to master folk and traditional artists. In 1983, Lomax founded The Association for Cultural Equity (ACE). It is housed at the Fine Arts Campus of Hunter College in New York City and is the custodian of the Alan Lomax Archive. The Association's mission is to "facilitate cultural equity" and practice "cultural feedback" and "preserve, publish, repatriate and freely disseminate" its collections. Though Alan Lomax's appeals to anthropology conferences and repeated letters to UNESCO fell on deaf ears, the modern world seems to have caught up to his vision. In an article first published in the 2009 Louisiana Folklore Miscellany, Barry Jean Ancelet, folklorist and chair of the Modern Languages Department at University of Louisiana at Lafayette, wrote: In 2001, in the wake of the attacks in New York and Washington of September 11, UNESCO's Universal Declaration of Cultural Diversity declared the safeguarding of languages and intangible culture on a par with protection of individual human rights and as essential for human survival as biodiversity is for nature, ideas remarkably similar to those forcefully articulated by Alan Lomax many years before. FBI investigations From 1942 to 1979, Lomax repeatedly was investigated and interviewed by the Federal Bureau of Investigation (FBI), but nothing incriminating was discovered, and the investigation was abandoned. Scholar and jazz pianist Ted Gioia uncovered and published extracts from Alan Lomax's 800-page FBI files. The investigation appears to have started when an anonymous informant reported overhearing Lomax's father telling guests in 1941 about what he considered his son's communist sympathies. Looking for leads, the FBI seized on the fact that, at the age of 17 in 1932 while attending Harvard University for a year, Lomax had been arrested in Boston, Massachusetts in connection with a political demonstration. In 1942 the FBI sent agents to interview students at Harvard's freshman dormitory about Lomax's participation in a demonstration that had occurred at Harvard ten years earlier in support of the immigration rights of one Edith Berkman, a Jewish woman, dubbed the "red flame" for her labor organizing activities among the textile workers of Lawrence, Massachusetts, and threatened with deportation as an alleged "Communist agitator". Lomax had been charged with disturbing the peace and fined $25. Berkman, however, had been cleared of all accusations against her and was not deported. Nor had Lomax's Harvard academic record been affected in any way by his activities in her defense. Nevertheless, the bureau continued trying vainly to show that in 1932 Lomax had either distributed communist literature or made public speeches in support of the Communist Party. According to Ted Gioia: Lomax must have felt it necessary to address the suspicions. He gave a sworn statement to an FBI agent on April 3, 1942, denying both of these charges. He also explained his arrest while at Harvard as the result of police overreaction. He was, he claimed, 15 at the time – he was actually 17 and a college student – and he said he had intended to participate in a peaceful demonstration. 
Lomax said he and his colleagues agreed to stop their protest when police asked them to, but that he was grabbed by a couple of policemen as he was walking away. "That is pretty much the story there, except that it distressed my father very, very much", Lomax told the FBI. "I had to defend my righteous position, and he couldn't understand me and I couldn't understand him. It has made a lot of unhappiness for the two of us because he loved Harvard and wanted me to be a great success there." Lomax transferred to the University of Texas the following year. Lomax left Harvard, after having spent his sophomore year there, to join John A. Lomax and John Lomax, Jr. in collecting folk songs for the Library of Congress and to assist his father in writing his books. In withdrawing him (in addition to not being able to afford the tuition), the elder Lomax had probably wanted to separate his son from new political associates that he considered undesirable. But Alan had also not been happy there and probably also wanted to be nearer his bereaved father and young sister, Bess, and to return to the close friends he had made during his first year at the University of Texas. In June 1942 the FBI approached the Librarian of Congress, Archibald McLeish, in an attempt to have Lomax fired as Assistant in Charge of the Library's Archive of American Folk Song. At the time, Lomax was preparing for a field trip to the Mississippi Delta on behalf of the Library, where he made landmark recordings of Muddy Waters, Son House, and David "Honeyboy" Edwards, among others. McLeish wrote to Hoover, defending Lomax: "I have studied the findings of these reports very carefully. I do not find positive evidence that Mr. Lomax has been engaged in subversive activities and I am therefore taking no disciplinary action toward him." Nevertheless, according to Gioia: Yet what the probe failed to find in terms of prosecutable evidence, it made up for in speculation about his character. An FBI report dated July 23, 1943, describes Lomax as possessing "an erratic, artistic temperament" and a "bohemian attitude". It says: "He has a tendency to neglect his work over a period of time and then just before a deadline he produces excellent results." The file quotes one informant who said that "Lomax was a very peculiar individual, that he seemed to be very absent-minded and that he paid practically no attention to his personal appearance." This same source adds that he suspected Lomax's peculiarity and poor grooming habits came from associating with the "hillbillies" who provided him with folk tunes. Lomax, who was a founding member of People's Songs, was in charge of campaign music for Henry A. Wallace's 1948 Presidential run on the Progressive Party ticket on a platform opposing the arms race and supporting civil rights for Jews and African Americans. Subsequently, Lomax was one of the performers listed in the publication Red Channels as a possible Communist sympathizer and was consequently blacklisted from working in US entertainment industries. A 2007 BBC news article revealed that in the early 1950s, the British MI5 placed Alan Lomax under surveillance as a suspected Communist. Its report concluded that although Lomax undoubtedly held "left wing" views, there was no evidence he was a Communist. 
Released September 4, 2007 (File ref KV 2/2701), a summary of his MI5 file reads as follows: Noted American folk music archivist and collector Alan Lomax first attracted the attention of the Security Service when it was noted that he had made contact with the Romanian press attaché in London while he was working on a series of folk music broadcasts for the BBC in 1952. Correspondence ensued with the American authorities as to Lomax' suspected membership of the Communist Party, though no positive proof is found on this file. The Service took the view that Lomax' work compiling his collections of world folk music gave him a legitimate reason to contact the attaché, and that while his views (as demonstrated by his choice of songs and singers) were undoubtedly left wing, there was no need for any specific action against him. The file contains a partial record of Lomax' movements, contacts and activities while in Britain, and includes for example a police report of the "Songs of the Iron Road" concert at St Pancras in December 1953. His association with [blacklisted American] film director Joseph Losey is also mentioned (serial 30a). The FBI again investigated Lomax in 1956 and sent a 68-page report to the CIA and the Attorney General's office. However, William Tompkins, assistant attorney general, wrote to Hoover that the investigation had failed to disclose sufficient evidence to warrant prosecution or the suspension of Lomax's passport. Then, as late as 1979, an FBI report suggested that Lomax had recently impersonated an FBI agent. The report appears to have been based on mistaken identity. The person who reported the incident to the FBI said that the man in question was around 43, about 5 feet 9 inches and 190 pounds. The FBI file notes that Lomax stood tall, weighed 240 pounds and was 64 at the time: Lomax resisted the FBI's attempts to interview him about the impersonation charges, but he finally met with agents at his home in November 1979. He denied that he'd been involved in the matter but did note that he'd been in New Hampshire in July 1979, visiting a film editor about a documentary. The FBI's report concluded that "Lomax made no secret of the fact that he disliked the FBI and disliked being interviewed by the FBI. Lomax was extremely nervous throughout the interview." The FBI investigation was concluded the following year, shortly after Lomax's 65th birthday. Awards Alan Lomax received the National Medal of Arts from President Ronald Reagan in 1986; a Library of Congress Living Legend Award in 2000; and was awarded an Honorary Doctorate in Philosophy from Tulane University in 2001. He won the National Book Critics Circle Award and the Ralph J. Gleason Music Book Award in 1993 for his book The Land Where the Blues Began, connecting the story of the origins of blues music with the prevalence of forced labor in the pre-World War II South (especially on the Mississippi levees). Lomax also received a posthumous Grammy Trustees Award for his lifetime achievements in 2003. 
Jelly Roll Morton: The Complete Library of Congress Recordings by Alan Lomax (Rounder Records, 8 CDs boxed set) won in two categories at the 48th annual Grammy Awards ceremony held on February 8, 2006 Alan Lomax in Haiti: Recordings For The Library Of Congress, 1936–1937, issued by Harte Records and made with the support and major funding from Kimberley Green and the Green foundation, and featuring 10 CDs of recorded music and film footage (shot by Elizabeth Lomax, then nineteen), a bound book of Lomax's selected letters and field journals, and notes by musicologist Gage Averill, was nominated for two Grammy Awards in 2011. World music and digital legacy Brian Eno wrote of Lomax's later recording career in his notes to accompany an anthology of Lomax's world recordings: [He later] turned his intelligent attentions to music from many other parts of the world, securing for them a dignity and status they had not previously been accorded. The "World Music" phenomenon arose partly from those efforts, as did his great book, Folk Song Style and Culture. I believe this is one of the most important books ever written about music, in my all time top ten. It is one of the very rare attempts to put cultural criticism onto a serious, comprehensible, and rational footing by someone who had the experience and breadth of vision to be able to do it. In January 2012, the American Folklife Center at the Library of Congress, with the Association for Cultural Equity, announced that it would release Lomax's vast archive in digital form. Lomax spent the last 20 years of his life working on an interactive multimedia educational computer project he called the Global Jukebox, which included 5,000 hours of sound recordings, 400,000 feet of film, 3,000 videotapes, and 5,000 photographs. By February 2012, 17,000 music tracks from his archived collection were expected to be made available for free streaming, and later some of that music may be for sale as CDs or digital downloads. As of March 2012 this has been accomplished. Approximately 17,400 of Lomax's recordings from 1946 and later have been made available free online. This is material from Alan Lomax's independent archive, begun in 1946, which has been digitized and offered by the Association for Cultural Equity. This is "distinct from the thousands of earlier recordings on acetate and aluminum discs he made from 1933 to 1942 under the auspices of the Library of Congress. This earlier collection – which includes the famous Jelly Roll Morton, Woody Guthrie, Lead Belly, and Muddy Waters sessions, as well as Lomax's prodigious collections made in Haiti and Eastern Kentucky (1937) – is the provenance of the American Folklife Center" at the Library of Congress. 
On August 24, 1997, at a concert at Wolf Trap in Vienna, Virginia, Bob Dylan said about Lomax, who had helped introduce him to folk music and whom he had known as a young man in Greenwich Village: There is a distinguished gentlemen here who came...I want to introduce him – named Alan Lomax. I don't know if many of you have heard of him [Audience applause.] Yes, he's here, he's made a trip out to see me. I used to know him years ago. I learned a lot there and Alan...Alan was one of those who unlocked the secrets of this kind of music. So if we've got anybody to thank, it's Alan. Thanks, Alan. In 1999 electronica musician Moby released his fifth album Play. It extensively used samples from field recordings collected by Lomax on the 1993 box set Sounds of the South: A Musical Journey from the Georgia Sea Islands to the Mississippi Delta. The album went on to be certified platinum in more than 20 countries. In his autobiography Chronicles, Part One, Bob Dylan recollects a 1961 scene: "There was an art movie house in the Village on 12th Street that showed foreign movies—French, Italian, German. This made sense, because even Alan Lomax himself, the great folk archivist, had said somewhere that if you want to go to America, go to Greenwich Village." Lomax is portrayed by actor Norbert Leo Butz in the 2024 feature film about Bob Dylan's early career entitled A Complete Unknown. Bibliography A partial list of books by Alan Lomax includes: L'Anno piu' felice della mia vita (The Happiest Year of My Life), a book of ethnographic photos by Alan Lomax from his 1954–55 fieldwork in Italy, edited by Goffredo Plastino, preface by Martin Scorsese. Milano: Il Saggiatore, 2008. Alan Lomax: Mirades Miradas Glances. Photos by Alan Lomax, ed. by Antoni Pizà (Barcelona: Lunwerg / Fundacio Sa Nostra, 2006) Alan Lomax: Selected Writings 1934–1997. Ronald D. 
Cohen, Editor (includes a chapter defining all the categories of cantometrics). New York: Routledge: 2003. Brown Girl in the Ring: An Anthology of Song Games from the Eastern Caribbean Compiler, with J. D. Elder and Bess Lomax Hawes. New York: Pantheon Books, 1997 (Cloth, ); New York: Random House, 1998 (Cloth). The Land Where The Blues Began. New York: Pantheon, 1993. Cantometrics: An Approach to the Anthropology of Music: Audiocassettes and a Handbook. Berkeley: University of California Media Extension Center, 1976. Folk Song Style and Culture. With contributions by Conrad Arensberg, Edwin E. Erickson, Victor Grauer, Norman Berkowitz, Irmgard Bartenieff, Forrestine Paulay, Joan Halifax, Barbara Ayres, Norman N. Markel, Roswell Rudd, Monika Vizedom, Fred Peng, Roger Wescott, David Brown. Washington, D.C.: Colonial Press Inc, American Association for the Advancement of Science, Publication no. 88, 1968. Penguin Book of American Folk Songs (1968) 3000 Years of Black Poetry. Alan Lomax and Raoul Abdul, Editors. New York: Dodd Mead Company, 1969. Paperback edition, Fawcett Publications, 1971. The Leadbelly Songbook. Moses Asch and Alan Lomax, Editors. Musical transcriptions by Jerry Silverman. Foreword by Moses Asch. New York: Oak Publications, 1962. Folk Songs of North America. Melodies and guitar chords transcribed by Peggy Seeger. New York: Doubleday, 1960. The Rainbow Sign. New York: Duell, Sloan and Pierce, 1959. Leadbelly: A Collection of World Famous Songs by Huddie Ledbetter. Edited with John A. Lomax. Hally Wood, Music Editor. Special note on Lead Belly's 12-string guitar by Pete Seeger. New York: Folkways Music Publishers Company, 1959. Harriet and Her Harmonium: An American adventure with thirteen folk songs from the Lomax collection. Illustrated by Pearl Binder. Music arranged by Robert Gill. London: Faber and Faber, 1955. Mister Jelly Roll: The Fortunes of Jelly Roll Morton, New Orleans Creole and "Inventor of Jazz". Drawings by David Stone Martin. New York: Duell, Sloan and Pierce, 1950. Folk Song: USA. With John A. Lomax. Piano accompaniment by Charles and Ruth Crawford Seeger. New York: Duell, Sloan and Pierce, c.1947. Republished as Best Loved American Folk Songs, New York: Grosset and Dunlap, 1947 (Cloth). Freedom Songs of the United Nations. With Svatava Jakobson. Washington, D.C.: Office of War Information, 1943. Our Singing Country: Folk Songs and Ballads. With John A. Lomax and Ruth Crawford Seeger. New York: MacMillan, 1941. Check-list of Recorded Songs in the English Language in the Archive of American Folk Song in July 1940. Washington, D.C.: Music Division, Library of Congress, 1942. Three volumes. American Folksong and Folklore: A Regional Bibliography. With Sidney Robertson Cowell. New York, Progressive Education Association, 1942. Reprint, Temecula, California: Reprint Services Corp., 1988 (62 pp. ). Negro Folk Songs as Sung by Lead Belly. With John A. Lomax. New York: Macmillan, 1936. American ballads and folk songs. With John Avery Lomax. Macmillan, 1934. Film Lomax the Songhunter, documentary directed by Rogier Kappers, 2004 (issued on DVD 2007). American Patchwork television series, 1990 (five DVDs). Oss Oss Wee Oss 1951 (on a DVD with other films related to the Padstow May Day). Rhythms of Earth. Four films (Dance & Human History, Step Style, Palm Play, and The Longest Trail) made by Lomax (1974–1984) about his Choreometric cross-cultural analysis of dance and movement style. Two-and-a-half hours, plus one-and-a-half hours of interviews and 177 pages of text. 
The Land Where The Blues Began, expanded, thirtieth-anniversary edition of the 1979 documentary by Alan Lomax, filmmaker John Melville Bishop, and ethnomusicologist and civil rights activist Worth Long, with 3.5 hours of additional music and video. Ballads, Blues and Bluegrass, an Alan Lomax documentary released in 2012. His assistant Carla Rotolo was seen in the film. Southern Journey (Revisited), this 2020 documentary retraces the route of an iconic song-collecting trip from the late 1950s - Alan Lomax's so-called "Southern Journey". See also Notable alumni of St. Mark's School of Texas Ian Brennan (music producer) Cantometrics The Singing Street Footnotes Further reading Barton, Matthew. "The Lomaxes", pp. 151–169, in Spenser, Scott B. The Ballad Collectors of North America: How Gathering Folksongs Transformed Academic Thought and American Identity (American Folk Music and Musicians Series). Plymouth, UK: Scarecrow Press. 2011. The American song collecting of John A. and Alan Lomax in historical perspective. Salsburg, Nathan (2019) Southern Journeys: Alan Lomax's Steel-String Discoveries. Acoustic Guitar magazine, March/April 2019. Sorce Keller, Marcello. "Kulturkreise, Culture Areas, and Chronotopes: Old Concepts Reconsidered for the Mapping of Music Cultures Today", in Britta Sweers and Sarah H. Ross (eds.) Cultural Mapping and Musical Diversity. Sheffield UK/Bristol CT: Equinox Publishing Ltd. 2020, 19–34. Szwed, John. Alan Lomax: The Man Who Recorded the World . New York: Viking Press, 2010 (438 pp.: ) / London: William Heinemann, 2010 (438 pp.;). Comprehensive biography. Wood, Anna Lomax. Songs of Earth: Aesthetic and Social Codes in Music. Jackson: University Press of Mississippi, 2021. 480 pages. . External links Alan Lomax's "List of American Folk Songs on Commercial Records" (1940) September 24, 2012. "The Sonic Journey of Alan Lomax: Recording America and the World" (NPR streaming radio podcast, 2 hours) American Routes (March 13, 2013). Nick Spitzer, host. Features interviews with Lomax biographer John Szwed, daughter Anna Lomax Wood, nephew John Lomax III, folksinger Pete Seeger, and some past interviews with Lomax himself. Alan Lomax Collection, The American Folklife Center, Library of Congress "Remembrances of Alan Lomax, 2002" by Guy Carawan "Alan Lomax: Citizen Activist", by Ronald D. Cohen "Remembering Alan Lomax" by Bruce Jackson Interview with Shirley Collins Alan Lomax at Folkstreams , a scene from Lomax the songhunter Oss Oss Wee Oss, a DVD of the Padstow May Day Ceremony (1951) "Blues Travelers", The New York Times, May 17, 2012. Lomax and Lead Belly together Link 1915 births 2002 deaths 20th-century American musicians American folk-song collectors 20th-century American historians American male non-fiction writers American ethnomusicologists American music critics American music historians Choate Rosemary Hall alumni Columbia University alumni Harvard University alumni Library of Congress United States National Medal of Arts recipients University of Texas at Austin alumni Alan American people of English descent St. Mark's School (Texas) alumni 20th-century American musicologists 20th-century American male writers Field recording Folk music historians
Alan Lomax
[ "Engineering" ]
10,353
[ "Audio engineering", "Field recording" ]
173,238
https://en.wikipedia.org/wiki/Diels%E2%80%93Alder%20reaction
In organic chemistry, the Diels–Alder reaction is a chemical reaction between a conjugated diene and a substituted alkene, commonly termed the dienophile, to form a substituted cyclohexene derivative. It is the prototypical example of a pericyclic reaction with a concerted mechanism. More specifically, it is classified as a thermally allowed [4+2] cycloaddition with Woodward–Hoffmann symbol [π4s + π2s]. It was first described by Otto Diels and Kurt Alder in 1928. For the discovery of this reaction, they were awarded the Nobel Prize in Chemistry in 1950. Through the simultaneous construction of two new carbon–carbon bonds, the Diels–Alder reaction provides a reliable way to form six-membered rings with good control over the regio- and stereochemical outcomes. Consequently, it has served as a powerful and widely applied tool for the introduction of chemical complexity in the synthesis of natural products and new materials. The underlying concept has also been applied to π-systems involving heteroatoms, such as carbonyls and imines, which furnish the corresponding heterocycles; this variant is known as the hetero-Diels–Alder reaction. The reaction has also been generalized to other ring sizes, although none of these generalizations have matched the formation of six-membered rings in terms of scope or versatility. Because of the negative values of ΔH° and ΔS° for a typical Diels–Alder reaction, the microscopic reverse of a Diels–Alder reaction becomes favorable at high temperatures, although this is of synthetic importance for only a limited range of Diels–Alder adducts, generally with some special structural features; this reverse reaction is known as the retro-Diels–Alder reaction. Mechanism The reaction is an example of a concerted pericyclic reaction. It is believed to occur via a single, cyclic transition state, with no intermediates generated during the course of the reaction. As such, the Diels–Alder reaction is governed by orbital symmetry considerations: it is classified as a [π4s + π2s] cycloaddition, indicating that it proceeds through the suprafacial/suprafacial interaction of a 4π electron system (the diene structure) with a 2π electron system (the dienophile structure), an interaction that leads to a transition state without an additional orbital symmetry-imposed energetic barrier and allows the Diels–Alder reaction to take place with relative ease. A consideration of the reactants' frontier molecular orbitals (FMO) makes plain why this is so. (The same conclusion can be drawn from an orbital correlation diagram or a Dewar-Zimmerman analysis.) For the more common "normal" electron demand Diels–Alder reaction, the more important of the two HOMO/LUMO interactions is that between the electron-rich diene's ψ2 as the highest occupied molecular orbital (HOMO) with the electron-deficient dienophile's π* as the lowest unoccupied molecular orbital (LUMO). However, the HOMO–LUMO energy gap is close enough that the roles can be reversed by switching electronic effects of the substituents on the two components. In an inverse (reverse) electron-demand Diels–Alder reaction, electron-withdrawing substituents on the diene lower the energy of its empty ψ3 orbital and electron-donating substituents on the dienophile raise the energy of its filled π orbital sufficiently that the interaction between these two orbitals becomes the most energetically significant stabilizing orbital interaction. 
Regardless of which situation pertains, the HOMO and LUMO of the components are in phase and a bonding interaction results as can be seen in the diagram below. Since the reactants are in their ground state, the reaction is initiated thermally and does not require activation by light. The "prevailing opinion" is that most Diels–Alder reactions proceed through a concerted mechanism; the issue, however, has been thoroughly contested. Despite the fact that the vast majority of Diels–Alder reactions exhibit stereospecific, syn addition of the two components, a diradical intermediate has been postulated (and supported with computational evidence) on the grounds that the observed stereospecificity does not rule out a two-step addition involving an intermediate that collapses to product faster than it can rotate to allow for inversion of stereochemistry. There is a notable rate enhancement when certain Diels–Alder reactions are carried out in polar organic solvents such as dimethylformamide and ethylene glycol, and even in water. The reaction of cyclopentadiene and butenone for example is 700 times faster in water relative to 2,2,4-trimethylpentane as solvent. Several explanations for this effect have been proposed, such as an increase in effective concentration due to hydrophobic packing or hydrogen-bond stabilization of the transition state. The geometry of the diene and dienophile components each propagate into stereochemical details of the product. For intermolecular reactions especially, the preferred positional and stereochemical relationship of substituents of the two components compared to each other are controlled by electronic effects. However, for intramolecular Diels–Alder cycloaddition reactions, the conformational stability of the structure of the transition state can be an overwhelming influence. Regioselectivity Frontier molecular orbital theory has also been used to explain the regioselectivity patterns observed in Diels–Alder reactions of substituted systems. Calculation of the energy and orbital coefficients of the components' frontier orbitals provides a picture that is in good accord with the more straightforward analysis of the substituents' resonance effects, as illustrated below. In general, the regioselectivity found for both normal and inverse electron-demand Diels–Alder reaction follows the ortho-para rule, so named, because the cyclohexene product bears substituents in positions that are analogous to the ortho and para positions of disubstituted arenes. For example, in a normal-demand scenario, a diene bearing an electron-donating group (EDG) at C1 has its largest HOMO coefficient at C4, while the dienophile with an electron withdrawing group (EWG) at C1 has the largest LUMO coefficient at C2. Pairing these two coefficients gives the "ortho" product as seen in case 1 in the figure below. A diene substituted at C2 as in case 2 below has the largest HOMO coefficient at C1, giving rise to the "para" product. Similar analyses for the corresponding inverse-demand scenarios gives rise to the analogous products as seen in cases 3 and 4. Examining the canonical mesomeric forms above, it is easy to verify that these results are in accord with expectations based on consideration of electron density and polarization. 
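The coefficient argument above can be summarized with a schematic frontier-molecular-orbital (Klopman–Salem-type) stabilization term. The expression below is an illustrative textbook form rather than a formula taken from this article; the symbols (orbital coefficients c at the atoms forming the new bonds and the resonance integral β) are generic notation.

```latex
% Schematic FMO stabilization for the dominant HOMO(diene)-LUMO(dienophile) pair:
% larger coefficients at the atoms forming the new sigma bonds and a smaller
% orbital energy gap both increase the stabilization of the transition state.
\left|\Delta E_{\mathrm{FMO}}\right| \;\propto\;
  \frac{\bigl(c_{\mathrm{diene}}\, c_{\mathrm{dienophile}}\, \beta\bigr)^{2}}
       {\bigl|E_{\mathrm{HOMO(diene)}} - E_{\mathrm{LUMO(dienophile)}}\bigr|}
```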
In general, with respect to the energetically most well-matched HOMO-LUMO pair, maximizing the interaction energy by forming bonds between centers with the largest frontier orbital coefficients allows the prediction of the main regioisomer that will result from a given diene-dienophile combination. In a more sophisticated treatment, three types of substituents (Z withdrawing: HOMO and LUMO lowering (CF3, NO2, CN, C(O)CH3), X donating: HOMO and LUMO raising (Me, OMe, NMe2), C conjugating: HOMO raising and LUMO lowering (Ph, vinyl)) are considered, resulting in a total of 18 possible combinations. The maximization of orbital interaction correctly predicts the product in all cases for which experimental data is available. For instance, in uncommon combinations involving X groups on both diene and dienophile, a 1,3-substitution pattern may be favored, an outcome not accounted for by a simplistic resonance structure argument. However, cases where the resonance argument and the matching of largest orbital coefficients disagree are rare. Stereospecificity and stereoselectivity Diels–Alder reactions, as concerted cycloadditions, are stereospecific. Stereochemical information of the diene and the dienophile are retained in the product, as a syn addition with respect to each component. For example, substituents in a cis (trans, resp.) relationship on the double bond of the dienophile give rise to substituents that are cis (trans, resp.) on those same carbons with respect to the cyclohexene ring. Likewise, cis,cis- and trans,trans-disubstituted dienes give cis substituents at these carbons of the product whereas cis,trans-disubstituted dienes give trans substituents: Diels–Alder reactions in which adjacent stereocenters are generated at the two ends of the newly formed single bonds imply two different possible stereochemical outcomes. This is a stereoselective situation based on the relative orientation of the two separate components when they react with each other. In the context of the Diels–Alder reaction, the transition state in which the most significant substituent (an electron-withdrawing and/or conjugating group) on the dienophile is oriented towards the diene π system and slips under it as the reaction takes place is known as the endo transition state. In the alternative exo transition state, it is oriented away from it. (There is a more general usage of the terms endo and exo in stereochemical nomenclature.) In cases where the dienophile has a single electron-withdrawing / conjugating substituent, or two electron-withdrawing / conjugating substituents cis to each other, the outcome can often be predicted. In these "normal demand" Diels–Alder scenarios, the endo transition state is typically preferred, despite often being more sterically congested. This preference is known as the Alder endo rule. As originally stated by Alder, the transition state that is preferred is the one with a "maximum accumulation of double bonds." Endo selectivity is typically higher for rigid dienophiles such as maleic anhydride and benzoquinone; for others, such as acrylates and crotonates, selectivity is not very pronounced. The most widely accepted explanation for the origin of this effect is a favorable interaction between the π systems of the dienophile and the diene, an interaction described as a secondary orbital effect, though dipolar and van der Waals attractions may play a part as well, and solvent can sometimes make a substantial difference in selectivity. 
The secondary orbital overlap explanation was first proposed by Woodward and Hoffmann. In this explanation, the orbitals associated with the group in conjugation with the dienophile double-bond overlap with the interior orbitals of the diene, a situation that is possible only for the endo transition state. Although the original explanation only invoked the orbital on the atom α to the dienophile double bond, Salem and Houk have subsequently proposed that orbitals on the α and β carbons both participate when molecular geometry allows. Often, as with highly substituted dienes, very bulky dienophiles, or reversible reactions (as in the case of furan as diene), steric effects can override the normal endo selectivity in favor of the exo isomer. The diene The diene component of the Diels–Alder reaction can be either open-chain or cyclic, and it can host many different types of substituents. It must, however, be able to exist in the s-cis conformation, since this is the only conformer that can participate in the reaction. Though butadienes are typically more stable in the s-trans conformation, for most cases energy difference is small (~2–5 kcal/mol). A bulky substituent at the C2 or C3 position can increase reaction rate by destabilizing the s-trans conformation and forcing the diene into the reactive s-cis conformation. 2-tert-butyl-buta-1,3-diene, for example, is 27 times more reactive than simple butadiene. Conversely, a diene having bulky substituents at both C2 and C3 is less reactive because the steric interactions between the substituents destabilize the s-cis conformation. Dienes with bulky terminal substituents (C1 and C4) decrease the rate of reaction, presumably by impeding the approach of the diene and dienophile. An especially reactive diene is 1-methoxy-3-trimethylsiloxy-buta-1,3-diene, otherwise known as Danishefsky's diene. It has particular synthetic utility as means of furnishing α,β–unsaturated cyclohexenone systems by elimination of the 1-methoxy substituent after deprotection of the enol silyl ether. Other synthetically useful derivatives of Danishefsky's diene include 1,3-alkoxy-1-trimethylsiloxy-1,3-butadienes (Brassard dienes) and 1-dialkylamino-3-trimethylsiloxy-1,3-butadienes (Rawal dienes). The increased reactivity of these and similar dienes is a result of synergistic contributions from donor groups at C1 and C3, raising the HOMO significantly above that of a comparable monosubstituted diene. Unstable (and thus highly reactive) dienes can be synthetically useful, e.g. o-quinodimethanes can be generated in situ. In contrast, stable dienes, such as naphthalene, require forcing conditions and/or highly reactive dienophiles, such as N-phenylmaleimide. Anthracene, being less aromatic (and therefore more reactive for Diels–Alder syntheses) in its central ring can form a 9,10 adduct with maleic anhydride at 80 °C and even with acetylene, a weak dienophile, at 250 °C. The dienophile In a normal demand Diels–Alder reaction, the dienophile has an electron-withdrawing group in conjugation with the alkene; in an inverse-demand scenario, the dienophile is conjugated with an electron-donating group. Dienophiles can be chosen to contain a "masked functionality". The dienophile undergoes Diels–Alder reaction with a diene introducing such a functionality onto the product molecule. A series of reactions then follow to transform the functionality into a desirable group. The end product cannot be made in a single DA step because equivalent dienophile is either unreactive or inaccessible. 
An example of such an approach is the use of α-chloroacrylonitrile (CH2=CClCN). When reacted with a diene, this dienophile will introduce α-chloronitrile functionality onto the product molecule. This is a "masked functionality" which can then be hydrolyzed to form a ketone. The α-chloroacrylonitrile dienophile is an equivalent of the ketene dienophile (CH2=C=O), which would produce the same product in one DA step. The problem is that ketene itself cannot be used in Diels–Alder reactions because it reacts with dienes in an unwanted manner (by [2+2] cycloaddition), and therefore the "masked functionality" approach has to be used. Other such functionalities are phosphonium substituents (yielding exocyclic double bonds after Wittig reaction), various sulfoxide and sulfonyl functionalities (both are acetylene equivalents), and nitro groups (ketene equivalents). Variants on the classical Diels–Alder reaction Hetero-Diels–Alder Diels–Alder reactions involving at least one heteroatom are also known and are collectively called hetero-Diels–Alder reactions. Carbonyl groups, for example, can successfully react with dienes to yield dihydropyran rings, a reaction known as the oxo-Diels–Alder reaction, and imines can be used, either as the dienophile or at various sites in the diene, to form various N-heterocyclic compounds through the aza-Diels–Alder reaction. Nitroso compounds (R-N=O) can react with dienes to form oxazines. Chlorosulfonyl isocyanate can be utilized as a dienophile to prepare Vince lactam. Lewis acid activation Lewis acids, such as zinc chloride, boron trifluoride, tin tetrachloride, or aluminium chloride, can catalyze Diels–Alder reactions by binding to the dienophile. Traditionally, the enhanced Diels–Alder reactivity is ascribed to the ability of the Lewis acid to lower the LUMO of the activated dienophile, which results in a smaller normal electron demand HOMO–LUMO orbital energy gap and hence more stabilizing orbital interactions. Recent studies, however, have shown that this rationale behind Lewis acid-catalyzed Diels–Alder reactions is incorrect. It is found that Lewis acids accelerate the Diels–Alder reaction by reducing the destabilizing steric Pauli repulsion between the interacting diene and dienophile, and not by lowering the energy of the dienophile's LUMO and consequently enhancing the normal electron demand orbital interaction. The Lewis acid binds via a donor-acceptor interaction to the dienophile and via that mechanism polarizes occupied orbital density away from the reactive C=C double bond of the dienophile towards the Lewis acid. This reduced occupied orbital density on the C=C double bond of the dienophile will, in turn, engage in a less repulsive closed-shell–closed-shell orbital interaction with the incoming diene, reducing the destabilizing steric Pauli repulsion and hence lowering the Diels–Alder reaction barrier. In addition, the Lewis acid catalyst also increases the asynchronicity of the Diels–Alder reaction, making the occupied π-orbital located on the C=C double bond of the dienophile asymmetric. As a result, this enhanced asynchronicity leads to an extra reduction of the destabilizing steric Pauli repulsion as well as a diminished pressure on the reactants to deform; in other words, it reduces the destabilizing activation strain (also known as distortion energy). This working catalytic mechanism is known as Pauli-lowering catalysis, which is operative in a variety of organic reactions.
The original rationale behind Lewis acid-catalyzed Diels–Alder reactions is incorrect, because besides lowering the energy of the dienophile's LUMO, the Lewis acid also lowers the energy of the HOMO of the dienophile and hence increases the inverse electron demand LUMO-HOMO orbital energy gap. Thus, indeed Lewis acid catalysts strengthen the normal electron demand orbital interaction by lowering the LUMO of the dienophile, but, they simultaneously weaken the inverse electron demand orbital interaction by also lowering the energy of the dienophile's HOMO. These two counteracting phenomena effectively cancel each other, resulting in nearly unchanged orbital interactions when compared to the corresponding uncatalyzed Diels–Alder reactions and making this not the active mechanism behind Lewis acid-catalyzed Diels–Alder reactions. Asymmetric Diels–Alder Many methods have been developed for influencing the stereoselectivity of the Diels–Alder reaction, such as the use of chiral auxiliaries, catalysis by chiral Lewis acids, and small organic molecule catalysts. Evans' oxazolidinones, oxazaborolidines, bis-oxazoline–copper chelates, imidazoline catalysis, and many other methodologies exist for effecting diastereo- and enantioselective Diels–Alder reactions. Hexadehydro Diels–Alder In the hexadehydro Diels–Alder reaction, alkynes and diynes are used instead of alkenes and dienes, forming an unstable benzyne intermediate which can then be trapped to form an aromatic product. This reaction allows the formation of heavily functionalized aromatic rings in a single step. Applications and natural occurrence The retro-Diels–Alder reaction is used in the industrial production of cyclopentadiene. Cyclopentadiene is a precursor to various norbornenes, which are common monomers. The Diels–Alder reaction is also employed in the production of vitamin B6. History The Diels-Alder reaction was the culmination of several intertwined research threads, some near misses, and ultimately, the insightful recognition of a general principle by Otto Diels and Kurt Alder. Their seminal work, detailed in a series of 28 articles published in the Justus Liebigs Annalen der Chemie and Berichte der deutschen chemischen Gesellschaft from 1928 to 1937, established the reaction's wide applicability and its importance in constructing six-membered rings. The first 19 articles were authored by Diels and Alder, while the later articles were authored by Diels and various other coauthors. However, the history of the reaction extends further back, revealing a fascinating narrative of discoveries missed and opportunities overlooked. Several chemists, working independently in the late 19th and early 20th centuries, encountered reactions that, in retrospect, involved the Diels-Alder process but remained unrecognized as such. Theodor Zincke performed a series of experiments between 1892 and 1912 involving tetrachlorocyclopentadienone, a highly reactive diene analogue. In 1910, Sergey Lebedev systematically investigated thermal polymerization of three conjugated dienes (butadiene, isoprene and dimethylbutadiene), a process now recognized as a Diels-Alder self-reaction, providing a detailed analysis of the dimerization products and recognizing the importance of the conjugated system in the process. Five years earlier, Carl Harries studied the degradation of natural rubber, leading him to propose a cyclic structure for the polymer. 
Hermann Staudinger's work with ketenes published in 1912 covered both [2+2] cycloadditions, where one molecule of a ketene reacted with an unsaturated compound to form a four-membered ring, and, importantly, [4+2] cycloadditions. In the latter case, two molecules of ketene combined with one molecule of an unsaturated compound (such as a quinone) to yield a six-membered ring. While not a classic Diels-Alder reaction in the typical sense of a conjugated diene and a separate dienophile, Staudinger's observation of this [4+2] process, forming a six-membered ring, foreshadowed the later work of Diels and Alder. However, his focus remained primarily on the more common [2+2] ketene cycloaddition. Hans von Euler-Chelpin and K. O. Josephson, investigating isoprene and butadiene reactions in 1920, both observed products consistent with Diels-Alder cycloadditions, but didn't go on to research it further. Perhaps the most striking near miss came from Walter Albrecht in early 1900s. Working in Johannes Thiele's laboratory, Albrecht investigated the reaction of cyclopentadiene with para-benzoquinone. His 1902 doctoral dissertation clearly describes the formation of the Diels-Alder adduct, even providing (incorrect) structural assignments. However, influenced by Thiele's focus on conjugation and partial valence, Albrecht in his 1906 publication interpreted the reaction as a 1,4-addition followed by a 1,2-addition, completely overlooking the cycloaddition aspect. While these observations hinted at the possibility of a broader class of cycloaddition reactions, they remained isolated incidents, their significance not fully appreciated at the time, with none of the researchers even trying to generalize their findings. It fell to Diels and Alder to synthesize these disparate threads into a coherent whole. Unlike the earlier researchers, they recognized the generality and predictability of the diene and dienophile combining to form a cyclic structure. Through their systematic investigations, exploring various combinations of dienes and dienophiles, they firmly established the "diene synthesis" as a powerful new synthetic method. Their meticulous work not only demonstrated the reaction's scope and versatility but also laid the groundwork for future theoretical developments, including the Woodward-Hoffmann rules, which would provide a deeper understanding of pericyclic reactions, including the Diels-Alder. Applications in total synthesis The Diels–Alder reaction was one step in an early preparation of the steroids cortisone and cholesterol. The reaction involved the addition of butadiene to a quinone. Diels–Alder reactions were used in the original synthesis of prostaglandins F2α and E2. The Diels–Alder reaction establishes the relative stereochemistry of three contiguous stereocenters on the prostaglandin cyclopentane core. Activation by Lewis acidic cupric tetrafluoroborate was required. A Diels–Alder reaction was used in the synthesis of disodium prephenate, a biosynthetic precursor of the amino acids phenylalanine and tyrosine. A synthesis of reserpine uses a Diels–Alder reaction to set the cis-decalin framework of the D and E rings. In another synthesis of reserpine, the cis-fused D and E rings was formed by a Diels–Alder reaction. Intramolecular Diels–Alder of the pyranone below with subsequent extrusion of carbon dioxide via a retro [4+2] afforded the bicyclic lactam. 
Epoxidation from the less hindered α-face, followed by epoxide opening at the less hindered C18 afforded the desired stereochemistry at these positions, while the cis-fusion was achieved with hydrogenation, again proceeding primarily from the less hindered face. A pyranone was similarly used as the dienophile in the total synthesis of taxol. The intermolecular reaction of the hydroxy-pyrone and α,β–unsaturated ester shown below suffered from poor yield and regioselectivity; however, when directed by phenylboronic acid the desired adduct could be obtained in 61% yield after cleavage of the boronate with neopentyl glycol. The stereospecificity of the Diels–Alder reaction in this instance allowed for the definition of four stereocenters that were carried on to the final product. A Diels–Alder reaction is a key step in the synthesis of (-)-furaquinocin C. Tabersonine was prepared by a Diels–Alder reaction to establish cis relative stereochemistry of the alkaloid core. Conversion of the cis-aldehyde to its corresponding alkene by Wittig olefination and subsequent ring-closing metathesis with a Schrock catalyst gave the second ring of the alkaloid core. The diene in this instance is notable as an example of a 1-amino-3-siloxybutadiene, otherwise known as a Rawal diene. (+)-Sterpurene can be prepared by asymmetric D-A reaction that featured a remarkable intramolecular Diels–Alder reaction of an allene. The [2,3]-sigmatropic rearrangement of the thiophenyl group to give the sulfoxide as below proceeded enantiospecifically due to the predefined stereochemistry of the propargylic alcohol. In this way, the single allene isomer formed could direct the Diels–Alder reaction to occur on only one face of the generated 'diene'. The tetracyclic core of the antibiotic (-)-tetracycline was prepared with a Diels–Alder reaction. Thermally initiated, conrotatory opening of the benzocyclobutene generated the o-quinodimethane, which reacted intermolecularly to give the tetracycline skeleton. The dienophile's free hydroxyl group is integral to the success of the reaction, as hydroxyl-protected variants did not react under several different reaction conditions. Takemura et al. synthesized cantharidin in 1980 by Diels–Alder reaction, utilizing high pressure. Synthetic applications of the Diels–Alder reaction have been reviewed extensively. See also Bradsher cycloaddition Wagner-Jauregg reaction Aza-Diels–Alder reaction References Bibliography External links English Translation of Diels and Alder's seminal 1928 German article that won them the Nobel prize. English title: 'Syntheses of the hydroaromatic series'; German title "Synthesen in der hydroaromatischen Reihe". Cycloadditions Carbon-carbon bond forming reactions Ring forming reactions German inventions 1928 in science 1928 in Germany Name reactions
Diels–Alder reaction
[ "Chemistry" ]
6,267
[ "Name reactions", "Carbon-carbon bond forming reactions", "Ring forming reactions", "Organic reactions" ]
173,272
https://en.wikipedia.org/wiki/Multi-exposure%20HDR%20capture
In photography and videography, multi-exposure HDR capture is a technique that creates high dynamic range (HDR) images (or extended dynamic range images) by taking and combining multiple exposures of the same subject matter at different exposures. Combining multiple images in this way results in an image with a greater dynamic range than what would be possible by taking one single image. The technique can also be used to capture video by taking and combining multiple exposures for each frame of the video. The term "HDR" is used frequently to refer to the process of creating HDR images from multiple exposures. Many smartphones have an automated HDR feature that relies on computational imaging techniques to capture and combine multiple exposures. A single image captured by a camera provides a finite range of luminosity inherent to the medium, whether it is a digital sensor or film. Outside this range, tonal information is lost and no features are visible; tones that exceed the range are "burned out" and appear pure white in the brighter areas, while tones that fall below the range are "crushed" and appear pure black in the darker areas. The ratio between the maximum and the minimum tonal values that can be captured in a single image is known as the dynamic range. In photography, dynamic range is measured in exposure value (EV) differences, also known as stops. The human eye's response to light is non-linear: halving the light level does not halve the perceived brightness of a space, it makes it look only slightly dimmer. For most illumination levels, the response is approximately logarithmic. Human eyes adapt fairly rapidly to changes in light levels. HDR can thus produce images that look more like what a human sees when looking at the subject. This technique can be applied to produce images that preserve local contrast for a natural rendering, or exaggerate local contrast for artistic effect. HDR is useful for recording many real-world scenes containing a wider range of brightness than can be captured directly, typically both bright, direct sunlight and deep shadows. Due to the limitations of printing and display contrast, the extended dynamic range of HDR images must be compressed to the range that can be displayed. The method of rendering a high dynamic range image to a standard monitor or printing device is called tone mapping; it reduces the overall contrast of an HDR image to permit display on devices or prints with lower dynamic range. Benefits One aim of HDR is to present a similar range of luminance to that experienced through the human visual system. The human eye, through non-linear response, adaptation of the iris, and other methods, adjusts constantly to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions. Most cameras are limited to a much narrower range of exposure values within a single image, due to the dynamic range of the capturing medium. With a limited dynamic range, tonal differences can be captured only within a certain range of brightness. Outside of this range, no details can be distinguished: when the tone being captured exceeds the range in bright areas, these tones appear as pure white, and when the tone being captured does not meet the minimum threshold, these tones appear as pure black. Images captured with non-HDR cameras that have a limited exposure range (low dynamic range, LDR), may lose detail in highlights or shadows. 
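As a rough illustration of the stop-based measure just described, the snippet below computes a dynamic range in EV from a luminance ratio. The luminance figures in the example are hypothetical placeholders, not values taken from this article.

```python
import math

def dynamic_range_stops(max_luminance: float, min_luminance: float) -> float:
    """Dynamic range in photographic stops (EV): the base-2 log of the luminance ratio."""
    return math.log2(max_luminance / min_luminance)

# Hypothetical example: a scene spanning a 100,000:1 luminance ratio covers about
# 16.6 stops, far more than a typical single exposure can record.
print(round(dynamic_range_stops(100_000.0, 1.0), 1))  # -> 16.6
```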
Modern CMOS image sensors have improved dynamic range and can often capture a wider range of tones in a single exposure, reducing the need to perform multi-exposure HDR. Color film negatives and slides consist of multiple film layers that respond to light differently. Original film (especially negatives versus transparencies or slides) features a very high dynamic range (in the order of 8 for negatives and 4 to 4.5 for positive transparencies). Multi-exposure HDR is used in photography and also in extreme dynamic range applications such as welding or automotive work. In security cameras the term "wide dynamic range" is used instead of HDR. Limitations A fast-moving subject, or camera movement between the multiple exposures, will generate a "ghost" effect or a staggered-blur strobe effect due to the merged images not being identical. Unless the subject is static and the camera mounted on a tripod, there may be a tradeoff between extended dynamic range and sharpness. Sudden changes in the lighting conditions (strobed LED light) can also interfere with the desired results, by producing one or more HDR layers that do not have the luminosity expected by an automated HDR system, though one might still be able to produce a reasonable HDR image manually in software by rearranging the image layers to merge in order of their actual luminosity. Because of the nonlinearity of some sensors, image artifacts can be common. Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration, and color calibration affect the resulting high-dynamic-range images. Process High-dynamic-range photographs are generally composites of multiple standard dynamic range images, often captured using exposure bracketing. Afterwards, photo manipulation software merges the input files into a single HDR image, which is then also tone mapped in accordance with the limitations of the planned output or display. Capturing multiple images (exposure bracketing) Any camera that allows manual exposure control can perform multi-exposure HDR image capture, although one equipped with automatic exposure bracketing (AEB) facilitates the process. Some cameras have an AEB feature that spans a far greater dynamic range than others, from ±0.6 EV in simpler cameras to ±18 EV in top professional cameras. The exposure value (EV) refers to the amount of light applied to the light-sensitive detector, whether film or a digital sensor such as a CCD. An increase or decrease of one stop is defined as a doubling or halving of the amount of light captured. Revealing detail in the darkest of shadows requires an increased EV, while preserving detail in very bright situations requires very low EVs. EV is controlled using one of two photographic controls: varying either the size of the aperture or the exposure time. A set of images with multiple EVs intended for HDR processing should be captured only by altering the exposure time; altering the aperture size also would affect the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image. Multi-exposure HDR photography generally is limited to still scenes because any movement between successive images will impede or prevent success in combining them afterward. Also, because the photographer must capture three or more images to obtain the desired luminance range, taking such a full set of images takes extra time.
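A minimal sketch of how such a bracket might be planned in software, following the rule above that only the exposure time is varied (each stop doubles or halves the shutter time). The base exposure, step size, frame count, and function name are arbitrary assumptions for illustration, not part of any camera's actual AEB interface.

```python
def bracketed_shutter_speeds(base_seconds: float, step_ev: float, frames: int) -> list[float]:
    """Shutter times for a symmetric exposure bracket around a base exposure.

    Each +1 EV step doubles the exposure time and each -1 EV step halves it, so only
    the shutter speed varies and the depth of field stays constant across the set.
    """
    half = frames // 2
    return [base_seconds * 2.0 ** (step_ev * i) for i in range(-half, half + 1)]

# Hypothetical 5-frame bracket in 2 EV steps around a 1/60 s base exposure:
for t in bracketed_shutter_speeds(1 / 60, 2.0, 5):
    print(f"{t:.5f} s")  # 1/960, 1/240, 1/60, 1/15, and 4/15 s
```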
Photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is advised to minimize framing differences between exposures. Merging the images into an HDR image Tonal information and details from shadow areas can be recovered from images that are deliberately overexposed (i.e., with positive EV compared to the correct scene exposure), while similar tonal information from highlight areas can be recovered from images that are deliberately underexposed (negative EV). The process of selecting and extracting shadow and highlight information from these over/underexposed images and then combining them with image(s) that are exposed correctly for the overall scene is known as exposure fusion. Exposure fusion can be performed manually, relying on the HDR operator's judgment, experience, and training, but usually fusion is performed automatically by software. Storing Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed using mathematical functions such as power laws or logarithms, or stored as linear floating point values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges. Unlike traditional images, HDR images often do not use fixed ranges per color channel, which allows them to represent many more colors over a much wider dynamic range. For that purpose, they do not use integer values to represent the single color channels (e.g., 0–255 in an 8-bit-per-pixel interval for red, green, and blue) but instead use a floating point representation. Common values are 16-bit (half precision) or 32-bit floating-point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10 to 12 bits (1,024 to 4,096 values) for luminance and 8 bits (256 values) for chrominance without introducing any visible quantization artifacts. Tone mapping Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDR files by the same software package. Tone mapping is often needed because the dynamic range that can be displayed is often lower than the dynamic range of the captured or processed image. HDR displays can receive a higher dynamic range signal than SDR displays, reducing the need for tone mapping.
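The merge-then-tone-map pipeline described in this section can be sketched as follows. This is a simplified illustration using NumPy, with a basic hat-shaped pixel weighting and a global Reinhard-style tone curve; real HDR software adds image alignment, camera response recovery, ghost removal, and local tone mapping operators. The function and variable names are invented for the example, not taken from any particular package.

```python
import numpy as np

def merge_to_radiance(images: list[np.ndarray], exposure_times: list[float]) -> np.ndarray:
    """Combine aligned, linear exposures (values in [0, 1]) into a relative radiance map.

    Each frame is divided by its exposure time to bring it back to a common
    scene-referred scale; a hat-shaped weight discounts clipped shadows and highlights.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at pure black/white, 1 at mid-grey
        numerator += weight * (img / t)
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)

def tone_map_reinhard(radiance: np.ndarray) -> np.ndarray:
    """Simple global Reinhard-style curve: compresses arbitrary radiance into [0, 1)."""
    return radiance / (1.0 + radiance)

# Hypothetical use with three aligned exposures, shortest to longest:
# ldr = tone_map_reinhard(merge_to_radiance([im_fast, im_mid, im_slow], [1/250, 1/60, 1/15]))
```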
Types of HDR HDR can be done via several methods: DOL: Digital overlap BME: Binned multiplexed exposure SME: Spatially multiplexed exposure QBC: Quad Bayer Coding Examples One example combines four standard dynamic range images to produce three resulting tone mapped images; another shows a scene with a very wide dynamic range. Devices Post-capture software Several software applications are available on the PC, Mac, and Linux platforms for producing HDR files and tone mapped images. Notable titles include: Adobe Photoshop Affinity Photo Aurora HDR Dynamic Photo HDR EasyHDR GIMP HDR PhotoStudio Luminance HDR Nik Collection HDR Efex Pro Oloneo PhotoEngine Photomatix Pro PTGui SNS-HDR Photography Several camera manufacturers offer built-in multi-exposure HDR features. For example, the Pentax K-7 DSLR has an HDR mode that makes 3 or 5 exposures and outputs (only) a tone mapped HDR image in a JPEG file. The Canon PowerShot G12, Canon PowerShot S95, and Canon PowerShot S100 offer similar features in a smaller format. Nikon's approach is called 'Active D-Lighting', which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the emphasis being on creating a realistic effect. Some smartphones provide HDR modes for their cameras, and most mobile platforms have apps that provide multi-exposure HDR picture taking. Google released an HDR+ mode for the Nexus 5 and Nexus 6 smartphones in 2014, which automatically captures a series of images and combines them into a single still image, as detailed by Marc Levoy. Unlike traditional HDR, Levoy's implementation of HDR+ uses multiple images underexposed by using a short shutter speed, which are then aligned and averaged per pixel, improving dynamic range and reducing noise. By selecting the sharpest image as the baseline for alignment, the effect of camera shake is reduced. Some of the sensors on modern phones and cameras may combine two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing. Videography Although not as established as for still photography capture, it is also possible to capture and combine multiple images for each frame of a video in order to increase the dynamic range captured by the camera. This can be done via multiple methods: Creating a time-lapse of individual images created via the multi-exposure HDR technique. Taking two differently exposed images consecutively by cutting the frame rate in half. Taking two differently exposed images simultaneously by cutting the resolution in half. Taking two differently exposed images simultaneously with full resolution and frame rate via a sensor with dual gain architecture, for example the Arri Alexa's sensor or Samsung sensors with Smart-ISO Pro. Some cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. In 2020, Qualcomm announced the Snapdragon 888, a mobile SoC able to do computational multi-exposure HDR video capture in 4K and also to record it in a format compatible with HDR displays. In 2021, the Xiaomi Mi 11 Ultra smartphone became able to do computational multi-exposure HDR for video capture.
This is usually termed a wide dynamic range (WDR) function Examples include CarCam Tiny, Prestige DVR-390, and DVR-478. History Mid-19th century The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive. Mid-20th century Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took five days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow. Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System. With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods. Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:108. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s. Late 20th century Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of HDR video image, in 1986, by interposing a matricial LCD screen in front of the camera's image sensor, increasing the sensors dynamic by five stops. The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the Technion in Israel, led by Oliver Hilsenrath and Yehoshua Y. Zeevi. Technion researchers filed for a patent on this concept in 1991, and several related patents in 1992 and 1993. In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera that combined two images captured successively by a sensor or simultaneously by two sensors of the camera. This process is known as bracketing used for a video stream. 
In 1991, the first commercial video camera was introduced that performed real-time capturing of multiple images with different exposures, and producing an HDR video image, by Hymatom, licensee of Georges Cornuéjols. Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal-to-noise ratio. In 1993, another commercial medical camera producing an HDR video image, by the Technion. Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993 resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard. On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (high dynamic range + graphic) image of STS-95 on the launch pad at NASA's Kennedy Space Center. It consisted of four film images of the space shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum. The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Lab. Mann's method involved a two-step procedure: First, generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods). Second, convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations. 21st century In February 2001, the Dynamic Ranger technique was demonstrated, using multiple photos with different exposure levels to accomplish high dynamic range similar to the naked eye. In the early 2000s, several scholarly research efforts used consumer-grade sensors and cameras. A few companies such as RED and Arri have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture time-sequential HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the "x" channel. The "x" channel can be merged with the normal channel in post production software. The Arri Alexa camera uses a dual-gain architecture to generate an HDR image from two exposures captured at the same time. With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010, the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer grade HD video cameras. 
Similar methods have been described in the academic literature in 2001 and 2007. In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping. On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform. See also Comparison of graphics file formats HDRi (data format) High-dynamic-range rendering High-dynamic-range television JPEG XT Logluv TIFF OpenEXR RGBE image format scRGB Wide dynamic range References Benjamin Sarao (1999). Ben Sarao, Trenton, NJ, USA: Space Shuttle Discovery, pages 16–17 (English ed.). Victor Hasselblad AB, Goteborg, Sweden. ISSN 0282-5449 External links Articles containing video clips Computer graphics High dynamic range High-dynamic-range imaging Photographic techniques
Multi-exposure HDR capture
[ "Engineering" ]
4,343
[ "Electrical engineering", "High dynamic range" ]
173,278
https://en.wikipedia.org/wiki/Yrj%C3%B6%20V%C3%A4is%C3%A4l%C3%A4
Yrjö Väisälä (6 September 1891 – 21 July 1971) was a Finnish astronomer and physicist. His main contributions were in the field of optics. He was also active in geodesy, astronomy, and optical metrology. He was affectionately nicknamed the Wizard of Tuorla (after the Tuorla Observatory and its optics laboratory), and a Finnish book of the same title describes his work. His discoveries include 128 asteroids and 3 comets. His brothers were mathematician Kalle Väisälä (1893–1968) and meteorologist Vilho Väisälä (1889–1969). His daughter Marja Väisälä (1916–2011) was an astronomer and discoverer of minor planets. Väisälä was a fervent supporter of Esperanto, presiding over the Internacia Scienca Asocio Esperantista ("International Association of Esperanto Scientists") in 1968. Optician He developed several methods for measuring the quality of optical elements, as well as many practical methods for manufacturing them. This allowed the construction of some of the earliest high-quality Schmidt cameras, in particular a "field-flattened" version known as the Schmidt–Väisälä camera. Väisälä had arrived at an essentially identical design independently of Bernhard Schmidt but left it unpublished, mentioning it only in lecture notes in 1924 with the footnote "problematic spherical focal surface". Once he saw Schmidt's publication, he promptly went ahead and "solved" the field-flattening problem by placing a doubly convex lens slightly in front of the film holder – back in the 1930s, astronomical films were glass plates (also see photographic plates). The resulting system is known as the Schmidt–Väisälä camera or sometimes as the Väisälä camera. (This solution is not perfect, as images of different colour end up at slightly different places.) Väisälä also made a small test unit of seven mirrors mosaicked on a stiff steel backing frame; however, it proved impossible to stabilize as a "just adjust and forget" structure, and the next time such a design was attempted it was with active controls, on the Multiple Mirror Telescope. Geodesy In the 1920s and 1930s Finland was carrying out its first precision triangulation chain measurements, and to create long-distance vertices Väisälä proposed the use of flash-lights on high-altitude balloons or on large fireworks rockets. The idea was to measure the exact position of the flash against background stars and, by precisely knowing one camera location, to derive an accurate location for another camera. This required better wide-field cameras than were available, and was discarded. Later, Väisälä developed a method of multiplying an optical length reference using white light interferometry to precisely determine the lengths of baselines used in triangulation chains. Several such baselines were created in Finland for the second high-precision triangulation campaign in the 1950s and 1960s. GPS later made these methods largely obsolete. The Nummela Standard Baseline established by Väisälä is still maintained by the Finnish Geodetic Institute in Nummela for the calibration of other distance measurement instruments. Väisälä also developed excellent tools for measuring the position of the Earth's rotational axis by building so-called zenith telescopes, and in the 1960s Tuorla Observatory was in the top rank of North Pole position tracking measurements. In the 1980s radio astronomy became able to replace this kind of Earth rotation tracking by referencing measurements against the "non-moving background" of quasars. For these zenith telescopes, Väisälä also made one of the first experiments with liquid mercury mirrors.
(Such a mirror needs extremely smooth rotational speeds, which were only achieved in the late 1990s.) Astronomer The big Schmidt–Väisälä telescope he built was used at the University of Turku to search for asteroids and comets. His research group discovered 7 comets and 807 asteroids. For this rather massive photographic survey work, Väisälä also developed a protocol of taking two exposures on the same plate some 2–3 hours apart and offsetting the images slightly. Any dot-pair that differed from the fixed background was a moving object and deserved follow-up photographs. This method halved film consumption compared to the method of "blink comparing", in which plates receive single exposures and are compared by rapidly showing the first and second exposures to a human operator. (Blink comparison was used, for example, to find Pluto.) Yrjö Väisälä is credited by the Minor Planet Center with the discovery of 128 asteroids (see below) during 1935–1944. He used to name them after personal friends who were having birthdays. One of them was Professor Matti Herman Palomaa, after whom the asteroid 1548 Palomaa was named. For this reason the Palomar Mountain Observatory in California has never had an asteroid bearing its name – the rules for naming asteroids state that the names have to differ from each other by more than one letter. Besides minor planets, he also discovered three comets: the parabolic comet C/1944 H1, observed in 1944 and 1945, as well as the two short-period comets 40P/Väisälä, a Jupiter-family comet, and C/1942 EA, a Halley-type and near-Earth comet. Together with Liisi Oterma he co-discovered the Jupiter-family comet 139P/Väisälä–Oterma, which was first classified as an asteroid and received the provisional designation "1939 TN". Honors and awards The University of Turku astronomy department is known as VISPA (Väisälä Institute for Space Physics and Astronomy) in honour of its founder. The lunar crater Väisälä is named after him, and so are the minor planets 1573 Väisälä and 2804 Yrjö. List of discovered minor planets Gallery Notes References External links Turun Ursa 1891 births 1971 deaths 20th-century astronomers Finnish astronomers Discoverers of asteroids Discoverers of comets Finnish Esperantists Finnish geodesists Optical engineers People associated with the University of Turku People from Joensuu Astronomy-optics society Astronomical instrument makers
Yrjö Väisälä
[ "Astronomy" ]
1,242
[ "Astronomical instrument makers", "Astronomical instruments" ]
173,283
https://en.wikipedia.org/wiki/Poly%28methyl%20methacrylate%29
Poly(methyl methacrylate) (PMMA) is a synthetic polymer derived from methyl methacrylate. It is a transparent thermoplastic, used as an engineering plastic. PMMA is also known as acrylic, acrylic glass, as well as by the trade names and brands Crylux, Hesalite, Plexiglas, Acrylite, Lucite, and Perspex, among several others (see below). This plastic is often used in sheet form as a lightweight or shatter-resistant alternative to glass. It can also be used as a casting resin, in inks and coatings, and for many other purposes. It is often technically classified as a type of glass, in that it is a non-crystalline vitreous substance—hence its occasional historic designation as acrylic glass. History The first acrylic acid was created in 1843. Methacrylic acid, derived from acrylic acid, was formulated in 1865. The reaction between methacrylic acid and methanol results in the ester methyl methacrylate. The polymer was developed in 1928 in several different laboratories by many chemists, such as William R. Conn, Otto Röhm, and Walter Bauer, and first brought to market in 1933 by the German Röhm & Haas AG (as of January 2019, part of Evonik Industries) and its partner and former U.S. affiliate Rohm and Haas Company under the trademark Plexiglas. Polymethyl methacrylate was discovered in the early 1930s by British chemists Rowland Hill and John Crawford at Imperial Chemical Industries (ICI) in the United Kingdom. ICI registered the product under the trademark Perspex. About the same time, chemist and industrialist Otto Röhm of Röhm and Haas AG in Germany attempted to produce safety glass by polymerizing methyl methacrylate between two layers of glass. The polymer separated from the glass as a clear plastic sheet, which Röhm gave the trademarked name Plexiglas in 1933. Both Perspex and Plexiglas were commercialized in the late 1930s. In the United States, E.I. du Pont de Nemours & Company (now DuPont Company) subsequently introduced its own product under the trademark Lucite. In 1936 ICI Acrylics (now Lucite International) began the first commercially viable production of acrylic safety glass. During World War II both Allied and Axis forces used acrylic glass for submarine periscopes and aircraft windscreens, canopies, and gun turrets. Scraps of acrylic were also used to make clear pistol grips for the M1911A1 pistol and clear handle grips for the M1 bayonet and theater knives, so that soldiers could put small photos of loved ones or pin-up pictures inside; these were called "Sweetheart Grips" or "Pin-up Grips". Civilian applications followed after the war. Names Common orthographic stylings include polymethyl methacrylate and polymethylmethacrylate. The full IUPAC chemical name is poly(methyl 2-methylpropenoate); it is a common mistake to use "an" instead of "en" (i.e., propanoate for propenoate). Although PMMA is often called simply "acrylic", acrylic can also refer to other polymers or copolymers containing polyacrylonitrile. Notable trade names and brands include Acrylite, Altuglas, Astariglas, Cho Chen, Crystallite, Cyrolite, Hesalite (when used in Omega watches), Lucite, Optix, Oroglas, PerClax, Perspex, Plexiglas, R-Cast, and Sumipex. PMMA is an economical alternative to polycarbonate (PC) when tensile strength, flexural strength, transparency, polishability, and UV tolerance are more important than impact strength, chemical resistance, and heat resistance.
Additionally, PMMA does not contain the potentially harmful bisphenol-A subunits found in polycarbonate and is a far better choice for laser cutting. It is often preferred because of its moderate properties, easy handling and processing, and low cost. Non-modified PMMA behaves in a brittle manner when under load, especially under an impact force, and is more prone to scratching than conventional inorganic glass, but modified PMMA is sometimes able to achieve high scratch and impact resistance. Properties PMMA is a strong, tough, and lightweight material. It has a density of 1.17–1.20 g/cm³, approximately half that of glass (generally 2.2–2.53 g/cm³, depending on composition). It also has good impact strength, higher than both glass and polystyrene, but significantly lower than polycarbonate and some engineered polymers. PMMA ignites and burns, forming carbon dioxide, water, carbon monoxide, and low-molecular-weight compounds, including formaldehyde. PMMA transmits up to 92% of visible light, and gives a reflection of about 4% from each of its surfaces due to its refractive index (1.4905 at 589.3 nm). It filters ultraviolet (UV) light at wavelengths below about 300 nm (similar to ordinary window glass). Some manufacturers add coatings or additives to PMMA to improve absorption in the 300–400 nm range. PMMA passes infrared light of up to 2,800 nm and blocks IR of longer wavelengths up to 25,000 nm. Colored PMMA varieties allow specific IR wavelengths to pass while blocking visible light (for remote control or heat sensor applications, for example). PMMA swells and dissolves in many organic solvents; it also has poor resistance to many other chemicals due to its easily hydrolyzed ester groups. Nevertheless, its environmental stability is superior to that of most other plastics such as polystyrene and polyethylene, and therefore it is often the material of choice for outdoor applications. PMMA has a maximum water absorption ratio of 0.3–0.4% by weight. Tensile strength decreases with increased water absorption. Its coefficient of thermal expansion is relatively high at (5–10)×10⁻⁵ °C⁻¹. The Futuro house was made of fibreglass-reinforced polyester plastic, polyester-polyurethane, and poly(methyl methacrylate); one of them was found to be degraded by cyanobacteria and Archaea. PMMA can be joined using cyanoacrylate cement (commonly known as superglue), with heat (welding), or by using chlorinated solvents such as dichloromethane or trichloromethane (chloroform) to dissolve the plastic at the joint, which then fuses and sets, forming an almost invisible weld. Scratches may easily be removed by polishing or by heating the surface of the material. Laser cutting may be used to form intricate designs from PMMA sheets. PMMA vaporizes to gaseous compounds (including its monomers) upon laser cutting, so a very clean cut is made, and cutting is performed very easily. However, pulsed laser cutting introduces high internal stresses, which on exposure to solvents produce undesirable "stress-crazing" at the cut edge and several millimetres deep. Even ammonia-based glass cleaner and almost everything short of soap-and-water produces similar undesirable crazing, sometimes over the entire surface of the cut parts, at great distances from the stressed edge. Annealing the PMMA sheet or parts is therefore an obligatory post-processing step when intending to chemically bond lasercut parts together. In the majority of applications, PMMA will not shatter. Rather, it breaks into large dull pieces.
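The transmission and per-surface reflection figures quoted above follow from the refractive index via the normal-incidence Fresnel equation. The short Python sketch below is an illustration added here, not part of the original article; it checks that n ≈ 1.4905 reproduces roughly 4% reflection per surface and roughly 92% transmission, ignoring absorption and multiple internal reflections.

```python
# Sanity-check the quoted figures for PMMA from its refractive index,
# using the normal-incidence Fresnel reflectance R = ((n2 - n1) / (n2 + n1))^2.
# Illustrative sketch only: absorption and multiple reflections are ignored.
n_air = 1.0
n_pmma = 1.4905  # refractive index quoted above (at 589.3 nm)

R = ((n_pmma - n_air) / (n_pmma + n_air)) ** 2   # reflectance of one surface
T = (1 - R) ** 2                                 # transmission through front and back surfaces

print(f"Reflection per surface: {R:.1%}")   # ~3.9%, close to the ~4% quoted
print(f"Transmission of a sheet: {T:.1%}")  # ~92%, matching the quoted value
```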
Since PMMA is softer and more easily scratched than glass, scratch-resistant coatings are often added to PMMA sheets to protect it (as well as serving other possible functions). Pure poly(methyl methacrylate) homopolymer is rarely sold as an end product, since it is not optimized for most applications. Rather, modified formulations with varying amounts of other comonomers, additives, and fillers are created for uses where specific properties are required. For example: A small amount of acrylate comonomer is routinely used in PMMA grades destined for heat processing, since this stabilizes the polymer against depolymerization ("unzipping") during processing. Comonomers such as butyl acrylate are often added to improve impact strength. Comonomers such as methacrylic acid can be added to increase the glass transition temperature of the polymer for higher temperature use such as in lighting applications. Plasticizers may be added to improve processing properties, lower the glass transition temperature, improve impact properties, and modify mechanical properties such as elastic modulus. Dyes may be added to give color for decorative applications, or to protect against (or filter) UV light. Fillers may be substituted to reduce cost. Synthesis and processing PMMA is routinely produced by emulsion polymerization, solution polymerization, and bulk polymerization. Generally, radical initiation is used (including living polymerization methods), but anionic polymerization of PMMA can also be performed. The glass transition temperature (Tg) of atactic PMMA is approximately 105 °C. The Tg values of commercial grades of PMMA span a wide range because of the vast number of commercial compositions that are copolymers with co-monomers other than methyl methacrylate. PMMA is thus an organic glass at room temperature; i.e., it is below its Tg. The forming temperature starts at the glass transition temperature and goes up from there. All common molding processes may be used, including injection molding, compression molding, and extrusion. The highest quality PMMA sheets are produced by cell casting, but in this case, the polymerization and molding steps occur concurrently. The strength of the material is higher than that of molding grades owing to its extremely high molecular mass. Rubber toughening has been used to increase the toughness of PMMA to overcome its brittle behavior in response to applied loads. Applications Being transparent and durable, PMMA is a versatile material and has been used in a wide range of fields and applications such as rear-lights and instrument clusters for vehicles, appliances, and lenses for glasses. PMMA in the form of sheets affords shatter-resistant panels for building windows, skylights, bulletproof security barriers, signs and displays, sanitary ware (bathtubs), LCD screens, furniture and many other applications. It is also used in coatings; polymers based on MMA provide outstanding stability against environmental conditions with reduced emission of VOCs. Methacrylate polymers are used extensively in medical and dental applications where purity and stability are critical to performance. Glass substitute PMMA is commonly used for constructing residential and commercial aquariums. Designers started building large aquariums when poly(methyl methacrylate) could be used. It is less often used in other building types due to incidents such as the Summerland disaster.
PMMA is used for viewing ports and even complete pressure hulls of submersibles, such as the Alicia submarine's viewing sphere and the window of the bathyscaphe Trieste. PMMA is used in the lenses of exterior lights of automobiles. Spectator protection in ice hockey rinks is made from PMMA. Historically, PMMA was an important improvement in the design of aircraft windows, making possible such designs as the bombardier's transparent nose compartment in the Boeing B-17 Flying Fortress. Modern aircraft transparencies often use stretched acrylic plies. Police vehicles for riot control often have the regular glass replaced with PMMA to protect the occupants from thrown objects. PMMA is an important material in the making of certain lighthouse lenses. PMMA was used for the roofing of the compound in the Olympic Park for the 1972 Summer Olympics in Munich. It enabled a light and translucent construction of the structure. PMMA (under the brand name "Lucite") was used for the ceiling of the Houston Astrodome. Daylight redirection Laser cut acrylic panels have been used to redirect sunlight into a light pipe or tubular skylight and, from there, to spread it into a room. Their developers, Veronica Garcia Hansen, Ken Yeang, and Ian Edmonds, were awarded the Far East Economic Review Innovation Award in bronze for this technology in 2003. Because attenuation is quite strong over distances of more than one meter (more than 90% intensity loss for a 3000 K source), acrylic broadband light guides are mostly limited to decorative uses. Pairs of acrylic sheets with a layer of microreplicated prisms between the sheets can have reflective and refractive properties that let them redirect part of incoming sunlight depending on its angle of incidence. Such panels act as miniature light shelves. Such panels have been commercialized for purposes of daylighting, to be used as a window or a canopy such that sunlight descending from the sky is directed to the ceiling or into the room rather than to the floor. This can lead to a higher illumination of the back part of a room, in particular when combined with a white ceiling, while having a slight impact on the view to the outside compared to normal glazing. Medicine PMMA has a good degree of compatibility with human tissue, and it is used in the manufacture of rigid intraocular lenses which are implanted in the eye when the original lens has been removed in the treatment of cataracts. This compatibility was discovered by the English ophthalmologist Harold Ridley in WWII RAF pilots, whose eyes had been riddled with PMMA splinters coming from the side windows of their Supermarine Spitfire fighters – the plastic scarcely caused any rejection, compared to glass splinters coming from aircraft such as the Hawker Hurricane. Ridley had a lens manufactured by the Rayner company (Brighton & Hove, East Sussex) made from Perspex polymerised by ICI. On 29 November 1949, Ridley implanted the first intraocular lens at St Thomas' Hospital in London. In particular, acrylic-type lenses are useful for cataract surgery in patients who have recurrent ocular inflammation (uveitis), as acrylic material induces less inflammation. Eyeglass lenses are commonly made from PMMA. Historically, hard contact lenses were frequently made of this material. Soft contact lenses are often made of a related polymer, where acrylate monomers containing one or more hydroxyl groups make them hydrophilic.
In orthopedic surgery, PMMA bone cement is used to affix implants and to remodel lost bone. It is supplied as a powder with liquid methyl methacrylate (MMA). Although PMMA is biologically compatible, MMA is considered to be an irritant and a possible carcinogen. PMMA has also been linked to cardiopulmonary events in the operating room due to hypotension. Bone cement acts like a grout and not so much like a glue in arthroplasty. Although sticky, it does not bond to either the bone or the implant; rather, it primarily fills the spaces between the prosthesis and the bone, preventing motion. A disadvantage of this bone cement is that it heats up while setting, which may cause thermal necrosis of neighboring tissue. A careful balance of initiators and monomers is needed to reduce the rate of polymerization, and thus the heat generated. In cosmetic surgery, tiny PMMA microspheres suspended in a biological fluid are injected as a soft-tissue filler under the skin to reduce wrinkles or scars permanently. PMMA as a soft-tissue filler was widely used at the beginning of the century to restore volume in patients with HIV-related facial wasting. PMMA is used illegally to shape muscles by some bodybuilders. Plombage is an outdated treatment of tuberculosis in which the pleural space around an infected lung was filled with PMMA balls, in order to compress and collapse the affected lung. Emerging biotechnology and biomedical research use PMMA to create microfluidic lab-on-a-chip devices, which require 100 micrometre-wide geometries for routing liquids. These small geometries are amenable to PMMA-based biochip fabrication processes, and the material offers moderate biocompatibility. Bioprocess chromatography columns use cast acrylic tubes as an alternative to glass and stainless steel. These are pressure rated and satisfy stringent requirements of materials for biocompatibility, toxicity, and extractables. Dentistry Due to its aforementioned biocompatibility, poly(methyl methacrylate) is a commonly used material in modern dentistry, particularly in the fabrication of dental prosthetics, artificial teeth, and orthodontic appliances. Acrylic prosthetic construction: Pre-polymerized, powdered PMMA spheres are mixed with liquid methyl methacrylate monomer, benzoyl peroxide (initiator), and N,N-dimethyl-p-toluidine (accelerator), and placed under heat and pressure to produce a hardened polymerized PMMA structure. Through the use of injection molding techniques, wax-based designs with artificial teeth set in predetermined positions, built on gypsum stone models of patients' mouths, can be converted into functional prosthetics used to replace missing dentition. A PMMA polymer and methyl methacrylate monomer mix is then injected into a flask containing a gypsum mold of the previously designed prosthesis, and placed under heat to initiate the polymerization process. Pressure is used during the curing process to minimize polymerization shrinkage, ensuring an accurate fit of the prosthesis. Though other methods of polymerizing PMMA for prosthetic fabrication exist, such as chemical and microwave resin activation, the previously described heat-activated resin polymerization technique is the most commonly used due to its cost effectiveness and minimal polymerization shrinkage. Artificial teeth: While denture teeth can be made of several different materials, PMMA is a material of choice for the manufacturing of artificial teeth used in dental prosthetics.
Mechanical properties of the material allow for heightened control of aesthetics, easy surface adjustments, decreased risk of fracture when in function in the oral cavity, and minimal wear against opposing teeth. Additionally, since the bases of dental prosthetics are often constructed using PMMA, adherence of PMMA denture teeth to PMMA denture bases is unparalleled, leading to the construction of a strong and durable prosthetic. Art and aesthetics Acrylic paint essentially consists of PMMA suspended in water; however since PMMA is hydrophobic, a substance with both hydrophobic and hydrophilic groups needs to be added to facilitate the suspension. Modern furniture makers, especially in the 1960s and 1970s, seeking to give their products a space age aesthetic, incorporated Lucite and other PMMA products into their designs, especially office chairs. Many other products (for example, guitars) are sometimes made with acrylic glass to make the commonly opaque objects translucent. Perspex has been used as a surface to paint on, for example by Salvador Dalí. Diasec is a process which uses acrylic glass as a substitute for normal glass in picture frames. This is done for its relatively low cost, light weight, shatter-resistance, aesthetics and because it can be ordered in larger sizes than standard picture framing glass. As early as 1939, Los Angeles-based Dutch sculptor Jan De Swart experimented with samples of Lucite sent to him by DuPont; De Swart created tools to work the Lucite for sculpture and mixed chemicals to bring about certain effects of color and refraction. From approximately the 1960s onward, sculptors and glass artists such as Jan Kubíček, Leroy Lamis, and Frederick Hart began using acrylics, especially taking advantage of the material's flexibility, light weight, cost and its capacity to refract and filter light. In the 1950s and 1960s, Lucite was an extremely popular material for jewelry, with several companies specialized in creating high-quality pieces from this material. Lucite beads and ornaments are still sold by jewelry suppliers. Acrylic sheets are produced in dozens of standard colors, most commonly sold using color numbers developed by Rohm & Haas in the 1950s. Methyl methacrylate "synthetic resin" for casting (simply the bulk liquid chemical) may be used in conjunction with a polymerization catalyst such as methyl ethyl ketone peroxide (MEKP), to produce hardened transparent PMMA in any shape, from a mold. Objects like insects or coins, or even dangerous chemicals in breakable quartz ampules, may be embedded in such "cast" blocks, for display and safe handling. Other uses PMMA, in the commercial form Technovit 7200 is used vastly in the medical field. It is used for plastic histology, electron microscopy, as well as many more uses. PMMA has been used to create ultra-white opaque membranes that are flexible and switch appearance to transparent when wet. Acrylic is used in tanning beds as the transparent surface that separates the occupant from the tanning bulbs while tanning. The type of acrylic used in tanning beds is most often formulated from a special type of polymethyl methacrylate, a compound that allows the passage of ultraviolet rays. Sheets of PMMA are commonly used in the sign industry to make flat cut out letters in thicknesses typically varying from . These letters may be used alone to represent a company's name and/or logo, or they may be a component of illuminated channel letters. 
Acrylic is also used extensively throughout the sign industry as a component of wall signs where it may be a backplate, painted on the surface or the backside, a faceplate with additional raised lettering or even photographic images printed directly to it, or a spacer to separate sign components. PMMA was used in Laserdisc optical media. (CDs and DVDs use both acrylic and polycarbonate for impact resistance). It is used as a light guide for the backlights in TFT-LCDs. Plastic optical fiber used for short-distance communication is made from PMMA, and perfluorinated PMMA, clad with fluorinated PMMA, in situations where its flexibility and cheaper installation costs outweigh its poor heat tolerance and higher attenuation versus glass fiber. PMMA, in a purified form, is used as the matrix in laser dye-doped organic solid-state gain media for tunable solid state dye lasers. In semiconductor research and industry, PMMA aids as a resist in the electron beam lithography process. A solution consisting of the polymer in a solvent is used to spin coat silicon and other semiconducting and semi-insulating wafers with a thin film. Patterns on this can be made by an electron beam (using an electron microscope), deep UV light (shorter wavelength than the standard photolithography process), or X-rays. Exposure to these creates chain scission or (de-cross-linking) within the PMMA, allowing for the selective removal of exposed areas by a chemical developer, making it a positive photoresist. PMMA's advantage is that it allows for extremely high resolution patterns to be made. Smooth PMMA surface can be easily nanostructured by treatment in oxygen radio-frequency plasma and nanostructured PMMA surface can be easily smoothed by vacuum ultraviolet (VUV) irradiation. PMMA is used as a shield to stop beta radiation emitted from radioisotopes. Small strips of PMMA are used as dosimeter devices during the Gamma Irradiation process. The optical properties of PMMA change as the gamma dose increases, and can be measured with a spectrophotometer. Blacklight-reactive UV tattoos may use tattoo ink made with PMMA microcapsules and fluorescent dyes. In the 1960s, luthier Dan Armstrong developed a line of electric guitars and basses whose bodies were made completely of acrylic. These instruments were marketed under the Ampeg brand. Ibanez and B.C. Rich have also made acrylic guitars. Ludwig-Musser makes a line of acrylic drums called Vistalites, well known as being used by Led Zeppelin drummer John Bonham. Artificial nails in the "acrylic" type often include PMMA powder. Some modern briar, and occasionally meerschaum, tobacco pipes sport stems made of Lucite. PMMA technology is utilized in roofing and waterproofing applications. By incorporating a polyester fleece sandwiched between two layers of catalyst-activated PMMA resin, a fully reinforced liquid membrane is created in situ. PMMA is a widely used material to create deal toys and financial tombstones. PMMA is used by the Sailor Pen Company of Kure, Japan, in their standard models of gold-nib fountain pens, specifically as the cap and body material. See also Cast acrylic Organic laser Organic photonics Polycarbonate References External links Perspex Technical Properties Perspex Material Safety Data Sheet (MSDS) Acrylate polymers Amorphous solids Biomaterials Commodity chemicals Dental materials Dielectrics Engineering plastic German inventions Optical materials Plastics Thermoplastics Transparent materials
Poly(methyl methacrylate)
[ "Physics", "Chemistry", "Biology" ]
5,281
[ "Biomaterials", "Physical phenomena", "Dental materials", "Commodity chemicals", "Products of chemical industry", "Unsolved problems in physics", "Optical phenomena", "Materials", "Optical materials", "Medical technology", "Transparent materials", "Dielectrics", "Amorphous solids", "Matter...
173,285
https://en.wikipedia.org/wiki/Equilateral%20triangle
An equilateral triangle is a triangle in which all three sides have the same length, and all three angles are equal. Because of these properties, the equilateral triangle is a regular polygon, occasionally known as the regular triangle. It is a special case of an isosceles triangle by the modern definition, giving it additional special properties. The equilateral triangle can be found in various tilings, and in polyhedrons such as the deltahedron and antiprism. It appears in real life in popular culture, architecture, and the study of stereochemistry, where it resembles the molecular shape known as the trigonal planar molecular geometry. Properties An equilateral triangle is a triangle that has three equal sides. It is a special case of an isosceles triangle in the modern definition, which states that an isosceles triangle is defined as having at least two equal sides. Based on the modern definition, this leads to an equilateral triangle in which one of the three sides may be considered its base. The follow-up definition above may result in more precise properties. For example, since the perimeter of an isosceles triangle is the sum of its two legs and base, the perimeter of an equilateral triangle is three times its side. The internal angles of an equilateral triangle are all equal, 60°. Because of these properties, the equilateral triangle is a regular polygon. The cevians of an equilateral triangle are all equal in length, and the median and angle bisector from a vertex coincide with the altitude, whichever side is chosen as the base. When the equilateral triangle is flipped across its altitude or rotated around its center by one-third of a full turn, its appearance is unchanged; it has the symmetry of a dihedral group of order six. Other properties are discussed below. Area The area of an equilateral triangle with edge length a is A = (√3/4)a². The formula may be derived from the formula for an isosceles triangle by the Pythagorean theorem: the altitude of a triangle is the square root of the difference of the squares of a side and half of the base. Since the base and the legs are equal, the height is h = √(a² − (a/2)²) = (√3/2)a. In general, the area of a triangle is half the product of its base and height. The formula for the area of an equilateral triangle can be obtained by substituting the altitude formula, giving A = (1/2)·a·(√3/2)a = (√3/4)a². Another way to obtain the area of an equilateral triangle is by using a trigonometric function: the area of a triangle is half the product of two sides and the sine of the included angle. Because all of the angles of an equilateral triangle are 60°, the formula is A = (1/2)a²·sin 60° = (√3/4)a², as desired. A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral. That is, for perimeter p and area A, the inequality A ≤ p²/(12√3) holds, with equality exactly for the equilateral triangle. Relationship with circles The radius of the circumscribed circle is R = a/√3, and the radius of the inscribed circle is half of the circumradius, r = a/(2√3). The theorem of Euler states that the distance t between the circumcenter and the incenter satisfies t² = R(R − 2r). As a corollary of this, the equilateral triangle has the smallest ratio of the circumradius to the inradius of any triangle; that is, R/r ≥ 2, with equality only for the equilateral triangle. Pompeiu's theorem states that, if P is an arbitrary point in the plane of an equilateral triangle ABC but not on its circumcircle, then there exists a triangle with sides of lengths PA, PB, and PC. That is, PA, PB, and PC satisfy the triangle inequality that the sum of any two of them is greater than the third.
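The side-length formulas above (area, height, circumradius, and inradius) are easy to check numerically. The following Python sketch is an illustration added here, not part of the original article; it simply evaluates those formulas and confirms the corollary R = 2r. The Pompeiu discussion continues after it.

```python
import math

def equilateral_metrics(a: float) -> dict:
    """Evaluate the formulas stated above for an equilateral triangle of side a."""
    return {
        "perimeter": 3 * a,
        "height": math.sqrt(3) / 2 * a,         # h = (sqrt(3)/2) * a
        "area": math.sqrt(3) / 4 * a ** 2,      # A = (sqrt(3)/4) * a^2
        "circumradius": a / math.sqrt(3),       # R = a / sqrt(3)
        "inradius": a / (2 * math.sqrt(3)),     # r = a / (2 sqrt(3)) = R / 2
    }

m = equilateral_metrics(1.0)
print(m)
# Euler's relation t^2 = R(R - 2r) gives t = 0 here, because R = 2r exactly:
print(math.isclose(m["circumradius"], 2 * m["inradius"]))  # True
```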
If P is on the circumcircle then the sum of the two smaller ones equals the longest and the triangle degenerates into a line; this case is known as Van Schooten's theorem. A packing problem asks for the smallest equilateral triangle into which a given number of unit circles can be packed. Optimal solutions are known for small numbers of circles, while larger cases remain open conjectures. Other mathematical properties Morley's trisector theorem states that, in any triangle, the three points of intersection of the adjacent angle trisectors form an equilateral triangle. Viviani's theorem states that, for any interior point P in an equilateral triangle with distances d, e, and f from the sides and altitude h, d + e + f = h, independent of the location of P. An equilateral triangle is the only triangle with integer sides and three rational angles as measured in degrees, the only acute triangle that is similar to its orthic triangle (with vertices at the feet of the altitudes), and the only triangle whose Steiner inellipse is a circle (specifically, the incircle). The triangle of the largest area of all those inscribed in a given circle is equilateral, and the triangle of the smallest area of all those circumscribed around a given circle is also equilateral. It is the only regular polygon aside from the square that can be inscribed inside any other regular polygon. Given a point P in the interior of an equilateral triangle, the ratio of the sum of its distances from the vertices to the sum of its distances from the sides is greater than or equal to 2, equality holding when P is the centroid. In no other triangle is there a point for which this ratio is as small as 2. This is the Erdős–Mordell inequality; a stronger variant of it is Barrow's inequality, which replaces the perpendicular distances to the sides with the distances from P to the points where the angle bisectors of A, B, and C cross the sides (A, B, and C being the vertices). There are numerous other triangle inequalities that hold with equality if and only if the triangle is equilateral. Construction The equilateral triangle can be constructed in different ways by using circles. The construction is the first proposition in Book I of Euclid's Elements. Start by drawing a circle with a certain radius, placing the point of the compass on the circle, and drawing another circle with the same radius; the two circles will intersect in two points. An equilateral triangle can be constructed by taking the two centers of the circles and either of the points of intersection. An alternative way to construct an equilateral triangle is by using Fermat primes. A Fermat prime is a prime number of the form 2^(2^k) + 1, where k denotes a non-negative integer, and there are five known Fermat primes: 3, 5, 17, 257, 65537. A regular polygon is constructible by compass and straightedge if and only if the odd prime factors of its number of sides are distinct Fermat primes. To do so geometrically, draw a straight line and place the point of the compass on one end of the line, then swing an arc from that point to the other point of the line segment; repeat with the other side of the line, then connect the point where the two arcs intersect with each end of the line segment. If three equilateral triangles are constructed on the sides of an arbitrary triangle, either all outward or inward, by Napoleon's theorem the centers of those equilateral triangles themselves form an equilateral triangle.
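The constructibility criterion quoted above (every odd prime factor of the number of sides must be a distinct Fermat prime) can be expressed as a small test. The Python sketch below is illustrative only: it restricts itself to the five known Fermat primes, and the function names are placeholders rather than anything from the article.

```python
KNOWN_FERMAT_PRIMES = {3, 5, 17, 257, 65537}  # 2^(2^k) + 1 for k = 0..4

def prime_factorization(n: int) -> dict:
    """Trial-division factorization, returning {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def regular_polygon_constructible(n: int) -> bool:
    """Gauss-Wantzel test: every odd prime factor of n must be a Fermat prime
    appearing exactly once (factors of two are unrestricted)."""
    if n < 3:
        return False
    return all(p == 2 or (p in KNOWN_FERMAT_PRIMES and e == 1)
               for p, e in prime_factorization(n).items())

print([n for n in range(3, 21) if regular_polygon_constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20] -- the triangle (n = 3) qualifies
# because 3 = 2^(2^0) + 1 is itself a Fermat prime.
```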
Appearances In other related figures Notably, the equilateral triangle tiles the Euclidean plane with six triangles meeting at a vertex; the dual of this tessellation is the hexagonal tiling. Truncated hexagonal tiling, rhombitrihexagonal tiling, trihexagonal tiling, snub square tiling, and snub hexagonal tiling are all semi-regular tessellations constructed with equilateral triangles. Other two-dimensional objects built from equilateral triangles include the Sierpiński triangle (a fractal shape constructed from an equilateral triangle by subdividing recursively into smaller equilateral triangles) and the Reuleaux triangle (a curved triangle with constant width, constructed from an equilateral triangle by rounding each of its sides). Equilateral triangles may also form a polyhedron in three dimensions. A polyhedron whose faces are all equilateral triangles is called a deltahedron. There are eight strictly convex deltahedra: three of the five Platonic solids (regular tetrahedron, regular octahedron, and regular icosahedron) and five of the 92 Johnson solids (triangular bipyramid, pentagonal bipyramid, snub disphenoid, triaugmented triangular prism, and gyroelongated square bipyramid). More generally, all Johnson solids have equilateral triangles among their faces, though most also have other regular polygons. The antiprisms are a family of polyhedra incorporating a band of alternating triangles. When the antiprism is uniform, its bases are regular and all triangular faces are equilateral. As a generalization, the equilateral triangle belongs to the infinite family of n-simplexes, with n = 2. Applications Equilateral triangles have frequently appeared in man-made constructions and in popular culture. In architecture, an example can be seen in the cross-section of the Gateway Arch and the surface of the Vegreville egg. It appears in the flag of Nicaragua and the flag of the Philippines. It is the shape of a variety of road signs, including the yield sign. The equilateral triangle occurs in the study of stereochemistry. It can be described as the molecular geometry in which one atom in the center connects three other atoms in a plane, known as the trigonal planar molecular geometry. In the Thomson problem, concerning the minimum-energy configuration of charged particles on a sphere, and for the Tammes problem of constructing a spherical code maximizing the smallest distance among the points, the best solution known for three points places the points at the vertices of an equilateral triangle, inscribed in the sphere. This configuration is proven optimal for the Tammes problem, but a rigorous solution to this instance of the Thomson problem is unknown. See also Almost-equilateral Heronian triangle Malfatti circles Ternary plot Trilinear coordinates References Notes Works cited External links Types of triangles Constructible polygons
Equilateral triangle
[ "Mathematics" ]
2,152
[ "Constructible polygons", "Planes (geometry)", "Euclidean plane geometry" ]
173,286
https://en.wikipedia.org/wiki/Extropianism
Extropianism, also referred to as the philosophy of extropy, is an "evolving framework of values and standards for continuously improving the human condition". Extropians believe that advances in science and technology will some day let people live indefinitely. An extropian may wish to contribute to this goal, e.g. by doing research and development or by volunteering to test new technology. Originated by a set of principles developed by the philosopher Max More in The Principles of Extropy, extropian thinking places strong emphasis on rational thinking and on practical optimism. According to More, these principles "do not specify particular beliefs, technologies, or policies". Extropians share an optimistic view of the future, expecting considerable advances in computational power, life extension, nanotechnology and the like. Many extropians foresee the eventual realization of indefinite lifespans or immortality, and the recovery, thanks to future advances in biomedical technology or mind uploading, of those whose bodies/brains have been preserved by means of cryonics. Extropy The term extropy, as defined by Max More, is "The extent of a living or organizational system’s intelligence, functional order, vitality, and capacity and drive for improvement". It means the opposite of entropy, metaphorically interpreted as the tendency to degenerate and die out. Extropianism is "the philosophy that seeks to increase extropy". Extropy Institute In 1986, More joined Alcor, a cryonics company, and helped establish (along with Michael Price, Garret Smyth and Luigi Warren) the first European cryonics organization, Mizar Limited (later Alcor UK). In 1987, More moved to Los Angeles from Oxford University in England to work on his Ph.D. in philosophy at the University of Southern California. In 1988, Extropy: The Journal of Transhumanist Thought was first published. (For the first few issues, it was "Extropy: Vaccine for Future Shock".) This brought together thinkers with interests in artificial intelligence, nanotechnology, genetic engineering, life extension, mind uploading, idea futures, robotics, space exploration, memetics, and the politics and economics of transhumanism. Alternative media organizations soon began reviewing the magazine, and it attracted interest from like-minded thinkers. Later, More and Tom Bell co-founded the Extropy Institute (ExI), a non-profit 501(c)(3) educational organization. The institute was formed as a transhumanist networking and information center to use current scientific understanding along with critical and creative thinking to define a small set of principles or values that could help make sense of new capabilities opening up to humanity. In 2006, the board of directors of the Extropy Institute made a decision to close the organisation, stating that its mission was "essentially completed." See also Biopunk movement Cyborg anthropology Democratic transhumanism Eclipse Phase, a tabletop game which uses the philosophy in its futuristic setting. Effective accelerationism Futures studies Holism Omega Point Meliorism Negentropy Posthuman Proactionary Principle Russian Cosmism Sustainability Systems philosophy Systems thinking Transhumanism References External links Kevin Kelly on Extropy - Kevin Kelly at The Technium, August 29, 2009 Transhumanism Philosophy of life Virtue ethics Optimism
Extropianism
[ "Technology", "Engineering", "Biology" ]
696
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
173,305
https://en.wikipedia.org/wiki/Isobutane
Isobutane, also known as i-butane, 2-methylpropane or methylpropane, is a chemical compound with molecular formula HC(CH3)3. It is an isomer of butane. Isobutane is a colorless, odorless gas. It is the simplest alkane with a tertiary carbon atom. Isobutane is used as a precursor molecule in the petrochemical industry, for example in the synthesis of isooctane. Production Isobutane is obtained by isomerization of butane. Uses Isobutane is the principal feedstock in alkylation units of refineries. Using isobutane, gasoline-grade "blendstocks" are generated with high branching for good combustion characteristics. Typical products created with isobutane are 2,4-dimethylpentane and especially 2,2,4-trimethylpentane. Solvent In the Chevron Phillips slurry process for making high-density polyethylene, isobutane is used as a diluent. As the slurried polyethylene is removed, isobutane is "flashed" off, and condensed, and recycled back into the loop reactor for this purpose. Precursor to tert-butyl hydroperoxide Isobutane is oxidized to tert-butyl hydroperoxide, which is subsequently reacted with propylene to yield propylene oxide. The tert-butanol that results as a by-product is typically used to make gasoline additives such as methyl tert-butyl ether (MTBE). Miscellaneous uses Isobutane is also used as a propellant for aerosol spray cans. Isobutane is used as part of blended fuels, especially common in fuel canisters used for camping. Refrigerant Isobutane is used as a refrigerant. Use in refrigerators started in 1993 when Greenpeace presented the Greenfreeze project with the former East German company . In this regard, blends of pure, dry "isobutane" (R-600a) (that is, isobutane mixtures) have negligible ozone depletion potential and very low global warming potential (having a value of 3.3 times the GWP of carbon dioxide) and can serve as a functional replacement for R-12, R-22 (both of these being commonly known by the trademark Freon), R-134a, and other chlorofluorocarbon or hydrofluorocarbon refrigerants in conventional stationary refrigeration and air conditioning systems. As a refrigerant, isobutane poses a fire and explosion risk in addition to the hazards associated with non-flammable CFC refrigerants. Substitution of this refrigerant for motor vehicle air conditioning systems not originally designed for isobutane is widely prohibited or discouraged. Vendors and advocates of hydrocarbon refrigerants argue against such bans on the grounds that there have been very few such incidents relative to the number of vehicle air conditioning systems filled with hydrocarbons. A leak of isobutane in the refrigerant system of a fridge initiated the 2024 Valencia residential complex fire in Spain, that claimed 10 lives. Nomenclature The traditional name isobutane was still retained in the 1993 IUPAC recommendations, but is no longer recommended according to the 2013 recommendations. Since the longest continuous chain in isobutane contains only three carbon atoms, the preferred IUPAC name is 2-methylpropane but the locant (2-) is typically omitted in general nomenclature as redundant; C2 is the only position on a propane chain where a methyl substituent can be located without altering the main chain and forming the constitutional isomer n-butane. References External links International Chemical Safety Card 0901 NIOSH Pocket Guide to Chemical Hazards Alkanes Butane E-number additives Propellants Refrigerants
Isobutane
[ "Chemistry" ]
838
[ "Organic compounds", "Alkanes" ]
173,309
https://en.wikipedia.org/wiki/Liquefaction
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics. It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels. Geology In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018. In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load. Physics and chemistry In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases. Coal Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes. Dissolution Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid. Food preparation In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English. Irradiation Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanosized samples in the column of transmission electron microscope. Biology In biology, liquefaction often involves organic tissue turning into a more liquid-like state. For example, liquefactive necrosis in pathology, or liquefaction as a parameter in semen analysis. See also Cryogenic energy storage Fluidization Liquefaction of gases Liquifaction point Liquefied natural gas Liquefied petroleum gas Liquid air Liquid helium Liquid hydrogen Liquid nitrogen Liquid oxygen Thixotropy References External links Seminal Clot Liquefaction Condensed matter physics Earthquake engineering Food preparation techniques Laboratory techniques Food science
Liquefaction
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
533
[ "Structural engineering", "Phases of matter", "Materials science", "Civil engineering", "Condensed matter physics", "nan", "Earthquake engineering", "Matter" ]
173,316
https://en.wikipedia.org/wiki/Strychnine
Strychnine is a highly toxic, colorless, bitter, crystalline alkaloid used as a pesticide, particularly for killing small vertebrates such as birds and rodents. Strychnine, when inhaled, swallowed, or absorbed through the eyes or mouth, causes poisoning which results in muscular convulsions and eventually death through asphyxia. While it is no longer used medicinally, it was used historically in small doses to strengthen muscle contractions, such as a heart and bowel stimulant and performance-enhancing drug. The most common source is from the seeds of the Strychnos nux-vomica tree. Biosynthesis Strychnine is a terpene indole alkaloid belonging to the Strychnos family of Corynanthe alkaloids, and it is derived from tryptamine and secologanin. The biosynthesis of strychnine was solved in 2022. The enzyme, strictosidine synthase, catalyzes the condensation of tryptamine and secologanin, followed by a Pictet-Spengler reaction to form strictosidine. Many steps have been inferred by isolation of intermediates from Strychnos nux-vomica. The next step is hydrolysis of the acetal, which opens the ring by elimination of glucose (O-Glu) and provides a reactive aldehyde. The nascent aldehyde is then attacked by a secondary amine to afford geissoschizine, a common intermediate of many related compounds in the Strychnos family. A reverse Pictet-Spengler reaction cleaves the C2–C3 bond, while subsequently forming the C3–C7 bond via a 1,2-alkyl migration, an oxidation by a cytochrome P450 enzyme to a spiro-oxindole, nucleophilic attack from the enol at C16, and elimination of oxygen forms the C2–C16 bond to provide dehydropreakuammicine. Hydrolysis of the methyl ester and decarboxylation leads to norfluorocurarine. Stereospecific reduction of the endocyclic double bond by NADPH and hydroxylation provides the Wieland-Gumlich aldehyde, which was first isolated by Heimberger and Scott in 1973, although previously synthesized by Wieland and Gumlich in 1932. To elongate the appendage by two carbons, acetyl-CoA is added to the aldehyde in an aldol reaction to afford prestrychnine. Strychnine is then formed by a facile addition of the amine with the carboxylic acid or its activated CoA thioester, followed by ring-closure via displacement of an activated alcohol. Chemical synthesis As early researchers noted, the strychnine molecular structure, with its specific array of rings, stereocenters, and nitrogen functional groups, is a complex synthetic target, and has stimulated interest both for that reason and for the structure–activity relationships underlying its pharmacologic activities. An early synthetic chemist targeting strychnine, Robert Burns Woodward, quoted the chemist who determined its structure through chemical decomposition and related physical studies as saying that "for its molecular size it is the most complex organic substance known" (attributed to Sir Robert Robinson). The first total synthesis of strychnine was reported by the research group of R. B. Woodward in 1954, and is considered a classic in this field. The Woodward account published in 1954 was very brief (3 pages), but was followed by a 42-page report in 1963. The molecule has since received continuing wide attention in the years since for the challenges to synthetic organic strategy and tactics presented by its complexity; its synthesis has been targeted and its stereocontrolled preparation independently achieved by more than a dozen research groups since the first success.
Mechanism of action Strychnine is a neurotoxin which acts as an antagonist of glycine and acetylcholine receptors. It primarily affects the motor nerve fibers in the spinal cord which control muscle contraction. An impulse is triggered at one end of a nerve cell by the binding of neurotransmitters to the receptors. In the presence of an inhibitory neurotransmitter, such as glycine, a greater quantity of excitatory neurotransmitters must bind to receptors before an action potential is generated. Glycine acts primarily as an agonist of the glycine receptor, which is a ligand-gated chloride channel in neurons located in the spinal cord and in the brain. This chloride channel allows the negatively charged chloride ions into the neuron, causing a hyperpolarization which pushes the membrane potential further from threshold. Strychnine is an antagonist of glycine; it binds noncovalently to the same receptor, preventing the inhibitory effects of glycine on the postsynaptic neuron. Therefore, action potentials are triggered with lower levels of excitatory neurotransmitters. When the inhibitory signals are prevented, the motor neurons are more easily activated and the victim has spastic muscle contractions, resulting in death by asphyxiation. Strychnine binds the Aplysia californica acetylcholine binding protein (a homolog of nicotinic receptors) with high affinity but low specificity, and does so in multiple conformations. Toxicity In high doses, strychnine is very toxic to humans (minimum lethal oral dose in adults is 30–120 mg) and many other animals (oral = 16 mg/kg in rats, 2 mg/kg in mice), and poisoning by inhalation, swallowing, or absorption through eyes or mouth can be fatal. S. nux-vomica seeds are generally effective as a poison only when they are crushed or chewed before swallowing because the pericarp is quite hard and indigestible; poisoning symptoms may therefore not appear if the seeds are ingested whole. Animal toxicity Strychnine poisoning in animals usually occurs from ingestion of baits designed for use against gophers, rats, squirrels, moles, chipmunks and coyotes. Strychnine is also used as a rodenticide, but is not specific to such unwanted pests and may kill other small animals. In the United States, most baits containing strychnine have been replaced with zinc phosphide baits since 1990. In the European Union, rodenticides with strychnine have been forbidden since 2006. Some animals are immune to strychnine; usually these have evolved resistance to poisonous strychnos alkaloids in the fruit they eat, such as fruit bats. The drugstore beetle has a symbiotic gut yeast that allows it to digest pure strychnine. Strychnine toxicity in rats is dependent on sex. It is more toxic to females than to males when administered via subcutaneous injection or intraperitoneal injection. Differences are due to higher rates of metabolism by male rat liver microsomes. Dogs and cats are more susceptible among domestic animals, pigs are believed to be as susceptible as dogs, and horses are able to tolerate relatively large amounts of strychnine. Birds affected by strychnine poisoning exhibit wing droop, salivation, tremors, muscle tenseness, and convulsions. Death occurs as a result of respiratory arrest. The clinical signs of strychnine poisoning relate to its effects on the central nervous system. The first clinical signs of poisoning include nervousness, restlessness, twitching of the muscles, and stiffness of the neck. 
As the poisoning progresses, the muscular twitching becomes more pronounced and convulsions suddenly appear in all the skeletal muscles. The limbs are extended and the neck is curved to opisthotonus. The pupils are widely dilated. As death approaches, the convulsions follow one another with increased rapidity, severity, and duration. Death results from asphyxia due to prolonged paralysis of the respiratory muscles. Following the ingestion of strychnine, symptoms of poisoning usually appear within 15 to 60 minutes. Human toxicity After injection, inhalation, or ingestion, the first symptoms to appear are generalized muscle spasms. They appear very quickly after inhalation or injection – within as few as five minutes – and take somewhat longer to manifest after ingestion, typically approximately 15 minutes. With a very high dose, the onset of respiratory failure and brain death can occur in 15 to 30 minutes. If a lower dose is ingested, other symptoms begin to develop, including seizures, cramping, stiffness, hypervigilance, and agitation. Seizures caused by strychnine poisoning can start as early as 15 minutes after exposure and last 12–24 hours. They are often triggered by sights, sounds, or touch and can cause other adverse symptoms, including hyperthermia, rhabdomyolysis, myoglobinuric kidney failure, metabolic acidosis, and respiratory acidosis. During seizures, mydriasis (abnormal dilation), exophthalmos (protrusion of the eyes), and nystagmus (involuntary eye movements) may occur. As strychnine poisoning progresses, tachycardia (rapid heart beat), hypertension (high blood pressure), tachypnea (rapid breathing), cyanosis (blue discoloration), diaphoresis (sweating), water-electrolyte imbalance, leukocytosis (high number of white blood cells), trismus (lockjaw), risus sardonicus (spasm of the facial muscles), and opisthotonus (dramatic spasm of the back muscles, causing arching of the back and neck) can occur. In rare cases, the affected person may experience nausea or vomiting. The proximate cause of death in strychnine poisoning can be cardiac arrest, respiratory failure, multiple organ failure, or brain damage. For occupational exposures to strychnine, the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set exposure limits at 0.15 mg/m3 over an 8-hour work day. Because strychnine produces some of the most dramatic and painful symptoms of any known toxic reaction, strychnine poisoning is often portrayed in literature and film including authors Agatha Christie and Arthur Conan Doyle. Treatment There is no antidote for strychnine poisoning. Strychnine poisoning demands aggressive management with early control of muscle spasms, intubation for loss of airway control, toxin removal (decontamination), intravenous hydration and potentially active cooling efforts in the context of hyperthermia as well as hemodialysis in kidney failure (strychnine has not been shown to be removed by hemodialysis). Treatment involves oral administration of activated charcoal, which adsorbs strychnine within the digestive tract; unabsorbed strychnine is removed from the stomach by gastric lavage, along with tannic acid or potassium permanganate solutions to oxidize strychnine. Activated charcoal Activated charcoal is a substance that can bind to certain toxins in the digestive tract and prevent their absorption into the bloodstream. The effectiveness of this treatment, as well as how long it is effective after ingestion, are subject to debate. 
According to one source, activated charcoal is only effective within one hour of poison being ingested, although the source does not regard strychnine specifically. Other sources specific to strychnine state that activated charcoal may be used after one hour of ingestion, depending on dose and type of strychnine-containing product. Therefore, other treatment options are generally favoured over activated charcoal. The use of activated charcoal is considered dangerous in patients with tenuous airways or altered mental states. Other treatments Most other treatment options focus on controlling the convulsions that arise from strychnine poisoning. These treatments involve keeping the patient in a quiet and darkened room, anticonvulsants such as phenobarbital or diazepam, muscle relaxants such as dantrolene, barbiturates and propofol, and chloroform or heavy doses of chloral, bromide, urethane or amyl nitrite. If a poisoned person is able to survive for 6 to 12 hours subsequent to initial dose, they have a good prognosis. The sine qua non of strychnine toxicity is the "awake" seizure, in which tonic-clonic activity occurs but the patient is alert and oriented throughout and afterwards. Accordingly, George Harley (1829–1896) showed in 1850 that curare (wourali) was effective for the treatment of tetanus and strychnine poisoning. Pharmacokinetics Absorption Strychnine may be introduced into the body orally, by inhalation, or by injection. It is a potently bitter substance, and in humans has been shown to activate bitter taste receptors TAS2R10 and TAS2R46. Strychnine is rapidly absorbed from the gastrointestinal tract. Distribution Strychnine is transported by plasma and red blood cells. Due to slight protein binding, strychnine leaves the bloodstream quickly and distributes to bodily tissues. Approximately 50% of the ingested dose can enter the tissues in 5 minutes. Also within a few minutes of ingestion, strychnine can be detected in the urine. Little difference was noted between oral and intramuscular administration of strychnine in a 4 mg dose. In persons killed by strychnine, the highest concentrations are found in the blood, liver, kidney and stomach wall. The usual fatal dose is 60–100 mg strychnine and is fatal after a period of 1–2 hours, though lethal doses vary depending on the individual. Metabolism Strychnine is rapidly metabolized by the liver microsomal enzyme system requiring NADPH and O2. Strychnine competes with the inhibitory neurotransmitter glycine resulting in an excitatory state. However, the toxicokinetics after overdose have not been well described. In most severe cases of strychnine poisoning, the patient dies before reaching the hospital. The biological half-life of strychnine is about 10 hours. Excretion A few minutes after ingestion, strychnine is excreted unchanged in the urine, and accounts for about 5 to 15% of a sublethal dose given over 6 hours. Approximately 10 to 20% of the dose will be excreted unchanged in the urine in the first 24 hours. The percentage excreted decreases with the increasing dose. Of the amount excreted by the kidneys, about 70% is excreted in the first 6 hours, and almost 90% in the first 24 hours. Excretion is virtually complete in 48 to 72 hours. History Strychnine was the first alkaloid to be identified in plants of the genus Strychnos, family Loganiaceae. Strychnos, named by Carl Linnaeus in 1753, is a genus of trees and climbing shrubs of the Gentianales order. 
The genus contains 196 species and is distributed throughout the warm regions of Asia (58 species), America (64 species), and Africa (75 species). The seeds and bark of many plants in this genus contain strychnine. The toxic and medicinal effects of Strychnos nux-vomica have been well known from the times of ancient India, although the chemical compound itself was not identified and characterized until the 19th century. The inhabitants of these countries had historical knowledge of the species Strychnos nux-vomica and Saint-Ignatius' bean (Strychnos ignatii). Strychnos nux-vomica is a tree native to the tropical forests on the Malabar Coast in Southern India, Sri Lanka and Indonesia. The tree has a crooked, short, thick trunk and the wood is close grained and very durable. The fruit has an orange color and is about the size of a large apple with a hard rind and contains five seeds, which are covered with a soft wool-like substance. The ripe seeds look like flattened disks, which are very hard. These seeds are the chief commercial source of strychnine and were first imported to and marketed in Europe as a poison to kill rodents and small predators. Strychnos ignatii is a woody climbing shrub of the Philippines. The fruit of the plant, known as Saint Ignatius' bean, contains as many as 25 seeds embedded in the pulp. The seeds contain more strychnine than other commercial alkaloids. The properties of S. nux-vomica and S. ignatii are substantially those of the alkaloid strychnine. Strychnine was first discovered by French chemists Joseph Bienaimé Caventou and Pierre-Joseph Pelletier in 1818 in the Saint-Ignatius' bean. In some Strychnos plants a 9,10-dimethoxy derivative of strychnine, the alkaloid brucine, is also present. Brucine is not as poisonous as strychnine. Historic records indicate that preparations containing strychnine (presumably) had been used to kill dogs, cats, and birds in Europe as far back as 1640. It was allegedly used by convicted murderer William Palmer to kill his final victim, John Cook. It was also used during World War II by the Dirlewanger Brigade against the civilian population. The structure of strychnine was first determined in 1946 by Sir Robert Robinson and in 1954 this alkaloid was synthesized in a laboratory by Robert B. Woodward. This is one of the most famous syntheses in the history of organic chemistry. Both chemists won the Nobel Prize (Robinson in 1947 and Woodward in 1965). Strychnine has been used as a plot device in Agatha Christie's murder mysteries. Other uses Strychnine was popularly used as an athletic performance enhancer and recreational stimulant in the late 19th century and early 20th century, due to its convulsant effects. One notorious instance of its use was during the 1904 Olympics marathon, when track-and-field athlete Thomas Hicks was unwittingly administered a concoction of egg whites and brandy laced with a small amount of strychnine by his assistants in a vain attempt to boost his stamina. Hicks won the race, but was hallucinating by the time he reached the finish line, and soon after collapsed. Maximilian Theodor Buch proposed it as a cure for alcoholism around the same time. It was thought to be similar to coffee, and also has been used and abused recreationally. Its effects are well-described in H. G. Wells' novella The Invisible Man: the title character states "Strychnine is a grand tonic ... to take the flabbiness out of a man." Dr Kemp, an acquaintance, replies: "It's the devil. 
It's the palaeolithic in a bottle." See also Avicide References Avicides Bitter compounds Chloride channel blockers Convulsants Ethers Glycine receptor antagonists Indole alkaloids Lactams Neurotoxins Nitrogen heterocycles Oxygen heterocycles Plant toxins Strychnine poisoning
Strychnine
[ "Chemistry", "Biology" ]
4,084
[ "Chemical ecology", "Biocides", "Indole alkaloids", "Functional groups", "Plant toxins", "Organic compounds", "Alkaloids by chemical classification", "Ethers", "Neurochemistry", "Neurotoxins", "Avicides" ]
173,323
https://en.wikipedia.org/wiki/Chorology
Chorology (from Greek χῶρος, khōros, "place, space"; and -λογία, -logia) can mean either the study of the causal relations between geographical phenomena occurring within a particular region, or the study of the spatial distribution of organisms (biogeography). In geography, the term was first used by Strabo. In the twentieth century, Richard Hartshorne took up the notion again. The term was popularized by Ferdinand von Richthofen. See also Chorography Khôra References Biogeography
Chorology
[ "Biology" ]
106
[ "Biogeography" ]
173,332
https://en.wikipedia.org/wiki/Overfitting
In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In a mathematical sense, these parameters represent the degree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure. Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing. Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions. Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known as shrinkage). In particular, the value of the coefficient of determination will shrink relative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter. Statistical inference In statistics, an inference is drawn from a statistical model, which has been selected via some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony". The authors also state the following. Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from. The book Model Selection and Model Averaging (2008) puts it this way. Regression In regression analysis, overfitting occurs frequently. 
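Before the regression examples that follow, here is a minimal numerical sketch of the memorization behaviour described above: a model with as many parameters as observations reproduces its training data exactly yet predicts new data poorly. The synthetic data and the choice of a degree-9 polynomial (ten coefficients for ten points) are illustrative assumptions, not taken from the text.

import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a simple linear trend y = 2x + noise.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2 * x_train + rng.normal(scale=0.1, size=10)

# Fresh data drawn from the same underlying process.
x_test = rng.uniform(0.0, 1.0, 200)
y_test = 2 * x_test + rng.normal(scale=0.1, size=200)

for degree in (1, 9):  # 2 parameters vs. 10 parameters for 10 observations
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.2e}, test MSE = {test_mse:.2e}")

# The degree-9 fit passes (numerically) through every training point, so its
# training error is essentially zero, while its error on the fresh test data is
# typically far larger than that of the simple degree-1 fit.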
As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point. For logistic regression or Cox proportional hazards models, there are a variety of rules of thumb (e.g. 5–9, 10 and 10–15 — the guideline of 10 observations per independent variable is known as the "one in ten rule"). In the process of regression model selection, the mean squared error of the random regression function can be split into random noise, approximation bias, and variance in the estimate of the regression function. The bias–variance tradeoff is often used to overcome overfit models. With a large set of explanatory variables that actually have no relation to the dependent variable being predicted, some variables will in general be falsely found to be statistically significant and the researcher may thus retain them in the model, thereby overfitting the model. This is known as Freedman's paradox. Machine learning Usually, a learning algorithm is trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training. Overfitting is the use of models or procedures that violate Occam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where training data for can be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function is a priori less probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training data fit to offset the complexity increase, then the new complex function "overfits" the data and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset. When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) with parameters to a regression model with parameters. Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have no causal relation to the target function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse. As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. 
It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data because those past times will never occur again. Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called "robust." Consequences The most obvious consequence of overfitting is poor performance on the validation dataset. Other negative consequences include: A function that is overfitted is likely to request more information about each item in the validation dataset than does the optimal function; gathering this additional unneeded data can be expensive or error-prone, especially if each individual piece of information must be gathered by human observation and manual data entry. A more complex, overfitted function is likely to be less portable than a simple one. At one extreme, a one-variable linear regression is so portable that, if necessary, it could even be done by hand. At the other extreme are models that can be reproduced only by exactly duplicating the original modeler's entire setup, making reuse or scientific reproduction difficult. It may be possible to reconstruct details of individual training instances from an overfitted machine learning model's training set. This may be undesirable if, for example, the training data includes sensitive personally identifiable information (PII). This phenomenon also presents problems in the area of artificial intelligence and copyright, with the developers of some generative deep learning models such as Stable Diffusion and GitHub Copilot being sued for copyright infringement because these models have been found to be capable of reproducing certain copyrighted items from their training data. Remedy The optimal function usually needs verification on bigger or completely new datasets. There are, however, methods like minimum spanning tree or life-time of correlation that applies the dependence between correlation coefficients and time-series (window width). Whenever the window width is big enough, the correlation coefficients are stable and don't depend on the window width size anymore. Therefore, a correlation matrix can be created by calculating a coefficient of correlation between investigated variables. This matrix can be represented topologically as a complex network where direct and indirect influences between variables are visualized. Dropout regularisation (random removal of training set data) can also improve robustness and therefore reduce over-fitting by probabilistically removing inputs to a layer. Underfitting Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. 
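The bias and variance terms used in what follows refer to the standard decomposition of expected prediction error under squared loss, a textbook identity stated here for reference rather than derived in this text. For data generated as y = f(x) plus zero-mean noise of variance sigma squared, and an estimator fitted on a random training set,

    E\!\left[\bigl(y - \hat{f}(x)\bigr)^{2}\right]
    = \underbrace{\bigl(E[\hat{f}(x)] - f(x)\bigr)^{2}}_{\text{bias}^{2}}
    + \underbrace{\operatorname{Var}\bigl(\hat{f}(x)\bigr)}_{\text{variance}}
    + \underbrace{\sigma^{2}}_{\text{irreducible noise}}

Overfitted models sit at the low-bias, high-variance end of this trade-off; underfitted models sit at the high-bias, low-variance end.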
A sign of underfitting is that there is a high bias and low variance detected in the current model or algorithm used (the inverse of overfitting: low bias and high variance). This can be gathered from the Bias-variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error, and irreducible error. With a high bias and low variance, the result of the model is that it will inaccurately represent the data points and thus insufficiently be able to predict future data results (see Generalization error). As shown in Figure 5, the linear line could not represent all the given data points due to the line not resembling the curvature of the points. We would expect to see a parabola-shaped line as shown in Figure 6 and Figure 1. If we were to use Figure 5 for analysis, we would get false predictive results contrary to the results if we analyzed Figure 6. Burnham & Anderson state the following. Resolving underfitting There are multiple ways to deal with underfitting: Increase the complexity of the model: If the model is too simple, it may be necessary to increase its complexity by adding more features, increasing the number of parameters, or using a more flexible model. However, this should be done carefully to avoid overfitting. Use a different algorithm: If the current algorithm is not able to capture the patterns in the data, it may be necessary to try a different one. For example, a neural network may be more effective than a linear regression model for some types of data. Increase the amount of training data: If the model is underfitting due to a lack of data, increasing the amount of training data may help. This will allow the model to better capture the underlying patterns in the data. Regularization: Regularization is a technique used to prevent overfitting by adding a penalty term to the loss function that discourages large parameter values. It can also be used to prevent underfitting by controlling the complexity of the model. Ensemble Methods: Ensemble methods combine multiple models to create a more accurate prediction. This can help reduce underfitting by allowing multiple models to work together to capture the underlying patterns in the data. Feature engineering: Feature engineering involves creating new model features from the existing ones that may be more relevant to the problem at hand. This can help improve the accuracy of the model and prevent underfitting. Benign overfitting Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest in deep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such as linear regression. In particular, it has been shown that overparameterization is essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size. See also Bias–variance tradeoff Curve fitting Data dredging Feature selection Feature engineering Freedman's paradox Generalization error Goodness of fit Life-time of correlation Model selection Researcher degrees of freedom Occam's razor Primary model Vapnik–Chervonenkis dimension – larger VC dimension implies larger risk of overfitting Notes References Tip 7: Minimize overfitting. 
Further reading External links The Problem of Overfitting Data – Stony Brook University What is "overfitting," exactly? – Andrew Gelman blog CSE546: Linear Regression Bias / Variance Tradeoff – University of Washington What is Underfitting – IBM Curve fitting Applied mathematics Mathematical modeling Statistical inference Machine learning
Overfitting
[ "Mathematics", "Engineering" ]
2,791
[ "Artificial intelligence engineering", "Applied mathematics", "Mathematical modeling", "Machine learning" ]
173,354
https://en.wikipedia.org/wiki/Automation
Automation describes a wide range of technologies that reduce human intervention in processes, mainly by predetermining decision criteria, subprocess relationships, and related actions, as well as embodying those predeterminations in machines. Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices, and computers, usually in combination. Complicated systems, such as modern factories, airplanes, and ships typically use combinations of all of these techniques. The benefit of automation includes labor savings, reducing waste, savings in electricity costs, savings in material costs, and improvements to quality, accuracy, and precision. Automation includes the use of various equipment and control systems such as machinery, processes in factories, boilers, and heat-treating ovens, switching on telephone networks, steering, stabilization of ships, aircraft and other applications and vehicles with reduced human intervention. Examples range from a household thermostat controlling a boiler to a large industrial control system with tens of thousands of input measurements and output control signals. Automation has also found a home in the banking industry. It can range from simple on-off control to multi-variable high-level algorithms in terms of control complexity. In the simplest type of an automatic control loop, a controller compares a measured value of a process with a desired set value and processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances. This closed-loop control is an application of negative feedback to a system. The mathematical basis of control theory was begun in the 18th century and advanced rapidly in the 20th. The term automation, inspired by the earlier word automatic (coming from automaton), was not widely used before 1947, when Ford established an automation department. It was during this time that the industry was rapidly adopting feedback controllers, which were introduced in the 1930s. The World Bank's World Development Report of 2019 shows evidence that the new industries and jobs in the technology sector outweigh the economic effects of workers being displaced by automation. Job losses and downward mobility blamed on automation have been cited as one of many factors in the resurgence of nationalist, protectionist and populist politics in the US, UK and France, among other countries since the 2010s. History Early history It was a preoccupation of the Greeks and Arabs (in the period between about 300 BC and about 1200 AD) to keep accurate track of time. In Ptolemaic Egypt, about 270 BC, Ctesibius described a float regulator for a water clock, a device not unlike the ball and cock in a modern flush toilet. This was the earliest feedback-controlled mechanism. The appearance of the mechanical clock in the 14th century made the water clock and its feedback control system obsolete. The Persian Banū Mūsā brothers, in their Book of Ingenious Devices (850 AD), described a number of automatic controls. Two-step level controls for fluids, a form of discontinuous variable structure controls, were developed by the Banu Musa brothers. They also described a feedback controller. The design of feedback control systems up through the Industrial Revolution was by trial-and-error, together with a great deal of engineering intuition. 
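To make the closed-loop idea from the introduction concrete (a controller compares a measured value against a set value and acts on the error), here is a minimal on-off, thermostat-style sketch in Python. The heater strength, heat-loss rate and time step are invented purely for illustration.

# Minimal negative-feedback loop: on-off (thermostat) control of temperature.
# All physical constants are invented for illustration.
setpoint = 20.0   # desired temperature, deg C
temp = 15.0       # measured process variable

for step in range(60):                     # one simulated minute per step
    error = setpoint - temp                # compare measurement with set value
    heater_on = error > 0.0                # simple on-off decision
    heating = 1.5 if heater_on else 0.0    # heat added by the heater per step
    losses = 0.1 * (temp - 10.0)           # heat lost to 10 deg C surroundings
    temp += heating - losses               # the process responds

print(f"temperature after 60 steps: {temp:.1f} deg C")

# The loop drives the temperature up to the set point and then cycles in a
# narrow band around it, counteracting the constant heat loss (the
# disturbance): negative feedback in its simplest form.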
It was not until the mid-19th century that the stability of feedback control systems was analyzed using mathematics, the formal language of automatic control theory. The centrifugal governor was invented by Christiaan Huygens in the seventeenth century, and used to adjust the gap between millstones. Industrial Revolution in Western Europe The introduction of prime movers, or self-driven machines advanced grain mills, furnaces, boilers, and the steam engine created a new requirement for automatic control systems including temperature regulators (invented in 1624; see Cornelius Drebbel), pressure regulators (1681), float regulators (1700) and speed control devices. Another control mechanism was used to tent the sails of windmills. It was patented by Edmund Lee in 1745. Also in 1745, Jacques de Vaucanson invented the first automated loom. Around 1800, Joseph Marie Jacquard created a punch-card system to program looms. In 1771 Richard Arkwright invented the first fully automated spinning mill driven by water power, known at the time as the water frame. An automatic flour mill was developed by Oliver Evans in 1785, making it the first completely automated industrial process. A centrifugal governor was used by Mr. Bunce of England in 1784 as part of a model steam crane. The centrifugal governor was adopted by James Watt for use on a steam engine in 1788 after Watt's partner Boulton saw one at a flour mill Boulton & Watt were building. The governor could not actually hold a set speed; the engine would assume a new constant speed in response to load changes. The governor was able to handle smaller variations such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped with this governor were not suitable for operations requiring constant speed, such as cotton spinning. Several improvements to the governor, plus improvements to valve cut-off timing on the steam engine, made the engine suitable for most industrial uses before the end of the 19th century. Advances in the steam engine stayed well ahead of science, both thermodynamics and control theory. The governor received relatively little scientific attention until James Clerk Maxwell published a paper that established the beginning of a theoretical basis for understanding control theory. 20th century Relay logic was introduced with factory electrification, which underwent rapid adaption from 1900 through the 1920s. Central electric power stations were also undergoing rapid growth and the operation of new high-pressure boilers, steam turbines and electrical substations created a large demand for instruments and controls. Central control rooms became common in the 1920s, but as late as the early 1930s, most process controls were on-off. Operators typically monitored charts drawn by recorders that plotted data from instruments. To make corrections, operators manually opened or closed valves or turned switches on or off. Control rooms also used color-coded lights to send signals to workers in the plant to manually make certain changes. The development of the electronic amplifier during the 1920s, which was important for long-distance telephony, required a higher signal-to-noise ratio, which was solved by negative feedback noise cancellation. This and other telephony applications contributed to the control theory. 
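The remark above that the governor "could not actually hold a set speed" is, in modern terms, the steady-state offset (droop) of a purely proportional controller; the following one-line argument is a standard control-theory explanation, not something stated in the text. If the steam-valve opening is proportional to the speed error, and holding the engine at constant speed against a load requires some nonzero opening u_L, then at equilibrium the error cannot be zero:

    \omega_{\text{set}} - \omega = \frac{u_L}{K_p} \neq 0

The engine therefore settles at a speed offset from the set point by an amount that grows with the load and shrinks only as the gain K_p is raised; integral action, as in the PID controller discussed later in the article, is what removes this offset.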
In the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic controls, which found military applications during the Second World War to fire control systems and aircraft navigation systems. Controllers, which were able to make calculated changes in response to deviations from a set point rather than on-off control, began being introduced in the 1930s. Controllers allowed manufacturing to continue showing productivity gains to offset the declining influence of factory electrification. Factory productivity was greatly increased by electrification in the 1920s. U.S. manufacturing productivity growth fell from 5.2%/yr 1919–29 to 2.76%/yr 1929–41. Alexander Field notes that spending on non-medical instruments increased significantly from 1929 to 1933 and remained strong thereafter. The First and Second World Wars saw major advancements in the field of mass communication and signal processing. Other key advances in automatic controls include differential equations, stability theory and system theory (1938), frequency domain analysis (1940), ship control (1950), and stochastic analysis (1941). Starting in 1958, various systems based on solid-state digital logic modules for hard-wired programmed logic controllers (the predecessors of programmable logic controllers [PLC]) emerged to replace electro-mechanical relay logic in industrial control systems for process control and automation, including early Telefunken/AEG Logistat, Siemens Simatic, Philips/Mullard/ Norbit, BBC Sigmatronic, ACEC Logacec, Estacord, Krone Mibakron, Bistat, Datapac, Norlog, SSR, or Procontic systems. In 1959 Texaco's Port Arthur Refinery became the first chemical plant to use digital control. Conversion of factories to digital control began to spread rapidly in the 1970s as the price of computer hardware fell. Significant applications The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor. The logic performed by telephone switching relays was the inspiration for the digital computer. The first commercially successful glass bottle-blowing machine was an automatic model introduced in 1905. The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross compared to $1.80 per gross by the manual glassblowers and helpers. Sectional electric drives were developed using control theory. Sectional electric drives are used on different sections of a machine where a precise differential must be maintained between the sections. In steel rolling, the metal elongates as it passes through pairs of rollers, which must run at successively faster speeds. In paper making paper, the sheet shrinks as it passes around steam-heated drying arranged in groups, which must run at successively slower speeds. The first application of a sectional electric drive was on a paper machine in 1919. 
One of the most important developments in the steel industry during the 20th century was continuous wide strip rolling, developed by Armco in 1928. Before automation, many chemicals were made in batches. In 1930, with the widespread use of instruments and the emerging use of controllers, the founder of Dow Chemical Co. was advocating continuous production. Self-acting machine tools that displaced hand dexterity so they could be operated by boys and unskilled laborers were developed by James Nasmyth in the 1840s. Machine tools were automated with Numerical control (NC) using punched paper tape in the 1950s. This soon evolved into computerized numerical control (CNC). Today extensive automation is practiced in practically every type of manufacturing and assembly process. Some of the larger processes include electrical power generation, oil refining, chemicals, steel mills, plastics, cement plants, fertilizer plants, pulp and paper mills, automobile and truck assembly, aircraft production, glass manufacturing, natural gas separation plants, food and beverage processing, canning and bottling and manufacture of various kinds of parts. Robots are especially useful in hazardous applications like automobile spray painting. Robots are also used to assemble electronic circuit boards. Automotive welding is done with robots and automatic welders are used in applications like pipelines. Space/computer age With the advent of the space age in 1957, controls design, particularly in the United States, turned away from the frequency-domain techniques of classical control theory and backed into the differential equation techniques of the late 19th century, which were couched in the time domain. During the 1940s and 1950s, German mathematician Irmgard Flugge-Lotz developed the theory of discontinuous automatic control, which became widely used in hysteresis control systems such as navigation systems, fire-control systems, and electronics. Through Flugge-Lotz and others, the modern era saw time-domain design for nonlinear systems (1961), navigation (1960), optimal control and estimation theory (1962), nonlinear control theory (1969), digital control and filtering theory (1974), and the personal computer (1983). Advantages, disadvantages, and limitations Perhaps the most cited advantage of automation in industry is that it is associated with faster production and cheaper labor costs. Another benefit could be that it replaces hard, physical, or monotonous work. Additionally, tasks that take place in hazardous environments or that are otherwise beyond human capabilities can be done by machines, as machines can operate even under extreme temperatures or in atmospheres that are radioactive or toxic. They can also be maintained with simple quality checks. However, at the time being, not all tasks can be automated, and some tasks are more expensive to automate than others. Initial costs of installing the machinery in factory settings are high, and failure to maintain a system could result in the loss of the product itself. Moreover, some studies seem to indicate that industrial automation could impose ill effects beyond operational concerns, including worker displacement due to systemic loss of employment and compounded environmental damage; however, these findings are both convoluted and controversial in nature, and could potentially be circumvented. 
The main advantages of automation are: Increased throughput or productivity Improved quality Increased predictability Improved robustness (consistency), of processes or product Increased consistency of output Reduced direct human labor costs and expenses Reduced cycle time Increased accuracy Relieving humans of monotonously repetitive work Required work in development, deployment, maintenance, and operation of automated processes — often structured as "jobs" Increased human freedom to do other things Automation primarily describes machines replacing human action, but it is also loosely associated with mechanization, machines replacing human labor. Coupled with mechanization, extending human capabilities in terms of size, strength, speed, endurance, visual range & acuity, hearing frequency & precision, electromagnetic sensing & effecting, etc., advantages include: Relieving humans of dangerous work stresses and occupational injuries (e.g., fewer strained backs from lifting heavy objects) Removing humans from dangerous environments (e.g. fire, space, volcanoes, nuclear facilities, underwater, etc.) The main disadvantages of automation are: High initial cost Faster production without human intervention can mean faster unchecked production of defects where automated processes are defective. Scaled-up capacities can mean scaled-up problems when systems fail — releasing dangerous toxins, forces, energies, etc., at scaled-up rates. Human adaptiveness is often poorly understood by automation initiators. It is often difficult to anticipate every contingency and develop fully preplanned automated responses for every situation. The discoveries inherent in automating processes can require unanticipated iterations to resolve, causing unanticipated costs and delays. People anticipating employment income may be seriously disrupted by others deploying automation where no similar income is readily available. Paradox of automation The paradox of automation says that the more efficient the automated system, the more crucial the human contribution of the operators. Humans are less involved, but their involvement becomes more critical. Lisanne Bainbridge, a cognitive psychologist, identified these issues notably in her widely cited paper "Ironies of Automation." If an automated system has an error, it will multiply that error until it is fixed or shut down. This is where human operators come in. A fatal example of this was Air France Flight 447, where a failure of automation put the pilots into a manual situation they were not prepared for. Limitations Current technology is unable to automate all the desired tasks. Many operations using automation have large amounts of invested capital and produce high volumes of products, making malfunctions extremely costly and potentially hazardous. Therefore, some personnel is needed to ensure that the entire system functions properly and that safety and product quality are maintained. As a process becomes increasingly automated, there is less and less labor to be saved or quality improvement to be gained. This is an example of both diminishing returns and the logistic function. As more and more processes become automated, there are fewer remaining non-automated processes. This is an example of the exhaustion of opportunities. New technological paradigms may, however, set new limits that surpass the previous limits. Current limitations Many roles for humans in industrial processes presently lie beyond the scope of automation. 
Human-level pattern recognition, language comprehension, and language production ability are well beyond the capabilities of modern mechanical and computer systems (but see Watson computer). Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is more cost-effective than mechanical approaches even where the automation of industrial tasks is possible. Therefore, algorithmic management as the digital rationalization of human labor instead of its substitution has emerged as an alternative technological strategy. Overcoming these obstacles is a theorized path to post-scarcity economics. Societal impact and unemployment Increased automation often causes workers to feel anxious about losing their jobs as technology renders their skills or experience unnecessary. Early in the Industrial Revolution, when inventions like the steam engine were making some job categories expendable, workers forcefully resisted these changes. Luddites, for instance, were English textile workers who protested the introduction of weaving machines by destroying them. More recently, some residents of Chandler, Arizona, have slashed tires and pelted rocks at self-driving cars, in protest over the cars' perceived threat to human safety and job prospects. The relative anxiety about automation reflected in opinion polls seems to correlate closely with the strength of organized labor in that region or nation. For example, while a study by the Pew Research Center indicated that 72% of Americans are worried about increasing automation in the workplace, 80% of Swedes see automation and artificial intelligence (AI) as a good thing, due to the country's still-powerful unions and a more robust national safety net. According to one estimate, 47% of all current jobs in the US have the potential to be fully automated by 2033. Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation's risk of being automated. Erik Brynjolfsson and Andrew McAfee argue that "there's never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there's never been a worse time to be a worker with only 'ordinary' skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate." Others, however, argue that highly skilled professional jobs such as lawyer, doctor, engineer, and journalist are also at risk of automation. According to a 2020 study in the Journal of Political Economy, automation has robust negative effects on employment and wages: "One more robot per thousand workers reduces the employment-to-population ratio by 0.2 percentage points and wages by 0.42%." A 2025 study in the American Economic Journal found that the introduction of industrial robots between 1993 and 2014 reduced the employment of men and women by 3.7 and 1.6 percentage points, respectively. Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School argued that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement, and 47% of jobs in the US were at risk. 
The study, released as a working paper in 2013 and published in 2017, predicted that automation would put low-paid physical occupations most at risk, by surveying a group of colleagues on their opinions. However, according to a study published in McKinsey Quarterly in 2015 the impact of computerization in most cases is not the replacement of employees but the automation of portions of the tasks they perform. The methodology of the McKinsey study has been heavily criticized for being intransparent and relying on subjective assessments. The methodology of Frey and Osborne has been subjected to criticism, as lacking evidence, historical awareness, or credible methodology. Additionally, the Organisation for Economic Co-operation and Development (OECD) found that across the 21 OECD countries, 9% of jobs are automatable. Based on a formula by Gilles Saint-Paul, an economist at Toulouse 1 University, the demand for unskilled human capital declines at a slower rate than the demand for skilled human capital increases. In the long run and for society as a whole it has led to cheaper products, lower average work hours, and new industries forming (i.e., robotics industries, computer industries, design industries). These new industries provide many high salary skill-based jobs to the economy. By 2030, between 3 and 14 percent of the global workforce will be forced to switch job categories due to automation eliminating jobs in an entire sector. While the number of jobs lost to automation is often offset by jobs gained from technological advances, the same type of job loss is not the same one replaced and that leading to increasing unemployment in the lower-middle class. This occurs largely in the US and developed countries where technological advances contribute to higher demand for highly skilled labor but demand for middle-wage labor continues to fall. Economists call this trend "income polarization" where unskilled labor wages are driven down and skilled labor is driven up and it is predicted to continue in developed economies. Lights-out manufacturing Lights-out manufacturing is a production system with no human workers, to eliminate labor costs. It grew in popularity in the U.S. when General Motors in 1982 implemented humans "hands-off" manufacturing to "replace risk-averse bureaucracy with automation and robots". However, the factory never reached full "lights out" status. The expansion of lights out manufacturing requires: Reliability of equipment Long-term mechanic capabilities Planned preventive maintenance Commitment from the staff Health and environment The costs of automation to the environment are different depending on the technology, product or engine automated. There are automated engines that consume more energy resources from the Earth in comparison with previous engines and vice versa. Hazardous operations, such as oil refining, the manufacturing of industrial chemicals, and all forms of metal working, were always early contenders for automation. The automation of vehicles could prove to have a substantial impact on the environment, although the nature of this impact could be beneficial or harmful depending on several factors. Because automated vehicles are much less likely to get into accidents compared to human-driven vehicles, some precautions built into current models (such as anti-lock brakes or laminated glass) would not be required for self-driving versions. 
Removal of these safety features reduces the weight of the vehicle, and coupled with more precise acceleration and braking, as well as fuel-efficient route mapping, can increase fuel economy and reduce emissions. Despite this, some researchers theorize that an increase in the production of self-driving cars could lead to a boom in vehicle ownership and usage, which could potentially negate any environmental benefits of self-driving cars if they are used more frequently. Automation of homes and home appliances is also thought to impact the environment. A study of energy consumption of automated homes in Finland showed that smart homes could reduce energy consumption by monitoring levels of consumption in different areas of the home and adjusting consumption to reduce energy leaks (e.g. automatically reducing consumption during the nighttime when activity is low). This study, along with others, indicated that the smart home's ability to monitor and adjust consumption levels would reduce unnecessary energy usage. However, some research suggests that smart homes might not be as efficient as non-automated homes. A more recent study has indicated that, while monitoring and adjusting consumption levels do decrease unnecessary energy use, this process requires monitoring systems that also consume an amount of energy. The energy required to run these systems sometimes negates their benefits, resulting in little to no ecological benefit. Convertibility and turnaround time Another major shift in automation is the increased demand for flexibility and convertibility in manufacturing processes. Manufacturers are increasingly demanding the ability to easily switch from manufacturing Product A to manufacturing Product B without having to completely rebuild the production lines. Flexibility and distributed processes have led to the introduction of Automated Guided Vehicles with Natural Features Navigation. Digital electronics helped too. Former analog-based instrumentation was replaced by digital equivalents which can be more accurate and flexible, and offer greater scope for more sophisticated configuration, parametrization, and operation. This was accompanied by the fieldbus revolution which provided a networked (i.e. a single cable) means of communicating between control systems and field-level instrumentation, eliminating hard-wiring. Discrete manufacturing plants adopted these technologies fast. The more conservative process industries with their longer plant life cycles have been slower to adopt and analog-based measurement and control still dominate. The growing use of Industrial Ethernet on the factory floor is pushing these trends still further, enabling manufacturing plants to be integrated more tightly within the enterprise, via the internet if necessary. Global competition has also increased demand for Reconfigurable Manufacturing Systems. Automation tools Engineers can now have numerical control over automated devices. The result has been a rapidly expanding range of applications and human activities. Computer-aided technologies (or CAx) now serve as the basis for mathematical and organizational tools used to create complex systems. Notable examples of CAx include computer-aided design (CAD software) and computer-aided manufacturing (CAM software). The improved design, analysis, and manufacture of products enabled by CAx has been beneficial for industry. 
Information technology, together with industrial machinery and processes, can assist in the design, implementation, and monitoring of control systems. One example of an industrial control system is a programmable logic controller (PLC). PLCs are specialized hardened computers which are frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events. Human-machine interfaces (HMI) or computer human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers. Service personnel who monitor and control through HMIs can be called by different names. In the industrial process and manufacturing environments, they are called operators or something similar. In boiler houses and central utility departments, they are called stationary engineers. Different types of automation tools exist: ANN – Artificial neural network DCS – Distributed control system HMI – Human machine interface RPA – Robotic process automation SCADA – Supervisory control and data acquisition PLC – Programmable logic controller Instrumentation Motion control Robotics Host simulation software (HSS) is a commonly used testing tool that is used to test the equipment software. HSS is used to test equipment performance concerning factory automation standards (timeouts, response time, processing time). Cognitive automation Cognitive automation, as a subset of AI, is an emerging genus of automation enabled by cognitive computing. Its primary concern is the automation of clerical tasks and workflows that consist of structuring unstructured data. Cognitive automation relies on multiple disciplines: natural language processing, real-time computing, machine learning algorithms, big data analytics, and evidence-based learning. According to Deloitte, cognitive automation enables the replication of human tasks and judgment "at rapid speeds and considerable scale." Such tasks include: Document redaction Data extraction and document synthesis / reporting Contract management Natural language search Customer, employee, and stakeholder onboarding Manual activities and verifications Follow-up and email communications Recent and emerging applications CAD AI Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D to automate in 3D modeling. AI CAD libraries could also be developed using linked open data of schematics and diagrams. Ai CAD assistants are used as tools to help streamline workflow. Automated power production Technologies like solar panels, wind turbines, and other renewable energy sources—together with smart grids, micro-grids, battery storage—can automate power production. Agricultural production Many agricultural operations are automated with machinery and equipment to improve their diagnosis, decision-making and/or performing. Agricultural automation can relieve the drudgery of agricultural work, improve the timeliness and precision of agricultural operations, raise productivity and resource-use efficiency, build resilience, and improve food quality and safety. Increased productivity can free up labour, allowing agricultural households to spend more time elsewhere. The technological evolution in agriculture has resulted in progressive shifts to digital equipment and robotics. Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking. 
With digital automation technologies, it also becomes possible to automate diagnosis and decision-making of agricultural operations. For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application. Precision agriculture often employs such automation technologies Motorized mechanization has generally increased in recent years. Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades. Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years, but adoption is likely mostly in Northern Europe, and likely almost absent in low- and middle-income countries. Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers is likewise scarce. Retail Many supermarkets and even smaller stores are rapidly introducing self-checkout systems reducing the need for employing checkout workers. In the U.S., the retail industry employs 15.9 million people as of 2017 (around 1 in 9 Americans in the workforce). Globally, an estimated 192 million workers could be affected by automation according to research by Eurasia Group. Online shopping could be considered a form of automated retail as the payment and checkout are through an automated online transaction processing system, with the share of online retail accounting jumping from 5.1% in 2011 to 8.3% in 2016. However, two-thirds of books, music, and films are now purchased online. In addition, automation and online shopping could reduce demands for shopping malls, and retail property, which in the United States is currently estimated to account for 31% of all commercial property or around . Amazon has gained much of the growth in recent years for online shopping, accounting for half of the growth in online retail in 2016. Other forms of automation can also be an integral part of online shopping, for example, the deployment of automated warehouse robotics such as that applied by Amazon using Kiva Systems. Food and drink The food retail industry has started to apply automation to the ordering process; McDonald's has introduced touch screen ordering and payment systems in many of its restaurants, reducing the need for as many cashier employees. The University of Texas at Austin has introduced fully automated cafe retail locations. Some cafes and restaurants have utilized mobile and tablet "apps" to make the ordering process more efficient by customers ordering and paying on their device. Some restaurants have automated food delivery to tables of customers using a conveyor belt system. The use of robots is sometimes employed to replace waiting staff. Construction Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to, reducing jobsite injuries, decreasing activity completion times, and assisting with quality control and quality assurance. Mining Automated mining involves the removal of human labor from the mining process. The mining industry is currently in the transition towards automation. 
Currently, it can still require a large amount of human capital, particularly in the third world where labor costs are low so there is less incentive for increasing efficiency through automation. Video surveillance The Defense Advanced Research Projects Agency (DARPA) started the research and development of automated visual surveillance and monitoring (VSAM) program, between 1997 and 1999, and airborne video surveillance (AVS) programs, from 1998 to 2002. Currently, there is a major effort underway in the vision community to develop a fully-automated tracking surveillance system. Automated video surveillance monitors people and vehicles in real-time within a busy environment. Existing automated surveillance systems are based on the environment they are primarily designed to observe, i.e., indoor, outdoor or airborne, the number of sensors that the automated system can handle and the mobility of sensors, i.e., stationary camera vs. mobile camera. The purpose of a surveillance system is to record properties and trajectories of objects in a given area, generate warnings or notify the designated authorities in case of occurrence of particular events. Highway systems As demands for safety and mobility have grown and technological possibilities have multiplied, interest in automation has grown. Seeking to accelerate the development and introduction of fully automated vehicles and highways, the U.S. Congress authorized more than $650 million over six years for intelligent transport systems (ITS) and demonstration projects in the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). Congress legislated in ISTEA that:[T]he Secretary of Transportation shall develop an automated highway and vehicle prototype from which future fully automated intelligent vehicle-highway systems can be developed. Such development shall include research in human factors to ensure the success of the man-machine relationship. The goal of this program is to have the first fully automated highway roadway or an automated test track in operation by 1997. This system shall accommodate the installation of equipment in new and existing motor vehicles.Full automation commonly defined as requiring no control or very limited control by the driver; such automation would be accomplished through a combination of sensor, computer, and communications systems in vehicles and along the roadway. Fully automated driving would, in theory, allow closer vehicle spacing and higher speeds, which could enhance traffic capacity in places where additional road building is physically impossible, politically unacceptable, or prohibitively expensive. Automated controls also might enhance road safety by reducing the opportunity for driver error, which causes a large share of motor vehicle crashes. Other potential benefits include improved air quality (as a result of more-efficient traffic flows), increased fuel economy, and spin-off technologies generated during research and development related to automated highway systems. Waste management Automated waste collection trucks prevent the need for as many workers as well as easing the level of labor required to provide the service. Business process Business process automation (BPA) is the technology-enabled automation of complex business processes. It can help to streamline a business for simplicity, achieve digital transformation, increase service quality, improve service delivery or contain costs. 
BPA consists of integrating applications, restructuring labor resources and using software applications throughout the organization. Robotic process automation (RPA; or RPAAI for self-guided RPA 2.0) is an emerging field within BPA and uses AI. BPAs can be implemented in a number of business areas including marketing, sales and workflow. Home Home automation (also called domotics) designates an emerging practice of increased automation of household appliances and features in residential dwellings, particularly through electronic means that allow for things impracticable, overly expensive or simply not possible in recent past decades. The rise in the usage of home automation solutions has taken a turn reflecting the increased dependency of people on such automation solutions. However, the increased comfort that gets added through these automation solutions is remarkable. Laboratory Automation is essential for many scientific and clinical applications. Therefore, automation has been extensively employed in laboratories. From as early as 1980 fully automated laboratories have already been working. However, automation has not become widespread in laboratories due to its high cost. This may change with the ability of integrating low-cost devices with standard laboratory equipment. Autosamplers are common devices used in laboratory automation. Logistics automation Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems. Industrial automation Industrial automation deals primarily with the automation of manufacturing, quality control, and material handling processes. General-purpose controllers for industrial processes include programmable logic controllers, stand-alone I/O modules, and computers. Industrial automation is to replace the human action and manual command-response activities with the use of mechanized equipment and logical programming commands. One trend is increased use of machine vision to provide automatic inspection and robot guidance functions, another is a continuing increase in the use of robots. Industrial automation is simply required in industries. Industrial Automation and Industry 4.0 The rise of industrial automation is directly tied to the "Fourth Industrial Revolution", which is better known now as Industry 4.0. Originating from Germany, Industry 4.0 encompasses numerous devices, concepts, and machines, as well as the advancement of the industrial internet of things (IIoT). An "Internet of Things is a seamless integration of diverse physical objects in the Internet through a virtual representation." These new revolutionary advancements have drawn attention to the world of automation in an entirely new light and shown ways for it to grow to increase productivity and efficiency in machinery and manufacturing facilities. Industry 4.0 works with the IIoT and software/hardware to connect in a way that (through communication technologies) add enhancements and improve manufacturing processes. Being able to create smarter, safer, and more advanced manufacturing is now possible with these new technologies. It opens up a manufacturing platform that is more reliable, consistent, and efficient than before. Implementation of systems such as SCADA is an example of software that takes place in Industrial Automation today. 
SCADA (supervisory control and data acquisition) software is just one of the many types used in industrial automation. Industry 4.0 covers many areas of manufacturing and will continue to do so as time goes on.

Industrial robotics
Industrial robotics is a sub-branch of industrial automation that aids in various manufacturing processes. Such manufacturing processes include machining, welding, painting, assembling and material handling, to name a few. Industrial robots use mechanical, electrical and software systems to achieve precision, accuracy and speed that far exceed any human performance. The birth of industrial robots came shortly after World War II, as the U.S. saw the need for a quicker way to produce industrial and consumer goods. Servos, digital logic and solid-state electronics allowed engineers to build better and faster systems, and over time these systems were improved and revised to the point where a single robot is capable of running 24 hours a day with little or no maintenance. In 1997, there were 700,000 industrial robots in use; by 2017 the number had risen to 1.8 million. In recent years, AI has also been combined with robotics to create automatic labeling solutions, using robotic arms as the automatic label applicators and AI to learn and detect the products to be labelled.

Programmable Logic Controllers
Industrial automation incorporates programmable logic controllers in the manufacturing process. Programmable logic controllers (PLCs) use a processing system which allows the control of inputs and outputs to be varied using simple programming. PLCs make use of programmable memory, storing instructions and functions like logic, sequencing, timing, counting, etc. Using a logic-based language, a PLC can receive a variety of inputs and return a variety of logical outputs, the input devices being sensors and the output devices being motors, valves, etc. PLCs are similar to computers; however, while computers are optimized for calculations, PLCs are optimized for control tasks and for use in industrial environments. They are built so that only basic logic-based programming knowledge is needed and so that they can handle vibrations, high temperatures, humidity, and noise. The greatest advantage PLCs offer is their flexibility. With the same basic controllers, a PLC can operate a range of different control systems. PLCs make it unnecessary to rewire a system to change the control system. This flexibility leads to a cost-effective system for complex and varied control systems. PLCs can range from small "building brick" devices with tens of I/O in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs (I/O), extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory. It was from the automotive industry in the United States that the PLC was born. Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers.
Since these could number in the hundreds or even thousands, the process for updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire the relays to change their operational characteristics. When digital computers became available, being general-purpose programmable devices, they were soon applied to control sequential and combinatorial logic in industrial processes. However, these early computers required specialist programmers and stringent operating environmental control for temperature, cleanliness, and power quality. To meet these challenges, the PLC was developed with several key attributes. It would tolerate the shop-floor environment, it would support discrete (bit-form) input and output in an easily extensible manner, it would not require years of training to use, and it would permit its operation to be monitored. Since many industrial processes have timescales easily addressed by millisecond response times, modern (fast, small, reliable) electronics greatly facilitate building reliable controllers, and performance could be traded off for reliability.

Agent-assisted automation
Agent-assisted automation refers to automation used by call center agents to handle customer inquiries. The key benefits of agent-assisted automation are compliance and error-proofing. Agents are sometimes not fully trained, or they forget or ignore key steps in the process. The use of automation ensures that what is supposed to happen on the call actually does, every time. There are two basic types: desktop automation and automated voice solutions.

Control

Open-loop and closed-loop

Discrete control (on/off)
One of the simplest types of control is on-off control. An example is a thermostat used on household appliances, which either opens or closes an electrical contact. (Thermostats were originally developed as true feedback-control mechanisms rather than the on-off devices common in household appliances.) Another simple type is sequence control, in which a programmed sequence of discrete operations is performed, often based on system logic that involves system states. An elevator control system is an example of sequence control.

PID controller
A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism (controller) widely used in industrial control systems. In a PID loop, the controller continuously calculates an error value as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D), which give their name to the controller type. The theoretical understanding and application date from the 1920s, and they are implemented in nearly all analog control systems; originally in mechanical controllers, then using discrete electronics, and latterly in industrial process computers.

Sequential control and logical sequence or system state control
Sequential control may be either to a fixed sequence or to a logical one that will perform different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler. States refer to the various conditions that can occur in a use or sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input.
For example, if the operator presses the floor n button, the system will respond depending on whether the elevator is stopped or moving, going up or down, or if the door is open or closed, and other conditions. Early development of sequential control was relay logic, by which electrical relays engage electrical contacts which either start or interrupt power to a device. Relays were first used in telegraph networks before being developed for controlling other devices, such as when starting and stopping industrial-sized electric motors or opening and closing solenoid valves. Using relays for control purposes allowed event-driven control, where actions could be triggered out of sequence, in response to external events. These were more flexible in their response than the rigid single-sequence cam timers. More complicated examples involved maintaining safe sequences for devices such as swing bridge controls, where a lock bolt needed to be disengaged before the bridge could be moved, and the lock bolt could not be released until the safety gates had already been closed. The total number of relays and cam timers can number into the hundreds or even thousands in some factories. Early programming techniques and languages were needed to make such systems manageable, one of the first being ladder logic, where diagrams of the interconnected relays resembled the rungs of a ladder. Special computers called programmable logic controllers were later designed to replace these collections of hardware with a single, more easily re-programmed unit. In a typical hard-wired motor start and stop circuit (called a control circuit) a motor is started by pushing a "Start" or "Run" button that activates a pair of electrical relays. The "lock-in" relay locks in contacts that keep the control circuit energized when the push-button is released. (The start button is a normally open contact and the stop button is a normally closed contact.) Another relay energizes a switch that powers the device that throws the motor starter switch (three sets of contacts for three-phase industrial power) in the main power circuit. Large motors use high voltage and experience high in-rush current, making speed important in making and breaking contact. This can be dangerous for personnel and property with manual switches. The "lock-in" contacts in the start circuit and the main power contacts for the motor are held engaged by their respective electromagnets until a "stop" or "off" button is pressed, which de-energizes the lock in relay. Commonly interlocks are added to a control circuit. Suppose that the motor in the example is powering machinery that has a critical need for lubrication. In this case, an interlock could be added to ensure that the oil pump is running before the motor starts. Timers, limit switches, and electric eyes are other common elements in control circuits. Solenoid valves are widely used on compressed air or hydraulic fluid for powering actuators on mechanical components. While motors are used to supply continuous rotary motion, actuators are typically a better choice for intermittently creating a limited range of movement for a mechanical component, such as moving various mechanical arms, opening or closing valves, raising heavy press-rolls, applying pressure to presses. Computer control Computers can perform both sequential control and feedback control, and typically a single computer will do both in an industrial application. 
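To make that last statement concrete, here is a minimal, illustrative Python sketch that combines the two ideas just described: a start/stop latch with a lubrication interlock for the sequential part, and a PID loop for the feedback part. It is not industrial controller code; the function names, gains, and toy plant model are all invented for the example.

def pid_step(setpoint, measured, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One update of a discrete PID controller; returns (output, new_state)."""
    error = setpoint - measured
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, {"integral": integral, "prev_error": error}

def motor_permitted(start_latched, stop_pressed, oil_pump_running):
    """Start/stop latch with a lubrication interlock, as described above."""
    # The (normally closed) stop button breaks the latch; the interlock
    # requires the oil pump to be running before the motor may run.
    return start_latched and not stop_pressed and oil_pump_running

# Tiny simulation: drive a made-up motor speed toward a setpoint.
speed = 0.0                                     # measured process variable
setpoint = 100.0                                # desired value
state = {"integral": 0.0, "prev_error": 0.0}
start_latched, stop_pressed, oil_pump_running = True, False, True

for _ in range(200):
    if motor_permitted(start_latched, stop_pressed, oil_pump_running):
        drive, state = pid_step(setpoint, speed, state)
        speed += (drive - 0.5 * speed) * 0.1    # crude first-order plant model
    else:
        speed = max(0.0, speed - 5.0)           # coast down when not permitted

print(f"speed after 200 steps: {speed:.1f} (setpoint {setpoint})")

A real installation would run this kind of logic on a PLC or process computer with fixed scan cycles, safety-rated interlock hardware, and tuned gains; the point here is only to show the shape of the two control styles side by side.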
Programmable logic controllers (PLCs) are a type of special-purpose microprocessor that replaced many hardware components such as timers and drum sequencers used in relay logic–type systems. General-purpose process control computers have increasingly replaced stand-alone controllers, with a single computer able to perform the operations of hundreds of controllers. Process control computers can process data from a network of PLCs, instruments, and controllers to implement typical (such as PID) control of many individual variables or, in some cases, to implement complex control algorithms using multiple inputs and mathematical manipulations. They can also analyze data and create real-time graphical displays for operators and run reports for operators, engineers, and management. Control of an automated teller machine (ATM) is an example of an interactive process in which a computer will perform a logic-derived response to a user selection based on information retrieved from a networked database. The ATM process has similarities with other online transaction processes. The different logical responses are called scenarios. Such processes are typically designed with the aid of use cases and flowcharts, which guide the writing of the software code. The earliest feedback control mechanism was the water clock invented by Greek engineer Ctesibius (285–222 BC). See also Artificial Intelligence Automate This Automated storage and retrieval system Automation engineering Automation Master Automation technician Cognitive computing Control engineering Critique of work Cybernetics Data-driven control system Dirty, dangerous and demeaning Feedforward control Fully Automated Luxury Communism Futures studies The Human Use of Human Beings Industrial Revolution Industry 4.0 Intelligent automation Inventing the Future: Postcapitalism and a World Without Work Machine to machine Mobile manipulator Multi-agent system Post-work society Process control Productivity improving technologies The Right to Be Lazy Right to repair Robot tax Robotic process automation Semi-automation Technological unemployment The War on Normal People References Citations Sources E. McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2018) SSRN, part 2(3) Executive Office of the President, Artificial Intelligence, Automation and the Economy (December 2016) Further reading Acemoglu, Daron, and Pascual Restrepo. "Automation and New Tasks: How Technology Displaces and Reinstates Labor." The Journal of Economic Perspectives, vol. 33, no. 2, American Economic Association, 2019, pp. 3–30, . Norton, Andrew. Automation and Inequality: The Changing World of Work in the Global South. International Institute for Environment and Development, 2017, . Danaher, John. "The Case for Technological Unemployment." Automation and Utopia: Human Flourishing in a World without Work, Harvard University Press, 2019, pp. 25–52, . Reinsch, William, and Jack Caporal. "The Digital Economy & Data Governance." Key Trends in the Global Economy through 2030, edited by Matthew P. Goodman and Scott Miller, Center for Strategic and International Studies (CSIS), 2020, pp. 18–21, . Articles containing video clips
Automation
[ "Engineering" ]
10,130
[ "Control engineering", "Automation" ]
173,356
https://en.wikipedia.org/wiki/Sharpless%20epoxidation
The Sharpless epoxidation reaction is an enantioselective chemical reaction to prepare 2,3-epoxyalcohols from primary and secondary allylic alcohols. The oxidizing agent is tert-butyl hydroperoxide. The method relies on a catalyst formed from titanium tetra(isopropoxide) and diethyl tartrate. 2,3-Epoxyalcohols can be converted into diols, aminoalcohols, and ethers. The reactants for the Sharpless epoxidation are commercially available and relatively inexpensive. K. Barry Sharpless published a paper on the reaction in 1980 and was awarded the 2001 Nobel Prize in Chemistry for this and related work on asymmetric oxidations. The prize was shared with William S. Knowles and Ryōji Noyori. Catalyst 5–10 mol% of the catalyst is typical. The presence of 3Å molecular sieves (3Å MS) is necessary. The structure of the catalyst is uncertain although it is thought to be a dimer of []. Selectivity The epoxidation of allylic alcohols is a well-utilized conversion in fine chemical synthesis. The chirality of the product of a Sharpless epoxidation is sometimes predicted with the following mnemonic. A rectangle is drawn around the double bond in the same plane as the carbons of the double bond (the xy-plane), with the allylic alcohol in the bottom right corner and the other substituents in their appropriate corners. In this orientation, the (−) diester tartrate preferentially interacts with the top half of the molecule, and the (+) diester tartrate preferentially interacts with the bottom half of the molecule. This model seems to be valid despite substitution on the olefin. Selectivity decreases with larger R1, but increases with larger R2 and R3 (see introduction). However, this method incorrectly predicts the product of allylic 1,2-diols. Kinetic resolution The Sharpless epoxidation can also give kinetic resolution of a racemic mixture of secondary 2,3-epoxyalcohols. While the yield of a kinetic resolution process cannot be higher than 50%, the enantiomeric excess approaches 100% in some reactions. Synthetic utility The Sharpless epoxidation is viable with a large range of primary and secondary alkenic alcohols. Furthermore, with the exception noted above, a given dialkyl tartrate will preferentially add to the same face independent of the substitution on the alkene.To demonstrate the synthetic utility of the Sharpless epoxidation, the Sharpless group created synthetic intermediates of various natural products: methymycin, erythromycin, leukotriene C-1, and (+)-disparlure. As one of the few highly enantioselective reactions during its time, many manipulations of the 2,3-epoxyalcohols have been developed. The Sharpless epoxidation has been used for the total synthesis of various saccharides, terpenes, leukotrienes, pheromones, and antibiotics. The main drawback of this protocol is the necessity of the presence of an allylic alcohol. The Jacobsen epoxidation, an alternative method to enantioselectively oxidise alkenes, overcomes this issue and tolerates a wider array of functional groups. For specifically glycidic epoxides, the Jørgensen-Córdova epoxidation avoids the need to reduce the carbonyl and then reoxidize, and has more efficient catalyst turnover. References of historic interest See also Asymmetric catalytic oxidation Juliá–Colonna epoxidation — for enones Jacobsen epoxidation — for unfunctionalized alkenes References External links Sharpless Asymmetric Epoxidation Reaction Epoxidation reactions Organic redox reactions Name reactions Epoxides Catalysis
Sharpless epoxidation
[ "Chemistry" ]
846
[ "Catalysis", "Organic redox reactions", "Organic reactions", "Name reactions", "Chemical kinetics", "Ring forming reactions" ]
173,360
https://en.wikipedia.org/wiki/GNU%20Units
GNU Units is a cross-platform computer program for conversion of units of quantities. It has a database of measurement units, including esoteric and historical units. This for instance allows conversion of velocities specified in furlongs per fortnight, and pressures specified in tons per acre. Output units are checked for consistency with the input, allowing verification of conversion of complex expressions. History GNU Units was written by Adrian Mariano as an implementation of the units utility included with the Unix operating system. It was originally available under a permissive license. The GNU variant is distributed under the GPL although the FreeBSD project maintains a free fork of units from before the license change. units (Unix utility) The original units program has been a standard part of Unix since the early Bell Laboratories versions. Source code for a version very similar to the original is available from the Heirloom Project. GNU implementation GNU units includes several extensions to the original version, including Exponents can be written with ^ or **. Exponents can be larger than 9 if written with ^ or **. Rational and decimal exponents are supported. Sums of units (e.g., ) can be converted. Conversions can be made to sums of units, termed unit lists (e.g., from degrees to degrees, minutes, and seconds). Units that measure reciprocal dimensions can be converted (e.g., S to megohm). Parentheses for grouping are supported. This sometimes allows more natural expressions, such as in the example given in Complex units expressions. Roots of units (e.g., can be computed. Affine units conversions (e.g., °F to °C) are supported. Functions such as sin, cos, ln, log, and log2 are included. A script for updating the currency conversions is included; the script requires Python. Units definitions, including nonlinear conversions and unit lists, are user extensible. The plain text database definitions.units is a good reference in itself, as it is extensively commented and cites numerous sources. Other implementations UDUNITS is a similar utility program, except that it has an additional programming library interface and date conversion abilities. UDUNITS is considered the de facto program and library for variable unit conversion for netCDF files. Version history GNU Units version 2.19 was released on 31 May 2019, to reflect the 2019 revision of the SI; Version 2.14 released on 8 March 2017 fixed several minor bugs and improved support for building on Windows. Version 2.10, released on 26 March 2014, added support for rational exponents greater than one, and added the ability to save an interactive session in a file to provide a record of the conversions performed. Beginning with version 2.10, a 32-bit Windows binary distribution has been available on the project Web page (a 32-bit Windows port of version 1.87 has been available since 2008 as part of the GnuWin32 project). Version 2.02, released on 11 July 2013, added hexadecimal floating-point output and two other options to simplify changing the output format. Version 2.0, released on 2 July 2012, added the ability to convert to sums of units, such as hours and minutes or feet and inches. In addition, this release added support for UTF-8 encoding. Provision for locale-specific unit definitions was added. The syntax for defining non-linear units was changed, and added optional domain and range specifications. 
The names of the standard and personal units data files were changed, and the currency definitions were placed in a separate data file; a Python script for updating the currency definitions was added. The version history is covered in detail in the NEWS file included with the source distribution.

Usage
Units will output the result of the conversion in two lines. Usually, the first line (multiplication) is the desired result; the second line is the same conversion expressed as a division. Units can also function as a general-purpose scientific calculator; it includes several built-in mathematical functions such as sin, cos, atan, ln, exp, etc. Attempting to convert types of measurements that are incompatible will cause units to print a conformability error message and display a reduced form of each measurement.

Examples
The examples that follow show results from GNU units version 2.10.

Interactive mode

Currency exchange rates from www.timegenie.com on 2014-03-28
2729 units, 92 prefixes, 77 nonlinear units

You have: 10 furlongs
You want: miles
        * 1.25
        / 0.8
You have: 1 gallon + 3 pints
You want: quarts
        * 5.5
        / 0.18181818
You have: sqrt(meter)
              ^
Unit not a root
You have: sqrt(acre)
You want: ft
        * 208.71033
        / 0.0047913298
You have: 21 btu + 6500 ft lbf
You want: btu
        * 29.352939
        / 0.034068139
You have: _
You want: J
        * 30968.99
        / 3.2290366e-005
You have: 3.277 hr
You want: time
        3 hr + 16 min + 37.2 sec
You have: 1|2 inch
You want: cm
        * 1.27
        / 0.78740157

The underscore ('_') is used to indicate the result of the last successful unit conversion.

On the command line (non-interactive)

C:\>units "ten furlongs per fortnight" "kilometers per hour"
        * 0.0059871429
        / 167.02458

% units cup ounces
conformability error
        0.00023658824 m^3
        0.028349523 kg

Complex units expressions
One form of the Darcy–Weisbach equation for fluid flow is

        ΔP = (8/π²) ρ f L Q² / d⁵

where ΔP is the pressure drop, ρ is the mass density, f is the (dimensionless) friction factor, L is the length of the pipe, Q is the volumetric flow rate, and d is the pipe diameter. It might be desirable to have the equation in the form

        ΔP = A₁ ρ f L Q² / d⁵

that would accept typical US units (ρ in lbm/ft³, L in ft, Q in ft³/s, d in inches, and ΔP in psi); the constant A1 could be determined manually using the unit-factor method, but it could be determined more quickly and easily using units:

$ units "(8/pi^2)(lbm/ft^3)ft(ft^3/s)^2(1/in^5)" psi
        * 43.533969
        / 0.022970568

Crane Technical Paper No. 410, Eq. 3-5, gives the multiplicative value as 43.5.

See also
Unified Code for Units of Measure

Notes

References

External links
Linux man page for units
Java version of GNU units
GnuWin port of GNU units
units source from the Heirloom Project
Online units converter based on GNU units
A simple online converter based on GNU units
UDUNITS

Cross-platform software Unix software Units Free mathematics software Units of measurement
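Returning to the "Complex units expressions" example above: the constant can also be reproduced by hand. The following short Python sketch uses only the standard exact definitions of the US customary units involved (nothing from GNU Units itself) and arrives at the same value.

import math

lbm = 0.45359237                  # kg, exact definition of the pound
ft = 0.3048                       # m, exact
inch = 0.0254                     # m, exact
lbf = lbm * 9.80665               # N, pound-force via standard gravity
psi = lbf / inch**2               # Pa per psi

# (8/pi^2) * (lbm/ft^3) * ft * (ft^3/s)^2 / in^5, expressed in SI (seconds = 1)
pressure_pa = (8 / math.pi**2) * (lbm / ft**3) * ft * (ft**3) ** 2 / inch**5
print(pressure_pa / psi)          # ~43.53, matching the `units` output above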
GNU Units
[ "Mathematics" ]
1,442
[ "Free mathematics software", "Quantity", "Units of measurement", "Mathematical software" ]
173,362
https://en.wikipedia.org/wiki/Fortnight
A fortnight is a unit of time equal to 14 days (two weeks). The word derives from the Old English term fēowertīene niht, meaning "fourteen nights" (or "fourteen days", since the Anglo-Saxons counted by nights).

Astronomy and tides
In astronomy, a lunar fortnight is half a lunar synodic month, which is equivalent to the mean period between a full moon and a new moon (and vice versa). This is equal to about 14.77 days. It gives rise to a lunar fortnightly tidal constituent (see: Long-period tides).

Analogs and translations
In many languages, there is no single word for a two-week period, and the equivalent terms "two weeks", "14 days", or "15 days" (counting inclusively) have to be used.
Celtic languages: in Welsh, the term pythefnos, meaning "15 nights", is used. This is in keeping with the Welsh term for a week, which is wythnos ("eight nights"). In Irish, the term is coicís.
Similarly, in Greek, the term δεκαπενθήμερο (dekapenthímero), meaning "15 days", is used.
The Hindu calendar uses the Sanskrit word पक्ष "pakṣa", meaning one half of a lunar month, which is between 14 and 15 solar days.
In Romance languages there are the terms quincena (or quince días) in Galician and Spanish, quinzena or quinze dies in Catalan, quinze dias or quinzena in Portuguese, quindicina in Italian, quinze jours or quinzaine in French, and chenzină in Romanian, all meaning "a grouping of 15".
Semitic languages have a "doubling suffix". When added at the end of the word for "week" it changes the meaning to "two weeks". In Hebrew, the single word שבועיים (shvu′ayim) means exactly "two weeks". Also in Arabic, by adding the common dual suffix to the word for "week", أسبوع, the form أسبوعين (usbu′ayn), meaning "two weeks", is formed.
Slavic languages: in Czech the terms čtrnáctidenní and dvoutýdenní have the same meaning as "fortnight". In Ukrainian, the term два тижні ("two weeks") is used.

See also
FFF system
Half-month
Sennight
Ides (idus), Roman day for the midst of a month.

References

Units of time
Fortnight
[ "Physics", "Mathematics" ]
566
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
173,363
https://en.wikipedia.org/wiki/Platoon%20%28automobile%29
In transportation, platooning or flocking is a method for driving a group of vehicles together. It is meant to increase the capacity of roads via an automated highway system. Platoons decrease the distances between cars or trucks using electronic, and possibly mechanical, coupling. This capability would allow many cars or trucks to accelerate or brake simultaneously. This system also allows for a closer headway between vehicles by eliminating reacting distance needed for human reaction. Platoon capability might require buying new vehicles, or it may be something that can be retrofitted. Drivers would probably need a special license endorsement on account of the new skills required and the added responsibility when driving in the lead. Smart cars with artificial intelligence could automatically join and leave platoons. The automated highway system is a proposal for one such system, where cars organise themselves into platoons of 8 to 25. Potential benefits Greater fuel economy due to reduced air resistance and by reducing the need for acceleration, deceleration, and stopping to maintain traffic flow. Reduced congestion. Shorter commutes during peak periods. On longer highway trips, vehicles could be mostly unattended whilst in following mode. Vehicle to vehicle charging Potential disadvantages Some systems have failed in traffic, as they have been hacked by remote computers, creating a hazardous situation. Drivers would be less in control of their own driving, being at the hands of computer software or the lead driver. Drivers may be less attentive than usual, and they may not be able to react as quickly to adverse situations if the software or hardware were to fail. The close spacing would not give drivers enough time to react to hazards if the software or hardware were to fail. Platooning is only possible when enough vehicles with the same software are driving together, and could be difficult in real-life situations with a mixture of vehicle types. Interactions between platooning and non-platooning vehicles in real-life situations may be difficult, due to differing requirements for headway and reaction time, different driving styles, and the possibility of a platoon blocking lane changes and merging traffic. In the event of a traffic collision with the lead vehicle, the entire platoon may be caught in a multiple-vehicle collision due to the reduced vehicle spacing. The close spacing of platooning vehicles may serve to further normalize tailgating among drivers of non-platooning vehicles, increasing dangerous driving behavior. Automated highway system An automated highway system (AHS), or smart road, is a proposed intelligent transportation system technology designed to provide for driverless cars on specific right-of ways. It is most often recommended as a means of traffic congestion relief, on the grounds that it would drastically reduce following distances and headway, thus allowing a given stretch of road to carry more cars. Principle In one scheme, the roadway has magnetized stainless-steel spikes driven one meter apart in its center. The car senses the spikes to measure its speed and locate the center of the lane. Furthermore, the spikes can have either magnetic north or magnetic south facing up. The roadway thus provides small amounts of digital data describing interchanges and recommended speeds. The cars have power steering and automatic speed controls, which are controlled by a computer. The cars organize themselves into platoons of 8 to 25 cars. 
The cars within a platoon drive themselves a meter apart, so that air resistance is minimized. The distance between platoons is the conventional braking distance. If anything goes wrong, the maximum number of harmed cars should be one platoon. An overview of platooning systems is given in Bergenhem et al. Platooning of trucks has been proposed as a concept to reduce the energy consumption of semi-trucks and improve the feasibility of electric semi-trucks.

Early development
Early research on AHS was carried out by a team from Ohio State University led by Robert E. Fenton, with funding from the U.S. Federal Highway Administration. Their first automated vehicle was built in 1962, and is believed to be the first land vehicle to contain a computer. Steering, braking and speed were controlled through the onboard electronics, which filled the trunk, back seat and most of the front of the passenger side of the car. Research continued at OSU until federal funding was cut in the early 1980s.

Deployments

United States
The USDOT-sponsored National Automated Highway System Consortium (NAHSC) project, a prototype automated highway system, was tested in San Diego County, California in 1997 along Interstate 15. However, despite the technical success of the program, investment has moved more toward autonomous intelligent vehicles rather than building specialized infrastructure. The AHS system places sensory technology in cars that can read passive road markings, and use radar and inter-car communications to make the cars organize themselves without the intervention of drivers. Such an autonomous cruise control system is being developed by Mercedes-Benz, BMW, Volkswagen and Toyota. The Federal Highway Administration in 2013 funded two research projects in heavy truck platooning (without steering automation). One is led by Auburn University with Peterbilt, American Trucking Associations, Meritor Wabco, and Peloton Technology, and the other is led by the California Department of Transportation, with UC Berkeley and Volvo Trucks.

SARTRE
The SARTRE Project (Safe Road Trains for the Environment) is a European Commission-funded project investigating the implementation of platooning on unmodified European motorways. The project began in September 2009. Vehicle platooning, as envisaged by the SARTRE project, is a convoy of vehicles in which a professional driver in a lead vehicle heads a line of closely following vehicles. Each following vehicle autonomously measures the distance, speed and direction and adjusts to the vehicle in front. Once in the platoon, drivers can do other things while the platoon proceeds towards its long-haul destination. Vehicles can detach and leave the procession at any time. In January 2011, SARTRE made the first successful demonstration of its platooning technology at the Volvo Proving Ground near Gothenburg, Sweden, in which a lead truck was followed by a car. In January 2012, SARTRE made a second demonstration in Barcelona, Spain, in which a lead truck was followed by three cars driven entirely autonomously at motorway speeds with only a small gap between them. The companies that participated in SARTRE were Volvo Trucks and Volvo Car Corporation.

EU Truck Platooning Challenge
During its Presidency of the European Union in 2016, the Netherlands organised a European Truck Platooning Challenge. Six brands of automated trucks – DAF Trucks, Daimler Trucks, Iveco, MAN Truck & Bus, Scania AB and Volvo Trucks – ran on public roads from several European cities to the Netherlands.
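To put rough numbers on the capacity argument made in the Principle and Potential benefits sections above, here is a back-of-the-envelope comparison in Python. The speed, vehicle length, gaps, and platoon size are illustrative assumptions, not figures from any of the projects described.

speed = 25.0        # m/s (90 km/h), assumed
car_len = 5.0       # m, assumed

# Conventional driving: roughly a 2-second time headway between vehicles.
human_gap = 2.0 * speed
conventional = 3600 * speed / (car_len + human_gap)        # vehicles/hour/lane

# Platooned driving: 10-car platoons with 1 m gaps inside the platoon and a
# full braking-distance gap (assume 60 m) between successive platoons.
platoon_size, intra_gap, inter_gap = 10, 1.0, 60.0
platoon_len = platoon_size * car_len + (platoon_size - 1) * intra_gap
platooned = 3600 * speed * platoon_size / (platoon_len + inter_gap)

print(f"conventional: ~{conventional:.0f} vehicles/hour/lane")   # ~1600
print(f"platooned:    ~{platooned:.0f} vehicles/hour/lane")      # ~7600

Under these assumed figures the same lane carries several times as many vehicles, which is the basic motivation for reducing headway; real gains depend heavily on how platooning and non-platooning traffic mix.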
Japan
In January 2018, trucks from different manufacturers were successfully platooned for the first time on the Shin-Tomei Expressway in Japan. In February 2021, the Ministry of Economy, Trade and Industry (METI) and the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) successfully ran trucks in a platoon on part of the Shin-Tomei Expressway in which no drivers were present in the second or following trucks, with staff riding only in the passenger seats for safety purposes.

South Korea
In November 2019, Hyundai Motor Group successfully conducted the first highway platooning of trucks in Korea. Demonstrations of platooning, cut-in/cut-out of other vehicles, simultaneous emergency braking, and V2V communication technology were conducted.

See also
Train
Autonomous car
Drafting (aerodynamics)
Green wave
Peloton
Road train
Safe Road Trains for the Environment
Smart highway
Traffic assignment
Vehicle Infrastructure Integration
Virginia Smart Road
Wardrop equilibrium

References

External links
Fleet Test and Evaluation Project – Truck Platooning Testing (National Renewable Energy Laboratory)
Roadtrains.org
Vehicle Platooning and Automated Highways Description of the San Diego experiment.
Underground Automated Highway Systems Forecast for the future of urban transportation.
Safe Road Trains for the Environment (SARTRE)
Simulation of Cooperative Automated Driving by Bidirectional Coupling of Vehicle and Network Simulators: An example of platooning simulation in Webots done in the context of the AutoNet2030 European project.

Videos
1997 demo of autonomous cars platooning on I-5 San Diego, California (NAHSC)
2011 SARTRE Project demo, Gothenburg, Sweden (a lead truck with a single following car)
2012 SARTRE Project demo, Barcelona, Spain (a lead truck followed by three cars driven entirely autonomously)

Automotive technologies
Platoon (automobile)
[ "Engineering" ]
1,697
[ "Automotive engineering", "Self-driving cars" ]
173,366
https://en.wikipedia.org/wiki/Mechanization
Mechanization (or mechanisation) is the process of changing from working largely or exclusively by hand or with animals to doing that work with machinery. In an early engineering text, a machine is defined as follows: In every fields, mechanization includes the use of hand tools. In modern usage, such as in engineering or economics, mechanization implies machinery more complex than hand tools and would not include simple devices such as an ungeared horse or donkey mill. Devices that cause speed changes or changes to or from reciprocating to rotary motion, using means such as gears, pulleys or sheaves and belts, shafts, cams and cranks, usually are considered machines. After electrification, when most small machinery was no longer hand powered, mechanization was synonymous with motorized machines. Extension of mechanization of the production process is termed as automation and it is controlled by a closed loop system in which feedback is provided by the sensors. In an automated machine the work of different mechanisms is performed automatically. History Ancient times Water wheels date to the Roman period and were used to grind grain and lift irrigation water. Water-powered bellows were in use on blast furnaces in China in 31 AD. By the 13th century, water wheels powered sawmills and trip hammers, to pull cloth and pound flax and later cotton rags into pulp for making paper. Trip hammers are shown crushing ore in De re Metallica (1555). Clocks were some of the most complex early mechanical devices. Clock makers were important developers of machine tools including gear and screw cutting machines, and were also involved in the mathematical development of gear designs. Clocks were some of the earliest mass-produced items, beginning around 1830. Water-powered bellows for blast furnaces, used in China in ancient times, were in use in Europe by the 15th century. De re Metallica contains drawings related to bellows for blast furnaces including a fabrication drawing. Improved gear designs decreased wear and increased efficiency. Mathematical gear designs were developed in the mid 17th century. French mathematician and engineer Desargues designed and constructed the first mill with epicycloidal teeth ca. 1650. In the 18th century involute gears, another mathematical derived design, came into use. Involute gears are better for meshing gears of different sizes than epicycloidal. Gear cutting machines came into use in the 18th century. Industrial revolution The Newcomen steam engine was first used, to pump water from a mine, in 1712. John Smeaton introduced metal gears and axles to water wheels in the mid to last half of the 18th century. The Industrial Revolution started mainly with textile machinery, such as the spinning jenny (1764) and water frame (1768). Demand for metal parts used in textile machinery led to the invention of many machine tools in the late 1700s until the mid-1800s. After the early decades of the 19th century, iron increasingly replaced wood in gearing and shafts in textile machinery. In the 1840s self acting machine tools were developed. Machinery was developed to make nails ca. 1810. The Fourdrinier paper machine for continuous production of paper was patented in 1801, displacing the centuries-old hand method of making individual sheets of paper. One of the first mechanical devices used in agriculture was the seed drill invented by Jethro Tull around 1700. 
The seed drill allowed more uniform spacing of seed and planting depth than hand methods, increasing yields and saving valuable seed. In 1817, the first bicycle was invented and used in Germany. Mechanized agriculture greatly increased in the late eighteenth and early nineteenth centuries with horse drawn reapers and horse powered threshing machines. By the late nineteenth century steam power was applied to threshing and steam tractors appeared. Internal combustion began being used for tractors in the early twentieth century. Threshing and harvesting was originally done with attachments for tractors, but in the 1930s independently powered combine harvesters were in use. In the mid to late 19th century, hydraulic and pneumatic devices were able to power various mechanical actions, such as positioning tools or work pieces. Pile drivers and steam hammers are examples for heavy work. In food processing, pneumatic or hydraulic devices could start and stop filling of cans or bottles on a conveyor. Power steering for automobiles uses hydraulic mechanisms, as does practically all earth moving equipment and other construction equipment and many attachments to tractors. Pneumatic (usually compressed air) power is widely used to operate industrial valves. Twentieth century By the early 20th century machines developed the ability to perform more complex operations that had previously been done by skilled craftsmen. An example is the glass bottle making machine developed 1905. It replaced highly paid glass blowers and child labor helpers and led to the mass production of glass bottles. After 1900 factories were electrified, and electric motors and controls were used to perform more complicated mechanical operations. This resulted in mechanized processes to manufacture almost all goods. Categories In manufacturing, mechanization replaced hand methods of making goods. Prime movers are devices that convert thermal, potential or kinetic energy into mechanical work. Prime movers include internal combustion engines, combustion turbines (jet engines), water wheels and turbines, windmills and wind turbines and steam engines and turbines. Powered transportation equipment such as locomotives, automobiles and trucks and airplanes, is a classification of machinery which includes sub classes by engine type, such as internal combustion, combustion turbine and steam. Inside factories, warehouses, lumber yards and other manufacturing and distribution operations, material handling equipment replaced manual carrying or hand trucks and carts. In mining and excavation, power shovels replaced picks and shovels. Rock and ore crushing had been done for centuries by water-powered trip hammers, but trip hammers have been replaced by modern ore crushers and ball mills. Bulk material handling systems and equipment are used for a variety of materials including coal, ores, grains, sand, gravel and wood products. Construction equipment includes cranes, concrete mixers, concrete pumps, cherry pickers and an assortment of power tools. Powered machinery Powered machinery today usually means either by electric motor or internal combustion engine. Before the first decade of the 20th century powered usually meant by steam engine, water or wind. Many of the early machines and machine tools were hand powered, but most changed over to water or steam power by the early 19th century. Before electrification, mill and factory power was usually transmitted using a line shaft. 
Electrification allowed individual machines to each be powered by a separate motor in what is called unit drive. Unit drive allowed factories to be better arranged and allowed different machines to run at different speeds. Unit drive also allowed much higher speeds, which was especially important for machine tools. A step beyond mechanization is automation. Early production machinery, such as the glass bottle blowing machine (ca. 1890s), required a lot of operator involvement. By the 1920s fully automatic machines, which required much less operator attention, were being used. Military usage The term is also used in the military to refer to the use of tracked armoured vehicles, particularly armoured personnel carriers, to move troops ( mechanized infantry) that would otherwise have marched or ridden trucks into combat. In military terminology, mechanized refers to ground units that can fight from vehicles, while motorized refers to units (motorized infantry) that are transported and go to battle in unarmoured vehicles such as trucks. Thus, a towed artillery unit is considered motorized while a self-propelled one is mechanized. Mechanical vs human labour When we compare the efficiency of a labourer, we see that he has an efficiency of about 1%–5.5% (depending on whether he uses arms, or a combination of arms and legs). Internal combustion engines mostly have an efficiency of about 20%, although large diesel engines, such as those used to power ships, may have efficiencies of nearly 50%. Industrial electric motors have efficiencies up to the low 90% range, before correcting for the conversion efficiency of fuel to electricity of about 35%. When we compare the costs of using an internal combustion engine to a worker to perform work, we notice that an engine can perform more work at a comparative cost. 1 liter of fossil fuel burnt with an IC engine equals about 50 hands of workers operating for 24 hours or 275 arms and legs for 24 hours. In addition, the combined work capability of a human is also much lower than that of a machine. An average human worker can provide work good for around 0,9 hp (2.3 MJ per hour) while a machine (depending on the type and size) can provide for far greater amounts of work. For example, it takes more than one and a half hour of hard labour to deliver only one kWh – which a small engine could deliver in less than one hour while burning less than one litre of petroleum fuel. This implies that a gang of 20 to 40 men will require a financial compensation for their work at least equal to the required expended food calories (which is at least 4 to 20 times higher). In most situations, the worker will also want compensation for the lost time, which is easily 96 times greater per day. Even if we assume the real wage cost for the human labour to be at US $1.00/day, an energy cost is generated of about $4.00/kWh. Despite this being a low wage for hard labour, even in some of the countries with the lowest wages, it represents an energy cost that is significantly more expensive than even exotic power sources such as solar photovoltaic panels (and thus even more expensive when compared to wind energy harvesters or luminescent solar concentrators). Levels of mechanization For simplification, one can study mechanization as a series of steps. Many students refer to this series as indicating basic-to-advanced forms of mechanical society. hand/muscle power hand-tools powered hand-tools, e.g. 
electric-controlled powered tools, single functioned, fixed cycle powered tools, multi-functioned, program controlled powered tools, remote-controlled powered tools, activated by work-piece (e.g.: coin phone) measurement selected signaling control, e.g. hydro power control performance recording automated machine action altered through measurement segregation/rejection according to measurement selection of appropriate action cycle correcting performance after operation correcting performance during operation See also Assembly line Bulk materials handling Industrialisation Newly industrialized country References Further reading Secondary sector of the economy Agricultural machinery Armoured warfare Machinery Industrial history
Mechanization
[ "Physics", "Technology", "Engineering" ]
2,131
[ "Physical systems", "Machines", "Machinery", "Mechanical engineering" ]
173,370
https://en.wikipedia.org/wiki/Whistler%20%28radio%29
A whistler is a very low frequency (VLF) electromagnetic (radio) wave generated by lightning. Frequencies of terrestrial whistlers are 1 kHz to 30 kHz, with maximum frequencies usually at 3 kHz to 5 kHz. Although they are electromagnetic waves, they occur at audio frequencies and can be converted to audio using a suitable receiver. They are produced by lightning strikes (mostly intracloud and return-path) where the impulse travels along the Earth's magnetic field lines from one hemisphere to the other. They undergo dispersion of several kHz due to the slower velocity of the lower frequencies through the plasma environments of the ionosphere and magnetosphere. Thus they are perceived as a descending tone which can last for a few seconds. The study of whistlers categorizes them into Pure Note, Diffuse, 2-Hop, and Echo Train types. The Voyager 1 and 2 spacecraft detected whistler-like activity in the vicinity of Jupiter known as "Jovian Whistlers", supporting the visual observations of lightning made by Voyager 1. Whistlers have been detected in the Earth's magnetosheath, where they are often called "lion roars" due to their frequencies of tens to hundreds of Hz.

Sources
The pulse of electromagnetic energy of a lightning discharge producing whistlers contains a wide range of frequencies below the electron cyclotron frequency. Due to interactions with free electrons in the ionosphere, the waves become highly dispersive and, like guided waves, follow the lines of the geomagnetic field. These lines provide sufficient focusing influence and prevent the scattering of the field energy. Their paths reach into outer space as far as 3 to 4 times the Earth's radius in the plane of the equator and bring energy from the lightning discharge to the Earth at a point in the opposite hemisphere which is the magnetic conjugate of the point of the original radio emission. From there, the whistler waves are reflected back to the hemisphere from which they started. The energy is almost perfectly reflected from the Earth's surface 4 or 5 times, with increasing dispersion and diminishing amplitude. Along such long paths the speed of propagation of energy is between c/10 and c/100 (where c is the speed of light), and the exact value depends upon frequency. Modulated heating of the lower ionosphere with an HF heater array can also be used to generate VLF waves that excite whistler mode propagation. By transmitting high power HF waves with a VLF modulated power envelope into the D-region ionosphere, the conductivity of the ionospheric plasma can be modulated. This conductivity modulation, together with naturally occurring electrojet fields, produces a virtual antenna which radiates at the modulation frequency. The HAARP HF heater array has been used to excite whistler-mode VLF signals detectable at the magnetic conjugate point, with up to 10 hops visible in the received VLF data.

History
Whistlers were probably heard as early as 1886 on long telephone lines, but the clearest early description was by Heinrich Barkhausen in 1919. British scientist Llewelyn Robert Owen Storey showed in his 1953 PhD dissertation that whistlers are generated by lightning. Around the same time, Storey posited that the existence of whistlers meant plasma was present in Earth's atmosphere, and that it moved radio waves in the same direction as Earth's magnetic field lines. From this he deduced, but was unable to conclusively prove, the existence of the plasmasphere, a thin layer between the ionosphere and magnetosphere.
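The dispersion described in the lead and Sources sections above is often summarised, to first order, by Eckersley's law, under which each frequency component arrives after a delay proportional to 1/√f. Neither the relation nor the numbers below come from this article: the law is a standard textbook approximation and the dispersion constant used here is an assumed, illustrative value. A small Python sketch of the resulting descending tone:

import math

D = 50.0                                  # dispersion constant, s*sqrt(Hz); assumed value
for f in (10_000, 5_000, 2_000, 1_000):   # frequency components in Hz
    delay = D / math.sqrt(f)              # Eckersley's approximation: t = D / sqrt(f)
    print(f"{f / 1000:>4.0f} kHz component arrives after {delay:.2f} s")
# Higher frequencies arrive first and lower frequencies later, so the received
# signal sweeps downward over a second or more, as described above.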
In 1963 American scientist Don Carpenter and Soviet astronomer Konstantin Gringauz—independently of each other, and the latter using data from the Luna 2 spacecraft—experimentally proved the plasmasphere and plasmapause's existence, building on Storey's thinking. American electrical engineer Robert Helliwell is also known for his research into whistlers. Helliwell and one of his students, Jack Mallinckrodt, were investigating lightning noise at very low radio frequencies at Stanford University in 1950. Mallinckrodt heard some whistling sounds and brought them to Helliwell's attention. As Helliwell recalled in an article in the October 1982 issue of the Stanford Engineer, he initially thought it was an artifact, but stood radio watch with Mallinckrodt until he heard the whistlers himself. Helliwell described these sounds as "weird, strange and unbelievable as flying saucers" in a 1954 article in the Palo Alto Times. Helliwell tried to understand the mechanism involved in the production of whistlers. He conducted experiments at the VLF outpost Siple Station in West Antarctica, which was active from 1971 to 1988. Since the wavelength of VLF radio signals is very large (a frequency of 10 kHz corresponds to a wavelength of 30 km), Siple Station had an exceptionally long antenna. The antenna was used to transmit VLF radio signals into Earth's magnetosphere, to be detected in Canada. It was possible to inject these signals into the magnetosphere, since the ionosphere is transparent to these low frequencies.

Etymology
Whistlers were named by British World War I radio operators. On the wide-band spectrogram, the observed characteristic of a whistler is that the tone rapidly descends over a few seconds—almost like a person whistling or an incoming grenade—hence the name "whistlers."

Nomenclature
A type of electromagnetic signal propagating in the Earth–ionosphere waveguide, known as a radio atmospheric signal or sferic, may escape the ionosphere and propagate outward into the magnetosphere. The signal is prone to bounce-mode propagation, reflecting back and forth on opposite sides of the planet until totally attenuated. To clarify which part of this hop pattern the signal is in, it is specified by a number, indicating the portion of the bounce path it is currently on. On its first upward path, it is known as a 0+. After passing the geomagnetic equator, it is referred to as a 1−. The + or − sign indicates upward or downward propagation, respectively. The numeral represents the half-bounce currently in progress. The reflected signal is redesignated 1+, until passing the geomagnetic equator again; then it is called 2−, and so on.

See also
Dawn chorus (electromagnetic)
Electromagnetic electron wave
Hiss (electromagnetic)
Atmospheric noise
Radio atmospheric
Helicon (physics)

Relevant spacecraft
Advanced Composition Explorer (ACE), launched 1997, still operational.
FR-1, launched 1965, one of the earliest spacecraft to measure ionospheric and magnetospheric VLF waves, non-operational but still orbiting Earth.
Helios (spacecraft)
MESSENGER (MErcury Surface, Space ENvironment, GEochemistry and Ranging), launched 2004, decommissioned 2015.
Radiation Belt Storm Probes
Solar Dynamics Observatory (SDO), launched 2010, still operational.
Solar and Heliospheric Observatory (SOHO), launched 1995, still operational.
Solar Maximum Mission (SMM), launched 1980, decommissioned 1989.
Solar Orbiter (SOLO), launched in February 2020, operational since November 2021.
Parker Solar Probe, launched in 2018, still operational.
STEREO (Solar TErrestrial RElations Observatory), launched 2006, still operational. Transition Region and Coronal Explorer (TRACE), launched 1998, decommissioned 2010. Ulysses (spacecraft), launched 1990, decommissioned 2009. WIND (spacecraft), launched 1994, still operational. References Further reading A beginner's guide to natural VLF radio phenomena - second part. The INSPIRE Project - Exploring Very Low Frequency Natural Radio (NASA educational portfolio program). Atmospheric electricity Electrical phenomena Ionosphere Space physics
Whistler (radio)
[ "Physics", "Astronomy" ]
1,556
[ "Physical phenomena", "Outer space", "Atmospheric electricity", "Electrical phenomena", "Space physics" ]
173,371
https://en.wikipedia.org/wiki/Filter%20design
Filter design is the process of designing a signal processing filter that satisfies a set of requirements, some of which may be conflicting. The purpose is to find a realization of the filter that meets each of the requirements to an acceptable degree. The filter design process can be described as an optimization problem. Certain parts of the design process can be automated, but an experienced designer may be needed to get a good result. The design of digital filters is a complex topic. Although filters are easily understood and calculated, the practical challenges of their design and implementation are significant and are the subject of advanced research. Typical design requirements Typical requirements which may be considered in the design process are: Frequency response Phase shift or group delay impulse response Causal filter required? Stable filter required? Finite (in duration) impulse response required? Computational complexity Technology The frequency function The required frequency response is an important parameter. The steepness and complexity of the response curve determines the filter order and feasibility. A first-order recursive filter will only have a single frequency-dependent component. This means that the slope of the frequency response is limited to 6 dB per octave. For many purposes, this is not sufficient. To achieve steeper slopes, higher-order filters are required. In relation to the desired frequency function, there may also be an accompanying weighting function, which describes, for each frequency, how important it is that the resulting frequency function approximates the desired one. Typical examples of frequency function are: A low-pass filter is used to cut unwanted high-frequency signals. A high-pass filter passes high frequencies fairly well; it is helpful as a filter to cut any unwanted low-frequency components. A band-pass filter passes a limited range of frequencies. A band-stop filter passes frequencies above and below a certain range. A very narrow band-stop filter is known as a notch filter. An all-pass filter passes all frequencies equally in gain. Only the phase shift is changed, which also affects the group delay. A differentiator has an amplitude response proportional to the frequency. A low-shelf filter passes all frequencies, but increases or reduces frequencies below the shelf frequency by specified amount. A high-shelf filter passes all frequencies, but increases or reduces frequencies above the shelf frequency by specified amount. A peak EQ filter makes a peak or a dip in the frequency response, commonly used in parametric equalizers. Phase and group delay An all-pass filter passes through all frequencies unchanged, but changes the phase of the signal. Filters of this type can be used to equalize the group delay of recursive filters. This filter is also used in phaser effects. A Hilbert transformer is a specific all-pass filter that passes sinusoids with unchanged amplitude but shifts each sinusoid phase by ±90°. A fractional delay filter is an all-pass that has a specified and constant group or phase delay for all frequencies. The impulse response There is a direct correspondence between the filter's frequency function and its impulse response: the former is the Fourier transform of the latter. That means that any requirement on the frequency function is a requirement on the impulse response, and vice versa. 
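That correspondence is easy to see numerically. The short sketch below uses NumPy; the 5-tap moving average is simply a convenient example filter, not one prescribed by the text.

import numpy as np

h = np.ones(5) / 5.0               # impulse response of a 5-tap moving-average FIR filter
H = np.fft.rfft(h, n=512)          # zero-padded DFT: samples of the frequency response
freqs = np.fft.rfftfreq(512)       # normalised frequencies, 0 .. 0.5 cycles/sample

print("gain at DC:      ", abs(H[0]))              # 1.0 -- constants pass unchanged
print("gain at Nyquist: ", abs(H[-1]))             # 0.2 -- high frequencies attenuated
print("-3 dB point near ", freqs[np.argmax(abs(H) < abs(H[0]) / np.sqrt(2))], "cycles/sample")

Any constraint placed on the curve abs(H) is therefore, implicitly, a constraint on the coefficients h, and vice versa.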
However, in certain applications it may be the filter's impulse response that is explicit and the design process then aims at producing as close an approximation as possible to the requested impulse response given all other requirements. In some cases it may even be relevant to consider a frequency function and impulse response of the filter which are chosen independently from each other. For example, we may want both a specific frequency function of the filter and that the resulting filter have a small effective width in the signal domain as possible. The latter condition can be realized by considering a very narrow function as the wanted impulse response of the filter even though this function has no relation to the desired frequency function. The goal of the design process is then to realize a filter which tries to meet both these contradicting design goals as much as possible. An example is for high-resolution audio in which the frequency response (magnitude and phase) for steady state signals (sum of sinusoids) is the primary filter requirement, while an unconstrained impulse response may cause unexpected degradation due to time spreading of transient signals. Causality Any filter operating in real time (the filter response only depends on the current and past inputs) must be causal. If the design process yields a noncausal filter, the resulting filter can be made causal by introducing an appropriate time-shift (or delay). Filters that do not operate in real time (e.g. for image processing) can be non-causal. Noncausal filters may be designed to have zero delay. Stability A stable filter assures that every limited input signal produces a limited filter response. A filter which does not meet this requirement may in some situations prove useless or even harmful. Certain design approaches can guarantee stability, for example by using only feed-forward circuits such as an FIR filter. On the other hand, filters based on feedback circuits have other advantages and may therefore be preferred, even if this class of filters includes unstable filters. In this case, the filters must be carefully designed in order to avoid instability. Locality In certain applications we have to deal with signals which contain components which can be described as local phenomena, for example pulses or steps, which have certain time duration. A consequence of applying a filter to a signal is, in intuitive terms, that the duration of the local phenomena is extended by the width of the filter. This implies that it is sometimes important to keep the width of the filter's impulse response function as short as possible. According to the uncertainty relation of the Fourier transform, the product of the width of the filter's impulse response function and the width of its frequency function must exceed a certain constant. This means that any requirement on the filter's locality also implies a bound on its frequency function's width. Consequently, it may not be possible to simultaneously meet requirements on the locality of the filter's impulse response function as well as on its frequency function. This is a typical example of contradicting requirements. Computational complexity A general desire in any design is that the number of operations (additions and multiplications) needed to compute the filter response is as low as possible. In certain applications, this desire is a strict requirement, for example due to limited computational resources, limited power resources, or limited time. 
The last limitation is typical in real-time applications. There are several ways in which a filter can have different computational complexity. For example, the order of a filter is more or less proportional to the number of operations. This means that by choosing a low order filter, the computation time can be reduced. For discrete filters the computational complexity is more or less proportional to the number of filter coefficients. If the filter has many coefficients, for example in the case of multidimensional signals such as tomography data, it may be relevant to reduce the number of coefficients by removing those which are sufficiently close to zero. In multirate filters, the number of coefficients can be reduced by taking advantage of the signal's bandwidth limits, where the input signal is downsampled (e.g. to its critical frequency) and upsampled after filtering. Another issue related to computational complexity is separability, that is, if and how a filter can be written as a convolution of two or more simpler filters. In particular, this issue is of importance for multidimensional filters, e.g., 2D filters which are used in image processing. In this case, a significant reduction in computational complexity can be obtained if the filter can be separated as the convolution of one 1D filter in the horizontal direction and one 1D filter in the vertical direction. A result of the filter design process may, e.g., be to approximate some desired filter as a separable filter or as a sum of separable filters. Other considerations It must also be decided how the filter is going to be implemented: Analog filter Analog sampled filter Digital filter Mechanical filter Analog filters The design of linear analog filters is for the most part covered in the linear filter section. Digital filters Digital filters are classified into one of two basic forms, according to how they respond to a unit impulse: Finite impulse response, or FIR, filters express each output sample as a weighted sum of the last N input samples, where N is the order of the filter. FIR filters are normally non-recursive, meaning they do not use feedback and as such are inherently stable. A moving average filter or CIC filter are examples of FIR filters that are normally recursive (that use feedback). If the FIR coefficients are symmetrical (often the case), then such a filter is linear phase, so it delays signals of all frequencies equally, which is important in many applications. It is also straightforward to avoid overflow in an FIR filter. The main disadvantage is that they may require significantly more processing and memory resources than cleverly designed IIR variants. FIR filters are generally easier to design than IIR filters - the Parks-McClellan filter design algorithm (based on the Remez algorithm) is one suitable method for designing quite good filters semi-automatically. (See Methodology.) Infinite impulse response, or IIR, filters are the digital counterpart to analog filters. Such a filter contains internal state, and the output and the next internal state are determined by a linear combination of the previous inputs and outputs (in other words, they use feedback, which FIR filters normally do not). In theory, the impulse response of such a filter never dies out completely, hence the name IIR, though in practice, this is not true given the finite resolution of computer arithmetic. IIR filters normally require less computing resources than an FIR filter of similar performance.
However, due to the feedback, high order IIR filters may have problems with instability, arithmetic overflow, and limit cycles, and require careful design to avoid such pitfalls. Additionally, since the phase shift is inherently a non-linear function of frequency, the time delay through such a filter is frequency-dependent, which can be a problem in many situations. 2nd order IIR filters are often called 'biquads' and a common implementation of higher order filters is to cascade biquads. A useful reference for computing biquad coefficients is the RBJ Audio EQ Cookbook. Sample rate Unless the sample rate is fixed by some outside constraint, selecting a suitable sample rate is an important design decision. A high rate will require more in terms of computational resources, but less in terms of anti-aliasing filters. Interference and beating with other signals in the system may also be an issue. Anti-aliasing For any digital filter design, it is crucial to analyze and avoid aliasing effects. Often, this is done by adding analog anti-aliasing filters at the input and output, thus avoiding any frequency component above the Nyquist frequency. The complexity (i.e., steepness) of such filters depends on the required signal-to-noise ratio and the ratio between the sampling rate and the highest frequency of the signal. Theoretical basis Parts of the design problem relate to the fact that certain requirements are described in the frequency domain while others are expressed in the time domain and that these may conflict. For example, it is not possible to obtain a filter which has both an arbitrary impulse response and arbitrary frequency function. Other effects which refer to relations between the time and frequency domain are The uncertainty principle between the time and frequency domains The variance extension theorem The asymptotic behaviour of one domain versus discontinuities in the other The uncertainty principle As stated by the Gabor limit, an uncertainty principle, the product of the width of the frequency function and the width of the impulse response cannot be smaller than a specific constant. This implies that if a specific frequency function is requested, corresponding to a specific frequency width, the minimum width of the filter in the signal domain is set. Vice versa, if the maximum width of the response is given, this determines the smallest possible width in the frequency domain. This is a typical example of contradictory requirements where the filter design process may try to find a useful compromise. The variance extension theorem Let σ_s² be the variance of the input signal and let σ_f² be the variance of the filter. The variance of the filter response, σ_r², is then given by σ_r² = σ_s² + σ_f². This means that σ_r ≥ σ_f, which implies that the localization of various features such as pulses or steps in the filter response is limited by the filter width in the signal domain. If a precise localization is requested, we need a filter of small width in the signal domain and, via the uncertainty principle, its width in the frequency domain cannot be arbitrarily small. Discontinuities versus asymptotic behaviour Let f(t) be a function and let F(ω) be its Fourier transform. There is a theorem which states that if the first derivative of F which is discontinuous has order n, then f has an asymptotic decay like 1/t^(n+1). A consequence of this theorem is that the frequency function of a filter should be as smooth as possible to allow its impulse response to have a fast decay, and thereby a short width.
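The time-frequency tradeoff described above can be checked numerically. The following is a minimal Python sketch, assuming NumPy and SciPy are available; the sampling rate, cutoff frequency and filter lengths are arbitrary illustrative values rather than anything prescribed by this article. It designs two windowed-sinc low-pass filters and shows that the shorter impulse response produces a much wider transition band:

import numpy as np
from scipy import signal

fs = 8000.0      # sampling rate in Hz (illustrative value)
cutoff = 1000.0  # low-pass cutoff in Hz (illustrative value)

# Two FIR low-pass filters of very different lengths.
short_taps = signal.firwin(21, cutoff, window="hamming", fs=fs)
long_taps = signal.firwin(201, cutoff, window="hamming", fs=fs)

for name, taps in (("21 taps", short_taps), ("201 taps", long_taps)):
    w, h = signal.freqz(taps, worN=4096, fs=fs)
    mag = np.abs(h)
    # Transition band measured between the -1 dB and -40 dB points.
    f_edge = w[mag > 10 ** (-1 / 20)][-1]
    f_stop = w[mag > 10 ** (-40 / 20)][-1]
    print(f"{name}: transition band is roughly {f_stop - f_edge:.0f} Hz wide")

For window-based designs the transition width scales roughly as the inverse of the number of taps, which is the practical face of the uncertainty relation discussed above: shrinking the filter in the signal domain widens it in the frequency domain.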
Methodology One common method for designing FIR filters is the Parks-McClellan filter design algorithm, based on the Remez exchange algorithm. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of N coefficients that minimize the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as you can get to the desired response given that you can use only N coefficients. This method is particularly easy in practice and at least one text includes a program that takes the desired filter and N and returns the optimum coefficients. One possible drawback to filters designed this way is that they contain many small ripples in the passband(s), since such a filter minimizes the peak error. Another method for finding a discrete FIR filter is the filter optimization described in Knutsson et al., which minimizes the integral of the square of the error, instead of its maximum value. In its basic form this approach requires that an ideal frequency function of the filter, F_I, is specified together with a frequency weighting function W and a set of coordinates in the signal domain where the filter coefficients are located. An error function is defined as ε = ‖ W · (F_I − F{f}) ‖², where f is the discrete filter and F{f} is its discrete-time Fourier transform defined on the specified set of coordinates. The norm used here is, formally, the usual norm on L² spaces. This means that ε measures the deviation between the requested frequency function of the filter, F_I, and the actual frequency function of the realized filter, F{f}. However, the deviation is also subject to the weighting function W before the error function is computed. Once the error function is established, the optimal filter is given by the coefficients which minimize ε. This can be done by solving the corresponding least squares problem. In practice, the L² norm has to be approximated by means of a suitable sum over discrete points in the frequency domain. In general, however, these points should be significantly more than the number of coefficients in the signal domain to obtain a useful approximation. Simultaneous optimization in both domains The previous method can be extended to include an additional error term related to a desired filter impulse response in the signal domain, with a corresponding weighting function. The ideal impulse response can be chosen independently of the ideal frequency function and is in practice used to limit the effective width and to remove ringing effects of the resulting filter in the signal domain. This is done by choosing a narrow ideal filter impulse response function, e.g., an impulse, and a weighting function which grows fast with the distance from the origin, e.g., the distance squared. The optimal filter can still be calculated by solving a simple least squares problem and the resulting filter is then a "compromise" which has a total optimal fit to the ideal functions in both domains. An important parameter is the relative strength of the two weighting functions which determines in which domain it is more important to have a good fit relative to the ideal function. See also Digital filter Prototype filter Finite impulse response#Filter design References Bibliography External links An extensive list of filter design articles and software at Circuit Sage A list of digital filter design software at dspGuru Analog Filter Design Demystified Yehar's digital sound processing tutorial for the braindead!
This paper explains simply (among other topics) filter design theory and gives some examples Digital signal processing Filter theory Signal processing filter
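To make the two design approaches in the Methodology section above concrete, here is a minimal Python sketch assuming SciPy is available; the band edges, weights and filter order are arbitrary example values. It compares an equiripple (minimax) design computed with the Parks-McClellan/Remez algorithm against a weighted least-squares design of the same order:

import numpy as np
from scipy import signal

fs = 2.0       # normalized sampling rate, so the Nyquist frequency is 1.0
numtaps = 73   # odd length, as required by the least-squares routine

# Low-pass specification: passband 0-0.2, stopband 0.3-1.0, stopband weight 10.
pm_taps = signal.remez(numtaps, [0, 0.2, 0.3, 1.0], [1, 0],
                       weight=[1, 10], fs=fs)    # minimax / equiripple
ls_taps = signal.firls(numtaps, [0, 0.2, 0.3, 1.0], [1, 1, 0, 0],
                       weight=[1, 10], fs=fs)    # weighted least squares

for name, taps in (("Parks-McClellan", pm_taps), ("Least squares", ls_taps)):
    w, h = signal.freqz(taps, worN=8192, fs=fs)
    stop = np.abs(h)[w >= 0.3]
    print(f"{name}: peak stopband level {20 * np.log10(stop.max()):.1f} dB, "
          f"mean stopband level {20 * np.log10(stop.mean()):.1f} dB")

The equiripple filter minimizes the worst-case (peak) deviation while the least-squares filter minimizes the error energy, so each design wins by its own criterion; this mirrors the trade-off between the two methods described above.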
Filter design
[ "Chemistry", "Engineering" ]
3,350
[ "Telecommunications engineering", "Filters", "Signal processing filter", "Filter theory" ]
173,407
https://en.wikipedia.org/wiki/Analog%20sampled%20filter
An analog sampled filter is an electronic filter that is a hybrid between an analog and a digital filter. The input is an analog signal, and sample values are usually stored in capacitors. The time domain is discrete, however. Distinct analog samples are shifted through an array of holding capacitors as in a bucket brigade. Analog adders and amplifiers do the arithmetic in the signal domain, just as in an analog computer. Note that these filters are subject to aliasing phenomena just like a digital filter, and anti-aliasing filters will usually be required. Companies such as Linear Technology and Maxim produce integrated circuits that implement this functionality. Filters up to the 8th order may be implemented using a single chip. Some are fully configurable; some are pre-configured, usually as low-pass filters. Due to the high filter order that can be achieved in an easy and stable manner, single chip analog sampled filters are often used for implementing anti-aliasing filters for digital filters. The analog sampled filter will in its turn need yet another anti-aliasing filter, but this can often be implemented as a simple 1st order low-pass analog filter consisting of one series resistor and one capacitor to ground. Linear filters Electronic circuits
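As a rough functional model (not a circuit simulation), the bucket-brigade arrangement described above behaves like a discrete-time FIR filter whose sample values are continuous (analog) quantities. The following minimal Python sketch rests on that assumption, and the tap gains are hypothetical values chosen purely for illustration:

# Functional model of a 4-stage analog sampled (bucket-brigade) filter:
# amplitudes are analog-valued, but time is discrete, so aliasing still applies.
def analog_sampled_filter(samples, tap_gains):
    stages = [0.0] * len(tap_gains)   # holding capacitors, one per stage
    output = []
    for x in samples:
        stages = [x] + stages[:-1]    # shift the stored samples down the brigade
        # analog adders/amplifiers form the weighted sum of the stored samples
        output.append(sum(g * s for g, s in zip(tap_gains, stages)))
    return output

# Equal tap gains give simple moving-average (low-pass) behaviour.
print(analog_sampled_filter([1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
                            [0.25, 0.25, 0.25, 0.25]))

Because time is discrete even though the amplitudes are not, the model makes clear why such a filter still needs an anti-aliasing filter in front of it, as noted above.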
Analog sampled filter
[ "Engineering" ]
249
[ "Electronic engineering", "Electronic circuits" ]
173,416
https://en.wikipedia.org/wiki/Mathematical%20physics
Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics, known as physical mathematics. Scope There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. Classical mechanics Applying the techniques of mathematical physics to classical mechanics typically involves the rigorous, abstract, and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics (including both approaches in the presence of constraints). Both formulations are embodied in analytical mechanics and lead to an understanding of the deep interplay between the notions of symmetry and conserved quantities during the dynamical evolution of mechanical systems, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory, and quantum field theory. Moreover, they have provided multiple examples and ideas in differential geometry (e.g., several notions in symplectic geometry and vector bundles). Partial differential equations Within mathematics proper, the theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These fields were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. Quantum theory The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and, more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty. Relativity and quantum relativistic theories The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important. Statistical mechanics Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics (or its quantum version) and is closely related to the more mathematical ergodic theory and some parts of probability theory. There are increasing interactions between combinatorics and physics, in particular statistical physics.
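For concreteness, the Lagrangian and Hamiltonian reformulations mentioned above revolve around a handful of standard equations; in LaTeX form (these are the usual textbook statements, not taken from any particular source cited in this article) the Euler-Lagrange equations and Hamilton's equations read:

\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0,
\qquad
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}.

Noether's theorem then associates each continuous symmetry of the Lagrangian L with a conserved quantity, which is the interplay between symmetry and conservation referred to above.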
Usage The usage of the term "mathematical physics" is sometimes idiosyncratic. Certain parts of mathematics that initially arose from the development of physics are not, in fact, considered parts of mathematical physics, while other closely related fields are. For example, ordinary differential equations and symplectic geometry are generally viewed as purely mathematical disciplines, whereas dynamical systems and Hamiltonian mechanics belong to mathematical physics. John Herapath used the term for the title of his 1847 text on "mathematical principles of natural philosophy", the scope at that time being "the causes of heat, gaseous elasticity, gravitation, and other great phenomena of nature". Mathematical vs. theoretical physics The term "mathematical physics" is sometimes used to denote research aimed at studying and solving problems in physics or thought experiments within a mathematically rigorous framework. In this sense, mathematical physics covers a very broad academic realm distinguished only by the blending of some mathematical aspect and theoretical physics aspect. Although related to theoretical physics, mathematical physics in this sense emphasizes the mathematical rigour of the similar type as found in mathematics. On the other hand, theoretical physics emphasizes the links to observations and experimental physics, which often requires theoretical physicists (and mathematical physicists in the more general sense) to use heuristic, intuitive, or approximate arguments. Such arguments are not considered rigorous by mathematicians. Such mathematical physicists primarily expand and elucidate physical theories. Because of the required level of mathematical rigour, these researchers often deal with questions that theoretical physicists have considered to be already solved. However, they can sometimes show that the previous solution was incomplete, incorrect, or simply too naïve. Issues about attempts to infer the second law of thermodynamics from statistical mechanics are examples. Other examples concern the subtleties involved with synchronisation procedures in special and general relativity (Sagnac effect and Einstein synchronisation). The effort to put physical theories on a mathematically rigorous footing not only developed physics but also has influenced developments of some mathematical areas. For example, the development of quantum mechanics and some aspects of functional analysis parallel each other in many ways. The mathematical study of quantum mechanics, quantum field theory, and quantum statistical mechanics has motivated results in operator algebras. The attempt to construct a rigorous mathematical formulation of quantum field theory has also brought about some progress in fields such as representation theory. Prominent mathematical physicists Before Newton There is a tradition of mathematical analysis of nature that goes back to the ancient Greeks; examples include Euclid (Optics), Archimedes (On the Equilibrium of Planes, On Floating Bodies), and Ptolemy (Optics, Harmonics). Later, Islamic and Byzantine scholars built on these works, and these ultimately were reintroduced or became available to the West in the 12th century and during the Renaissance. In the first decade of the 16th century, amateur astronomer Nicolaus Copernicus proposed heliocentrism, and published a treatise on it in 1543. He retained the Ptolemaic idea of epicycles, and merely sought to simplify astronomy by constructing simpler sets of epicyclic orbits. Epicycles consist of circles upon circles. 
According to Aristotelian physics, the circle was the perfect form of motion, and was the intrinsic motion of Aristotle's fifth element—the quintessence or universal essence known in Greek as aether for the English pure air—that was the pure substance beyond the sublunary sphere, and thus was celestial entities' pure composition. The German Johannes Kepler [1571–1630], Tycho Brahe's assistant, modified Copernican orbits to ellipses, formalized in the equations of Kepler's laws of planetary motion. An enthusiastic atomist, Galileo Galilei in his 1623 book The Assayer asserted that the "book of nature is written in mathematics". His 1632 book, about his telescopic observations, supported heliocentrism. Having made use of experimentation, Galileo then refuted geocentric cosmology by refuting Aristotelian physics itself. Galileo's 1638 book Discourse on Two New Sciences established the law of equal free fall as well as the principles of inertial motion, two central concepts of what today is known as classical mechanics. By the Galilean law of inertia as well as the principle of Galilean invariance, also called Galilean relativity, for any object experiencing inertia, there is empirical justification for knowing only that it is at relative rest or relative motion—rest or motion with respect to another object. René Descartes developed a complete system of heliocentric cosmology anchored on the principle of vortex motion, Cartesian physics, whose widespread acceptance helped bring the demise of Aristotelian physics. Descartes used mathematical reasoning as a model for science, and developed analytic geometry, which in time allowed the plotting of locations in 3D space (Cartesian coordinates) and marking their progressions along the flow of time. Christiaan Huygens, a talented mathematician and physicist and older contemporary of Newton, was the first to successfully idealize a physical problem by a set of mathematical parameters in Horologium Oscillatorum (1673), and the first to fully mathematize a mechanistic explanation of an unobservable physical phenomenon in Traité de la Lumière (1690). He is thus considered a forerunner of theoretical physics and one of the founders of modern mathematical physics. Newtonian physics and post Newtonian The prevailing framework for science in the 16th and early 17th centuries was one borrowed from Ancient Greek mathematics, where geometrical shapes formed the building blocks to describe and think about space, and time was often thought as a separate entity. With the introduction of algebra into geometry, and with it the idea of a coordinate system, time and space could now be thought as axes belonging to the same plane. This essential mathematical framework is at the base of all modern physics and used in all further mathematical frameworks developed in next centuries. By the middle of the 17th century, important concepts such as the fundamental theorem of calculus (proved in 1668 by Scottish mathematician James Gregory) and finding extrema and minima of functions via differentiation using Fermat's theorem (by French mathematician Pierre de Fermat) were already known before Leibniz and Newton. Isaac Newton (1642–1727) developed calculus (although Gottfried Wilhelm Leibniz developed similar concepts outside the context of physics) and Newton's method to solve problems in mathematics and physics. He was extremely successful in his application of calculus and other methods to the study of motion. 
Newton's theory of motion, culminating in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) in 1687, modeled three Galilean laws of motion along with Newton's law of universal gravitation on a framework of absolute space—hypothesized by Newton as a physically real entity of Euclidean geometric structure extending infinitely in all directions—while presuming absolute time, supposedly justifying knowledge of absolute motion, the object's motion with respect to absolute space. The principle of Galilean invariance/relativity was merely implicit in Newton's theory of motion. Having ostensibly reduced the Keplerian celestial laws of motion as well as Galilean terrestrial laws of motion to a unifying force, Newton achieved great mathematical rigor, but with theoretical laxity. In the 18th century, the Swiss Daniel Bernoulli (1700–1782) made contributions to fluid dynamics, and vibrating strings. The Swiss Leonhard Euler (1707–1783) did special work in variational calculus, dynamics, fluid dynamics, and other areas. Also notable was the Italian-born Frenchman Joseph-Louis Lagrange (1736–1813), for work in analytical mechanics: he formulated Lagrangian mechanics and variational methods. A major contribution to the formulation of Analytical Dynamics called Hamiltonian dynamics was also made by the Irish physicist, astronomer and mathematician, William Rowan Hamilton (1805–1865). Hamiltonian dynamics has played an important role in the formulation of modern theories in physics, including field theory and quantum mechanics. The French mathematical physicist Joseph Fourier (1768–1830) introduced the notion of Fourier series to solve the heat equation, giving rise to a new approach to solving partial differential equations by means of integral transforms. Into the early 19th century, the following mathematicians in France, Germany and England contributed to mathematical physics. The French Pierre-Simon Laplace (1749–1827) made paramount contributions to mathematical astronomy and potential theory. Siméon Denis Poisson (1781–1840) worked in analytical mechanics and potential theory. In Germany, Carl Friedrich Gauss (1777–1855) made key contributions to the theoretical foundations of electricity, magnetism, mechanics, and fluid dynamics. In England, George Green (1793–1841) published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828, which in addition to its significant contributions to mathematics made early progress towards laying down the mathematical foundations of electricity and magnetism. A couple of decades ahead of Newton's publication of a particle theory of light, the Dutch Christiaan Huygens (1629–1695) developed the wave theory of light, published in 1690. By 1804, Thomas Young's double-slit experiment revealed an interference pattern, as though light were a wave, and thus Huygens's wave theory of light, as well as Huygens's inference that light waves were vibrations of the luminiferous aether, was accepted. Augustin-Jean Fresnel modeled hypothetical behavior of the aether. The English physicist Michael Faraday introduced the theoretical concept of a field—not action at a distance. Mid-19th century, the Scottish James Clerk Maxwell (1831–1879) reduced electricity and magnetism to Maxwell's electromagnetic field theory, whittled down by others to the four Maxwell's equations. Initially, optics was found consequent of Maxwell's field.
Later, radiation and then today's known electromagnetic spectrum were found also consequent of this electromagnetic field. The English physicist Lord Rayleigh [1842–1919] worked on sound. The Irishmen William Rowan Hamilton (1805–1865), George Gabriel Stokes (1819–1903) and Lord Kelvin (1824–1907) produced several major works: Stokes was a leader in optics and fluid dynamics; Kelvin made substantial discoveries in thermodynamics; Hamilton did notable work on analytical mechanics, discovering a new and powerful approach nowadays known as Hamiltonian mechanics. Very relevant contributions to this approach are due to his German colleague mathematician Carl Gustav Jacobi (1804–1851) in particular referring to canonical transformations. The German Hermann von Helmholtz (1821–1894) made substantial contributions in the fields of electromagnetism, waves, fluids, and sound. In the United States, the pioneering work of Josiah Willard Gibbs (1839–1903) became the basis for statistical mechanics. Fundamental theoretical results in this area were achieved by the German Ludwig Boltzmann (1844–1906). Together, these individuals laid the foundations of electromagnetic theory, fluid dynamics, and statistical mechanics. Relativistic By the 1880s, there was a prominent paradox that an observer within Maxwell's electromagnetic field measured it at approximately constant speed, regardless of the observer's speed relative to other objects within the electromagnetic field. Thus, although the observer's speed was continually lost relative to the electromagnetic field, it was preserved relative to other objects in the electromagnetic field. And yet no violation of Galilean invariance within physical interactions among objects was detected. As Maxwell's electromagnetic field was modeled as oscillations of the aether, physicists inferred that motion within the aether resulted in aether drift, shifting the electromagnetic field, explaining the observer's missing speed relative to it. The Galilean transformation had been the mathematical process used to translate the positions in one reference frame to predictions of positions in another reference frame, all plotted on Cartesian coordinates, but this process was replaced by Lorentz transformation, modeled by the Dutch Hendrik Lorentz [1853–1928]. In 1887, experimentalists Michelson and Morley failed to detect aether drift, however. It was hypothesized that motion into the aether prompted aether's shortening, too, as modeled in the Lorentz contraction. It was hypothesized that the aether thus kept Maxwell's electromagnetic field aligned with the principle of Galilean invariance across all inertial frames of reference, while Newton's theory of motion was spared. Austrian theoretical physicist and philosopher Ernst Mach criticized Newton's postulated absolute space. Mathematician Jules-Henri Poincaré (1854–1912) questioned even absolute time. In 1905, Pierre Duhem published a devastating criticism of the foundation of Newton's theory of motion. Also in 1905, Albert Einstein (1879–1955) published his special theory of relativity, newly explaining both the electromagnetic field's invariance and Galilean invariance by discarding all hypotheses concerning aether, including the existence of aether itself. Refuting the framework of Newton's theory—absolute space and absolute time—special relativity refers to relative space and relative time, whereby length contracts and time dilates along the travel pathway of an object. 
Cartesian coordinates arbitrarily used rectilinear axes. Gauss, inspired by Descartes' work, introduced curved geometry, replacing rectilinear axes with curved ones. Gauss also introduced another key tool of modern physics, the curvature. Gauss's work was limited to two dimensions. Extending it to three or more dimensions introduced a lot of complexity, with the need for the (not yet invented) tensors. It was Riemann who extended curved geometry to N dimensions. In 1908, Einstein's former mathematics professor Hermann Minkowski applied the curved geometry construction to model 3D space together with the 1D axis of time by treating the temporal axis like a fourth spatial dimension—altogether 4D spacetime—and declared the imminent demise of the separation of space and time. Einstein initially called this "superfluous learnedness", but later used Minkowski spacetime with great elegance in his general theory of relativity, extending invariance to all reference frames—whether perceived as inertial or as accelerated—and credited this to Minkowski, by then deceased. General relativity replaces Cartesian coordinates with Gaussian coordinates, and replaces Newton's claimed empty yet Euclidean space traversed instantly by Newton's vector of hypothetical gravitational force—an instant action at a distance—with a gravitational field. The gravitational field is Minkowski spacetime itself, the 4D topology of Einstein aether modeled on a Lorentzian manifold that "curves" geometrically, according to the Riemann curvature tensor. The concept of Newton's gravity, "two masses attract each other", is replaced by the geometrical argument that mass transforms the curvature of spacetime and that free-falling particles with mass move along a geodesic curve in the spacetime, in the vicinity of either mass or energy. (Riemannian geometry already existed before the 1850s, developed by mathematicians Carl Friedrich Gauss and Bernhard Riemann in search of intrinsic geometry and non-Euclidean geometry.) (Under special relativity—a special case of general relativity—even massless energy exerts gravitational effect by its mass equivalence locally "curving" the geometry of the four, unified dimensions of space and time.) Quantum Another revolutionary development of the 20th century was quantum theory, which emerged from the seminal contributions of Max Planck (1856–1947) (on black-body radiation) and Einstein's work on the photoelectric effect. In 1912, the mathematician Henri Poincaré published Sur la théorie des quanta. He introduced the first non-naïve definition of quantization in this paper. The development of early quantum physics was followed by a heuristic framework devised by Arnold Sommerfeld (1868–1951) and Niels Bohr (1885–1962), but this was soon replaced by the quantum mechanics developed by Max Born (1882–1970), Louis de Broglie (1892–1987), Werner Heisenberg (1901–1976), Paul Dirac (1902–1984), Erwin Schrödinger (1887–1961), Satyendra Nath Bose (1894–1974), and Wolfgang Pauli (1900–1958). This revolutionary theoretical framework is based on a probabilistic interpretation of states, and evolution and measurements in terms of self-adjoint operators on an infinite-dimensional vector space.
That is called Hilbert space (introduced by mathematicians David Hilbert (1862–1943), Erhard Schmidt (1876–1959) and Frigyes Riesz (1880–1956) in search of generalization of Euclidean space and study of integral equations), and rigorously defined within the axiomatic modern version by John von Neumann in his celebrated book Mathematical Foundations of Quantum Mechanics, where he built up a relevant part of modern functional analysis on Hilbert spaces, the spectral theory (introduced by David Hilbert who investigated quadratic forms with infinitely many variables. Many years later, it had been revealed that his spectral theory is associated with the spectrum of the hydrogen atom. He was surprised by this application.) in particular. Paul Dirac used algebraic constructions to produce a relativistic model for the electron, predicting its magnetic moment and the existence of its antiparticle, the positron. List of prominent contributors to mathematical physics in the 20th century Prominent contributors to the 20th century's mathematical physics include (ordered by birth date): William Thomson (Lord Kelvin) (1824–1907) Oliver Heaviside (1850–1925) Jules Henri Poincaré (1854–1912) David Hilbert (1862–1943) Arnold Sommerfeld (1868–1951) Constantin Carathéodory (1873–1950) Albert Einstein (1879–1955) Emmy Noether (1882–1935) Max Born (1882–1970) George David Birkhoff (1884–1944) Hermann Weyl (1885–1955) Satyendra Nath Bose (1894–1974) Louis de Broglie (1892–1987) Norbert Wiener (1894–1964) John Lighton Synge (1897–1995) Mário Schenberg (1914–1990) Wolfgang Pauli (1900–1958) Paul Dirac (1902–1984) Eugene Wigner (1902–1995) Andrey Kolmogorov (1903–1987) Lars Onsager (1903–1976) John von Neumann (1903–1957) Sin-Itiro Tomonaga (1906–1979) Hideki Yukawa (1907–1981) Nikolay Nikolayevich Bogolyubov (1909–1992) Subrahmanyan Chandrasekhar (1910–1995) Mark Kac (1914–1984) Julian Schwinger (1918–1994) Richard Phillips Feynman (1918–1988) Irving Ezra Segal (1918–1998) Ryogo Kubo (1920–1995) Arthur Strong Wightman (1922–2013) Chen-Ning Yang (1922–) Rudolf Haag (1922–2016) Freeman John Dyson (1923–2020) Martin Gutzwiller (1925–2014) Abdus Salam (1926–1996) Jürgen Moser (1928–1999) Michael Francis Atiyah (1929–2019) Joel Louis Lebowitz (1930–) Roger Penrose (1931–) Elliott Hershel Lieb (1932–) Yakir Aharonov (1932–) Sheldon Glashow (1932–) Steven Weinberg (1933–2021) Ludvig Dmitrievich Faddeev (1934–2017) David Ruelle (1935–) Yakov Grigorevich Sinai (1935–) Vladimir Igorevich Arnold (1937–2010) Arthur Michael Jaffe (1937–) Roman Wladimir Jackiw (1939–) Leonard Susskind (1940–) Rodney James Baxter (1940–) Michael Victor Berry (1941–) Giovanni Gallavotti (1941–) Stephen William Hawking (1942–2018) Jerrold Eldon Marsden (1942–2010) Michael C. Reed (1942–) John Michael Kosterlitz (1943–) Israel Michael Sigal (1945–) Alexander Markovich Polyakov (1945–) Barry Simon (1946–) Herbert Spohn (1946–) John Lawrence Cardy (1947–) Giorgio Parisi (1948-) Abhay Ashtekar (1949-) Edward Witten (1951–) F. 
Duncan Haldane (1951–) Ashoke Sen (1956–) Juan Martín Maldacena (1968–) See also International Association of Mathematical Physics Notable publications in mathematical physics List of mathematical physics journals Gauge theory (mathematics) Relationship between mathematics and physics Theoretical, computational and philosophical physics Notes References Further reading Generic works Textbooks for undergraduate studies , (Mathematical Methods for Physicists, Solutions for Mathematical Methods for Physicists (7th ed.), archive.org) Hassani, Sadri (2009), Mathematical Methods for Students of Physics and Related Fields, (2nd ed.), New York, Springer, eISBN 978-0-387-09504-2 Textbooks for graduate studies Specialized texts in classical physics Specialized texts in modern physics External links
Mathematical physics
[ "Physics", "Mathematics" ]
4,985
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
173,449
https://en.wikipedia.org/wiki/Trinitron
Trinitron was Sony's brand name for its line of aperture-grille-based CRTs used in television sets and computer monitors. It was one of the first television systems to enter the market since the 1950s. Constant improvement in the basic technology and attention to overall quality allowed Sony to charge a premium for Trinitron devices into the 1990s. Patent protection on the basic Trinitron design ran out in 1996, and it quickly faced a number of competitors at much lower prices. The name Trinitron was derived from trinity, meaning the union of three, and tron from electron tube, after the way that the Trinitron combined the three separate electron guns of other CRT designs into one. History Color television Color television had been demoed since the 1920s starting with John Logie Baird's system. However, it was only in the late 1940s that it was perfected by both CBS and RCA. At the time, a number of systems were being proposed that used separate red, green and blue signals (RGB), broadcast in succession. Most systems broadcast entire frames in sequence, with a colored filter (or "gel") that rotated in front of an otherwise conventional black and white television tube. Because they broadcast separate signals for the different colors, all of these systems were incompatible with existing black and white sets. Another problem was that the mechanical filter made them flicker unless very high refresh rates were used. In spite of these problems, the United States Federal Communications Commission selected a sequential-frame 144 frame/s standard from CBS as their color broadcast in 1950. RCA worked along different lines entirely, using the luminance-chrominance system. This system did not directly encode or transmit the RGB signals; instead it combined these colors into one overall brightness figure, the "luminance". Luminance closely matched the black and white signal of existing broadcasts, allowing it to be displayed on existing televisions. This was a major advantage over the mechanical systems being proposed by other groups. Color information was then separately encoded and folded into the signal as a high-frequency modification to produce a composite video signal – on a black and white television this extra information would be seen as a slight randomization of the image intensity, but the limited resolution of existing sets made this invisible in practice. On color sets the signal would be extracted, decoded back into RGB, and displayed. Although RCA's system had enormous benefits, it had not been successfully developed because it was difficult to produce the display tubes. Black and white TVs used a continuous signal and the tube could be coated with an even deposit of phosphor. With the compatible color encoding scheme originally developed by Georges Valensi in 1938, the color was changing continually along the line, which was far too fast for any sort of mechanical filter to follow. Instead, the phosphor had to be broken down into a discrete pattern of colored spots. Focusing the proper signal on each of these tiny spots was beyond the capability of electron guns of the era, and RCA's early experiments used three-tube projectors, or mirror-based systems known as "Triniscope". Shadow masks RCA eventually solved the problem of displaying the color images with their introduction of the shadow mask. The shadow mask consists of a thin sheet of steel with tiny holes photo etched into it, placed just behind the front surface of the picture tube. 
Three guns, arranged in a triangle, were all aimed at the holes. Stray electrons at the edge of the beam were cut off by the mask, creating a sharply focused spot that was small enough to hit a single colored phosphor on the screen. Since each of the guns was aimed at the hole from a slightly different angle, the spots of phosphor on the tube could be separated slightly to prevent overlap. The disadvantage of this approach was that for any given amount of gun power, the shadow mask filtered out the majority of the energy. To ensure there was no overlap of the beam on the screen, the dots had to be separated and covered perhaps 25% of its surface. This led to very dim images, requiring much greater electron beam power in order to provide a useful picture. Moreover, the system was highly dependent on the relative angles of the beams between the three guns, which required constant adjustment by the user to ensure the guns hit the correct colors. In spite of this, the technical superiority of the RCA system was overwhelming compared to the CBS system, and was selected as the new NTSC standard in 1953. The first broadcast using the new standard occurred on New Year's Day in 1954, when NBC broadcast the Tournament of Roses Parade. In spite of this early start, only a few years after regularly scheduled television broadcasting had begun, consumer uptake of color televisions was very slow to start. The dim images, constant adjustments and high costs had kept them in a niche of their own. Low consumer acceptance led to a lack of color programming, further reducing the demand for the sets in a supply and demand problem. In the United States in 1960, only 1 color set was sold for every 50 sets sold in total. Chromatron Sony had entered the television market in 1960 with the black and white TV8-301, the first non-projection type all-transistor television. A combination of factors, including its small screen size, limited its sales to niche markets. Sony engineers had been studying the color market, but the situation in Japan was even worse than the U.S.; they accounted for only 300 of the 9 million sets sold that year. But by 1961, dealers were asking the Sony sales department when a color set would be available, and the sales department put pressure on engineering in turn. Masaru Ibuka, Sony's president and co-founder, steadfastly refused to develop a system based on RCA's shadow mask design, which he considered technically deficient. He insisted on developing a unique solution. In 1961, a Sony delegation was visiting the IEEE trade show in New York City, including Ibuka, Akio Morita (Sony's other co-founder) and Nobutoshi Kihara, who was promoting his new CV-2000 home video tape recorder. This was Kihara's first trip abroad and he spent much of his time wandering the trade floor, where he came across a small booth by the small company Autometric. They were demonstrating a new type of color television based on the Chromatron tube, which used a single electron gun and a vertical grille of electrically charged thin wires instead of a shadow mask. The resulting image was far brighter than anything the RCA design could produce, and lacked the convergence problems that required constant adjustments. He quickly brought Morita and Ibuka to see the design, and Morita was "sold" on the spot. Morita arranged a deal with Paramount Pictures, who was paying for Chromatic Labs' development of the Chromatron, taking over the entire project. 
In early 1963, Senri Miyaoka was sent to Manhattan to arrange the transfer of the technology to Sony, which would lead to the closing of Chromatic Labs. He was unimpressed with the labs, describing the windowless basement as "squalor". The American team was only too happy to point out the serious flaws in the Chromatron system, telling Miyaoka that the design was hopeless. By September 1964, a 17-inch prototype had been built in Japan, but mass-production test runs were demonstrating serious problems. Sony engineers were unable to make a version of Chromatron that could be reliably mass-produced. When sets were finally made available in late 1964, they were put on the market at a competitive 198,000 yen (US$550), but cost the company over 400,000 yen (US$1111.11) to produce. Ibuka had bet the company on Chromatron and had already set up a new factory to produce them with the hopes that the production problems would be ironed out and the line would become profitable. After several thousand sets had shipped, the situation was no better, while Panasonic and Toshiba were in the process of introducing sets based on RCA licenses. By 1966, the Chromatron was breaking the company financially. Trinitron In the autumn of 1966, Ibuka finally gave in, and announced he would personally lead a search for a replacement for Chromatron. Susumu Yoshida was sent to the U.S. to look for potential licenses, and was impressed with the improvements that RCA had made in overall brightness by introducing new rare-earth phosphors on the screen. He also saw General Electric's "Porta-color" design, using three guns in a row instead of a triangle, which allowed a greater portion of the screen to be lit. His report was cause for concern in Japan, where it seemed Sony was falling ever-farther behind the U.S. designs. They might be forced to license the shadow mask system if they wanted to remain competitive. Ibuka was not willing to give up entirely, and had his 30 engineers explore a wide variety of approaches to see if they could come up with their own design. At one point, Yoshida asked Senri Miyaoka if the in-line gun arrangement used by GE could be replaced by a single gun with three cathodes; this would be more difficult to build, but be lower cost in the long run. Miyaoka built a prototype and was astonished by how well it worked, although it had focusing problems. Later that week, on Saturday, Miyaoka was summoned to Ibuka's office while he was attempting to leave work to attend his weekly cello practice. Yoshida had just informed Ibuka about his success, and the two asked Miyaoka if they could really develop the gun into a workable product. Miyaoka, anxious to leave, answered yes, excused himself, and left. The following Monday, Ibuka announced that Sony would be developing a new color television tube, based on Miyaoka's prototype. By February 1967, the focusing problems had been solved, and because there was a single gun, the focusing was achieved with permanent magnets instead of a coil, and required no manual adjustments after manufacturing. During development, Sony engineer Akio Ohgoshi introduced another modification. GE's system improved on the RCA shadow mask by replacing the small round holes with slightly larger rectangles. Since the guns were in-line, their electrons would land onto three rectangular patches instead of three smaller spots, about doubling the lit area. Ohgoshi proposed removing the mask entirely and replacing it with a series of vertical slots instead, lighting the entire screen. 
Although this would require the guns to be very carefully aligned with the phosphors on the tube in order to ensure they hit the right colors, with Miyaoka's new tube, this appeared possible. In practice, this proved easy to build but difficult to place in the tube – the fine wires were mechanically weak and tended to move when the tubes were bumped, resulting in shifting colors on the screen. This problem was solved by running several fine tungsten wires across the grille horizontally to keep the vertical wires of the grille in place. The combination of three-in-one electron gun and the replacement of the shadow mask with the aperture grille resulted in a unique and easily patentable product. In spite of Trinitron and Chromatron having no technology in common, the shared single electron gun has led to many erroneous claims that the two are very similar, or the same. Introduction, early models Officially introduced by Ibuka in April 1968, the original 12 inch Trinitron (KV-1210) had a display quality that easily surpassed any commercial set in terms of brightness, color fidelity, and simplicity of operation. The vertical wires in the aperture grille meant that the tube had to be nearly flat vertically; this gave it a unique cylindrical look. It was also all solid state, with the exception of the picture tube itself, which allowed it to be much more compact and cool running than designs like GE's Porta-color. Some larger models such as the KV-1320UB for the United Kingdom market were initially fitted with 3AT2 valves for the extra high tension (high voltage) circuitry, before being redesigned as solid state in the early 70s. Ibuka ended the press conference by claiming that 10,000 sets would be available by October, well beyond what engineering had told him was possible. Ibuka cajoled Yoshida to take over the effort of bringing the sets into production, and although Yoshida was furious at being put in charge of a task he felt was impossible, he finally accepted the assignment and successfully met the production goal. The KV-1210 was introduced in limited numbers in Japan in October as promised, and in the U.S. as the KV-1210U the following year. Early color sets intended for the UK market had a PAL decoder that was different from those invented and licensed by Telefunken of Germany, who invented the PAL color system. The decoder inside the UK-sold Sony color Trinitron sets, from the KV-1300UB to the KV-1330UB, had an NTSC decoder adapted for PAL. The decoder used a 64 microsecond delay line to store every other line, but instead of using the delay line to average out the phase of the current line and the previous line, it simply repeated the same line twice. Any phase errors could then be compensated for by using a tint control knob on the front of the set, normally unneeded on a PAL set. Reception Reviews of the Trinitron were universally positive, although they all mentioned its high cost. Sony won an Emmy Award for the Trinitron in 1973. On his 84th birthday in 1992, Ibuka claimed the Trinitron was his proudest product. New models quickly followed. Larger sizes at 19" and then 27" were introduced, as well as smaller, including a 7" portable. In the mid-1980s, a new phosphor coating was introduced that was much darker than earlier sets, giving the screens a black color when turned off, as opposed to the earlier light grey. This improved the contrast range of the picture. 
Early models were generally packaged in silver cases, but with the introduction of the darker screens, Sony also introduced new cases with a dark charcoal color, following a similar change in color taking place in the hi-fi world. This line expanded with 32", 35" and finally 40" units in the 1990s. In 1990, Sony released the first HD Trinitron TV set, for use with the Multiple sub-Nyquist sampling encoding standard. In 1980, Sony introduced the "ProFeel" line of prosumer component televisions, consisting of a range of Trinitron monitors that could be connected to standardized tuners. The original lineup consisted of the KX-20xx1 20" and KX-27xx1 27" monitors (the "xx" is an identifier, PS for Europe, HF for Japan, etc.) the VTX-100ES tuner and TXT-100G TeleText decoder. They were often used with a set of SS-X1A stereo speakers, which matched the grey boxy styling of the suite. The concept was to build a market similar to contemporary stereo equipment, where components from different vendors could be mixed to produce a complete system. However, a lack of any major third party components, along with custom connectors between the tuner and monitors, meant that systems mixing fully compatible elements were never effectively realized. They were popular high-end units, however, and found a strong following in production companies where the excellent quality picture made them effective low-cost monitors. A second series of all-black units followed in 1986, the ProFeel Pro, sporting a space-frame around the back of the trapezoidal enclosure that doubled as a carrying handle and holder for the pop-out speakers. These units were paired with the VT-X5R tuner and optionally the APM-X5A speakers. Sony also produced lines of Trinitron professional studio monitors, the PVM (Professional Video Monitor) and BVM (Broadcast Video Monitor) lines. These models were packaged in grey metal cubes with a variety of inputs that accepted practically any analog format. They originally used tubes similar to the ProFeel line, but over time, they gradually increased in resolution until the late 1990s when they offered over 900 lines. When these were cancelled as part of the wider Trinitron shutdown in 2007, professionals forced Sony to re-open two of the lines to produce the 20 and 14 inch models. Among similar products, Sony produced the KV-1311 monitor/TV combination. It accepted NTSC-compatible video from various devices as well as analog broadcast TV. Along with its other functions, it had video and audio inputs and outputs as well as a wideband sound-IF decoded output. Its exterior looks much like the monitor illustrated here, with added TV controls. By this time, Sony was well established as a supplier of reliable equipment; it was preferable to have minimal field failures instead of supporting an extensive service network for the entire United States. Sony started developing the Trinitron for computer monitor use in the late 1970s. Demand was high, so high that there were examples of third party companies removing Trinitron tubes from televisions to use as monitors. In response, Sony started development of the GDM (Graphic Display Monitor) in 1983, which offered high resolution and faster refresh rates. Sony aggressively promoted the GDM and it became a standard on high-end monitors by the late 1980s. Particularly common models include the Apple Inc. 13" model that was originally sold with the Macintosh II starting in 1987. 
Well known users also included Digital Equipment Corporation, IBM, Silicon Graphics, Sun Microsystems and others. Demand for a lower cost solution led to the CDP series. In May 1988, the high-end 20 inch DDM model (Data Display Monitor) was introduced with a maximum resolution of 2,048 by 2,048, which went on to be used in the FAA's Advanced Automation System air traffic control system. These developments meant that Sony was well placed to introduce high-definition televisions (HDTV). In April 1981, they announced the High Definition Video System (HDVS), a suite of MUSE equipment including cameras, recorders, Trinitron monitors and projection TVs. Sony shipped its 100 millionth Trinitron screen in July 1994, 25 years after it had been introduced. New uses in the computer field and the demand for higher resolution televisions to match the quality of DVD when it was introduced in 1996 led to increased sales, with another 180 million units delivered in the next decade ("Sony to stop making old-style cathode ray tube TVs", Wall Street Journal MarketWatch, 3 March 2008). End of Trinitron Sony's patent on the Trinitron display expired in 1996, after 20 years. After the expiration of Sony's Trinitron patent, manufacturers like Mitsubishi (whose monitor production is now part of NEC Display Solutions) were free to use the Trinitron design for their own product lines without a license from Sony, although they could not use the Trinitron name. Mitsubishi's version, for example, was called the Diamondtron. To some degree, the name Trinitron became a generic term referring to any similar set. Sony responded with the FD Trinitron, which used computer-controlled feedback systems to ensure sharp focus across a flat screen. Initially introduced on their 27, 32 and 36 inch models in 1998, the new tubes were offered in a variety of resolutions for different uses. The basic WEGA models supported normal 480i signals, but a larger version offered 16:9 aspect ratios. The technology was quickly applied to the entire Trinitron range, from 13 to 36 inch. High resolution versions, Hi-Scan and Super Fine Pitch, were also produced. With the introduction of the FD Trinitron, Sony also introduced a new industrial style, leaving the charcoal colored sets introduced in the 1980s for a new silver styling. Sony was not the only company producing flat screen CRTs. Other companies had already introduced high-end brands with flat-screen tubes, like Panasonic's Tau. Many other companies entered the market quickly, widely copying the new silver styling as well. The FD Trinitron was unable to regain the cachet that the Trinitron brand had previously possessed; in the 2004 Christmas season, sales increased by 5%, but only at the cost of a 75% plunge in profits after Sony was forced to lower prices to compete in the market. At the same time, the introduction of plasma televisions, and then LCD-based ones, led to the high-end market being increasingly focused on the "thin" sets. Both of these technologies have well-known problems, and for some time Sony explored a wide array of technologies that would improve upon them in the same way the Trinitron had improved upon the shadow mask. Among these experiments were organic light-emitting diodes (OLED) and the field-emission display, but in spite of considerable effort, neither of these technologies matured into competitors at the time.
Sony also introduced their Plasmatron displays, and later LCDs as well, but these had no inherent technical advantages over similar sets from other companies. From 2006, all of Sony's BRAVIA television products have been LCD displays, initially based on screens from Samsung and later from Sharp. Sony eventually ended production of the Trinitron in Japan in 2004. In 2006, Sony announced that it would no longer market or sell Trinitrons in the United States or Canada, but it would continue to sell the Trinitron in China, India, and regions of South America using tubes delivered from its Singapore plant. Production in Singapore finally ended in March 2008, only months after Sony had ended production of its rear-projection systems. Two lines of the factory were later brought back online to supply the professional market. In all, 280 million Trinitron tubes were built; at the peak, 20 million were made annually. Description Basic concept The Trinitron design incorporates two unique features: the single-gun three-cathode picture tube, and the vertically aligned aperture grille. The single gun consists of a long-necked tube with a single electrode at its base, flaring out into a horizontally-aligned rectangular shape with three rectangular cathodes inside. Each cathode is fed the amplified signal from one of the decoded RGB signals. The electrons from the cathodes are all aimed toward a single point at the back of the screen where they hit the aperture grille, a steel sheet with vertical slots cut in it. Due to the slight separation of the cathodes at the back of the tube, the three beams approach the grille at slightly different angles. When they pass through the grille they retain this angle, hitting their individual colored phosphors that are deposited in vertical stripes on the inside of the faceplate. The main purpose of the grille is to ensure that each beam strikes only the phosphor stripes for its color, much as a shadow mask does. However, unlike a shadow mask, there are essentially no obstructions along each entire phosphor stripe. Larger CRTs have a few horizontal stabilizing wires part way between top and bottom. Advantages In comparison to early shadow mask designs, the Trinitron grille cuts off much less of the beam coming from the electron guns. RCA tubes built in the 1950s cut off about 85% of the electron beam, while the grille cuts off about 25%. Improvements to the shadow mask designs continually narrowed this difference between the two designs, and by the late 1980s the difference in performance, at least theoretically, was eliminated. Another advantage of the aperture grille was that the distance between the wires remained constant vertically across the screen. In the shadow mask design, the size of the holes in the mask is defined by the required resolution of the phosphor dots on the screen, which was constant. However, the distance from the guns to the holes changed; for dots near the center of the screen, the distance was at its shortest, while at points in the corners it was at its maximum. To ensure that the guns were focused on the holes, a system known as dynamic convergence had to constantly adjust the focus point as the beam moved across the screen. In the Trinitron design, the problem was greatly simplified, requiring changes only for large screen sizes, and only on a line-by-line basis. For this reason, Trinitron systems were easier to focus than shadow mask designs, and generally had a sharper image. This was a major selling point of the Trinitron design for much of its history.
In the 1990s, new computer-controlled real-time feedback focusing systems eliminated this advantage, as well as leading to the introduction of "true flat" designs. Disadvantages Visible support or damping wires Even small changes in the alignment of the grille over the phosphors can cause the color purity to shift. Since the wires are thin, small bumps can cause the wires to shift alignment if they are not held in place. Monitors using Trinitron technology have one or more thin tungsten wires running horizontally across the grille to prevent this. Screens 15" and below have one wire located about two thirds of the way down the screen, while monitors greater than 15" have 2 wires at the one-third and two-thirds positions. These wires are less apparent or completely obscured on standard definition sets due to wider scan lines to match the lower resolution of the video being displayed. On computer monitors, where the scan lines are much closer together, the wires are often visible. This is a minor drawback of the Trinitron standard which is not shared by shadow mask CRTs. Aperture grilles are not as mechanically stable as shadow or slot masks; a tap can cause the image to briefly become distorted, even with damping/support wires. Some people may find the wires to be distracting. Anti-glare coating A polyurethane sheet coated to scatter reflections is affixed to the front of the screen, where it can be damaged. Partial list of other aperture grille brands Sharp NEC Display Solutions (NEC/Mitsubishi) "Diamondtron" Gateway, Inc. "Vivitron" (Trinitron and Diamondtron rebrand) MAG InnoVision "Technitron" (Trinitron rebrand) ViewSonic "SonicTron" (Trinitron rebrand) See also History of television References Notes Bibliography External links Trinitron: Sony's Once Unbeatable Product Sony Trinitron Explained Sony products Television technology Vacuum tube displays Cathode ray tube Japanese inventions Audiovisual introductions in 1968 1968 establishments in Japan Products and services discontinued in 2008 2008 disestablishments in Japan
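As a rough worked comparison using only the cut-off percentages quoted in the Advantages section above (figures that, as noted there, later shadow mask improvements largely erased), the relative beam transmission of the two designs can be written as:

```latex
% Fraction of the electron beam reaching the phosphors, from the quoted cut-off figures
T_{\text{shadow mask (1950s RCA)}} = 1 - 0.85 = 0.15
\qquad
T_{\text{aperture grille}} = 1 - 0.25 = 0.75
\qquad
\frac{T_{\text{aperture grille}}}{T_{\text{shadow mask}}} = \frac{0.75}{0.15} = 5
```

On those early figures, roughly five times as much beam current reached the phosphors, which is consistent with the brightness advantage claimed for the first Trinitron sets.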
Trinitron
[ "Technology" ]
5,542
[ "Information and communications technology", "Television technology" ]
173,457
https://en.wikipedia.org/wiki/9
9 (nine) is the natural number following 8 and preceding 10. Evolution of the Hindu–Arabic digit Circa 300 BC, as part of the Brahmi numerals, various Indians wrote a digit 9 similar in shape to the modern closing question mark without the bottom dot. The Kshatrapa, Andhra and Gupta started curving the bottom vertical line, coming up with a 3-look-alike. How the numbers got to their Gupta form is open to considerable debate. The Nagari continued the bottom stroke to make a circle and enclose the 3-look-alike, in much the same way that the sign @ encircles a lowercase a. As time went on, the enclosing circle became bigger and its line continued beyond the circle downwards, as the 3-look-alike became smaller. Soon, all that was left of the 3-look-alike was a squiggle. The Arabs simply connected that squiggle to the downward stroke at the middle and subsequent European change was purely cosmetic. While the shape of the glyph for the digit 9 has an ascender in most modern typefaces, in typefaces with text figures the character usually has a descender. The form of the number nine (9) could possibly have derived from the Arabic letter waw, whose isolated form (و) resembles the number 9. The modern digit resembles an inverted 6. To disambiguate the two on objects and labels that can be inverted, they are often underlined. It is sometimes handwritten with two strokes and a straight stem, resembling a raised lower-case letter q, which distinguishes it from the 6. Similarly, in seven-segment displays, the number 9 can be constructed either with a hook at the end of its stem or without one. Most LCD calculators use the former, but some VFD models use the latter. Mathematics 9 is the fourth composite number, and the first odd composite number. 9 is also a refactorable number. Casting out nines is a quick way of testing the calculations of sums, differences, products, and quotients of integers in decimal, a method known as long ago as the 12th century. If an odd perfect number exists, it will have at least nine distinct prime factors. 9 is the sum of the cubes of the first two non-zero positive integers (1³ + 2³ = 9), which makes it the first cube-sum number greater than one. A number that is 4 or 5 modulo 9 cannot be represented as the sum of three cubes. There are nine Heegner numbers, or square-free positive integers that yield an imaginary quadratic field whose ring of integers has a unique factorization, or class number of 1. Geometry A polygon with nine sides is called a nonagon. A regular nonagon can be constructed with a compass, straightedge, and angle trisector. The lowest number of squares needed for a perfect tiling of a rectangle is 9. 9 is the largest single-digit number in the decimal system. Culture and mythology Indian culture Nine is a number that appears often in Indian culture and mythology. For example, there are nine influencers attested to in Indian astrology. In the Vaisheshika branch of Hindu philosophy, there are nine universal substances or elements: Earth, Water, Air, Fire, Ether, Time, Space, Soul, and Mind. And Navaratri is a nine-day festival dedicated to the nine forms of Durga. Chinese culture Nine (九; jiǔ) is considered a good number in Chinese culture because it sounds the same as the word "long-lasting" (久; jiǔ). Nine is strongly associated with the Chinese dragon, a symbol of magic and power. There are nine forms of the dragon, it is described in terms of nine attributes, and it has nine children.
It has 117 scales – 81 yang (masculine, heavenly) and 36 yin (feminine, earthly). All three numbers are multiples of 9 (117 = 9 × 13, 81 = 9 × 9, 36 = 9 × 4). Anthropology Idioms "To go the whole nine yards" "A cat has nine lives" "To be on cloud nine" The word "K-9" is pronounced the same as canine and is used in many US police departments to denote the police dog unit. Even where "K-9" does not sound like the local word for canine, many police and military units around the world use the same designation. Someone dressed "to the nines" is dressed up as much as they can be. In North American urban culture, "nine" is a slang word for a 9mm pistol or homicide, the latter from the section of the Illinois Criminal Code covering homicide. Religion and philosophy Nine, as the largest single-digit number (in base ten), symbolizes completeness in the Baháʼí Faith. In addition, the word Baháʼ in the Abjad notation has a value of 9, and a 9-pointed star is used to symbolize the religion. The number 9 is revered in Hinduism and considered a complete, perfected and divine number because it represents the end of a cycle in the decimal system, which originated from the Indian subcontinent as early as 3000 BC. In Norse mythology, the number nine is associated with Odin, as that is how many days he hung from the world tree Yggdrasil before attaining knowledge of the runes. Nine is the number associated with Satan in LaVeyan Satanism. Anton LaVey wrote in The Satanic Rituals that this is because nine is the number of the ego since it "always returns to itself" even after being multiplied by any number. Science Chemistry The purity of chemicals (see Nine (purity)). Physiology A human pregnancy normally lasts nine months, the basis of Naegele's rule. Psychology Common terminal digit in psychological pricing. See also 9 (disambiguation) 0.999... Cloud Nine References Further reading Cecil Balmond, "Number 9, the search for the sigma code" (1998; Prestel, 2008). Integers 9 (number) Superstitions about numbers
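A minimal sketch of the casting-out-nines check mentioned in the Mathematics section above; the function names are illustrative rather than taken from any source, and the test can only detect errors, never prove a result correct:

```python
def digital_root(n: int) -> int:
    """Repeated digit sum; equals n mod 9, with 9 in place of 0 for positive multiples of 9."""
    n = abs(n)
    return 9 if n > 0 and n % 9 == 0 else n % 9

def casting_out_nines_ok(a: int, b: int, claimed_product: int) -> bool:
    """Consistency check for a * b = claimed_product: the digital roots must agree."""
    return digital_root(digital_root(a) * digital_root(b)) == digital_root(claimed_product)

# 1234 * 5678 = 7006652; a wrong answer such as 7006552 fails the check.
assert casting_out_nines_ok(1234, 5678, 7006652)
assert not casting_out_nines_ok(1234, 5678, 7006552)
```

The same comparison of digital roots works for sums and differences; for quotients, the check is applied to the equivalent multiplication (quotient times divisor plus remainder).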
9
[ "Mathematics" ]
1,238
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
173,493
https://en.wikipedia.org/wiki/Small%20Magellanic%20Cloud
The Small Magellanic Cloud (SMC) is a dwarf galaxy near the Milky Way. Classified as a dwarf irregular galaxy, the SMC contains several hundred million stars. It has a total mass of approximately 7 billion solar masses. At a distance of about 200,000 light-years, the SMC is among the nearest intergalactic neighbors of the Milky Way and is one of the most distant objects visible to the naked eye. The SMC is visible from the entire Southern Hemisphere and can be fully glimpsed low above the southern horizon from latitudes south of about 15° north. The galaxy is located across the constellation of Tucana and part of Hydrus, appearing as a faint hazy patch resembling a detached piece of the Milky Way. The SMC has an average apparent diameter of about 4.2° (8 times the Moon's) and thus covers an area of about 14 square degrees (70 times the Moon's). Since its surface brightness is very low, this deep-sky object is best seen on clear moonless nights and away from city lights. The SMC forms a pair with the Large Magellanic Cloud (LMC), which lies 20° to the east, and, like the LMC, is a member of the Local Group. It is currently a satellite of the Milky Way, but is likely a former satellite of the LMC. Observation history In the southern hemisphere, the Magellanic clouds have long been included in the lore of native inhabitants, including south sea islanders and indigenous Australians. Persian astronomer Al Sufi mentioned them in his Book of Fixed Stars, repeating a quote by the polymath Ibn Qutaybah, but had not observed them himself. European sailors may have first noticed the clouds during the Middle Ages, when they were used for navigation. Portuguese and Dutch sailors called them the Cape Clouds, a name that was retained for several centuries. During the circumnavigation of the Earth by Ferdinand Magellan in 1519–1522, they were described by Antonio Pigafetta as dim clusters of stars. In his celestial atlas Uranometria, published in 1603, Johann Bayer named the smaller cloud Nubecula Minor. In Latin, Nubecula means a little cloud. Between 1834 and 1838, John Frederick William Herschel made observations of the southern skies with his reflector from the Royal Observatory. While observing the Nubecula Minor, he described it as a cloudy mass of light with an oval shape and a bright center. Within the area of this cloud he catalogued a concentration of 37 nebulae and clusters. In 1891, Harvard College Observatory opened an observing station at Arequipa in Peru. Between 1893 and 1906, under the direction of Solon Bailey, the telescope at this site was used to survey photographically both the Large and Small Magellanic Clouds. Henrietta Swan Leavitt, an astronomer at the Harvard College Observatory, used the plates from Arequipa to study the variations in relative luminosity of stars in the SMC. In 1908, the results of her study were published; they showed that a type of variable star called a "cluster variable", later called a Cepheid variable after the prototype star Delta Cephei, exhibited a definite relationship between the variability period and the star's apparent brightness. Leavitt realized that since all the stars in the SMC are roughly the same distance from Earth, this result implied that there is a similar relationship between period and absolute brightness. This important period-luminosity relation allowed the distance to any other Cepheid variable to be estimated in terms of the distance to the SMC.
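The calibration logic described above can be summarized with two standard relations; the linear form of the period-luminosity ("Leavitt") law shown here is a modern schematic with unspecified constants, not the exact formulation Leavitt or Hertzsprung used:

```latex
% Period-luminosity relation (schematic), period P in days, alpha and beta to be calibrated
M = \alpha \, \log_{10} P + \beta
% Distance modulus relating apparent magnitude m, absolute magnitude M, and distance d in parsecs
m - M = 5 \, \log_{10} d - 5
\quad\Longrightarrow\quad
d = 10^{\,(m - M + 5)/5} \ \text{pc}
```

Because all SMC Cepheids lie at essentially the same distance, their apparent magnitudes trace the relative form of the relation directly; fixing its zero point then requires Cepheids whose distances are known independently, as described next.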
She hoped a few Cepheid variables could be found close enough to Earth so that their parallax, and hence distance from Earth, could be measured. This soon happened, allowing Cepheid variables to be used as standard candles, facilitating many astronomical discoveries. Using this period-luminosity relation, in 1913 the distance to the SMC was first estimated by Ejnar Hertzsprung. First he measured thirteen nearby Cepheid variables to find the absolute magnitude of a variable with a period of one day. By comparing this to the periodicity of the variables as measured by Leavitt, he was able to estimate a distance of 10,000 parsecs (30,000 light years) between the Sun and the SMC. This later proved to be a gross underestimate of the true distance, but it did demonstrate the potential usefulness of this technique. Measurements with the Hubble Space Telescope, announced in 2006, suggest that either the Large and Small Magellanic Clouds may be moving too fast to be orbiting the Milky Way, or that the Milky Way Galaxy is more massive than was thought. Features The SMC contains a central bar structure, and astronomers speculate that it was once a barred spiral galaxy that was disrupted by the Milky Way to become somewhat irregular. There is a bridge of gas connecting the Small Magellanic Cloud with the Large Magellanic Cloud (LMC), which is evidence of tidal interaction between the galaxies. This bridge of gas is a star-forming site. The Magellanic Clouds have a common envelope of neutral hydrogen, indicating they have been gravitationally bound for a long time. In 2017, using Dark Energy Survey and MagLiteS data, astronomers discovered a stellar over-density associated with the Small Magellanic Cloud, probably the result of interactions between the SMC and LMC. X-ray sources The Small Magellanic Cloud contains a large and active population of X-ray binaries. Recent star formation has led to a large population of massive stars and high-mass X-ray binaries (HMXBs), which are the relics of the short-lived upper end of the initial mass function. The young stellar population and the majority of the known X-ray binaries are concentrated in the SMC's Bar. HMXB pulsars are rotating neutron stars in binary systems with Be-type (spectral type O9–B2, luminosity classes V–III) or supergiant stellar companions. Most HMXBs are of the Be type, which accounts for 70% in the Milky Way and 98% in the SMC. The Be-star equatorial disk provides a reservoir of matter that can be accreted onto the neutron star during periastron passage (most known systems have large orbital eccentricity) or during large-scale disk ejection episodes. This scenario leads to strings of X-ray outbursts with typical X-ray luminosities Lx = 10³⁶–10³⁷ erg/s, spaced at the orbital period, plus infrequent giant outbursts of greater duration and luminosity. Monitoring surveys of the SMC performed with NASA's Rossi X-ray Timing Explorer (RXTE) see X-ray pulsars in outburst at more than 10³⁶ erg/s and had counted 50 by the end of 2008. The ROSAT and ASCA missions detected many faint X-ray point sources, but the typical positional uncertainties frequently made positive identification difficult. Recent studies using XMM-Newton and Chandra have now cataloged several hundred X-ray sources in the direction of the SMC, of which perhaps half are considered likely HMXBs, and the remainder a mix of foreground stars and background AGN. No X-rays above background were observed from the Magellanic Clouds during the September 20, 1966, Nike-Tomahawk flight.
A balloon observation of the SMC from Mildura, Australia, on October 24, 1967, set an upper limit on X-ray detection. An X-ray astronomy instrument was carried aboard a Thor missile launched from Johnston Atoll on September 24, 1970, at 12:54 UTC for altitudes above 300 km, to search for the Small Magellanic Cloud. The SMC was detected as an apparently extended X-ray source in both the 1.5–12 keV and 5–50 keV ranges. The fourth Uhuru catalog lists an early X-ray source within the constellation Tucana: 4U 0115-73 (3U 0115-73, 2A 0116-737, SMC X-1). Uhuru observed the SMC on January 1, 12, 13, 16, and 17, 1971, and detected one source located at 01149-7342, which was then designated SMC X-1. Some X-ray counts were also received on January 14, 15, 18, and 19, 1971. The third Ariel 5 catalog (3A) also contains this early X-ray source within Tucana: 3A 0116-736 (2A 0116-737, SMC X-1). SMC X-1 itself is a high-mass X-ray binary (HMXB). Two additional sources detected and listed in 3A include SMC X-2 at 3A 0042-738 and SMC X-3 at 3A 0049-726. Mini Magellanic Cloud (MMC) It has been proposed by astrophysicists D. S. Mathewson, V. L. Ford and N. Visvanathan that the SMC may in fact be split in two, with a smaller section of this galaxy behind the main part of the SMC (as seen from Earth), and separated by about 30,000 ly. They suggest that this split resulted from a past interaction with the LMC, and that the two sections are still moving apart. They dubbed this smaller remnant the Mini Magellanic Cloud. In 2023, it was reported that the SMC is indeed two separate structures with distinct stellar and gaseous chemical compositions, separated by around 5 kiloparsecs. See also Large Magellanic Cloud Magellanic Clouds Objects within the Small Magellanic Cloud: NGC 265 NGC 290 NGC 346 NGC 602 References External links NASA Extragalactic Database entry on the SMC SEDS entry on the SMC SMC at ESA/Hubble Astronomy Picture of the Day 2010 January 7 The Tail of the Small Magellanic Cloud - Likely stripped from the galaxy by gravitational tides, the tail contains mostly gas, dust, and newly formed stars. Dwarf barred irregular galaxies Peculiar galaxies Low surface brightness galaxies Milky Way Subgroup Tucana NGC objects 03085 Astronomical objects known since antiquity Magellanic Clouds Local Group Hydrus Magellanic spiral galaxies
Small Magellanic Cloud
[ "Astronomy" ]
2,185
[ "Hydrus", "Tucana", "Constellations" ]
173,505
https://en.wikipedia.org/wiki/Just%20war%20theory
The just war theory (Latin: bellum iustum) is a doctrine, also referred to as a tradition, of military ethics that aims to ensure that a war is morally justifiable through a series of criteria, all of which must be met for a war to be considered just. It has been studied by military leaders, theologians, ethicists and policymakers. The criteria are split into two groups: jus ad bellum ("right to go to war") and jus in bello ("right conduct in war"). The first group of criteria concerns the morality of going to war, and the second group of criteria concerns the moral conduct within war. There have been calls for the inclusion of a third category of just war theory (jus post bellum) dealing with the morality of post-war settlement and reconstruction. The just war theory postulates the belief that war, while terrible (though less so with the right conduct), is not always the worst option. The just war theory presents a justifiable means of war, with justice being an objective of armed conflict. Important responsibilities, undesirable outcomes, or preventable atrocities may justify war. Opponents of the just war theory may either be inclined to a stricter pacifist standard (proposing that there has never been nor can there ever be a justifiable basis for war) or they may be inclined toward a more permissive nationalist standard (proposing that a war need only serve a nation's interests to be justifiable). In many cases, philosophers state that individuals do not need to be plagued by a guilty conscience if they are required to fight. A few philosophers extol the virtues of the soldier while also declaring their apprehensions about war itself. A few, such as Rousseau, argue for insurrection against oppressive rule. The historical aspect, or the "just war tradition", deals with the historical body of rules or agreements that have applied in various wars across the ages. The just war tradition also considers the writings of various philosophers and lawyers through history, and examines both their philosophical visions of war's ethical limits and whether their thoughts have contributed to the body of conventions that have evolved to guide war and warfare. In the twenty-first century there has been significant debate between traditional just war theorists, who largely support the existing law of war and develop arguments to support it, and revisionists who reject many traditional assumptions, although not necessarily advocating a change in the law. Origins Ancient Egypt A 2017 study found that the just war tradition can be traced as far back as Ancient Egypt. Egyptian ethics of war usually centered on three main ideas: the cosmological role of Egypt, the pharaoh as a divine office and executor of the will of the gods, and the superiority of the Egyptian state and population over all other states and peoples. Egyptian political theology held that the pharaoh had exclusive legitimacy in justly initiating a war, usually claimed to carry out the will of the gods. Senusret I, in the Twelfth Dynasty, claimed, "I was nursed to be a conqueror...his [Atum's] son and his protector, he gave me to conquer what he conquered." Later pharaohs also considered their sonship of the god Amun-Re as granting them absolute ability to declare war on the deity's behalf. Pharaohs often visited temples prior to initiating campaigns, where they were believed to receive their commands of war from the deities.
For example, Kamose claimed that "I went north because I was strong (enough) to attack the Asiatics through the command of Amon, the just of counsels." A stele erected by Thutmose III at the Temple of Amun at Karnak "provides an unequivocal statement of the pharaoh's divine mandate to wage war on his enemies." As the New Kingdom progressed and Egypt heightened its territorial ambitions, the invocation of just war increasingly served to justify these efforts. The universal principle of Maat, signifying order and justice, was central to the Egyptian notion of just war and was invoked to place virtually no limits on what Egypt could take, do, or use to further the ambitions of the state. India The Indian Hindu epic, the Mahabharata, offers the first written discussions of a "just war" (dharma-yuddha or "righteous war"). In it, one of the five ruling brothers (the Pandavas) asks if the suffering caused by war can ever be justified. A long discussion then ensues between the siblings, establishing criteria like proportionality (chariots cannot attack cavalry, only other chariots; no attacking people in distress), just means (no poisoned or barbed arrows), just cause (no attacking out of rage), and fair treatment of captives and the wounded. In Sikhism, the term dharamyudh describes a war that is fought for just, righteous or religious reasons, especially in defence of one's own beliefs. Though some core tenets in the Sikh religion are understood to emphasise peace and nonviolence, especially before the 1606 execution of Guru Arjan by Mughal Emperor Jahangir, military force may be justified if all peaceful means to settle a conflict have been exhausted, thus resulting in a dharamyudh. East Asia Chinese philosophy produced a massive body of work on warfare, much of it during the Zhou dynasty, especially the Warring States era. War was justified only as a last resort and only by the rightful sovereign; however, questioning the decision of the emperor concerning the necessity of a military action was not permissible. The success of a military campaign was sufficient proof that the campaign had been righteous. Japan did not develop its own doctrine of just war but between the 5th and the 7th centuries drew heavily from Chinese philosophy, and especially Confucian views. As part of the Japanese campaign to take northeastern Honshu, military action was portrayed as an effort to "pacify" the Emishi people, who were likened to "bandits" and "wild-hearted wolf cubs" and accused of invading Japan's frontier lands. Ancient Greece and Rome The notion of just war in Europe originated and was first developed in ancient Greece and then in the Roman Empire. It was Aristotle who first introduced the concept and terminology to the Hellenic world, treating war as a last resort requiring conduct that would allow the restoration of peace. Aristotle argues that the cultivation of a military is necessary and good for the purpose of self-defense, not for conquering: "The proper object of practising military training is not in order that men may enslave those who do not deserve slavery, but in order that first they may themselves avoid becoming enslaved to others" (Politics, Book 7). In ancient Rome, a "just cause" for war might include the necessity of repelling an invasion, or retaliation for pillaging or a breach of treaty. War was always potentially nefas ("wrong, forbidden"), and risked religious pollution and divine disfavor.
A "just war" (bellum iustum) thus required a ritualized declaration by the fetial priests. More broadly, conventions of war and treaty-making were part of the ius gentium, the "law of nations", the customary moral obligations regarded as innate and universal to human beings. Christian views Christian theory of the Just War begins around the time of Augustine of Hippo. The Just War theory, with some amendments, is still used by Christians today as a guide to whether or not a war can be justified. Christians may argue "Sometimes war may be necessary and right, even though it may not be good." In the case of a country that has been invaded by an occupying force, war may be the only way to restore justice. Saint Augustine Saint Augustine held that individuals should not resort immediately to violence, but God has given the sword to government for a good reason (based upon Romans 13:4). In Contra Faustum Manichaeum book 22 sections 69–76, Augustine argues that Christians, as part of a government, need not be ashamed of protecting peace and punishing wickedness when they are forced to do so by a government. Augustine asserted that was a personal and philosophical stance: "What is here required is not a bodily action, but an inward disposition. The sacred seat of virtue is the heart." Nonetheless, he asserted, peacefulness in the face of a grave wrong that could be stopped by only violence would be a sin. Defense of oneself or others could be a necessity, especially when it is authorized by a legitimate authority:They who have waged war in obedience to the divine command, or in conformity with His laws, have represented in their persons the public justice or the wisdom of government, and in this capacity have put to death wicked men; such persons have by no means violated the commandment, "Thou shalt not kill."While not breaking down the conditions necessary for war to be just, Augustine nonetheless originated the very phrase itself in his work The City of God: But, say they, the wise man will wage Just Wars. As if he would not all the rather lament the necessity of just wars, if he remembers that he is a man; for if they were not just he would not wage them, and would therefore be delivered from all wars. Augustine further taught: No war is undertaken by a good state except on behalf of good faith or for safety. J. Mark Mattox writes,In terms of the traditional notion of jus ad bellum (justice of war, that is, the circumstances in which wars can be justly fought), war is a coping mechanism for righteous sovereigns who would ensure that their violent international encounters are minimal, a reflection of the Divine Will to the greatest extent possible, and always justified. In terms of the traditional notion of jus in bello (justice in war, or the moral considerations which ought to constrain the use of violence in war), war is a coping mechanism for righteous combatants who, by divine edict, have no choice but to subject themselves to their political masters and seek to ensure that they execute their war-fighting duty as justly as possible. Isidore of Seville Isidore of Seville writes: Those wars are unjust which are undertaken without cause. For aside from vengeance or to fight off enemies no just war can be waged. Peace and Truce of God The medieval Peace of God (Latin: ) was a 10th century mass movement in Western Europe instigated by the clergy that granted immunity from violence for non-combatants. 
Starting in the 11th century, the Truce of God (Latin: Treuga Dei) involved Church rules that successfully limited when and where fighting could occur: Catholic forces (e.g. of warring barons) could not fight each other on Sundays, Thursdays, holidays, the entirety of Lent and Advent and other times, severely disrupting the conduct of wars. The 1179 Third Council of the Lateran adopted a version of it for the whole church. Saint Thomas Aquinas The just war theory of Thomas Aquinas has had a lasting impact on later generations of thinkers and was part of an emerging consensus in Medieval Europe on just war. In the 13th century Aquinas reflected in detail on peace and war. Aquinas was a Dominican friar and contemplated the teachings of the Bible on peace and war in combination with ideas from Aristotle, Plato, Socrates, Saint Augustine and other philosophers whose writings are part of the Western canon. Aquinas' views on war drew heavily on the Decretum Gratiani, a book the Italian monk Gratian had compiled with passages from the Bible. After its publication in the 12th century, the Decretum had been republished with commentary from Pope Innocent IV and the Dominican friar Raymond of Penafort. Other significant influences on Aquinas' just war theory were Alexander of Hales and Henry of Segusio. In Summa Theologica Aquinas asserted that it is not always a sin to wage war, and he set out criteria for a just war. According to Aquinas, three requirements must be met. Firstly, the war must be waged upon the command of a rightful sovereign. Secondly, the war needs to be waged for just cause, on account of some wrong the attacked have committed. Thirdly, warriors must have the right intent, namely to promote good and to avoid evil. Aquinas came to the conclusion that a just war could be offensive and that injustice should not be tolerated so as to avoid war. Nevertheless, Aquinas argued that violence must only be used as a last resort. On the battlefield, violence was only justified to the extent it was necessary. Soldiers needed to avoid cruelty and a just war was limited by the conduct of just combatants. Aquinas argued that it was only in the pursuit of justice that the good intention of a moral act could justify negative consequences, including the killing of the innocent during a war. Renaissance and Christian Humanists Various Renaissance humanists promoted pacifist views. John Colet famously preached a Lenten sermon before Henry VIII, who was preparing for a war, quoting Cicero: "Better an unjust peace rather than the justest war." Erasmus of Rotterdam wrote numerous works on peace which criticized just war theory as a smokescreen and added extra limitations, notably The Complaint of Peace and the Treatise on War (Dulce bellum inexpertis). A leading humanist writer after the Reformation was legal theorist Hugo Grotius, whose De jure belli ac pacis re-considered just war and fighting wars justly. First World War At the beginning of the First World War, a group of theologians in Germany published a manifesto that sought to justify the actions of the German government. At the British government's request, Randall Davidson, Archbishop of Canterbury, took the lead in collaborating with a large number of other religious leaders, including some with whom he had differed in the past, to write a rebuttal of the Germans' contentions. Both German and British theologians based themselves on the just war theory, each group seeking to prove that it applied to the war waged by its own side.
Contemporary Catholic doctrine The just war doctrine of the Catholic Church, found in the 1992 Catechism of the Catholic Church in paragraph 2309, lists four strict conditions for "legitimate defense by military force": The damage inflicted by the aggressor on the nation or community of nations must be lasting, grave and certain. All other means of putting an end to it must have been shown to be impractical or ineffective. There must be serious prospects of success. The use of arms must not produce evils and disorders graver than the evil to be eliminated. The Compendium of the Social Doctrine of the Church elaborates on the just war doctrine in paragraphs 500 to 501, citing the Charter of the United Nations. Pope John Paul II also spoke on the subject in an address to a group of soldiers. Russian Orthodox Church The War and Peace section in the Basis of the Social Concept of the Russian Orthodox Church is crucial for understanding the Russian Orthodox Church's attitude towards war. The document offers criteria for distinguishing between an aggressive war, which is unacceptable, and a justified war, attributing the highest moral and sacred value to military acts of bravery by a true believer who participates in a justified war. Additionally, the document considers the just war criteria as developed in Western Christianity to be acceptable to Russian Orthodoxy; therefore, the justified war theory of Western theology is also applicable to the Russian Orthodox Church. In the same document, it is stated that wars have accompanied human history since the fall of man, and according to the gospel, they will continue to accompany it. While recognizing war as evil, the Russian Orthodox Church does not prohibit its members from participating in hostilities if the security of their neighbours and the restoration of trampled justice are at stake. War is considered to be necessary but undesirable. It is also stated that the Russian Orthodox Church has had profound respect for soldiers who gave their lives to protect the life and security of their neighbours. Just war tradition The just war theory, propounded by the medieval Christian philosopher Thomas Aquinas, was developed further by legal scholars in the context of international law. Cardinal Cajetan, the jurist Francisco de Vitoria, the two Jesuit priests Luis de Molina and Francisco Suárez, as well as the humanist Hugo Grotius and the lawyer Luigi Taparelli, were most influential in the formation of a just war tradition. The just war tradition, which was well established by the 19th century, found its practical application in the Hague Peace Conferences (1899 and 1907) and in the founding of the League of Nations in 1920. After the United States Congress declared war on Germany in 1917, Cardinal James Gibbons issued a letter stating that all Catholics were to support the war because "Our Lord Jesus Christ does not stand for peace at any price... If by Pacifism is meant the teaching that the use of force is never justifiable, then, however well meant, it is mistaken, and it is hurtful to the life of our country." Armed conflicts such as the Spanish Civil War, World War II and the Cold War were, as a matter of course, judged according to the norms (as established in Aquinas' just war theory) by philosophers such as Jacques Maritain, Elizabeth Anscombe and John Finnis.
The first work dedicated specifically to just war was the 15th-century sermon De bellis justis of Stanisław of Skarbimierz (1360–1431), who justified war by the Kingdom of Poland against the Teutonic Knights. Francisco de Vitoria criticized the conquest of America by the Spanish conquistadors on the basis of just-war theory. With Alberico Gentili and Hugo Grotius, just war theory was replaced by international law theory, codified as a set of rules, which today still encompass the points commonly debated, with some modifications. Just-war theorists combine a moral abhorrence towards war with a readiness to accept that war may sometimes be necessary. The criteria of the just-war tradition act as an aid in determining whether resorting to arms is morally permissible. Just-war theories aim "to distinguish between justifiable and unjustifiable uses of organized armed forces"; they attempt "to conceive of how the use of arms might be restrained, made more humane, and ultimately directed towards the aim of establishing lasting peace and justice". The just war tradition addresses the morality of the use of force in two parts: when it is right to resort to armed force (the concern of jus ad bellum) and what is acceptable in using such force (the concern of jus in bello). In 1869 the Russian military theorist Genrikh Antonovich Leer theorized on the advantages and potential benefits of war. The Soviet leader Vladimir Lenin defined only three types of just war. But picture to yourselves a slave-owner who owned 100 slaves warring against a slave-owner who owned 200 slaves for a more "just" distribution of slaves. Clearly, the application of the term "defensive" war, or war "for the defense of the fatherland" in such a case would be historically false, and in practice would be sheer deception of the common people, of philistines, of ignorant people, by the astute slaveowners. Precisely in this way are the present-day imperialist bourgeoisie deceiving the peoples by means of "national ideology" and the term "defense of the fatherland" in the present war between slave-owners for fortifying and strengthening slavery. The anarcho-capitalist scholar Murray Rothbard (1926-1995) stated that "a just war exists when a people tries to ward off the threat of coercive domination by another people, or to overthrow an already-existing domination. A war is unjust, on the other hand, when a people try to impose domination on another people or try to retain an already-existing coercive rule over them." Jonathan Riley-Smith writes: The consensus among Christians on the use of violence has changed radically since the crusades were fought. The just war theory prevailing for most of the last two centuries—that violence is an evil that can, in certain situations, be condoned as the lesser of evils—is relatively young. Although it has inherited some elements (the criteria of legitimate authority, just cause, right intention) from the older war theory that first evolved around AD 400, it has rejected two premises that underpinned all medieval just wars, including crusades: first, that violence could be employed on behalf of Christ's intentions for mankind and could even be directly authorized by him; and second, that it was a morally neutral force that drew whatever ethical coloring it had from the intentions of the perpetrators. Criteria The just war theory has two sets of criteria, the first establishing jus ad bellum (the right to go to war), and the second establishing jus in bello (right conduct within war). 
Jus ad bellum Within just war theory, jus ad bellum comprises the norms that specify the circumstances under which going to war can be justified. Competent authority Only duly constituted public authorities may wage war. "A just war must be initiated by a political authority within a political system that allows distinctions of justice. Dictatorships (e.g. Hitler's regime) or deceptive military actions (e.g. the 1968 US bombing of Cambodia) are typically considered as violations of this criterion. The importance of this condition is key. Plainly, we cannot have a genuine process of judging a just war within a system that represses the process of genuine justice. A just war must be initiated by a political authority within a political system that allows distinctions of justice". Probability of success According to this principle, there must be good grounds for concluding that the aims of the just war are achievable. This principle emphasizes that mass violence must not be undertaken if it is unlikely to secure the just cause. This criterion is to avoid invasion for invasion's sake and links to the proportionality criterion. One cannot invade if there is no chance of actually winning. However, wars are fought with imperfect knowledge, so one must simply be able to make a logical case that one can win; there is no way to know this in advance. These criteria move the conversation from moral and theoretical grounds to practical grounds. Essentially, this is meant to encourage coalition building and win the approval of other state actors. Last resort The principle of last resort stipulates that all non-violent options must first be exhausted before the use of force can be justified. Diplomatic options, sanctions, and other non-military methods must be attempted or validly ruled out before the engagement of hostilities. Further, in regard to the amount of harm—proportionally—the principle of last resort would support using small intervention forces first and then escalating rather than starting a war with massive force such as carpet bombing or nuclear warfare. Just cause The reason for going to war needs to be just and cannot, therefore, be solely for recapturing things taken or punishing people who have done wrong; innocent life must be in imminent danger and intervention must be to protect life. A contemporary view of just cause was expressed in 1993 when the US Catholic Conference said: "Force may be used only to correct a grave, public evil, i.e., aggression or massive violation of the basic human rights of whole populations." Jus in bello Once war has begun, just war theory (jus in bello) also directs how combatants are to act: Distinction Just war conduct is governed by the principle of distinction. The acts of war should be directed towards enemy combatants, and not towards non-combatants caught in circumstances that they did not create. The prohibited acts include bombing civilian residential areas that include no legitimate military targets, committing acts of terrorism or reprisal against civilians or prisoners of war (POWs), and attacking neutral targets. Moreover, combatants are not permitted to attack enemy combatants who have surrendered, or who have been captured, or who are injured and not presenting an immediate lethal threat, or who are parachuting from disabled aircraft and are not airborne forces, or who are shipwrecked. Proportionality Just war conduct is governed by the principle of proportionality.
Combatants must make sure that the harm caused to civilians or civilian property is not excessive in relation to the concrete and direct military advantage anticipated by an attack on a legitimate military objective. This principle is meant to discern the correct balance between the restriction imposed by a corrective measure and the severity of the prohibited act. Military necessity Just war conduct is governed by the principle of military necessity. An attack or action must be intended to help in the defeat of the enemy; it must be an attack on a legitimate military objective, and the harm caused to civilians or civilian property must be proportional and not excessive in relation to the concrete and direct military advantage anticipated. Jus in bello allows for military necessity and does not favor a specific justification in allowing for counter-attack recourse. This principle is meant to limit excessive and unnecessary death and destruction. Fair treatment of prisoners of war Enemy combatants who have surrendered or who have been captured no longer pose a threat. It is therefore wrong to torture them or otherwise mistreat them. No means malum in se Combatants may not use weapons or other methods of warfare that are considered evil, such as mass rape, forcing enemy combatants to fight against their own side or using weapons whose effects cannot be controlled (e.g., nuclear/biological weapons). Ending a war: Jus post bellum In recent years, some theorists, such as Gary Bass, Louis Iasiello and Brian Orend, have proposed a third category within the just war theory. Jus post bellum is described by some scholars as a new "discipline", or as "a new category of international law currently under construction". Jus post bellum concerns justice after a war, including peace treaties, reconstruction, environmental remediation, war crimes trials, and war reparations. Jus post bellum has been added to deal with the fact that some hostile actions may take place outside a traditional battlefield. Jus post bellum governs the justice of war termination and peace agreements, as well as the prosecution of war criminals and publicly labelled terrorists. The idea has largely been added to help decide what to do with prisoners taken during battle. In a modern context, it is through government labelling and public opinion that people use jus post bellum to justify the pursuit of labelled terrorists for the safety of the state. The actual fault lies with the aggressor, who by being the aggressor forfeits the right to honourable treatment. That theory is used to justify the actions taken by anyone fighting in a war to treat prisoners outside of war. Traditionalists and Revisionists There are two competing views related to the just war theory with which scholars align: traditionalist and revisionist. The debates between these different viewpoints rest on the moral responsibilities of actors in jus in bello. Traditionalists In the just war theory as it pertains to jus in bello, traditionalist scholars hold that the two principles, jus ad bellum and jus in bello, are distinct in terms of which actors in war are morally responsible. The traditional view places accountability on leaders who start the war, while soldiers are accountable for actions breaking jus in bello.
Revisionists Revisionist scholars hold that moral responsibility for the conduct of war rests on the individual soldiers who participate in it, even if they follow the rules associated with jus in bello. Soldiers who participate in unjust wars are morally responsible. The revisionist view operates at the level of the individual rather than the collective whole. See also Appeasement Christian pacifism Cost–benefit analysis Democratic peace theory Deterrence theory Peace and conflict studies Right of conquest Moral equality of combatants References Further reading Benson, Richard. "The Just War Theory: A Traditional Catholic Moral View", The Tidings (2006). Shows the Catholic view in three points, including John Paul II's position concerning war. Blattberg, Charles. Taking War Seriously. A critique of just war theory. Brough, Michael W., John W. Lango, Harry van der Linden, eds., Rethinking the Just War Tradition (Albany, NY: SUNY Press, 2007). Discusses the contemporary relevance of just war theory. Offers an annotated bibliography of current writings on just war theory. Brunsletter, D., & D. O'Driscoll, Just War Thinkers from Cicero to the 21st Century (Routledge, 2017). Churchman, David. Why We Fight: The Origins, Nature, and Management of Human Conflict (University Press of America, 2013) online. Crawford, Neta. "Just War Theory and the US Counterterror War", Perspectives on Politics 1(1), 2003. online Elshtain, Jean Bethke, ed. Just War Theory (NYU Press, 1992) online. Evans, Mark, ed. Just War Theory: A Reappraisal (Edinburgh University Press, 2005). Fotion, Nicholas. War and Ethics (London, New York: Continuum, 2007). A defence of an updated form of just war theory. Heindel, Max. The Rosicrucian Philosophy in Questions and Answers – Volume II (The Philosophy of War, World War I reference, ed. 1918). Describes a philosophy of war and just war concepts from the point of view of his Rosicrucian Fellowship. Gutbrod, Hans. Russia's Recent Invasion of Ukraine and Just War Theory (Global Policy Journal, March 2022); applies the concept to Russia's February 2022 invasion of Ukraine. Holmes, Robert L. On War and Morality (Princeton University Press, 1989). Khawaja, Irfan. Review of Larry May, War Crimes and Just War, in Democratiya 10, an extended critique of just war theory. Kwon, David. Justice after War: Jus Post Bellum in the 21st Century (Washington, D.C.: Catholic University of America Press, 2023). MacDonald, David Roberts. Padre E. C. Crosse and 'the Devonshire Epitaph': The Astonishing Story of One Man at the Battle of the Somme (with Antecedents to Today's 'Just War' Dialogue) (Cloverdale Books, South Bend, 2007). McMahan, Jeff. "Just Cause for War", Ethics and International Affairs, 2005. Nájera, Luna. "Myth and Prophecy in Juan Ginés de Sepúlveda's Crusading "Exhortación"", in Bulletin for Spanish and Portuguese Historical Studies, 35:1 (2011). Discusses Sepúlveda's theories of war in relation to the war against the Ottoman Turks. Nardin, Terry, ed. The Ethics of War and Peace: Religious and Secular Perspectives (Princeton University Press, 1998) online. O'Donovan, Oliver. The Just War Revisited (Cambridge: Cambridge University Press, 2003). Steinhoff, Uwe. On the Ethics of War and Terrorism (Oxford: Oxford University Press, 2007). Covers the basics and some of the most controversial current debates. Walzer, Michael. Arguing about War (Yale University Press, 2004).
External links Catholic Teaching Concerning Just War at Catholicism.org "Just War" In Our Time, BBC Radio 4 discussion with John Keane and Niall Ferguson (3 June 1999) Military ethics Catholic social teaching Catholic theology and doctrine Thomas Aquinas Christianity and violence
Just war theory
[ "Biology" ]
6,500
[ "Just war theory", "Behavior", "Aggression" ]
173,512
https://en.wikipedia.org/wiki/Differentiated%20services
Differentiated services or DiffServ is a computer networking architecture that specifies a mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low-latency to critical network traffic such as voice or streaming media while providing best-effort service to non-critical services such as web traffic or file transfers. DiffServ uses a 6-bit differentiated services code point (DSCP) in the 6-bit differentiated services field (DS field) in the IP header for packet classification purposes. The DS field, together with the ECN field, replaces the outdated IPv4 TOS field. Background Modern data networks carry many different types of services, including voice, video, streaming music, web pages and email. Many of the proposed QoS mechanisms that allowed these services to co-exist were both complex and failed to scale to meet the demands of the public Internet. In December 1998, the IETF replaced the TOS and IP precedence fields in the IPv4 header with the DS field, which was later split to refer to only the top 6 bits with the ECN field in the bottom two bits. In the IPv6 header the DS field is part of the Traffic Class field where it occupies the 6 most significant bits. In the DS field, a range of eight values (class selectors) is used for backward compatibility with the former IPv4 IP precedence field. Today, DiffServ has largely supplanted TOS and other layer-3 QoS mechanisms, such as integrated services (IntServ), as the primary architecture routers use to provide QoS. Traffic management mechanisms DiffServ is a coarse-grained, class-based mechanism for traffic management. In contrast, IntServ is a fine-grained, flow-based mechanism. DiffServ relies on a mechanism to classify and mark packets as belonging to a specific class. DiffServ-aware routers implement per-hop behaviors (PHBs), which define the packet-forwarding properties associated with a class of traffic. Different PHBs may be defined to offer, for example, low-loss or low-latency service. Rather than differentiating network traffic based on the requirements of an individual flow, DiffServ operates on the principle of traffic classification, placing each data packet into one of a limited number of traffic classes. Each router on the network is then configured to differentiate traffic based on its class. Each traffic class can be managed differently, ensuring preferential treatment for higher-priority traffic on the network. The premise of Diffserv is that complicated functions such as packet classification and policing can be carried out at the edge of the network by edge routers. Since no classification and policing is required in the core routers, functionality there can then be kept simple. Core routers simply apply PHB treatment to packets based on their markings. PHB treatment is achieved by core routers using a combination of scheduling policy and queue management policy. A group of routers that implement common, administratively defined DiffServ policies are referred to as a DiffServ domain. While DiffServ does recommend a standardized set of traffic classes, the DiffServ architecture does not incorporate predetermined judgments of what types of traffic should be given priority treatment. DiffServ simply provides a framework to allow classification and differentiated treatment. The standard traffic classes (discussed below) serve to simplify interoperability between different networks and different vendors' equipment. 
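As a small illustration of the field layout described above (a sketch, not reference code from any RFC): the six DSCP bits sit above the two ECN bits in what was the TOS octet, so a DSCP value is shifted left by two when written into that byte.

```python
def build_ds_byte(dscp: int, ecn: int = 0) -> int:
    """Pack a 6-bit DSCP and a 2-bit ECN value into the 8-bit DS/ECN octet."""
    if not 0 <= dscp <= 63 or not 0 <= ecn <= 3:
        raise ValueError("DSCP must be 0-63 and ECN 0-3")
    return (dscp << 2) | ecn

def dscp_from_byte(octet: int) -> int:
    """Recover the DSCP from the DS/ECN octet by dropping the two ECN bits."""
    return (octet >> 2) & 0x3F

# Expedited Forwarding (DSCP 46) with ECN unset gives octet 0xB8 (184),
# the value older tooling often reports as "TOS 0xB8".
assert build_ds_byte(46) == 0xB8
assert dscp_from_byte(0xB8) == 46
```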
Classification and marking Network traffic entering a DiffServ domain is subjected to classification and conditioning. A traffic classifier may inspect many different parameters in incoming packets, such as source address, destination address or traffic type and assign individual packets to a specific traffic class. Traffic classifiers may honor any DiffServ markings in received packets or may elect to ignore or override those markings. For tight control over volumes and type of traffic in a given class, a network operator may choose not to honor markings at the ingress to the DiffServ domain. Traffic in each class may be further conditioned by subjecting the traffic to rate limiters, traffic policers or shapers. The per-hop behavior is determined by the DS and ECN fields in the IP header. The DS field contains the 6-bit DSCP value. Explicit Congestion Notification (ECN) occupies the least-significant 2 bits of the IPv4 TOS field and IPv6 traffic class (TC) field. In theory, a network could have up to 64 different traffic classes using the 64 available DSCP values. The DiffServ RFCs recommend, but do not require, certain encodings. This gives a network operator great flexibility in defining traffic classes. In practice, however, most networks use the following commonly defined per-hop behaviors: Default Forwarding (DF) PHB — which is typically best-effort traffic Expedited Forwarding (EF) PHB — dedicated to low-loss, low-latency traffic Assured Forwarding (AF) PHB — gives assurance of delivery under prescribed conditions Class Selector PHBs — which maintain backward compatibility with the IP precedence field. Default Forwarding A default forwarding (DF) PHB is the only required behavior. Essentially, any traffic that does not meet the requirements of any of the other defined classes uses DF. Typically, DF has best-effort forwarding characteristics. The recommended DSCP for DF is 0. Expedited Forwarding The IETF defines Expedited Forwarding (EF) behavior in . The EF PHB has the characteristics of low delay, low loss and low jitter. These characteristics are suitable for voice, video and other realtime services. EF traffic is often given strict priority queuing above all other traffic classes. Because an overload of EF traffic will cause queuing delays and affect the jitter and delay tolerances within the class, admission control, traffic policing and other mechanisms may be applied to EF traffic. The recommended DSCP for EF is 101110B (46 or 2EH). Voice Admit The IETF defines Voice Admit behavior in . The Voice Admit PHB has identical characteristics to the Expedited Forwarding PHB. However, Voice Admit traffic is also admitted by the network using a Call Admission Control (CAC) procedure. The recommended DSCP for voice admit is 101100B (44 or 2CH). Assured Forwarding The IETF defines the Assured Forwarding (AF) behavior in and . Assured forwarding allows the operator to provide assurance of delivery as long as the traffic does not exceed some subscribed rate. Traffic that exceeds the subscription rate faces a higher probability of being dropped if congestion occurs. The AF behavior group defines four separate AF classes with all traffic within one class having the same priority. Within each class, packets are given a drop precedence (high, medium or low, where higher precedence means more dropping). The combination of classes and drop precedence yields twelve separate DSCP encodings from AF11 through AF43 (see table). 
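The table referred to above is not reproduced here, but the twelve AF code points follow directly from the structure just described: the AF class occupies the top three bits of the DSCP and the drop precedence the next two. The following Python sketch (illustrative only; the helper name is not from the article) derives the standard values.

# Illustrative sketch: derive commonly used DSCP values from their bit structure.
def af_dscp(af_class, drop_precedence):
    # AF class x in 1..4, drop precedence y in 1..3  ->  DSCP = 8*x + 2*y
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return (af_class << 3) | (drop_precedence << 1)

print("EF =", 0b101110)   # 46, Expedited Forwarding
print("VA =", 0b101100)   # 44, Voice Admit
for x in range(1, 5):
    print("  ".join(f"AF{x}{y}={af_dscp(x, y)}" for y in range(1, 4)))
# Prints AF11=10 AF12=12 AF13=14 ... AF41=34 AF42=36 AF43=38.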
Some measure of priority and proportional fairness is defined between traffic in different classes. Should congestion occur between classes, the traffic in the higher class is given priority. Rather than using strict priority queuing, more balanced queue servicing algorithms such as fair queuing or weighted fair queuing are likely to be used. If congestion occurs within a class, the packets with the higher drop precedence are discarded first. Re-marking a packet is sometimes used to increase its drop precedence if a stream's bandwidth exceeds a certain threshold. For example, a stream whose rate exceeds its subscribed Committed Information Rate (CIR) may have its excess traffic re-marked with a higher AF drop precedence. This defers the decision as to whether to drop or shape that traffic to devices further downstream that encounter congestion. To prevent issues associated with tail drop, more sophisticated drop selection algorithms such as random early detection are often used. Class Selector Prior to DiffServ, IPv4 networks could use the IP precedence field in the TOS byte of the IPv4 header to mark priority traffic. The TOS octet and IP precedence were not widely used. The IETF agreed to reuse the TOS octet as the DS field for DiffServ networks, later splitting it into the DS field and ECN field. In order to maintain backward compatibility with network devices that still use the Precedence field, DiffServ defines the Class Selector PHB. The Class Selector code points are of the binary form 'xxx000'. The first three bits are the former IP precedence bits. Each IP precedence value can be mapped into a DiffServ class. IP precedence 0 maps to CS0, IP precedence 1 to CS1, and so on. If a packet is received from a non-DiffServ-aware router that used IP precedence markings, the DiffServ router can still understand the encoding as a Class Selector code point. Specific recommendations for use of Class Selector code points are given in . Configuration guidelines Detailed and specific recommendations for the use and configuration of code points are given in . Other RFCs such as have updated these recommendations. Design considerations Under DiffServ, all the policing and classifying are done at the boundaries between DiffServ domains. This means that in the core of the Internet, routers are unhindered by the complexities of collecting payment or enforcing agreements. That is, in contrast to IntServ, DiffServ requires no advance setup, no reservation, and no time-consuming end-to-end negotiation for each flow. The details of how individual routers deal with the DS field are configuration specific; therefore, it is difficult to predict end-to-end behavior. This is complicated further if a packet crosses two or more DiffServ domains before reaching its destination. From a commercial viewpoint, this means that it is impossible to sell different classes of end-to-end connectivity to end users, as one provider's Gold packet may be another's Bronze. DiffServ or any other IP-based QoS marking does not ensure the quality of the service or a specified service-level agreement (SLA). By marking the packets, the sender indicates that it wants the packets to be treated as a specific service, but there is no guarantee this happens. It is up to all the service providers and their routers in the path to ensure that their policies will take care of the packets in an appropriate fashion. 
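To make the Class Selector mapping and the splitting of the former TOS byte concrete, here is a small Python sketch. It is illustrative only (the helper names are not from the article): it splits an 8-bit value into its DSCP and ECN bits and shows that a legacy IP precedence value p corresponds to the Class Selector code point p << 3.

# Illustrative sketch: interpret the former TOS byte as DS field + ECN field.
def split_ds_byte(tos_byte):
    dscp = tos_byte >> 2      # upper six bits: the differentiated services code point
    ecn = tos_byte & 0b11     # lower two bits: Explicit Congestion Notification
    return dscp, ecn

def class_selector(ip_precedence):
    # CSn = n << 3: the old precedence bits become the top three bits of the DSCP.
    return ip_precedence << 3

# A legacy router marking IP precedence 5 produces the TOS byte 0b10100000;
# a DiffServ-aware router reads this as DSCP 40, i.e. the CS5 code point.
assert split_ds_byte(0b10100000) == (40, 0)
assert class_selector(5) == 40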
Bandwidth broker A Bandwidth Broker in the framework of DiffServ is an agent that has some knowledge of an organization's priorities and policies and allocates bandwidth with respect to those policies. In order to achieve an end-to-end allocation of resources across separate domains, the Bandwidth Broker managing a domain will have to communicate with its adjacent peers, which allows end-to-end services to be constructed out of purely bilateral agreements. DiffServ RFCs — Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers. Note that the DS field of 8 bits (the bottom two unused) in was later split into the current 6-bit DS field and a separate 2-bit ECN field. — An architecture for differentiated services. — Assured forwarding PHB group. — Differentiated services and tunnels. — Definition of differentiated services per-domain behaviors and rules for their specification. — Per hop behavior identification codes. (Obsoletes . — An expedited forwarding PHB. (Obsoletes .) — Supplemental information for the new definition of the EF PHB (expedited forwarding per-hop behavior). — New Terminology and Clarifications for Diffserv. (Updates , and .) — Configuration Guidelines for DiffServ Service Classes. — A differentiated services code point (DSCP) for capacity-admitted traffic. (Updates and .) — A Lower-Effort Per-Hop Behavior (LE PHB) for Differentiated Services. (Updates and , obsoletes .) DiffServ Management RFCs — Management information base for the differentiated services architecture. — An informal management model for differentiated services routers. — Differentiated services quality of service policy information base. See also Class of service Teletraffic engineering References Further reading External links IETF DiffServ Working Group page Cisco Whitepaper — DiffServ-The Scalable End-to-End Quality of Service Model ACM SIGCOMM'09 paper-Modeling and Understanding End-to-End Class of Service Policies in Operational Networks: proposes a practical model for extracting DiffServ policies Cisco: Implementing Quality of Service Policies with DSCP Internet architecture Internet Standards Quality of service
Differentiated services
[ "Technology" ]
2,653
[ "Internet architecture", "IT infrastructure" ]
173,513
https://en.wikipedia.org/wiki/Road%20verge
A road verge is a strip of groundcover consisting of grass or garden plants, and sometimes also shrubs and trees, located between a roadway and a sidewalk. Verges are known by dozens of other names such as grass strip, nature strip, curb strip, or park strip, the usage of which is often quite regional. Road verges are often considered public property, with maintenance usually being a municipal responsibility. Some local authorities, however, require abutting property owners to help maintain (e.g. watering, mowing, edging, trimming/pruning and weeding) their respective verge areas, as well as clean the adjunct footpaths and gutters, as a form of community work. Benefits of having road verges include visual aesthetics, increased safety and comfort of sidewalk users, protection from spray from passing vehicles, and a space for benches, bus shelters, street lights, and other public amenities. Verges are also often part of sustainability for water conservation or the management of urban runoff and water pollution and can provide useful wildlife habitat. Snow that has been ploughed off the street in colder climates often is stored in the area of the verge by default. In the British Isles, road verges serve as important habitats for a range of florae, including rare wildflowers. In the UK, around 700 different species of wildflower can be found growing on verges, including 29 of the country's 52 species of orchid. Verges can also support a wide range of animals and plants that may have been displaced from their usual grassland habitats, as the soil is not extensively fertilised and relatively undisturbed by human activity. Animals that reside on verges range from small insects and amphibians, to larger reptiles, mammals and birds, which rely on verges as a corridor connecting areas of undamaged habitat. As a result, verges may be managed by local areas to encourage biodiversity and conserve the ecosystems that rely on them. The main disadvantage of a road verge is that the right-of-way must be wider, increasing the cost of the road. In some localities, a wider verge offers opportunity for later road widening, should the traffic usage of a road demand this. For this reason, footpaths are usually sited a significant distance from the curb. Certain nutrient amounts in a verge's soil can be influenced by the amount of traffic on the road it sits beside; roads with heavier traffic tend to have more nitrate in the soil due to nitrogen compounds from air pollution leaching out of the atmosphere and into the ground. Sustainable urban and landscape design In urban and suburban areas, urban runoff from private and civic properties can be guided by grading and bioswales for rainwater harvesting collection and bioretention within the "tree-lawn" – parkway zone in rain gardens. This is done for reducing runoff of rain and domestic water: for their carrying waterborne pollution off-site into storm drains and sewer systems; and for the groundwater recharge of aquifers. In some cities, such as Santa Monica, California, city code mandates specify: Parkways, the area between the outside edge of the sidewalk and the inside edge of the curb which are a component of the Public Right of Way (PROW) – that the landscaping should require little or no irrigation and the area produce no runoff. For Santa Monica, another reason for this use of "tree-lawns" is to reduce current beach and Santa Monica Bay ocean pollution that is measurably higher at city outfalls. 
New construction and remodeling projects needing building permits require that landscape design submittals include garden design plans showing the means of compliance. In some cities and counties, such as Portland, Oregon, street and highway departments are regrading and planting rain gardens in road verges to reduce boulevard and highway runoff. This practice can be useful in areas with either independent Storm sewers or combined storm and sanitary sewers, reducing the frequency of pollution, treatment costs, and released overflows of untreated sewage into rivers and oceans during rainstorms. Rural roadsides In some countries, the road verge can be a corridor of vegetation that remains after adjacent land has been cleared. Considerable effort in supporting conservation of the remnant vegetation is prevalent in Australia, where significant tracts of land are managed as part of the roadside conservation strategies by government agencies. Gallery Terminology The term verge has many synonyms and dialectal differences. Some dialects and idiolects lack a specific term for this area, instead using a circumlocution. Terms used include: Berm: Pennsylvania, northern Indiana, Ohio, Michigan, Wisconsin, New Zealand Besidewalk Boulevard: Detroit, Michigan; North Dakota; Minnesota; Iowa; Illinois; Ohio; Wisconsin; United States Upper Midwest; Winnipeg, and western Canada; Toronto, Ontario; Markham, Ontario; Kitchener, Ontario Boulevard strip: U.S. Upper Midwest Common: New England, generally describes a large strip of grass. Also refers to park-like common-use green spaces in small town centers. Curb lawn: Kalamazoo, Michigan; Elyria, Ohio; Miami County, Ohio; Greenville, South Carolina Curb strip: New Jersey, New York, North Carolina, Florida, Ohio, Indiana, Massachusetts, Michigan, Iowa, Kansas, Nebraska, Oregon, Washington Devil strip or devilstrip: Akron, Ohio; Northeast Ohio. This term was once used more widely to refer to the space between tracks on a streetcar line, a space not wide enough to stand in as cars passed. Drivestrip or Drive Strip Extension lawn: Ann Arbor, Michigan Furniture zone, also landscape zone: a term used by urban planners, indicating its suitability for "street furniture" such as utility poles and fire hydrants, as well as trees or planters Grassplot: East Coast of the United States, Pennsylvania Governor’s Strip: Delaware Hellstrip Island strip: Long Island, New York Long acre – a traditional term for wide grassy road verges, used by grazing herds or flocks moving from place to place Median: Washington, Oregon Mow strip: SF East Bay Area Northern California Nature strip: Australia Neutral ground: U.S. Gulf states Park strip: Ohio, Utah Parking: Illinois, Iowa, Western United States Parking strip: Washington, Oregon, Utah, much of California Parkrow: Iowa, Oregon Parkway: Grand Rapids, Michigan; Greater Los Angeles; San Francisco Bay Area; West Coast of the United States; Casper, Wyoming; Ohio; Illinois; Missouri; Florida; Texas Parkway strip: Austin, Texas; Fort Collins, Colorado Planter zone: SmartCode/New Urbanist terminology Planting strip: Berkeley, California, Seattle, Washington Right-of-way: Wisconsin, Illinois Road allowance: Ottawa, Canada Road verge: Australia Roadside: Australia Shoulder Sidewalk lawn: Georgia Sidewalk plot: Virginia, Maryland, Indiana, Tennessee Sidewalk strip: California, North Carolina, Oregon, Utah, Washington Street lawn: Ohio Subway: Western New York Swale: South Florida Terrace: U.S. 
Great Lakes region, Missouri Tree bank: The Fox River Valley including Elgin, Illinois. Tree belt: Massachusetts Tree box: Washington, DC Tree lawn or treelawn: Ohio, Indiana, New York, and elsewhere Verge: UK, New Zealand, South Africa, Western Australia See also :Category:Environmental conservation Central reservation Roadside conservation Shoulder (road) Urban forestry References External links Parkway with xeric garden photographs Devil Strips – term's use and lore Urban studies and planning terminology Hydrology and urban planning Environmental design Water conservation Types of garden
Road verge
[ "Engineering", "Environmental_science" ]
1,536
[ "Environmental design", "Hydrology and urban planning", "Hydrology", "Design" ]
173,516
https://en.wikipedia.org/wiki/Far-infrared%20astronomy
Far-infrared astronomy is the branch of astronomy and astrophysics that deals with objects visible in far-infrared radiation (extending from 30 μm towards submillimeter wavelengths around 450 μm). In the far-infrared, stars are not especially bright, but emission from very cold matter (140 Kelvin or less) can be observed that is not seen at shorter wavelengths. This is due to thermal radiation of interstellar dust contained in molecular clouds. These emissions are from dust in circumstellar envelopes around numerous old red giant stars. The Bolocam Galactic Plane Survey mapped the galaxy for the first time in the far-infrared. Telescopes On 22 January 2014, European Space Agency scientists reported the detection, for the first definitive time, of water vapor on the dwarf planet, Ceres, largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." The Earth's atmosphere is opaque over most of the far-infrared, so most far-infrared astronomy is performed by satellites such as the Herschel Space Observatory, Spitzer Space Telescope, IRAS, and Infrared Space Observatory. Upper-atmosphere observations are also possible, as conducted by the airborne SOFIA telescope. Ground-based observations are limited to submillimetre wavelengths using high-altitude telescopes such as the James Clerk Maxwell Telescope, the Caltech Submillimeter Observatory, the High Elevation Antarctic Terahertz Telescope and the Submillimeter Array. See also Infrared astronomy References Infrared imaging Astronomical imaging Observational astronomy
Far-infrared astronomy
[ "Astronomy" ]
356
[ "Observational astronomy", "Astronomical sub-disciplines" ]
173,522
https://en.wikipedia.org/wiki/KHTML
KHTML is a discontinued browser engine that was developed by the KDE project. It originated as the engine of the Konqueror browser in the late 1990s, but active development ceased in 2016. It was officially discontinued in 2023. Built on the KParts framework and written in C++, KHTML had relatively good support for Web standards during its prime. Engines forked from KHTML are used by most of the browsers that are widely used today, including WebKit (Safari) and Blink (Google Chrome, Chromium, Microsoft Edge, Opera, Vivaldi and Brave). History Origins KHTML was preceded by an earlier engine called khtmlw or the KDE HTML Widget, developed by Torben Weis and Martin Jones, which implemented support for HTML 3.2, HTTP 1.0, and HTML frames, but not the DOM, CSS, or JavaScript. KHTML itself came into existence on November 4, 1998, as a fork of the khtmlw library, with some slight refactoring and the addition of Unicode support and changes to support the move to Qt 2. Waldo Bastian was among those who did the work of creating that early version of KHTML. Re-write and improvement The real work on KHTML actually started between May and October 1999, with the realization that the choice facing the project was "either do a significant effort to move KHTML forward or to use Mozilla" and with adding support for JavaScript as the highest priority. So in May 1999, Lars Knoll began doing research with an eye toward implementing the DOM specification, finally announcing on August 16, 1999 that he had checked in what amounted to a complete rewrite of the KHTML library—changing KHTML to use the standard DOM as its internal document representation. That in turn allowed the beginnings of JavaScript support to be added in October 1999, followed shortly afterwards with the integration of KJS by Harri Porten. In the closing months of 1999 and first few months of 2000, Knoll did further work with Antti Koivisto and Dirk Mueller to add CSS support and to refine and stabilize the KHTML architecture, with most of that work being completed by March 2000. Among other things, those changes enabled KHTML to become the second browser after Internet Explorer to correctly support Hebrew and Arabic and languages written right-to-left—before Mozilla had such support. KDE 2.0 was the first KDE release (on October 23, 2000) to include KHTML (as the rendering engine of the new Konqueror file and web browser, which replaced the monolithic KDE File Manager). Other modules KSVG was first developed in 2001 by Nikolas Zimmermann and Rob Buis; however, by 2003, it was decided to fork the then-current KSVG implementation into two new projects: KDOM/KSVG2 (to improve the state of DOM rendering in KHTML underneath a more formidable SVG 1.0 render state) and Kcanvas (to abstract any rendering done within khtml/ksvg2 in a single shared library, with multiple backends for it, e.g., Cairo/Qt, etc.). KSVG2 is also a part of WebKit. Sunsetting KHTML was scheduled to be removed in KDE Frameworks 6. Active development ended in 2016; after that, it received only the maintenance necessary to keep it working with updates to Frameworks 5. It was officially discontinued in 2023. 
Standards compliance The following standards are supported by the KHTML engine: HTML 4.01 HTML 5 support CSS 1 CSS 2.1 (screen and paged media) CSS 3 Selectors (fully as of KDE 3.5.6) CSS 3 Other (multiple backgrounds, box-sizing and text-shadow) PNG, MNG, JPEG, GIF graphic formats DOM 1, 2 and partially 3 ECMA-262/JavaScript 1.5 Partial Scalable Vector Graphics support Descendants KHTML and KJS were adopted by Apple in 2002 for use in the Safari web browser. Apple publishes the source code for their fork of the KHTML engine, called WebKit. In 2013, Google began development on a fork of WebKit, called Blink. See also Comparison of browser engines References External links Web Browser – the Konqueror website KHTML – KDE's HTML library – description at developer.kde.org KHTML at the KDE API Reference KHTML at the KDE git repository From KDE to WebKit: The Open Source Engine That's Here to Stay – presentation at Yahoo! office by Lars Knoll and George Staikos on December 8, 2006 (video) 1999 software Free layout engines Free software programmed in C++ KDE Frameworks KDE Platform
KHTML
[ "Technology" ]
1,014
[ "KDE Platform", "KDE Frameworks", "Computing platforms" ]
173,523
https://en.wikipedia.org/wiki/Ultraviolet%20astronomy
Ultraviolet astronomy is the observation of electromagnetic radiation at ultraviolet wavelengths between approximately 10 and 320 nanometres; shorter wavelengths—higher energy photons—are studied by X-ray astronomy and gamma-ray astronomy. Ultraviolet light is not visible to the human eye. Most of the light at these wavelengths is absorbed by the Earth's atmosphere, so observations at these wavelengths must be performed from the upper atmosphere or from space. Overview Ultraviolet line spectrum measurements (spectroscopy) are used to discern the chemical composition, densities, and temperatures of the interstellar medium, and the temperature and composition of hot young stars. UV observations can also provide essential information about the evolution of galaxies. They can be used to discern the presence of a hot white dwarf or main sequence companion in orbit around a cooler star. The ultraviolet universe looks quite different from the familiar stars and galaxies seen in visible light. Most stars are actually relatively cool objects emitting much of their electromagnetic radiation in the visible or near-infrared part of the spectrum. Ultraviolet radiation is the signature of hotter objects, typically in the early and late stages of their evolution. In the Earth's sky seen in ultraviolet light, most stars would fade in prominence. Some very young massive stars and some very old stars and galaxies, growing hotter and producing higher-energy radiation near their birth or death, would be visible. Clouds of gas and dust would block the vision in many directions along the Milky Way. Space-based solar observatories such as SDO and SOHO use ultraviolet telescopes (called AIA and EIT, respectively) to view activity on the Sun and its corona. Weather satellites such as the GOES-R series also carry telescopes for observing the Sun in ultraviolet. The Hubble Space Telescope and FUSE have been the most recent major space telescopes to view the near and far UV spectrum of the sky, though other UV instruments have flown on smaller observatories such as GALEX, as well as sounding rockets and the Space Shuttle. Pioneers in ultraviolet astronomy include George Robert Carruthers, Robert Wilson, and Charles Stuart Bowyer. Ultraviolet space telescopes - Far Ultraviolet Camera/Spectrograph on Apollo 16 (April 1972) + ESRO - TD-1A (135-286 nm; 1972–1974) - Orbiting Astronomical Observatory (#2:1968-73. 
#3:1972-1981) - Orion 1 and Orion 2 Space Observatories (#1: 200-380 nm, 1971; #2: 200-300 nm, 1973) + - Astronomical Netherlands Satellite (150-330 nm, 1974–1976) + - International Ultraviolet Explorer (115-320 nm, 1978–1996) - Astron-1 (150-350 nm, 1983–1989) - Glazar 1 and 2 on Mir (in Kvant-1, 1987–2001) - FAUST (140-180 nm, in ATLAS-1 Spacelab aboard STS-45 mission, March 1992) - EUVE (7-76 nm, 1992–2001) - FUSE (90.5-119.5 nm, 1999–2007) + - Extreme ultraviolet Imaging Telescope (on SOHO imaging Sun at 17.1, 19.5, 28.4, and 30.4 nm) + - Hubble Space Telescope (various 115-800 nm,1990-1997-) (STIS 115–1030 nm, 1997–) (WFC3 200-1700 nm, 2009–) - Swift Gamma-Ray Burst Mission (170–650 nm, 2004- ) - Hopkins Ultraviolet Telescope (flew in 1990 and 1995) - ROSAT XUV (17-210eV) (30-6 nm, 1990–1999) - Far Ultraviolet Spectroscopic Explorer (90.5-119.5 nm, 1999–2007) - Galaxy Evolution Explorer (135–280 nm, 2003–2012) - Hisaki (130-530 nm, 2013 - 2023) - Lunar-based ultraviolet telescope (LUT) (on Chang'e 3 lunar lander, 245-340  nm, 2013 -) - Astrosat (130-530 nm, 2015 -) - Colorado Ultraviolet Transit Experiment - (255-330 nm spectrograph, 2021- ) - PROBA-3 (CUTE) - (530-588 nm coronagraph, 2024- ) - Public Telescope (PST) (100-180 nm, Proposed 2015, EU funded study ) - Viewpoint-1 SpaceFab.US (200-950 nm, Launch planned 2022) See also List of ultraviolet space telescopes Ultraviolet instruments on planetary spacecraft - UVIS (Cassini) - 1997 (at Saturn from 2004 to 2017) - MASCS (MESSENGER) - 2004 (at Mercury from 2011 to 2015) - Alice (New Horizons) - 2006 (Pluto flyby in 2015) - UVS (Juno) - 2011 (at Jupiter since 2016) - IUVS (MAVEN) - 2013 (at Mars since 2014) See also References External links Astronomical imaging Astronomical sub-disciplines Astronomy
Ultraviolet astronomy
[ "Physics", "Chemistry", "Astronomy" ]
1,014
[ "Spectrum (physical sciences)", "Ultraviolet astronomy", "Electromagnetic spectrum", "Ultraviolet radiation", "Astronomical sub-disciplines" ]
173,525
https://en.wikipedia.org/wiki/Probabilistic%20method
In mathematics, the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error. This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding), and information theory. Introduction If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties. Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction. Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma. Two examples due to Erdős Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamiltonian cycles), many of the most well known proofs using this method are due to Erdős. The first example below describes one such result from 1947 that gives a proof of a lower bound for the Ramsey number . First example Suppose we have a complete graph on vertices. We wish to show (for small enough values of ) that it is possible to color the edges of the graph in two colors (say red and blue) so that there is no complete subgraph on vertices which is monochromatic (every edge colored the same color). To do so, we color the graph randomly. Color each edge independently with probability of being red and of being blue. We calculate the expected number of monochromatic subgraphs on vertices as follows: For any set of vertices from our graph, define the variable to be if every edge amongst the vertices is the same color, and otherwise. Note that the number of monochromatic -subgraphs is the sum of over all possible subsets . For any individual set , the expected value of is simply the probability that all of the edges in are the same color: (the factor of comes because there are two possible colors). This holds true for any of the possible subsets we could have chosen, i.e. ranges from to . So we have that the sum of over all is The sum of expectations is the expectation of the sum (regardless of whether the variables are independent), so the expectation of the sum (the expected number of all monochromatic -subgraphs) is Consider what happens if this value is less than . 
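In symbols, writing n for the number of vertices, k for the size of the subgraph, and S for a k-element vertex set, the computation above reads as follows (a restatement in standard notation):

\[
\Pr[S \text{ is monochromatic}] = 2 \cdot 2^{-\binom{k}{2}} = 2^{\,1-\binom{k}{2}},
\qquad
\mathbb{E}\bigl[\#\text{monochromatic } K_k \text{ subgraphs}\bigr]
= \binom{n}{k}\, 2^{\,1-\binom{k}{2}} .
\]

The argument below uses the condition \(\binom{n}{k}\, 2^{1-\binom{k}{2}} < 1\); one standard choice satisfying it is \(n = \lfloor 2^{k/2} \rfloor\) for \(k \ge 3\), which yields \(R(k,k) > 2^{k/2}\).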
Since the expected number of monochromatic -subgraphs is strictly less than , there exists a coloring satisfying the condition that the number of monochromatic -subgraphs is strictly less than . The number of monochromatic -subgraphs in this random coloring is a non-negative integer, hence it must be ( is the only non-negative integer less than ). It follows that if (which holds, for example, for and ), there must exist a coloring in which there are no monochromatic -subgraphs. By definition of the Ramsey number, this implies that must be bigger than . In particular, must grow at least exponentially with . A weakness of this argument is that it is entirely nonconstructive. Even though it proves (for example) that almost every coloring of the complete graph on vertices contains no monochromatic -subgraph, it gives no explicit example of such a coloring. The problem of finding such a coloring has been open for more than 50 years. Second example A 1959 paper of Erdős (see reference cited below) addressed the following problem in graph theory: given positive integers and , does there exist a graph containing only cycles of length at least , such that the chromatic number of is at least ? It can be shown that such a graph exists for any and , and the proof is reasonably simple. Let be very large and consider a random graph on vertices, where every edge in exists with probability . We show that with positive probability, satisfies the following two properties: Property 1. contains at most cycles of length less than . Proof. Let be the number cycles of length less than . The number of cycles of length in the complete graph on vertices is and each of them is present in with probability . Hence by Markov's inequality we have Thus for sufficiently large , property 1 holds with a probability of more than . Property 2. contains no independent set of size . Proof. Let be the size of the largest independent set in . Clearly, we have when Thus, for sufficiently large , property 2 holds with a probability of more than . For sufficiently large , the probability that a graph from the distribution has both properties is positive, as the events for these properties cannot be disjoint (if they were, their probabilities would sum up to more than 1). Here comes the trick: since has these two properties, we can remove at most vertices from to obtain a new graph on vertices that contains only cycles of length at least . We can see that this new graph has no independent set of size . can only be partitioned into at least independent sets, and, hence, has chromatic number at least . This result gives a hint as to why the computation of the chromatic number of a graph is so difficult: even when there are no local reasons (such as small cycles) for a graph to require many colors the chromatic number can still be arbitrarily large. See also Interactive proof system Las Vegas algorithm Incompressibility method Method of conditional probabilities Probabilistic proofs of non-probabilistic theorems Random graph Additional resources Probabilistic Methods in Combinatorics, MIT OpenCourseWare References Alon, Noga; Spencer, Joel H. (2000). The probabilistic method (2ed). New York: Wiley-Interscience. . J. Matoušek, J. Vondrak. The Probabilistic Method. Lecture notes. Alon, N and Krivelevich, M (2006). Extremal and Probabilistic Combinatorics Elishakoff I., Probabilistic Methods in the Theory of Structures: Random Strength of Materials, Random Vibration, and Buckling, World Scientific, Singapore, , 2017 Elishakoff I., Lin Y.K. 
and Zhu L.P., Probabilistic and Convex Modeling of Acoustically Excited Structures, Elsevier Science Publishers, Amsterdam, 1994, VIII + pp. 296; Footnotes Combinatorics Mathematical proofs method
Probabilistic method
[ "Mathematics" ]
1,542
[ "Discrete mathematics", "nan", "Combinatorics" ]
173,547
https://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton%20theorem
In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Rowan Hamilton) states that every square matrix over a commutative ring (such as the real or complex numbers or the integers) satisfies its own characteristic equation. The characteristic polynomial of an matrix is defined as , where is the determinant operation, is a variable scalar element of the base ring, and is the identity matrix. Since each entry of the matrix is either constant or linear in , the determinant of is a degree- monic polynomial in , so it can be written as By replacing the scalar variable with the matrix , one can define an analogous matrix polynomial expression, (Here, is the given matrix—not a variable, unlike —so is a constant rather than a function.) The Cayley–Hamilton theorem states that this polynomial expression is equal to the zero matrix, which is to say that that is, the characteristic polynomial is an annihilating polynomial for One use for the Cayley–Hamilton theorem is that it allows to be expressed as a linear combination of the lower matrix powers of : When the ring is a field, the Cayley–Hamilton theorem is equivalent to the statement that the minimal polynomial of a square matrix divides its characteristic polynomial. A special case of the theorem was first proved by Hamilton in 1853 in terms of inverses of linear functions of quaternions. This corresponds to the special case of certain real or complex matrices. Cayley in 1858 stated the result for and smaller matrices, but only published a proof for the case. As for matrices, Cayley stated “..., I have not thought it necessary to undertake the labor of a formal proof of the theorem in the general case of a matrix of any degree”. The general case was first proved by Ferdinand Frobenius in 1878. Examples matrices For a matrix , the characteristic polynomial is given by , and so is trivial. matrices As a concrete example, let Its characteristic polynomial is given by The Cayley–Hamilton theorem claims that, if we define then We can verify by computation that indeed, For a generic matrix, the characteristic polynomial is given by , so the Cayley–Hamilton theorem states that which is indeed always the case, evident by working out the entries of . Applications Determinant and inverse matrix For a general invertible matrix , i.e., one with nonzero determinant, −1 can thus be written as an order polynomial expression in : As indicated, the Cayley–Hamilton theorem amounts to the identity The coefficients are given by the elementary symmetric polynomials of the eigenvalues of . Using Newton identities, the elementary symmetric polynomials can in turn be expressed in terms of power sum symmetric polynomials of the eigenvalues: where is the trace of the matrix . Thus, we can express in terms of the trace of powers of . In general, the formula for the coefficients is given in terms of complete exponential Bell polynomials as In particular, the determinant of equals . 
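A quick numerical check of both the theorem and the inverse formula described above can be carried out with NumPy, which is of course not part of the article; the matrix below is chosen arbitrarily for illustration. The sketch computes the characteristic-polynomial coefficients, verifies that substituting the matrix gives the zero matrix, and recovers the inverse as a polynomial in the matrix.

# Illustrative sketch (NumPy): verify the Cayley–Hamilton theorem numerically
# and use it to express the inverse of A as a polynomial in A.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Characteristic polynomial coefficients, highest degree first.
# For this A: [1, -5, -2], i.e. p(t) = t**2 - 5*t - 2 (trace 5, determinant -2).
coeffs = np.poly(A)

# Evaluate p at the matrix A by Horner's scheme with matrix products.
p_of_A = np.zeros_like(A)
for c in coeffs:
    p_of_A = p_of_A @ A + c * np.eye(2)
assert np.allclose(p_of_A, 0.0)        # Cayley–Hamilton: p(A) is the zero matrix

# p(A) = 0 rearranges to A @ (A - 5*I) = 2*I, hence A^{-1} = (A - 5*I) / 2.
A_inv = (A - 5.0 * np.eye(2)) / 2.0
assert np.allclose(A_inv @ A, np.eye(2))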
Thus, the determinant can be written as the trace identity: Likewise, the characteristic polynomial can be written as and, by multiplying both sides by (note ), one is led to an expression for the inverse of as a trace identity, Another method for obtaining these coefficients for a general matrix, provided no root be zero, relies on the following alternative expression for the determinant, Hence, by virtue of the Mercator series, where the exponential only needs be expanded to order , since is of order , the net negative powers of automatically vanishing by the C–H theorem. (Again, this requires a ring containing the rational numbers.) Differentiation of this expression with respect to allows one to express the coefficients of the characteristic polynomial for general as determinants of matrices, Examples For instance, the first few Bell polynomials are = 1, , , and . Using these to specify the coefficients of the characteristic polynomial of a matrix yields The coefficient gives the determinant of the matrix, minus its trace, while its inverse is given by It is apparent from the general formula for cn−k, expressed in terms of Bell polynomials, that the expressions always give the coefficients of and of in the characteristic polynomial of any matrix, respectively. So, for a matrix , the statement of the Cayley–Hamilton theorem can also be written as where the right-hand side designates a matrix with all entries reduced to zero. Likewise, this determinant in the case, is now This expression gives the negative of coefficient of in the general case, as seen below. Similarly, one can write for a matrix , where, now, the determinant is , and so on for larger matrices. The increasingly complex expressions for the coefficients is deducible from Newton's identities or the Faddeev–LeVerrier algorithm. n-th power of matrix The Cayley–Hamilton theorem always provides a relationship between the powers of (though not always the simplest one), which allows one to simplify expressions involving such powers, and evaluate them without having to compute the power n or any higher powers of . As an example, for the theorem gives Then, to calculate , observe Likewise, Notice that we have been able to write the matrix power as the sum of two terms. In fact, matrix power of any order can be written as a matrix polynomial of degree at most , where is the size of a square matrix. This is an instance where Cayley–Hamilton theorem can be used to express a matrix function, which we will discuss below systematically. Matrix functions Given an analytic function and the characteristic polynomial of degree of an matrix , the function can be expressed using long division as where is some quotient polynomial and is a remainder polynomial such that . By the Cayley–Hamilton theorem, replacing by the matrix gives , so one has Thus, the analytic function of the matrix can be expressed as a matrix polynomial of degree less than . Let the remainder polynomial be Since , evaluating the function at the eigenvalues of yields This amounts to a system of linear equations, which can be solved to determine the coefficients . Thus, one has When the eigenvalues are repeated, that is for some , two or more equations are identical; and hence the linear equations cannot be solved uniquely. For such cases, for an eigenvalue with multiplicity , the first derivatives of vanish at the eigenvalue. This leads to the extra linearly independent solutions which, combined with others, yield the required equations to solve for . 
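As an illustration of the procedure just described (the article's own worked examples follow below), the next NumPy sketch computes f(A) = exp(A) for a small matrix with distinct eigenvalues by solving for the remainder polynomial r(t) = c0 + c1 t from the conditions r(λ) = f(λ) at the eigenvalues, and checks the result against an eigendecomposition. The matrix and the function are arbitrary choices made only for this example.

# Illustrative sketch (NumPy): f(A) as a low-degree polynomial in A, with the
# polynomial determined by evaluating f at the eigenvalues of A.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])     # eigenvalues 2 and 3 (distinct)
f = np.exp

lam = np.linalg.eigvals(A)
# Solve the 2x2 Vandermonde system [[1, l1], [1, l2]] @ [c0, c1] = [f(l1), f(l2)].
V = np.vander(lam, N=2, increasing=True)
c0, c1 = np.linalg.solve(V, f(lam))
f_of_A = c0 * np.eye(2) + c1 * A     # r(A), equal to f(A) by the argument above

# Cross-check via the eigendecomposition A = S diag(lam) S^{-1}.
w, S = np.linalg.eig(A)
assert np.allclose(f_of_A, S @ np.diag(np.exp(w)) @ np.linalg.inv(S))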
Finding a polynomial that passes through the points is essentially an interpolation problem, and can be solved using Lagrange or Newton interpolation techniques, leading to Sylvester's formula. For example, suppose the task is to find the polynomial representation of The characteristic polynomial is , and the eigenvalues are . Let . Evaluating at the eigenvalues, one obtains two linear equations, and . Solving the equations yields and . Thus, it follows that If, instead, the function were , then the coefficients would have been and ; hence As a further example, when considering then the characteristic polynomial is , and the eigenvalues are . As before, evaluating the function at the eigenvalues gives us the linear equations and ; the solution of which gives, and . Thus, for this case, which is a rotation matrix. Standard examples of such usage is the exponential map from the Lie algebra of a matrix Lie group into the group. It is given by a matrix exponential, Such expressions have long been known for , where the are the Pauli matrices and for , which is Rodrigues' rotation formula. For the notation, see 3D rotation group#A note on Lie algebras. More recently, expressions have appeared for other groups, like the Lorentz group , and , as well as . The group is the conformal group of spacetime, its simply connected cover (to be precise, the simply connected cover of the connected component of ). The expressions obtained apply to the standard representation of these groups. They require knowledge of (some of) the eigenvalues of the matrix to exponentiate. For (and hence for ), closed expressions have been obtained for all irreducible representations, i.e. of any spin. Algebraic number theory The Cayley–Hamilton theorem is an effective tool for computing the minimal polynomial of algebraic integers. For example, given a finite extension of and an algebraic integer which is a non-zero linear combination of the we can compute the minimal polynomial of by finding a matrix representing the -linear transformation If we call this transformation matrix , then we can find the minimal polynomial by applying the Cayley–Hamilton theorem to . Proofs The Cayley–Hamilton theorem is an immediate consequence of the existence of the Jordan normal form for matrices over algebraically closed fields, see . In this section, direct proofs are presented. As the examples above show, obtaining the statement of the Cayley–Hamilton theorem for an matrix requires two steps: first the coefficients of the characteristic polynomial are determined by development as a polynomial in of the determinant and then these coefficients are used in a linear combination of powers of that is equated to the zero matrix: The left-hand side can be worked out to an matrix whose entries are (enormous) polynomial expressions in the set of entries of , so the Cayley–Hamilton theorem states that each of these expressions equals . For any fixed value of , these identities can be obtained by tedious but straightforward algebraic manipulations. None of these computations, however, can show why the Cayley–Hamilton theorem should be valid for matrices of all possible sizes , so a uniform proof for all is needed. Preliminaries If a vector of size is an eigenvector of with eigenvalue , in other words if , then which is the zero vector since (the eigenvalues of are precisely the roots of ). This holds for all possible eigenvalues , so the two matrices equated by the theorem certainly give the same (null) result when applied to any eigenvector. 
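In symbols, with the same notation as above (p the monic characteristic polynomial of A with coefficients c_i, and v an eigenvector with eigenvalue λ), the observation just made is simply

\[
p(A)\,v
= \bigl(A^{n} + c_{n-1}A^{n-1} + \cdots + c_1 A + c_0 I\bigr)\,v
= \bigl(\lambda^{n} + c_{n-1}\lambda^{n-1} + \cdots + c_1 \lambda + c_0\bigr)\,v
= p(\lambda)\,v = 0,
\]

since every eigenvalue of A is a root of p.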
Now if admits a basis of eigenvectors, in other words if is diagonalizable, then the Cayley–Hamilton theorem must hold for , since two matrices that give the same values when applied to each element of a basis must be equal. Consider now the function which maps matrices to matrices given by the formula , i.e. which takes a matrix and plugs it into its own characteristic polynomial. Not all matrices are diagonalizable, but for matrices with complex coefficients many of them are: the set of diagonalizable complex square matrices of a given size is dense in the set of all such square matrices (for a matrix to be diagonalizable it suffices for instance that its characteristic polynomial not have any multiple roots). Now viewed as a function (since matrices have entries) we see that this function is continuous. This is true because the entries of the image of a matrix are given by polynomials in the entries of the matrix. Since and since the set is dense, by continuity this function must map the entire set of matrices to the zero matrix. Therefore, the Cayley–Hamilton theorem is true for complex numbers, and must therefore also hold for - or -valued matrices. While this provides a valid proof, the argument is not very satisfactory, since the identities represented by the theorem do not in any way depend on the nature of the matrix (diagonalizable or not), nor on the kind of entries allowed (for matrices with real entries the diagonalizable ones do not form a dense set, and it seems strange one would have to consider complex matrices to see that the Cayley–Hamilton theorem holds for them). We shall therefore now consider only arguments that prove the theorem directly for any matrix using algebraic manipulations only; these also have the benefit of working for matrices with entries in any commutative ring. There is a great variety of such proofs of the Cayley–Hamilton theorem, of which several will be given here. They vary in the amount of abstract algebraic notions required to understand the proof. The simplest proofs use just those notions needed to formulate the theorem (matrices, polynomials with numeric entries, determinants), but involve technical computations that render somewhat mysterious the fact that they lead precisely to the correct conclusion. It is possible to avoid such details, but at the price of involving more subtle algebraic notions: polynomials with coefficients in a non-commutative ring, or matrices with unusual kinds of entries. Adjugate matrices All proofs below use the notion of the adjugate matrix of an matrix , the transpose of its cofactor matrix. This is a matrix whose coefficients are given by polynomial expressions in the coefficients of (in fact, by certain determinants), in such a way that the following fundamental relations hold, These relations are a direct consequence of the basic properties of determinants: evaluation of the entry of the matrix product on the left gives the expansion by column of the determinant of the matrix obtained from by replacing column by a copy of column , which is if and zero otherwise; the matrix product on the right is similar, but for expansions by rows. Being a consequence of just algebraic expression manipulation, these relations are valid for matrices with entries in any commutative ring (commutativity must be assumed for determinants to be defined in the first place). This is important to note here, because these relations will be applied below for matrices with non-numeric entries such as polynomials. 
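Written out, with M standing for any square matrix over the commutative ring in question, the fundamental relations of the adjugate are

\[
M \,\operatorname{adj}(M) \;=\; \operatorname{adj}(M)\, M \;=\; \det(M)\, I_n .
\]

Applied to \(M = tI_n - A\), whose determinant is the characteristic polynomial \(p(t)\), this yields \((tI_n - A)\operatorname{adj}(tI_n - A) = \operatorname{adj}(tI_n - A)(tI_n - A) = p(t)\,I_n\), the identity on which the proofs below rest.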
A direct algebraic proof This proof uses just the kind of objects needed to formulate the Cayley–Hamilton theorem: matrices with polynomials as entries. The matrix whose determinant is the characteristic polynomial of is such a matrix, and since polynomials form a commutative ring, it has an adjugate Then, according to the right-hand fundamental relation of the adjugate, one has Since is also a matrix with polynomials in as entries, one can, for each , collect the coefficients of in each entry to form a matrix of numbers, such that one has (The way the entries of are defined makes clear that no powers higher than occur). While this looks like a polynomial with matrices as coefficients, we shall not consider such a notion; it is just a way to write a matrix with polynomial entries as a linear combination of constant matrices, and the coefficient has been written to the left of the matrix to stress this point of view. Now, one can expand the matrix product in our equation by bilinearity: Writing one obtains an equality of two matrices with polynomial entries, written as linear combinations of constant matrices with powers of as coefficients. Such an equality can hold only if in any matrix position the entry that is multiplied by a given power is the same on both sides; it follows that the constant matrices with coefficient in both expressions must be equal. Writing these equations then for from down to 0, one finds Finally, multiply the equation of the coefficients of from the left by , and sum up: The left-hand sides form a telescoping sum and cancel completely; the right-hand sides add up to : This completes the proof. A proof using polynomials with matrix coefficients This proof is similar to the first one, but tries to give meaning to the notion of polynomial with matrix coefficients that was suggested by the expressions occurring in that proof. This requires considerable care, since it is somewhat unusual to consider polynomials with coefficients in a non-commutative ring, and not all reasoning that is valid for commutative polynomials can be applied in this setting. Notably, while arithmetic of polynomials over a commutative ring models the arithmetic of polynomial functions, this is not the case over a non-commutative ring (in fact there is no obvious notion of polynomial function in this case that is closed under multiplication). So when considering polynomials in with matrix coefficients, the variable must not be thought of as an "unknown", but as a formal symbol that is to be manipulated according to given rules; in particular one cannot just set to a specific value. Let be the ring of matrices with entries in some ring R (such as the real or complex numbers) that has as an element. Matrices with as coefficients polynomials in , such as or its adjugate B in the first proof, are elements of . By collecting like powers of , such matrices can be written as "polynomials" in with constant matrices as coefficients; write for the set of such polynomials. Since this set is in bijection with , one defines arithmetic operations on it correspondingly, in particular multiplication is given by respecting the order of the coefficient matrices from the two operands; obviously this gives a non-commutative multiplication. Thus, the identity from the first proof can be viewed as one involving a multiplication of elements in . 
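For reference, the displayed equations of the direct algebraic proof above (which also underlie the proof with matrix coefficients being developed here) read as follows, writing \(B = \operatorname{adj}(tI_n - A)\) and \(p(t) = t^n + c_{n-1}t^{n-1} + \cdots + c_0\):

\[
B = \sum_{i=0}^{n-1} t^{\,i} B_i,
\qquad
(tI_n - A)\,B = p(t)\, I_n ,
\]
and comparing the coefficient of each power of \(t\) on both sides,
\[
B_{n-1} = I_n, \qquad
B_{i-1} - A B_i = c_i I_n \quad (1 \le i \le n-1), \qquad
-\,A B_0 = c_0 I_n .
\]
Multiplying the coefficient equation for \(t^{\,i}\) on the left by \(A^{\,i}\) and summing over all \(i\), the left-hand sides telescope to the zero matrix while the right-hand sides add up to \(A^{n} + c_{n-1}A^{n-1} + \cdots + c_1 A + c_0 I_n = p(A)\), giving \(p(A) = 0\).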
At this point, it is tempting to simply set equal to the matrix , which makes the first factor on the left equal to the zero matrix, and the right hand side equal to ; however, this is not an allowed operation when coefficients do not commute. It is possible to define a "right-evaluation map" , which replaces each by the matrix power of , where one stipulates that the power is always to be multiplied on the right to the corresponding coefficient. But this map is not a ring homomorphism: the right-evaluation of a product differs in general from the product of the right-evaluations. This is so because multiplication of polynomials with matrix coefficients does not model multiplication of expressions containing unknowns: a product is defined assuming that commutes with , but this may fail if is replaced by the matrix . One can work around this difficulty in the particular situation at hand, since the above right-evaluation map does become a ring homomorphism if the matrix is in the center of the ring of coefficients, so that it commutes with all the coefficients of the polynomials (the argument proving this is straightforward, exactly because commuting with coefficients is now justified after evaluation). Now, is not always in the center of , but we may replace with a smaller ring provided it contains all the coefficients of the polynomials in question: , , and the coefficients of the polynomial . The obvious choice for such a subring is the centralizer of , the subring of all matrices that commute with ; by definition is in the center of . This centralizer obviously contains , and , but one has to show that it contains the matrices . To do this, one combines the two fundamental relations for adjugates, writing out the adjugate as a polynomial: Equating the coefficients shows that for each , we have as desired. Having found the proper setting in which is indeed a homomorphism of rings, one can complete the proof as suggested above: This completes the proof. A synthesis of the first two proofs In the first proof, one was able to determine the coefficients of based on the right-hand fundamental relation for the adjugate only. In fact the first equations derived can be interpreted as determining the quotient of the Euclidean division of the polynomial on the left by the monic polynomial , while the final equation expresses the fact that the remainder is zero. This division is performed in the ring of polynomials with matrix coefficients. Indeed, even over a non-commutative ring, Euclidean division by a monic polynomial is defined, and always produces a unique quotient and remainder with the same degree condition as in the commutative case, provided it is specified at which side one wishes to be a factor (here that is to the left). To see that quotient and remainder are unique (which is the important part of the statement here), it suffices to write as and observe that since is monic, cannot have a degree less than that of , unless . But the dividend and divisor used here both lie in the subring , where is the subring of the matrix ring generated by : the -linear span of all powers of . Therefore, the Euclidean division can in fact be performed within that commutative polynomial ring, and of course it then gives the same quotient and remainder 0 as in the larger ring; in particular this shows that in fact lies in . 
But, in this commutative setting, it is valid to set to in the equation in other words, to apply the evaluation map which is a ring homomorphism, giving just like in the second proof, as desired. In addition to proving the theorem, the above argument tells us that the coefficients of are polynomials in , while from the second proof we only knew that they lie in the centralizer of ; in general is a larger subring than , and not necessarily commutative. In particular the constant term lies in . Since is an arbitrary square matrix, this proves that can always be expressed as a polynomial in (with coefficients that depend on . In fact, the equations found in the first proof allow successively expressing as polynomials in , which leads to the identity valid for all matrices, where is the characteristic polynomial of . Note that this identity also implies the statement of the Cayley–Hamilton theorem: one may move to the right hand side, multiply the resulting equation (on the left or on the right) by , and use the fact that A proof using matrices of endomorphisms As was mentioned above, the matrix p(A) in statement of the theorem is obtained by first evaluating the determinant and then substituting the matrix A for t; doing that substitution into the matrix before evaluating the determinant is not meaningful. Nevertheless, it is possible to give an interpretation where is obtained directly as the value of a certain determinant, but this requires a more complicated setting, one of matrices over a ring in which one can interpret both the entries of , and all of itself. One could take for this the ring of matrices over , where the entry is realised as , and as itself. But considering matrices with matrices as entries might cause confusion with block matrices, which is not intended, as that gives the wrong notion of determinant (recall that the determinant of a matrix is defined as a sum of products of its entries, and in the case of a block matrix this is generally not the same as the corresponding sum of products of its blocks!). It is clearer to distinguish from the endomorphism of an -dimensional vector space V (or free -module if is not a field) defined by it in a basis , and to take matrices over the ring End(V) of all such endomorphisms. Then is a possible matrix entry, while designates the element of whose entry is endomorphism of scalar multiplication by ; similarly will be interpreted as element of . However, since is not a commutative ring, no determinant is defined on ; this can only be done for matrices over a commutative subring of . Now the entries of the matrix all lie in the subring generated by the identity and , which is commutative. Then a determinant map is defined, and evaluates to the value of the characteristic polynomial of at (this holds independently of the relation between and ); the Cayley–Hamilton theorem states that is the null endomorphism. In this form, the following proof can be obtained from that of (which in fact is the more general statement related to the Nakayama lemma; one takes for the ideal in that proposition the whole ring ). The fact that is the matrix of in the basis means that One can interpret these as components of one equation in , whose members can be written using the matrix-vector product that is defined as usual, but with individual entries and in being "multiplied" by forming ; this gives: where is the element whose component is (in other words it is the basis of written as a column of vectors). 
Writing this equation as one recognizes the transpose of the matrix considered above, and its determinant (as element of is also p(φ). To derive from this equation that , one left-multiplies by the adjugate matrix of , which is defined in the matrix ring , giving the associativity of matrix-matrix and matrix-vector multiplication used in the first step is a purely formal property of those operations, independent of the nature of the entries. Now component of this equation says that ; thus vanishes on all , and since these elements generate it follows that , completing the proof. One additional fact that follows from this proof is that the matrix whose characteristic polynomial is taken need not be identical to the value substituted into that polynomial; it suffices that be an endomorphism of satisfying the initial equations for some sequence of elements that generate (which space might have smaller dimension than , or in case the ring is not a field it might not be a free module at all). A bogus "proof": One persistent elementary but incorrect argument for the theorem is to "simply" take the definition and substitute for , obtaining There are many ways to see why this argument is wrong. First, in the Cayley–Hamilton theorem, is an matrix. However, the right hand side of the above equation is the value of a determinant, which is a scalar. So they cannot be equated unless (i.e. is just a scalar). Second, in the expression , the variable λ actually occurs at the diagonal entries of the matrix . To illustrate, consider the characteristic polynomial in the previous example again: If one substitutes the entire matrix for in those positions, one obtains in which the "matrix" expression is simply not a valid one. Note, however, that if scalar multiples of identity matrices instead of scalars are subtracted in the above, i.e. if the substitution is performed as then the determinant is indeed zero, but the expanded matrix in question does not evaluate to ; nor can its determinant (a scalar) be compared to p(A) (a matrix). So the argument that still does not apply. Actually, if such an argument holds, it should also hold when other multilinear forms instead of determinant is used. For instance, if we consider the permanent function and define , then by the same argument, we should be able to "prove" that . But this statement is demonstrably wrong: in the 2-dimensional case, for instance, the permanent of a matrix is given by So, for the matrix in the previous example, Yet one can verify that One of the proofs for Cayley–Hamilton theorem above bears some similarity to the argument that . By introducing a matrix with non-numeric coefficients, one can actually let live inside a matrix entry, but then is not equal to , and the conclusion is reached differently. Proofs using methods of abstract algebra Basic properties of Hasse–Schmidt derivations on the exterior algebra of some -module (supposed to be free and of finite rank) have been used by to prove the Cayley–Hamilton theorem. See also . A combinatorial proof A proof based on developing the Leibniz formula for the characteristic polynomial was given by Straubing and a generalization was given using trace monoid theory of Foata and Cartier. 
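Although the theorem itself is purely algebraic, a quick numerical check can make the statement concrete. The sketch below is illustrative only: it uses NumPy, the example matrix is arbitrary, and np.poly is used to obtain the coefficients of the characteristic polynomial. Evaluating p at the matrix itself, with scalar terms understood as multiples of the identity, yields the zero matrix up to rounding error, whereas the scalar det(A - A) of the bogus argument above is merely the number zero.

```python
import numpy as np

# An arbitrary square matrix; any n x n matrix over the reals would do.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

# Coefficients of the characteristic polynomial p(t) = det(t I - A),
# highest degree first (np.poly returns exactly this monic coefficient list).
coeffs = np.poly(A)

# Evaluate p at A itself: powers of t become matrix powers of A, and scalar
# coefficients are understood as multiples of the identity matrix.
p_of_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
print(np.allclose(p_of_A, 0))    # True: p(A) is the zero matrix

# The "bogus proof" discussed above only produces the scalar det(A - A) = 0,
# which is a different (and trivial) statement.
print(np.linalg.det(A - A))      # 0.0
```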
Abstraction and generalizations The above proofs show that the Cayley–Hamilton theorem holds for matrices with entries in any commutative ring , and that will hold whenever is an endomorphism of an -module generated by elements that satisfies This more general version of the theorem is the source of the celebrated Nakayama lemma in commutative algebra and algebraic geometry. The Cayley-Hamilton theorem also holds for matrices over the quaternions, a noncommutative ring. See also Companion matrix Remarks Notes References (open access) (communicated on June 9, 1862) (communicated on June 23, 1862) "Classroom Note: A Simple Proof of the Leverrier--Faddeev Characteristic Polynomial Algorithm" (open archive). External links A proof from PlanetMath. The Cayley–Hamilton theorem at MathPages Theorems in linear algebra Articles containing proofs Matrix theory William Rowan Hamilton
Cayley–Hamilton theorem
[ "Mathematics" ]
5,794
[ "Theorems in algebra", "Theorems in linear algebra", "Articles containing proofs" ]
173,724
https://en.wikipedia.org/wiki/Hawking%20radiation
Hawking radiation is black body radiation released outside a black hole's event horizon due to quantum effects according to a model developed by Stephen Hawking in 1974. The radiation was not predicted by previous models which assumed that once electromagnetic radiation is inside the event horizon, it cannot escape. Hawking radiation is predicted to be extremely faint and is many orders of magnitude below the current best telescopes' detecting ability. Hawking radiation would reduce the mass and rotational energy of black holes and consequently cause black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink and ultimately vanish. For all except the smallest black holes, this happens extremely slowly. The radiation temperature, called Hawking temperature, is inversely proportional to the black hole's mass, so micro black holes are predicted to be larger emitters of radiation than larger black holes and should dissipate faster per their mass. Consequently, if small black holes exist, as permitted by the hypothesis of primordial black holes, they ought to lose mass more rapidly as they shrink, leading to a final cataclysm of high energy radiation alone. Such radiation bursts have not yet been detected. Background Modern black holes were first predicted by Einstein's 1915 theory of general relativity. Evidence of the astrophysical objects termed black holes began to mount half a century later, and these objects are of current interest primarily because of their compact size and immense gravitational attraction. Early research into black holes was done by individuals such as Karl Schwarzschild and John Wheeler, who modeled black holes as having zero entropy. A black hole can form when enough matter or energy is compressed into a volume small enough that the escape velocity is greater than the speed of light. Because nothing can travel that fast, nothing within a certain distance, proportional to the mass of the black hole, can escape beyond that distance. The region beyond which not even light can escape is the event horizon: an observer outside it cannot observe, become aware of, or be affected by events within the event horizon. Alternatively, using a set of infalling coordinates in general relativity, one can conceptualize the event horizon as the region beyond which space is infalling faster than the speed of light. (Although nothing can travel through space faster than light, space itself can infall at any speed.) Once matter is inside the event horizon, all of the matter inside falls inevitably into a gravitational singularity, a place of infinite curvature and zero size, leaving behind a warped spacetime devoid of any matter; a classical black hole is pure empty spacetime, and the simplest (nonrotating and uncharged) is characterized just by its mass and event horizon. Discovery In 1971 Soviet scientists Yakov Zeldovich and Alexei Starobinsky proposed that rotating black holes ought to create and emit particles, reasoning by analogy with electromagnetic spinning metal spheres. In 1972, Jacob Bekenstein developed a theory and reported that the black holes should have an entropy proportional to their surface area. Initially Stephen Hawking argued against Bekenstein's theory, viewing black holes as a simple object with no entropy. After meeting Zeldovich in Moscow in 1973, Hawking put these two ideas together using his mixture of quantum field theory and general relativity. 
In his 1974 paper Hawking showed that in theory, black holes radiate particles as if it were a blackbody. Particles escaping effectively drain energy from the black hole. Due to Bekenstein's contribution to black hole entropy, it is also known as Bekenstein–Hawking radiation. Hawking radiation derives from vacuum fluctuations. A quantum fluctuation in the electromagnetic field can result in a photon outside of the black hole horizon paired with one on the inside. The horizon allows one to escape in each direction. Emission process Hawking radiation is dependent on the Unruh effect and the equivalence principle applied to black-hole horizons. Close to the event horizon of a black hole, a local observer must accelerate to keep from falling in. An accelerating observer sees a thermal bath of particles that pop out of the local acceleration horizon, turn around, and free-fall back in. The condition of local thermal equilibrium implies that the consistent extension of this local thermal bath has a finite temperature at infinity, which implies that some of these particles emitted by the horizon are not reabsorbed and become outgoing Hawking radiation. A Schwarzschild black hole has a metric The black hole is the background spacetime for a quantum field theory. The field theory is defined by a local path integral, so if the boundary conditions at the horizon are determined, the state of the field outside will be specified. To find the appropriate boundary conditions, consider a stationary observer just outside the horizon at position The local metric to lowest order is which is Rindler in terms of . The metric describes a frame that is accelerating to keep from falling into the black hole. The local acceleration, , diverges as . The horizon is not a special boundary, and objects can fall in. So the local observer should feel accelerated in ordinary Minkowski space by the principle of equivalence. The near-horizon observer must see the field excited at a local temperature which is the Unruh effect. The gravitational redshift is given by the square root of the time component of the metric. So for the field theory state to consistently extend, there must be a thermal background everywhere with the local temperature redshift-matched to the near horizon temperature: The inverse temperature redshifted to at infinity is and is the near-horizon position, near , so this is really Thus a field theory defined on a black-hole background is in a thermal state whose temperature at infinity is From the black-hole temperature, it is straightforward to calculate the black-hole entropy . The change in entropy when a quantity of heat is added is The heat energy that enters serves to increase the total mass, so So the entropy of a black hole is proportional to its surface area: where, since the radius of the black hole is twice its mass, we have that the area A is given by Assuming that a small black hole has zero entropy, the integration constant is zero. Forming a black hole is the most efficient way to compress mass into a region, and this entropy is also a bound on the information content of any sphere in space time. The form of the result strongly suggests that the physical description of a gravitating theory can be somehow encoded onto a bounding surface. Black hole evaporation When particles escape, the black hole loses a small amount of its energy and therefore some of its mass (mass and energy are related by Einstein's equation ). Consequently, an evaporating black hole will have a finite lifespan. 
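As a rough numerical illustration of the temperature just discussed, the following sketch evaluates the standard closed-form Hawking temperature of a Schwarzschild black hole, T = ħc³/(8πGMk_B). The constant values are approximate SI figures, and the function name is chosen here for illustration rather than taken from any library.

```python
import math

# Approximate SI values of the physical constants.
hbar = 1.054_571_8e-34    # reduced Planck constant, J s
c = 2.997_924_58e8        # speed of light, m/s
G = 6.674_30e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380_649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30          # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a Schwarzschild black hole, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

# The temperature scales as 1/M: a solar-mass black hole is colder than the
# 2.7 K cosmic microwave background by many orders of magnitude, while a
# 1e12 kg primordial black hole would be extremely hot.
print(f"{hawking_temperature(M_SUN):.2e} K")   # ~6.2e-08 K
print(f"{hawking_temperature(1e12):.2e} K")    # ~1.2e+11 K
```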
By dimensional analysis, the life span of a black hole can be shown to scale as the cube of its initial mass, and Hawking estimated that any black hole formed in the early universe with a mass of less than approximately 10^12 kg would have evaporated completely by the present day. In 1976, Don Page refined this estimate by calculating the power produced, and the time to evaporation, for a non-rotating, non-charged Schwarzschild black hole of mass . The time for the event horizon or entropy of a black hole to halve is known as the Page time. The calculations are complicated by the fact that a black hole, being of finite size, is not a perfect black body; the absorption cross section goes down in a complicated, spin-dependent manner as frequency decreases, especially when the wavelength becomes comparable to the size of the event horizon. Page concluded that primordial black holes could survive to the present day only if their initial mass were roughly or larger. Writing in 1976, Page, using the understanding of neutrinos at the time, erroneously worked on the assumption that neutrinos have no mass and that only two neutrino flavors exist; therefore his results for black hole lifetimes do not match the modern results, which take into account three flavors of neutrinos with nonzero masses. A 2008 calculation using the particle content of the Standard Model and the WMAP figure for the age of the universe yielded a mass bound of . Some pre-1998 calculations, using outdated assumptions about neutrinos, were as follows: If black holes evaporate under Hawking radiation, a solar-mass black hole will evaporate over 10^64 years, which is vastly longer than the age of the universe. A supermassive black hole with a mass of 10^11 (100 billion) solar masses will evaporate in around . Some monster black holes in the universe are predicted to continue to grow up to perhaps 10^14 solar masses during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 2 × 10^106 years. Post-1998 science modifies these results slightly; for example, the modern estimate of a solar-mass black hole lifetime is 10^67 years. The power emitted by a black hole in the form of Hawking radiation can be estimated for the simplest case of a nonrotating, non-charged Schwarzschild black hole of mass . Combining the formulas for the Schwarzschild radius of the black hole, the Stefan–Boltzmann law of blackbody radiation, the above formula for the temperature of the radiation, and the formula for the surface area of a sphere (the black hole's event horizon), several equations can be derived. The Hawking radiation temperature is: The Bekenstein–Hawking luminosity of a black hole, under the assumption of pure photon emission (i.e. that no other particles are emitted) and under the assumption that the horizon is the radiating surface, is: where is the luminosity, i.e., the radiated power, is the reduced Planck constant, is the speed of light, is the gravitational constant and is the mass of the black hole. It is worth mentioning that the above formula has not yet been derived in the framework of semiclassical gravity. The time that the black hole takes to dissipate is: where and are the mass and (Schwarzschild) volume of the black hole, and are Planck mass and Planck time. A black hole of one solar mass ( = ) takes more than to evaporate, much longer than the current age of the universe at . But for a black hole of , the evaporation time is . This is why some astronomers are searching for signs of exploding primordial black holes.
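Continuing the illustration, the commonly quoted photon-only evaporation time for a Schwarzschild black hole is t ≈ 5120πG²M³/(ħc⁴). The sketch below assumes this textbook expression with approximate constants; as discussed above, a photon-only estimate overstates the lifetime of small, hot black holes, which also radiate other particle species. The function name is invented for the example.

```python
import math

hbar = 1.054_571_8e-34    # J s
c = 2.997_924_58e8        # m/s
G = 6.674_30e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30          # kg
YEAR = 3.156e7            # seconds per year (approximate)

def evaporation_time(mass_kg):
    """Photon-only evaporation time of a Schwarzschild black hole, seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

# One solar mass: roughly 2e67 years, enormously longer than the ~1.4e10 yr
# age of the universe.
print(f"{evaporation_time(M_SUN) / YEAR:.1e} yr")

# A ~1e11 kg primordial black hole would already have evaporated even in this
# conservative photon-only estimate (emission of other species makes it faster).
print(f"{evaporation_time(1e11) / YEAR:.1e} yr")
```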
However, since the universe contains the cosmic microwave background radiation, in order for the black hole to dissipate, the black hole must have a temperature greater than that of the present-day blackbody radiation of the universe of 2.7 K. A study suggests that must be less than 0.8% of the mass of the Earth – approximately the mass of the Moon. Black hole evaporation has several significant consequences: Black hole evaporation produces a more consistent view of black hole thermodynamics by showing how black holes interact thermally with the rest of the universe. Unlike most objects, a black hole's temperature increases as it radiates away mass. The rate of temperature increase is exponential, with the most likely endpoint being the dissolution of the black hole in a violent burst of gamma rays. A complete description of this dissolution requires a model of quantum gravity, however, as it occurs when the black hole's mass approaches 1 Planck mass, its radius will also approach two Planck lengths. The simplest models of black hole evaporation lead to the black hole information paradox. The information content of a black hole appears to be lost when it dissipates, as under these models the Hawking radiation is random (it has no relation to the original information). A number of solutions to this problem have been proposed, including suggestions that Hawking radiation is perturbed to contain the missing information, that the Hawking evaporation leaves some form of remnant particle containing the missing information, and that information is allowed to be lost under these conditions. Problems and extensions Trans-Planckian problem The trans-Planckian problem is the issue that Hawking's original calculation includes quantum particles where the wavelength becomes shorter than the Planck length near the black hole's horizon. This is due to the peculiar behavior there, where time stops as measured from far away. A particle emitted from a black hole with a finite frequency, if traced back to the horizon, must have had an infinite frequency, and therefore a trans-Planckian wavelength. The Unruh effect and the Hawking effect both talk about field modes in the superficially stationary spacetime that change frequency relative to other coordinates that are regular across the horizon. This is necessarily so, since to stay outside a horizon requires acceleration that constantly Doppler shifts the modes. An outgoing photon of Hawking radiation, if the mode is traced back in time, has a frequency that diverges from that which it has at great distance, as it gets closer to the horizon, which requires the wavelength of the photon to "scrunch up" infinitely at the horizon of the black hole. In a maximally extended external Schwarzschild solution, that photon's frequency stays regular only if the mode is extended back into the past region where no observer can go. That region seems to be unobservable and is physically suspect, so Hawking used a black hole solution without a past region that forms at a finite time in the past. In that case, the source of all the outgoing photons can be identified: a microscopic point right at the moment that the black hole first formed. The quantum fluctuations at that tiny point, in Hawking's original calculation, contain all the outgoing radiation. 
The modes that eventually contain the outgoing radiation at long times are redshifted by such a huge amount by their long sojourn next to the event horizon that they start off as modes with a wavelength much shorter than the Planck length. Since the laws of physics at such short distances are unknown, some find Hawking's original calculation unconvincing. The trans-Planckian problem is nowadays mostly considered a mathematical artifact of horizon calculations. The same effect occurs for regular matter falling onto a white hole solution. Matter that falls on the white hole accumulates on it, but has no future region into which it can go. Tracing the future of this matter, it is compressed onto the final singular endpoint of the white hole evolution, into a trans-Planckian region. The reason for these types of divergences is that modes that end at the horizon from the point of view of outside coordinates are singular in frequency there. The only way to determine what happens classically is to extend in some other coordinates that cross the horizon. There exist alternative physical pictures that give the Hawking radiation in which the trans-Planckian problem is addressed. The key point is that similar trans-Planckian problems occur when the modes occupied with Unruh radiation are traced back in time. In the Unruh effect, the magnitude of the temperature can be calculated from ordinary Minkowski field theory, and is not controversial. Large extra dimensions The formulas from the previous section are applicable only if the laws of gravity are approximately valid all the way down to the Planck scale. In particular, for black holes with masses below the Planck mass (~), they result in impossible lifetimes below the Planck time (~). This is normally seen as an indication that the Planck mass is the lower limit on the mass of a black hole. In a model with large extra dimensions (10 or 11), the values of Planck constants can be radically different, and the formulas for Hawking radiation have to be modified as well. In particular, the lifetime of a micro black hole with a radius below the scale of the extra dimensions is given by equation 9 in Cheung (2002) and equations 25 and 26 in Carr (2005). where is the low-energy scale, which could be as low as a few TeV, and is the number of large extra dimensions. This formula is now consistent with black holes as light as a few TeV, with lifetimes on the order of the "new Planck time" ~. In loop quantum gravity A detailed study of the quantum geometry of a black hole event horizon has been made using loop quantum gravity. Loop-quantization does not reproduce the result for black hole entropy originally discovered by Bekenstein and Hawking, unless the value of a free parameter is set to cancel out various constants such that the Bekenstein–Hawking entropy formula is reproduced. However, quantum gravitational corrections to the entropy and radiation of black holes have been computed based on the theory. Based on the fluctuations of the horizon area, a quantum black hole exhibits deviations from the Hawking radiation spectrum that would be observable were X-rays from Hawking radiation of evaporating primordial black holes to be observed. The quantum effects are centered at a set of discrete and unblended frequencies highly pronounced on top of the Hawking spectrum. 
Experimental observation Astronomical search In June 2008, NASA launched the Fermi space telescope, which is searching for the terminal gamma-ray flashes expected from evaporating primordial black holes. As of Jan 1st, 2024, none have been detected. Heavy-ion collider physics If speculative large extra dimension theories are correct, then CERN's Large Hadron Collider may be able to create micro black holes and observe their evaporation. No such micro black hole has been observed at CERN. Experimental Under experimentally achievable conditions for gravitational systems, this effect is too small to be observed directly. It was predicted that Hawking radiation could be studied by analogy using sonic black holes, in which sound perturbations are analogous to light in a gravitational black hole and the flow of an approximately perfect fluid is analogous to gravity (see Analog models of gravity). Observations of Hawking radiation were reported, in sonic black holes employing Bose–Einstein condensates. In September 2010 an experimental set-up created a laboratory "white hole event horizon" that the experimenters claimed was shown to radiate an optical analog to Hawking radiation. However, the results remain unverified and debatable, and its status as a genuine confirmation remains in doubt. See also Black hole information paradox Black hole thermodynamics Black hole starship Blandford–Znajek process and Penrose process, other extractions of black-hole energy Gibbons–Hawking effect Thorne–Hawking–Preskill bet Unruh effect References Further reading External links Hawking radiation calculator tool The case for mini black holes A. Barrau & J. Grain explain how the Hawking radiation could be detected at colliders Black holes Quantum field theory Radiation Astronomical hypotheses Hypothetical processes 1974 introductions
Hawking radiation
[ "Physics", "Astronomy" ]
3,844
[ "Quantum field theory", "Physical phenomena", "Black holes", "Hypotheses in physics", "Physical quantities", "Astronomical hypotheses", "Theoretical physics", "Unsolved problems in physics", "Quantum mechanics", "Astrophysics", "Astronomical controversies", "Density", "Stellar phenomena", ...
173,773
https://en.wikipedia.org/wiki/Cataclysmic%20variable%20star
In astronomy, cataclysmic variable stars (CVs) are stars which irregularly increase in brightness by a large factor, then drop back down to a quiescent state. They were initially called novae (), since ones with an outburst brightness visible to the naked eye and an invisible quiescent brightness appeared as new stars in the sky. Cataclysmic variable stars are binary stars that consist of two components; a white dwarf primary, and a mass transferring secondary. The stars are so close to each other that the gravity of the white dwarf distorts the secondary, and the white dwarf accretes matter from the companion. Therefore, the secondary is often referred to as the donor star, and it is usually less massive than the primary. The infalling matter, which is usually rich in hydrogen, forms in most cases an accretion disk around the white dwarf. Strong UV and X-ray emission is often detected from the accretion disc, powered by the loss of gravitational potential energy from the infalling material. The shortest currently observed orbit in a hydrogen-rich system is 51 minutes in ZTF J1813+4251. Material at the inner edge of disc falls onto the surface of the white dwarf primary. A classical nova outburst occurs when the density and temperature at the bottom of the accumulated hydrogen layer rise high enough to ignite runaway hydrogen fusion reactions, which rapidly convert the hydrogen layer to helium. If the accretion process continues long enough to bring the white dwarf close to the Chandrasekhar limit, the increasing interior density may ignite runaway carbon fusion and trigger a Type Ia supernova explosion, which would completely destroy the white dwarf. The accretion disc may be prone to an instability leading to dwarf nova outbursts, when the outer portion of the disc changes from a cool, dull mode to a hotter, brighter mode for a time, before reverting to the cool mode. Dwarf novae can recur on a timescale of days to decades. Classification Cataclysmic variables are subdivided into several smaller groups, often named after a bright prototype star characteristic of the class. In some cases the magnetic field of the white dwarf is strong enough to disrupt the inner accretion disk or even prevent disk formation altogether. Magnetic systems often show strong and variable polarization in their optical light, and are therefore sometimes called polars; these often exhibit small-amplitude brightness fluctuations at what is presumed to be the white dwarf's period of rotation. There are over 1600 known CV systems. The catalog was frozen as of 1 February 2006 though more are discovered each year. Discovery Cataclysmic variables are among the classes of astronomical objects most commonly found by amateurs, since a cataclysmic variable in its outburst phase is bright enough to be detectable with very modest instruments, and the only celestial objects easily confused with them are bright asteroids whose movement from night to night is clear. Verifying that an object is a cataclysmic variable is also fairly straightforward: they are usually quite blue objects, they exhibit rapid and strong variability, and they tend to have peculiar emission lines. They emit in the ultraviolet and X-ray ranges; they are expected also to emit gamma rays, from annihilation of positrons from proton-rich nuclei produced in the fusion explosion, but this has not yet been detected. Around six galactic novae (i.e. 
in our own galaxy) are discovered each year, whilst models based on observations in other galaxies suggest that the rate of occurrence ought to be between 20 and 50; this discrepancy is due partly to obscuration by interstellar dust, and partly to a lack of observers in the southern hemisphere and to the difficulties of observing while the Sun is up and at full moon. Superhumps Some cataclysmic variables experience periodic brightenings caused by deformations of the accretion disk when its rotation is in resonance with the orbital period of the binary. References External links A Catalog and Atlas of Cataclysmic Variables (Archival Edition) Catalogue of Cataclysmic Binaries, Low-Mass X-Ray Binaries and Related Objects (RKcat Edition 7.24, 31 Dec 2015 – The Final Edition) CVnet, a website and community for CV enthusiasts and researchers – features announcements of new discoveries A Beginner's Guide to Cataclysmic Variables – features a very good categorisation of the different classes of stars Cataclysmic Variables, NASA's High Energy Astrophysics Science Archive Research Center (HEASARC) page Semidetached binaries Stellar phenomena
Cataclysmic variable star
[ "Physics" ]
949
[ "Physical phenomena", "Stellar phenomena" ]
173,786
https://en.wikipedia.org/wiki/Seyfert%27s%20Sextet
Seyfert's Sextet is a group of galaxies about 190 million light-years away in the constellation Serpens. The group appears to contain six members, but one of the galaxies, NGC 6027d, is a background object (700 million light years behind the group) and another "galaxy," NGC 6027e, is actually a part of the tail from galaxy NGC 6027. The gravitational interaction among these galaxies should continue for hundreds of millions of years. Ultimately, the galaxies will merge to form a single giant elliptical galaxy. Discovery French astronomer Édouard Stephan discovered NGC 6027 on 20 March 1882, but he was unable to resolve the individual galaxies in the group. The group members were discovered by Carl Keenan Seyfert using photographic plates made at the Barnard Observatory of Vanderbilt University. When these results were first published in 1951, this group was the most compact group ever identified. Members See also Wild's Triplet Zwicky's Triplet Robert's Quartet Stephan's Quintet and NGC 7331 Group (also known as the Deer Lick Group); about half a degree northeast of Stephan's Quintet Copeland Septet References External links ASAHI Net Free Address Service: Seyfert's Sextet (Galaxy Group in Serpens) SEDS: Seyfert's Sextet (HCG 79) 79 10116 Serpens
Seyfert's Sextet
[ "Astronomy" ]
282
[ "Constellations", "Serpens" ]
173,838
https://en.wikipedia.org/wiki/Thorne%E2%80%93%C5%BBytkow%20object
A Thorne–Żytkow object (TŻO or TZO), also known as a hybrid star, is a conjectured type of star wherein a red giant or red supergiant contains a neutron star at its core, formed from the collision of the giant with the neutron star. Such objects were hypothesized by Kip Thorne and Anna Żytkow in 1977. In 2014, the star HV 2112, located in the Small Magellanic Cloud (SMC), was identified as a strong candidate, though this view has since been refuted. Another possible candidate is the star HV 11417, also located in the SMC. Formation A Thorne–Żytkow object would be formed when a neutron star collides with another star, often a red giant or supergiant. The colliding objects can simply be wandering stars, though this is only likely to occur in extremely crowded globular clusters. Alternatively, the neutron star could form in a binary system when one of the two stars goes supernova. Because no supernova is perfectly symmetric, and because the binding energy of the binary changes with the mass lost in the supernova, the neutron star will be left with some velocity relative to its original orbit. This kick may cause its new orbit to intersect with its companion, or, if its companion is a main-sequence star, it may be engulfed when its companion evolves into a red giant. Once the neutron star enters the red giant, drag between the neutron star and the outer, diffuse layers of the red giant causes the binary star system's orbit to decay, and the neutron star and core of the red giant spiral inward toward one another. Depending on their initial separation, this process may take hundreds of years. When the two finally collide, the neutron star and red giant core will merge. If their combined mass exceeds the Tolman–Oppenheimer–Volkoff limit, then the two will collapse into a black hole. Otherwise, the two will coalesce into a single neutron star. If a neutron star and a white dwarf merge, this could form a Thorne–Żytkow object with the properties of an R Coronae Borealis variable. Properties The surface of the neutron star is very hot, with temperatures exceeding 10^9 K, hotter than the cores of all but the most massive stars. This heat is dominated either by nuclear fusion in the accreting gas or by compression of the gas by the neutron star's gravity. Because of the high temperature, unusual nuclear processes may take place as the envelope of the red giant falls onto the neutron star's surface. Hydrogen may fuse to produce a different mixture of isotopes than it does in ordinary stellar nucleosynthesis, and some astronomers have proposed that the rapid proton nucleosynthesis that occurs in X-ray bursts also takes place inside Thorne–Żytkow objects. Observationally, a Thorne–Żytkow object may resemble a red supergiant, or, if it is hot enough to blow off the hydrogen-rich surface layers, a nitrogen-rich Wolf–Rayet star (type WN8). A TŻO has an estimated lifespan of 10^5–10^6 years. Given this lifespan, it is possible that between 20 and 200 Thorne–Żytkow objects currently exist in the Milky Way. The only way to unambiguously determine whether or not a star is a TŻO is a multi-messenger detection of both the gravitational waves of the inner neutron star and an optical spectrum showing metal abundances atypical of a normal red supergiant. It is possible to detect pre-existing TŻOs with current LIGO detectors; the neutron star core would emit a continuous gravitational-wave signal.
Dissolution It has been theorized that mass loss will eventually end the TŻO stage, with the remaining envelope converted to a disk, resulting in the formation of a neutron star with a massive accretion disk. These neutron stars may form the population of isolated pulsars with accretion disks. The massive accretion disk may also collapse into a new star, becoming a stellar companion to the neutron star. The neutron star may also accrete sufficient material to collapse into a black hole. Observation history In 2014, a team led by Emily Levesque argued that the star HV 2112 had unusually high abundances of elements such as molybdenum, rubidium, lithium, and calcium, and a high luminosity. Since both are expected characteristics of Thorne–Żytkow objects, this led the team to suggest that HV 2112 might be the first discovery of a TZO. However, this claim was challenged in a 2018 paper by Emma Beasor and collaborators, who argued that there is no evidence for HV 2112 having any unusual abundance patterns beyond a possible enrichment of lithium and that its luminosity is too low. They put forth another candidate, HV 11417, based on an apparent over-abundance of rubidium and a similar luminosity as HV 2112. List of candidate TŻOs List of candidate former and future TŻOs See also Quasar Quasi-star References Star types Stellar evolution Red giants Neutron stars 1977 in science Hypothetical stars
Thorne–Żytkow object
[ "Physics", "Astronomy" ]
1,063
[ "Astronomical classification systems", "Star types", "Astrophysics", "Stellar evolution" ]
173,844
https://en.wikipedia.org/wiki/Transpose
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix by producing another matrix, often denoted by (among other notations). The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley. In the case of a logical matrix representing a binary relation R, the transpose corresponds to the converse relation RT. Transpose of a matrix Definition The transpose of a matrix , denoted by , , , , , , or , may be constructed by any one of the following methods: Reflect over its main diagonal (which runs from top-left to bottom-right) to obtain Write the rows of as the columns of Write the columns of as the rows of Formally, the -th row, -th column element of is the -th row, -th column element of : If is an matrix, then is an matrix. In the case of square matrices, may also denote the th power of the matrix . For avoiding a possible confusion, many authors use left upperscripts, that is, they denote the transpose as . An advantage of this notation is that no parentheses are needed when exponents are involved: as , notation is not ambiguous. In this article, this confusion is avoided by never using the symbol as a variable name. Matrix definitions involving transposition A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, is symmetric if A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, is skew-symmetric if A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, is Hermitian if A square complex matrix whose transpose is equal to the negation of its complex conjugate is called a skew-Hermitian matrix; that is, is skew-Hermitian if A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, is orthogonal if A square complex matrix whose transpose is equal to its conjugate inverse is called a unitary matrix; that is, is unitary if Examples Properties Let and be matrices and be a scalar. The operation of taking the transpose is an involution (self-inverse). The transpose respects addition. The transpose of a scalar is the same scalar. Together with the preceding property, this implies that the transpose is a linear map from the space of matrices to the space of the matrices. The order of the factors reverses. By induction, this result extends to the general case of multiple matrices, so . The determinant of a square matrix is the same as the determinant of its transpose. The dot product of two column vectors and can be computed as the single entry of the matrix product If has only real entries, then is a positive-semidefinite matrix. The transpose of an invertible matrix is also invertible, and its inverse is the transpose of the inverse of the original matrix.The notation is sometimes used to represent either of these equivalent expressions. If is a square matrix, then its eigenvalues are equal to the eigenvalues of its transpose, since they share the same characteristic polynomial. for two column vectors and the standard dot product. Over any field , a square matrix is similar to . This implies that and have the same invariant factors, which implies they share the same minimal polynomial, characteristic polynomial, and eigenvalues, among other properties. 
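As a purely illustrative check, not part of the formal development, the properties listed above can be verified numerically for random matrices with NumPy; only the random seed and array names are chosen here. The final comparison uses characteristic polynomials to confirm that a matrix and its transpose have the same eigenvalues, and the similarity property in the last item is what the proof sketch immediately below addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

print(np.allclose(A.T.T, A))                        # involution: (A^T)^T = A
print(np.allclose((A + B).T, A.T + B.T))            # transpose respects addition
print(np.allclose((A @ B).T, B.T @ A.T))            # order of factors reverses
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))   # equal determinants
print(np.isclose(x @ (A @ y), (A.T @ x) @ y))       # <x, Ay> = <A^T x, y>
print(np.allclose(np.poly(A), np.poly(A.T)))        # same characteristic
                                                    # polynomial, hence the
                                                    # same eigenvalues
```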
A proof of this property uses the following two observations. Let and be matrices over some base field and let be a field extension of . If and are similar as matrices over , then they are similar over . In particular this applies when is the algebraic closure of . If is a matrix over an algebraically closed field in Jordan normal form with respect to some basis, then is similar to . This further reduces to proving the same fact when is a single Jordan block, which is a straightforward exercise. Products If is an matrix and is its transpose, then the result of matrix multiplication with these two matrices gives two square matrices: is and is . Furthermore, these products are symmetric matrices. Indeed, the matrix product has entries that are the inner product of a row of with a column of . But the columns of are the rows of , so the entry corresponds to the inner product of two rows of . If is the entry of the product, it is obtained from rows and in . The entry is also obtained from these rows, thus , and the product matrix () is symmetric. Similarly, the product is a symmetric matrix. A quick proof of the symmetry of results from the fact that it is its own transpose: Implementation of matrix transposition on computers On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement. However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality. Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n × m matrix in-place, with O(1) additional storage or at most storage much less than mn. For n ≠ m, this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed. Transposes of linear maps and bilinear forms As the main use of matrices is to represent linear maps between finite-dimensional vector spaces, the transpose is an operation on matrices that may be seen as the representation of some operation on linear maps. This leads to a much more general definition of the transpose that works on every linear map, even when linear maps cannot be represented by matrices (such as in the case of infinite dimensional vector spaces). In the finite dimensional case, the matrix representing the transpose of a linear map is the transpose of the matrix representing the linear map, independently of the basis choice. Transpose of a linear map Let denote the algebraic dual space of an -module . Let and be -modules. If is a linear map, then its algebraic adjoint or dual, is the map defined by . The resulting functional is called the pullback of by . 
The following relation characterizes the algebraic adjoint of for all and where is the natural pairing (i.e. defined by ). This definition also applies unchanged to left modules and to vector spaces. The definition of the transpose may be seen to be independent of any bilinear form on the modules, unlike the adjoint (below). The continuous dual space of a topological vector space (TVS) is denoted by . If and are TVSs then a linear map is weakly continuous if and only if , in which case we let denote the restriction of to . The map is called the transpose of . If the matrix describes a linear map with respect to bases of and , then the matrix describes the transpose of that linear map with respect to the dual bases. Transpose of a bilinear form Every linear map to the dual space defines a bilinear form , with the relation . By defining the transpose of this bilinear form as the bilinear form defined by the transpose i.e. , we find that . Here, is the natural homomorphism into the double dual. Adjoint If the vector spaces and have respectively nondegenerate bilinear forms and , a concept known as the adjoint, which is closely related to the transpose, may be defined: If is a linear map between vector spaces and , we define as the adjoint of if satisfies for all and . These bilinear forms define an isomorphism between and , and between and , resulting in an isomorphism between the transpose and adjoint of . The matrix of the adjoint of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms. In this context, many authors however, use the term transpose to refer to the adjoint as defined here. The adjoint allows us to consider whether is equal to . In particular, this allows the orthogonal group over a vector space with a quadratic form to be defined without reference to matrices (nor the components thereof) as the set of all linear maps for which the adjoint equals the inverse. Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The Hermitian adjoint of a map between such spaces is defined similarly, and the matrix of the Hermitian adjoint is given by the conjugate transpose matrix if the bases are orthonormal. See also Adjugate matrix, the transpose of the cofactor matrix Conjugate transpose Moore–Penrose pseudoinverse Projection (linear algebra) References Further reading . External links Gilbert Strang (Spring 2010) Linear Algebra from MIT Open Courseware Matrices Abstract algebra Linear algebra
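Returning to the earlier remarks on implementing transposition on computers, the following sketch is illustrative only: it contrasts NumPy's transpose, which is a strided view over the same row-major buffer, with an explicit reordering copy, and it shows the swap loop with O(1) extra storage that suffices for a square matrix. The helper name transpose_square_inplace is invented for the example.

```python
import numpy as np

A = np.arange(12).reshape(3, 4)       # stored in row-major (C) order

# NumPy's .T is a strided view: the same buffer is read with swapped strides,
# so no data is moved.
print(np.shares_memory(A, A.T))       # True
print(A.strides, A.T.strides)         # e.g. (32, 8) versus (8, 32)

# When column access must be contiguous (say, before repeated column-wise
# passes such as an FFT), the data can be physically reordered instead:
A_transposed_copy = np.ascontiguousarray(A.T)
print(np.shares_memory(A, A_transposed_copy))   # False: elements were moved

# For a square matrix, an in-place transpose needs only O(1) extra storage.
def transpose_square_inplace(M):
    n = M.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            M[i, j], M[j, i] = M[j, i], M[i, j]

S = np.arange(9).reshape(3, 3).copy()
transpose_square_inplace(S)
print(np.array_equal(S, np.arange(9).reshape(3, 3).T))   # True
```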
Transpose
[ "Mathematics" ]
2,064
[ "Mathematical objects", "Matrices (mathematics)", "Linear algebra", "Abstract algebra", "Algebra" ]
173,870
https://en.wikipedia.org/wiki/Swern%20oxidation
In organic chemistry, the Swern oxidation, named after Daniel Swern, is a chemical reaction whereby a primary or secondary alcohol () is oxidized to an aldehyde () or ketone () using oxalyl chloride, dimethyl sulfoxide (DMSO) and an organic base, such as triethylamine. It is one of the many oxidation reactions commonly referred to as 'activated DMSO' oxidations. The reaction is known for its mild character and wide tolerance of functional groups. The by-products are dimethyl sulfide ((CH3)2S), carbon monoxide (CO), carbon dioxide (CO2) and—when triethylamine is used as base—triethylammonium chloride (Et3NHCl). Of the volatile by-products, dimethyl sulfide has a strong, pervasive odour and carbon monoxide is acutely toxic, so the reaction and the work-up needs to be performed in a fume hood. Dimethyl sulfide is a volatile liquid (B.P. 37 °C) with an unpleasant odour at even low concentrations. Mechanism The first step of the Swern oxidation is the low-temperature reaction of DMSO, 1a, formally as resonance contributor 1b, with oxalyl chloride, 2. The first intermediate, 3, quickly decomposes giving off carbon dioxide and carbon monoxide and producing chloro(dimethyl)sulfonium chloride, 4. After addition of the alcohol 5, the chloro(dimethyl)sulfonium chloride 4 reacts with the alcohol to give the key alkoxysulfonium ion intermediate, 6. The addition of at least 2 equivalents of base — typically triethylamine — will deprotonate the alkoxysulfonium ion to give the sulfur ylide 7. In a five-membered ring transition state, the sulfur ylide 7 decomposes to give dimethyl sulfide and the desired carbonyl compound 8. Variations When using oxalyl chloride as the dehydration agent, the reaction must be kept colder than −60 °C to avoid side reactions. With cyanuric chloride or trifluoroacetic anhydride instead of oxalyl chloride, the reaction can be warmed to −30 °C without side reactions. Other methods for the activation of DMSO to initiate the formation of the key intermediate 6 are the use of carbodiimides (Pfitzner–Moffatt oxidation), a sulfur trioxide pyridine complex (Parikh–Doering oxidation) or acetic anhydride (Albright-Goldman oxidation). The intermediate 4 can also be prepared from dimethyl sulfide and N-chlorosuccinimide (the Corey–Kim oxidation). In some cases, the use of triethylamine as the base can lead to epimerisation at the carbon alpha to the newly formed carbonyl. Using a bulkier base, such as diisopropylethylamine, can mitigate this side reaction. Considerations Dimethyl sulfide, a byproduct of the Swern oxidation, is one of the most notoriously unpleasant odors known in organic chemistry. Humans can detect this compound in concentrations as low as 0.02 to 0.1 parts per million. A simple remedy for this problem is to rinse used glassware with bleach or oxone solution, which will oxidize the dimethyl sulfide back to dimethyl sulfoxide or to dimethyl sulfone, both of which are odourless and nontoxic. The reaction conditions allow oxidation of acid-sensitive compounds, which might decompose under the acidic oxidation conditions such as Jones oxidation. For example, in Thompson & Heathcock's synthesis of the sesquiterpene isovelleral, the final step uses the Swern protocol, avoiding rearrangement of the acid-sensitive cyclopropanemethanol moiety. 
See also Alcohol oxidation Sulfonium-based oxidation of alcohols to aldehydes Pyridinium chlorochromate Jones oxidation Oppenauer oxidation Pfitzner–Moffatt oxidation Parikh–Doering oxidation Albright-Goldman oxidation Corey–Kim oxidation Dess–Martin periodinane oxidation Ley oxidation (TPAP oxidation) TEMPO oxidation References External links Organic Chemistry Portal Organic oxidation reactions Name reactions
Swern oxidation
[ "Chemistry" ]
929
[ "Name reactions", "Organic oxidation reactions", "Organic redox reactions", "Organic reactions" ]
173,889
https://en.wikipedia.org/wiki/Killer%20heuristic
In competitive two-player games, the killer heuristic is a move-ordering method based on the observation that a strong move or small set of such moves in a particular position may be equally strong in similar positions at the same move (ply) in the game tree. Retaining such moves obviates the effort of rediscovering them in sibling nodes. This technique improves the efficiency of alpha–beta pruning, which in turn improves the efficiency of the minimax algorithm. Alpha–beta pruning works best when the best moves are considered first. This is because the best moves are the ones most likely to produce a cutoff, a condition where the game-playing program knows that the position it is considering could not possibly have resulted from best play by both sides and so need not be considered further. I.e. the game-playing program will always make its best available move for each position. It only needs to consider the other player's possible responses to that best move, and can skip evaluation of responses to (worse) moves it will not make. The killer heuristic attempts to produce a cutoff by assuming that a move that produced a cutoff in another branch of the game tree at the same depth is likely to produce a cutoff in the present position, that is to say that a move that was a very good move from a different (but possibly similar) position might also be a good move in the present position. By trying the killer move before other moves, a game-playing program can often produce an early cutoff, saving itself the effort of considering or even generating all legal moves from a position. In practical implementation, game-playing programs frequently keep track of two killer moves for each depth of the game tree (greater than depth of 1) and see if either of these moves, if legal, produces a cutoff before the program generates and considers the rest of the possible moves. If a non-killer move produces a cutoff, it replaces one of the two killer moves at its depth. This idea can be generalized into a set of refutation tables. A generalization of the killer heuristic is the history heuristic. The history heuristic can be implemented as a table that is indexed by some characteristic of the move, for example "from" and "to" squares or piece moving and the "to" square. When there is a cutoff, the appropriate entry in the table is incremented, such as by adding d or d² where d is the current search depth. See also Negascout References External links Informed Search in Complex Games by Mark Winands Killer Heuristic Chess Programming Wiki Game artificial intelligence Heuristics Optimization algorithms and methods
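The bookkeeping described above is small, and the following sketch shows one way it can sit inside a negamax-style alpha–beta search. This is a hedged illustration, not code from any particular engine: the callables legal_moves, apply_move and evaluate are assumed to be supplied by the game implementation, moves are assumed to be hashable values, two killer slots are kept per ply, and a cutoff both promotes the refuting move into those slots and credits it in a history table by the square of the remaining depth.

```python
# Illustrative negamax alpha-beta search with killer-move and history tables.
# legal_moves(pos), apply_move(pos, move) and evaluate(pos) are placeholders
# supplied by the game being searched.

MAX_PLY = 64
killers = [[None, None] for _ in range(MAX_PLY)]   # two killer slots per ply
history = {}                                       # move -> accumulated credit

def order_moves(moves, ply):
    """Try this ply's killer moves first, then sort the rest by history score."""
    k = [m for m in killers[ply] if m in moves]
    rest = sorted((m for m in moves if m not in k),
                  key=lambda m: history.get(m, 0), reverse=True)
    return k + rest

def alphabeta(pos, depth, alpha, beta, ply, legal_moves, apply_move, evaluate):
    if depth == 0:
        return evaluate(pos)
    for move in order_moves(legal_moves(pos), ply):
        score = -alphabeta(apply_move(pos, move), depth - 1,
                           -beta, -alpha, ply + 1,
                           legal_moves, apply_move, evaluate)
        if score >= beta:
            # Cutoff: remember the refuting move for sibling nodes at this ply
            # and credit it in the history table (here by depth squared).
            if move != killers[ply][0]:
                killers[ply][1] = killers[ply][0]
                killers[ply][0] = move
            history[move] = history.get(move, 0) + depth * depth
            return beta                            # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha
```

Only the killer table is strictly needed for the heuristic; the history dictionary shows the generalization mentioned above, where any move that causes a cutoff accumulates credit and is tried earlier in later nodes.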
Killer heuristic
[ "Mathematics" ]
558
[ "Game theory", "Game artificial intelligence" ]
173,900
https://en.wikipedia.org/wiki/Epicenter
The epicenter (), epicentre, or epicentrum in seismology is the point on the Earth's surface directly above a hypocenter or focus, the point where an earthquake or an underground explosion originates. Determination The primary purpose of a seismometer is to locate the initiating points of earthquake epicenters. The secondary purpose, of determining the 'size' or magnitude must be calculated after the precise location is known. The earliest seismographs were designed to give a sense of the direction of the first motions from an earthquake. The Chinese frog seismograph would have dropped its ball in the general compass direction of the earthquake, assuming a strong positive pulse. We now know that first motions can be in almost any direction depending on the type of initiating rupture (focal mechanism). The first refinement that allowed a more precise determination of the location was the use of a time scale. Instead of merely noting, or recording, the absolute motions of a pendulum, the displacements were plotted on a moving graph, driven by a clock mechanism. This was the first seismogram, which allowed precise timing of the first ground motion, and an accurate plot of subsequent motions. From the first seismograms, as seen in the figure, it was noticed that the trace was divided into two major portions. The first seismic wave to arrive was the P wave, followed closely by the S wave. Knowing the relative 'velocities of propagation', it was a simple matter to calculate the distance of the earthquake. One seismograph would give the distance, but that could be plotted as a circle, with an infinite number of possibilities. Two seismographs would give two intersecting circles, with two possible locations. Only with a third seismograph would there be a precise location. Modern earthquake location still requires a minimum of three seismometers. Most likely, there are many, forming a seismic array. The emphasis is on precision since much can be learned about the fault mechanics and seismic hazard, if the locations can be determined to be within a kilometer or two, for small earthquakes. For this, computer programs use an iterative process, involving a 'guess and correction' algorithm. As well, a very good model of the local crustal velocity structure is required: seismic velocities vary with the local geology. For P waves, the relation between velocity and bulk density of the medium has been quantified in Gardner's relation. Surface damage Before the instrumental period of earthquake observation, the epicenter was thought to be the location where the greatest damage occurred, but the subsurface fault rupture may be long and spread surface damage across the entire rupture zone. As an example, in the magnitude 7.9 Denali earthquake of 2002 in Alaska, the epicenter was at the western end of the rupture, but the greatest damage was about away at the eastern end. Focal depths of earthquakes occurring in continental crust mostly range from . Continental earthquakes below are rare whereas in subduction zone earthquakes can originate at depths deeper than . Epicentral distance During an earthquake, seismic waves propagates in all directions from the hypocenter. Seismic shadowing occurs on the opposite side of the Earth from the earthquake epicenter because the planet's liquid outer core refracts the longitudinal or compressional (P waves) while it absorbs the transverse or shear waves (S waves). 
Outside the seismic shadow zone, both types of wave can be detected, but because of their different velocities and paths through the Earth, they arrive at different times. By measuring the time difference on any seismograph and the distance on a travel-time graph on which the P wave and S wave have the same separation, geologists can calculate the distance to the quake's epicenter. This distance is called the epicentral distance, commonly measured in ° (degrees) and denoted as Δ (delta) in seismology. The Láska's empirical rule provides an approximation of epicentral distance in the range of 2,000−10,000 km. Once distances from the epicenter have been calculated from at least three seismographic measuring stations, the point can be located, using trilateration. Epicentral distance is also used in calculating seismic magnitudes as developed by Richter and Gutenberg. Fault rupture The point at which fault slipping begins is referred to as the focus of the earthquake. The fault rupture begins at the focus and then expands along the fault surface. The rupture stops where the stresses become insufficient to continue breaking the fault (because the rocks are stronger) or where the rupture enters ductile material. The magnitude of an earthquake is related to the total area of its fault rupture. Most earthquakes are small, with rupture dimensions less than the depth of the focus so the rupture doesn't break the surface, but in high magnitude, destructive earthquakes, surface breaks are common. Fault ruptures in large earthquakes can extend for more than . When a fault ruptures unilaterally (with the epicenter at or near the end of the fault break) the waves are stronger in one direction along the fault. Macroseismic epicenter The macroseismic epicenter is the best estimate of the location of the epicenter derived without instrumental data. This may be estimated using intensity data, information about foreshocks and aftershocks, knowledge of local fault systems or extrapolations from data regarding similar earthquakes. For historical earthquakes that have not been instrumentally recorded, only a macroseismic epicenter can be given. Etymology The word is derived from the Neo-Latin noun epicentrum, the latinisation of the ancient Greek adjective ἐπίκεντρος (), "occupying a cardinal point, situated on a centre", from ἐπί (epi) "on, upon, at" and κέντρον (kentron) "centre". The term was coined by Irish seismologist Robert Mallet. It is also used to mean "center of activity", as in "Travel is restricted in the Chinese province thought to be the epicentre of the SARS outbreak." Garner's Modern American Usage gives several examples of use in which "epicenter" is used to mean "center". Garner also refers to a William Safire article in which Safire quotes a geophysicist as attributing the use of the term to "spurious erudition on the part of writers combined with scientific illiteracy on the part of copy editors". Garner has speculated that these misuses may just be "metaphorical descriptions of focal points of unstable and potentially destructive environments." References Seismology Geometric centers Geographic position
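As a numerical sketch of the procedure described above (a distance from each station's S-minus-P arrival-time difference, then a location from three such distances), the example below assumes, purely for illustration, constant P- and S-wave speeds of 6 km/s and 3.5 km/s in the shallow crust and flat two-dimensional geometry. The station coordinates and times are invented so that they are consistent with a source at (30, 40), and the coarse grid search stands in for the iterative least-squares location codes with a crustal velocity model that the article describes.

```python
import math

# Assumed constant wave speeds in the shallow crust (illustrative values, km/s).
V_P, V_S = 6.0, 3.5

def distance_from_sp_time(dt_seconds):
    """Epicentral distance (km) implied by the S-minus-P arrival-time gap."""
    # t_S - t_P = d/V_S - d/V_P  =>  d = dt * V_P*V_S / (V_P - V_S)
    return dt_seconds * V_P * V_S / (V_P - V_S)

# Three stations (x, y in km) with invented S-P times for an event near (30, 40).
stations = [((0.0, 0.0), 5.95),
            ((100.0, 0.0), 9.60),
            ((0.0, 100.0), 7.99)]

# Each station defines a circle of possible epicenters; a coarse grid search
# picks the point whose distances best match all three circles.
best = min(((x, y) for x in range(101) for y in range(101)),
           key=lambda p: sum((math.dist(p, s) - distance_from_sp_time(dt)) ** 2
                             for s, dt in stations))
print(best)   # (30, 40)
```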
Epicenter
[ "Physics", "Mathematics" ]
1,404
[ "Point (geometry)", "Geographic position", "Geometric centers", "Position", "Symmetry" ]
173,905
https://en.wikipedia.org/wiki/David%20Baltimore
David Baltimore (born March 7, 1938) is an American biologist, university administrator, and 1975 Nobel laureate in Physiology or Medicine. He is a professor of biology at the California Institute of Technology (Caltech), where he served as president from 1997 to 2006. He founded the Whitehead Institute and directed it from 1982 to 1990. In 2008, he served as president of the American Association for the Advancement of Science. At age 37, Baltimore won the Nobel Prize with Renato Dulbecco and Howard M. Temin "for their discoveries concerning the interaction between tumour viruses and the genetic material of the cell", specifically the discovery of the enzyme reverse transcriptase. He has contributed to immunology, virology, cancer research, biotechnology, and recombinant DNA research. He has also trained many doctoral students and postdoctoral fellows, several of whom have gone on to notable and distinguished research careers. In addition to the Nobel Prize, he has received a number of awards, including the U.S. National Medal of Science in 1999 and the Lasker Award in 2021. Early life and education Baltimore was born on March 7, 1938, in New York City to Gertrude (Lipschitz) and Richard Baltimore. Raised in the Queens neighborhoods of Forest Hills and Rego Park, he moved with his family to suburban Great Neck, New York, while he was in second grade because his mother felt that the city schools were inadequate. His father had been raised as an Orthodox Jew and his mother was an atheist, and Baltimore observed Jewish holidays and would attend synagogue with his father through his Bar Mitzvah. He graduated from Great Neck North High School in 1956, and credits his interest in biology to a high-school summer spent at the Jackson Laboratory's Summer Student Program in Bar Harbor, Maine. It was at this program that he met Howard Temin, with whom he would later share the Nobel Prize. Baltimore earned his bachelor's degree with high honors at Swarthmore College in 1960. He was introduced to molecular biology by George Streisinger, under whose mentorship he worked for a summer at Cold Spring Harbor Laboratory as part of the inaugural cohort of the Undergraduate Research Program in 1959. There he also met two new MIT faculty, future Nobel Laureate Salvador Luria and Cyrus Levinthal, who were scouting for candidates for a new program of graduate education in molecular biology. They invited him to apply to the Massachusetts Institute of Technology (MIT). Baltimore's future promise was evident in his work as a graduate student when he entered MIT's graduate program in biology in 1960 with a brash and brilliant approach to learning science, completing his PhD thesis work in two years. His early interest in phage genetics quickly yielded to a passion for animal viruses. He took the Cold Spring Harbor course on animal virology in 1961 and he moved to Richard Franklin's (got his doctoral degree from Rockefeller Institute) lab at the Rockefeller Institute at New York City, which was one of the few labs pioneering molecular research on animal virology. There he made fundamental discoveries on virus replication and its effect on cell metabolism, including the first description of an RNA replicase. Career and research After his PhD, Baltimore returned to MIT for postdoctoral research with James Darnell in 1963. He continued his work on virus replication using poliovirus and pursued training in enzymology with Jerard Hurwitz at Albert Einstein College of Medicine in 1964/1965. 
Independent investigator In February 1965, Baltimore was recruited by Renato Dulbecco to the newly established Salk Institute for Biological Studies in La Jolla as an independent research associate. There he investigated poliovirus RNA replication and began a long and storied career of mentoring other scientists' early careers including Marc Girard, and Michael Jacobson. They discovered the mechanism of proteolytic cleavage of viral polyprotein precursors, pointing to the importance of proteolytic processing in the synthesis of eukaryotic proteins. He also met his future wife, Alice Huang, who began working with Baltimore at Salk in 1967. He and Alice together carried out key experiments on defective interfering particles and viral pseudo types. During this work, he made a key discovery that polio produced its viral proteins as a single large polyprotein that was subsequently processed into individual functional peptides. Massachusetts Institute of Technology Reverse transcriptase In 1968, he was recruited once more by soon-to-be Nobel laureate Salvador Luria to the department of biology at MIT as an associate professor of microbiology. Alice S. Huang also moved to MIT to continue her research on vesicular stomatitis virus (VSV). They became a couple, and married in October 1968. At MIT, Huang, Baltimore, and graduate student Martha Stampfer discovered that VSV replication involved an RNA-dependent RNA polymerase within the virus particle, and used a novel strategy to replicate its RNA genome. VSV entered a host cell as a single negative strand of RNA, but brought with it RNA polymerase to stimulate the processes of transcription and replication of more RNA. Baltimore extended this work and examined two RNA tumor viruses, Rauscher murine leukemia virus and Rous sarcoma virus. He went on to discover reverse transcriptase (RTase or RT) – the enzyme that polymerizes DNA from an RNA template. In doing so, he discovered a distinct class of viruses, later called retroviruses, that use an RNA template to catalyze synthesis of viral DNA. This overturned the simplified version of the central dogma of molecular biology that stated that genetic information flows unidirectionally from DNA to RNA to proteins. Reverse transcriptase is essential for the reproduction of retroviruses, allowing such viruses to turn viral RNA strands into viral DNA strands. The viruses that fall into this category include HIV. The discovery of reverse transcriptase, made contemporaneously with Howard Temin, who had proposed the provirus hypothesis, showed that genetic information could traffic bidirectionally between DNA and RNA. They published these findings in back-to-back papers in the journal Nature. This discovery made it easier to isolate and reproduce individual genes, and was heralded as evidence that molecular and virological approaches to understanding cancer would yield new cancer treatments. This may have influenced President Richard Nixon's War on Cancer which was launched in 1971 and substantially increased research funding for the disease. In 1972, at the age of 34, Baltimore was awarded tenure as a professor of biology at MIT, a post that he held until 1997. Asilomar conference on recombinant DNA Baltimore also helped Paul Berg and Maxine Singer to organize the Asilomar Conference on Recombinant DNA, held in February 1975. 
The conference discussed possible dangers of new biotechnology, drew up voluntary safety guidelines, and issued a call for an ongoing moratorium on certain types of experiments and review of possible experiments, which has been institutionalized by recombinant DNA advisory committees established at essentially all US academic institutions conducting molecular biology research. Baltimore was well aware of the importance of the changes occurring in the laboratory: "The whole Asilomar process opened up to the world that modern biology had new powers that you had never conceived of before." MIT Cancer Center In 1973, he was awarded an American Cancer Society Professor of Microbiology that provided substantial salary support. Also in 1973, he became one of the early faculty members in the newly organized MIT Center for Cancer (CCR), capping a creative and industrious period of his career with nearly fifty research publications including the paradigm-shifting paper on reverse transcriptase. The MIT CCR was led by Salvador E. Luria and quickly achieved pre-eminence with a group of faculty including Baltimore, Phillips Robbins, Herman Eisen, Philip Sharp, and Robert Weinberg, who all went on to illustrious research careers. Baltimore was honored as a Fellow of the American Academy of Arts and Sciences in 1974. He returned to New York City in 1975, for a year-long sabbatical at Rockefeller University working with Jim Darnell. Nobel Prize In 1975, at the age of 37, he shared the Nobel Prize for Physiology or Medicine with Howard Temin and Renato Dulbecco. The citation reads, "for their discoveries concerning the interaction between tumor viruses and the genetic material of the cell." At the time, Baltimore's greatest contribution to virology was his discovery of reverse transcriptase (Rtase or RT) which is essential for the reproduction of retroviruses such as HIV and was discovered independently, and at about the same time, by Satoshi Mizutani and Temin. After winning the Nobel Prize, Baltimore reorganized his laboratory, refocusing on immunology and virology, with immunoglobulin gene expression as a major area of interest. He tackled new problems such as the pathogenesis of Abelson murine leukemia virus (AMuLV), lymphocyte differentiation and related topic in immunology. In 1980, his group isolated the oncogene in AMuLV and showed it was a member of a new class of protein kinases that used the amino acid tyrosine as a phosphoacceptor. This type of enzymatic activity was also discovered by Tony Hunter, who has done extensive work in the area. He also continued to pursue fundamental questions in RNA viruses and in 1981, Baltimore and Vincent Racaniello, a post-doctoral fellow in his laboratory, used recombinant DNA technology to generate a plasmid encoding the genome of poliovirus, an animal RNA virus. The plasmid DNA was introduced into cultured mammalian cells and infectious poliovirus was produced. The infectious clone, DNA encoding the genome of a virus, is a standard tool used today in virology. Whitehead Institute for Biomedical Research In 1982, with a charitable donation by businessman and philanthropist Edwin C. "Jack" Whitehead, Baltimore was asked to help establish a self-governed research institute dedicated to basic biomedical research. Baltimore persuaded Whitehead that MIT would be the ideal home for the new institute, convinced that it would be superior at hiring the best researchers in biology at the time, thus ensuring quality. 
Persuading MIT faculty to support the idea was far more difficult. MIT as an institution had never housed another before, and concerns were raised that the wealth of the institute might skew the biology department in directions faculty did not wish to take, and that Baltimore himself would gain undue influence over hiring within the department. The controversy was made worse by an article published by the Boston Globe framing the institute as corporate takeover of MIT. After a year of intensive discussions and planning, faculty finally voted in favor of the institute. Whitehead, Baltimore, and the rest of the planning team devised a unique structure of an independent research institute composed of "members" with a close relationship with the department of biology of MIT. This structure continues to this day to attract an elite interactive group of faculty to the Department of Biology at MIT and has served as a model for other distinguished institutes such as the Broad Institute. The Whitehead Institute for Biomedical Research (WIBR) was launched with $35 million to construct and equip a new building located across the street from the MIT cancer center at 9 Cambridge Center in Cambridge Massachusetts. The institute also received $5 million per year in guaranteed income and a substantial endowment in his will (for a total gift of $135 million). Under Baltimore's leadership, a distinguished group of founding members including Gerald Fink, Rudolf Jaenisch, Harvey Lodish, and Robert Weinberg was assembled and eventually grew to 20 members in disciplines ranging from immunology, genetics, and oncology to fundamental developmental studies in mice and fruit flies. Whitehead Institute's contributions to bioscience have long been consistently outstanding. Less than a decade after its founding with continued leadership by Baltimore, the Whitehead Institute was named the top research institution in the world in molecular biology and genetics, and over a recent 10-year period, papers published by Whitehead scientists, including many from Baltimore's own lab, were the most cited papers of any biological research institute. The Whitehead Institute was an important partner in the Human Genome Project. Baltimore served as director of the WIBR and expanded the faculty and research areas into key areas of research including mouse and drosophila genetics. During this time, Baltimore's own research program thrived in the new Institute. Important breakthroughs from Baltimore's lab include the discovery of the key transcription factor NF-κB by Dr. Ranjan Sen and David Baltimore in 1986. This was part of a broader investigation to identify nuclear factors required for lg gene expression in B lymphocytes. However, NF-κB turned out to have much broader importance in both innate and adaptive immunity and viral regulation. NF-κB is involved in regulating cellular responses and belongs to the category of "rapid-acting" primary transcription factors. Their discovery led to an "information explosion" involving "one of the most intensely studied signaling paradigms of the last two decades." As early as 1984, Rudolf Grosschedl and David Weaver, postdoctoral fellows, in Baltimore's laboratory, were experimenting with the creation of transgenic mice as a model for the study of disease. They suggested that "control of lg gene rearrangement might be the only mechanism that determines the specificity of heavy chain gene expression within the lymphoid cell lineage." 
in 1987, they created transgenic mice with the fused gene that developed fatal leukemia. David G. Schatz and Marjorie Oettinger, as students in Baltimore's research group in 1988 and 1989, identified the protein pair that rearranges immunoglobulin genes, the recombination-activating gene RAG-1 and RAG-2. this was a key discovery in determining how the immune system can have specificity for a given molecule out of many possibilities, and was considered by Baltimore as of 2005 to be "our most significant discovery in immunology". In 1990, as a student in David Baltimore's laboratory at MIT, George Q. Daley demonstrated that a fusion protein called bcr-abl is sufficient to stimulate cell growth and cause chronic myelogenous leukemia (CML). This work helped to identify a class of proteins that become hyperactive in specific types of cancer cells. It helped to lay the groundwork for a new type of drug, attacking cancer at the genetic level: Brian Druker's development of the anti-cancer drug Imatinib (Gleevec), which deactivates bcr-abl proteins. Gleevec has shown impressive results in treating chronic myelogenous leukemia and also promise in treating gastrointestinal stromal tumor (GIST). Rockefeller University Baltimore served as the director of the Whitehead Institute until July 1, 1990, when he was appointed the sixth president of Rockefeller University in New York City. He moved his research group to New York in stages and continued to make creative contributions to virology and cellular regulation. He also began important reforms in fiscal and faculty management and promoted the status of junior faculty at the university. After resigning on December 3, 1991 (see Imanishi-Kari case), Baltimore remained on the Rockefeller University faculty and continued research until the spring of 1994. He was invited to return to MIT and rejoined the faculty as the Ivan R. Cottrell Professor of Molecular Biology and Immunology. California Institute of Technology On May 13, 1997, Baltimore was appointed president of the California Institute of Technology (Caltech). He began serving in the office October 15, 1997 and was inaugurated March 9, 1998. During Baltimore's tenure at Caltech, United States President Bill Clinton awarded Baltimore the National Medal of Science in 1999 for his numerous contributions to the scientific world. In 2004, Rockefeller University gave Baltimore its highest honor, Doctor of Science (honoris causa). In 2003, as a postdoctoral fellow in David Baltimore's lab at Caltech, Matthew Porteus was the first to demonstrate precise gene editing in human cells using chimeric nucleases. In October 2005, Baltimore resigned the office of the president of Caltech, saying, "This is not a decision that I have made easily, but I am convinced that the interests of the Institute will be best served by a presidential transition at this particular time in its history...". Former Georgia Tech Provost Jean-Lou Chameau succeeded Baltimore as president of Caltech. Baltimore was appointed President Emeritus and the Robert Andrews Milikan Professor of Biology at Caltech and remains an active member of the institute's community. On January 21, 2021, Caltech president Thomas F. Rosenbaum announced the removal of the name of Caltech's founding president and first Nobel laureate, Robert A. Millikan, from campus buildings, assets, and honors due to Millikan's substantial participation in the eugenics movement. Baltimore's title was changed to "Distinguished Professor of Biology." 
Caltech Laboratory (1997–2019) Baltimore's laboratory at Caltech focused on two major research areas: understanding the development and functioning of the mammalian immune system and translational studies creating viral vectors to make the immune system more effective in resisting cancer. Their basic studies went in two directions: understanding the diverse activity of the NF-κB transcription factor, and understanding the normal and pathologic functions of microRNA. Translational Science Initiatives A primary focus of Baltimore's lab was use of gene therapy methods to treat HIV and cancer. In the early 2000s one of Baltimore's graduate students, Lili Yang, developed a lentivirus vector that allowed for the cloning of genes for two chains of TCR. Recognizing its potentially profound implications for enhancing immunity, Baltimore developed a translational research initiative within his laboratory called "Engineering Immunity." The Bill and Melinda Gates Foundation awarded the program with a Grand Challenge Grant, and he used the funding to divide the initiative into four research programs and hire additional lab staff to lead each one. Two of the research programs sparked gene therapy start-up companies, Calimmune and Immune Design Corp, founded in 2006 and 2008 respectively. A third program focused on the development of an HIV vaccine, and eventually lead to clinical trials at NIH. In 2009 Baltimore became director of the Joint Center for Translational Medicine, a shared initiative between Caltech and UCLA aimed at developing bench to bedside medicine. MicroRNA Research A focus of Baltimore's lab from his arrival at Caltech to the lab's closure in 2018 was understanding the role of microRNA in the immune system. MicroRNAs provide fine control over gene expression by regulating the amount of protein made by particular messenger RNAs. In recent research led by Jimmy Zhao, Baltimore's team has discovered a small RNA molecule called microRNA-146a (miR-146a) and bred a strain of mice that lacks miR146a. They have used the miR146a(-) mice as a model to study the effects of chronic inflammation on the activity of hematopoietic stem cells (HSCs). Their results suggest that microRNA-146a protects HSCs during chronic inflammation, and that its lack may contribute to blood cancers and bone marrow failure. Splicing Control Research Another concentration within Baltimore's lab in recent years was control of inflammatory and immune responses, specifically splicing control of gene expression after inflammatory stimuli. In 2013 they discovered that ordered expression of genes following an inflammatory stimulus was controlled by splicing, not transcription as previously supposed. This led to further discoveries that delayed splicing was caused by introns, with the revelation that RNA-binding protein BUD13 acts at this intron to increase the amount of successful splicing (2 articles by Luke Frankiw published in 2019 and 2020). In an autobiographical piece published in Annual Review Immunology in 2019, Baltimore announced that half of his lab space at Caltech would be taken over by a new assistant professor in Fall 2018, and his current lab group would be the last. "I have been involved in research for 60 years, and I think it is time to leave the field to younger people." Public policy In the span of his career, Baltimore has profoundly impacted national science policy debates, including the AIDS epidemic and recombinant DNA research. 
His efforts to organize the Asilomar Conference on Recombinant DNA were key to creating consensus within scientific and policy spheres. In recent years Baltimore has joined with other scientists to call for a worldwide moratorium on use of a new genome-editing technique to alter inheritable human DNA. A key step enabling researchers to slice up any DNA sequence they choose was developed by Emmanuelle Charpentier, then at Umea University in Sweden, and Jennifer A. Doudna of the University of California, Berkeley. Reminiscent of the Asilomar conference on recombinant DNA in 1975, those involved want both scientists and the public to be more aware of the ethical issues and risks involved with new techniques for genome modification. An early spokesperson for federal funding for AIDS research, Baltimore co-chaired the 1986 National Academy of Sciences committee on a National Strategy for AIDS. In 1986, he and Sheldon M. Wolff were invited by the National Academy of Sciences and the Institute of Medicine to coauthor an independent report: Confronting AIDS (1986), in which they called for a $1 billion research program for HIV/AIDS. As of 1996 he was appointed head of the National Institutes of Health (NIH) AIDS Vaccine Research Committee (AVRC). Biotechnology Baltimore holds nearly 100 different biotechnology patents in the US and Europe, and has been preeminent in American biotechnology since the 1970s. In addition to Calimmune and Immune Design, he also helped found s2A Molecular, Inc. He has consulted at various companies including Collaborative Research, Bristol Myers Squibb, and most recently Virtualitics. He serves on the board of directors at several companies and non-profit institutions including Regulus Therapeutics and Appia Bio. He has also been a member of numerous Scientific Advisory Boards, and currently serves with PACT Pharma, Volastra Therapeutics, Vir Biotechnology, and the Center for Infectious Diseases Research at Westlake University. He is the principal scientific advisor for the Science Philanthropy Alliance. Awards and legacy Baltimore's honors include the 1970 Gustave Stern Award in Virology, 1971 Eli Lilly and Co. Award in Microbiology or Immunology, 1999 National Medal of Science, and 2000 Warren Alpert Foundation Prize. He was elected to the National Academy of Sciences USA (NAS) in1974; the American Academy of Arts and Sciences, 1974; the NAS Institute of Medicine (IOM), 1974; the American Association of Immunologists, 1984; the American Philosophical Society, 1997. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1987; the French Academy of Sciences, 2000; and the American Association for Cancer Research (AACR). He is also a member of the Pontifical Academy of Sciences, 1978. In 2008, Baltimore was president of the American Association for the Advancement of Science (AAAS). He has published over 700 peer-reviewed articles. Baltimore is a member of the USA Science and Engineering Festival's Advisory Board and an Xconomist (an editorial advisor for the tech news and media company, Xconomy). Baltimore also serves on The Jackson Laboratory's board of trustees, the Bulletin of the Atomic Scientists' Board of Sponsors, Amgen, Inc.'s board of directors, and numerous other organizations and their boards. In 2019 Caltech named a graduate fellowship program in biochemistry and molecular biophysics in honor of Baltimore. 
The program combined a $7.5 million gift from the Amgen Foundation with an existing one-year Amgen fellowship and $3.75 million given by Caltech's Gordon and Betty Moore Graduate Fellowship Match. Controversies Imanishi-Kari case During the late 1980s and early 1990s, Thereza Imanishi-Kari, a scientist who was not in Baltimore's laboratory but in a separate, independent laboratory at MIT, was implicated in a case of scientific fraud. The case received extensive news coverage and a Congressional investigation. The case was linked to Baltimore's name because of his scientific collaboration with and later his strong defense of Imanishi-Kari against accusations of fraud. In 1986, while a professor of biology at MIT and director at Whitehead, Baltimore co-authored a scientific paper on immunology with Thereza Imanishi-Kari (an assistant professor of biology who had her own laboratory at MIT) as well as four others. A postdoctoral fellow in Imanishi-Kari's laboratory, Margot O'Toole, who was not an author, reported concerns about the paper, ultimately accusing Imanishi-Kari of fabricating data in a cover-up. Baltimore, however, refused to retract the paper. O'Toole soon dropped her challenge, but the NIH, which had funded the contested paper's research, began investigating, at the insistence of Walter W. Stewart, a self-appointed fraud buster, and Ned Feder, his lab head at the NIH. Representative John Dingell (D-MI) also aggressively pursued it, eventually calling in U.S. Secret Service (USSS; U.S. Treasury) document examiners. Around October 1989, when Baltimore was appointed president of Rockefeller University, around a third of the faculty opposed his appointment because of concerns about his behaviour in the Imanishi-Kari case. He visited every laboratory, one by one, to hear those concerns directly from each group of researchers. In a draft report dated March 14, 1991, based mainly on USSS forensics findings, NIH's fraud unit, then called the Office of Scientific Integrity (OSI), accused Imanishi-Kari of falsifying and fabricating data. It also criticized Baltimore for failing to embrace O'Toole's challenge. Less than a week later, the report was leaked to the press. Baltimore and three co-authors then retracted the paper; however, Imanishi-Kari and Moema H. Reis did not sign the retraction. In the report, Baltimore stated that he may have been "too willing to accept" Imanishi-Kari's explanations and felt he "did too little to seek an independent verification of her data and conclusions." Baltimore publicly apologized for not taking a whistle-blower's charge more seriously. Amid concerns raised by negative publicity in connection with the scandal, Baltimore resigned as president of Rockefeller University and rejoined the MIT Biology faculty. In July 1992, the US Attorney for the District of Maryland, who had been investigating the case, announced he would not bring criminal or civil charges against Imanishi-Kari. In October 1994, however, OSI's successor, the Office of Research Integrity (ORI; HHS) found Imanishi-Kari guilty on 19 counts of research misconduct, basing its conclusions largely on Secret Service analysis of laboratory notebooks, documents that these investigators had little experience or expert guidance in interpreting. An HHS appeals panel began meeting in June 1995 to review all charges in detail. In June 1996, the panel ruled that the ORI had failed to prove any of its 19 charges. 
After throwing out much of the documentary evidence gathered by the ORI, the panel dismissed all charges against Imanishi-Kari. As their final report stated, the HHS panel "found that much of what ORI presented was irrelevant, had limited probative value, was internally inconsistent, lacked reliability or foundation, was not credible or not corroborated, or was based on unwarranted assumptions." It did conclude that "The Cell paper as a whole is rife with errors of all sorts ... [including] some which, despite all these years and layers of review, have never previously been pointed out or corrected. Responsibility ... must be shared by all participants." Neither OSI nor ORI ever accused Baltimore of research misconduct. The reputations of Stewart and Feder, who had pushed for the investigation, were badly damaged. The pair were reassigned to other positions at NIH because they failed to maintain productivity in their roles as scientists and questions were raised about the legitimacy of their self-appointed inquiries into scientific integrity. The Imanishi-Kari controversy was one among several prominent scientific integrity cases of the 1980s and 1990s in the United States. In nearly all cases, defendants were ultimately cleared. The case profoundly impacted the process for handling of scientific misconduct in the United States. Baltimore has been both defended and criticized for his actions in this matter. In 1993, Yale University mathematician Serge Lang strongly criticized Baltimore's behavior. Historian of science Daniel Kevles, writing after the exoneration of Imanishi-Kari, recounted the affair in his 1998 book, The Baltimore Case. Horace Freeland Judson also gives a critical assessment of Baltimore's actions in The Great Betrayal: Fraud In Science. Baltimore has also written his own analysis. Luk van Parijs case In 2005, at Baltimore's request, Caltech began investigating the work that Luk van Parijs had conducted while a postdoc in Baltimore's laboratory. Van Parijs first came under suspicion at MIT, for work done after he had left Baltimore's lab. After van Parijs had been fired by MIT, his doctoral supervisor also noted problems with work van Parijs did at the Brigham and Women's Hospital, before leaving Harvard to go to Baltimore's lab. The Caltech investigation concluded in March 2007. It found van Parijs alone committed research misconduct, and that four papers co-authored by Baltimore, van Parijs, and others required correction. COVID-19 and lab-leak theory In May 2021, Baltimore was quoted in the Bulletin of the Atomic Scientists in an article about the origins of the COVID-19 virus, saying, "When I first saw the furin cleavage site in the viral sequence, with its arginine codons, I said to my wife it was the smoking gun for the origin of the virus. These features make a powerful challenge to the idea of a natural origin for SARS2." This quote was widely shared and gave credence to the possibility of a Wuhan lab leak that has been discussed extensively as part of investigations into the origin of COVID-19. A month later, Baltimore told the Los Angeles Times that he "should have softened the phrase 'smoking gun' because I don't believe that it proves the origin of the furin cleavage site but it does sound that way. I believe that the question of whether the sequence was put in naturally or by molecular manipulation is very hard to determine but I wouldn't rule out either origin." 
Awards and honors 1971 First recipient of the Gustav Stern Award in Virology 1971 Warren Triennial Prize 1971 Eli Lilly Award in Immunology and Microbiology 1974 Fellow of the American Academy of Arts and Sciences 1974 NAS Award in Molecular Biology 1974 Canada Gairdner International Award 1975 Nobel Prize in Physiology or Medicine 1983 EMBO Member 1986 Golden Plate Award of the American Academy of Achievement 1999 National Medal of Science 2000 Warren Alpert Foundation Prize 2021 Lasker-Koshland Special Achievement Award in Medical Science Honorary degrees 1976 Swarthmore College, Swarthmore, PA 1987 Mount Holyoke College, So. Hadley, MA 1990 Mount Sinai Medical Center, New York, NY 1990 Bard College, Annandale-on-Hudson, NY 1990 University of Helsinki, Helsinki, Finland 1998 Weizmann Institute of Science, Israel 1999 Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 1999 University of Alabama, Birmingham, AL 2001 California Polytechnic State University, San Luis Obispo, CA 2004 Columbia University, New York, NY 2004 Yale University, New Haven, CT 2004 The Rockefeller University, New York, NY 2005 Harvard University, Cambridge, MA 2012 University of Buenos Aires, Buenos Aires, Argentina Personal life Baltimore married Dr. Alice S. Huang in 1968. The couple has one daughter. Baltimore is an avid fly-fisher. Books Luria, S. E., J.E. Darnell, D. Baltimore and A. Campbell (1978) General Virology 3rd edition John Wiley and Sons, New York, New York. Darnell, J., H. Lodish and D. Baltimore (1986) Molecular Cell Biology, Scientific American, New York, New York. See also History of RNA biology List of Jewish Nobel laureates List of RNA biologists Baltimore classification 73079 Davidbaltimore References External links Caltech Biology Division Faculty member page Baltimore Laboratory at Caltech site David Baltimore's Seminars: "Danger from the Wild: HIV, Can We Conquer It?" Initial reports of ribonucleic acid-dependent DNA polymerase activity: Department of Health & Human Services, Departmental Appeals Board, Research Integrity Adjudications Panel Thereza Imanishi-Kari, Ph.D. appeal ruling (Docket No. A-95-33, Decision No. 1582, June 21, 1996; Presentation missing footnotes 169–235 & footnote reference nos. 170–235). Nobel Prize video interview "The Discover Magazine Interview with David Baltimore" upon his retirement from the presidency of Caltech in 2006 * PBS interview with Baltimore on AIDS, hepatitis, vaccines and science politics 1938 births Living people Nobel laureates in Physiology or Medicine American Nobel laureates Jewish physicians American immunologists California Institute of Technology faculty Fellows of the American Academy of Arts and Sciences Fellows of the American Association for the Advancement of Science Foreign members of the Royal Society History of biotechnology Jewish American scientists Massachusetts Institute of Technology School of Science faculty Members of the European Molecular Biology Organization Members of the French Academy of Sciences Members of the Pontifical Academy of Sciences Members of the United States National Academy of Sciences National Medal of Science laureates People from Forest Hills, Queens People from Great Neck, New York Presidents of the California Institute of Technology Swarthmore College alumni Presidents of Rockefeller University Biotechnologists John L. 
Miller Great Neck North High School alumni Members of the National Academy of Medicine People from Rego Park, Queens American biotechnologists Members of the American Philosophical Society
David Baltimore
[ "Biology" ]
7,008
[ "History of biotechnology" ]
173,918
https://en.wikipedia.org/wiki/Complex%20conjugate
In mathematics, the complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign. That is, if $a$ and $b$ are real numbers, then the complex conjugate of $a + bi$ is $a - bi$. The complex conjugate of $z$ is often denoted as $\overline{z}$ or $z^*$. In polar form, if $r$ and $\varphi$ are real numbers then the conjugate of $r e^{i\varphi}$ is $r e^{-i\varphi}$. This can be shown using Euler's formula. The product of a complex number and its conjugate is a real number: $z\overline{z} = a^2 + b^2$ (or $r^2$ in polar coordinates). If a root of a univariate polynomial with real coefficients is complex, then its complex conjugate is also a root. Notation The complex conjugate of a complex number $z$ is written as $\overline{z}$ or $z^*$. The first notation, a vinculum, avoids confusion with the notation for the conjugate transpose of a matrix, which can be thought of as a generalization of the complex conjugate. The second is preferred in physics, where dagger (†) is used for the conjugate transpose, as well as electrical engineering and computer engineering, where bar notation can be confused for the logical negation ("NOT") Boolean algebra symbol, while the bar notation is more common in pure mathematics. If a complex number is represented as a matrix, the notations are identical, and the complex conjugate corresponds to the matrix transpose, which is a flip along the diagonal. Properties The following properties apply for all complex numbers $z$ and $w$, unless stated otherwise, and can be proved by writing $z$ and $w$ in the form $a + bi$. For any two complex numbers, conjugation is distributive over addition, subtraction, multiplication and division: $\overline{z \pm w} = \overline{z} \pm \overline{w}$, $\overline{zw} = \overline{z}\,\overline{w}$, and $\overline{z/w} = \overline{z}/\overline{w}$ (for $w \neq 0$). A complex number is equal to its complex conjugate if its imaginary part is zero, that is, if the number is real. In other words, real numbers are the only fixed points of conjugation. Conjugation does not change the modulus of a complex number: $|\overline{z}| = |z|$. Conjugation is an involution, that is, the conjugate of the conjugate of a complex number $z$ is $z$. In symbols, $\overline{\overline{z}} = z$. The product of a complex number with its conjugate is equal to the square of the number's modulus: $z\overline{z} = |z|^2$. This allows easy computation of the multiplicative inverse of a complex number given in rectangular coordinates: $z^{-1} = \overline{z}/|z|^2$ for $z \neq 0$. Conjugation is commutative under composition with exponentiation to integer powers, with the exponential function, and with the natural logarithm for nonzero arguments: $\overline{z^n} = \left(\overline{z}\right)^n$, $\overline{e^z} = e^{\overline{z}}$, and $\overline{\ln z} = \ln \overline{z}$ for $z \neq 0$. If $p$ is a polynomial with real coefficients and $p(z) = 0$, then $p(\overline{z}) = 0$ as well. Thus, non-real roots of real polynomials occur in complex conjugate pairs (see Complex conjugate root theorem). In general, if $\varphi$ is a holomorphic function whose restriction to the real numbers is real-valued, and $\varphi(z)$ and $\varphi(\overline{z})$ are defined, then $\varphi(\overline{z}) = \overline{\varphi(z)}$. The map $\sigma(z) = \overline{z}$ from $\mathbb{C}$ to $\mathbb{C}$ is a homeomorphism (where the topology on $\mathbb{C}$ is taken to be the standard topology) and antilinear, if one considers $\mathbb{C}$ as a complex vector space over itself. Even though it appears to be a well-behaved function, it is not holomorphic; it reverses orientation whereas holomorphic functions locally preserve orientation. It is bijective and compatible with the arithmetical operations, and hence is a field automorphism. As it keeps the real numbers fixed, it is an element of the Galois group of the field extension $\mathbb{C}/\mathbb{R}$. This Galois group has only two elements: $\sigma$ and the identity on $\mathbb{C}$. Thus the only two field automorphisms of $\mathbb{C}$ that leave the real numbers fixed are the identity map and complex conjugation.
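A quick, hedged way to check several of the listed properties is with Python's built-in complex type; the snippet below is illustrative only and not part of the article, and the specific numbers are arbitrary.

```python
# Illustrative check of several conjugation properties using Python's complex type.
z, w = 3 + 4j, 1 - 2j

assert (z + w).conjugate() == z.conjugate() + w.conjugate()   # distributes over addition
assert (z * w).conjugate() == z.conjugate() * w.conjugate()   # distributes over multiplication
assert z.conjugate().conjugate() == z                          # involution
assert abs(z.conjugate()) == abs(z)                            # modulus is unchanged
assert z * z.conjugate() == abs(z) ** 2                        # product is |z|^2, a real number

# Multiplicative inverse from rectangular coordinates: 1/z = conj(z) / |z|^2
inv = z.conjugate() / abs(z) ** 2
assert abs(z * inv - 1) < 1e-12
```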
Use as a variable Once a complex number $z = x + yi$ or $z = re^{i\theta}$ is given, its conjugate is sufficient to reproduce the parts of the $z$-variable: Real part: $x = \operatorname{Re}(z) = \dfrac{z + \overline{z}}{2}$ Imaginary part: $y = \operatorname{Im}(z) = \dfrac{z - \overline{z}}{2i}$ Modulus (or absolute value): $r = |z| = \sqrt{z\overline{z}}$ Argument: $e^{i\theta} = e^{i\arg z} = \sqrt{z/\overline{z}}$, so $\theta = \arg z = \dfrac{1}{2i}\ln\dfrac{z}{\overline{z}}$ Furthermore, $\overline{z}$ can be used to specify lines in the plane: the set $\{z : z\overline{r} + \overline{z}r = 0\}$ is a line through the origin and perpendicular to $r$, since the real part of $z\overline{r}$ is zero only when the cosine of the angle between $z$ and $r$ is zero. Similarly, for a fixed complex unit $u = e^{ib}$, the equation $\dfrac{z - z_0}{\overline{z} - \overline{z_0}} = u^2$ determines the line through $z_0$ parallel to the line through 0 and $u$. These uses of the conjugate of $z$ as a variable are illustrated in Frank Morley's book Inversive Geometry (1933), written with his son Frank Vigor Morley. Generalizations The other planar real unital algebras, dual numbers, and split-complex numbers are also analyzed using complex conjugation. For matrices of complex numbers, $\overline{AB} = \overline{A}\,\overline{B}$, where $\overline{A}$ represents the element-by-element conjugation of $A$. Contrast this to the property $(AB)^* = B^*A^*$, where $A^*$ represents the conjugate transpose of $A$. Taking the conjugate transpose (or adjoint) of complex matrices generalizes complex conjugation. Even more general is the concept of adjoint operator for operators on (possibly infinite-dimensional) complex Hilbert spaces. All this is subsumed by the *-operations of C*-algebras. One may also define a conjugation for quaternions and split-quaternions: the conjugate of $a + bi + cj + dk$ is $a - bi - cj - dk$. All these generalizations are multiplicative only if the factors are reversed: $(zw)^* = w^* z^*$. Since the multiplication of planar real algebras is commutative, this reversal is not needed there. There is also an abstract notion of conjugation for vector spaces $V$ over the complex numbers. In this context, any antilinear map $\sigma : V \to V$ that satisfies $\sigma^2 = \operatorname{id}_V$, where $\sigma^2 = \sigma \circ \sigma$ and $\operatorname{id}_V$ is the identity map on $V$, and $\sigma(zv) = \overline{z}\,\sigma(v)$ for all $v \in V$ and for all $z \in \mathbb{C}$, is called a complex conjugation, or a real structure. As the involution $\sigma$ is antilinear, it cannot be the identity map on $V$. Of course, $\sigma$ is an $\mathbb{R}$-linear transformation of $V$, if one notes that every complex space $V$ has a real form obtained by taking the same vectors as in the original space and restricting the scalars to be real. The above properties actually define a real structure on the complex vector space $V$. One example of this notion is the conjugate transpose operation of complex matrices defined above. However, on generic complex vector spaces, there is no notion of complex conjugation. See also References Footnotes Bibliography Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988. (Antilinear maps are discussed in section 3.3.) Complex numbers
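For the matrix generalizations just described, a brief numpy sketch (again, not from the article) can illustrate the difference between element-wise conjugation and the conjugate transpose; the matrices are arbitrary illustrative values.

```python
# Hedged numpy sketch: element-wise conjugation versus the conjugate transpose (adjoint).
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
B = np.array([[2 - 1j, 1 + 1j],
              [4 + 0j, 1 - 3j]])

# Element-wise conjugation is multiplicative without reversing the factors:
assert np.allclose((A @ B).conj(), A.conj() @ B.conj())

# The conjugate transpose reverses the order of the factors:
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)
```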
Complex conjugate
[ "Mathematics" ]
1,270
[ "Complex numbers", "Mathematical objects", "Numbers" ]
173,926
https://en.wikipedia.org/wiki/Inductive%20bias
The inductive bias (also known as learning bias) of a learning algorithm is the set of assumptions that the learner uses to predict outputs of given inputs that it has not encountered. Inductive bias is anything which makes the algorithm learn one pattern instead of another pattern (e.g., step-functions in decision trees instead of continuous functions in linear regression models). Learning involves searching a space of solutions for a solution that provides a good explanation of the data. However, in many cases, there may be multiple equally appropriate solutions. An inductive bias allows a learning algorithm to prioritize one solution (or interpretation) over another, independently of the observed data. In machine learning, the aim is to construct algorithms that are able to learn to predict a certain target output. To achieve this, the learning algorithm is presented some training examples that demonstrate the intended relation of input and output values. Then the learner is supposed to approximate the correct output, even for examples that have not been shown during training. Without any additional assumptions, this problem cannot be solved since unseen situations might have an arbitrary output value. The kind of necessary assumptions about the nature of the target function are subsumed in the phrase inductive bias. A classical example of an inductive bias is Occam's razor, assuming that the simplest consistent hypothesis about the target function is actually the best. Here, consistent means that the hypothesis of the learner yields correct outputs for all of the examples that have been given to the algorithm. Approaches to a more formal definition of inductive bias are based on mathematical logic. Here, the inductive bias is a logical formula that, together with the training data, logically entails the hypothesis generated by the learner. However, this strict formalism fails in many practical cases in which the inductive bias can only be given as a rough description (e.g., in the case of artificial neural networks), or not at all. Types The following is a list of common inductive biases in machine learning algorithms. Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the bias used in the Naive Bayes classifier. Minimum cross-validation error: when trying to choose among hypotheses, select the hypothesis with the lowest cross-validation error. Although cross-validation may seem to be free of bias, the "no free lunch" theorems show that cross-validation must be biased, for example assuming that there is no information encoded in the ordering of the data. Maximum margin: when drawing a boundary between two classes, attempt to maximize the width of the boundary. This is the bias used in support vector machines. The assumption is that distinct classes tend to be separated by wide boundaries. Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis. Minimum features: unless there is good evidence that a feature is useful, it should be deleted. This is the assumption behind feature selection algorithms. Nearest neighbors: assume that most of the cases in a small neighborhood in feature space belong to the same class. Given a case for which the class is unknown, guess that it belongs to the same class as the majority in its immediate neighborhood. This is the bias used in the k-nearest neighbors algorithm. 
The assumption is that cases that are near each other tend to belong to the same class. Shift of bias Although most learning algorithms have a static bias, some algorithms are designed to shift their bias as they acquire more data. This does not avoid bias, since the bias shifting process itself must have a bias. See also Algorithmic bias Cognitive bias No free lunch theorem No free lunch in search and optimization References Bias Machine learning
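As a rough, hedged illustration of the article's opening contrast (step functions in decision trees versus a single straight line in linear regression), the following sketch fits both learners to the same one-dimensional data so their different inductive biases can be compared. It is not from the article, assumes scikit-learn and NumPy are available, and all names and values are illustrative.

```python
# Hedged sketch: two learners, same data, different inductive biases.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.1, size=50)

linear = LinearRegression().fit(X, y)                 # bias: the target is a straight line
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)   # bias: the target is piecewise constant

X_new = np.array([[0.25], [0.75]])
print("linear:", linear.predict(X_new))  # one global slope for all inputs
print("tree:  ", tree.predict(X_new))    # mean of the leaf region each input falls into
```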
Inductive bias
[ "Engineering" ]
767
[ "Artificial intelligence engineering", "Machine learning" ]
173,937
https://en.wikipedia.org/wiki/Cosmological%20principle
In modern physical cosmology, the cosmological principle is the notion that the spatial distribution of matter in the universe is uniformly isotropic and homogeneous when viewed on a large enough scale, since the forces are expected to act equally throughout the universe on a large scale, and should, therefore, produce no observable inequalities in the large-scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang. Definition Astronomer William Keel explains: The cosmological principle is usually stated formally as 'Viewed on a sufficiently large scale, the properties of the universe are the same for all observers.' This amounts to the strongly philosophical statement that the part of the universe which we can see is a fair sample, and that the same physical laws apply throughout. In essence, this in a sense says that the universe is knowable and is playing fair with scientists. The cosmological principle depends on a definition of "observer", and contains an implicit qualification and two testable consequences. "Observers" means any observer at any location in the universe, not simply any human observer at any location on Earth: as Andrew Liddle puts it, "the cosmological principle [means that] the universe looks the same whoever and wherever you are." The qualification is that variation in physical structures can be overlooked, provided this does not imperil the uniformity of conclusions drawn from observation: the Sun is different from the Earth, our galaxy is different from a black hole, some galaxies advance toward rather than recede from us, and the universe has a "foamy" texture of galaxy clusters and voids, but none of these different structures appears to violate the basic laws of physics. The two testable structural consequences of the cosmological principle are homogeneity and isotropy. Homogeneity means that the same observational evidence is available to observers at different locations in the universe ("the part of the universe which we can see is a fair sample"). Isotropy means that the same observational evidence is available by looking in any direction in the universe ("the same physical laws apply throughout"). The principles are distinct but closely related, because a universe that appears isotropic from any two (for a spherical geometry, three) locations must also be homogeneous. Origin The cosmological principle is first clearly asserted in the Philosophiæ Naturalis Principia Mathematica (1687) of Isaac Newton. In contrast to some earlier classical or medieval cosmologies, in which Earth rested at the center of universe, Newton conceptualized the Earth as a sphere in orbital motion around the Sun within an empty space that extended uniformly in all directions to immeasurably large distances. He then showed, through a series of mathematical proofs on detailed observational data of the motions of planets and comets, that their motions could be explained by a single principle of "universal gravitation" that applied as well to the orbits of the Galilean moons around Jupiter, the Moon around the Earth, the Earth around the Sun, and to falling bodies on Earth. That is, he asserted the equivalent material nature of all bodies within the Solar System, the identical nature of the Sun and distant stars and thus the uniform extension of the physical laws of motion to a great distance beyond the observational location of Earth itself. 
Implications Since the 1990s, observations assuming the cosmological principle have concluded that around 68% of the mass–energy density of the universe can be attributed to dark energy, which led to the development of the ΛCDM model. Observations show that more distant galaxies are closer together and have lower content of chemical elements heavier than lithium. Applying the cosmological principle, this suggests that heavier elements were not created in the Big Bang but were produced by nucleosynthesis in giant stars and expelled across a series of supernovae and new star formation from the supernova remnants, which means heavier elements would accumulate over time. Another observation is that the furthest galaxies (earlier time) are often more fragmentary, interacting and unusually shaped than local galaxies (recent time), suggesting evolution in galaxy structure as well. A related implication of the cosmological principle is that the largest discrete structures in the universe are in mechanical equilibrium. Homogeneity and isotropy of matter at the largest scales would suggest that the largest discrete structures are parts of a single indiscrete form, like the crumbs which make up the interior of a cake. At extreme cosmological distances, the property of mechanical equilibrium in surfaces lateral to the line of sight can be empirically tested; however, under the assumption of the cosmological principle, it cannot be detected parallel to the line of sight (see timeline of the universe). Cosmologists agree that in accordance with observations of distant galaxies, a universe must be non-static if it follows the cosmological principle. In 1923, Alexander Friedmann set out a variant of Albert Einstein's equations of general relativity that describe the dynamics of a homogeneous isotropic universe. Independently, Georges Lemaître derived in 1927 the equations of an expanding universe from the General Relativity equations. Thus, a non-static universe is also implied, independent of observations of distant galaxies, as the result of applying the cosmological principle to general relativity. Criticism Karl Popper criticized the cosmological principle on the grounds that it makes "our lack of knowledge a principle of knowing something". He summarized his position as: the "cosmological principles" were, I fear, dogmas that should not have been proposed. Observations Although the universe is inhomogeneous at smaller scales, according to the ΛCDM model it ought to be isotropic and statistically homogeneous on scales larger than 250 million light years. However, recent findings (the Axis of Evil for example) have suggested that violations of the cosmological principle exist in the universe and thus have called the ΛCDM model into question, with some authors suggesting that the cosmological principle is now obsolete and the Friedmann–Lemaître–Robertson–Walker metric breaks down in the late universe. Violations of isotropy The cosmic microwave background (CMB) is predicted by the ΛCDM model to be isotropic, that is to say that its intensity is about the same whichever direction we look at. Data from the Planck Mission shows hemispheric bias in 2 respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities), the collaboration noted that these features are not strongly statistically inconsistent with isotropy. 
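For orientation, the dynamics Friedmann derived for a homogeneous, isotropic universe are commonly summarized by the first Friedmann equation; the form below is standard textbook material supplied for context rather than content taken from the article itself.

```latex
% First Friedmann equation for a homogeneous, isotropic (FLRW) universe,
% with scale factor a(t), mass density \rho, curvature k and cosmological constant \Lambda.
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  - \frac{k c^{2}}{a^{2}}
  + \frac{\Lambda c^{2}}{3}
```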
Studies of cosmic microwave background temperature maps have led some authors to conclude that the universe around Earth is isotropic at high significance. There are, however, claims of isotropy violations from galaxy clusters, quasars, and type Ia supernovae. Violations of homogeneity The cosmological principle implies that at a sufficiently large scale, the universe is homogeneous. Based on N-body simulations in a ΛCDM universe, Yadav and his colleagues showed that the spatial distribution of galaxies is statistically homogeneous if averaged over scales of 260/h Mpc or more. A number of observations have been reported to be in conflict with predictions of maximal structure sizes: The Clowes–Campusano LQG, discovered in 1991, has a length of 580 Mpc, and is marginally larger than the consistent scale. The Sloan Great Wall, discovered in 2003, has a length of 423 Mpc, which is only just consistent with the cosmological principle. U1.11, a large quasar group discovered in 2011, has a length of 780 Mpc, and is two times larger than the upper limit of the homogeneity scale. The Huge-LQG, discovered in 2012, is three times longer than, and twice as wide as is predicted possible according to these current models, and so challenges our understanding of the universe on large scales. In November 2013, a new structure 10 billion light years away measuring 2000–3000 Mpc (more than seven times that of the Sloan Great Wall) was discovered, the Hercules–Corona Borealis Great Wall, putting further doubt on the validity of the cosmological principle. In September 2020, a 4.9σ conflict was found between the kinematic explanation of the CMB dipole and the measurement of the dipole in the angular distribution of a flux-limited, all-sky sample of 1.36 million quasars. In June 2021, the Giant Arc was discovered, a structure spanning approximately 1000 Mpc. It is located 2820 Mpc away and consists of galaxies, galactic clusters, gas, and dust. In January 2024, the Big Ring was discovered. It is located 9.2 billion light years away from Earth and has a diameter of 1.3 billion light years, or around the size of 15 full Moons as seen from Earth. However, as pointed out by Seshadri Nadathur in 2013 using statistical properties, the existence of structures larger than the homogeneous scale (260/h Mpc by Yadav's estimation) does not necessarily violate the cosmological principle in the ΛCDM model. CMB dipole The cosmic microwave background (CMB) provides a snapshot of a largely isotropic and homogeneous universe. The largest scale feature of the CMB is the dipole anisotropy; it is typically subtracted from maps due to its large amplitude. The standard interpretation of the dipole is that it is due to the Doppler effect caused by the motion of the solar system with respect to the CMB rest-frame. Several studies have reported dipoles in the large scale distribution of galaxies that align with the CMB dipole direction, but indicate a larger amplitude than would be caused by the CMB dipole velocity. A similar dipole is seen in data of radio galaxies; however, the amplitude of the dipole depends on the observing frequency, showing that these anomalous features cannot be purely kinematic. Other authors have found radio dipoles consistent with the CMB expectation. Further claims of anisotropy along the CMB dipole axis have been made with respect to the Hubble diagram of type Ia supernovae and quasars.
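To make the Mpc/h convention used above concrete, the following small sketch (not from the article) converts the quoted 260/h Mpc homogeneity scale to megaparsecs for an assumed value of the dimensionless Hubble parameter h and compares it with the structure sizes the article reports; h = 0.7 is an assumption for illustration only.

```python
# Hedged sketch: comparing reported structure sizes with the homogeneity scale.
h = 0.7                              # assumed dimensionless Hubble parameter
homogeneity_scale_mpc = 260 / h      # 260/h Mpc expressed in Mpc (~371 Mpc for h = 0.7)

structures_mpc = {                   # sizes as quoted in the article text
    "Sloan Great Wall": 423,
    "Clowes-Campusano LQG": 580,
    "U1.11": 780,
    "Giant Arc": 1000,
}

for name, size in structures_mpc.items():
    flag = "exceeds" if size > homogeneity_scale_mpc else "within"
    print(f"{name}: {size} Mpc ({flag} the ~{homogeneity_scale_mpc:.0f} Mpc scale)")
```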
Separately, the CMB dipole direction has emerged as a preferred direction in some studies of alignments in quasar polarizations, strong lensing time delay, type Ia supernovae, and standard candles. Some authors have argued that the correlation of distant effects with the dipole direction may indicate that its origin is not kinematic. Alternatively, Planck data has been used to estimate the velocity with respect to the CMB independently of the dipole, by measuring the subtle aberrations and distortions of fluctuations caused by relativistic beaming and separately using the Sunyaev-Zeldovich effect. These studies found a velocity consistent with the value obtained from the dipole, indicating it is consistent with being entirely kinematic. Measurements of the velocity field of galaxies in the local universe show that on short scales galaxies are moving with the local group, and that the mean velocity decreases with increasing distance. This follows the expectation that, if the CMB dipole is due to the local peculiar velocity field, the flow should become more homogeneous on large scales. Surveys of the local volume have been used to reveal a low density region in the opposite direction to the CMB dipole, potentially explaining the origin of the local bulk flow. Perfect cosmological principle The perfect cosmological principle is an extension of the cosmological principle, and states that the universe is homogeneous and isotropic in space and time. In this view the universe looks the same everywhere (on the large scale), the same as it always has and always will. The perfect cosmological principle underpins steady state theory and emerges from chaotic inflation theory. See also Background independence Copernican principle End of Greatness Friedmann–Lemaître–Robertson–Walker metric Large-scale structure of the cosmos Expansion of the universe Redshift References Physical cosmological concepts Principles Concepts in astronomy
Cosmological principle
[ "Physics", "Astronomy" ]
2,505
[ "Concepts in astronomy", "Concepts in astrophysics", "Physical cosmological concepts" ]
173,954
https://en.wikipedia.org/wiki/Orthogonal%20group
In mathematics, the orthogonal group in dimension , denoted , is the group of distance-preserving transformations of a Euclidean space of dimension that preserve a fixed point, where the group operation is given by composing transformations. The orthogonal group is sometimes called the general orthogonal group, by analogy with the general linear group. Equivalently, it is the group of orthogonal matrices, where the group operation is given by matrix multiplication (an orthogonal matrix is a real matrix whose inverse equals its transpose). The orthogonal group is an algebraic group and a Lie group. It is compact. The orthogonal group in dimension has two connected components. The one that contains the identity element is a normal subgroup, called the special orthogonal group, and denoted . It consists of all orthogonal matrices of determinant 1. This group is also called the rotation group, generalizing the fact that in dimensions 2 and 3, its elements are the usual rotations around a point (in dimension 2) or a line (in dimension 3). In low dimensions, these groups have been widely studied. The other component consists of all orthogonal matrices of determinant . This component does not form a group, as the product of any two of its elements is of determinant 1, and therefore not an element of the component. By extension, for any field , an matrix with entries in such that its inverse equals its transpose is called an orthogonal matrix over . The orthogonal matrices form a subgroup, denoted , of the general linear group ; that is More generally, given a non-degenerate symmetric bilinear form or quadratic form on a vector space over a field, the orthogonal group of the form is the group of invertible linear maps that preserve the form. The preceding orthogonal groups are the special case where, on some basis, the bilinear form is the dot product, or, equivalently, the quadratic form is the sum of the square of the coordinates. All orthogonal groups are algebraic groups, since the condition of preserving a form can be expressed as an equality of matrices. Name The name of "orthogonal group" originates from the following characterization of its elements. Given a Euclidean vector space of dimension , the elements of the orthogonal group are, up to a uniform scaling (homothecy), the linear maps from to that map orthogonal vectors to orthogonal vectors. In Euclidean geometry The orthogonal group is the subgroup of the general linear group , consisting of all endomorphisms that preserve the Euclidean norm; that is, endomorphisms such that Let be the group of the Euclidean isometries of a Euclidean space of dimension . This group does not depend on the choice of a particular space, since all Euclidean spaces of the same dimension are isomorphic. The stabilizer subgroup of a point is the subgroup of the elements such that . This stabilizer is (or, more exactly, is isomorphic to) , since the choice of a point as an origin induces an isomorphism between the Euclidean space and its associated Euclidean vector space. There is a natural group homomorphism from to , which is defined by where, as usual, the subtraction of two points denotes the translation vector that maps the second point to the first one. This is a well-defined homomorphism, since a straightforward verification shows that, if two pairs of points have the same difference, the same is true for their images by . The kernel of is the vector space of the translations.
So, the translations form a normal subgroup of , the stabilizers of two points are conjugate under the action of the translations, and all stabilizers are isomorphic to . Moreover, the Euclidean group is a semidirect product of and the group of translations. It follows that the study of the Euclidean group is essentially reduced to the study of . Special orthogonal group By choosing an orthonormal basis of a Euclidean vector space, the orthogonal group can be identified with the group (under matrix multiplication) of orthogonal matrices, which are the matrices such that It follows from this equation that the square of the determinant of equals , and thus the determinant of is either or . The orthogonal matrices with determinant form a subgroup called the special orthogonal group, denoted , consisting of all direct isometries of , which are those that preserve the orientation of the space. is a normal subgroup of , as being the kernel of the determinant, which is a group homomorphism whose image is the multiplicative group . This implies that the orthogonal group is an internal semidirect product of and any subgroup formed with the identity and a reflection. The group with two elements (where is the identity matrix) is a normal subgroup and even a characteristic subgroup of , and, if is even, also of . If is odd, is the internal direct product of and . The group is abelian (whereas is not abelian when ). Its finite subgroups are the cyclic group of -fold rotations, for every positive integer . All these groups are normal subgroups of and . Canonical form For any element of there is an orthogonal basis, where its matrix has the form where there may be any number, including zero, of ±1's; and where the matrices are 2-by-2 rotation matrices, that is matrices of the form with . This results from the spectral theorem by regrouping eigenvalues that are complex conjugate, and taking into account that the absolute values of the eigenvalues of an orthogonal matrix are all equal to . The element belongs to if and only if there are an even number of on the diagonal. A pair of eigenvalues can be identified with a rotation by and a pair of eigenvalues can be identified with a rotation by . The special case of is known as Euler's rotation theorem, which asserts that every (non-identity) element of is a rotation about a unique axis–angle pair. Reflections Reflections are the elements of whose canonical form is where is the identity matrix, and the zeros denote row or column zero matrices. In other words, a reflection is a transformation that transforms the space in its mirror image with respect to a hyperplane. In dimension two, every rotation can be decomposed into a product of two reflections. More precisely, a rotation of angle is the product of two reflections whose axes form an angle of . A product of up to elementary reflections always suffices to generate any element of . This results immediately from the above canonical form and the case of dimension two. The Cartan–Dieudonné theorem is the generalization of this result to the orthogonal group of a nondegenerate quadratic form over a field of characteristic different from two. The reflection through the origin (the map ) is an example of an element of that is not a product of fewer than reflections. Symmetry group of spheres The orthogonal group is the symmetry group of the -sphere (for , this is just the sphere) and all objects with spherical symmetry, if the origin is chosen at the center. The symmetry group of a circle is . 
The orientation-preserving subgroup is isomorphic (as a real Lie group) to the circle group, also known as , the multiplicative group of the complex numbers of absolute value equal to one. This isomorphism sends the complex number of absolute value  to the special orthogonal matrix In higher dimension, has a more complicated structure (in particular, it is no longer commutative). The topological structures of the -sphere and are strongly correlated, and this correlation is widely used for studying both topological spaces. Group structure The groups and are real compact Lie groups of dimension . The group has two connected components, with being the identity component, that is, the connected component containing the identity matrix. As algebraic groups The orthogonal group can be identified with the group of the matrices such that . Since both members of this equation are symmetric matrices, this provides equations that the entries of an orthogonal matrix must satisfy, and which are not all satisfied by the entries of any non-orthogonal matrix. This proves that is an algebraic set. Moreover, it can be proved that its dimension is which implies that is a complete intersection. This implies that all its irreducible components have the same dimension, and that it has no embedded component. In fact, has two irreducible components, that are distinguished by the sign of the determinant (that is or ). Both are nonsingular algebraic varieties of the same dimension . The component with is . Maximal tori and Weyl groups A maximal torus in a compact Lie group G is a maximal subgroup among those that are isomorphic to for some , where is the standard one-dimensional torus. In and , for every maximal torus, there is a basis on which the torus consists of the block-diagonal matrices of the form where each belongs to . In and , the maximal tori have the same form, bordered by a row and a column of zeros, and on the diagonal. The Weyl group of is the semidirect product of a normal elementary abelian 2-subgroup and a symmetric group, where the nontrivial element of each factor of acts on the corresponding circle factor of } by inversion, and the symmetric group acts on both and } by permuting factors. The elements of the Weyl group are represented by matrices in . The factor is represented by block permutation matrices with 2-by-2 blocks, and a final on the diagonal. The component is represented by block-diagonal matrices with 2-by-2 blocks either with the last component chosen to make the determinant . The Weyl group of is the subgroup of that of , where is the kernel of the product homomorphism given by ; that is, is the subgroup with an even number of minus signs. The Weyl group of is represented in by the preimages under the standard injection of the representatives for the Weyl group of . Those matrices with an odd number of blocks have no remaining final coordinate to make their determinants positive, and hence cannot be represented in . Topology Low-dimensional topology The low-dimensional (real) orthogonal groups are familiar spaces: , a two-point discrete space is is is doubly covered by . Fundamental group In terms of algebraic topology, for the fundamental group of is cyclic of order 2, and the spin group is its universal cover. For the fundamental group is infinite cyclic and the universal cover corresponds to the real line (the group is the unique connected 2-fold cover). 
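The isomorphism between the circle group and the orientation-preserving subgroup in two dimensions, described above, is easy to check numerically. The following is a minimal sketch in Python with NumPy (not part of the original article; the angle values are arbitrary): it maps a unit-modulus complex number to the corresponding 2-by-2 rotation matrix and confirms that multiplying complex numbers corresponds to multiplying matrices.

```python
import numpy as np

def so2_from_unit_complex(z: complex) -> np.ndarray:
    """Map a + b*i with |z| = 1 to the rotation matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

theta1, theta2 = 0.7, -1.9
z1, z2 = np.exp(1j * theta1), np.exp(1j * theta2)
R1, R2 = so2_from_unit_complex(z1), so2_from_unit_complex(z2)

# Group homomorphism: the product of the complex numbers maps to the product of the matrices.
assert np.allclose(so2_from_unit_complex(z1 * z2), R1 @ R2)

# Each image is special orthogonal: R^T R = I and det R = 1.
assert np.allclose(R1.T @ R1, np.eye(2)) and np.isclose(np.linalg.det(R1), 1.0)
```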
Homotopy groups Generally, the homotopy groups of the real orthogonal group are related to homotopy groups of spheres, and thus are in general hard to compute. However, one can compute the homotopy groups of the stable orthogonal group (aka the infinite orthogonal group), defined as the direct limit of the sequence of inclusions: Since the inclusions are all closed, hence cofibrations, this can also be interpreted as a union. On the other hand, is a homogeneous space for , and one has the following fiber bundle: which can be understood as "The orthogonal group acts transitively on the unit sphere , and the stabilizer of a point (thought of as a unit vector) is the orthogonal group of the perpendicular complement, which is an orthogonal group one dimension lower." Thus the natural inclusion is -connected, so the homotopy groups stabilize, and for : thus the homotopy groups of the stable space equal the lower homotopy groups of the unstable spaces. From Bott periodicity we obtain , therefore the homotopy groups of are 8-fold periodic, meaning , and so one need list only the first 8 homotopy groups: Relation to KO-theory Via the clutching construction, homotopy groups of the stable space are identified with stable vector bundles on spheres (up to isomorphism), with a dimension shift of 1: . Setting (to make fit into the periodicity), one obtains: Computation and interpretation of homotopy groups Low-dimensional groups The first few homotopy groups can be calculated by using the concrete descriptions of low-dimensional groups. , from orientation-preserving/reversing (this class survives to and hence stably) , which is spin comes from . , which surjects onto ; this latter thus vanishes. Lie groups From general facts about Lie groups, always vanishes, and is free (free abelian). Vector bundles is a vector bundle over , which consists of two points. Thus over each point, the bundle is trivial, and the non-triviality of the bundle is the difference between the dimensions of the vector spaces over the two points, so is the dimension. Loop spaces Using concrete descriptions of the loop spaces in Bott periodicity, one can interpret the higher homotopies of in terms of simpler-to-analyze homotopies of lower order. Using π0, and have two components, and have countably many components, and the rest are connected. Interpretation of homotopy groups In a nutshell: is about dimension is about orientation is about spin is about topological quantum field theory. Let be any of the four division algebras , , , , and let be the tautological line bundle over the projective line , and its class in K-theory. Noting that , , , , these yield vector bundles over the corresponding spheres, and is generated by is generated by is generated by is generated by From the point of view of symplectic geometry, can be interpreted as the Maslov index, thinking of it as the fundamental group of the stable Lagrangian Grassmannian as , so . Whitehead tower The orthogonal group anchors a Whitehead tower: which is obtained by successively removing (killing) homotopy groups of increasing order. This is done by constructing short exact sequences starting with an Eilenberg–MacLane space for the homotopy group to be removed. The first few entries in the tower are the spin group and the string group, and are preceded by the fivebrane group. The homotopy groups that are killed are in turn 0(O) to obtain SO from O, 1(O) to obtain Spin from SO, 3(O) to obtain String from Spin, and then 7(O) and so on to obtain the higher order branes. 
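The list of the first eight stable homotopy groups referred to in the homotopy-groups discussion above appears to have been dropped during extraction. For reference, and assuming the usual conventions, the standard Bott-periodic values are:

$$\pi_0(O)=\mathbb{Z}/2,\quad \pi_1(O)=\mathbb{Z}/2,\quad \pi_2(O)=0,\quad \pi_3(O)=\mathbb{Z},\quad \pi_4(O)=\pi_5(O)=\pi_6(O)=0,\quad \pi_7(O)=\mathbb{Z}.$$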
Of indefinite quadratic form over the reals Over the real numbers, nondegenerate quadratic forms are classified by Sylvester's law of inertia, which asserts that, on a vector space of dimension , such a form can be written as the difference of a sum of squares and a sum of squares, with . In other words, there is a basis on which the matrix of the quadratic form is a diagonal matrix, with entries equal to , and entries equal to . The pair called the inertia, is an invariant of the quadratic form, in the sense that it does not depend on the way of computing the diagonal matrix. The orthogonal group of a quadratic form depends only on the inertia, and is thus generally denoted . Moreover, as a quadratic form and its opposite have the same orthogonal group, one has . The standard orthogonal group is . So, in the remainder of this section, it is supposed that neither nor is zero. The subgroup of the matrices of determinant 1 in is denoted . The group has four connected components, depending on whether an element preserves orientation on either of the two maximal subspaces where the quadratic form is positive definite or negative definite. The component of the identity, whose elements preserve orientation on both subspaces, is denoted . The group is the Lorentz group that is fundamental in relativity theory. Here the corresponds to space coordinates, and corresponds to the time coordinate. Of complex quadratic forms Over the field of complex numbers, every non-degenerate quadratic form in variables is equivalent to . Thus, up to isomorphism, there is only one non-degenerate complex quadratic space of dimension , and one associated orthogonal group, usually denoted . It is the group of complex orthogonal matrices, complex matrices whose product with their transpose is the identity matrix. As in the real case, has two connected components. The component of the identity consists of all matrices of determinant in ; it is denoted . The groups and are complex Lie groups of dimension over (the dimension over is twice that). For , these groups are noncompact. As in the real case, is not simply connected: For , the fundamental group of is cyclic of order 2, whereas the fundamental group of is . Over finite fields Characteristic different from two Over a field of characteristic different from two, two quadratic forms are equivalent if their matrices are congruent, that is if a change of basis transforms the matrix of the first form into the matrix of the second form. Two equivalent quadratic forms have clearly the same orthogonal group. The non-degenerate quadratic forms over a finite field of characteristic different from two are completely classified into congruence classes, and it results from this classification that there is only one orthogonal group in odd dimension and two in even dimension. More precisely, Witt's decomposition theorem asserts that (in characteristic different from two) every vector space equipped with a non-degenerate quadratic form can be decomposed as a direct sum of pairwise orthogonal subspaces where each is a hyperbolic plane (that is there is a basis such that the matrix of the restriction of to has the form ), and the restriction of to is anisotropic (that is, for every nonzero in ). The Chevalley–Warning theorem asserts that, over a finite field, the dimension of is at most two. If the dimension of is odd, the dimension of is thus equal to one, and its matrix is congruent either to or to where is a non-square scalar. 
It results that there is only one orthogonal group that is denoted , where is the number of elements of the finite field (a power of an odd prime). If the dimension of is two and is not a square in the ground field (that is, if its number of elements is congruent to 3 modulo 4), the matrix of the restriction of to is congruent to either or , where is the 2×2 identity matrix. If the dimension of is two and is a square in the ground field (that is, if is congruent to 1, modulo 4) the matrix of the restriction of to is congruent to is any non-square scalar. This implies that if the dimension of is even, there are only two orthogonal groups, depending whether the dimension of zero or two. They are denoted respectively and . The orthogonal group is a dihedral group of order , where . When the characteristic is not two, the order of the orthogonal groups are In characteristic two, the formulas are the same, except that the factor of must be removed. Dickson invariant For orthogonal groups, the Dickson invariant is a homomorphism from the orthogonal group to the quotient group (integers modulo 2), taking the value in case the element is the product of an even number of reflections, and the value of 1 otherwise. Algebraically, the Dickson invariant can be defined as , where is the identity . Over fields that are not of characteristic 2 it is equivalent to the determinant: the determinant is to the power of the Dickson invariant. Over fields of characteristic 2, the determinant is always 1, so the Dickson invariant gives more information than the determinant. The special orthogonal group is the kernel of the Dickson invariant and usually has index 2 in . When the characteristic of is not 2, the Dickson Invariant is whenever the determinant is . Thus when the characteristic is not 2, is commonly defined to be the elements of with determinant . Each element in has determinant . Thus in characteristic 2, the determinant is always . The Dickson invariant can also be defined for Clifford groups and pin groups in a similar way (in all dimensions). Orthogonal groups of characteristic 2 Over fields of characteristic 2 orthogonal groups often exhibit special behaviors, some of which are listed in this section. (Formerly these groups were known as the hypoabelian groups, but this term is no longer used.) Any orthogonal group over any field is generated by reflections, except for a unique example where the vector space is 4-dimensional over the field with 2 elements and the Witt index is 2. A reflection in characteristic two has a slightly different definition. In characteristic two, the reflection orthogonal to a vector takes a vector to where is the bilinear form and is the quadratic form associated to the orthogonal geometry. Compare this to the Householder reflection of odd characteristic or characteristic zero, which takes to . The center of the orthogonal group usually has order 1 in characteristic 2, rather than 2, since . In odd dimensions in characteristic 2, orthogonal groups over perfect fields are the same as symplectic groups in dimension . In fact the symmetric form is alternating in characteristic 2, and as the dimension is odd it must have a kernel of dimension 1, and the quotient by this kernel is a symplectic space of dimension , acted upon by the orthogonal group. In even dimensions in characteristic 2 the orthogonal group is a subgroup of the symplectic group, because the symmetric bilinear form of the quadratic form is also an alternating form. 
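As a concrete, hedged illustration of the finite orthogonal groups discussed above, the following Python sketch (not from the article; the field size and the standard sum-of-squares form are illustrative choices only) brute-forces all matrices over a small prime field satisfying the defining condition and counts both the full group and its determinant-1 subgroup.

```python
import itertools
import numpy as np

def orthogonal_group_order(n: int, q: int):
    """Brute-force count of n x n matrices M over GF(q), q prime, with M^T M = I (mod q),
    for the standard quadratic form x_1^2 + ... + x_n^2, plus the determinant-1 subcount."""
    identity = np.eye(n, dtype=int)
    total = det_one = 0
    for entries in itertools.product(range(q), repeat=n * n):
        M = np.array(entries, dtype=int).reshape(n, n)
        if np.array_equal((M.T @ M) % q, identity):
            total += 1
            if int(round(np.linalg.det(M))) % q == 1:
                det_one += 1
    return total, det_one

# Tiny example with q = 3 so the brute force stays fast; prints (|group|, |det-1 subgroup|).
print(orthogonal_group_order(2, 3))
print(orthogonal_group_order(3, 3))
```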
The spinor norm The spinor norm is a homomorphism from an orthogonal group over a field to the quotient group (the multiplicative group of the field up to multiplication by square elements), that takes reflection in a vector of norm to the image of in . For the usual orthogonal group over the reals, it is trivial, but it is often non-trivial over other fields, or for the orthogonal group of a quadratic form over the reals that is not positive definite. Galois cohomology and orthogonal groups In the theory of Galois cohomology of algebraic groups, some further points of view are introduced. They have explanatory value, in particular in relation with the theory of quadratic forms; but were for the most part post hoc, as far as the discovery of the phenomenon is concerned. The first point is that quadratic forms over a field can be identified as a Galois , or twisted forms (torsors) of an orthogonal group. As an algebraic group, an orthogonal group is in general neither connected nor simply-connected; the latter point brings in the spin phenomena, while the former is related to the determinant. The 'spin' name of the spinor norm can be explained by a connection to the spin group (more accurately a pin group). This may now be explained quickly by Galois cohomology (which however postdates the introduction of the term by more direct use of Clifford algebras). The spin covering of the orthogonal group provides a short exact sequence of algebraic groups. Here is the algebraic group of square roots of 1; over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action. The connecting homomorphism from , which is simply the group of -valued points, to is essentially the spinor norm, because is isomorphic to the multiplicative group of the field modulo squares. There is also the connecting homomorphism from of the orthogonal group, to the of the kernel of the spin covering. The cohomology is non-abelian so that this is as far as we can go, at least with the conventional definitions. Lie algebra The Lie algebra corresponding to Lie groups and consists of the skew-symmetric matrices, with the Lie bracket given by the commutator. One Lie algebra corresponds to both groups. It is often denoted by or , and called the orthogonal Lie algebra or special orthogonal Lie algebra. Over real numbers, these Lie algebras for different are the compact real forms of two of the four families of semisimple Lie algebras: in odd dimension , where , while in even dimension , where . Since the group is not simply connected, the representation theory of the orthogonal Lie algebras includes both representations corresponding to ordinary representations of the orthogonal groups, and representations corresponding to projective representations of the orthogonal groups. (The projective representations of are just linear representations of the universal cover, the spin group Spin(n).) The latter are the so-called spin representation, which are important in physics. More generally, given a vector space (over a field with characteristic not equal to 2) with a nondegenerate symmetric bilinear form , the special orthogonal Lie algebra consists of tracefree endomorphisms which are skew-symmetric for this form (). Over a field of characteristic 2 we consider instead the alternating endomorphisms. Concretely we can equate these with the alternating tensors . 
The correspondence is given by: This description applies equally for the indefinite special orthogonal Lie algebras for symmetric bilinear forms with signature . Over real numbers, this characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation or "curl", hence the name. Related groups The orthogonal groups and special orthogonal groups have a number of important subgroups, supergroups, quotient groups, and covering groups. These are listed below. The inclusions and are part of a sequence of 8 inclusions used in a geometric proof of the Bott periodicity theorem, and the corresponding quotient spaces are symmetric spaces of independent interest – for example, is the Lagrangian Grassmannian. Lie subgroups In physics, particularly in the areas of Kaluza–Klein compactification, it is important to find out the subgroups of the orthogonal group. The main ones are: – preserve an axis – are those that preserve a compatible complex structure or a compatible symplectic structure – see 2-out-of-3 property; also preserves a complex orientation. Lie supergroups The orthogonal group is also an important subgroup of various Lie groups: Conformal group Being isometries, real orthogonal transforms preserve angles, and are thus conformal maps, though not all conformal linear transforms are orthogonal. In classical terms this is the difference between congruence and similarity, as exemplified by SSS (side-side-side) congruence of triangles and AAA (angle-angle-angle) similarity of triangles. The group of conformal linear maps of is denoted for the conformal orthogonal group, and consists of the product of the orthogonal group with the group of dilations. If is odd, these two subgroups do not intersect, and they are a direct product: , where } is the real multiplicative group, while if is even, these subgroups intersect in , so this is not a direct product, but it is a direct product with the subgroup of dilation by a positive scalar: . Similarly one can define ; this is always: . Discrete subgroups As the orthogonal group is compact, discrete subgroups are equivalent to finite subgroups. These subgroups are known as point groups and can be realized as the symmetry groups of polytopes. A very important class of examples are the finite Coxeter groups, which include the symmetry groups of regular polytopes. Dimension 3 is particularly studied – see point groups in three dimensions, polyhedral groups, and list of spherical symmetry groups. In 2 dimensions, the finite groups are either cyclic or dihedral – see point groups in two dimensions. Other finite subgroups include: Permutation matrices (the Coxeter group ) Signed permutation matrices (the Coxeter group ); also equals the intersection of the orthogonal group with the integer matrices. Covering and quotient groups The orthogonal group is neither simply connected nor centerless, and thus has both a covering group and a quotient group, respectively: Two covering Pin groups, and , The quotient projective orthogonal group, . These are all 2-to-1 covers. For the special orthogonal group, the corresponding groups are: Spin group, , Projective special orthogonal group, . Spin is a 2-to-1 cover, while in even dimension, is a 2-to-1 cover, and in odd dimension is a 1-to-1 cover; i.e., isomorphic to . These groups, , , and are Lie group forms of the compact special orthogonal Lie algebra, – is the simply connected form, while is the centerless form, and is in general neither. 
In dimension 3 and above these are the covers and quotients, while dimension 2 and below are somewhat degenerate; see specific articles for details. Principal homogeneous space: Stiefel manifold The principal homogeneous space for the orthogonal group is the Stiefel manifold of orthonormal bases (orthonormal -frames). In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthogonal basis to any other orthogonal basis. The other Stiefel manifolds for of incomplete orthonormal bases (orthonormal -frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any -frame can be taken to any other -frame by an orthogonal map, but this map is not uniquely determined. See also Specific transforms Coordinate rotations and reflections Reflection through the origin Specific groups rotation group, Related groups indefinite orthogonal group unitary group symplectic group Lists of groups list of finite simple groups list of simple Lie groups Representation theory Representations of classical Lie groups Brauer algebra Notes Citations References External links John Baez "This Week's Finds in Mathematical Physics" week 105 John Baez on Octonions n-dimensional Special Orthogonal Group parametrization Lie groups Quadratic forms Euclidean symmetries Linear algebraic groups
Orthogonal group
[ "Physics", "Mathematics" ]
6,177
[ "Functions and mappings", "Mathematical structures", "Euclidean symmetries", "Lie groups", "Mathematical objects", "Number theory", "Mathematical relations", "Algebraic structures", "Quadratic forms", "Symmetry" ]
173,961
https://en.wikipedia.org/wiki/Center%20of%20mass
In physics, the center of mass of a distribution of mass in space (sometimes referred to as the barycenter or balance point) is the unique point at any given time where the weighted relative position of the distributed mass sums to zero. For a rigid body containing its center of mass, this is the point to which a force may be applied to cause a linear acceleration without an angular acceleration. Calculations in mechanics are often simplified when formulated with respect to the center of mass. It is a hypothetical point where the entire mass of an object may be assumed to be concentrated to visualise its motion. In other words, the center of mass is the particle equivalent of a given object for application of Newton's laws of motion. In the case of a single rigid body, the center of mass is fixed in relation to the body, and if the body has uniform density, it will be located at the centroid. The center of mass may be located outside the physical body, as is sometimes the case for hollow or open-shaped objects, such as a horseshoe. In the case of a distribution of separate bodies, such as the planets of the Solar System, the center of mass may not correspond to the position of any individual member of the system. The center of mass is a useful reference point for calculations in mechanics that involve masses distributed in space, such as the linear and angular momentum of planetary bodies and rigid body dynamics. In orbital mechanics, the equations of motion of planets are formulated as point masses located at the centers of mass (see Barycenter (astronomy) for details). The center of mass frame is an inertial frame in which the center of mass of a system is at rest with respect to the origin of the coordinate system. History The concept of center of gravity or weight was studied extensively by the ancient Greek mathematician, physicist, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what we now call the center of mass. Archimedes showed that the torque exerted on a lever by weights resting at various points along the lever is the same as what it would be if all of the weights were moved to a single point—their center of mass. In his work On Floating Bodies, Archimedes demonstrated that the orientation of a floating object is the one that makes its center of mass as low as possible. He developed mathematical techniques for finding the centers of mass of objects of uniform density of various well-defined shapes. Other ancient mathematicians who contributed to the theory of the center of mass include Hero of Alexandria and Pappus of Alexandria. In the Renaissance and Early Modern periods, work by Guido Ubaldi, Francesco Maurolico, Federico Commandino, Evangelista Torricelli, Simon Stevin, Luca Valerio, Jean-Charles de la Faille, Paul Guldin, John Wallis, Christiaan Huygens, Louis Carré, Pierre Varignon, and Alexis Clairaut expanded the concept further. Newton's second law is reformulated with respect to the center of mass in Euler's first law. Definition The center of mass is the unique point at the center of a distribution of mass in space that has the property that the weighted position vectors relative to this point sum to zero. In analogy to statistics, the center of mass is the mean location of a distribution of mass in space.
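The subsections below give the formulas; as a quick numerical illustration of the defining property just stated, here is a minimal Python/NumPy sketch (the masses and positions are made up for illustration) that computes the mass-weighted mean of the positions and checks that the weighted relative positions sum to zero.

```python
import numpy as np

masses = np.array([2.0, 1.0, 3.0])                 # kg (illustrative values)
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 2.0, 1.0]])            # metres

# Center of mass: mass-weighted mean of the positions.
R = (masses[:, None] * positions).sum(axis=0) / masses.sum()

# Defining property: the weighted relative positions sum to the zero vector.
assert np.allclose((masses[:, None] * (positions - R)).sum(axis=0), 0.0)
print(R)
```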
A system of particles In the case of a system of particles , each with mass that are located in space with coordinates , the coordinates R of the center of mass satisfy Solving this equation for R yields the formula A continuous volume If the mass distribution is continuous with the density ρ(r) within a solid Q, then the integral of the weighted position coordinates of the points in this volume relative to the center of mass R over the volume V is zero, that is Solve this equation for the coordinates R to obtain Where M is the total mass in the volume. If a continuous mass distribution has uniform density, which means that ρ is constant, then the center of mass is the same as the centroid of the volume. Barycentric coordinates The coordinates R of the center of mass of a two-particle system, P1 and P2, with masses m1 and m2 are given by Let the percentage of the total mass divided between these two particles vary from 100% P1 and 0% P2 through 50% P1 and 50% P2 to 0% P1 and 100% P2, then the center of mass R moves along the line from P1 to P2. The percentages of mass at each point can be viewed as projective coordinates of the point R on this line, and are termed barycentric coordinates. Another way of interpreting the process here is the mechanical balancing of moments about an arbitrary point. The numerator gives the total moment that is then balanced by an equivalent total force at the center of mass. This can be generalized to three points and four points to define projective coordinates in the plane, and in space, respectively. Systems with periodic boundary conditions For particles in a system with periodic boundary conditions two particles can be neighbours even though they are on opposite sides of the system. This occurs often in molecular dynamics simulations, for example, in which clusters form at random locations and sometimes neighbouring atoms cross the periodic boundary. When a cluster straddles the periodic boundary, a naive calculation of the center of mass will be incorrect. A generalized method for calculating the center of mass for periodic systems is to treat each coordinate, x and y and/or z, as if it were on a circle instead of a line. The calculation takes every particle's x coordinate and maps it to an angle, where xmax is the system size in the x direction and . From this angle, two new points can be generated, which can be weighted by the mass of the particle for the center of mass or given a value of 1 for the geometric center: In the plane, these coordinates lie on a circle of radius 1. From the collection of and values from all the particles, the averages and are calculated. where is the sum of the masses of all of the particles. These values are mapped back into a new angle, , from which the x coordinate of the center of mass can be obtained: The process can be repeated for all dimensions of the system to determine the complete center of mass. The utility of the algorithm is that it allows the mathematics to determine where the "best" center of mass is, instead of guessing or using cluster analysis to "unfold" a cluster straddling the periodic boundaries. If both average values are zero, , then is undefined. This is a correct result, because it only occurs when all particles are exactly evenly spaced. In that condition, their x coordinates are mathematically identical in a periodic system. (A short code sketch of this angle-mapping procedure appears below.) Center of gravity A body's center of gravity is the point around which the resultant torque due to gravity forces vanishes.
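The angle-mapping procedure for periodic systems referred to above can be realized, for one coordinate, with the following Python sketch. This is one possible implementation under the stated idea, not the article's own code; the helper name and test values are illustrative.

```python
import numpy as np

def periodic_com_1d(x, m, x_max):
    """Center of mass of coordinates x (with masses m) on a periodic axis of length x_max,
    computed by mapping each coordinate to an angle on the unit circle."""
    theta = 2.0 * np.pi * np.asarray(x, dtype=float) / x_max   # coordinate -> angle
    xi, zeta = np.cos(theta), np.sin(theta)                    # points on the unit circle
    m = np.asarray(m, dtype=float)
    xi_bar = (m * xi).sum() / m.sum()                          # mass-weighted averages
    zeta_bar = (m * zeta).sum() / m.sum()
    theta_bar = np.arctan2(-zeta_bar, -xi_bar) + np.pi         # average angle in [0, 2*pi)
    return x_max * theta_bar / (2.0 * np.pi)                   # angle -> coordinate

# A cluster straddling the boundary of a box of length 10: points near 9.5 and near 0.5.
print(periodic_com_1d([9.4, 9.8, 0.2, 0.6], [1, 1, 1, 1], 10.0))
# Result is about 0.0 or 10.0, i.e. on the boundary (mod 10), not the naive mean 5.0.
```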
Where a gravity field can be considered to be uniform, the mass-center and the center-of-gravity will be the same. However, for satellites in orbit around a planet, in the absence of other torques being applied to a satellite, the slight variation (gradient) in gravitational field between closer-to and further-from the planet (stronger and weaker gravity respectively) can lead to a torque that will tend to align the satellite such that its long axis is vertical. In such a case, it is important to make the distinction between the center-of-gravity and the mass-center. Any horizontal offset between the two will result in an applied torque. The mass-center is a fixed property for a given rigid body (e.g. with no slosh or articulation), whereas the center-of-gravity may, in addition, depend upon its orientation in a non-uniform gravitational field. In the latter case, the center-of-gravity will always be located somewhat closer to the main attractive body as compared to the mass-center, and thus will change its position in the body of interest as its orientation is changed. In the study of the dynamics of aircraft, vehicles and vessels, forces and moments need to be resolved relative to the mass center. That is true independent of whether gravity itself is a consideration. Referring to the mass-center as the center-of-gravity is something of a colloquialism, but it is in common usage and when gravity gradient effects are negligible, center-of-gravity and mass-center are the same and are used interchangeably. In physics the benefits of using the center of mass to model a mass distribution can be seen by considering the resultant of the gravity forces on a continuous body. Consider a body Q of volume V with density ρ(r) at each point r in the volume. In a parallel gravity field the force f at each point r is given by, where dm is the mass at the point r, g is the acceleration of gravity, and is a unit vector defining the vertical direction. Choose a reference point R in the volume and compute the resultant force and torque at this point, and If the reference point R is chosen so that it is the center of mass, then which means the resultant torque . Because the resultant torque is zero the body will move as though it is a particle with its mass concentrated at the center of mass. By selecting the center of gravity as the reference point for a rigid body, the gravity forces will not cause the body to rotate, which means the weight of the body can be considered to be concentrated at the center of mass. Linear and angular momentum The linear and angular momentum of a collection of particles can be simplified by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n of masses mi be located at the coordinates ri with velocities vi. Select a reference point R and compute the relative position and velocity vectors, The total linear momentum and angular momentum of the system are and If R is chosen as the center of mass these equations simplify to where m is the total mass of all the particles, p is the linear momentum, and L is the angular momentum. The law of conservation of momentum predicts that for any system not subjected to external forces the momentum of the system will remain constant, which means the center of mass will move with constant velocity. This applies for all systems with classical internal forces, including magnetic fields, electric fields, chemical reactions, and so on. 
More formally, this is true for any internal forces that cancel in accordance with Newton's Third Law. Determination The experimental determination of a body's center of mass makes use of gravity forces on the body and is based on the fact that the center of mass is the same as the center of gravity in the parallel gravity field near the earth's surface. The center of mass of a body with an axis of symmetry and constant density must lie on this axis. Thus, the center of mass of a circular cylinder of constant density has its center of mass on the axis of the cylinder. In the same way, the center of mass of a spherically symmetric body of constant density is at the center of the sphere. In general, for any symmetry of a body, its center of mass will be a fixed point of that symmetry. In two dimensions An experimental method for locating the center of mass is to suspend the object from two locations and to drop plumb lines from the suspension points. The intersection of the two lines is the center of mass. The shape of an object might already be mathematically determined, but it may be too complex to use a known formula. In this case, one can subdivide the complex shape into simpler, more elementary shapes, whose centers of mass are easy to find. If the total mass and center of mass can be determined for each area, then the center of mass of the whole is the weighted average of the centers. This method can even work for objects with holes, which can be accounted for as negative masses. A direct development of the planimeter known as an integraph, or integerometer, can be used to establish the position of the centroid or center of mass of an irregular two-dimensional shape. This method can be applied to a shape with an irregular, smooth or complex boundary where other methods are too difficult. It was regularly used by ship builders to compare with the required displacement and center of buoyancy of a ship, and ensure it would not capsize. In three dimensions An experimental method to locate the three-dimensional coordinates of the center of mass begins by supporting the object at three points and measuring the forces, F1, F2, and F3 that resist the weight of the object, ( is the unit vector in the vertical direction). Let r1, r2, and r3 be the position coordinates of the support points, then the coordinates R of the center of mass satisfy the condition that the resultant torque is zero, or This equation yields the coordinates of the center of mass R* in the horizontal plane as, The center of mass lies on the vertical line L, given by The three-dimensional coordinates of the center of mass are determined by performing this experiment twice with the object positioned so that these forces are measured for two different horizontal planes through the object. The center of mass will be the intersection of the two lines L1 and L2 obtained from the two experiments. Applications Engineering designs Automotive applications Engineers try to design a sports car so that its center of mass is lowered to make the car handle better, which is to say, maintain traction while executing relatively sharp turns. The characteristic low profile of the U.S. military Humvee was designed in part to allow it to tilt farther than taller vehicles without rolling over, by ensuring its low center of mass stays over the space bounded by the four wheels even at angles far from the horizontal. Aeronautics The center of mass is an important point on an aircraft, which significantly affects the stability of the aircraft. 
To ensure the aircraft is stable enough to be safe to fly, the center of mass must fall within specified limits. If the center of mass is ahead of the forward limit, the aircraft will be less maneuverable, possibly to the point of being unable to rotate for takeoff or flare for landing. If the center of mass is behind the aft limit, the aircraft will be more maneuverable, but also less stable, and possibly unstable enough so as to be impossible to fly. The moment arm of the elevator will also be reduced, which makes it more difficult to recover from a stalled condition. For helicopters in hover, the center of mass is always directly below the rotorhead. In forward flight, the center of mass will move forward to balance the negative pitch torque produced by applying cyclic control to propel the helicopter forward; consequently a cruising helicopter flies "nose-down" in level flight. Astronomy The center of mass plays an important role in astronomy and astrophysics, where it is commonly referred to as the barycenter. The barycenter is the point between two objects where they balance each other; it is the center of mass where two or more celestial bodies orbit each other. When a moon orbits a planet, or a planet orbits a star, both bodies are actually orbiting a point that lies away from the center of the primary (larger) body. For example, the Moon does not orbit the exact center of the Earth, but a point on a line between the center of the Earth and the Moon, approximately 1,710 km (1,062 miles) below the surface of the Earth, where their respective masses balance. This is the point about which the Earth and Moon orbit as they travel around the Sun. If the masses are more similar, e.g., Pluto and Charon, the barycenter will fall outside both bodies. Rigging and safety Knowing the location of the center of gravity when rigging is crucial, possibly resulting in severe injury or death if assumed incorrectly. A center of gravity that is at or above the lift point will most likely result in a tip-over incident. In general, the further the center of gravity below the pick point, the safer the lift. There are other things to consider, such as shifting loads, strength of the load and mass, distance between pick points, and number of pick points. Specifically, when selecting lift points, it is very important to place the center of gravity at the center and well below the lift points. Body motion The center of mass of the adult human body vertically is 10 cm above the trochanter (the femur joins the hip), with it in horizontally being located 1.4 cm forward of the knee, and 1.0 behind the trochanter. In kinesiology and biomechanics, the center of mass is an important parameter that assists people in understanding their human locomotion. Typically, a human's center of mass is detected with one of two methods: the reaction board method is a static analysis that involves the person lying down on that instrument, and use of their static equilibrium equation to find their center of mass; the segmentation method relies on a mathematical solution based on the physical principle that the summation of the torques of individual body sections, relative to a specified axis, must equal the torque of the whole system that constitutes the body, measured relative to the same axis. Optimization The Center-of-gravity method is a method for convex optimization, which uses the center-of-gravity of the feasible region. 
See also Barycenter Buoyancy Center of percussion Center of pressure (fluid mechanics) Center of pressure (terrestrial locomotion) Centroid Circumcenter of mass Expected value Mass point geometry Metacentric height Roll center Weight distribution Notes References External links Motion of the Center of Mass shows that the motion of the center of mass of an object in free fall is the same as the motion of a point object. The Solar System's barycenter, simulations showing the effect each planet contributes to the Solar System's barycenter. Classical mechanics Mass Mass Moment (physics)
Center of mass
[ "Physics", "Mathematics" ]
3,698
[ "Scalar physical quantities", "Matter", "Point (geometry)", "Physical quantities", "Quantity", "Geometric centers", "Mass", "Classical mechanics", "Size", "Mechanics", "Wikipedia categories named after physical quantities", "Symmetry", "Moment (physics)" ]
173,965
https://en.wikipedia.org/wiki/3D%20rotation%20group
In mechanics and geometry, the 3D rotation group, often denoted SO(3), is the group of all rotations about the origin of three-dimensional Euclidean space under the operation of composition. By definition, a rotation about the origin is a transformation that preserves the origin, Euclidean distance (so it is an isometry), and orientation (i.e., handedness of space). Composing two rotations results in another rotation, every rotation has a unique inverse rotation, and the identity map satisfies the definition of a rotation. Owing to the above properties (along composite rotations' associative property), the set of all rotations is a group under composition. Every non-trivial rotation is determined by its axis of rotation (a line through the origin) and its angle of rotation. Rotations are not commutative (for example, rotating R 90° in the x-y plane followed by S 90° in the y-z plane is not the same as S followed by R), making the 3D rotation group a nonabelian group. Moreover, the rotation group has a natural structure as a manifold for which the group operations are smoothly differentiable, so it is in fact a Lie group. It is compact and has dimension 3. Rotations are linear transformations of and can therefore be represented by matrices once a basis of has been chosen. Specifically, if we choose an orthonormal basis of , every rotation is described by an orthogonal 3 × 3 matrix (i.e., a 3 × 3 matrix with real entries which, when multiplied by its transpose, results in the identity matrix) with determinant 1. The group SO(3) can therefore be identified with the group of these matrices under matrix multiplication. These matrices are known as "special orthogonal matrices", explaining the notation SO(3). The group SO(3) is used to describe the possible rotational symmetries of an object, as well as the possible orientations of an object in space. Its representations are important in physics, where they give rise to the elementary particles of integer spin. Length and angle Besides just preserving length, rotations also preserve the angles between vectors. This follows from the fact that the standard dot product between two vectors u and v can be written purely in terms of length (see the law of cosines): It follows that every length-preserving linear transformation in preserves the dot product, and thus the angle between vectors. Rotations are often defined as linear transformations that preserve the inner product on , which is equivalent to requiring them to preserve length. See classical group for a treatment of this more general approach, where appears as a special case. Orthogonal and rotation matrices Every rotation maps an orthonormal basis of to another orthonormal basis. Like any linear transformation of finite-dimensional vector spaces, a rotation can always be represented by a matrix. Let be a given rotation. With respect to the standard basis of the columns of are given by . Since the standard basis is orthonormal, and since preserves angles and length, the columns of form another orthonormal basis. This orthonormality condition can be expressed in the form where denotes the transpose of and is the identity matrix. Matrices for which this property holds are called orthogonal matrices. The group of all orthogonal matrices is denoted , and consists of all proper and improper rotations. In addition to preserving length, proper rotations must also preserve orientation. 
A matrix will preserve or reverse orientation according to whether the determinant of the matrix is positive or negative. For an orthogonal matrix , note that implies , so that . The subgroup of orthogonal matrices with determinant is called the special orthogonal group, denoted . Thus every rotation can be represented uniquely by an orthogonal matrix with unit determinant. Moreover, since composition of rotations corresponds to matrix multiplication, the rotation group is isomorphic to the special orthogonal group . Improper rotations correspond to orthogonal matrices with determinant , and they do not form a group because the product of two improper rotations is a proper rotation. Group structure The rotation group is a group under function composition (or equivalently the product of linear transformations). It is a subgroup of the general linear group consisting of all invertible linear transformations of the real 3-space . Furthermore, the rotation group is nonabelian. That is, the order in which rotations are composed makes a difference. For example, a quarter turn around the positive x-axis followed by a quarter turn around the positive y-axis is a different rotation than the one obtained by first rotating around y and then x. The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Complete classification of finite subgroups The finite subgroups of are completely classified. Every finite subgroup is isomorphic to either an element of one of two countably infinite families of planar isometries: the cyclic groups or the dihedral groups , or to one of three other groups: the tetrahedral group , the octahedral group , or the icosahedral group . Axis of rotation Every nontrivial proper rotation in 3 dimensions fixes a unique 1-dimensional linear subspace of which is called the axis of rotation (this is Euler's rotation theorem). Each such rotation acts as an ordinary 2-dimensional rotation in the plane orthogonal to this axis. Since every 2-dimensional rotation can be represented by an angle φ, an arbitrary 3-dimensional rotation can be specified by an axis of rotation together with an angle of rotation about this axis. (Technically, one needs to specify an orientation for the axis and whether the rotation is taken to be clockwise or counterclockwise with respect to this orientation). For example, counterclockwise rotation about the positive z-axis by angle φ is given by Given a unit vector n in and an angle φ, let R(φ, n) represent a counterclockwise rotation about the axis through n (with orientation determined by n). Then R(0, n) is the identity transformation for any n R(φ, n) = R(−φ, −n) R( + φ, n) = R( − φ, −n). Using these properties one can show that any rotation can be represented by a unique angle φ in the range 0 ≤ φ ≤ and a unit vector n such that n is arbitrary if φ = 0 n is unique if 0 < φ < n is unique up to a sign if φ = (that is, the rotations R(, ±n) are identical). In the next section, this representation of rotations is used to identify SO(3) topologically with three-dimensional real projective space. Topology The Lie group SO(3) is diffeomorphic to the real projective space Consider the solid ball in of radius (that is, all points of of distance or less from the origin). 
Given the above, for every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The identity rotation corresponds to the point at the center of the ball. Rotations through an angle between 0 and (not including either) are on the same axis at the same distance. Rotation through angles between 0 and − correspond to the point on the same axis and distance from the origin but on the opposite side of the origin. The one remaining issue is that the two rotations through and through − are the same. So we identify (or "glue together") antipodal points on the surface of the ball. After this identification, we arrive at a topological space homeomorphic to the rotation group. Indeed, the ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space so the latter can also serve as a topological model for the rotation group. These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with antipodal surface points identified, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how it is deformed, the start and end point have to remain antipodal, or else the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the z-axis starting (by example) at the identity (center of the ball), through the south pole, jumping to the north pole and ending again at the identity rotation (i.e., a series of rotation through an angle φ where φ runs from 0 to 2). Surprisingly, running through the path twice, i.e., running from the north pole down to the south pole, jumping back to the north pole (using the fact that north and south poles are identified), and then again running from the north pole down to the south pole, so that φ runs from 0 to 4, gives a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The plate trick and similar tricks demonstrate this practically. The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of order 2 (a fundamental group with two elements). In physics applications, the non-triviality (more than one element) of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin–statistics theorem. The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary group SU(2); it is also diffeomorphic to the unit 3-sphere S3 and can be understood as the group of versors (quaternions with absolute value 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotations. 
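As a hedged illustration of the quaternion–rotation correspondence just mentioned, the following Python sketch (not from the article; helper names are illustrative) builds a unit quaternion from an axis–angle pair, converts it to a rotation matrix with the standard formula, and checks that q and −q give the same element of SO(3), i.e. that the covering is two-to-one.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation by `angle` about the unit vector `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def rotation_matrix(q):
    """Standard conversion of a unit quaternion to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

q = quat_from_axis_angle([1.0, 2.0, 2.0], 0.9)
R = rotation_matrix(q)

assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)  # R is in SO(3)
assert np.allclose(rotation_matrix(-q), R)  # q and -q cover the same rotation: the map is 2-to-1
```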
The map from S3 onto SO(3) that identifies antipodal points of S3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a two-to-one covering map. (See the plate trick.) Connection between SO(3) and SU(2) In this section, we give two different constructions of a two-to-one and surjective homomorphism of SU(2) onto SO(3). Using quaternions of unit norm The group is isomorphic to the quaternions of unit norm via a map given by restricted to where , , , and , . Let us now identify with the span of . One can then verify that if is in and is a unit quaternion, then Furthermore, the map is a rotation of Moreover, is the same as . This means that there is a homomorphism from quaternions of unit norm to the 3D rotation group . One can work this homomorphism out explicitly: the unit quaternion, , with is mapped to the rotation matrix This is a rotation around the vector by an angle , where and . The proper sign for is implied, once the signs of the axis components are fixed. The is apparent since both and map to the same . Using Möbius transformations The general reference for this section is . The points on the sphere can, barring the north pole , be put into one-to-one bijection with points {{math|1=S(P) = P}} on the plane defined by , see figure. The map is called stereographic projection. Let the coordinates on be . The line passing through and can be parametrized as Demanding that the of equals , one finds We have Hence the map where, for later convenience, the plane is identified with the complex plane For the inverse, write as and demand to find and thus If is a rotation, then it will take points on to points on by its standard action on the embedding space By composing this action with one obtains a transformation of , Thus is a transformation of associated to the transformation of . It turns out that represented in this way by can be expressed as a matrix (where the notation is recycled to use the same name for the matrix as for the transformation of it represents). To identify this matrix, consider first a rotation about the through an angle , Hence which, unsurprisingly, is a rotation in the complex plane. In an analogous way, if is a rotation about the through an angle , then which, after a little algebra, becomes These two rotations, thus correspond to bilinear transforms of , namely, they are examples of Möbius transformations. A general Möbius transformation is given by The rotations, generate all of and the composition rules of the Möbius transformations show that any composition of translates to the corresponding composition of Möbius transformations. The Möbius transformations can be represented by matrices since a common factor of cancels. For the same reason, the matrix is not uniquely defined since multiplication by has no effect on either the determinant or the Möbius transformation. The composition law of Möbius transformations follow that of the corresponding matrices. The conclusion is that each Möbius transformation corresponds to two matrices . Using this correspondence one may write These matrices are unitary and thus . In terms of Euler angles<ref group="nb">This is effected by first applying a rotation through about the to take the to the line , the intersection between the planes and . For the general case, one might use Ref. The quaternion formulation of the composition of two rotations RB and RA also yields directly the rotation axis and angle of the composite rotation RC = RBRA. 
Let the quaternion associated with a spatial rotation R be constructed from its rotation axis S and the rotation angle φ about this axis. The associated quaternion is given by, Then the composition of the rotation RB with RA is the rotation RC = RBRA with rotation axis and angle defined by the product of the quaternions that is Expand this product to obtain Divide both sides of this equation by the identity, which is the law of cosines on a sphere, and compute This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two rotations. Rodrigues derived this formula in 1840 (see page 408). The three rotation axes A, B, and C form a spherical triangle and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles. Infinitesimal rotations Realizations of rotations We have seen that there are a variety of ways to represent rotations: as orthogonal matrices with determinant 1, by axis and rotation angle, in quaternion algebra with versors and the map 3-sphere S3 → SO(3) (see quaternions and spatial rotations), in geometric algebra as a rotor, as a sequence of three rotations about three fixed axes; see Euler angles. Spherical harmonics The group of three-dimensional Euclidean rotations has an infinite-dimensional representation on the Hilbert space where are spherical harmonics. Its elements are square integrable complex-valued functions on the sphere. The inner product on this space is given by If is an arbitrary square integrable function defined on the unit sphere , then it can be expressed as where the expansion coefficients are given by The Lorentz group action restricts to that of and is expressed as This action is unitary, meaning that The can be obtained from the of above using Clebsch–Gordan decomposition, but they are more easily directly expressed as an exponential of an odd-dimensional -representation (the 3-dimensional one is exactly ). A formula for valid for all ℓ is given. In this case the space decomposes neatly into an infinite direct sum of irreducible odd finite-dimensional representations according to This is characteristic of infinite-dimensional unitary representations of . If is an infinite-dimensional unitary representation on a separable Hilbert space, then it decomposes as a direct sum of finite-dimensional unitary representations. Such a representation is thus never irreducible. All irreducible finite-dimensional representations can be made unitary by an appropriate choice of inner product, where the integral is the unique invariant integral over normalized to , here expressed using the Euler angles parametrization. The inner product inside the integral is any inner product on . Generalizations The rotation group generalizes quite naturally to n-dimensional Euclidean space, with its standard Euclidean structure. The group of all proper and improper rotations in n dimensions is called the orthogonal group O(n), and the subgroup of proper rotations is called the special orthogonal group SO(n), which is a Lie group of dimension . In special relativity, one works in a 4-dimensional vector space, known as Minkowski space, rather than 3-dimensional Euclidean space. Unlike Euclidean space, Minkowski space has an inner product with an indefinite signature. However, one can still define generalized rotations which preserve this inner product. Such generalized rotations are known as Lorentz transformations and the group of all such transformations is called the Lorentz group. 
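Returning to the composition rule above, a small numerical sketch (the Hamilton product and the axis–angle conventions are assumptions of this example): multiplying the unit quaternions of two rotations reproduces the matrix product RC = RB RA, and the composite axis and angle can be read directly off the product quaternion, as in Rodrigues' construction.

```python
import numpy as np

def quat_from_axis_angle(axis, phi):
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate(([np.cos(phi/2)], np.sin(phi/2) * axis))

def quat_mul(p, q):                       # Hamilton product
    w1, v1 = p[0], p[1:]
    w2, v2 = q[0], q[1:]
    return np.concatenate(([w1*w2 - v1 @ v2], w1*v2 + w2*v1 + np.cross(v1, v2)))

def quat_to_matrix(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

qA = quat_from_axis_angle([0, 0, 1], 0.8)
qB = quat_from_axis_angle([1, 0, 0], 1.3)
qC = quat_mul(qB, qA)                     # compose: first A, then B
assert np.allclose(quat_to_matrix(qC), quat_to_matrix(qB) @ quat_to_matrix(qA))
axis_C, angle_C = qC[1:] / np.linalg.norm(qC[1:]), 2 * np.arccos(qC[0])
print(axis_C, angle_C)                    # axis and angle of the composite rotation
```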
The rotation group SO(3) can be described as a subgroup of E+(3), the Euclidean group of direct isometries of Euclidean 3-space. This larger group is the group of all motions of a rigid body: each of these is a combination of a rotation about an arbitrary axis and a translation, or put differently, a combination of an element of SO(3) and an arbitrary translation. In general, the rotation group of an object is the symmetry group within the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group. See also Orthogonal group Angular momentum Coordinate rotations Charts on SO(3) Representations of SO(3) Euler angles Rodrigues' rotation formula Infinitesimal rotation Pin group Quaternions and spatial rotations Rigid body Spherical harmonics Plane of rotation Lie group Pauli matrix Plate trick Three-dimensional rotation operator Footnotes References Bibliography (translation of the original 1932 edition, Die gruppentheoretische Methode in der Quantenmechanik). Lie groups Rotational symmetry Rotation in three dimensions Euclidean solid geometry 3-manifolds
3D rotation group
[ "Physics", "Mathematics" ]
3,931
[ "Lie groups", "Mathematical structures", "Euclidean solid geometry", "Space", "Algebraic structures", "Spacetime", "Symmetry", "Rotational symmetry" ]
173,977
https://en.wikipedia.org/wiki/Symplectic%20group
In mathematics, the name symplectic group can refer to two different, but closely related, collections of mathematical groups, denoted and for positive integer n and field F (usually C or R). The latter is called the compact symplectic group and is also denoted by . Many authors prefer slightly different notations, usually differing by factors of . The notation used here is consistent with the size of the most common matrices which represent the groups. In Cartan's classification of the simple Lie algebras, the Lie algebra of the complex group is denoted , and is the compact real form of . Note that when we refer to the (compact) symplectic group it is implied that we are talking about the collection of (compact) symplectic groups, indexed by their dimension . The name "symplectic group" was coined by Hermann Weyl as a replacement for the previous confusing names (line) complex group and Abelian linear group, and is the Greek analog of "complex". The metaplectic group is a double cover of the symplectic group over R; it has analogues over other local fields, finite fields, and adele rings. The symplectic group is a classical group defined as the set of linear transformations of a -dimensional vector space over the field which preserve a non-degenerate skew-symmetric bilinear form. Such a vector space is called a symplectic vector space, and the symplectic group of an abstract symplectic vector space is denoted . Upon fixing a basis for , the symplectic group becomes the group of symplectic matrices, with entries in , under the operation of matrix multiplication. This group is denoted either or . If the bilinear form is represented by the nonsingular skew-symmetric matrix Ω, then where MT is the transpose of M. Often Ω is defined to be where In is the identity matrix. In this case, can be expressed as those block matrices , where , satisfying the three equations: Since all symplectic matrices have determinant , the symplectic group is a subgroup of the special linear group . When , the symplectic condition on a matrix is satisfied if and only if the determinant is one, so that . For , there are additional conditions, i.e. is then a proper subgroup of . Typically, the field is the field of real numbers or complex numbers . In these cases is a real or complex Lie group of real or complex dimension , respectively. These groups are connected but non-compact. The center of consists of the matrices and as long as the characteristic of the field is not . Since the center of is discrete and its quotient modulo the center is a simple group, is considered a simple Lie group. The real rank of the corresponding Lie algebra, and hence of the Lie group , is . The Lie algebra of is the set equipped with the commutator as its Lie bracket. For the standard skew-symmetric bilinear form , this Lie algebra is the set of all block matrices subject to the conditions The symplectic group over the field of complex numbers is a non-compact, simply connected, simple Lie group. is the complexification of the real group . is a real, non-compact, connected, simple Lie group. It has a fundamental group isomorphic to the group of integers under addition. As the real form of a simple Lie group its Lie algebra is a splittable Lie algebra. Some further properties of : The exponential map from the Lie algebra to the group is not surjective. However, any element of the group can be represented as the product of two exponentials. In other words, For all in : The matrix is positive-definite and diagonal. 
The set of such s forms a non-compact subgroup of whereas forms a compact subgroup. This decomposition is known as 'Euler' or 'Bloch–Messiah' decomposition. Further symplectic matrix properties can be found on that Wikipedia page. As a Lie group, has a manifold structure. The manifold for is diffeomorphic to the Cartesian product of the unitary group with a vector space of dimension . Infinitesimal generators The members of the symplectic Lie algebra are the Hamiltonian matrices. These are matrices, such thatwhere and are symmetric matrices. See classical group for a derivation. Example of symplectic matrices For , the group of matrices with determinant , the three symplectic -matrices are: Sp(2n, R) It turns out that can have a fairly explicit description using generators. If we let denote the symmetric matrices, then is generated by whereare subgroups of pg 173pg 2. Relationship with symplectic geometry Symplectic geometry is the study of symplectic manifolds. The tangent space at any point on a symplectic manifold is a symplectic vector space. As noted earlier, structure preserving transformations of a symplectic vector space form a group and this group is , depending on the dimension of the space and the field over which it is defined. A symplectic vector space is itself a symplectic manifold. A transformation under an action of the symplectic group is thus, in a sense, a linearised version of a symplectomorphism which is a more general structure preserving transformation on a symplectic manifold. The compact symplectic group is the intersection of with the unitary group: It is sometimes written as . Alternatively, can be described as the subgroup of (invertible quaternionic matrices) that preserves the standard hermitian form on : That is, is just the quaternionic unitary group, . Indeed, it is sometimes called the hyperunitary group. Also Sp(1) is the group of quaternions of norm , equivalent to and topologically a -sphere . Note that is not a symplectic group in the sense of the previous section—it does not preserve a non-degenerate skew-symmetric -bilinear form on : there is no such form except the zero form. Rather, it is isomorphic to a subgroup of , and so does preserve a complex symplectic form in a vector space of twice the dimension. As explained below, the Lie algebra of is the compact real form of the complex symplectic Lie algebra . is a real Lie group with (real) dimension . It is compact and simply connected. The Lie algebra of is given by the quaternionic skew-Hermitian matrices, the set of quaternionic matrices that satisfy where is the conjugate transpose of (here one takes the quaternionic conjugate). The Lie bracket is given by the commutator. Important subgroups Some main subgroups are: Conversely it is itself a subgroup of some other groups: There are also the isomorphisms of the Lie algebras and . Relationship between the symplectic groups Every complex, semisimple Lie algebra has a split real form and a compact real form; the former is called a complexification of the latter two. The Lie algebra of is semisimple and is denoted . Its split real form is and its compact real form is . These correspond to the Lie groups and respectively. The algebras, , which are the Lie algebras of , are the indefinite signature equivalent to the compact form. Physical significance Classical mechanics The non-compact symplectic group comes up in classical physics as the symmetries of canonical coordinates preserving the Poisson bracket. 
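Before the physics discussion, here is a numerical aside on the infinitesimal generators above (a sketch assuming the standard Ω; SciPy's matrix exponential is used): a Hamiltonian matrix X = ΩS with S symmetric satisfies the Lie-algebra condition XᵀΩ + ΩX = 0, and its exponential lands in Sp(2n, R).

```python
import numpy as np
from scipy.linalg import expm

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n),       np.zeros((n, n))]])

S = np.array([[2.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 2.0]])       # any symmetric matrix
X = Omega @ S                              # a Hamiltonian (Lie-algebra) element
M = expm(X)                                # exponentiate into the group

assert np.allclose(X.T @ Omega + Omega @ X, np.zeros((2*n, 2*n)))   # X in sp(2n, R)
assert np.allclose(M.T @ Omega @ M, Omega)                          # M in Sp(2n, R)
assert np.isclose(np.linalg.det(M), 1.0)
```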
Consider a system of particles, evolving under Hamilton's equations whose position in phase space at a given time is denoted by the vector of canonical coordinates, The elements of the group are, in a certain sense, canonical transformations on this vector, i.e. they preserve the form of Hamilton's equations. If are new canonical coordinates, then, with a dot denoting time derivative, where for all and all in phase space. For the special case of a Riemannian manifold, Hamilton's equations describe the geodesics on that manifold. The coordinates live on the underlying manifold, and the momenta live in the cotangent bundle. This is the reason why these are conventionally written with upper and lower indexes; it is to distinguish their locations. The corresponding Hamiltonian consists purely of the kinetic energy: it is where is the inverse of the metric tensor on the Riemannian manifold. In fact, the cotangent bundle of any smooth manifold can be a given a symplectic structure in a canonical way, with the symplectic form defined as the exterior derivative of the tautological one-form. Quantum mechanics Consider a system of particles whose quantum state encodes its position and momentum. These coordinates are continuous variables and hence the Hilbert space, in which the state lives, is infinite-dimensional. This often makes the analysis of this situation tricky. An alternative approach is to consider the evolution of the position and momentum operators under the Heisenberg equation in phase space. Construct a vector of canonical coordinates, The canonical commutation relation can be expressed simply as where and is the identity matrix. Many physical situations only require quadratic Hamiltonians, i.e. Hamiltonians of the form where is a real, symmetric matrix. This turns out to be a useful restriction and allows us to rewrite the Heisenberg equation as The solution to this equation must preserve the canonical commutation relation. It can be shown that the time evolution of this system is equivalent to an action of the real symplectic group, , on the phase space. See also Hamiltonian mechanics Metaplectic group Orthogonal group Paramodular group Projective unitary group Representations of classical Lie groups Symplectic manifold, Symplectic matrix, Symplectic vector space, Symplectic representation Unitary group Θ10 Notes References . . Lie groups Symplectic geometry
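As a concrete instance of that last statement, here is a sketch with an assumed quadratic Hamiltonian (a one-dimensional harmonic oscillator; the mass and frequency are illustrative values): Hamilton's equations ż = ΩHz are solved by the flow exp(tΩH), and every snapshot of that flow is a symplectic matrix.

```python
import numpy as np
from scipy.linalg import expm

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])          # canonical form on z = (q, p)
m, w0 = 1.0, 2.0
H = np.array([[m * w0**2, 0.0],                      # H(z) = 1/2 z^T H z
              [0.0, 1.0 / m]])                       #      = m w0^2 q^2 / 2 + p^2 / (2m)

for t in np.linspace(0.0, 3.0, 7):
    M = expm(t * Omega @ H)                          # flow map of Hamilton's equations
    assert np.allclose(M.T @ Omega @ M, Omega)       # each snapshot is symplectic

quarter_period = np.pi / (2.0 * w0)
print(np.round(expm(quarter_period * Omega @ H), 6)) # swaps q and p up to scaling
```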
Symplectic group
[ "Mathematics" ]
2,007
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
173,983
https://en.wikipedia.org/wiki/Symplectic%20matrix
In mathematics, a symplectic matrix is a matrix with real entries that satisfies the condition where denotes the transpose of and is a fixed nonsingular, skew-symmetric matrix. This definition can be extended to matrices with entries in other fields, such as the complex numbers, finite fields, p-adic numbers, and function fields. Typically is chosen to be the block matrix where is the identity matrix. The matrix has determinant and its inverse is . Properties Generators for symplectic matrices Every symplectic matrix has determinant , and the symplectic matrices with real entries form a subgroup of the general linear group under matrix multiplication since being symplectic is a property stable under matrix multiplication. Topologically, this symplectic group is a connected noncompact real Lie group of real dimension , and is denoted . The symplectic group can be defined as the set of linear transformations that preserve the symplectic form of a real symplectic vector space. This symplectic group has a distinguished set of generators, which can be used to find all possible symplectic matrices. This includes the following sets where is the set of symmetric matrices. Then, is generated by the setp. 2 of matrices. In other words, any symplectic matrix can be constructed by multiplying matrices in and together, along with some power of . Inverse matrix Every symplectic matrix is invertible with the inverse matrix given by Furthermore, the product of two symplectic matrices is, again, a symplectic matrix. This gives the set of all symplectic matrices the structure of a group. There exists a natural manifold structure on this group which makes it into a (real or complex) Lie group called the symplectic group. Determinantal properties It follows easily from the definition that the determinant of any symplectic matrix is ±1. Actually, it turns out that the determinant is always +1 for any field. One way to see this is through the use of the Pfaffian and the identity Since and we have that . When the underlying field is real or complex, one can also show this by factoring the inequality . Block form of symplectic matrices Suppose Ω is given in the standard form and let be a block matrix given by where are matrices. The condition for to be symplectic is equivalent to the two following equivalent conditions symmetric, and symmetric, and The second condition comes from the fact that if is symplectic, then is also symplectic. When these conditions reduce to the single condition . Thus a matrix is symplectic iff it has unit determinant. Inverse matrix of block matrix With in standard form, the inverse of is given by The group has dimension . This can be seen by noting that is anti-symmetric. Since the space of anti-symmetric matrices has dimension the identity imposes constraints on the coefficients of and leaves with independent coefficients. Symplectic transformations In the abstract formulation of linear algebra, matrices are replaced with linear transformations of finite-dimensional vector spaces. The abstract analog of a symplectic matrix is a symplectic transformation of a symplectic vector space. Briefly, a symplectic vector space is a -dimensional vector space equipped with a nondegenerate, skew-symmetric bilinear form called the symplectic form. A symplectic transformation is then a linear transformation which preserves , i.e. Fixing a basis for , can be written as a matrix and as a matrix . 
The condition that be a symplectic transformation is precisely the condition that M be a symplectic matrix: Under a change of basis, represented by a matrix A, we have One can always bring to either the standard form given in the introduction or the block diagonal form described below by a suitable choice of A. The matrix Ω Symplectic matrices are defined relative to a fixed nonsingular, skew-symmetric matrix . As explained in the previous section, can be thought of as the coordinate representation of a nondegenerate skew-symmetric bilinear form. It is a basic result in linear algebra that any two such matrices differ from each other by a change of basis. The most common alternative to the standard given above is the block diagonal form This choice differs from the previous one by a permutation of basis vectors. Sometimes the notation is used instead of for the skew-symmetric matrix. This is a particularly unfortunate choice as it leads to confusion with the notion of a complex structure, which often has the same coordinate expression as but represents a very different structure. A complex structure is the coordinate representation of a linear transformation that squares to , whereas is the coordinate representation of a nondegenerate skew-symmetric bilinear form. One could easily choose bases in which is not skew-symmetric or does not square to . Given a hermitian structure on a vector space, and are related via where is the metric. That and usually have the same coordinate expression (up to an overall sign) is simply a consequence of the fact that the metric g is usually the identity matrix. Diagonalization and decomposition For any positive definite symmetric real symplectic matrix , there is a symplectic unitary , such thatwhere the diagonal elements of are the eigenvalues of . Any real symplectic matrix has a polar decomposition of the form:where and Any real symplectic matrix can be decomposed as a product of three matrices:where and are both symplectic and orthogonal, and is positive-definite and diagonal. This decomposition is closely related to the singular value decomposition of a matrix and is known as an 'Euler' or 'Bloch-Messiah' decomposition. The set of orthogonal symplectic matrices forms a (maximal) compact subgroup of the symplectic group . This set is isomorphic to the set of unitary matrices of dimension , . Every symplectic orthogonal matrix can be written as with . This equation implies that every symplectic orthogonal matrix has determinant equal to +1 and thus that this is true for all symplectic matrices as its polar decomposition is itself given in terms symplectic matrices. Complex matrices If instead M is a matrix with complex entries, the definition is not standard throughout the literature. Many authors adjust the definition above to where M* denotes the conjugate transpose of M. In this case, the determinant may not be 1, but will have absolute value 1. In the 2×2 case (n=1), M will be the product of a real symplectic matrix and a complex number of absolute value 1. Other authors retain the definition () for complex matrices and call matrices satisfying () conjugate symplectic. Applications Transformations described by symplectic matrices play an important role in quantum optics and in continuous-variable quantum information theory. For instance, symplectic matrices can be used to describe Gaussian (Bogoliubov) transformations of a quantum state of light. 
In turn, the Bloch-Messiah decomposition () means that such an arbitrary Gaussian transformation can be represented as a set of two passive linear-optical interferometers (corresponding to orthogonal matrices O and O' ) intermitted by a layer of active non-linear squeezing transformations (given in terms of the matrix D). In fact, one can circumvent the need for such in-line active squeezing transformations if two-mode squeezed vacuum states are available as a prior resource only. See also Symplectic vector space Symplectic group Symplectic representation Orthogonal matrix Unitary matrix Hamiltonian mechanics Linear complex structure Williamson theorem References Matrices Symplectic geometry
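Returning to the block form given earlier, a small numerical sketch (standard Ω assumed): for M = [[A, B], [C, D]], the symplectic condition is equivalent to AᵀC and BᵀD being symmetric together with AᵀD − CᵀB = I, which the check below compares against the defining relation MᵀΩM = Ω.

```python
import numpy as np

def symplectic_blocks_ok(A, B, C, D, tol=1e-10):
    """Block test for [[A, B], [C, D]] against the standard Omega."""
    ok1 = np.allclose(A.T @ C, (A.T @ C).T, atol=tol)             # A^T C symmetric
    ok2 = np.allclose(B.T @ D, (B.T @ D).T, atol=tol)             # B^T D symmetric
    ok3 = np.allclose(A.T @ D - C.T @ B, np.eye(A.shape[0]), atol=tol)
    return ok1 and ok2 and ok3

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
S = np.array([[1.0, -2.0], [-2.0, 0.5]])                          # symmetric
M = np.block([[np.eye(n), np.zeros((n, n))], [S, np.eye(n)]])     # a symplectic shear

A, B, C, D = M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]
assert symplectic_blocks_ok(A, B, C, D)
assert np.allclose(M.T @ Omega @ M, Omega)           # agrees with the defining relation
```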
Symplectic matrix
[ "Mathematics" ]
1,567
[ "Matrices (mathematics)", "Mathematical objects" ]
173,993
https://en.wikipedia.org/wiki/Unitary%20group
In mathematics, the unitary group of degree n, denoted U(n), is the group of unitary matrices, with the group operation of matrix multiplication. The unitary group is a subgroup of the general linear group , and it has as a subgroup the special unitary group, consisting of those unitary matrices with determinant 1. In the simple case , the group U(1) corresponds to the circle group, isomorphic to the set of all complex numbers that have absolute value 1, under multiplication. All the unitary groups contain copies of this group. The unitary group U(n) is a real Lie group of dimension n2. The Lie algebra of U(n) consists of skew-Hermitian matrices, with the Lie bracket given by the commutator. The general unitary group, also called the group of unitary similitudes, consists of all matrices A such that A∗A is a nonzero multiple of the identity matrix, and is just the product of the unitary group with the group of all positive multiples of the identity matrix. Unitary groups may also be defined over fields other than the complex numbers. The hyperorthogonal group is an archaic name for the unitary group, especially over finite fields. Properties Since the determinant of a unitary matrix is a complex number with norm 1, the determinant gives a group homomorphism The kernel of this homomorphism is the set of unitary matrices with determinant 1. This subgroup is called the special unitary group, denoted SU(n). We then have a short exact sequence of Lie groups: The above map U(n) to U(1) has a section: we can view U(1) as the subgroup of U(n) that are diagonal with eiθ in the upper left corner and 1 on the rest of the diagonal. Therefore U(n) is a semidirect product of U(1) with SU(n). The unitary group U(n) is not abelian for . The center of U(n) is the set of scalar matrices λI with ; this follows from Schur's lemma. The center is then isomorphic to U(1). Since the center of U(n) is a 1-dimensional abelian normal subgroup of U(n), the unitary group is not semisimple, but it is reductive. Topology The unitary group U(n) is endowed with the relative topology as a subset of , the set of all complex matrices, which is itself homeomorphic to a 2n2-dimensional Euclidean space. As a topological space, U(n) is both compact and connected. To show that U(n) is connected, recall that any unitary matrix A can be diagonalized by another unitary matrix S. Any diagonal unitary matrix must have complex numbers of absolute value 1 on the main diagonal. We can therefore write A path in U(n) from the identity to A is then given by The unitary group is not simply connected; the fundamental group of U(n) is infinite cyclic for all n: To see this, note that the above splitting of U(n) as a semidirect product of SU(n) and U(1) induces a topological product structure on U(n), so that Now the first unitary group U(1) is topologically a circle, which is well known to have a fundamental group isomorphic to Z, whereas SU(n) is simply connected. The determinant map induces an isomorphism of fundamental groups, with the splitting inducing the inverse. 
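A quick numerical sketch of these groups (the QR-based construction is a standard numerical recipe and an assumption of this example, not something stated above): orthonormalizing a complex Gaussian matrix gives a unitary matrix, whose determinant lies on the unit circle; dividing out an n-th root of the determinant lands in SU(n).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(Z)
U = Q * (np.diag(R) / np.abs(np.diag(R)))         # phase-fixed QR: a random unitary matrix

assert np.allclose(U.conj().T @ U, np.eye(n))     # U* U = I
assert np.isclose(abs(np.linalg.det(U)), 1.0)     # det U lies on the unit circle

V = U / np.linalg.det(U) ** (1.0 / n)             # divide by an n-th root of det U
assert np.isclose(np.linalg.det(V), 1.0)          # V now has determinant 1, i.e. lies in SU(n)
```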
The Weyl group of U(n) is the symmetric group Sn, acting on the diagonal torus by permuting the entries: Related groups 2-out-of-3 property The unitary group is the 3-fold intersection of the orthogonal, complex, and symplectic groups: Thus a unitary structure can be seen as an orthogonal structure, a complex structure, and a symplectic structure, which are required to be compatible (meaning that one uses the same J in the complex structure and the symplectic form, and that this J is orthogonal; writing all the groups as matrix groups fixes a J (which is orthogonal) and ensures compatibility). In fact, it is the intersection of any two of these three; thus a compatible orthogonal and complex structure induce a symplectic structure, and so forth. At the level of equations, this can be seen as follows: Any two of these equations implies the third. At the level of forms, this can be seen by decomposing a Hermitian form into its real and imaginary parts: the real part is symmetric (orthogonal), and the imaginary part is skew-symmetric (symplectic)—and these are related by the complex structure (which is the compatibility). On an almost Kähler manifold, one can write this decomposition as , where h is the Hermitian form, g is the Riemannian metric, i is the almost complex structure, and ω is the almost symplectic structure. From the point of view of Lie groups, this can partly be explained as follows: O(2n) is the maximal compact subgroup of , and U(n) is the maximal compact subgroup of both and Sp(2n). Thus the intersection or is the maximal compact subgroup of both of these, so U(n). From this perspective, what is unexpected is the intersection . Special unitary and projective unitary groups Just as the orthogonal group O(n) has the special orthogonal group SO(n) as subgroup and the projective orthogonal group PO(n) as quotient, and the projective special orthogonal group PSO(n) as subquotient, the unitary group U(n) has associated to it the special unitary group SU(n), the projective unitary group PU(n), and the projective special unitary group PSU(n). These are related as by the commutative diagram at right; notably, both projective groups are equal: . The above is for the classical unitary group (over the complex numbers) – for unitary groups over finite fields, one similarly obtains special unitary and projective unitary groups, but in general . G-structure: almost Hermitian In the language of G-structures, a manifold with a U(n)-structure is an almost Hermitian manifold. Generalizations From the point of view of Lie theory, the classical unitary group is a real form of the Steinberg group 2An, which is an algebraic group that arises from the combination of the diagram automorphism of the general linear group (reversing the Dynkin diagram An, which corresponds to transpose inverse) and the field automorphism of the extension C/R (namely complex conjugation). Both these automorphisms are automorphisms of the algebraic group, have order 2, and commute, and the unitary group is the fixed points of the product automorphism, as an algebraic group. The classical unitary group is a real form of this group, corresponding to the standard Hermitian form Ψ, which is positive definite. 
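Before moving on to generalizations of the unitary group, the 2-out-of-3 property described above can be checked numerically. In the sketch below (conventions chosen for this example), a unitary matrix U = X + iY is written as the real 2n × 2n matrix [[X, −Y], [Y, X]]; the result preserves the Euclidean metric, preserves the standard symplectic form, and commutes with the complex structure J.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)                        # a unitary matrix U = X + iY
X, Y = U.real, U.imag

M = np.block([[X, -Y], [Y, X]])               # U realified as a 2n x 2n real matrix
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

assert np.allclose(M.T @ M, np.eye(2 * n))    # orthogonal: preserves the metric g
assert np.allclose(M.T @ J @ M, J)            # symplectic: preserves the form built from J
assert np.allclose(M @ J, J @ M)              # complex-linear: commutes with J
```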
This can be generalized in a number of ways: generalizing to other Hermitian forms yields indefinite unitary groups ; the field extension can be replaced by any degree 2 separable algebra, most notably a degree 2 extension of a finite field; generalizing to other diagrams yields other groups of Lie type, namely the other Steinberg groups 2Dn, 2E6, 3D4, (in addition to 2An) and Suzuki–Ree groups considering a generalized unitary group as an algebraic group, one can take its points over various algebras. Indefinite forms Analogous to the indefinite orthogonal groups, one can define an indefinite unitary group, by considering the transforms that preserve a given Hermitian form, not necessarily positive definite (but generally taken to be non-degenerate). Here one is working with a vector space over the complex numbers. Given a Hermitian form Ψ on a complex vector space V, the unitary group U(Ψ) is the group of transforms that preserve the form: the transform M such that for all . In terms of matrices, representing the form by a matrix denoted Φ, this says that . Just as for symmetric forms over the reals, Hermitian forms are determined by signature, and are all unitarily congruent to a diagonal form with p entries of 1 on the diagonal and q entries of −1. The non-degenerate assumption is equivalent to . In a standard basis, this is represented as a quadratic form as: and as a symmetric form as: The resulting group is denoted . Finite fields Over the finite field with elements, Fq, there is a unique quadratic extension field, Fq2, with order 2 automorphism (the rth power of the Frobenius automorphism). This allows one to define a Hermitian form on an Fq2 vector space V, as an Fq-bilinear map such that and for . Further, all non-degenerate Hermitian forms on a vector space over a finite field are unitarily congruent to the standard one, represented by the identity matrix; that is, any Hermitian form is unitarily equivalent to where represent the coordinates of in some particular Fq2-basis of the n-dimensional space V . Thus one can define a (unique) unitary group of dimension n for the extension Fq2/Fq, denoted either as or depending on the author. The subgroup of the unitary group consisting of matrices of determinant 1 is called the special unitary group and denoted or . For convenience, this article will use the convention. The center of has order and consists of the scalar matrices that are unitary, that is those matrices cIV with . The center of the special unitary group has order and consists of those unitary scalars which also have order dividing n. The quotient of the unitary group by its center is called the projective unitary group, , and the quotient of the special unitary group by its center is the projective special unitary group . In most cases ( and ), is a perfect group and is a finite simple group, . Degree-2 separable algebras More generally, given a field k and a degree-2 separable k-algebra K (which may be a field extension but need not be), one can define unitary groups with respect to this extension. First, there is a unique k-automorphism of K which is an involution and fixes exactly k ( if and only if ). This generalizes complex conjugation and the conjugation of degree 2 finite field extensions, and allows one to define Hermitian forms and unitary groups as above. Algebraic groups The equations defining a unitary group are polynomial equations over k (but not over K): for the standard form , the equations are given in matrices as , where is the conjugate transpose. 
Given a different form, they are . The unitary group is thus an algebraic group, whose points over a k-algebra R are given by: For the field extension C/R and the standard (positive definite) Hermitian form, these yield an algebraic group with real and complex points given by: In fact, the unitary group is a linear algebraic group. Unitary group of a quadratic module The unitary group of a quadratic module is a generalisation of the linear algebraic group U just defined, which incorporates as special cases many different classical algebraic groups. The definition goes back to Anthony Bak's thesis. To define it, one has to define quadratic modules first: Let R be a ring with anti-automorphism J, such that for all r in R and . Define Let be an additive subgroup of R, then Λ is called form parameter if and . A pair such that R is a ring and Λ a form parameter is called form ring. Let M be an R-module and f a J-sesquilinear form on M (i.e., for any and ). Define and , then f is said to define the Λ-quadratic form on M. A quadratic module over is a triple such that M is an R-module and is a Λ-quadratic form. To any quadratic module defined by a J-sesquilinear form f on M over a form ring one can associate the unitary group The special case where , with J any non-trivial involution (i.e., and gives back the "classical" unitary group (as an algebraic group). Polynomial invariants The unitary groups are the automorphisms of two polynomials in real non-commutative variables: These are easily seen to be the real and imaginary parts of the complex form . The two invariants separately are invariants of O(2n) and Sp(2n). Combined they make the invariants of U(n) which is a subgroup of both these groups. The variables must be non-commutative in these invariants otherwise the second polynomial is identically zero. Classifying space The classifying space for U(n) is described in the article Classifying space for U(n). See also Special unitary group Projective unitary group Orthogonal group Symplectic group Notes > References Lie groups
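Returning to the indefinite forms described above, a tiny sketch (the particular matrix is chosen here only for illustration): with Φ = diag(1, −1), a real "hyperbolic rotation" satisfies M*ΦM = Φ, so it belongs to U(1, 1) and preserves the indefinite quantity |z₁|² − |z₂|², even though it is not unitary in the ordinary sense.

```python
import numpy as np

Phi = np.diag([1.0, -1.0])                    # Hermitian form of signature (1, 1)
t = 0.9
M = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])      # an element of U(1, 1)

assert np.allclose(M.conj().T @ Phi @ M, Phi)           # preserves the indefinite form
assert not np.allclose(M.conj().T @ M, np.eye(2))       # but is not an ordinary unitary

z = np.array([2.0 + 1.0j, 0.5 - 0.3j])
form = lambda v: v.conj() @ Phi @ v                     # |z1|^2 - |z2|^2
assert np.isclose(form(M @ z), form(z))                 # the indefinite "norm" is conserved
```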
Unitary group
[ "Mathematics" ]
2,714
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
173,997
https://en.wikipedia.org/wiki/Special%20unitary%20group
In mathematics, the special unitary group of degree , denoted , is the Lie group of unitary matrices with determinant 1. The matrices of the more general unitary group may have complex determinants with absolute value 1, rather than real 1 in the special case. The group operation is matrix multiplication. The special unitary group is a normal subgroup of the unitary group , consisting of all unitary matrices. As a compact classical group, is the group that preserves the standard inner product on . It is itself a subgroup of the general linear group, The groups find wide application in the Standard Model of particle physics, especially in the electroweak interaction and in quantum chromodynamics. The simplest case, , is the trivial group, having only a single element. The group is isomorphic to the group of quaternions of norm 1, and is thus diffeomorphic to the 3-sphere. Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign), there is a surjective homomorphism from to the rotation group whose kernel is . Since the quaternions can be identified as the even subalgebra of the Clifford Algebra , is in fact identical to one of the symmetry groups of spinors, Spin(3), that enables a spinor presentation of rotations. Properties The special unitary group is a strictly real Lie group (vs. a more general complex Lie group). Its dimension as a real manifold is . Topologically, it is compact and simply connected. Algebraically, it is a simple Lie group (meaning its Lie algebra is simple; see below). The center of is isomorphic to the cyclic group , and is composed of the diagonal matrices for an th root of unity and the identity matrix. Its outer automorphism group for is while the outer automorphism group of is the trivial group. A maximal torus of rank is given by the set of diagonal matrices with determinant . The Weyl group of is the symmetric group , which is represented by signed permutation matrices (the signs being necessary to ensure that the determinant is ). The Lie algebra of , denoted by , can be identified with the set of traceless anti‑Hermitian complex matrices, with the regular commutator as a Lie bracket. Particle physicists often use a different, equivalent representation: The set of traceless Hermitian complex matrices with Lie bracket given by times the commutator. Lie algebra The Lie algebra of consists of skew-Hermitian matrices with trace zero. This (real) Lie algebra has dimension . More information about the structure of this Lie algebra can be found below in . Fundamental representation In the physics literature, it is common to identify the Lie algebra with the space of trace-zero Hermitian (rather than the skew-Hermitian) matrices. That is to say, the physicists' Lie algebra differs by a factor of from the mathematicians'. With this convention, one can then choose generators that are traceless Hermitian complex matrices, where: where the are the structure constants and are antisymmetric in all indices, while the -coefficients are symmetric in all indices. As a consequence, the commutator is: and the corresponding anticommutator is: The factor of in the commutation relation arises from the physics convention and is not present when using the mathematicians' convention. 
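For n = 2 these relations are easy to verify directly. The sketch below uses the physics convention with generators taken to be T_a = σ_a/2 (an assumption of this example, consistent with the Pauli-matrix choice described next): the commutators reproduce [T_a, T_b] = iε_abc T_c, and the traces give the normalization Tr(T_a T_b) = δ_ab/2.

```python
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
T = sigma / 2                                    # physics-convention generators of su(2)

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1  # Levi-Civita symbol

for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * sum(eps[a, b, c] * T[c] for c in range(3))
        assert np.allclose(comm, rhs)                                   # [T_a, T_b] = i eps_abc T_c
        assert np.isclose(np.trace(T[a] @ T[b]).real, 0.5 * (a == b))   # Tr(T_a T_b) = delta_ab / 2
```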
The conventional normalization condition is The generators satisfy the Jacobi identity: By convention, in the physics literature the generators are defined as the traceless Hermitian complex matrices with a prefactor: for the group, the generators are chosen as where are the Pauli matrices, while for the case of one defines where are the Gell-Mann matrices. With these definitions, the generators satisfy the following normalization condition: Adjoint representation In the -dimensional adjoint representation, the generators are represented by matrices, whose elements are defined by the structure constants themselves: The group SU(2) Using matrix multiplication for the binary operation, forms a group, where the overline denotes complex conjugation. Diffeomorphism with the 3-sphere S3 If we consider as a pair in where and , then the equation becomes This is the equation of the 3-sphere S3. This can also be seen using an embedding: the map where denotes the set of 2 by 2 complex matrices, is an injective real linear map (by considering diffeomorphic to and diffeomorphic to ). Hence, the restriction of to the 3-sphere (since modulus is 1), denoted , is an embedding of the 3-sphere onto a compact submanifold of , namely . Therefore, as a manifold, is diffeomorphic to , which shows that is simply connected and that can be endowed with the structure of a compact, connected Lie group. Isomorphism with group of versors Quaternions of norm 1 are called versors since they generate the rotation group SO(3): The matrix: can be mapped to the quaternion This map is in fact a group isomorphism. Additionally, the determinant of the matrix is the squared norm of the corresponding quaternion. Clearly any matrix in is of this form and, since it has determinant , the corresponding quaternion has norm . Thus is isomorphic to the group of versors. Relation to spatial rotations Every versor is naturally associated to a spatial rotation in 3 dimensions, and the product of versors is associated to the composition of the associated rotations. Furthermore, every rotation arises from exactly two versors in this fashion. In short: there is a 2:1 surjective homomorphism from to ; consequently is isomorphic to the quotient group , the manifold underlying is obtained by identifying antipodal points of the 3-sphere , and is the universal cover of . Lie algebra The Lie algebra of consists of skew-Hermitian matrices with trace zero. Explicitly, this means The Lie algebra is then generated by the following matrices, which have the form of the general element specified above. This can also be written as using the Pauli matrices. These satisfy the quaternion relationships and The commutator bracket is therefore specified by The above generators are related to the Pauli matrices by and This representation is routinely used in quantum mechanics to represent the spin of fundamental particles such as electrons. They also serve as unit vectors for the description of our 3 spatial dimensions in loop quantum gravity. They also correspond to the Pauli X, Y, and Z gates, which are standard generators for the single qubit gates, corresponding to 3d rotations about the axes of the Bloch sphere. The Lie algebra serves to work out the representations of . SU(3) The group is an 8-dimensional simple Lie group consisting of all unitary matrices with determinant 1. Topology The group is a simply-connected, compact Lie group. Its topological structure can be understood by noting that acts transitively on the unit sphere in . 
The stabilizer of an arbitrary point in the sphere is isomorphic to , which topologically is a 3-sphere. It then follows that is a fiber bundle over the base with fiber . Since the fibers and the base are simply connected, the simple connectedness of then follows by means of a standard topological result (the long exact sequence of homotopy groups for fiber bundles). The -bundles over are classified by since any such bundle can be constructed by looking at trivial bundles on the two hemispheres and looking at the transition function on their intersection, which is a copy of , so Then, all such transition functions are classified by homotopy classes of maps and as rather than , cannot be the trivial bundle , and therefore must be the unique nontrivial (twisted) bundle. This can be shown by looking at the induced long exact sequence on homotopy groups. Representation theory The representation theory of is well-understood. Descriptions of these representations, from the point of view of its complexified Lie algebra , may be found in the articles on Lie algebra representations or the Clebsch–Gordan coefficients for . Lie algebra The generators, , of the Lie algebra of in the defining (particle physics, Hermitian) representation, are where , the Gell-Mann matrices, are the analog of the Pauli matrices for : These span all traceless Hermitian matrices of the Lie algebra, as required. Note that are antisymmetric. They obey the relations or, equivalently, The are the structure constants of the Lie algebra, given by while all other not related to these by permutation are zero. In general, they vanish unless they contain an odd number of indices from the set . The symmetric coefficients take the values They vanish if the number of indices from the set is odd. A generic group element generated by a traceless 3×3 Hermitian matrix , normalized as , can be expressed as a second order matrix polynomial in : LP where Lie algebra structure As noted above, the Lie algebra of consists of skew-Hermitian matrices with trace zero. The complexification of the Lie algebra is , the space of all complex matrices with trace zero. A Cartan subalgebra then consists of the diagonal matrices with trace zero, which we identify with vectors in whose entries sum to zero. The roots then consist of all the permutations of . A choice of simple roots is So, is of rank and its Dynkin diagram is given by , a chain of nodes: .... Its Cartan matrix is Its Weyl group or Coxeter group is the symmetric group , the symmetry group of the -simplex. Generalized special unitary group For a field , the generalized special unitary group over F, , is the group of all linear transformations of determinant 1 of a vector space of rank over which leave invariant a nondegenerate, Hermitian form of signature . This group is often referred to as the special unitary group of signature over . The field can be replaced by a commutative ring, in which case the vector space is replaced by a free module. Specifically, fix a Hermitian matrix of signature in , then all satisfy Often one will see the notation without reference to a ring or field; in this case, the ring or field being referred to is and this gives one of the classical Lie groups. The standard choice for when is However, there may be better choices for for certain dimensions which exhibit more behaviour under restriction to subrings of . 
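The su(3) conventions can be spot-checked the same way. In the sketch below the eight Gell-Mann matrices are written out explicitly (their entries are standard, but they are supplied here as part of the example): they are traceless, Hermitian, satisfy Tr(λ_a λ_b) = 2δ_ab, and give the structure constant f₁₂₃ = 1.

```python
import numpy as np

l = np.zeros((8, 3, 3), dtype=complex)            # the eight Gell-Mann matrices
l[0, 0, 1] = l[0, 1, 0] = 1
l[1, 0, 1], l[1, 1, 0] = -1j, 1j
l[2, 0, 0], l[2, 1, 1] = 1, -1
l[3, 0, 2] = l[3, 2, 0] = 1
l[4, 0, 2], l[4, 2, 0] = -1j, 1j
l[5, 1, 2] = l[5, 2, 1] = 1
l[6, 1, 2], l[6, 2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

for a in range(8):
    assert np.isclose(np.trace(l[a]), 0)                          # traceless
    assert np.allclose(l[a], l[a].conj().T)                       # Hermitian
    for b in range(8):
        assert np.isclose(np.trace(l[a] @ l[b]), 2 * (a == b))    # Tr = 2 delta_ab

comm = l[0] @ l[1] - l[1] @ l[0]                  # [lambda_1, lambda_2]
f_123 = np.trace(comm @ l[2]) / (4j)              # indices here are 0-based
assert np.isclose(f_123, 1.0)
```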
Example An important example of this type of group is the Picard modular group which acts (projectively) on complex hyperbolic space of dimension two, in the same way that acts (projectively) on real hyperbolic space of dimension two. In 2005 Gábor Francsics and Peter Lax computed an explicit fundamental domain for the action of this group on . A further example is , which is isomorphic to . Important subgroups In physics the special unitary group is used to represent fermionic symmetries. In theories of symmetry breaking it is important to be able to find the subgroups of the special unitary group. Subgroups of that are important in GUT physics are, for , where × denotes the direct product and , known as the circle group, is the multiplicative group of all complex numbers with absolute value 1. For completeness, there are also the orthogonal and symplectic subgroups, Since the rank of is and of is 1, a useful check is that the sum of the ranks of the subgroups is less than or equal to the rank of the original group. is a subgroup of various other Lie groups, See Spin group and Simple Lie group for , , and . There are also the accidental isomorphisms: , , and . One may finally mention that is the double covering group of , a relation that plays an important role in the theory of rotations of 2-spinors in non-relativistic quantum mechanics. SU(1, 1) where denotes the complex conjugate of the complex number . This group is isomorphic to and where the numbers separated by a comma refer to the signature of the quadratic form preserved by the group. The expression in the definition of is an Hermitian form which becomes an isotropic quadratic form when and are expanded with their real components. An early appearance of this group was as the "unit sphere" of coquaternions, introduced by James Cockle in 1852. Let Then the 2×2 identity matrix, and and the elements and all anticommute, as in quaternions. Also is still a square root of (negative of the identity matrix), whereas are not, unlike in quaternions. For both quaternions and coquaternions, all scalar quantities are treated as implicit multiples of and notated as . The coquaternion with scalar , has conjugate similar to Hamilton's quaternions. The quadratic form is Note that the 2-sheet hyperboloid corresponds to the imaginary units in the algebra so that any point on this hyperboloid can be used as a pole of a sinusoidal wave according to Euler's formula. The hyperboloid is stable under , illustrating the isomorphism with . The variability of the pole of a wave, as noted in studies of polarization, might view elliptical polarization as an exhibit of the elliptical shape of a wave with The Poincaré sphere model used since 1892 has been compared to a 2-sheet hyperboloid model, and the practice of interferometry has been introduced. When an element of is interpreted as a Möbius transformation, it leaves the unit disk stable, so this group represents the motions of the Poincaré disk model of hyperbolic plane geometry. Indeed, for a point in the complex projective line, the action of is given by since in projective coordinates Writing complex number arithmetic shows where Therefore, so that their ratio lies in the open disk. See also Unitary group Projective special unitary group, Orthogonal group Generalizations of Pauli matrices Representation theory of SU(2) Footnotes Citations References Lie groups Mathematical physics
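A final sketch for SU(1, 1) (the element used is an arbitrary illustration): a matrix of the form [[u, v], [v̄, ū]] with |u|² − |v|² = 1 preserves the form diag(1, −1), and its Möbius action z ↦ (uz + v)/(v̄z + ū) maps the open unit disk into itself, matching the Poincaré-disk picture above.

```python
import numpy as np

u, v = 1.4 + 0.3j, 0.9 - 0.5j
scale = np.sqrt(abs(u)**2 - abs(v)**2)            # enforce |u|^2 - |v|^2 = 1
u, v = u / scale, v / scale

M = np.array([[u, v], [np.conj(v), np.conj(u)]])
eta = np.diag([1.0, -1.0])
assert np.allclose(M.conj().T @ eta @ M, eta)     # M lies in SU(1, 1)
assert np.isclose(np.linalg.det(M), 1.0)

mobius = lambda z: (u * z + v) / (np.conj(v) * z + np.conj(u))
rng = np.random.default_rng(3)
for _ in range(5):
    z = 0.99 * np.sqrt(rng.uniform()) * np.exp(2j * np.pi * rng.uniform())
    assert abs(z) < 1 and abs(mobius(z)) < 1      # the open unit disk is preserved
```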
Special unitary group
[ "Physics", "Mathematics" ]
2,888
[ "Lie groups", "Mathematical structures", "Applied mathematics", "Theoretical physics", "Algebraic structures", "Mathematical physics" ]
174,009
https://en.wikipedia.org/wiki/Hall%E2%80%93H%C3%A9roult%20process
The Hall–Héroult process is the major industrial process for smelting aluminium. It involves dissolving aluminium oxide (alumina) (obtained most often from bauxite, aluminium's chief ore, through the Bayer process) in molten cryolite and electrolyzing the molten salt bath, typically in a purpose-built cell. The process conducted at an industrial scale, happens at 940–980 °C (1700 to 1800°F) and produces aluminium with a purity of 99.5-99.8%. Recycling aluminum, which does not require electrolysis, is thus not treated using this method. The Hall–Héroult process consumes substantial electrical energy, and its electrolysis stage can produce significant amounts of carbon dioxide if the electricity is generated from high-emission sources. Furthermore, the process generates fluorocarbon compounds as byproducts, contributing to both air pollution and climate change. Process Difficulties faced Elemental aluminium cannot be produced by the electrolysis of an aqueous aluminium salt, because hydronium ions readily oxidize elemental aluminium. Although a molten aluminium salt could be used instead, aluminium oxide has a melting point of 2072 °C (3762°F) so electrolysing it is impractical. In the Hall–Héroult process, alumina, Al2O3, is dissolved in molten synthetic cryolite, Na3AlF6, to lower its melting point for easier electrolysis. The carbon source is generally a coke (fossil fuel). Theory In the Hall–Héroult process the following simplified reactions take place at the carbon electrodes: Cathode: Anode: Overall: In reality, much more CO2 is formed at the anode than CO: Pure cryolite has a melting point of (1848°F). With a small percentage of alumina dissolved in it, its melting point drops to about 1000 °C (1832°F). Besides having a relatively low melting point, cryolite is used as an electrolyte because, among other things, it also dissolves alumina well, conducts electricity, dissociates electrolytically at higher voltage than alumina, and also has a lower density than aluminum at the temperatures required by the electrolysis. Aluminium fluoride (AlF3) is usually added to the electrolyte. The ratio NaF/AlF3 is called the cryolite ratio and it is 3 in pure cryolite. In industrial production, AlF3 is added so that the cryolite ratio is 2–3 to further reduce the melting point, so that the electrolysis can happen at temperatures between 940 and 980 °C (1700 to 1800°F). The density of liquid aluminum is 2.3 g/ml at temperatures between 950 and 1000 °C (1750° to 1830°F). The density of the electrolyte should be less than 2.1 g/ml, so that the molten aluminum separates from the electrolyte and settles properly to the bottom of the electrolysis cell. In addition to AlF3, other additives like lithium fluoride may be added to alter different properties (melting point, density, conductivity etc.) of the electrolyte. The mixture is electrolysed by passing a low voltage (under 5 V) direct current at through it. This causes liquid aluminium to be deposited at the cathode, while the oxygen from the alumina combines with carbon from the anode to produce mostly carbon dioxide. The theoretical minimum energy requirement for this process is 6.23 kWh/(kg of Al), but the process commonly requires 15.37 kWh. Cell operation Cells in factories are operated 24 hours per day so that the molten material in them will not solidify. Temperature within the cell is maintained via electrical resistance. 
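The energy figures quoted above can be reproduced to a first approximation from Faraday's law. In the sketch below the cell voltages and the current efficiency are illustrative assumptions, not values from the article: reducing Al3+ to Al takes three electrons per atom, which fixes the charge needed per kilogram, and multiplying by a voltage gives an energy per kilogram in the same range as the quoted numbers.

```python
# Back-of-the-envelope Hall-Heroult energy estimate from Faraday's law.
# The cell voltages and current efficiency below are illustrative assumptions.
F = 96485.0            # Faraday constant, C/mol
M_AL = 0.026982        # molar mass of aluminium, kg/mol
Z = 3                  # electrons per atom: Al3+ + 3 e- -> Al

charge_per_kg = Z * F / M_AL                       # ~1.07e7 C per kg of aluminium

for volts, efficiency in [(2.1, 1.00), (4.2, 0.92)]:
    kwh_per_kg = charge_per_kg * volts / (efficiency * 3.6e6)
    print(f"{volts:.1f} V, {efficiency:.0%} current efficiency: {kwh_per_kg:.1f} kWh/kg")
# roughly 6.3 kWh/kg near the ideal voltage and ~14 kWh/kg for a working cell,
# in line with the 6.23 and ~15 kWh/kg figures quoted above.
```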
Oxidation of the carbon anode increases the electrical efficiency at a cost of consuming the carbon electrodes and producing carbon dioxide. While solid cryolite is denser than solid aluminium at room temperature, liquid aluminium is denser than molten cryolite at temperatures around . The aluminium sinks to the bottom of the electrolytic cell, where it is periodically collected. The liquid aluminium is removed from the cell via a siphon every 1 to 3 days in order to avoid having to use extremely high temperature valves and pumps. Alumina is added to the cells as the aluminum is removed. Collected aluminium from different cells in a factory is finally melted together to ensure uniform product and made into metal sheets. The electrolytic mixture is sprinkled with coke to prevent the anode's oxidation by the oxygen involved. The cell produces gases at the anode. The exhaust is primarily CO2 produced from the anode consumption and hydrogen fluoride (HF) from the cryolite and flux (AlF3). In modern facilities, fluorides are almost completely recycled to the cells and therefore used again in the electrolysis. Escaped HF can be neutralized to its sodium salt, sodium fluoride. Particulates are captured using electrostatic or bag filters. The CO2 is usually vented into the atmosphere. Agitation of the molten material in the cell increases its production rate at the expense of an increase in cryolite impurities in the product. Properly designed cells can leverage magnetohydrodynamic forces induced by the electrolysing current to agitate the electrolyte. In non-agitating static pool cells, the impurities either rise to the top of the metallic aluminium, or sink to the bottom, leaving high-purity aluminium in the middle area. Electrodes Electrodes in cells are mostly coke which has been purified at high temperatures. Pitch resin or tar is used as a binder. The materials most often used in anodes, coke and pitch resin, are mainly residues from the petroleum industry and need to be of high enough purity so no impurities end up into the molten aluminum or the electrolyte. There are two primary anode technologies using the Hall–Héroult process: Söderberg technology and prebaked technology. In cells using Söderberg or self-baking anodes, there is a single anode per electrolysis cell. The anode is contained within a frame and, as the bottom of the anode turns mainly into CO2 during the electrolysis, the anode loses mass and, being amorphous, it slowly sinks within its frame. More material to the top of the anode is continuously added in the form of briquettes made from coke and pitch. The lost heat from the smelting operation is used to bake the briquettes into the carbon form required for the reaction with alumina. The baking process in Söderberg anodes during electrolysis releases more carcinogenic PAHs and other pollutants than electrolysis with prebaked anodes and, partially for this reason, prebaked anode-using cells have become more common in the aluminium industry. More alumina is added to the electrolyte from the sides of the Söderberg anode after the crust on top of the electrolyte mixture is broken. Prebaked anodes are baked in very large gas-fired ovens at high temperature before being lowered by various heavy industrial lifting systems into the electrolytic solution. There are usually 24 prebaked anodes in two rows per cell. Each anode is lowered vertically and individually by a computer, as the bottom surfaces of the anodes are eaten away during the electrolysis. 
Compared to Söderberg anodes, computer-controlled prebaked anodes can be brought closer to the molten aluminium layer at the bottom of the cell without any of them touching the layer and interfering with the electrolysis. This smaller distance decreases the resistance caused by the electrolyte mixture and increases the efficiency of prebaked anodes over Söderberg anodes. Prebake technology also has much lower risk of the anode effect (see below), but cells using it are more expensive to build and labor-intensive to use, as each prebaked anode in a cell needs to be removed and replaced once it has been used. Alumina is added to the electrolyte from between the anodes in prebake cells. Prebaked anodes contain a smaller percentage of pitch, as they need to be more solid than Söderberg anodes. The remains of prebaked anodes are used to make more new prebaked anodes. Prebaked anodes are either made in the same factory where electrolysis happens, or are brought there from elsewhere. The inside of the cell's bath is lined with cathode made from coke and pitch. Cathodes also degrade during electrolysis, but much more slowly than anodes do, and thus they need neither be as high in purity, nor be maintained as often. Cathodes are typically replaced every 2–6 years. This requires the whole cell to be shut down. Anode effect The anode effect is a situation where too many gas bubbles form at the bottom of the anode and join, forming a layer. This increases the resistance of the cell, because smaller areas of the electrolyte touch the anode. These areas of the electrolyte and anode heat up when the density of the electric current of the cell focuses to go through only them. This heats up the gas layer and causes it to expand, thus further reducing the surface area where electrolyte and anode are in contact with each other. The anode effect decreases the energy-efficiency and the aluminium production of the cell. It also induces the formation of tetrafluoromethane (CF4) in significant quantities, increases formation of CO and, to a lesser extent, also causes the formation of hexafluoroethane (C2F6). CF4 and C2F6 are not CFCs, and, although not detrimental to the ozone layer, are still potent greenhouse gases. The anode effect is mainly a problem in Söderberg technology cells, not in prebaked. History Existing need Aluminium is the most abundant metallic element in the Earth's crust, but it is rarely found in its elemental state. It occurs in many minerals, but its primary commercial source is bauxite, a mixture of hydrated aluminium oxides and compounds of other elements such as iron. Prior to the Hall–Héroult process, elemental aluminium was made by heating ore along with elemental sodium or potassium in a vacuum. The method was complicated and consumed materials that were in themselves expensive at that time. This meant that the cost to produce the small amount of aluminium made in the early 19th century was very high, higher than for gold or platinum. Bars of aluminium were exhibited alongside the French crown jewels at the Exposition Universelle of 1855, and Emperor Napoleon III of France was said to have reserved his few sets of aluminium dinner plates and eating utensils for his most honored guests. Production costs using older methods did come down, but when aluminium was selected as the material for the cap/lightning rod to sit atop the Washington Monument in Washington, D.C., it was still more expensive than silver. 
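The voltage rise during an anode effect can be rationalised with a very crude resistance model: if a gas film masks a fraction of the anode's working face, the same current is forced through a smaller active area, so the ohmic drop across the bath grows roughly as 1/(1 − f). The geometry, bath resistivity, and current below are invented illustrative numbers, not measured values, and the model ignores the electrochemical contributions to cell voltage.

```python
# Crude illustration of why gas coverage under the anode raises cell voltage.
# All numbers are invented for illustration only.
def bath_voltage_drop(current, resistivity, gap, area, covered_fraction):
    """Ohmic drop across the electrolyte when a fraction of the anode face is gas-blocked."""
    active_area = area * (1.0 - covered_fraction)
    resistance = resistivity * gap / active_area   # R = rho * d / A
    return current * resistance

I = 10_000.0      # A through one hypothetical anode
rho = 0.005       # ohm*m, assumed bath resistivity
gap = 0.045       # m, assumed anode-cathode distance
area = 0.7        # m^2, assumed anode face area

for f in (0.0, 0.5, 0.9):
    print(f"coverage {f:.0%}: ~{bath_voltage_drop(I, rho, gap, area, f):.1f} V across the bath")
```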
Independent discovery The Hall–Héroult process was invented independently and almost simultaneously in 1886 by the American chemist Charles Martin Hall and by the Frenchman Paul Héroult—both 22 years old. Some authors claim Hall was assisted by his sister Julia Brainerd Hall; however, the extent to which she was involved has been disputed. In 1888, Hall opened the first large-scale aluminium production plant in Pittsburgh. It later became the Alcoa corporation. In 1997, the Hall–Héroult process was designated a National Historic Chemical Landmark by the American Chemical Society in recognition of the importance of the process in the commercialization of aluminum. Economic impact Aluminium produced via the Hall–Héroult process, in combination with cheaper electric power, helped make aluminium (and incidentally magnesium) an inexpensive commodity rather than a precious metal. This, in turn, helped make it possible for pioneers like Hugo Junkers to utilize aluminium and aluminium-magnesium alloys to make items like metal airplanes by the thousands, or Howard Lund to make aluminium fishing boats. In 2012 it was estimated that 12.7 tons of CO2 emissions are generated per ton of aluminium produced. See also Bayer process History of aluminium Solid oxide Hall–Héroult process Hoopes process Downs cell References Further reading Grjotheim, U and Kvande, H., Introduction to Aluminium Electrolysis. Understanding the Hall–Heroult Process, Aluminium Verlag GmbH, (Germany), 1993, pp. 260. Industrial processes Chemical processes Aluminium industry Electrolysis
Hall–Héroult process
[ "Chemistry" ]
2,587
[ "Chemical processes", "Electrochemistry", "nan", "Electrolysis", "Chemical process engineering" ]
174,030
https://en.wikipedia.org/wiki/Western%20blot
The western blot (sometimes called the protein immunoblot), or western blotting, is a widely used analytical technique in molecular biology and immunogenetics to detect specific proteins in a sample of tissue homogenate or extract. Besides detecting the proteins, this technique is also utilized to visualize, distinguish, and quantify the different proteins in a complicated protein combination. Western blot technique uses three elements to achieve its task of separating a specific protein from a complex: separation by size, transfer of protein to a solid support, and marking target protein using a primary and secondary antibody to visualize. A synthetic or animal-derived antibody (known as the primary antibody) is created that recognizes and binds to a specific target protein. The electrophoresis membrane is washed in a solution containing the primary antibody, before excess antibody is washed off. A secondary antibody is added which recognizes and binds to the primary antibody. The secondary antibody is visualized through various methods such as staining, immunofluorescence, and radioactivity, allowing indirect detection of the specific target protein. Other related techniques include dot blot analysis, quantitative dot blot, immunohistochemistry and immunocytochemistry, where antibodies are used to detect proteins in tissues and cells by immunostaining, and enzyme-linked immunosorbent assay (ELISA). The name western blot is a play on the Southern blot, a technique for DNA detection named after its inventor, English biologist Edwin Southern. Similarly, detection of RNA is termed as northern blot. The term western blot was given by W. Neal Burnette in 1981, although the method itself was independently invented in 1979 by Jaime Renart, Jakob Reiser, and George Stark at Stanford University, and by Harry Towbin, Theophil Staehelin, and Julian Gordon at the Friedrich Miescher Institute in Basel, Switzerland. The Towbin group also used secondary antibodies for detection, thus resembling the actual method that is almost universally used today. Between 1979 and 2019 "it has been mentioned in the titles, abstracts, and keywords of more than 400,000 PubMed-listed publications" and may still be the most-used protein-analytical technique. Applications The western blot is extensively used in biochemistry for the qualitative detection of single proteins and protein-modifications (such as post-translational modifications). At least 8–9% of all protein-related publications are estimated to apply western blots. It is used as a general method to identify the presence of a specific single protein within a complex mixture of proteins. A semi-quantitative estimation of a protein can be derived from the size and colour intensity of a protein band on the blot membrane. In addition, applying a dilution series of a purified protein of known concentrations can be used to allow a more precise estimate of protein concentration. The western blot is routinely used for verification of protein production after cloning. It is also used in medical diagnostics, e.g., in the HIV test or BSE-Test. The confirmatory HIV test employs a western blot to detect anti-HIV antibody in a human serum sample. Proteins from known HIV-infected cells are separated and blotted on a membrane as above. Then, the serum to be tested is applied in the primary antibody incubation step; free antibody is washed away, and a secondary anti-human antibody linked to an enzyme signal is added. 
The stained bands then indicate the proteins to which the patient's serum contains antibody. A western blot is also used as the definitive test for variant Creutzfeldt–Jakob disease, a type of prion disease linked to the consumption of contaminated beef from cattle with bovine spongiform encephalopathy (BSE, commonly referred to as 'mad cow disease'). Another application is in the diagnosis of tularemia. An evaluation of the western blot's ability to detect antibodies against F. tularensis revealed that its sensitivity is almost 100% and the specificity is 99.6%. Some forms of Lyme disease testing employ western blotting. A western blot can also be used as a confirmatory test for Hepatitis B infection and HSV-2 (Herpes Type 2) infection. In veterinary medicine, a western blot is sometimes used to confirm FIV+ status in cats. Further applications of the western blot technique include its use by the World Anti-Doping Agency (WADA). Blood doping is the misuse of certain techniques and/or substances to increase one's red blood cell mass, which allows the body to transport more oxygen to muscles and therefore increase stamina and performance. There are three widely known substances or methods used for blood doping, namely, erythropoietin (EPO), synthetic oxygen carriers and blood transfusions. Each is prohibited under WADA's List of Prohibited Substances and Methods. The western blot technique was used during the 2014 FIFA World Cup in the anti-doping campaign for that event. In total, over 1000 samples were collected and analysed by Reichel, et al. in the WADA accredited Laboratory of Lausanne, Switzerland. Recent research utilizing the western blot technique showed an improved detection of EPO in blood and urine based on novel Velum SAR precast horizontal gels optimized for routine analysis. With the adoption of the horizontal SAR-PAGE in combination with the precast film-supported Velum SAR gels the discriminatory capacity of micro-dose application of rEPO was significantly enhanced. Identification of protein localization across cells For medication development, the identification of therapeutic targets, and biological research, it is essential to comprehend where proteins are located within a cell. The subcellular locations of proteins inside the cell and their functions are closely related. The relationship between protein function and localization suggests that when proteins move, their functions may change or acquire new characteristics. A protein's subcellular placement can be determined using a variety of methods. Numerous efficient and reliable computational tools and strategies have been created and used to identify protein subcellular localization. With the aid of subcellular fractionation methods, WB continues to be an important fundamental method for the investigation and comprehension of protein localization. Epitope mapping Due to their various epitopes, antibodies have gained interest in both basic and clinical research. The foundation of antibody characterization and validation is epitope mapping. The procedure of identifying an antibody's binding sites (epitopes) on the target protein is referred to as "epitope mapping." Finding the binding epitope of an antibody is essential for the discovery and creation of novel vaccines, diagnostics, and therapeutics. As a result, various methods for mapping antibody epitopes have been created. At this point, western blotting's specificity is the main feature that sets it apart from other epitope mapping techniques. 
Western blot has been applied to epitope mapping in several contexts, for example on human skin samples and on hemorrhagic disease virus. Procedure The western blot method is composed of gel electrophoresis to separate native proteins by 3-D structure or denatured proteins by the length of the polypeptide, followed by an electrophoretic transfer onto a membrane (mostly PVDF or nitrocellulose) and an immunostaining procedure to visualize a certain protein on the blot membrane. Sodium dodecyl sulfate–polyacrylamide gel electrophoresis (SDS-PAGE) is generally used for the denaturing electrophoretic separation of proteins. Sodium dodecyl sulfate (SDS) is generally used as a buffer (as well as in the gel) in order to give all proteins present a uniform negative charge, since proteins can be positively, negatively, or neutrally charged. Prior to electrophoresis, protein samples are often boiled to denature the proteins present. This ensures that proteins are separated based on size and prevents proteases (enzymes that break down proteins) from degrading samples. Following electrophoretic separation, the proteins are transferred to a membrane (typically nitrocellulose or PVDF). The membrane is often then stained with Ponceau S in order to visualize the proteins on the blot and ensure a proper transfer occurred. Next the proteins are blocked with milk (or other blocking agents) to prevent non-specific antibody binding, and then stained with antibodies specific to the target protein. Lastly, the membrane will be stained with a secondary antibody that recognizes the first antibody staining, which can then be used for detection by a variety of methods. The gel electrophoresis step is included in western blot analysis to resolve the issue of the cross-reactivity of antibodies. Sample preparation As a significant step in conducting a western blot, sample preparation has to be done effectively since the interpretation of this assay is influenced by the protein preparation, which is composed of protein extraction and purification processes. To achieve efficient protein extraction, a proper homogenization method needs to be chosen because it is responsible for bursting the cell membrane and releasing the intracellular components. In addition, a suitable lysis buffer is needed to acquire substantial amounts of target protein, because the buffer drives protein solubilization and prevents protein degradation. After completing the sample preparation, the protein content is ready to be separated by gel electrophoresis. Gel electrophoresis The proteins of the sample are separated using gel electrophoresis. Separation of proteins may be by isoelectric point (pI), molecular weight, electric charge, or a combination of these factors. The nature of the separation depends on the treatment of the sample and the nature of the gel. By far the most common type of gel electrophoresis employs polyacrylamide gels and buffers loaded with sodium dodecyl sulfate (SDS). SDS-PAGE (SDS-polyacrylamide gel electrophoresis) maintains polypeptides in a denatured state once they have been treated with strong reducing agents to remove secondary and tertiary structure (e.g. disulfide bonds [S-S] to sulfhydryl groups [SH and SH]) and thus allows separation of proteins by their molecular mass.
Sampled proteins become covered in the negatively charged SDS, effectively becoming anionic, and migrate towards the positively charged (higher voltage) anode (usually having a red wire) through the acrylamide mesh of the gel. Smaller proteins migrate faster through this mesh, and the proteins are thus separated according to size (usually measured in kilodaltons, kDa). The concentration of acrylamide determines the resolution of the gel – the greater the acrylamide concentration, the better the resolution of lower molecular weight proteins. The lower the acrylamide concentration, the better the resolution of higher molecular weight proteins. Proteins travel only in one dimension along the gel for most blots. Samples are loaded into wells in the gel. One lane is usually reserved for a marker or ladder, which is a commercially available mixture of proteins of known molecular weights, typically stained so as to form visible, coloured bands. When voltage is applied along the gel, proteins migrate through it at different speeds dependent on their size. These different rates of advancement (different electrophoretic mobilities) separate into bands within each lane. Protein bands can then be compared to the ladder bands, allowing estimation of the protein's molecular weight. It is also possible to use a two-dimensional gel which spreads the proteins from a single sample out in two dimensions. Proteins are separated according to isoelectric point (pH at which they have a neutral net charge) in the first dimension, and according to their molecular weight in the second dimension. Transfer To make the proteins accessible to antibody detection, they are moved from within the gel onto a membrane, a solid support, which is an essential part of the process. There are two types of membrane: nitrocellulose (NC) or polyvinylidene difluoride (PVDF). NC membrane has high affinity for protein and its retention abilities. However, NC is brittle, and does not allow the blot to be used for re-probing, whereas PVDF membrane allows the blot to be re-probed. The most commonly used method for transferring the proteins is called electroblotting. Electroblotting uses an electric current to pull the negatively charged proteins from the gel towards the positively charged anode, and into the PVDF or NC membrane. The proteins move from within the gel onto the membrane while maintaining the organization they had within the gel. An older method of transfer involves placing a membrane on top of the gel, and a stack of filter papers on top of that. The entire stack is placed in a buffer solution which moves up the paper by capillary action, bringing the proteins with it. In practice this method is not commonly used due to the lengthy procedure time. As a result of either transfer process, the proteins are exposed on a thin membrane layer for detection. Both varieties of membrane are chosen for their non-specific protein binding properties (i.e. binds all proteins equally well). Protein binding is based upon hydrophobic interactions, as well as charged interactions between the membrane and protein. Nitrocellulose membranes are cheaper than PVDF, but are far more fragile and cannot withstand repeated probings. Total protein staining Total protein staining allows the total protein that has been successfully transferred to the membrane to be visualised, allowing the user to check the uniformity of protein transfer and to perform subsequent normalization of the target protein with the actual protein amount per lane. 
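The molecular-weight estimate from a ladder described above is usually made by assuming that migration distance is roughly linear in log10(molecular weight) over the resolving range of the gel and interpolating. A minimal sketch follows; the ladder sizes and migration distances are made-up example numbers.

```python
# Estimate a band's molecular weight by log-linear interpolation against a ladder.
# The ladder sizes and distances below are made-up example numbers.
import numpy as np

ladder_kda = np.array([250, 150, 100, 75, 50, 37, 25, 20])       # marker band sizes
ladder_mm  = np.array([8.0, 14.0, 20.0, 25.0, 33.0, 40.0, 50.0, 55.0])  # migration distances

# Fit log10(MW) as a linear function of migration distance.
slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_kda), 1)

def estimate_kda(distance_mm):
    return 10 ** (slope * distance_mm + intercept)

print(round(estimate_kda(30.0), 1), "kDa (estimated size of a band at 30 mm)")
```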
Normalization with the so-called "loading control" was based on immunostaining of housekeeping proteins in the classical procedure, but is heading toward total protein staining recently, due to multiple benefits. At least seven different approaches for total protein staining have been described for western blot normalization: Ponceau S, stain-free techniques, Sypro Ruby, Epicocconone, Coomassie R-350, Amido Black, and Cy5. In order to avoid noise of signal, total protein staining should be performed before blocking of the membrane. Nevertheless, post-antibody stainings have been described as well. Blocking Since the membrane has been chosen for its ability to bind protein and as both antibodies and the target are proteins, steps must be taken to prevent the interactions between the membrane and the antibody used for detection of the target protein. Blocking of non-specific binding is achieved by placing the membrane in a dilute solution of protein – typically 3–5% bovine serum albumin (BSA) or non-fat dry milk (both are inexpensive) in tris-buffered saline (TBS) or I-Block, with a minute percentage (0.1%) of detergent such as Tween 20 or Triton X-100. Although non-fat dry milk is preferred due to its availability, an appropriate blocking solution is needed as not all proteins in milk are compatible with all the detection bands. The protein in the dilute solution attaches to the membrane in all places where the target proteins have not attached. Thus, when the antibody is added, it cannot bind to the membrane, and therefore the only available binding site is the specific target protein. This reduces background in the final product of the western blot, leading to clearer results, and eliminates false positives. Incubation During the detection process, the membrane is "probed" for the protein of interest with a modified antibody which is linked to a reporter enzyme; when exposed to an appropriate substrate, this enzyme drives a colorimetric reaction and produces a colour. For a variety of reasons, this traditionally takes place in a two-step process, although there are now one-step detection methods available for certain applications. Primary antibody The primary antibodies are generated when a host species or immune cell culture is exposed to the protein of interest (or a part thereof). Normally, this is part of the immune response, whereas here they are harvested and used as sensitive and specific detection tools that bind the protein directly. After blocking, a solution of primary antibody (generally between 0.5 and 5 micrograms/mL) diluted in either PBS or TBST wash buffer is incubated with the membrane under gentle agitation for typically an hour at room temperature, or overnight at 4°C. It can also be incubated at different temperatures, with lesser temperatures being associated with more binding, both specific (to the target protein, the "signal") and non-specific ("noise"). Following incubation, the membrane is washed several times in wash buffer to remove unbound primary antibody, and thereby minimize background. Typically, the wash buffer solution is composed of buffered saline solution with a small percentage of detergent, and sometimes with powdered milk or BSA. Secondary antibody After rinsing the membrane to remove unbound primary antibody, the membrane is exposed to another antibody known as the secondary antibody. Antibodies come from animal sources (or animal sourced hybridoma cultures). 
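The working concentrations quoted above (roughly 0.5–5 micrograms/mL of primary antibody) translate into simple dilution arithmetic once the stock concentration is known. A small helper is sketched below; the 1 mg/mL stock and the target values are hypothetical examples, not recommendations for any particular antibody.

```python
# Dilution arithmetic for preparing a primary-antibody working solution.
# Stock concentration and target values are hypothetical examples.
def dilution(stock_ug_per_ml, target_ug_per_ml, final_volume_ml):
    """Return (stock volume mL, diluent volume mL, dilution factor)."""
    factor = stock_ug_per_ml / target_ug_per_ml
    stock_volume = final_volume_ml / factor
    return stock_volume, final_volume_ml - stock_volume, factor

stock_vol, buffer_vol, factor = dilution(stock_ug_per_ml=1000.0,   # 1 mg/mL stock
                                         target_ug_per_ml=1.0,     # within the 0.5-5 ug/mL range
                                         final_volume_ml=10.0)
print(f"add {stock_vol * 1000:.0f} uL stock to {buffer_vol:.2f} mL buffer (1:{factor:.0f})")
```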
The secondary antibody recognises and binds to the species-specific portion of the primary antibody. Therefore, an anti-mouse secondary antibody will bind to almost any mouse-sourced primary antibody, and can be referred to as an 'anti-species' antibody (e.g. anti-mouse, anti-goat etc.). To allow detection of the target protein, the secondary antibody is commonly linked to biotin or a reporter enzyme such as alkaline phosphatase or horseradish peroxidase. This means that several secondary antibodies will bind to one primary antibody and enhance the signal, allowing the detection of proteins of a much lower concentration than would be visible by SDS-PAGE alone. Horseradish peroxidase is commonly linked to secondary antibodies to allow the detection of the target protein by chemiluminescence. The chemiluminescent substrate is cleaved by horseradish peroxidase, resulting in the production of luminescence. Therefore, the production of luminescence is proportional to the amount of horseradish peroxidase-conjugated secondary antibody, and therefore, indirectly measures the presence of the target protein. A sensitive sheet of photographic film is placed against the membrane, and exposure to the light from the reaction creates an image of the antibodies bound to the blot. A cheaper but less sensitive approach utilizes a 4-chloronaphthol stain with 1% hydrogen peroxide; the reaction of peroxide radicals with 4-chloronaphthol produces a dark purple stain that can be photographed without using specialized photographic film. As with the ELISPOT and ELISA procedures, the enzyme can be provided with a substrate molecule that will be converted by the enzyme to a coloured reaction product that will be visible on the membrane (see the figure below with blue bands). Another method of secondary antibody detection utilizes a near-infrared fluorophore-linked antibody. The light produced from the excitation of a fluorescent dye is static, making fluorescent detection a more precise and accurate measure of the difference in the signal produced by labeled antibodies bound to proteins on a western blot. Proteins can be accurately quantified because the signal generated by the different amounts of proteins on the membranes is measured in a static state, as compared to chemiluminescence, in which light is measured in a dynamic state. A third alternative is to use a radioactive label rather than an enzyme coupled to the secondary antibody, such as labeling an antibody-binding protein like Staphylococcus Protein A or Streptavidin with a radioactive isotope of iodine. Since other methods are safer, quicker, and cheaper, this method is now rarely used; however, an advantage of this approach is the sensitivity of auto-radiography-based imaging, which enables highly accurate protein quantification when combined with optical software (e.g. Optiquant). One step Historically, the probing process was performed in two steps because of the relative ease of producing primary and secondary antibodies in separate processes. This gives researchers and corporations huge advantages in terms of flexibility, reduction of cost, and adds an amplification step to the detection process. Given the advent of high-throughput protein analysis and lower limits of detection, however, there has been interest in developing one-step probing systems that would allow the process to occur faster and with fewer consumables. 
This requires a probe antibody which both recognizes the protein of interest and contains a detectable label, probes which are often available for known protein tags. The primary probe is incubated with the membrane in a manner similar to that for the primary antibody in a two-step process, and then is ready for direct detection after a series of wash steps. Detection and visualization After the unbound probes are washed away, the western blot is ready for detection of the probes that are labeled and bound to the protein of interest. In practical terms, not all westerns reveal protein only at one band in a membrane. Size approximations are taken by comparing the stained bands to those of the marker or ladder loaded during electrophoresis. The process is commonly repeated for a structural protein, such as actin or tubulin, that should not change between samples. The amount of target protein is normalized to the structural protein to control between groups. A superior strategy is the normalization to the total protein visualized with trichloroethanol or epicocconone. This practice ensures correction for the amount of total protein on the membrane in case of errors or incomplete transfers. (see western blot normalization) Colorimetric detection The colorimetric detection method depends on incubation of the western blot with a substrate that reacts with the reporter enzyme (such as peroxidase) that is bound to the secondary antibody. This converts the soluble dye into an insoluble form of a different colour that precipitates next to the enzyme and thereby stains the membrane. Development of the blot is then stopped by washing away the soluble dye. Protein levels are evaluated through densitometry (how intense the stain is) or spectrophotometry. Chemiluminescent detection Chemiluminescent detection methods depend on incubation of the western blot with a substrate that will luminesce when exposed to the reporter on the secondary antibody. The light is then detected by CCD cameras which capture a digital image of the western blot or photographic film. The use of film for western blot detection is slowly disappearing because of the non-linearity of the image, which makes accurate quantification difficult. The image is analysed by densitometry, which evaluates the relative amount of protein staining and quantifies the results in terms of optical density. Newer software allows further data analysis such as molecular weight analysis if appropriate standards are used. Radioactive detection Radioactive labels do not require enzyme substrates, but rather, allow the placement of medical X-ray film directly against the western blot, which develops as it is exposed to the label and creates dark regions which correspond to the protein bands of interest (see image above). The use of radioactive detection methods is declining because they are expensive and carry high health and safety risks from hazardous radiation, and because ECL (enhanced chemiluminescence) provides a useful alternative. Fluorescent detection The fluorescently labeled probe is excited by light and the emission of the excitation is then detected by a photosensor such as a CCD camera equipped with appropriate emission filters which captures a digital image of the western blot and allows further data analysis such as molecular weight analysis and a quantitative western blot analysis. Fluorescence is considered to be one of the best methods for quantification but is less sensitive than chemiluminescence.
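The normalization step described above (dividing a target band's densitometric signal by a loading control, or by the total protein measured in that lane) is simple arithmetic once band intensities have been read out. The sketch below shows the calculation on invented example values from a hypothetical densitometry measurement.

```python
# Normalize target-band intensities to a loading control and express as fold change.
# Intensity values are invented example numbers from a hypothetical densitometry readout.
lanes = {
    "control_1": {"target": 1200.0, "loading": 5000.0},
    "control_2": {"target": 1100.0, "loading": 4800.0},
    "treated_1": {"target": 2600.0, "loading": 5100.0},
    "treated_2": {"target": 2400.0, "loading": 4700.0},
}

normalized = {name: v["target"] / v["loading"] for name, v in lanes.items()}
control_mean = (normalized["control_1"] + normalized["control_2"]) / 2

for name, value in normalized.items():
    print(f"{name}: normalized {value:.3f}, fold change {value / control_mean:.2f}")
```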
Secondary probing One major difference between nitrocellulose and PVDF membranes relates to the ability of each to support "stripping" antibodies off and reusing the membrane for subsequent antibody probes. While there are well-established protocols available for stripping nitrocellulose membranes, the sturdier PVDF allows for easier stripping, and for more reuse before background noise limits experiments. Another difference is that, unlike nitrocellulose, PVDF must be soaked in 95% ethanol, isopropanol or methanol before use. PVDF membranes also tend to be thicker and more resistant to damage during use. Minimum requirement specification for Western Blot In order to ensure that the results of Western blots are reproducible, it is important to report the various parameters mentioned above, including specimen preparation, the concentration of protein used for loading, the percentage of gel and running condition, various transfer methods, attempting to block conditions, the concentration of antibodies, and identification and quantitative determination methods. Many of the articles that have been published don't cover all of these variables. Hence, it is crucial to describe different experimental circumstances or parameters in order to increase the repeatability and precision of WB. To increase WB repeatability, a minimum reporting criteria is thus required. 2-D gel electrophoresis Two-dimensional SDS-PAGE uses the principles and techniques outlined above. 2-D SDS-PAGE, as the name suggests, involves the migration of polypeptides in 2 dimensions. For example, in the first dimension, polypeptides are separated according to isoelectric point, while in the second dimension, polypeptides are separated according to their molecular weight. The isoelectric point of a given protein is determined by the relative number of positively (e.g. lysine, arginine) and negatively (e.g. glutamate, aspartate) charged amino acids, with negatively charged amino acids contributing to a low isoelectric point and positively charged amino acids contributing to a high isoelectric point. Samples could also be separated first under nonreducing conditions using SDS-PAGE, and under reducing conditions in the second dimension, which breaks apart disulfide bonds that hold subunits together. SDS-PAGE might also be coupled with urea-PAGE for a 2-dimensional gel. In principle, this method allows for the separation of all cellular proteins on a single large gel. A major advantage of this method is that it often distinguishes between different isoforms of a particular protein – e.g. a protein that has been phosphorylated (by addition of a negatively charged group). Proteins that have been separated can be cut out of the gel and then analysed by mass spectrometry, which identifies their molecular weight. Problems Detection problems There may be a weak or absent signal in the band for a number of reasons related to the amount of antibody and antigen used. This problem might be resolved by using the ideal antigen and antibody concentrations and dilutions specified in the supplier's data sheet. Increasing the exposition period in the detection system's software can address weak bands caused by lower sample and antibody concentrations. Multiple band problems When the protein is broken down by proteases, several bands other than predicted bands of low molecular weight might appear. The development of numerous bands can be prevented by properly preparing protein samples with enough protease inhibitors. 
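The dependence of the isoelectric point on charged residues discussed above can be illustrated with a standard Henderson–Hasselbalch net-charge calculation: sum the expected charge of each ionizable group at a given pH, then bisect for the pH at which the net charge crosses zero. The pKa values below are common textbook approximations, the sequence is a hypothetical example, and the result is only a rough estimate of a real protein's pI.

```python
# Rough isoelectric-point estimate from amino-acid composition (textbook pKa approximations).
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0, "nterm": 9.0}            # protonated forms carry +1
PKA_NEG = {"D": 3.7, "E": 4.2, "C": 8.3, "Y": 10.1, "cterm": 3.1}   # deprotonated forms carry -1

def net_charge(sequence, ph):
    charge = 1.0 / (1.0 + 10 ** (ph - PKA_POS["nterm"]))       # N-terminus
    charge -= 1.0 / (1.0 + 10 ** (PKA_NEG["cterm"] - ph))      # C-terminus
    for aa in sequence:
        if aa in PKA_POS:
            charge += 1.0 / (1.0 + 10 ** (ph - PKA_POS[aa]))
        elif aa in PKA_NEG:
            charge -= 1.0 / (1.0 + 10 ** (PKA_NEG[aa] - ph))
    return charge

def estimate_pi(sequence, lo=0.0, hi=14.0, steps=60):
    for _ in range(steps):            # bisection: net charge decreases monotonically with pH
        mid = (lo + hi) / 2
        if net_charge(sequence, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(estimate_pi("MKWVTFISLLLLFSSAYSRGV"), 2))   # hypothetical example peptide
```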
Multiple bands might show up in the high molecular weight region because some proteins form dimers, trimers, and multimers; this issue might be solved by heating the sample for longer periods of time. Proteins with post-translational modifications (PTMs) or numerous isoforms cause several bands to appear at various molecular weight areas. PTMs can be removed from a specimen using specific chemicals, which also removes the extra bands. High background Strong antibody concentrations, inadequate blocking, inadequate washing, and excessive exposure time during imaging can result in a high background in the blots. A high background in the blots can be avoided by fixing these issues. Irregular and uneven bands A variety of irregular and uneven bands have been reported, including black dots, white spots or bands, and curving bands. The black dots are removed from the blots by effective blocking. White patches develop as a result of bubbles between the membrane and gel. White bands appear in the blots when primary and secondary antibodies are present in excessively high concentrations. Because of the high voltage used during the gel run and the rapid protein migration, smiley bands appear in the blots. Addressing these underlying issues resolves the irregular bands. Mitigations Several problems can arise at the different steps of the western blotting procedure. Those problems could originate from a protein analysis step such as the detection of low-abundance or post-translationally modified proteins. Additionally, they can be based on the selection of antibodies, since the quality of the antibodies plays a significant role in the specific detection of proteins. Because of such problems, a variety of improvements to cell lysate preparation and blotting procedures are being developed to build up reliable results. Moreover, to achieve more sensitive analysis and overcome the problems associated with western blotting, several different techniques have been developed and utilized, such as far-western blotting, diffusion blotting, single-cell resolution western blotting, and automated microfluidic western blotting. Presentation Researchers use different software to process and align image-sections for elegant presentation of western blot results. Popular tools include Sciugo, Microsoft PowerPoint, Adobe Illustrator and GIMP. See also Eastern blot Far-eastern blot Far-western blot Fast parallel proteolysis Northwestern blot References External links Archived at Ghostarchive and the Wayback Machine: Diagnostic virology Protein methods Laboratory techniques Molecular biology techniques
Western blot
[ "Chemistry", "Biology" ]
6,170
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Molecular biology techniques", "nan", "Molecular biology" ]
174,055
https://en.wikipedia.org/wiki/Skew-symmetric%20matrix
In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition In terms of the entries of the matrix, if denotes the entry in the -th row and -th column, then the skew-symmetric condition is equivalent to Example The matrix is skew-symmetric because Properties Throughout, we assume that all matrix entries belong to a field whose characteristic is not equal to 2. That is, we assume that , where 1 denotes the multiplicative identity and 0 the additive identity of the given field. If the characteristic of the field is 2, then a skew-symmetric matrix is the same thing as a symmetric matrix. The sum of two skew-symmetric matrices is skew-symmetric. A scalar multiple of a skew-symmetric matrix is skew-symmetric. The elements on the diagonal of a skew-symmetric matrix are zero, and therefore its trace equals zero. If is a real skew-symmetric matrix and is a real eigenvalue, then , i.e. the nonzero eigenvalues of a skew-symmetric matrix are non-real. If is a real skew-symmetric matrix, then is invertible, where is the identity matrix. If is a skew-symmetric matrix then is a symmetric negative semi-definite matrix. Vector space structure As a result of the first two properties above, the set of all skew-symmetric matrices of a fixed size forms a vector space. The space of skew-symmetric matrices has dimension Let denote the space of matrices. A skew-symmetric matrix is determined by scalars (the number of entries above the main diagonal); a symmetric matrix is determined by scalars (the number of entries on or above the main diagonal). Let denote the space of skew-symmetric matrices and denote the space of symmetric matrices. If then Notice that and This is true for every square matrix with entries from any field whose characteristic is different from 2. Then, since and where denotes the direct sum. Denote by the standard inner product on The real matrix is skew-symmetric if and only if This is also equivalent to for all (one implication being obvious, the other a plain consequence of for all and ). Since this definition is independent of the choice of basis, skew-symmetry is a property that depends only on the linear operator and a choice of inner product. skew symmetric matrices can be used to represent cross products as matrix multiplications. Furthermore, if is a skew-symmetric (or skew-Hermitian) matrix, then for all . Determinant Let be a skew-symmetric matrix. The determinant of satisfies In particular, if is odd, and since the underlying field is not of characteristic 2, the determinant vanishes. Hence, all odd dimension skew symmetric matrices are singular as their determinants are always zero. This result is called Jacobi’s theorem, after Carl Gustav Jacobi (Eves, 1980). The even-dimensional case is more interesting. It turns out that the determinant of for even can be written as the square of a polynomial in the entries of , which was first proved by Cayley: This polynomial is called the Pfaffian of and is denoted . Thus the determinant of a real skew-symmetric matrix is always non-negative. 
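A quick numerical illustration of two facts from the passage above: the decomposition of any square matrix into symmetric and skew-symmetric parts, and Cayley's result that the determinant of an even-dimensional skew-symmetric matrix is the square of its Pfaffian (for the 4 × 4 case, Pf(A) = a12·a34 − a13·a24 + a14·a23). This is a NumPy sketch; the random matrix is arbitrary.

```python
# Numerical sketch: symmetric/skew-symmetric decomposition, and det = Pf^2 for a 4x4 case.
import numpy as np

rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))

sym = (m + m.T) / 2            # symmetric part
skew = (m - m.T) / 2           # skew-symmetric part
assert np.allclose(sym + skew, m)
assert np.allclose(skew.T, -skew)
assert np.allclose(np.diag(skew), 0.0)        # zero diagonal, hence zero trace

# Pfaffian of a 4x4 skew-symmetric matrix: a12*a34 - a13*a24 + a14*a23 (0-based indices below).
a = skew
pf = a[0, 1] * a[2, 3] - a[0, 2] * a[1, 3] + a[0, 3] * a[1, 2]
print(np.isclose(np.linalg.det(a), pf ** 2))  # True: the determinant is a perfect square, >= 0

# Dimension count: n*(n-1)/2 independent entries above the main diagonal.
n = 4
print(n * (n - 1) // 2, "independent entries for n =", n)
```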
However this last fact can be proved in an elementary way as follows: the eigenvalues of a real skew-symmetric matrix are purely imaginary (see below) and to every eigenvalue there corresponds the conjugate eigenvalue with the same multiplicity; therefore, as the determinant is the product of the eigenvalues, each one repeated according to its multiplicity, it follows at once that the determinant, if it is not 0, is a positive real number. The number of distinct terms in the expansion of the determinant of a skew-symmetric matrix of order was considered already by Cayley, Sylvester, and Pfaff. Due to cancellations, this number is quite small as compared the number of terms of the determinant of a generic matrix of order , which is . The sequence is 1, 0, 1, 0, 6, 0, 120, 0, 5250, 0, 395010, 0, … and it is encoded in the exponential generating function The latter yields to the asymptotics (for even) The number of positive and negative terms are approximatively a half of the total, although their difference takes larger and larger positive and negative values as increases . Cross product Three-by-three skew-symmetric matrices can be used to represent cross products as matrix multiplications. Consider vectors and Then, defining the matrix the cross product can be written as This can be immediately verified by computing both sides of the previous equation and comparing each corresponding element of the results. One actually has i.e., the commutator of skew-symmetric three-by-three matrices can be identified with the cross-product of three-vectors. Since the skew-symmetric three-by-three matrices are the Lie algebra of the rotation group this elucidates the relation between three-space , the cross product and three-dimensional rotations. More on infinitesimal rotations can be found below. Spectral theory Since a matrix is similar to its own transpose, they must have the same eigenvalues. It follows that the eigenvalues of a skew-symmetric matrix always come in pairs ±λ (except in the odd-dimensional case where there is an additional unpaired 0 eigenvalue). From the spectral theorem, for a real skew-symmetric matrix the nonzero eigenvalues are all pure imaginary and thus are of the form where each of the are real. Real skew-symmetric matrices are normal matrices (they commute with their adjoints) and are thus subject to the spectral theorem, which states that any real skew-symmetric matrix can be diagonalized by a unitary matrix. Since the eigenvalues of a real skew-symmetric matrix are imaginary, it is not possible to diagonalize one by a real matrix. However, it is possible to bring every skew-symmetric matrix to a block diagonal form by a special orthogonal transformation. Specifically, every real skew-symmetric matrix can be written in the form where is orthogonal and for real positive-definite . The nonzero eigenvalues of this matrix are ±λk i. In the odd-dimensional case Σ always has at least one row and column of zeros. More generally, every complex skew-symmetric matrix can be written in the form where is unitary and has the block-diagonal form given above with still real positive-definite. This is an example of the Youla decomposition of a complex square matrix. 
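The cross-product correspondence and the spectral facts above are easy to verify numerically: build the 3 × 3 "hat" matrix of a vector, check that multiplying by it reproduces the cross product, and confirm that the eigenvalues of a real skew-symmetric matrix are purely imaginary and come in ± pairs. A NumPy sketch with arbitrary example vectors:

```python
# Verify the cross-product matrix and the purely imaginary eigenvalue pairs.
import numpy as np

def hat(v):
    """3x3 skew-symmetric matrix [v]_x such that hat(v) @ b == np.cross(v, b)."""
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])
print(np.allclose(hat(a) @ b, np.cross(a, b)))   # True

eigvals = np.linalg.eigvals(hat(a))
print(np.allclose(eigvals.real, 0.0))            # all eigenvalues are purely imaginary
# The nonzero pair is +/- i*|a|, and the odd dimension forces one zero eigenvalue.
print(sorted(np.round(eigvals.imag, 6)), "vs |a| =", round(np.linalg.norm(a), 6))
```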
Skew-symmetric and alternating forms A skew-symmetric form on a vector space over a field of arbitrary characteristic is defined to be a bilinear form such that for all in This defines a form with desirable properties for vector spaces over fields of characteristic not equal to 2, but in a vector space over a field of characteristic 2, the definition is equivalent to that of a symmetric form, as every element is its own additive inverse. Where the vector space is over a field of arbitrary characteristic including characteristic 2, we may define an alternating form as a bilinear form such that for all vectors in This is equivalent to a skew-symmetric form when the field is not of characteristic 2, as seen from whence A bilinear form will be represented by a matrix such that , once a basis of is chosen, and conversely an matrix on gives rise to a form sending to For each of symmetric, skew-symmetric and alternating forms, the representing matrices are symmetric, skew-symmetric and alternating respectively. Infinitesimal rotations Coordinate-free More intrinsically (i.e., without using coordinates), skew-symmetric linear transformations on a vector space with an inner product may be defined as the bivectors on the space, which are sums of simple bivectors (2-blades) The correspondence is given by the map where is the covector dual to the vector ; in orthonormal coordinates these are exactly the elementary skew-symmetric matrices. This characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation or "curl", hence the name. Skew-symmetrizable matrix An matrix is said to be skew-symmetrizable if there exists an invertible diagonal matrix such that is skew-symmetric. For real matrices, sometimes the condition for to have positive entries is added. See also Cayley transform Symmetric matrix Skew-Hermitian matrix Symplectic matrix Symmetry in mathematics References Further reading External links Fortran Fortran90 Matrices
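The "infinitesimal rotation" interpretation described above can be made concrete: exponentiating a 3 × 3 skew-symmetric matrix yields a rotation matrix (orthogonal with determinant +1), with rotation angle equal to the norm of the generating vector. A SciPy-based sketch; the axis and angle are arbitrary example values.

```python
# Exponentiating a skew-symmetric matrix gives a rotation (sketch with arbitrary values).
import numpy as np
from scipy.linalg import expm

def hat(v):
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

axis = np.array([0.0, 0.0, 1.0])   # rotate about the z-axis
angle = np.pi / 3                  # 60 degrees
R = expm(hat(axis * angle))

print(np.allclose(R.T @ R, np.eye(3)))        # orthogonal
print(np.isclose(np.linalg.det(R), 1.0))      # determinant +1 (proper rotation)
print(np.allclose(R[:2, :2],                  # matches the familiar 2D rotation block
      [[np.cos(angle), -np.sin(angle)],
       [np.sin(angle),  np.cos(angle)]]))
```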
Skew-symmetric matrix
[ "Mathematics" ]
1,866
[ "Matrices (mathematics)", "Mathematical objects" ]
174,060
https://en.wikipedia.org/wiki/%2829075%29%201950%20DA
(provisional designation ) is a risk-listed asteroid, classified as a near-Earth object and potentially hazardous asteroid of the Apollo group, approximately in diameter. It once had the highest known probability of impacting Earth. In 2002, it had the highest Palermo rating with a value of 0.17 and a probability of 1 in 306 (0.33%) for a possible collision in 2880. Since that time, the estimated risk has been updated several times. In December 2015, the odds of an Earth impact were revised to 1 in 8,300 (0.012%) with a Palermo rating of −1.42. , it is listed on the Sentry Risk Table with the highest cumulative Palermo rating of −0.93. is not assigned a Torino scale rating, because the 2880 date is over 100 years in the future. As of 5 January 2025, the odds of an Earth impact are 1 in 2,600 (0.038%). Discovery and nomenclature was first discovered on 23 February 1950 by Carl A. Wirtanen at Lick Observatory. It was observed for seventeen days and then lost because this short observation arc resulted in large uncertainties in Wirtanen's orbital solution. On 31 December 2000, it was recovered at Lowell Observatory and was announced as on 4 January 2001. Just two hours later it was recognized as . Observations On 5 March 2001, made a close approach to Earth at a distance of . It was studied by radar at the Goldstone and Arecibo observatories from March 3 to 7, 2001. The studies showed that the asteroid has a mean diameter of 1.1 km, assuming that is a retrograde rotator. Optical lightcurve analysis by Lenka Šarounová and Petr Pravec shows that its rotation period is hours. Due to its short rotation period and high radar albedo, is thought to be fairly dense (more than 3.5 g/cm3, assuming that it has no internal strength) and likely composed of nickel–iron. In August 2014, scientists from the University of Tennessee determined that is a rubble pile rotating faster than the breakup limit for its density, implying the asteroid is held together by van der Waals forces rather than gravity. made distant approaches to Earth on 20 May 2012, 5 February 2021 and 5 February 2023. However, at these times it was a quarter to half an AU away from Earth, preventing more useful astrometrics and timing that occurs when an object is closer to Earth. The next close approach that presents a good opportunity to observe the asteroid will be on 2 March 2032, when it will be from Earth. The following table lists the approaches closer than 0.1 AU until the year 2500. By 2136 the close approach solutions are becoming notably more divergent. Possible Earth impact has one of the best-determined asteroid orbital solutions. This is due to a combination of: an orbit moderately inclined (12 degrees) to the ecliptic plane (reducing in-plane perturbations); high-precision radar astrometry, which provides its distance and is complementary to the measurements of angular positions; a 74-year observation arc; an uncertainty region controlled by resonance. Main-belt asteroid 78 Diana (~125 km in diameter) will pass about from on 5 August 2150. At that distance and size, Diana will perturb enough so that the change in trajectory is notable by 2880 (730 years later). In addition, over the intervening time, 's rotation will cause its orbit to slightly change as a result of the Yarkovsky effect. If continues on its present orbit, it may approach Earth on 16 March 2880, though the mean trajectory passes many millions of kilometres from Earth, so does not have a significant chance of impacting Earth. 
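The "breakup limit" argument above has a simple idealised form: for a spherical, cohesionless (strengthless) rubble pile, material at the equator stays bound only if the spin period exceeds P_crit = sqrt(3π/(Gρ)), roughly 3.3 hours divided by the square root of the density in g/cm³. The sketch below uses that idealisation only; the published density and cohesion constraints for 1950 DA come from detailed radar shape modelling and thermal analysis, not from this formula.

```python
# Idealised cohesionless spin barrier: P_crit = sqrt(3*pi / (G * rho)).
# Illustrative only -- real constraints on (29075) 1950 DA use detailed shape models.
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hours(density_g_per_cc):
    rho = density_g_per_cc * 1000.0            # convert g/cm^3 to kg/m^3
    return math.sqrt(3.0 * math.pi / (G * rho)) / 3600.0

for rho in (1.0, 2.0, 3.5):
    print(f"rho = {rho} g/cc -> critical period ~ {critical_period_hours(rho):.2f} h")
# A strengthless body spinning faster than this limit would shed material, so a
# fast-spinning, intact asteroid must be either sufficiently dense or cohesive.
```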
According to the latest solution, dated 5 January 2025, the probability of an impact in 2880 is 1 in 2,600 (0.038%). The energy released by a collision with an object of this size would cause major effects on the climate and biosphere, which would be devastating to human civilization. The discovery of the potential impact heightened interest in asteroid deflection strategies. See also Asteroid impact prediction Earth-grazing fireball List of asteroid close approaches to Earth Notes References External links MPEC 2001-A26 : 1950 DA = 2000 YK66 (K00Y66K). MPC 4 January 2001 3D model Rotating model of the asteroid (preferred rotation model is retrograde, NeoDys) Asteroid Lightcurve Database (LCDB), query form (info ) Asteroids and comets rotation curves, CdR Observatoire de Genève, Raoul Behrend Discovery Circumstances: Numbered Minor Planets (25001)-(30000) Minor Planet Center 029075 Discoveries by Carl A. Wirtanen 029075 029075 19500222 Recovered astronomical objects
(29075) 1950 DA
[ "Astronomy" ]
999
[ "Recovered astronomical objects", "Astronomical objects" ]
174,061
https://en.wikipedia.org/wiki/Herbert%20Freeman
Dr. Herbert Freeman (born Herbert Freimann, December 13, 1925 – November 15, 2020) was an American computer scientist who made important contributions to the fields of automatic label placement, computer graphics (including spatial anti-aliasing), and machine vision. Personal life Herbert Freeman was born Herbert Freimann in Frankfurt, Germany on December 13, 1925. Freeman's parents, Leo and Johanna, and his brother, Henry, emigrated to the United States in 1936. Herbert was diagnosed with tuberculosis, and was unable to join his family in the United States until 1938. He received his B.S.E.E. degree from Union College, New York, and his Master's and Eng.Sc.D. degrees from Columbia University, New York. He married Joan Sleppin in 1955 and they had three children, Nancy, Susan, and Robert. Freeman died on November 15, 2020, in his home in New Jersey, USA. Career in Computer Science Freeman held many professorial posts, including at RPI (Rensselaer Polytechnic Institute), NYU, and Rutgers University. Freeman was the recipient of several awards, including the IEEE Computer Society's Computer Pioneer award (1999). Freeman was also a Fellow of the ACM, a Life Fellow of the IEEE, and a Guggenheim Fellow. Professor Freeman also founded MapText, Inc., in 1997. See also Dr. Freeman's homepage at Rutgers University Dr. Freeman's White Paper on Automated Cartographic Text Placement Guide to the Herbert Freeman Family Collection, Leo Baeck Institute, New York, New York. Freeman's memoir Cobblestones. References Computer vision researchers 1997 fellows of the Association for Computing Machinery Fellows of the IEEE 2020 deaths Polytechnic Institute of New York University faculty Fellows of the International Association for Pattern Recognition 1925 births
Herbert Freeman
[ "Technology" ]
367
[ "Computing stubs", "Computer specialist stubs" ]
174,069
https://en.wikipedia.org/wiki/Asteroid%20impact%20avoidance
Asteroid impact avoidance encompasses the methods by which near-Earth objects (NEO) on a potential collision course with Earth could be diverted away, preventing destructive impact events. An impact by a sufficiently large asteroid or other NEOs would cause, depending on its impact location, massive tsunamis or multiple firestorms, and an impact winter caused by the sunlight-blocking effect of large quantities of pulverized rock dust and other debris placed into the stratosphere. A collision 66 million years ago between the Earth and an object approximately wide is thought to have produced the Chicxulub crater and triggered the Cretaceous–Paleogene extinction event that is understood by the scientific community to have caused the extinction of all non-avian dinosaurs. While the chances of a major collision are low in the near term, it is a near-certainty that one will happen eventually unless defensive measures are taken. Astronomical events—such as the Shoemaker-Levy 9 impacts on Jupiter and the 2013 Chelyabinsk meteor, along with the growing number of near-Earth objects discovered and catalogued on the Sentry Risk Table—have drawn renewed attention to such threats. The popularity of the 2021 movie Don't Look Up helped to raise awareness of the possibility of avoiding NEOs. In 2016, a NASA scientist warned that the Earth is unprepared for such an event. In April 2018, the B612 Foundation reported "It's 100 percent certain we'll be hit by a devastating asteroid, but we're not 100 percent sure when." Also in 2018, physicist Stephen Hawking, in his final book, Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. Several ways of avoiding an asteroid impact have been described. Nonetheless, in March 2019, scientists reported that asteroids may be much more difficult to destroy than thought earlier. An asteroid may reassemble itself due to gravity after being disrupted. In May 2021, NASA astronomers reported that 5 to 10 years of preparation may be needed to avoid a virtual impactor based on a simulated exercise conducted by the 2021 Planetary Defense Conference. In 2022, NASA spacecraft DART impacted Dimorphos, reducing the minor-planet moon's orbital period by 32 minutes. This mission constitutes the first successful attempt at asteroid deflection. In 2025, CNSA plans to launch another deflection mission to near-Earth object 2019 VL5, a 30-meter-wide (100 ft.) asteroid, which will include both an impactor and observer spacecraft. Deflection efforts According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. In June 2018, the US National Science and Technology Council warned that the United States was unprepared for an asteroid impact event, and developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. Most deflection efforts for a large object require from a year to decades of warning, allowing time to prepare and carry out a collision avoidance project, as no known planetary defense hardware has yet been developed. It has been estimated that a velocity change of just (where t is the number of years until potential impact) is needed to successfully deflect a body on a direct collision trajectory. Thus for a large number of years before impact, much smaller velocity changes are needed. 
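The statement above that required velocity changes shrink with warning time can be illustrated with a standard rule of thumb from orbital mechanics: an along-track velocity change Δv applied t years before the encounter shifts the body along its orbit by roughly 3·Δv·t, the factor 3 coming from the secular drift caused by the changed orbital period. The sketch below is only that rule of thumb for a near-circular orbit; real deflection design uses full trajectory propagation and targets the specific encounter geometry.

```python
# Rule-of-thumb along-track drift from a small velocity change applied years in advance.
# Illustrative only; real mission design propagates the full trajectory.
SECONDS_PER_YEAR = 3.156e7
EARTH_RADIUS_KM = 6371.0

def along_track_shift_km(delta_v_m_per_s, lead_time_years):
    """Approximate along-track displacement ~ 3 * dv * t for a near-circular orbit."""
    return 3.0 * delta_v_m_per_s * lead_time_years * SECONDS_PER_YEAR / 1000.0

for dv, years in [(1e-3, 10), (1e-3, 30), (1e-6, 25)]:
    shift = along_track_shift_km(dv, years)
    print(f"dv = {dv:g} m/s, {years} yr lead -> ~{shift:,.0f} km "
          f"({shift / EARTH_RADIUS_KM:.2f} Earth radii)")
# The last case shows why a tiny nudge can suffice when the goal is only to miss a
# narrow gravitational keyhole, as in the Apophis example discussed below.
```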
For example, it was estimated there was a high chance of 99942 Apophis swinging by Earth in 2029 with a 10−4 probability of returning on an impact trajectory in 2035 or 2036. It was then determined that a deflection from this potential return trajectory, several years before the swing-by, could be achieved with a velocity change on the order of 10−6 m/s. NASA's Double Asteroid Redirection Test (DART), the world's first full-scale mission to test technology for defending Earth against potential asteroid or comet hazards, launched on a SpaceX Falcon 9 rocket from Space Launch Complex 4 East at Vandenberg Space Force Base in California. An impact by a asteroid on the Earth has historically caused an extinction-level event due to catastrophic damage to the biosphere. There is also the threat from comets entering the inner Solar System. The impact speed of a long-period comet would likely be several times greater than that of a near-Earth asteroid, making its impact much more destructive; in addition, the warning time is unlikely to be more than a few months. Impacts from objects as small as in diameter, which are far more common, are historically extremely destructive regionally (see Barringer crater). Finding out the material composition of the object is also helpful before deciding which strategy is appropriate. Missions like the 2005 Deep Impact probe and the Rosetta spacecraft, have provided valuable information on what to expect. In October 2022, a method of mapping the insides of a potentially problematic asteroid in order to determine the best area for impact was proposed. History of US government mandates Efforts in asteroid impact prediction have concentrated on the survey method. The 1992 NASA-sponsored Near-Earth-Object Interception Workshop hosted by Los Alamos National Laboratory evaluated issues involved in intercepting celestial objects that could hit Earth. In a 1992 report to NASA, a coordinated Spaceguard Survey was recommended to discover, verify and provide follow-up observations for Earth-crossing asteroids. This survey was expected to discover 90% of these objects larger than one kilometer within 25 years. Three years later, another NASA report recommended search surveys that would discover 60–70% of short-period, near-Earth objects larger than one kilometer within ten years and obtain 90% completeness within five more years. In 1998, NASA formally embraced the goal of finding and cataloging, by 2008, 90% of all near-Earth objects (NEOs) with diameters of 1 km or larger that could represent a collision risk to Earth. The 1 km diameter metric was chosen after considerable study indicated that an impact of an object smaller than 1 km could cause significant local or regional damage but is unlikely to cause a worldwide catastrophe. The impact of an object much larger than 1 km diameter could well result in worldwide damage up to, and potentially including, extinction of the human species. The NASA commitment has resulted in the funding of a number of NEO search efforts, which made considerable progress toward the 90% goal by 2008. However the 2009 discovery of several NEOs approximately 2 to 3 kilometers in diameter (e.g. , , , and ) demonstrated there were still large objects to be detected. United States Representative George E. Brown Jr. 
(D-CA) was quoted as voicing his support for planetary defense projects in Air & Space Power Chronicles, saying "If some day in the future we discover well in advance that an asteroid that is big enough to cause a mass extinction is going to hit the Earth, and then we alter the course of that asteroid so that it does not hit us, it will be one of the most important accomplishments in all of human history." Because of Congressman Brown's long-standing commitment to planetary defense, a U.S. House of Representatives' bill, H.R. 1022, was named in his honor: The George E. Brown, Jr. Near-Earth Object Survey Act. This bill "to provide for a Near-Earth Object Survey program to detect, track, catalogue, and characterize certain near-Earth asteroids and comets" was introduced in March 2005 by Rep. Dana Rohrabacher (R-CA). It was eventually rolled into S.1281, the NASA Authorization Act of 2005, passed by Congress on December 22, 2005, subsequently signed by the President, and stating in part: The result of this directive was a report presented to Congress in early March 2007. This was an Analysis of Alternatives (AoA) study led by NASA's Program Analysis and Evaluation (PA&E) office with support from outside consultants, the Aerospace Corporation, NASA Langley Research Center (LaRC), and SAIC (amongst others). See also Improving impact prediction. Ongoing projects The Minor Planet Center in Cambridge, Massachusetts has been cataloging the orbits of asteroids and comets since 1947. It has recently been joined by surveys that specialize in locating the near-Earth objects (NEO), many (as of early 2007) funded by NASA's Near Earth Object program office as part of their Spaceguard program. One of the best-known is LINEAR that began in 1996. By 2004 LINEAR was discovering tens of thousands of objects each year and accounting for 65% of all new asteroid detections. LINEAR uses two one-meter telescopes and one half-meter telescope based in New Mexico. The Catalina Sky Survey (CSS) is conducted at the Steward Observatory's Catalina Station, located near Tucson, Arizona, in the United States. It uses two telescopes, a f/2 telescope on the peak of Mount Lemmon, and a f/1.7 Schmidt telescope near Mount Bigelow (both in the Tucson, Arizona area). In 2005, CSS became the most prolific NEO survey surpassing Lincoln Near-Earth Asteroid Research (LINEAR) in total number of NEOs and potentially hazardous asteroids discovered each year since. CSS discovered 310 NEOs in 2005, 396 in 2006, 466 in 2007, and in 2008 564 NEOs were found. Spacewatch, which uses a telescope sited at the Kitt Peak Observatory in Arizona, updated with automatic pointing, imaging, and analysis equipment to search the skies for intruders, was set up in 1980 by Tom Gehrels and Robert S. McMillan of the Lunar and Planetary Laboratory of the University of Arizona in Tucson, and is now being operated by McMillan. The Spacewatch project has acquired a telescope, also at Kitt Peak, to hunt for NEOs, and has provided the old 90-centimeter telescope with an improved electronic imaging system with much greater resolution, improving its search capability. Other near-Earth object tracking programs include Near-Earth Asteroid Tracking (NEAT), Lowell Observatory Near-Earth-Object Search (LONEOS), Campo Imperatore Near-Earth Object Survey (CINEOS), Japanese Spaceguard Association, and Asiago-DLR Asteroid Survey. Pan-STARRS completed telescope construction in 2010, and it is now actively observing. 
The Asteroid Terrestrial-impact Last Alert System, now in operation, conducts frequent scans of the sky with a view to later-stage detection on the collision stretch of the asteroid orbit. Those would be much too late for deflection, but still in time for evacuation and preparation of the affected Earth region. Another project, supported by the European Union, is NEOShield, which analyses realistic options for preventing the collision of a NEO with Earth. Their aim is to provide test mission designs for feasible NEO mitigation concepts. The project particularly emphasises on two aspects. The first one is the focus on technological development on essential techniques and instruments needed for guidance, navigation and control (GNC) in close vicinity of asteroids and comets. This will, for example, allow hitting such bodies with a high-velocity kinetic impactor spacecraft and observing them before, during and after a mitigation attempt, e.g., for orbit determination and monitoring. The second one focuses on refining Near Earth Object (NEO) characterisation. Moreover, NEOShield-2 will carry out astronomical observations of NEOs, to improve the understanding of their physical properties, concentrating on the smaller sizes of most concern for mitigation purposes, and to identify further objects suitable for missions for physical characterisation and NEO deflection demonstration. "Spaceguard" is the name for these loosely affiliated programs, some of which receive NASA funding to meet a U.S. Congressional requirement to detect 90% of near-Earth asteroids over 1 km diameter by 2008. A 2003 NASA study of a follow-on program suggests spending US$250–450 million to detect 90% of all near-Earth asteroids and larger by 2028. NEODyS is an online database of known NEOs. Sentinel mission The B612 Foundation is a private nonprofit foundation with headquarters in the United States, dedicated to protecting the Earth from asteroid strikes. It is led mainly by scientists, former astronauts and engineers from the Institute for Advanced Study, Southwest Research Institute, Stanford University, NASA and the space industry. As a non-governmental organization it has conducted two lines of related research to help detect NEOs that could one day strike the Earth, and find the technological means to divert their path to avoid such collisions. The foundation's goal had been to design and build a privately financed asteroid-finding space telescope, Sentinel, which was to be launched in 2017–2018. However the project was cancelled in 2015. Had the Sentinel's infrared telescope been parked in an orbit similar to that of Venus, it would have helped identify threatening NEOs by cataloging 90% of those with diameters larger than , as well as surveying smaller Solar System objects. Data gathered by Sentinel would have helped identify asteroids and other NEOs that pose a risk of collision with Earth, by being forwarded to scientific data-sharing networks, including NASA and academic institutions such as the Minor Planet Center. The foundation also proposes asteroid deflection of potentially dangerous NEOs by the use of gravity tractors to divert their trajectories away from Earth, a concept co-invented by the organization's CEO, physicist and former NASA astronaut Ed Lu. Prospective projects Orbit@home intends to provide distributed computing resources to optimize search strategy. On February 16, 2013, the project was halted due to lack of grant funding. 
However, on July 23, 2013, the orbit@home project was selected for funding by NASA's Near Earth Object Observation program and was to resume operations sometime in early 2014. As of July 13, 2018, the project is offline according to its website. The Large Synoptic Survey Telescope, currently under construction, is expected to perform a comprehensive, high-resolution survey starting in the early 2020s. Detection from space On November 8, 2007, the House Committee on Science and Technology's Subcommittee on Space and Aeronautics held a hearing to examine the status of NASA's Near-Earth Object survey program. The prospect of using the Wide-field Infrared Survey Explorer was proposed by NASA officials. WISE surveyed the sky in the infrared band at a very high sensitivity. Asteroids that absorb solar radiation can be observed through the infrared band. It was used to detect NEOs, in addition to performing its science goals. It is projected that WISE could detect 400 NEOs (roughly two percent of the estimated NEO population of interest) within the one-year mission. NEOSSat, the Near Earth Object Surveillance Satellite, is a microsatellite launched in February 2013 by the Canadian Space Agency (CSA) that will hunt for NEOs in space. Furthermore Near-Earth Object WISE (NEOWISE), an extension of the WISE mission, started in September 2013 (in its second mission extension) to hunt asteroids and comets close to the orbit of Earth. Deep Impact Research published in the March 26, 2009 issue of the journal Nature, describes how scientists were able to identify an asteroid in space before it entered Earth's atmosphere, enabling computers to determine its area of origin in the Solar System as well as predict the arrival time and location on Earth of its shattered surviving parts. The four-meter-diameter asteroid, called 2008 TC3, was initially sighted by the automated Catalina Sky Survey telescope, on October 6, 2008. Computations correctly predicted that it would impact 19 hours after discovery and in the Nubian Desert of northern Sudan. A number of potential threats have been identified, such as 99942 Apophis (previously known by its provisional designation ), which in 2004 temporarily had an impact probability of about 3% for the year 2029. Additional observations revised this probability down to zero. Double Asteroid Redirection Test On September 26, 2022 DART impacted Dimorphos, reducing the minor-planet moon's orbital period by 32 minutes. This mission was the first successful attempt at asteroid deflection. 2019 VL5 Asteroid Deflection Mission In 2025, China's CNSA intends to launch a deflection mission to near-Earth object 2019 VL5, a 30-meter wide asteroid. The mission will launch on a Long March 3B rocket and carry both an impactor and observer spacecraft. Impact probability calculation pattern The ellipses in the diagram on the right show the predicted position of an example asteroid at closest Earth approach. At first, with only a few asteroid observations, the error ellipse is very large and includes the Earth. Further observations shrink the error ellipse, but it still includes the Earth. This raises the predicted impact probability, since the Earth now covers a larger fraction of the error region. Finally, yet more observations (often radar observations, or discovery of a previous sighting of the same asteroid on archival images) shrink the ellipse revealing that the Earth is outside the error region, and the impact probability is near zero. 
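The rise-then-collapse pattern described above can be mimicked with a toy one-dimensional Monte Carlo sketch; the true miss distance and the uncertainty values below are hypothetical stand-ins for a real orbit-determination pipeline.

```python
# Toy model: the asteroid's true closest approach misses Earth, but while the
# positional uncertainty is large the error region still covers the Earth, so the
# computed impact probability first grows as the uncertainty shrinks and then
# collapses toward zero once the error region clears the planet.
import random

EARTH_RADIUS_KM = 6371.0
TRUE_MISS_KM = 20_000.0            # hypothetical true miss distance

def impact_probability(sigma_km: float, samples: int = 100_000) -> float:
    hits = sum(abs(random.gauss(TRUE_MISS_KM, sigma_km)) < EARTH_RADIUS_KM
               for _ in range(samples))
    return hits / samples

for sigma in (500_000, 100_000, 30_000, 10_000, 3_000):
    print(f"uncertainty +/- {sigma:>7,} km -> impact probability ~ {impact_probability(sigma):.2%}")
```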
For asteroids that are actually on track to hit Earth, the predicted probability of impact continues to increase as more observations are made. This similar pattern makes it difficult to differentiate between asteroids that will only come close to Earth and those that will actually hit it. This in turn makes it difficult to decide when to raise an alarm, as gaining more certainty takes time, which reduces the time available to react to a predicted impact. However, raising the alarm too soon has the danger of causing a false alarm and creating a Boy Who Cried Wolf effect if the asteroid in fact misses Earth. Collision avoidance strategies Cost, risk of failure, complexity, technology readiness, and overall performance are all important trade-offs in weighing collision avoidance strategies. Methods can be differentiated by the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (interception, rendezvous, or remote station). Strategies fall into two basic sets: fragmentation and delay. Fragmentation concentrates on rendering the impactor harmless by fragmenting it and scattering the fragments so that they miss the Earth or are small enough to burn up in the atmosphere. Delay exploits the fact that both the Earth and the impactor are in orbit. An impact occurs when both reach the same point in space at the same time, or more correctly when some point on Earth's surface intersects the impactor's orbit when the impactor arrives. Since the Earth is approximately 12,750 km in diameter and moves at approximately 30 km/s in its orbit, it travels a distance of one planetary diameter in about 425 seconds, or slightly over seven minutes. Delaying or advancing the impactor's arrival by times of this magnitude can, depending on the exact geometry of the impact, cause it to miss the Earth. Collision avoidance strategies can also be seen as either direct or indirect, and as differing in how rapidly they transfer energy to the object. The direct methods, such as nuclear explosives or kinetic impactors, rapidly intercept the bolide's path. Direct methods are preferred because they are generally less costly in time and money. Their effects may be immediate, thus saving precious time. These methods would work for short-notice and long-notice threats, and are most effective against solid objects that can be directly pushed, but in the case of kinetic impactors, they are not very effective against large, loosely aggregated rubble piles. Indirect methods, such as gravity tractors, attaching rockets or mass drivers, are much slower. They require traveling to the object, changing course up to 180 degrees for space rendezvous, and then taking much more time to change the asteroid's path just enough so it will miss Earth. Many NEOs are thought to be "flying rubble piles" only loosely held together by gravity, and a typical spacecraft-sized kinetic-impactor deflection attempt might just break up the object or fragment it without sufficiently adjusting its course. If an asteroid breaks into fragments, any fragment larger than across would not burn up in the atmosphere and itself could impact Earth. Tracking the thousands of buckshot-like fragments that could result from such an explosion would be a very daunting task, although fragmentation would be preferable to doing nothing and allowing the originally larger rubble body, which is analogous to a shot and wax slug, to impact the Earth. 
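As a quick check of the transit-time figure in the delay discussion above, using only a rounded Earth diameter and orbital speed:

```python
# Earth's orbital motion carries it one planetary diameter in roughly seven minutes,
# which is why shifting an impactor's arrival by a few hundred seconds can turn a
# hit into a miss.
EARTH_DIAMETER_KM = 12_750        # ~12,742 km, rounded
ORBITAL_SPEED_KM_S = 30.0         # ~29.8 km/s, rounded

transit_seconds = EARTH_DIAMETER_KM / ORBITAL_SPEED_KM_S
print(f"Time to travel one Earth diameter: {transit_seconds:.0f} s "
      f"(~{transit_seconds / 60:.1f} minutes)")   # 425 s, slightly over 7 minutes
```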
In Cielo simulations conducted in 2011–2012, in which the rate and quantity of energy delivery were sufficiently high and matched to the size of the rubble pile, such as following a tailored nuclear explosion, results indicated that any asteroid fragments, created after the pulse of energy is delivered, would not pose a threat of re-coalescing (including for those with the shape of asteroid Itokawa) but instead would rapidly achieve escape velocity from their parent body (which for Itokawa is about 0.2 m/s) and therefore move out of an Earth-impact trajectory. Nuclear explosive device Initiating a nuclear explosive device above, on, or slightly beneath, the surface of a threatening celestial body is a potential deflection option, with the optimal detonation height dependent upon the composition and size of the object. It does not require the entire NEO to be vaporized to mitigate an impact threat. In the case of an inbound threat from a "rubble pile", the stand off, or detonation height above the surface configuration, has been put forth as a means to prevent the potential fracturing of the rubble pile. The energetic neutrons and soft X-rays released by the detonation, which do not appreciably penetrate matter, are converted into heat upon encountering the object's surface matter, ablatively vaporizing all line of sight exposed surface areas of the object to a shallow depth, turning the surface material it heats up into ejecta, and, analogous to the ejecta from a chemical rocket engine exhaust, changing the velocity, or "nudging", the object off course by the reaction, following Newton's third law, with ejecta going one way and the object being propelled in the other. Depending on the energy of the explosive device, the resulting rocket exhaust effect, created by the high velocity of the asteroid's vaporized mass ejecta, coupled with the object's small reduction in mass, would produce enough of a change in the object's orbit to make it miss the Earth. A Hypervelocity Asteroid Mitigation Mission for Emergency Response (HAMMER) has been proposed. While there have been no updates as of 2023 regarding the HAMMER, NASA has published its regular Planetary Defense Strategy and Action Plan for 2023. In it, NASA acknowledges that it is crucial to continue studying the potential of nuclear energy in deflecting or destroying asteroids. This is because it is currently the only option for defense if scientists were not aware of the asteroid within a few months or years, depending on the asteroid's velocity. The report also notes there needs to be research done into the legal implications as well as policy implications on the topic. Stand-off approach If the object is very large but is still a loosely-held-together rubble pile, a solution is to detonate one or a series of nuclear explosive devices alongside the asteroid, at a or greater stand-off height above its surface, so as not to fracture the potentially loosely-held-together object. Providing that this stand-off strategy was done far enough in advance, the force from a sufficient number of nuclear blasts would alter the object's trajectory enough to avoid an impact, according to computer simulations and experimental evidence from meteorites exposed to the thermal X-ray pulses of the Z-machine. 
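At heart, the ablative "nudge" described above is a momentum balance: vaporized surface material leaves as ejecta, and Newton's third law pushes the asteroid the other way. A minimal sketch follows, with every mass and speed chosen purely for illustration rather than taken from the studies cited above.

```python
# Minimal momentum-balance sketch of the ablation nudge: ejecta mass times mean ejecta
# speed equals the impulse delivered to the asteroid.  All numbers are hypothetical.
ASTEROID_MASS_KG = 1.0e12         # roughly a kilometre-scale rocky body (illustrative)
EJECTA_MASS_KG = 1.0e7            # thin ablated surface layer (illustrative)
EJECTA_SPEED_M_S = 1_000.0        # mean speed of the vaporized material (illustrative)

delta_v = EJECTA_MASS_KG * EJECTA_SPEED_M_S / ASTEROID_MASS_KG
print(f"Velocity change imparted to the asteroid: {delta_v:.3f} m/s")   # 0.010 m/s here
```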
In 1967, graduate students under Professor Paul Sandorff at the Massachusetts Institute of Technology were tasked with designing a method to prevent a hypothetical 18-month distant impact on Earth by the asteroid 1566 Icarus, an object that makes regular close approaches to Earth, sometimes as close as 16 lunar distances. To achieve the task within the timeframe and with limited material knowledge of the asteroid's composition, a variable stand-off system was conceived. This would have used a number of modified Saturn V rockets sent on interception courses and the creation of a handful of nuclear explosive devices in the 100-megaton energy range—coincidentally, the same as the maximum yield of the Soviets' Tsar Bomba would have been if a uranium tamper had been used—as each rocket vehicle's payload. The design study was later published as Project Icarus which served as the inspiration for the 1979 film Meteor. A NASA analysis of deflection alternatives, conducted in 2007, stated: In the same year, NASA released a study where the asteroid Apophis (with a diameter of around ) was assumed to have a much lower rubble pile density () and therefore lower mass than it is now known to have, and in the study, it is assumed to be on an impact trajectory with Earth for the year 2029. Under these hypothetical conditions, the report determines that a "Cradle spacecraft" would be sufficient to deflect it from Earth impact. This conceptual spacecraft contains six B83 physics packages, each set for their maximum 1.2-megatonne yield, bundled together and lofted by an Ares V vehicle sometime in the 2020s, with each B83 being fuzed to detonate over the asteroid's surface at a height of ("1/3 of the objects diameter" as its stand-off), one after the other, with hour-long intervals between each detonation. The results of this study indicated that a single employment of this option "can deflect NEOs of [ diameter] two years before impact, and larger NEOs with at least five years warning". These effectiveness figures are considered to be "conservative" by its authors, and only the thermal X-ray output of the B83 devices was considered, while neutron heating was neglected for ease of calculation purposes. Research published in 2021 pointed out the fact that for an effective deflection mission, there would need to be a significant amount of warning time, with the ideal being several years or more. The more warning time provided, the less energy will be necessary to divert the asteroid just enough to adjust the trajectory to avoid Earth. The study also emphasized that deflection, as opposed to destruction, can be a safer option, as there is a smaller likelihood of asteroid debris falling to Earth's surface. The researchers proposed the best way to divert an asteroid through deflection is adjusting the output of neutron energy in the nuclear explosion. Surface and subsurface use In 2011, the director of the Asteroid Deflection Research Center at Iowa State University, Dr. Bong Wie (who had published kinetic impactor deflection studies previously), began to study strategies that could deal with objects when the time to Earth impact was less than one year. He concluded that to provide the required energy, a nuclear explosion or other event that could deliver the same power, are the only methods that can work against a very large asteroid within these time constraints. 
This work resulted in the creation of a conceptual Hypervelocity Asteroid Intercept Vehicle (HAIV), which combines a kinetic impactor, used to create an initial crater, with a follow-up subsurface nuclear detonation inside that crater; detonating below the surface converts the nuclear energy released into propulsive energy for the asteroid with a high degree of efficiency. A similar proposal would use a surface-detonating nuclear device in place of the kinetic impactor to create the initial crater, then use the crater as a rocket nozzle to channel succeeding nuclear detonations. Wie claimed the computer models he worked on showed the possibility for a 300-meter-wide (1,000 ft) asteroid to be destroyed using a single HAIV with a warning time of 30 days. Additionally, the models showed that less than 0.1% of debris from the asteroid would reach Earth's surface. There have been few substantial updates from Wie and his team since 2014 regarding the research. As of 2015, Wie has collaborated with the Danish Emergency Asteroid Defence Project (EADP), which intends to crowdsource sufficient funds to design, build, and store a non-nuclear HAIV spacecraft as planetary insurance. For threatening asteroids too large or too close to Earth impact to be effectively deflected by the non-nuclear HAIV approach, nuclear explosive devices (with 5% of the explosive yield of those used for the stand-off strategy) are intended to be used, under international oversight, when conditions arise that necessitate it. A study published in 2020 pointed out that a non-nuclear kinetic impact becomes less effective the larger and closer the asteroid. However, researchers ran a model that suggested a nuclear detonation near the surface of an asteroid, designed to cover one side of the asteroid with X-rays, would be effective. When the X-rays cover one side of the asteroid in the simulation, the energy propels the asteroid in a preferred direction. The lead researcher with the study, Dave Dearborn, said a nuclear impact offered more flexibility than a non-nuclear approach, as the energy output can be adjusted specifically to the asteroid's size and location. Comet deflection possibility Following the 1994 Shoemaker-Levy 9 comet impacts with Jupiter, Edward Teller proposed, to a collective of U.S. and Russian ex-Cold War weapons designers in a 1995 planetary defense workshop meeting at Lawrence Livermore National Laboratory (LLNL), that they collaborate to design a one-gigaton nuclear explosive device, which would be equivalent to the kinetic energy of an asteroid. The theoretical one-gigaton device would weigh about 25–30 tons, light enough to be lifted on the Energia rocket. It could be used to instantaneously vaporize a one-kilometer asteroid, or divert the paths of ELE-class asteroids (greater than in diameter) within short notice of a few months. With one year of notice, and at an interception location no closer than Jupiter, it could also deal with the even rarer short-period comets that can come out of the Kuiper belt and transit past Earth orbit within two years. For comets of this class, with a maximum estimated diameter of , Chiron served as the hypothetical threat. In 2013, the related National Laboratories of the US and Russia signed a deal that includes an intent to cooperate on defense from asteroids. The deal was meant to complement New START, but Russia suspended its participation in the treaty in 2023. 
As of April 2023, there has not been an official update from the White House or Moscow on how Russia's suspended participation will affect adjacent treaties. Present capability As of late 2022, the most likely and most effective method for asteroid deflection does not involve nuclear technology. Instead, it involves a kinetic impactor designed to redirect the asteroid, which showed promise in the NASA DART mission. For nuclear technology, simulations have been run analyzing the possibility of using neutron energy put off by a nuclear device to redirect an asteroid. These simulations showed promise, with one study finding that increasing the neutron energy output had a notable effect on the angle of the asteroid's travel. However, there has not been a practical test studying the possibility as of April 2023. Kinetic impact The impact of a massive object, such as a spacecraft or even another near-Earth object, is another possible solution to a pending NEO impact. An object with a high mass close to the Earth could be sent on a collision course with the asteroid, knocking it off course. When the asteroid is still far from the Earth, a means of deflecting the asteroid is to directly alter its momentum by colliding a spacecraft with the asteroid. A NASA analysis of deflection alternatives, conducted in 2007, stated: This deviation method, which has been implemented by DART and, for a completely different purpose (analysis of the structure and composition of a comet), by NASA's Deep Impact space probe, involves launching a spacecraft against the near-Earth object. The velocity of the asteroid is modified according to the law of conservation of momentum, which for an impactor that embeds in the body gives M₁V₁ + M₂V₂ = (M₁ + M₂)V₂′, with V₁ the velocity of the spacecraft, V₂ the velocity of the celestial body before impact, and V₂′ its velocity after impact; M₁ and M₂ are the respective masses of the spacecraft and of the celestial body. Velocities are vectors here. The European Union's NEOShield-2 Mission is also primarily studying the kinetic impactor mitigation method. The principle of the kinetic impactor mitigation method is that the NEO or asteroid is deflected following an impact from an impactor spacecraft. The principle of momentum transfer is used, as the impactor crashes into the NEO at a very high velocity of or more. The momentum of the impactor is transferred to the NEO, causing a change in velocity and therefore making it deviate from its course slightly. As of mid-2021, the modified AIDA mission had been approved. The NASA Double Asteroid Redirection Test (DART) kinetic impactor spacecraft was launched in November 2021. The goal was to impact Dimorphos (nicknamed Didymoon), the minor-planet moon of near-Earth asteroid 65803 Didymos. The impact occurred in September 2022, when Didymos was relatively close to Earth, allowing Earth-based telescopes and planetary radar to observe the event. The result of the impact was to change the orbital velocity, and hence the orbital period, of Dimorphos by a large enough amount that it could be measured from Earth. This showed for the first time that it is possible to change the orbit of a small asteroid, around the size most likely to require active mitigation in the future. The DART mission, whose results were confirmed by March 2023, showed that asteroids can be redirected without the use of nuclear means. The second part of the AIDA mission, the ESA HERA spacecraft, was approved by ESA member states in October 2019. It would reach the Didymos system in 2026 and measure both the mass of Dimorphos and the precise effect of the impact on that body, allowing much better extrapolation of the AIDA mission to other targets. 
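The momentum-conservation relation above can be made concrete with rough, DART-like numbers; the masses and speed are ballpark values, and the simple embedding model below ignores the extra push from ejecta that the real impact produced.

```python
# Head-on kinetic impactor that embeds in the target, with no ejecta enhancement.
# Working in the target's initial rest frame (V2 = 0), conservation of momentum
# M1*V1 + M2*V2 = (M1 + M2)*V2' gives the target's change in velocity.
IMPACTOR_MASS_KG = 570.0          # roughly DART's mass at impact (approximate)
RELATIVE_SPEED_M_S = 6_100.0      # roughly DART's impact speed (approximate)
TARGET_MASS_KG = 4.3e9            # rough estimate of Dimorphos's mass

delta_v = IMPACTOR_MASS_KG * RELATIVE_SPEED_M_S / (IMPACTOR_MASS_KG + TARGET_MASS_KG)
print(f"Velocity change of the target: {delta_v * 1000:.2f} mm/s")   # under 1 mm/s here
```

Even a change well under a millimetre per second, applied years ahead of a predicted impact, accumulates into a substantial miss distance, which is why the measurable shift in Dimorphos's orbital period was significant.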
Asteroid gravity tractor Another alternative to explosive deflection is to move the asteroid slowly over time. A small but constant amount of thrust accumulates to deviate an object sufficiently from its course. Edward T. Lu and Stanley G. Love have proposed using a massive uncrewed spacecraft hovering over an asteroid to gravitationally pull the asteroid into a non-threatening orbit. Though both objects are gravitationally pulled towards each other, the spacecraft can counter the force towards the asteroid with, for example, an ion thruster, so the net effect would be that the asteroid is accelerated towards the spacecraft and thus slightly deflected from its orbit. While slow, this method has the advantage of working irrespective of the asteroid's composition or spin rate; rubble-pile asteroids would be difficult to deflect by means of nuclear detonations, while a pushing device would be difficult or inefficient to mount on a fast-rotating asteroid. A gravity tractor would likely have to spend several years beside the asteroid to be effective. A NASA analysis of deflection alternatives, conducted in 2007, stated: Ion beam shepherd Another "contactless" asteroid deflection technique has been proposed by C. Bombardelli and J. Peláez from the Technical University of Madrid. The method involves the use of a low-divergence ion thruster pointed at the asteroid from a nearby hovering spacecraft. The momentum transmitted by the ions reaching the asteroid surface produces a slow but continuous force that can deflect the asteroid in a similar way to the gravity tractor, but with a lighter spacecraft. Focused solar energy H. J. Melosh and I. V. Nemchinov proposed deflecting an asteroid or comet by focusing solar energy onto its surface to create thrust from the resulting vaporization of material. This method would first require the construction of a space station with a system of large collecting, concave mirrors similar to those used in solar furnaces. Orbit mitigation with highly concentrated sunlight can be scaled to achieve the required deflection within a year, even for a globally threatening body, without a prolonged warning time. Such an accelerated strategy may become relevant in the case of late detection of a potential hazard, and also, if required, in providing the possibility of some additional action. Conventional concave reflectors are practically inapplicable to such a high-concentration geometry in the case of a giant, shadowing space target located in front of the mirrored surface. This is primarily because of the dramatic spread of the mirrors' focal points on the target, due to optical aberration when the optical axis is not aligned with the Sun. On the other hand, positioning any collector at a distance from the target much larger than its size does not yield the required concentration level (and therefore temperature), due to the natural divergence of the sunrays. Such fundamental restrictions apply wherever one or many unshaded forward-reflecting collectors are placed relative to the asteroid. Also, any secondary mirrors, similar to the ones found in Cassegrain telescopes, would be prone to heat damage from partially concentrated sunlight reflected by the primary mirror. In order to remove the above restrictions, V.P. 
Vasylyev proposed to apply an alternative design of mirrored collector, the ring-array concentrator. This type of collector has an underside, lens-like position of its focal area that avoids shadowing of the collector by the target and minimizes the risk of its coating by ejected debris. Provided a sunlight concentration of approximately 5 × 10³ times, a surface irradiance of around 4–5 MW/m² leads to a thrusting effect of about . Intensive ablation of the rotating asteroid surface under the focal spot will lead to the appearance of a deep "canyon", which can help shape the escaping gas flow into a jet. This may be sufficient to deflect an asteroid within several months with no additional warning period, using a ring-array collector only about half the asteroid's diameter in size. For such a prompt deflection of larger NEOs, the required collector sizes are comparable to the target diameter. In the case of a longer warning time, the required size of the collector may be significantly decreased. Mass driver A mass driver is an (automated) system on the asteroid to eject material into space, thus giving the object a slow steady push and decreasing its mass. A mass driver is designed to work as a very low specific impulse system, which in general uses a lot of propellant, but very little power. This essentially uses the asteroid against itself in order to divert a collision. Modular Asteroid Deflection Mission Ejector Node (MADMEN) is the idea of landing small unmanned vehicles, such as space rovers, to break up small portions of the asteroid. Using drills to break small rocks and boulders off the surface, debris would be ejected from the surface at high speed. Because no other forces act on the asteroid, the reaction to these ejected rocks pushes the asteroid off course at a very slow rate. This process takes time but could be very effective if implemented correctly. The idea is that when using local material as propellant, the amount of propellant is not as important as the amount of power, which is likely to be limited. Conventional rocket engine Attaching any spacecraft propulsion device would have a similar effect of giving a push, possibly forcing the asteroid onto a trajectory that takes it away from Earth. An in-space rocket engine capable of imparting an impulse of 10⁶ N·s (e.g., adding 1 km/s to a 1,000 kg vehicle) will have a relatively small effect on even a relatively small asteroid, whose mass is roughly a million times greater. Chapman, Durda, and Gold's white paper calculates deflections using existing chemical rockets delivered to the asteroid. Such direct-force rocket engines are typically proposed to use highly efficient electrically powered spacecraft propulsion, such as ion thrusters or VASIMR. Asteroid laser ablation Similar to the effects of a nuclear device, it is thought possible to focus sufficient laser energy on the surface of an asteroid to cause flash vaporization/ablation, creating an impulse or ablating away the asteroid's mass. This concept, called asteroid laser ablation, was articulated in the 1995 SpaceCast 2020 white paper "Preparing for Planetary Defense", and the 1996 Air Force 2025 white paper "Planetary Defense: Catastrophic Health Insurance for Planet Earth". Early publications include C. R. Phipps' "ORION" concept from 1996, Colonel Jonathan W. Campbell's 2000 monograph "Using Lasers in Space: Laser Orbital Debris Removal and Asteroid Deflection", and NASA's 2005 concept Comet Asteroid Protection System (CAPS). Typically such systems require a significant amount of power, such as would be available from a Space-Based Solar Power Satellite. Another proposal is Phillip Lubin's DE-STAR concept: the DE-STAR project, proposed by researchers at the University of California, Santa Barbara, is a concept for a modular, solar-powered laser array operating at a near-infrared wavelength of 1 μm. The design calls for the array to eventually be approximately 1 square kilometre in size, with the modular design meaning that it could be launched in increments and assembled in space. In its early stages, as a small array, it could deal with smaller targets, assist solar sail probes, and would also be useful in cleaning up space debris. 
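A common thread of the mass-driver, rocket, and ablation approaches is that whatever delivers the impulse, the resulting velocity change is the impulse divided by the object's mass; the sketch below simply reworks the 10⁶ N·s example from the conventional rocket engine discussion above.

```python
# The same impulse that adds 1 km/s to a 1,000 kg spacecraft changes the velocity of
# a ~1e9 kg asteroid (about a million times more massive) by only ~1 mm/s.
IMPULSE_N_S = 1.0e6

for mass_kg, label in [(1.0e3, "1,000 kg spacecraft"), (1.0e9, "small asteroid (1e9 kg)")]:
    delta_v = IMPULSE_N_S / mass_kg          # dv = J / m
    print(f"{label:<24} -> delta-v = {delta_v:g} m/s")
```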
Other proposals Wrapping the asteroid in a sheet of reflective plastic, such as aluminized PET film, as a solar sail. "Painting" or dusting the object with titanium dioxide (white) to alter its trajectory via increased reflected radiation pressure, or with soot (black) to alter its trajectory via the Yarkovsky effect. Planetary scientist Eugene Shoemaker in 1996 proposed deflecting a potential impactor by releasing a cloud of steam in the path of the object, hopefully gently slowing it. Nick Szabo in 1990 sketched a similar idea, "cometary aerobraking", the targeting of a comet or ice construct at an asteroid, then vaporizing the ice with nuclear explosives to form a temporary atmosphere in the path of the asteroid. A coherent digger array: multiple 1-ton flat tractors able to dig and expel asteroid soil mass as a coherent fountain; coordinated fountain activity may propel and deflect the object over a period of years. Attaching a tether and ballast mass to the asteroid to alter its trajectory by changing its center of mass. Magnetic flux compression to magnetically brake and/or capture objects that contain a high percentage of meteoric iron, by deploying a wide coil of wire in the object's orbital path; when the object passes through, induction turns the coil into an electromagnetic solenoid that can brake or capture it. Deflection technology concerns Carl Sagan, in his book Pale Blue Dot, expressed concern about deflection technology, noting that any method capable of deflecting impactors away from Earth could also be abused to divert non-threatening bodies toward the planet. Considering the history of genocidal political leaders and the possibility of the bureaucratic obscuring of any such project's true goals to most of its scientific participants, he judged the Earth at greater risk from a man-made impact than a natural one. Sagan instead suggested that deflection technology be developed only in an actual emergency situation. All low-energy delivery deflection technologies have inherent fine control and steering capability, making it possible to add just the right amount of energy to steer an asteroid originally destined for a mere close approach toward a specific Earth target. According to former NASA astronaut Rusty Schweickart, the gravitational tractor method is controversial because, during the process of changing an asteroid's trajectory, the point on the Earth where it could most likely hit would be slowly shifted across different countries. Thus, the threat for the entire planet would be minimized at the cost of some specific states' security. In Schweickart's opinion, choosing the way the asteroid should be "dragged" would be a tough diplomatic decision. Analysis of the uncertainty involved in nuclear deflection shows that the ability to protect the planet does not imply the ability to target the planet. 
A nuclear explosion that changes an asteroid's velocity by 10 meters per second (plus or minus 20%) would be adequate to push it out of an Earth-impacting orbit. However, if the uncertainty of the velocity change was more than a few percent, there would be no chance of directing the asteroid to a particular target. Additionally, there are legal concerns regarding the launch of nuclear technology into space. In 1992, the United Nations adopted a resolution that provides strict rules regarding sending nuclear technology to space, including preventing the contamination of space as well as protecting all citizens on Earth from potential fallout. As of 2022, the UN is still considering the safety and legal issues of launching nuclear powered items into outer space, particularly given the expanding field of space travel as more private organizations take part in the modern space race. The UN Committee on Peaceful Uses of Outer Space recently emphasized the point of the previous resolution, saying it is the responsibility of the member states to ensure the safety of everyone regarding nuclear power in space. Planetary defense timeline In their 1964 book, Islands in Space, Dandridge M. Cole and Donald W. Cox noted the dangers of planetoid impacts, both those occurring naturally and those that might be brought about with hostile intent. They argued for cataloging the minor planets and developing the technologies to land on, deflect, or even capture planetoids. In 1967, students in the Aeronautics and Astronautics department at MIT did a design study, "Project Icarus", of a mission to prevent a hypothetical impact on Earth by asteroid 1566 Icarus. The design project was later published in a book by the MIT Press and received considerable publicity, for the first time bringing asteroid impact into the public eye. In the 1980s NASA studied evidence of past strikes on planet Earth, and the risk of this happening at the current level of civilization. This led to a program that maps objects in the Solar System that both cross Earth's orbit and are large enough to cause serious damage if they hit. In the 1990s, US Congress held hearings to consider the risks and what needed to be done about them. This led to a US$3 million annual budget for programs like Spaceguard and the near-Earth object program, as managed by NASA and USAF. In 2005 a number of astronauts published an open letter through the Association of Space Explorers calling for a united push to develop strategies to protect Earth from the risk of a cosmic collision. In 2007 it was estimated that there were approximately 20,000 objects capable of crossing Earth's orbit and large enough (140 meters or larger) to warrant concern. On the average, one of these will collide with Earth every 5,000 years, unless preventive measures are undertaken. It was anticipated that by year 2008, 90% of such objects that are 1 km or more in diameter will have been identified and will be monitored. The further task of identifying and monitoring all such objects of 140m or greater was expected to be complete around 2020. By April 2018, astronomers have spotted more than 8,000 near-Earth asteroids that are at least 460 feet (140 meters) wide and it is estimated about 17,000 such near-Earth asteroids remain undetected. By 2019, the number of discovered near-Earth asteroids of all sizes totaled more than 19,000. An average of 30 new discoveries are added each week. The Catalina Sky Survey (CSS) is one of NASA's four funded surveys to carry out a 1998 U.S. 
Congress mandate to find and catalog, by the end of 2008, at least 90 percent of all near-Earth objects (NEOs) larger than 1 kilometer across. CSS discovered over 1,150 NEOs in the years 2005 to 2007. In the course of this survey, on November 20, 2007, they discovered an asteroid, designated , which initially was estimated to have a chance of hitting Mars on January 30, 2008, but further observations during the following weeks allowed NASA to rule out an impact. NASA estimated a near miss by . In January 2012, after a near pass-by of object 2012 BX34, a paper entitled "A Global Approach to Near-Earth Object Impact Threat Mitigation" was released by researchers from Russia, Germany, the United States, France, Britain, and Spain, which discusses the "NEOShield" project. In November 2021, NASA launched a mission with a different goal in terms of planetary defense. Many earlier concepts were meant to completely destroy the threatening asteroid, but NASA and many others regarded that approach as too unreliable, so they funded the Double Asteroid Redirection Test (DART) mission, which launched a small uncrewed spacecraft to crash into an asteroid, either to break it up or to deflect the rock away from Earth. In January 2022, the NASA-funded Asteroid Terrestrial-impact Last Alert System (ATLAS), a state-of-the-art asteroid detection system operated by the University of Hawaii (UH) Institute for Astronomy (IfA) for the agency's Planetary Defense Coordination Office (PDCO), reached a new milestone by becoming the first survey capable of searching the entire dark sky every 24 hours for near-Earth objects (NEOs) that could pose a future impact hazard to Earth. Now comprising four telescopes, ATLAS has expanded its reach to the southern hemisphere from the two existing northern-hemisphere telescopes on Haleakalā and Maunaloa in Hawai'i to include two additional observatories in South Africa and Chile. As of March 1, 2023, NASA's published results confirm that DART works: the spacecraft succeeded in targeting and striking an asteroid moving at high speed, and in redirecting its course. These data showed that an asteroid with a diameter of up to half a mile can be successfully moved. See also Asteroid impact prediction Asteroid Redirect Mission Asteroid Day Asteroids in fiction B612 Foundation Colonization of the Moon Framework Programmes for Research and Technological Development Global catastrophic risk Gravity tractor Lost minor planet Near-Earth Asteroid Scout Near-Earth object Potentially hazardous object United States Space Force Sources References Citations General bibliography Luis Alvarez et al., 1980 paper in Science magazine on the asteroid-impact cause of the great mass extinction 65 million years ago that led to the proliferation of mammal species and, ultimately, the rise of the human race; a controversial theory in its day, now generally accepted. Clark R. Chapman, Daniel D. Durda & Robert E. Gold (February 24, 2001), Impact Hazard, a Systems Approach, white paper on public policy issues associated with the impact hazard, at boulder.swri.edu. Donald W. Cox and James H. Chestek. 1996. Doomsday Asteroid: Can We Survive? New York: Prometheus Books. (Note that despite its sensationalist title, this is a good treatment of the subject and includes a nice discussion of the collateral space development possibilities.) Izzo, D., Bourdoux, A., Walker, R. 
and Ongaro, F.; "Optimal Trajectories for the Impulsive Deflection of NEOs"; Paper IAC-05-C1.5.06, 56th International Astronautical Congress, Fukuoka, Japan, (October 2005). Later published in Acta Astronautica, Vol. 59, No. 1-5, pp. 294–300, April 2006, available in esa.int – The first scientific paper proving that Apophis can be deflected by a small sized kinetic impactor. David Morrison. "Is the Sky Falling?", Skeptical Inquirer 1997. David Morrison, Alan W Harris, Geoff Summer, Clark R. Chapman, & Andrea Carusi Dealing with Impact Hazard, 2002 technical summary Kunio M. Sayanagi. "How to Deflect an Asteroid". Ars Technica (April 2008). Russell L. Schweickart, Edward T. Lu, Piet Hut and Clark R. Chapman; "The Asteroid Tugboat"; Scientific American (November 2003). Vol. 289, No. 5, pp. 54–61. . Furfaro, Emily. NASA's DART Data Validates Kinetic Impact as Planetary Defense Method, NASA, 28 Feb. 2023, . Further reading General Air Force 2025. Planetary Defense: Social, Economic, and Political Implications, United States Air Force, Air Force 2025 Final Report webpage, December 11, 1996. Belton, M.J.S. Mitigation of Hazardous Comets and Asteroids, Cambridge University Press, 2004, Bottke, William F. Asteroids III (Space Science Series), University of Arizona space science series, University of Arizona Press, 2002, Burrows, William E. The Asteroid Threat: Defending Our Planet from Deadly Near-Earth Objects. Lewis, John S. Comet and Asteroid Impact Hazards on a Populated Earth: Computer Modeling (Volume 1 of Comet and Asteroid Impact Hazards on a Populated Earth: Computer Modeling), Academic Press, 2000, Marboe, Irmgard : Legal Aspects of Planetary Defence. Brill, Leiden 2021, ISBN 978-90-04-46759-0. Schmidt, Nikola et al.: Planetary Defense: Global Collaboration for Defending Earth from Asteroids and Comets. Springer, Cham 2019, . Verschuur, Gerrit L. (1997) Impact!: The Threat of Comets and Asteroids, Oxford University Press, External links "Deflecting Asteroids" (with solar sails) by Gregory L. Matloff, IEEE Spectrum, April 2012 Near Earth Objects Directory Nasa's 2007 Report to Congress on NEO Survey Program Including Tracking and Diverting Methods for High Risk Asteroids Armagh University: Near Earth Object Impact Hazard Threats from Space: A Review of U.S. Government Efforts to Track and Mitigate Asteroids and Meteors (Part I and Part II): Hearing before the Committee on Science, Space, and Technology, House of Representatives, One Hundred Thirteenth Congress, First Session, Tuesday, March 19, 2013 and Wednesday, April 10, 2013 Asteroids Earth Future problems Impact events Prevention Space weapons
Asteroid impact avoidance
[ "Astronomy" ]
11,094
[ "Astronomical events", "Impact events" ]
174,080
https://en.wikipedia.org/wiki/Diagonal%20matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is [3 0; 0 2], while an example of a 3×3 diagonal matrix is [6 0 0; 0 5 0; 0 0 4] (semicolons separate rows). An identity matrix of any size, or any multiple of it, is a diagonal matrix called a scalar matrix, for example, 3I. In geometry, a diagonal matrix may be used as a scaling matrix, since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale. Definition As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_i,j) with n columns and n rows is diagonal if d_i,j = 0 whenever i ≠ j. However, the main diagonal entries are unrestricted. The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form d_i,i being zero. For example: [1 0 0; 0 4 0] is a rectangular diagonal matrix. More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix. The following matrix is a square diagonal matrix: [1 0 0; 0 4 0; 0 0 -2]. If the entries are real numbers or complex numbers, then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices". Vector-to-matrix diag operator A diagonal matrix D can be constructed from a vector a = (a1, ..., an) using the diag operator: D = diag(a1, ..., an). This may be written more compactly as D = diag(a). The same operator is also used to represent block diagonal matrices as A = diag(A1, ..., An), where each argument Ai is a matrix. The diag operator may be written as diag(a) = (a 1ᵀ) ∘ I, where ∘ represents the Hadamard product and 1 is a constant vector with elements 1. Matrix-to-vector diag operator The inverse matrix-to-vector diag operator is sometimes denoted by the identically named diag(D) = (d_1,1, ..., d_n,n), where the argument is now a matrix and the result is a vector of its diagonal entries. The following property holds: diag(AB) = (A ∘ Bᵀ) 1. Scalar matrix A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λI of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form [λ 0 0; 0 λ 0; 0 0 λ]. The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix D = diag(a1, ..., an) has a_i ≠ a_j, then given a matrix M with m_i,j ≠ 0, the (i, j) terms of the products DM and MD are a_i m_i,j and a_j m_i,j respectively, and a_i m_i,j ≠ a_j m_i,j (since one can divide by m_i,j), so D and M do not commute unless the off-diagonal terms of M are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices. For an abstract vector space V (rather than the concrete vector space Kⁿ), the analog of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra End(M) (algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map R → End(M) (from a scalar λ to its corresponding scalar transformation, multiplication by λ) exhibiting End(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, scalar invertible transforms are the center of the general linear group GL(V). The former is more generally true of free modules, for which the endomorphism algebra is isomorphic to a matrix algebra. 
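Before moving on, a short NumPy sketch of the diag operators and of the commutation behaviour of scalar versus general diagonal matrices described above (the particular numbers are arbitrary):

```python
# NumPy illustration of the diag operators and of scalar-matrix commutation.
import numpy as np

a = np.array([4.0, 5.0, 6.0])
D = np.diag(a)                          # vector-to-matrix: 3x3 diagonal matrix with a on the diagonal
assert np.array_equal(np.diag(D), a)    # matrix-to-vector: recovers the diagonal entries

M = np.arange(9.0).reshape(3, 3)        # a general 3x3 matrix with nonzero off-diagonal terms

S = 3.0 * np.eye(3)                     # a scalar matrix commutes with every square matrix of its size
assert np.allclose(S @ M, M @ S)

# ...but a diagonal matrix with distinct entries does not commute with M.
assert not np.allclose(D @ M, M @ D)
```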
Vector operations Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix D = diag(a1, ..., an) and a vector v = (x1, ..., xn), the product is Dv = (a1 x1, ..., an xn). This can be expressed more compactly by using a vector a instead of a diagonal matrix, and taking the Hadamard product of the vectors (entrywise product), denoted a ∘ v: Dv = a ∘ v = (a1 x1, ..., an xn). This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF, since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly. Matrix operations The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write diag(a1, ..., an) for a diagonal matrix whose diagonal entries starting in the upper left corner are a1, ..., an. Then, for addition, we have diag(a1, ..., an) + diag(b1, ..., bn) = diag(a1 + b1, ..., an + bn), and for matrix multiplication, diag(a1, ..., an) · diag(b1, ..., bn) = diag(a1 b1, ..., an bn). The diagonal matrix diag(a1, ..., an) is invertible if and only if the entries a1, ..., an are all nonzero. In this case, we have diag(a1, ..., an)⁻¹ = diag(1/a1, ..., 1/an). In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices. Multiplying an m-by-n matrix A from the left with diag(a1, ..., am) amounts to multiplying the i-th row of A by ai for all i; multiplying A from the right with diag(a1, ..., an) amounts to multiplying the j-th column of A by aj for all j. Operator matrix in eigenbasis As explained in determining coefficients of operator matrix, there is a special basis, e1, ..., en, for which the matrix A takes the diagonal form. Hence, in the defining equation A ej = Σᵢ a_i,j ei, all coefficients a_i,j with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, a_i,i, are known as eigenvalues and designated with λi in the equation, which reduces to A ei = λi ei. The resulting equation is known as the eigenvalue equation and is used to derive the characteristic polynomial and, further, the eigenvalues and eigenvectors. In other words, the eigenvalues of diag(λ1, ..., λn) are λ1, ..., λn, with associated eigenvectors e1, ..., en. Properties The determinant of diag(a1, ..., an) is the product a1 a2 ⋯ an. The adjugate of a diagonal matrix is again diagonal. Where all matrices are square, a matrix is diagonal if and only if it is triangular and normal, and a matrix is diagonal if and only if it is both upper- and lower-triangular. A diagonal matrix is symmetric. The identity matrix and zero matrix are diagonal. A 1×1 matrix is always diagonal. The square of a 2×2 matrix with zero trace is always diagonal. Applications Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix. In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is a matrix X such that X⁻¹AX is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable. Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if AA* = A*A then there exists a unitary matrix U such that UAU* is diagonal). Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U*AV is diagonal with positive entries. 
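A short NumPy sketch of two points above, with arbitrary example values: multiplying by a diagonal matrix is the entrywise Hadamard product with its diagonal, and a real symmetric (hence normal) matrix is unitarily similar to a diagonal matrix of its eigenvalues.

```python
# Diagonal-matrix action as a Hadamard product, and diagonalization of a normal matrix.
import numpy as np

d = np.array([2.0, -1.0, 0.5])
v = np.array([3.0, 4.0, 8.0])
assert np.allclose(np.diag(d) @ v, d * v)      # D v equals the entrywise product d * v

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])                 # real symmetric, hence normal
eigenvalues, U = np.linalg.eigh(A)              # columns of U are orthonormal eigenvectors
assert np.allclose(U.T @ A @ U, np.diag(eigenvalues))   # A is diagonal in the eigenbasis
```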
Operator theory In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates—in the language of operators, an integral transform—which changes the basis to an eigenbasis of eigenfunctions: which makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation. Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function–the values of the function at each point correspond to the diagonal entries of a matrix. See also Anti-diagonal matrix Banded matrix Bidiagonal matrix Diagonally dominant matrix Diagonalizable matrix Jordan normal form Multiplication operator Tridiagonal matrix Toeplitz matrix Toral Lie algebra Circulant matrix Notes References Sources Matrix normal forms Sparse matrices
Diagonal matrix
[ "Mathematics" ]
1,680
[ "Matrices (mathematics)", "Sparse matrices", "Mathematical objects", "Combinatorics" ]
174,094
https://en.wikipedia.org/wiki/Microsoft%20Messenger%20service
Messenger (formerly MSN Messenger Service, .NET Messenger Service and Windows Live Messenger Service) was an instant messaging and presence system developed by Microsoft in 1999 for use with its MSN Messenger software. It was used by instant messaging clients including Windows 8, Windows Live Messenger, Microsoft Messenger for Mac, Outlook.com and Xbox Live. Third-party clients also connected to the service. It communicated using the Microsoft Notification Protocol, a proprietary instant messaging protocol. The service allowed anyone with a Microsoft account to sign in and communicate in real time with other people who were signed in as well. On January 11, 2013, Microsoft announced that they were retiring the existing Messenger service globally (except for mainland China where Messenger will continue to be available) and replacing it with Skype. In April 2013, Microsoft merged the service into Skype; existing users were able to sign into Skype with their existing accounts and access their contact list. As part of the merger, Skype's instant messaging functionality is now running on the backbone of the former Messenger service. Background Despite multiple name changes to the service and its client software over the years, the Messenger service is often referred to colloquially as "MSN", due to the history of MSN Messenger. The service itself was known as MSN Messenger Service from 1999 to 2001, at which time, Microsoft changed its name to .NET Messenger Service and began offering clients that no longer carried the "MSN" name, such as the Windows Messenger client included with Windows XP, which was originally intended to be a streamlined version of MSN Messenger, free of advertisements and integrated into Windows. Nevertheless, the company continued to offer more upgrades to MSN Messenger until the end of 2005, when all previous versions of MSN Messenger and Windows Messenger were superseded by a new program, Windows Live Messenger, as part of Microsoft's launch of its Windows Live online services. For several years, the official name for the service remained .NET Messenger Service, as indicated on its official network status web page, though Microsoft rarely used the name to promote the service. Because the main client used to access the service became known as Windows Live Messenger, Microsoft started referring to the entire service as the Windows Live Messenger Service in its support documentation in the mid-2000s. The service can integrate with the Windows operating system, automatically and simultaneously signing into the network as the user logs into their Windows account. Organizations can also integrate their Microsoft Office Communications Server and Active Directory with the service. In December 2011, Microsoft released an XMPP interface to the Messenger service. As part of a larger effort to rebrand many of its Windows Live services, Microsoft began referring to the service as simply Messenger in 2012. 
Software Official clients Microsoft offered the following instant messaging clients that connected to the Messenger service: Windows Live Messenger, for users of Windows 7 and previous versions MSN Messenger was the former name of the client from 1999 to 2006 Windows Messenger is a scaled-down client that was included with Windows XP in 2001 Microsoft Messenger for Mac, for users of Mac OS X Outlook.com includes web browser-based functionality for instant messaging Hotmail, the predecessor to Outlook.com, includes similar functionality for Messenger Windows Live Web Messenger was a web-based program for use through Internet Explorer MSN Web Messenger was the former name of the web-based client Windows 8, includes a built-in Messaging client Xbox Live includes access to the Messenger service from within the Xbox Dashboard MSN TV (formerly WebTV) had a built-in messaging client available on the original WebTV/MSN TV and MSN TV 2 devices, which was originally introduced via a Summer 2000 software update Messenger on Windows Phone includes access to the Messenger service from within a phone running Windows Phone Windows Live Messenger for iPhone and iPod Touch includes access to the Messenger service from within an iPhone, iPod Touch or iPad Windows Live Messenger for Nokia includes access to the Messenger service from within a Nokia phone Messenger Play! includes access to the Messenger service from within an Android phone or tablet Windows Live Messenger for BlackBerry includes access to the Messenger service from within a BlackBerry Security concerns A 2007 analysis of Messenger's Microsoft Notification Protocol, which is unencrypted, concluded that its design "did not follow several principles of designing secure systems", resulting in a "plethora of security vulnerabilities"; these vulnerabilities were demonstrated by successfully spoofing a user's identity. See also Microsoft Notification Protocol Comparison of instant messaging protocols Comparison of cross-platform instant messaging clients References External links MSN Messenger protocol documentation MSNPiki (protocol wiki) Skype replaces Microsoft Messenger for online calls .NET Instant messaging protocols Windows communication and services
Microsoft Messenger service
[ "Technology" ]
942
[ "Instant messaging", "Instant messaging protocols" ]
174,108
https://en.wikipedia.org/wiki/Abc%20conjecture
The abc conjecture (also known as the Oesterlé–Masser conjecture) is a conjecture in number theory that arose out of a discussion of Joseph Oesterlé and David Masser in 1985. It is stated in terms of three positive integers a, b and c (hence the name) that are relatively prime and satisfy a + b = c. The conjecture essentially states that the product of the distinct prime factors of abc is usually not much smaller than c. A number of famous conjectures and theorems in number theory would follow immediately from the abc conjecture or its versions. Mathematician Dorian Goldfeld described the abc conjecture as "The most important unsolved problem in Diophantine analysis". The abc conjecture originated as the outcome of attempts by Oesterlé and Masser to understand the Szpiro conjecture about elliptic curves, which involves more geometric structures in its statement than the abc conjecture. The abc conjecture was shown to be equivalent to the modified Szpiro's conjecture. Various attempts to prove the abc conjecture have been made, but none have gained broad acceptance. Shinichi Mochizuki claimed to have a proof in 2012, but the conjecture is still regarded as unproven by the mainstream mathematical community. Formulations Before stating the conjecture, the notion of the radical of an integer must be introduced: for a positive integer n, the radical of n, denoted rad(n), is the product of the distinct prime factors of n. For example, rad(16) = rad(2^4) = 2, rad(17) = 17, rad(18) = rad(2·3^2) = 2·3 = 6, and rad(1000000) = rad(2^6·5^6) = 2·5 = 10. If a, b, and c are coprime positive integers such that a + b = c, it turns out that "usually" c < rad(abc). The abc conjecture deals with the exceptions. Specifically, it states that: for every real number ε > 0, there exist only finitely many triples (a, b, c) of coprime positive integers, with a + b = c, such that c > rad(abc)^(1+ε). An equivalent formulation is: for every real number ε > 0, there exists a constant K_ε such that c < K_ε · rad(abc)^(1+ε) holds for all triples (a, b, c) of coprime positive integers with a + b = c. Equivalently (using the little o notation): c < rad(abc)^(1+o(1)) as c → ∞, over triples (a, b, c) of coprime positive integers with a + b = c. A fourth equivalent formulation of the conjecture involves the quality q(a, b, c) of the triple (a, b, c), which is defined as q(a, b, c) = log(c) / log(rad(abc)). For example: q(4, 127, 131) = log(131) / log(rad(4·127·131)) = log(131) / log(2·127·131) ≈ 0.46820, and q(3, 125, 128) = log(128) / log(rad(3·125·128)) = log(128) / log(30) ≈ 1.42657. A typical triple (a, b, c) of coprime positive integers with a + b = c will have c < rad(abc), i.e. q(a, b, c) < 1. Triples with q > 1 such as in the second example are rather special; they consist of numbers divisible by high powers of small prime numbers. The fourth formulation is: for every real number ε > 0, there exist only finitely many triples (a, b, c) of coprime positive integers with a + b = c such that q(a, b, c) > 1 + ε. Whereas it is known that there are infinitely many triples (a, b, c) of coprime positive integers with a + b = c such that q(a, b, c) > 1, the conjecture predicts that only finitely many of those have q > 1.01 or q > 1.001 or even q > 1.0001, etc. In particular, if the conjecture is true, then there must exist a triple (a, b, c) that achieves the maximal possible quality q(a, b, c). Examples of triples with small radical The condition that ε > 0 is necessary as there exist infinitely many triples a, b, c with c > rad(abc). For example, let a = 1, b = 2^(6n) − 1 and c = 2^(6n) for a positive integer n. The integer b is divisible by 9: b = 2^(6n) − 1 = 64^n − 1 is divisible by 64 − 1 = 63 = 9·7, and hence by 9. Using this fact, the following calculation is made: rad(abc) = rad(a)·rad(b)·rad(c) = 2·rad(b) ≤ 2·3·(b/9) = 2b/3 < 2c/3 < c. By replacing the exponent 6n with other exponents forcing b to have larger square factors, the ratio between the radical and c can be made arbitrarily small. Specifically, let p > 2 be a prime and consider a = 1, b = 2^(p(p−1)) − 1 and c = 2^(p(p−1)). Now it may be claimed that b is divisible by p^2: this follows from Fermat's little theorem, which shows that, for p > 2, 2^(p−1) = pk + 1 for some integer k. Raising both sides to the power of p then shows that 2^(p(p−1)) = p^2(...) + 1, so p^2 divides 2^(p(p−1)) − 1 = b. And now, with a similar calculation as above, the following results: rad(abc) = 2·rad(b) ≤ 2·p·(b/p^2) = 2b/p < 2c/p. Among the highest-quality triples known (triples with a particularly small radical relative to c), the highest quality, 1.6299, was found by Eric Reyssat for a = 2, b = 3^10·109 = 6436341, c = 23^5 = 6436343. Some consequences The abc conjecture has a large number of consequences. 
These include both known results (some of which have been proven separately only since the conjecture has been stated) and conjectures for which it gives a conditional proof. The consequences include: Roth's theorem on Diophantine approximation of algebraic numbers. The Mordell conjecture (already proven in general by Gerd Faltings). Equivalently, Vojta's conjecture in dimension 1. The Erdős–Woods conjecture allowing for a finite number of counterexamples. The existence of infinitely many non-Wieferich primes in every base b > 1. The weak form of Marshall Hall's conjecture on the separation between squares and cubes of integers. Fermat's Last Theorem has a famously difficult proof by Andrew Wiles. However it follows easily, at least for sufficiently large exponents, from an effective form of a weak version of the abc conjecture. The abc conjecture says the lim sup of the set of all qualities (defined above) is 1, which implies the much weaker assertion that there is a finite upper bound for qualities. The conjecture that 2 is such an upper bound suffices for a very short proof of Fermat's Last Theorem for all sufficiently large exponents. The Fermat–Catalan conjecture, a generalization of Fermat's Last Theorem concerning powers that are sums of powers. The L-function L(s, χ_d) formed with the Legendre symbol has no Siegel zero, given a uniform version of the abc conjecture in number fields, not just the abc conjecture as formulated above for rational integers. A polynomial P(x) has only finitely many perfect powers for all integers x if P has at least three simple zeros. A generalization of Tijdeman's theorem concerning the number of solutions of y^m = x^n + k (Tijdeman's theorem answers the case k = 1), and Pillai's conjecture (1931) concerning the number of solutions of Ay^m = Bx^n + k. Equivalently, the Granville–Langevin conjecture, that if f is a square-free binary form of degree n > 2, then for every real β > 2 there is a constant C(f, β) such that for all coprime integers x, y, the radical of f(x, y) exceeds C · max{|x|, |y|}^(n−β). All the polynomials (x^n − 1)/(x − 1) take infinitely many square-free values. Equivalently, the modified Szpiro conjecture, which would yield a bound of rad(abc)^(1.2+ε). It has been shown that the abc conjecture implies that the Diophantine equation n! + A = k^2 has only finitely many solutions for any given integer A. There are ~c_f·N positive integers n ≤ N for which f(n)/B' is square-free, where c_f > 0 is a positive constant depending on the polynomial f. The Beal conjecture, a generalization of Fermat's Last Theorem proposing that if A, B, C, x, y, and z are positive integers with A^x + B^y = C^z and x, y, z > 2, then A, B, and C have a common prime factor. The abc conjecture would imply that there are only finitely many counterexamples. Lang's conjecture, a lower bound for the height of a non-torsion rational point of an elliptic curve. A negative solution to the Erdős–Ulam problem on dense sets of Euclidean points with rational distances. An effective version of Siegel's theorem about integral points on algebraic curves. Theoretical results The abc conjecture implies that c can be bounded above by a near-linear function of the radical of abc. Bounds are known that are exponential. Specifically, the following bounds have been proven: c < exp(K_1 · rad(abc)^15), c < exp(K_2 · rad(abc)^(2/3 + ε)), and c < exp(K_3 · rad(abc)^(1/3) · (log rad(abc))^3). In these bounds, K_1 and K_3 are constants that do not depend on a, b, or c, and K_2 is a constant that depends on ε (in an effectively computable way) but not on a, b, or c. The bounds apply to any triple for which c > 2. 
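The radical and the quality q defined in the Formulations section are straightforward to compute directly. A minimal sketch in Python (not part of the original article); the triples used are the illustrative examples quoted above:

```python
from math import gcd, log

def rad(n: int) -> int:
    """Product of the distinct prime factors of n (the radical of n)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1 if p == 2 else 2
    return r * n if n > 1 else r

def quality(a: int, b: int, c: int) -> float:
    """q(a, b, c) = log(c) / log(rad(abc)) for a coprime triple with a + b = c."""
    assert a + b == c and gcd(a, b) == 1
    return log(c) / log(rad(a * b * c))

print(quality(4, 127, 131))             # ~0.4682, a "typical" triple with q < 1
print(quality(3, 125, 128))             # ~1.4266, a special triple with q > 1
print(quality(2, 3**10 * 109, 23**5))   # ~1.6299, Reyssat's record-quality triple
```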
There are also theoretical results that provide a lower bound on the best possible form of the abc conjecture. In particular, it has been shown that there are infinitely many triples (a, b, c) of coprime integers with a + b = c satisfying such a lower bound for all k < 4. The constant k was later improved to k = 6.068. Computational results In 2006, the Mathematics Department of Leiden University in the Netherlands, together with the Dutch Kennislink science institute, launched the ABC@Home project, a grid computing system, which aims to discover additional triples a, b, c with rad(abc) < c. Although no finite set of examples or counterexamples can resolve the abc conjecture, it is hoped that patterns in the triples discovered by this project will lead to insights about the conjecture and about number theory more generally. As of May 2014, ABC@Home had found 23.8 million triples. Note: the quality q(a, b, c) of the triple (a, b, c) is defined above. Refined forms, generalizations and related statements The abc conjecture is an integer analogue of the Mason–Stothers theorem for polynomials. A proposed strengthening states that in the abc conjecture one can replace rad(abc) by ε^(−ω)·rad(abc), where ω is the total number of distinct primes dividing a, b and c. Andrew Granville noticed where the minimum of the resulting function of ε > 0 occurs, and this inspired a sharper form of the abc conjecture involving an absolute constant κ. After some computational experiments an admissible value was found for κ. This version is called the "explicit abc conjecture". Related conjectures of Andrew Granville have also been described that would give upper bounds on c in terms of Ω(n), the total number of prime factors of n, and Θ(n), the number of integers up to n divisible only by primes dividing n. A more precise inequality has also been proposed. With k = rad(abc), it is conjectured that there is a constant C_1 such that a bound of this refined form holds for all triples, whereas there is a constant C_2 such that it is exceeded infinitely often. The n conjecture, a version of the abc conjecture involving n > 2 integers, has also been formulated. Claimed proofs Lucien Szpiro proposed a solution in 2007, but it was found to be incorrect shortly afterwards. Since August 2012, Shinichi Mochizuki has claimed a proof of Szpiro's conjecture and therefore the abc conjecture. He released a series of four preprints developing a new theory he called inter-universal Teichmüller theory (IUTT), which is then applied to prove the abc conjecture. The papers have not been widely accepted by the mathematical community as providing a proof of abc. This is not only because of their length and the difficulty of understanding them, but also because at least one specific point in the argument has been identified as a gap by some other experts. Although a few mathematicians have vouched for the correctness of the proof and have attempted to communicate their understanding via workshops on IUTT, they have failed to convince the number theory community at large. In March 2018, Peter Scholze and Jakob Stix visited Kyoto for discussions with Mochizuki. While they did not resolve the differences, they brought them into clearer focus. Scholze and Stix wrote a report asserting and explaining an error in the logic of the proof and claiming that the resulting gap was "so severe that ... small modifications will not rescue the proof strategy"; Mochizuki claimed that they misunderstood vital aspects of the theory and made invalid simplifications. 
On April 3, 2020, two mathematicians from the Kyoto research institute where Mochizuki works announced that his claimed proof would be published in Publications of the Research Institute for Mathematical Sciences, the institute's journal. Mochizuki is chief editor of the journal but recused himself from the review of the paper. The announcement was received with skepticism by Kiran Kedlaya and Edward Frenkel, as well as being described by Nature as "unlikely to move many researchers over to Mochizuki's camp". In March 2021, Mochizuki's proof was published in RIMS. See also List of unsolved problems in mathematics Notes References Sources External links ABC@home Distributed computing project called ABC@Home. Easy as ABC: Easy to follow, detailed explanation by Brian Hayes. Abderrahmane Nitaj's ABC conjecture home page Bart de Smit's ABC Triples webpage http://www.math.columbia.edu/~goldfeld/ABC-Conjecture.pdf The ABC's of Number Theory by Noam D. Elkies Questions about Number by Barry Mazur Philosophy behind Mochizuki’s work on the ABC conjecture on MathOverflow ABC Conjecture Polymath project wiki page linking to various sources of commentary on Mochizuki's papers. abc Conjecture Numberphile video News about IUT by Mochizuki Conjectures Abc conjecture Unsolved problems in number theory 1985 introductions Number theory
Abc conjecture
[ "Mathematics" ]
2,733
[ "Discrete mathematics", "Unsolved problems in mathematics", "Unsolved problems in number theory", "Conjectures", "Abc conjecture", "Mathematical problems", "Number theory" ]
174,230
https://en.wikipedia.org/wiki/Brownfield%20land
Brownfield is previously-developed land that has been abandoned or underutilized, and which may carry pollution, or a risk of pollution, from industrial use. The specific definition of brownfield land varies and is decided by policy makers and land developers within different countries. The main difference in definitions of whether a piece of land is considered a brownfield or not depends on the presence or absence of pollution. Overall, brownfield land is a site previously developed for industrial or commercial purposes and thus requires further development before reuse. Examples of post industrial brownfield sites include abandoned factories, dry cleaning establishments, and gas stations. Typical contaminants include hydrocarbon spillages, solvents and pesticides, asbestos, and heavy metals like lead. Many contaminated post-industrial brownfield sites sit unused because the cleaning costs may be more than the land is worth after redevelopment. Previously unknown underground wastes can increase the cost for study and clean-up. Depending on the contaminants and damage present adaptive re-use and disposal of a brownfield can require advanced and specialized appraisal analysis techniques. Definition Canada The Federal Government of Canada defines brownfields as "abandoned, idle or underutilized commercial or industrial properties [typically located in urban areas] where past actions have caused environmental contamination, but which still have potential for redevelopment or other economic opportunities." United States The U.S. Environmental Protection Agency (EPA) defined brownfield as a property where expansion, redevelopment or reuse may be complicated by the presence or potential presence of a hazardous substance, pollutant or contaminant. This comports well with an available general definition of the term, which scopes to "industrial or commercial property". The term brownfield first came into use on June 28, 1992, at a U.S. congressional field hearing hosted by the Northeast Midwest Congressional Coalition. Also in 1992, the first detailed policy analysis of the issue was convened by the Cuyahoga County, Ohio Planning Commission. EPA selected Cuyahoga County as its first brownfield pilot project in September 1993. The term applies more generally to previously used land or to sections of industrial or commercial facilities that are to be upgraded. In 2002, President George W. Bush signed the Small Business Liability Relief and Brownfields Revitalization Act (the "Brownfields Law") which provides grants and tools to local governments for the assessment, cleanup, and revitalization of brownfields as well as unique technical and program management experience, and public and environmental health expertise to individual brownfield communities. The motivation for this act was the success of the EPA's brownfields program, which it started in the 1990s in response to several court cases that caused lenders to redline contaminated property for fear of liability under the Superfund. As of September 2023, the EPA estimates that the EPA Brownfields program has resulted in 134,414 acres of land readied for reuse. Mothballed brownfields are properties that the owners are not willing to transfer or put to productive reuse. Brownfield status is a legal designation which places restrictions, conditions or incentives on redevelopment and use on the site. 
United Kingdom In the United Kingdom, brownfield land and previously developed land (PDL) have the same definition under the National Planning Policy Framework (NPPF). The government of the United Kingdom refers to them both as: "Land which is or was occupied by a permanent structure, including the curtilage of the developed land (although it should not be assumed that the whole of the curtilage should be developed) and any associated fixed surface infrastructure." They exclude land that: "is or has been occupied by agricultural or forestry buildings; has been developed for minerals extraction or waste disposal by landfill purposes where provision for restoration has been made through development control procedures; land in built-up areas such as private residential gardens, parks, recreation grounds and allotments; and land that was previously developed but where the remains of the permanent structure or fixed surface structure have blended into the landscape in the process of time." Locations and contaminants Generally, post-industrial brownfield sites exist in a city's or town's industrial section, on locations with abandoned factories or commercial buildings, or other previously polluting operations like steel mills, refineries or landfills. Small brownfields also may be found in older residential neighborhoods, as, for example, dry cleaning establishments or gas stations produced high levels of subsurface contaminants. Typical contaminants found on contaminated brownfield land include hydrocarbon spillages, solvents, pesticides, heavy metals such as lead (e.g., paints), tributyl tins, and asbestos. Old maps may assist in identifying areas to be tested. Brownfield status by country The primary issue facing all nations involved in attracting and sustaining new uses for brownfield sites is globalization of industry. This directly affects brownfield reuse, such as limiting the effective economic life of the use on the revitalized sites. Canada Canada has an estimated 200,000 "contaminated sites" across the nation. The federal inventory has listed about 23,078 federally recognized contaminated sites, from abandoned mines to airports, lighthouse stations, and military bases, which are classified as class 1, 2, or 3 depending on a contamination score, with 5,300 active contaminated sites, 2,300 suspected sites and 15,000 listed as closed because they had been remediated or because no action was necessary. The provincial governments have primary responsibility for brownfields. The provinces' legal mechanisms for managing risk are limited, as there are no tools such as "No Further Action" letters to give property owners finality and certainty in the cleanup and reuse process. Yet, Canada has cleaned up sites and attracted investment to contaminated lands such as the Moncton rail yards. A strip of the Texaco lands in Mississauga is slated to be part of the Waterfront Trail. However, Imperial Oil has no plans to sell the property, which has been vacant since the 1980s. According to their 2014 report on federally listed contaminated sites, the Parliamentary Budget Officer estimated that the "total liability for remediating Canada's contaminated sites reported in the public accounts [was] $4.9 billion." The report listed significant sites called the Big Five with a liability of $1.8 billion: Faro mine, Colomac Mine, Giant Mine, Cape Dyer-DEW line and Goose Bay Air Base. The Port Hope, Ontario site has a liability of $1 billion. 
Port Hope has the largest volume of historic low-level radioactive wastes in Canada, resulting from "radium and uranium processing in Port Hope between 1933 and 1988 by the former Crown corporation Eldorado Nuclear Limited and its private sector predecessors". By 2010 it was projected that the soil remediation project would cost well over a billion dollars; it was the largest such cleanup in Canadian history. The effort is projected to be complete in 2022. In July 2015, the $86,847,474 contract "to relocate the historic low-level radioactive waste and marginally contaminated soils from an existing waste management facility on the shoreline of Lake Ontario to the new, state-of-the-art facility about a kilometre north of the current site" was undertaken. There is also "$1.8 billion for general inventory sites" and "$200 million for other sites." The same report claimed the inventory currently lists 24,990 contaminated sites. While the federal government exercises some control over environmental protection, the "provincial and territorial governments issue the bulk of legislation regarding contaminated sites." Under the Shared-Responsibility Contaminated Sites Policy Framework (2005), the government may provide funding for the remediation of nonfederal sites, if the contamination is related to federal government activities or national security. See Natural Resources Canada (2012) Denmark While Denmark lacks the large land base which creates the magnitude of brownfield issues facing countries such as Germany and the U.S., brownfield sites in areas critical to the local economies of Denmark's cities require sophisticated solutions and careful interaction with affected communities. Examples include the cleanup and redevelopment of former and current ship building facilities along Copenhagen's historic waterfront. Laws in Denmark require a higher degree of coordination of planning and reuse than is found in many other countries. France In France, brownfields are called friches industrielles, and the Ministère de l'Écologie, du Développement Durable et de l'Énergie (MEDDE) maintains a database of polluted sites named BASOL, with "more than 4,000 sites"; the roughly 300,000 to 400,000 potentially polluted sites in total (around 100,000 ha) are recorded in a historical inventory named BASIAS, maintained by the Agence de l'Environnement et de la Maitrise de l'Energie (ADEME). Hong Kong Developing brownfield land is considered by the public to be one of the most popular ways to increase housing in Hong Kong. The Liber Research Community has found 1,521 hectares of brownfield land in Hong Kong, and has found that almost 90% of existing uses of the land could easily be moved into multi-story buildings, freeing up land that could be used efficiently for housing. In June 2021, Liber Research Community and Greenpeace East Asia collaborated and found a new total of 1,950 hectares of brownfield sites, 379 more hectares than the government was previously able to locate. Germany Germany loses greenfields at a rate of about 1.2 square kilometres per day for settlement and transportation infrastructure. Each of the approximately 14,700 local municipalities is empowered to allocate lands for industrial and commercial use. Local control over reuse decisions for German brownfield sites is a critical factor. Industrial sites tend to be remote due to zoning laws, and incur costly overhead for providing infrastructure such as utilities, disposal services and transportation. In 1989, a brownfield of the Ruhrgebiet became Emscher Park. 
United Kingdom In the UK, centuries of industrial use of lands which once formed the birthplace of the Industrial Revolution have left entire regions in a brownfield status. There are legal and fiscal incentives for brownfield redevelopment. Remediation laws are centered on the premise that the remediation should leave land safe and suitable for its current or intended use. In 2018, the Campaign to Protect Rural England (CPRE) reported that the 17,656 sites (covering over 28,000 hectares of land) identified by English local planning authorities on their Brownfield Land Registers would provide enough land for a minimum of 1 million homes, which could rise to over 1.1 million once all registers are published. The registers contain land that is available for redevelopment so is a small subset of all land that would be considered brownfield. There is also brownfield capacity in areas in which the green belt is in danger, for example in Northwest England, where local authorities have identified enough brownfield land to provide for 12 years of housing demand. The UK government has recognised the ecological importance of brownfield sites and has afforded some protection to such habitats through the United Kingdom Biodiversity Action Plan. The Creekside Discovery Centre in Deptford, London is an urban wildlife centre encompassing brownfield habitats. United States United States estimates suggest there are over 500,000 brownfield sites contaminated at levels below the Superfund caliber (the most contaminated) in the country. While historic land use patterns created contaminated sites, the Superfund law has been criticized as creating the brownfield phenomenon where investment moves to greenfields for new development due to severe, no-fault liability schemes and other disincentives. The Clinton-Gore administration and US EPA launched a series of brownfield policies and programs in 1993 to tackle this problem. Redevelopment Valuation and financing Acquisition, adaptive re-use, and disposal of a brownfield site requires advanced and specialized appraisal analysis techniques. For example, the highest and best use of the brownfield site may be affected by the contamination, both before and after remediation. Additionally, the value should take into account residual stigma and potential for third-party liability. Normal appraisal techniques frequently fail, and appraisers must rely on more advanced techniques, such as contingent valuation, case studies, or statistical analyses. A 2011 University of Delaware study has suggested a 17.5:1 return on dollars invested on brownfield redevelopment. A 2014 study of EPA brownfield cleanup grants from 2002 through 2008 found an average benefit value of almost $4 million per brownfield site (with a median of $2,117,982). To expedite the cleanup of brownfield sites in the US, some environmental firms have teamed up with insurance companies to underwrite the cleanup and provide a guaranteed cleanup cost to limit land developers' exposure to environmental remediation costs and pollution lawsuits. The environmental firm first performs an extensive investigation generally in the form of desk studies and potentially further intrusive investigation. Remediation strategies Innovative remediation techniques used at distressed brownfields in recent years include in situ thermal remediation, bioremediation and in situ oxidation. Often, these strategies are used in conjunction with each other or with other remedial strategies such as soil vapor extraction. 
In this process, vapor from the soil phase is extracted from soils and treated, which has the effect of removing contaminants from the soils and groundwater beneath a site. Binders can be added to contaminated soil to prevent chemical leaching. Some brownfields with heavy metal contamination have even been cleaned up through an innovative approach called phytoremediation, which uses deep-rooted plants to take up metals from the soil into the plant structure as the plant grows. After they reach maturity, the plants – which now contain the heavy metal contaminants in their tissues – are removed and disposed of as hazardous waste. Research is under way to see if some brownfields can be used to grow crops, specifically for the production of biofuels. Michigan State University, in collaboration with DaimlerChrysler and NextEnergy, has small plots of soybean, corn, canola, and switchgrass growing in a former industrial dump site in Oakland County, Michigan. The intent is to see if the plants can serve two purposes simultaneously: assist with phytoremediation, and contribute to the economical production of biodiesel and/or ethanol fuel. The regeneration of brownfields in the United Kingdom and in other European countries has gained prominence due to greenfield land restrictions as well as their potential to promote the urban renaissance. Development of brownfield sites also presents an opportunity to reduce the environmental impact on communities, and considerable assessments need to take place in order to evaluate the size of this opportunity. Barriers Many contaminated brownfield sites sit unused for decades because the cost of cleaning them to safe standards is more than the land would be worth after redevelopment, in the process becoming involuntary parks as they grow over. However, redevelopment has become more common in the first decade of the 21st century, as developable land has become less available in highly populated areas, even though brownfields carry an environmental stigma which can delay redevelopment. Also, the methods of studying contaminated land have become more sophisticated and costly. Some states and localities have spent considerable money assessing the contamination on local brownfield sites, to quantify the cleanup costs in an effort to move the redevelopment process forward. Therefore, federal and state programs have been developed to help developers interested in cleaning up brownfield sites and restoring them to practical uses. In the process of cleaning contaminated brownfield sites, previously unknown underground storage tanks, buried drums or buried railroad tank cars containing wastes are sometimes encountered. Unexpected circumstances increase the cost for study and clean-up. As a result, the cleanup work may be delayed or stopped entirely. To avoid unexpected contamination and increased costs, many developers insist that a site be thoroughly investigated (via a Phase II Site Investigation or Remedial Investigation) prior to commencing remedial cleanup activities. Post-redevelopment uses Commercial and residential The Atlantic Station project in Atlanta was the largest brownfield redevelopment in the United States. Dayton, like many other cities in the region, is developing Tech Town in order to attract technology-based firms to Dayton and revitalize the downtown area. In Homestead, Pennsylvania, the site once occupied by Carnegie Steel has been converted into a successful commercial center, The Waterfront. 
Pittsburgh, Pennsylvania, has successfully converted numerous former steel mill sites into high-end residential, shopping, and office developments. Examples of brownfield redevelopment in Pittsburgh include: In Pittsburgh's Squirrel Hill neighborhood, a former slag dump for steel mills was turned into a $243 million residential development called Summerset at Frick Park. In Pittsburgh's South Side neighborhood, a former LTV Steel mill site was transformed into Southside Works, a mixed-use development that includes high-end entertainment, retail, offices, and housing. In the Hazelwood (Pittsburgh) neighborhood, a former Jones and Laughlin steel mill site was transformed into a $104 million office park called Pittsburgh Technology Center. On Herr's Island, an island on the western bank of the Allegheny River, a former rail stop for livestock and meatpacking was transformed into Washington's Landing, a waterfront center for commerce, manufacturing, recreation and upscale housing. Solar landfill A solar landfill is a former landfill that has been repurposed and converted into a solar farm (an array of solar panels). Regulation United States In the United States, brownfield regulation and development is governed mainly by state environmental agencies in cooperation with the Environmental Protection Agency (EPA). In 1995, the EPA launched the Brownfields Program, which was expanded in 2002 with the Brownfields Law. The EPA and local and national governments can provide technical help and some funding for assessment and cleanup. From 2002 through 2013, the EPA awarded nearly 1,000 clean-up grants for almost $190 million. It can also provide tax incentives for cleanup that is not paid for outright; specifically, cleanup costs are fully tax-deductible in the year they are incurred. Many of the most important provisions on liability relief are contained in state codes that can differ significantly from state to state. United Kingdom In the United Kingdom, regulation of contaminated land comes from Part IIA of the Environmental Protection Act 1990; responsibility falls on local authorities to create a "contaminated land register". For sites with dubious past and present uses, the Local Planning Authority may ask for a desktop study, which is sometimes implemented as a condition in planning applications. However, by definition, land that is derelict or underused is highly unlikely to be determined as contaminated land, since such a determination is made primarily on the basis of risks to human health. The key regulation of brownfield land is through the land use planning system when a new land use is being considered. See also Greenfield project Brockton Brightfield (brownfield turned into a solar power plant) Greyfield land HUD USER Industrial nature Love Canal Redevelopment of Mumbai mills (unused mills being re-developed) Regulatory Barriers Clearinghouse Small Business Liability Relief and Brownfields Revitalization Act Waste (law) Urban renewal Vapor Intrusion References Further reading External links United States EPA Brownfields Homepage Parents Demand Curbs on Schools Built on Contaminated Land Photographs of French Brownfields. Photographs of German Brownfields. National Brownfields Conference cosponsored by the U.S. 
EPA and ICMA From Industrial Wasteland to Community Park From Brownfield to Greenfield: A New Working Landscape for Wellesley College Wrenched from its Toxic Past The Brownfields Center at Carnegie Mellon University Browninfo Methodology and Software for Development of Interactive Brownfield Databases Soil contamination Town and country planning in the United Kingdom Urban decay Urban studies and planning terminology
Brownfield land
[ "Chemistry", "Environmental_science" ]
3,980
[ "Environmental chemistry", "Soil contamination" ]
174,232
https://en.wikipedia.org/wiki/Shiga%20toxin
Shiga toxins are a family of related toxins with two major groups, Stx1 and Stx2, expressed by genes considered to be part of the genome of lambdoid prophages. The toxins are named after Kiyoshi Shiga, who first described the bacterial origin of dysentery caused by Shigella dysenteriae. Shiga-like toxin (SLT) is a historical term for similar or identical toxins produced by Escherichia coli. The most common sources for Shiga toxin are the bacteria S. dysenteriae and some serotypes of Escherichia coli (shigatoxigenic or STEC), which include serotypes O157:H7, and O104:H4. Nomenclature Microbiologists use many terms to describe Shiga toxin and differentiate more than one unique form. Many of these terms are used interchangeably. Shiga toxin type 1 and type 2 (Stx-1 and 2) are the Shiga toxins produced by some E. coli strains. Stx-1 is identical to Stx of Shigella spp. or differs by only one amino acid. Stx-2 shares 55% amino acid homology with Stx-1. Cytotoxins – an archaic denotation for Stx – is used in a broad sense. Verocytotoxins/verotoxins – a seldom-used term for Stx – is from the hypersensitivity of Vero cells to Stx. The term Shiga-like toxins is another antiquated term which arose prior to the understanding that Shiga and Shiga-like toxins were identical. History The toxin is named after Kiyoshi Shiga, who discovered S. dysenteriae in 1897. In 1977, researchers in Ottawa, Ontario discovered the Shiga toxin normally produced by Shigella dysenteriae in a line of E. coli. The E. coli version of the toxin was named "verotoxin" because of its ability to kill Vero cells (African green monkey kidney cells) in culture. Shortly after, the verotoxin was referred to as Shiga-like toxin because of its similarities to Shiga toxin. It has been suggested by some researchers that the gene coding for Shiga-like toxin comes from a toxin-converting lambdoid bacteriophage, such as H-19B or 933W, inserted into the bacteria's chromosome via transduction. Phylogenetic studies of the diversity of E. coli suggest that it may have been relatively easy for Shiga toxin to transduce into certain strains of E. coli, because Shigella is itself a subgenus of Escherichia; in fact, some strains traditionally considered E. coli (including those that produce this toxin) in fact belong to this lineage. Being closer relatives of Shigella dysenteriae than of the typical E. coli, it is not at all unusual that toxins similar to that of S. dysenteriae are produced by these strains. As microbiology advances, the historical variation in nomenclature (which arose because of gradually advancing science in multiple places) is increasingly giving way to recognizing all of these molecules as "versions of the same toxin" rather than "different toxins". Transmission The toxin requires highly specific receptors on the cells' surface in order to attach and enter the cell; species such as cattle, swine, and deer which do not carry these receptors may harbor toxigenic bacteria without any ill effect, shedding them in their feces, from where they may be spread to humans. Clinical significance Symptoms of Shiga toxin ingestion include abdominal pain as well as watery diarrhea. Severe life-threatening cases are characterized by hemorrhagic colitis (HC). The toxin is associated with hemolytic-uremic syndrome. In contrast, Shigella species may also produce shigella enterotoxins, which are the cause of dysentery. 
The toxin is effective against small blood vessels, such as found in the digestive tract, the kidney, and lungs, but not against large vessels such as the arteries or major veins. A specific target for the toxin appears to be the vascular endothelium of the glomerulus. This is the filtering structure that is a key to the function of the kidney. Destroying these structures leads to kidney failure and the development of the often deadly and frequently debilitating hemolytic uremic syndrome. Food poisoning with Shiga toxin often also has effects on the lungs and the nervous system. Structure and mechanism Mechanism The B subunits of the toxin bind to a component of the cell membrane known as glycolipid globotriaosylceramide (Gb3). Binding of the subunit B to Gb3 causes induction of narrow tubular membrane invaginations, which drives formation of inward membrane tubules for toxin-receptor complex uptake into the cell. These tubules are essential for uptake into the host cell. The Shiga toxin (a non-pore forming toxin) is transferred to the cytosol via Golgi network and endoplasmic reticulum (ER). From the Golgi toxin is trafficked to the ER. It is then processed through cleavage by a furin-like protease to separate the A1 subunit. Some toxin-receptor complexes reportedly bypass these steps and are transported to the nucleus rather than the cytosol, with unknown effects. Shiga toxins act to inhibit protein synthesis within target cells by a mechanism similar to that of the infamous plant toxin ricin. After entering a cell via a macropinosome, the payload (A subunit) cleaves a specific adenine nucleobase from the 28S RNA of the 60S subunit of the ribosome, thereby halting protein synthesis. As they mainly act on the lining of the blood vessels, the vascular endothelium, a breakdown of the lining and hemorrhage eventually occurs. The bacterial Shiga toxin can be used for targeted therapy of gastric cancer, because this tumor entity expresses the receptor of the Shiga toxin. For this purpose an unspecific chemotherapeutical is conjugated to the B-subunit to make it specific. In this way only the tumor cells, but not healthy cells, are destroyed during therapy. Structure The toxin has two subunits—designated A (mol. wt. 32000 Da) and B (mol. wt. 7700 Da)—and is one of the AB5 toxins. The B subunit is a pentamer that binds to specific glycolipids on the host cell, specifically globotriaosylceramide (Gb3). Following this, the A subunit is internalised and cleaved into two parts. The A1 component then binds to the ribosome, disrupting protein synthesis. Stx-2 has been found to be about 400 times more toxic (as quantified by LD50 in mice) than Stx-1. Gb3 is, for unknown reasons, present in greater amounts in renal epithelial tissues, to which the renal toxicity of Shiga toxin may be attributed. Gb3 is also found in central nervous system neurons and endothelium, which may lead to neurotoxicity. Stx-2 is also known to increase the expression of its receptor GB3 and cause neuronal dysfunctions. See also 2011 German E. coli outbreak Cholera toxin Enterotoxin Pertussis toxin References External links UniprotKB entries: stxA1 , stxB1 , stxA2 , stxB2 "Shigella" in Todar's Online Textbook of Bacteriology AB5 toxins Bacterial toxins Biological toxin weapons Ribosome-inactivating proteins Invertebrate toxins Microbiology
Shiga toxin
[ "Chemistry", "Biology" ]
1,640
[ "Microscopy", "Microbiology", "Biological toxin weapons", "Chemical weapons" ]
174,238
https://en.wikipedia.org/wiki/Isotope%20analysis
Isotope analysis is the identification of the isotopic signature, the abundance of certain stable isotopes of chemical elements, within organic and inorganic compounds. Isotopic analysis can be used to understand the flow of energy through a food web, to reconstruct past environmental and climatic conditions, to investigate human and animal diets, for food authentication, and to study a variety of other physical, geological, palaeontological and chemical processes. Stable isotope ratios are measured using mass spectrometry, which separates the different isotopes of an element on the basis of their mass-to-charge ratio. Tissues affected Isotopic oxygen is incorporated into the body primarily through ingestion, at which point it is used in the formation of, for archaeological purposes, bones and teeth. The oxygen is incorporated into the hydroxylcarbonic apatite of bone and tooth enamel. Bone is continually remodelled throughout the lifetime of an individual. Although the rate of turnover of isotopic oxygen in hydroxyapatite is not fully known, it is assumed to be similar to that of collagen: approximately 10 years. Consequently, should an individual remain in a region for 10 years or longer, the isotopic oxygen ratios in the bone hydroxyapatite would reflect the isotopic oxygen ratios present in that region. Teeth are not subject to continual remodelling and so their isotopic oxygen ratios remain constant from the time of formation. The isotopic oxygen ratios of teeth, then, represent the ratios of the region in which the individual was born and raised. Where deciduous teeth are present, it is also possible to determine the age at which a child was weaned. Breast milk production draws upon the body water of the mother, which has higher levels of 18O due to the preferential loss of 16O through sweat, urine, and expired water vapour. While teeth are more resistant to chemical and physical changes over time, both are subject to post-depositional diagenesis. As such, isotopic analysis makes use of the more resistant phosphate groups, rather than the less abundant hydroxyl group or the more likely diagenetic carbonate groups present. Applications Isotope analysis has widespread applicability in the natural sciences. These include numerous applications in the biological, earth and environmental sciences. Archaeology Reconstructing ancient diets Archaeological materials, such as bone, organic residues, hair, or sea shells, can serve as substrates for isotopic analysis. Carbon, nitrogen and zinc isotope ratios are used to investigate the diets of past people; these isotopic systems can be used with others, such as strontium or oxygen, to answer questions about population movements and cultural interactions, such as trade. Carbon isotopes are analysed in archaeology to determine the source of carbon at the base of the food chain. Examining the 12C/13C isotope ratio, it is possible to determine whether animals and humans ate predominantly C3 or C4 plants. Potential C3 food sources include wheat, rice, tubers, fruits, nuts and many vegetables, while C4 food sources include millet and sugar cane. Carbon isotope ratios can also be used to distinguish between marine, freshwater, and terrestrial food sources. Carbon isotope ratios can be measured in bone collagen or bone mineral (hydroxylapatite), and each of these fractions of bone can be analysed to shed light on different components of diet. 
The carbon in bone collagen is predominantly sourced from dietary protein, while the carbon found in bone mineral is sourced from all consumed dietary carbon, including carbohydrates, lipids, and protein. Nitrogen isotopes can be used to infer soil conditions, with enriched δ15N used to infer the addition of manure. A complication is that enrichment also occurs as a result of environmental factors, such as wetland denitrification, salinity, aridity, microbes, and clearance. δ13C and δ15N measurements on medieval manor soils have shown that stable isotopes can differentiate between crop cultivation and grazing activities, revealing land use types such as cereal production and the presence of fertilization practices at historical sites. To obtain an accurate picture of palaeodiets, it is important to understand processes of diagenesis that may affect the original isotopic signal. It is also important for the researcher to know the variations of isotopes within individuals, between individuals, and over time. Sourcing archaeological materials Isotope analysis has been particularly useful in archaeology as a means of characterization. Characterization of artifacts involves determining the isotopic composition of possible source materials such as metal ore bodies and comparing these data to the isotopic composition of analyzed artifacts. A wide range of archaeological materials such as metals, glass and lead-based pigments have been sourced using isotopic characterization. Particularly in the Bronze Age Mediterranean, lead isotope analysis has been a useful tool for determining the sources of metals and an important indicator of trade patterns. Interpretation of lead isotope data is, however, often contentious and faces numerous instrumental and methodological challenges. Issues such as the mixing and re-use of metals from different sources, limited reliable data, and contamination of samples can make interpretation difficult. Ecology All biologically active elements exist in a number of different isotopic forms, of which two or more are stable. For example, most carbon is present as 12C, with approximately 1% being 13C. The ratio of the two isotopes may be altered by biological and geophysical processes, and these differences can be utilized in a number of ways by ecologists. The main elements used in isotope ecology are carbon, nitrogen, oxygen, hydrogen and sulfur, but also include silicon, iron, and strontium. Stable isotope analysis in aquatic ecosystems Stable isotopes have become a popular method for understanding aquatic ecosystems because they can help scientists in understanding source links and process information in marine food webs. These analyses can also be used to a certain degree in terrestrial systems. Certain isotopes can signify distinct primary producers forming the bases of food webs and trophic level positioning. The stable isotope compositions are expressed in terms of delta values (δ) in permil (‰), i.e. parts per thousand differences from a standard. They express the proportion of an isotope that is in a sample. The values are expressed as: δX = [(R_sample / R_standard) − 1] × 1000 where X represents the isotope of interest (e.g., 13C) and R represents the ratio of the isotope of interest to its more common natural form (e.g., 13C/12C). Higher (or less negative) delta values indicate increases in a sample's isotope of interest, relative to the standard, and lower (or more negative) values indicate decreases. 
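The delta notation above is simple to apply in code. A minimal sketch in Python; the reference ratio used here is only a commonly quoted illustrative value, and the sample value is invented for the example:

```python
def delta_permil(r_sample: float, r_standard: float) -> float:
    """delta = (R_sample / R_standard - 1) * 1000, expressed in permil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative numbers only: an approximate 13C/12C ratio for the VPDB standard
# (assumed value, not from the text) and a hypothetical sample depleted in 13C,
# as is typical of C3 plant tissue.
R_STANDARD = 0.011180
r_sample = R_STANDARD * (1.0 - 0.027)
print(round(delta_permil(r_sample, R_STANDARD), 1))   # -27.0
```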
The standard reference materials for carbon, nitrogen, and sulfur are Pee Dee Belemnite limestone, nitrogen gas in the atmosphere, and the Cañon Diablo meteorite, respectively. Analysis is usually done using a mass spectrometer, detecting small differences between gaseous elements. Analysis of a sample can cost anywhere from $30 to $100. Stable isotopes assist scientists in analyzing animal diets and food webs by examining the animal tissues that bear a fixed isotopic enrichment or depletion vs. the diet. Muscle or protein fractions have become the most common animal tissue used to examine the isotopes because they represent the nutrients assimilated from the diet. The main advantage to using stable isotope analysis as opposed to stomach content observations is that no matter what the status is of the animal's stomach (empty or not), the isotope tracers in the tissues will give us an understanding of its trophic position and food source. The three major isotopes used in aquatic ecosystem food web analysis are 13C, 15N and 34S. While all three carry information on trophic dynamics, it is common to perform analysis on at least two of the previously mentioned three isotopes for better understanding of marine trophic interactions and for stronger results. Hydrogen-2 The ratio of 2H, also known as deuterium, to 1H has been studied in both plant and animal tissue. Hydrogen isotopes in plant tissue are correlated with local water values but vary based on fractionation during photosynthesis, transpiration, and other processes in the formation of cellulose. A study on the isotope ratios of tissues from plants growing within a small area in Texas found tissues from CAM plants were enriched in deuterium relative to C4 plants. Hydrogen isotope ratios in animal tissue reflect diet, including drinking water, and have been used to study bird migration and aquatic food webs. Carbon-13 Carbon isotopes aid us in determining the primary production source responsible for the energy flow in an ecosystem. The transfer of 13C through trophic levels remains relatively the same, except for a small increase (an enrichment < 1 ‰). Large differences of δ13C between animals indicate that they have different food sources or that their food webs are based on different primary producers (i.e. different species of phytoplankton, marsh grasses). Because δ13C indicates the original source of primary producers, the isotopes can also help us determine shifts in diets, both short term, long term or permanent. These shifts may even correlate to seasonal changes, reflecting phytoplankton abundance. Scientists have found that there can be wide ranges of δ13C values in phytoplankton populations over a geographic region. While it is not quite certain as to why this may be, there are several hypotheses for this occurrence. These include the possibility that isotopes within dissolved inorganic carbon (DIC) pools may vary with temperature and location, and that growth rates of phytoplankton may affect their uptake of the isotopes. δ13C has been used in determining migration of juvenile animals from sheltered inshore areas to offshore locations by examining the changes in their diets. A study by Fry (1983) examined the isotopic compositions of juvenile shrimp in south Texas grass flats. Fry found that at the beginning of the study the shrimp had isotopic values of δ13C = -11 to -14‰ and 6-8‰ for δ15N and δ34S. 
As the shrimp matured and migrated offshore, the isotopic values changed to those resembling offshore organisms (δ13C= -15‰ and δ15N = 11.5‰ and δ34S = 16‰). Sulfur-34 While there is no enrichment of 34S between trophic levels, the stable isotope can be useful in distinguishing benthic vs. pelagic producers and marsh vs. phytoplankton producers. Similar to 13C, it can also help distinguish between different phytoplankton as the key primary producers in food webs. The differences between seawater sulfates and sulfides (c. 21‰ vs -10‰) aid scientists in the discriminations. Sulfur tends to be more plentiful in less aerobic areas, such as benthic systems and marsh plants, than the pelagic and more aerobic systems. Thus, in the benthic systems, there are smaller δ34S values. Nitrogen-15 Nitrogen isotopes indicate the trophic level position of organisms (reflective of the time the tissue samples were taken). There is a larger enrichment component with δ15N because its retention is higher than that of 14N. This can be seen by analyzing the waste of organisms. Cattle urine has shown that there is a depletion of 15N relative to the diet. As organisms eat each other, the 15N isotopes are transferred to the predators. Thus, organisms higher in the trophic pyramid have accumulated higher levels of 15N ( and higher δ15N values) relative to their prey and others before them in the food web. Numerous studies on marine ecosystems have shown that on average there is a 3.2‰ enrichment of 15N vs. diet between different trophic level species in ecosystems. In the Baltic sea, Hansson et al. (1997) found that when analyzing a variety of creatures (such as particulate organic matter (phytoplankton), zooplankton, mysids, sprat, smelt and herring,) there was an apparent fractionation of 2.4‰ between consumers and their apparent prey. In addition to trophic positioning of organisms, δ15N values have become commonly used in distinguishing between land derived and natural sources of nutrients. As water travels from septic tanks to aquifers, the nitrogen rich water is delivered into coastal areas. Waste-water nitrate has higher concentrations of 15N than the nitrate that is found in natural soils in near shore zones. For bacteria, it is more convenient for them to uptake 14N as opposed to 15N because it is a lighter element and easier to metabolize. Thus, due to bacteria's preference when performing biogeochemical processes such as denitrification and volatilization of ammonia, 14N is removed from the water at a faster rate than 15N, resulting in more 15N entering the aquifer. 15N is roughly 10-20‰ as opposed to the natural 15N values of 2-8‰. The inorganic nitrogen that is emitted from septic tanks and other human-derived sewage is usually in the form of NH4+. Once the nitrogen enters the estuaries via groundwater, it is thought that because there is more 15N entering, that there will also be more 15N in the inorganic nitrogen pool delivered and that it is picked up more by producers taking up N. Even though 14N is easier to take up, because there is much more 15N, there will still be higher amounts assimilated than normal. These levels of δ15N can be examined in creatures that live in the area and are non migratory (such as macrophytes, clams and even some fish). This method of identifying high levels of nitrogen input is becoming a more and more popular method in attempting to monitor nutrient input into estuaries and coastal ecosystems. 
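Per-trophic-level enrichment factors such as the ~3.2‰ average for 15N quoted above are often used to estimate an organism's trophic position relative to a baseline species. A minimal sketch in Python; the formula, the baseline level, and the numbers are illustrative assumptions, not values taken from the studies cited above:

```python
def trophic_position(d15n_consumer: float, d15n_baseline: float,
                     enrichment_per_level: float = 3.2,
                     baseline_level: float = 2.0) -> float:
    """Estimate trophic position from nitrogen isotope delta values (in permil).

    baseline_level is the assumed trophic level of the baseline organism,
    e.g. 2.0 for a primary consumer such as a filter-feeding bivalve.
    """
    return baseline_level + (d15n_consumer - d15n_baseline) / enrichment_per_level

# Hypothetical values: a fish at 14.1 permil measured against mussels at 7.7 permil.
print(trophic_position(14.1, 7.7))   # 2 + 6.4 / 3.2 = 4.0
```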
Environmental managers have become more and more concerned about measuring anthropogenic nutrient inputs into estuaries because excess in nutrients can lead to eutrophication and hypoxic events, eliminating organisms from an area entirely. Oxygen-18 Analysis of the ratio of 18O to 16O in the shells of the Colorado Delta clam was used to assess the historical extent of the estuary in the Colorado River Delta prior to construction of upstream dams. Forensic science A recent development in forensic science is the isotopic analysis of hair strands. Hair has a recognisable growth rate of 9-11mm per month or 15 cm per year. Human hair growth is primarily a function of diet, especially drinking water intake. The stable isotopic ratios of drinking water are a function of location, and the geology that the water percolates through. 87Sr, 88Sr and oxygen isotope variations are different all over the world. These differences in isotopic ratio are then biologically 'set' in our hair as it grows and it has therefore become possible to identify recent geographic histories by the analysis of hair strands. For example, it could be possible to identify whether a terrorist suspect had recently been to a particular location from hair analysis. This hair analysis is a non-invasive method which is becoming very popular in cases that DNA or other traditional means are bringing no answers. Isotope analysis can be used by forensic investigators to determine whether two or more samples of explosives are of a common origin. Most high explosives contain carbon, hydrogen, nitrogen and oxygen atoms and thus comparing their relative abundances of isotopes can reveal the existence of a common origin. Researchers have also shown that analysis of the 12C/13C ratios can locate the country of origin for a given explosive. Stable isotopic analysis has also been used in the identification of drug trafficking routes. Isotopic abundances are different in morphine grown from poppies in south-east Asia versus poppies grown in south-west Asia. The same is applied to cocaine that is derived from Bolivia and that from Colombia. Traceability Stable isotopic analysis has also been used for tracing the geographical origin of food, timber, and in tracing the sources and fates of nitrates in the environment. Geology Hydrology In isotope hydrology, stable isotopes of water (2H and 18O) are used to estimate the source, age, and flow paths of water flowing through ecosystems. The main effects that change the stable isotope composition of water are evaporation and condensation. Variability in water isotopes is used to study sources of water to streams and rivers, evaporation rates, groundwater recharge, and other hydrological processes. Paleoclimatology The ratio of 18O to 16O in ice and deep sea cores is temperature dependent, and can be used as a proxy measure for reconstructing climate change. During colder periods of the Earth's history (glacials) such as during the ice ages, 16O is preferentially evaporated from the colder oceans, leaving the slightly heavier and more sluggish 18O behind. Organisms such as foraminifera which combine oxygen dissolved in the surrounding water with carbon and calcium to build their shells therefore incorporate the temperature-dependent 18O to 16O ratio. When these organisms die, they settle out on the sea bed, preserving a long and invaluable record of global climate change through much of the Quaternary. 
Similarly, ice cores on land are enriched in the heavier 18O relative to 16O during warmer climatic phases (interglacials) as more energy is available for the evaporation of the heavier 18O isotope. The oxygen isotope record preserved in the ice cores is therefore a "mirror" of the record contained in ocean sediments. Oxygen isotopes preserve a record of the effects of the Milankovitch cycles on climate change during the Quaternary, revealing an approximately 100,000-year cyclicity in the Earth's climate. References External links MixSIAR. MixSIAR is an R package that helps you create and run Bayesian mixing models to analyze biotracer data (i.e. stable isotopes, fatty acids), following the MixSIAR model framework. Both graphical user interface (GUI) and script versions are available. Stock, B.C., Jackson, A.L., Ward, E.J., Parnell, A.C., Phillips, D.L., Semmens, B.X. Associated peer-reviewed research paper. IsoSource. Stable isotope mixing model for an excess number of sources (Visual Basic), (Phillips and Gregg, 2003). SIAR - Stable isotope analysis in R.. Bayesian mixing model package for the R environment. Parnell, A., Inger, R., Bearhop, S., Jackson, A. SISUS: Stable Isotope Sourcing using Sampling. Stable Isotope Sourcing using Sampling (SISUS) (Erhardt, Wolf, and Bedrick, In Prep.) provides a more efficient algorithm to provide solutions to the same problem as the Phillips and Gregg (2003) IsoSource model and software for source partitioning using stable isotopes. Isotopes
Isotope analysis
[ "Physics", "Chemistry" ]
3,957
[ "Isotopes", "Nuclear physics" ]
14,673,643
https://en.wikipedia.org/wiki/BMC%20Systems%20Biology
BMC Systems Biology was an open access peer-reviewed scientific journal that covered research in systems biology. Filling a gap in what was a new research field, the journal was established in 2007 and is published by BioMed Central. Part of the BMC Series of journals, it had a broad scope covering the engineering of biological systems, network modelling, quantitative analyses, integration of different levels of information and synthetic biology. In January 2019, the Editorial Board was informed that the journal was closing and no more submissions would be accepted after March 1. The last articles were published on 5 April 2019, but content is still archived in perpetuity from the homepage and PubMed Central. Scope and Coverage BMC Systems Biology focused on a wide range of topics within systems biology, including but not limited to: Engineering of biological systems Network modelling Quantitative analyses Integration of different levels of information Synthetic biology The journal provided a platform for the dissemination of significant research findings in the area of systems biology, aiming to bridge the gap between biological research and mathematical modelling. Notable Articles and Research Several significant studies were published in the journal, contributing to the advancement of systems biology. Some notable research includes: "A quantitative systems pharmacology (QSP) model for Pneumocystis treatment in mice" "Network-based characterization of drug-protein interaction signatures with a space-efficient approach" "Boolean network modeling of β-cell apoptosis and insulin resistance in type 2 diabetes mellitus" Impact and Legacy The journal's impact factor in 2018 was 2.048, reflecting its influence and relevance in the field of systems biology. Although the journal is now closed, its archived content continues to serve as a valuable resource for researchers and scholars. See also Systems and Synthetic Biology (until 2015) References BioMed Central academic journals Systems biology Academic journals established in 2007 English-language journals Creative Commons Attribution-licensed journals
BMC Systems Biology
[ "Biology" ]
385
[ "Systems biology" ]
14,673,667
https://en.wikipedia.org/wiki/BMC%20Bioinformatics
BMC Bioinformatics is a peer-reviewed open access scientific journal covering bioinformatics and computational biology published by BioMed Central. It was established in 2000, and has been one of the fastest growing and most successful journals in the BMC Series of journals, publishing 1,000 articles in its first five years. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.9. References External links BioMed Central academic journals Bioinformatics and computational biology journals Creative Commons Attribution-licensed journals Academic journals established in 2000
BMC Bioinformatics
[ "Biology" ]
131
[ "Bioinformatics", "Bioinformatics and computational biology journals" ]
14,673,841
https://en.wikipedia.org/wiki/Journal%20of%20Computational%20Biology
The Journal of Computational Biology is a monthly peer-reviewed scientific journal covering computational biology and bioinformatics. It was established in 1994 and is published by Mary Ann Liebert The editors-in-chief are Sorin Istrail (Brown University) and Michael S. Waterman (University of Southern California). According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.549. Since 1997, authors of accepted proceedings papers at the Research in Computational Molecular Biology conference have been invited to submit a revised version to a special issue of the journal. References External links Mary Ann Liebert academic journals Academic journals established in 1994 English-language journals Hybrid open access journals Bioinformatics and computational biology journals Monthly journals
Journal of Computational Biology
[ "Chemistry", "Biology" ]
149
[ "Bioinformatics stubs", "Bioinformatics and computational biology journals", "Biotechnology stubs", "Biochemistry stubs", "Bioinformatics" ]
14,674,051
https://en.wikipedia.org/wiki/Smart%20Materials%20and%20Structures
Smart Materials and Structures is a monthly peer-reviewed scientific journal covering technical advances in smart materials, systems and structures; including intelligent systems, sensing and actuation, adaptive structures, and active control. The initial editors-in-chief starting in 1992 were Vijay K. Varadan (Pennsylvania State University), Gareth J. Knowles (Grumman Corporation), and Richard O. Claus (Virginia Tech); in 2008 Ephrahim Garcia (Cornell University) took over as editor-in-chief until 2014. Christopher S. Lynch (University of California, Los Angeles) assumed the position of editor-in-chief in 2015 and was succeeded by Alper Erturk (Georgia Institute of Technology) in 2023, who serves as the current editor-in-chief. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.7. References External links IOP Publishing academic journals Materials science journals Monthly journals Academic journals established in 1992 English-language journals
Smart Materials and Structures
[ "Materials_science", "Engineering" ]
214
[ "Materials science journals", "Materials science" ]
14,674,081
https://en.wikipedia.org/wiki/Centrin
Centrins, also known as caltractins, are a family of calcium-binding phosphoproteins found in the centrosome of eukaryotes. Centrins are small calcium-binding proteins that are ubiquitous centrosome components. There are about 350 “signature” proteins that are unique to eukaryotic cells but have no significant homology to proteins in archaea and bacteria. Centrins are essential proteins present in almost all eukaryotic cells and are found in the centrioles and the pericentriolar lattice. Human centrin genes are CETN1, CETN2 and CETN3. Humans and mice have three centrin genes: Cetn-1, which is typically only expressed in male germ cells, and Cetn-2 and Cetn-3, which are typically only expressed in somatic cells. Centrin-2, observed as a recombinant GFP–centrin-2 fusion, is a centriole protein that localizes to centrioles throughout the cell cycle, while centrin-3 seems to associate with the pericentriolar material that surrounds the centrioles. History Centrin was first isolated and characterized from the flagellar roots of the green alga Tetraselmis striata in 1984. Jeffrey Salisbury, who discovered centrin in the green algae, and his colleagues used RNA interference (RNAi) to reduce the levels of centrin-2 in human tissue culture cells. RNAi of centrin-2 in HeLa cells led to a progressive loss of centrioles, consistent with a complete block in centriole replication. This showed that centrin is involved in centriole duplication in animal cells, as seen in his previous work with algae, and implies that the requirement for centrin within the centriole is absolute for plants and animals. Function Centrins are required for duplication of centrioles. They may also play a role in severing of microtubules by causing calcium-mediated contraction. Centrin was found to be essential within calcium channel metabolism, and it has a high affinity for calcium and a much lower affinity for phosphorus and other cellular mineral constituents. Centrins show calcium-sensitive contractile behavior and were identified earlier as calcium-sensing regulators of centriole structure. Centrin is one of the first proteins to localize at sites of newly forming centrioles in both semiconservative and de novo assembly pathways. In algae, ciliates, and lower land plants, centrioles fail to duplicate when centrin is mutated, deleted, or knocked down by RNAi, because centrin is a key factor for the structural integrity of centrioles. Studies of experimental ablation of centrin synthesis in the alga Chlamydomonas and the cryptogamous water fern Marsilea indicate a key role of centrin in centriole biogenesis. Centrins facilitate the duplication of centrioles and the severing of microtubules by calcium-mediated contraction. Centrin was also found to be highly concentrated outside of the centrosome; much of it is non-centrosomal and assembles during meiosis II. The function of this extra-centrosomal material is not yet fully understood, but cross-linking studies indicate that centrin has an affinity for actin and the terminal portion of the HC; immunoprecipitation assays are needed to confirm this. Structure Centrin belongs to the EF-hand superfamily of calcium-binding proteins and has four calcium-binding EF-hands. It has a molecular weight of 20 kDa. Centrins contain four helix-loop-helix motifs specialized for binding calcium and are found in the transitional region of the axoneme. 
The axoneme is the bridge between the nucleus and the basal body, where the proximal and distal fibers connect two basal bodies. Centrin is also present in the set of fibers that connect the microtubule blades. Studies of higher eukaryotic cells, such as human cells, have shown that centrins are universal centrosome proteins occurring in fibers that link centrioles to one another and to the distal-most core structure called the "transition zone". See also Centriole Centrosome References Protein families
Centrin
[ "Biology" ]
892
[ "Protein families", "Protein classification" ]
14,674,111
https://en.wikipedia.org/wiki/Discriminator
In computing, a discriminator is a field of characters designed to separate a certain element from others of the same identifier. As an example, suppose that a program must save two unique objects to memory, both of whose identifiers happen to be . To ensure the two objects are not conflated, the program may assign discriminators to the objects in the form of numbers; thus, and distinguish both objects named . This has been adopted by programming languages as well as digital platforms for instant messaging and massively multiplayer online games. In instant messaging A discriminator is used to disambiguate a user from other users who wish to identify under the same username. Discord On Discord, a discriminator is a four-digit suffix added to the end of a username. This allowed for up to 10000 user accounts to take the same name. Transition away from discriminators In 2023, co-founder Stanislav Vishnevskiy wrote on a company blog post about the technical debt caused by the discriminator system, stating that the system resulted in nearly half of the company's friend requests failing to connect. The platform implemented discriminators in the early days of the service, he wrote. When the platform was initially introduced, the software developers' priority was to let its users take any username they want without receiving a “your desired username is taken” error. Discord had no friend system at first, thus letting people take names in different letter cases, making usernames case-sensitive. Discord also introduced a global display name system, wherein a user may input a default nickname to be shown on top of the messages they sent in lieu of their platform-wide username, Vishnevskiy touted on Reddit. The platform created a transition process to a system of pseudonyms wherein all new usernames would be case-insensitive lowercase and limited to the ASCII characters of A–Z, 0–9, the full stop and the underscore. The transition would happen over the course of months, with the accounts that were registered the oldest, and paid subscribers, receiving the opportunity to reserve their name earlier. This change was criticized online for being a step backward, as users could be a risk of being impersonated. A notable indie game studio noted that it could no longer claim its own name on the platform. Discord pointed to its processes for users with high visibility and longstanding business relationships with the company for reserving a username under the new system. The old discriminator-oriented system also mitigated the rush to get unique usernames for sale on the black market, leading to swatting and online harassment. In digital distribution Battle.net implements a suffix of four-digit numbers to its usernames. In computer data storage Common Object Request Broker Architecture A discriminator is a typed tag field present in the Common Object Request Broker Architecture, the interface description language of the Object Management Group. It exists as type and value definitions of tagged unions that determine which union member is selected in the current union instance. This is done by introduction of the classic C switch construct as part of the classic C union. Unlike in some conventional programming languages offering support for unions, the discriminator in IDL is not identical to the selected field name. 
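As a rough illustration of the mechanics described above, the following Python sketch shows both ideas: picking an unused four-digit discriminator so that several accounts can share the same name, and validating a name against the post-transition username rules (lowercase ASCII letters, digits, the full stop and the underscore). This is not Discord's actual implementation or API; the 0001–9999 range, the length limit and all example names are assumptions made for the sketch.

import random
import re

def assign_discriminator(existing_tags, name):
    """Pick an unused 4-digit discriminator for a display name (range assumed to be 0001-9999)."""
    taken = {disc for (n, disc) in existing_tags if n == name}
    free = [f"{i:04d}" for i in range(1, 10000) if f"{i:04d}" not in taken]
    if not free:
        raise ValueError("all discriminators for this name are taken")
    return random.choice(free)

# Post-transition rules described above; the 2-32 length limit is an assumption for the example.
NEW_USERNAME_RE = re.compile(r"[a-z0-9._]{2,32}")

def is_valid_new_username(name):
    """Check a name against the lowercase ASCII letters/digits/'.'/'_' rules."""
    return NEW_USERNAME_RE.fullmatch(name) is not None

tags = {("ada", "0001"), ("ada", "0002")}
print(assign_discriminator(tags, "ada"))      # some unused discriminator, e.g. "4821"
print(is_valid_new_username("ada.lovelace"))  # True
print(is_valid_new_username("Ada Lovelace"))  # False: uppercase and spaces are not allowed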
Here is an example of an IDL union type definition: union Register switch (char) { case 'a': case 'b': short AX; case 'c': long EAX; default: octet AL; }; The effective value of the Register type may contain AX as the selected field, but the discriminator value may be either 'a' or 'b' and is stored in memory separately. Therefore, IDL logically separates information about the currently selected field name and the union effective value from information about the current discriminator value. In the example above, the discriminator value may be any of the following: 'a', 'b', 'c', as well as all other characters belonging to the IDL char type, since the default branch specified in the example Register type allows the use of the remaining characters as well. Other interface definition languages The Microsoft Interface Definition Language also supports tagged unions, allowing to choose the discriminator via an attribute in an enclosing structure or function. Alternatives A friend code is a unique twelve-digit number that could be exchanged with friends and be used to maintain individual friend lists in each video game. Friend codes were generated from an identifier unique to a copy of a game and the universally unique identifier corresponding to that of a user's device. References Programming language topics
Discriminator
[ "Engineering" ]
968
[ "Software engineering", "Programming language topics" ]
14,674,638
https://en.wikipedia.org/wiki/Payment%20for%20ecosystem%20services
Payments for ecosystem services (PES), also known as payments for environmental services (or benefits), are incentives offered to farmers or landowners in exchange for managing their land to provide some sort of ecological service. They have been defined as "a transparent system for the additional provision of environmental services through conditional payments to voluntary providers". These programmes promote the conservation of natural resources in the marketplace. Concept overview Ecosystem services have no standardized definition but might broadly be called "the benefits of nature to households, communities, and economies" or, more simply, "the good things nature does". Twenty-four specific ecosystem services were identified and assessed by the Millennium Ecosystem Assessment, a 2005 UN-sponsored report designed to assess the state of the world's ecosystems. The report defined the broad categories of ecosystem services as food production (in the form of crops, livestock, capture fisheries, aquaculture, and wild foods), fiber (in the form of timber, cotton, hemp, and silk), genetic resources (biochemicals, natural medicines, and pharmaceuticals), fresh water, air quality regulation, climate regulation, water regulation, erosion regulation, water purification and waste treatment, disease regulation, pest regulation, pollination, natural hazard regulation, and cultural services (including spiritual, religious, and aesthetic values, recreation and ecotourism). Notably, however, there is a "big three" among these 24 services which are currently receiving the most money and interest worldwide. These are climate change mitigation, watershed services and biodiversity conservation, and demand for these services in particular is predicted to continue to grow as time goes on. One seminal 1997 Nature magazine article estimated the annual value of global ecological benefits at $33 trillion, a number nearly twice the gross global product at the time. In 2014, the author of this 1997 research (Robert Costanza) and a qualified group of co-authors re-took this assessment – using only a slightly modified methodology but with more detailed 2011 data – and increased the aggregate global ecosystem services provisioning estimate to $125–145 trillion a year. The same research project also estimated between $4.3 and 20.2 trillion a year of losses to ecosystem services, due to land use change. PES has also been touted as a tool for rural development. In 2007, the World Bank released a document outlining the place of PES in development. But the link between the environment and development had been officially recognized long before with the 1972 Stockholm Conference on the Human Environment and later reaffirmed by the Rio Conference on Environment and Development. However, it is important to note PES programs are usually not designed to be primarily poverty alleviation schemes, although they may incorporate development mechanisms. Some PES programs involve contracts between consumers of ecosystem services and the suppliers of these services. However, the majority of the PES programs are funded by governments and involve intermediaries, such as non-government organisations. The party supplying the environmental services normally holds the property rights over an environmental good that provides a flow of benefits to the demanding party in return for compensation. 
In the case of private contracts, the beneficiaries of the ecosystem services are willing to pay a price that can be expected to be lower than their welfare gain due to the services. The providers of the ecosystem services can be expected to be willing to accept a payment that is greater than the cost of providing the services. Theoretical perspectives There are three main theoretical perspectives concerning PES. The first is that of environmental economics, the second of ecological economics, and the third of those who reject the very idea of ecosystem services. Environmental economics The basic conceptualization of nature from the perspective of environmental economics is that manufactured capital can be used as a substitute for natural capital. The definition of PES provided by environmental economics is the most popular: a voluntary transaction between a service buyer and service seller that takes place on the condition that either a specific ecosystem service is provided or land is used in a way to secure that service. This definition is directly related to the Coase theorem, upon which PES is strongly based from the environmental economics perspective, which states that in a competitive market, in the absence of transaction costs and the presence of clear property rights, direct negotiation between private parties can lead to efficient outcomes. However, in reality, transaction costs are virtually always present and private parties cannot always reach agreements on their own. One of the main reasons is the lack of sustained financing, which often leads governments to provide some type of funding assistance. The environmental economics theorists acknowledge that PES systems can resemble an environmental subsidy, complicating the strict Coasian backing. Ecological economics The conceptualization of nature as understood by ecological economics is that manufactured capital and natural capital are not exclusive or substitutable, but rather complementary. PES as understood by ecological economics comprises three schematic components. The first surrounds the importance of the economic incentive. This idea concerns the relative weight an economic incentive may carry when understood in relation to social, moral, or other non-economic incentives. The second component is directness of the transfer, referring to the extent of interaction between ultimate buyers and sellers. The most direct program would occur between one buyer and one seller, with no intermediaries. A relatively indirect program would remove the buyers and sellers from each other, placing intermediaries between them, commonly in the form of NGOs and governments. The third and final component is the degree of commodification. This addresses the extent to which the environmental service (ES) being provided can be specifically and clearly assessed and measured. Some ES may be relatively easy to assess, such as tons of carbon sequestered, while others may prove difficult. Rejection of ecosystem services Those who reject the idea of valuation of ecosystem services argue that nature should be conserved and valued for nature's sake, and that nature's value is impossible to quantify because its value is inherently infinite. They posit that the attempt to force the idea of ecosystem services into the market system leads to conservation only when it is deemed useful for human life, abandoning ideals environmental conservation when nature conflicts with human interest or simply does not affect human activity. 
There are also those who support the valuation of nature from a purely practical standpoint, expressed in the idea that "something is better than nothing." They realize and acknowledge the problematic nature of the quantitative valuation of nature but at the same time argue that practically, in a highly commodified society, it is a necessary measure. Commodification of natural capital results in undervaluing ecological systems by not accounting for the innumerable wide-range services provided. PES may decrease in utility as 1) wealth becomes concentrated to the point that natural resource scarcity results in higher short-term value for unsustainable resource extraction, and 2) the long-term cost to engineer limited-range replacement services is externalized onto citizens. This occurs either through increased expense to the existing systems or as justification to privatize services for further profit. For example, a parent corporation can profit both from the exploitation of an ecosystem, and by engineering and operating the services formerly provided. Organizations and motives for incentivizing production of ecosystem services Though the goal of all PES programs is the procurement of some sort of ecosystem service, the reasons why organizations or governments would incentivize the production of these services are diverse. For example, the world's largest and longest running PES program is the United States' Conservation Reserve Program, which pays about $1.8 billion a year under 766,000 contracts with farmers and landowners to "rent" a total of what it considers "environmentally-sensitive land." These farmers agree to plant "long-term, resource-conserving covers to improve water quality, control soil erosion and enhance habitats for waterfowl and wildlife." This program has existed in some form or another since the wake of the American Dust Bowl, when the federal government began paying farmers to avoid farming on poor quality, erodible land. In 1999, the Chinese central government announced an even more expensive project under its $43 billion Grain for Green program, by which it offers farmers grain in exchange for not clearing forested slopes for farming, thereby reducing erosion and saving the streams and rivers below from the associated deluge of sedimentation. Notably, some sources cite the cost of the entire program at $95 billion. Many less extensive nationally funded PES projects which bear resemblances to the American and Chinese land set-aside programs exist around the world, including programs in Canada, the EU, Japan and Switzerland. Examples North America United States In Jamestown, Rhode Island, United States, farmers usually harvest the hay in their fields twice a year. However, this practice destroys the habitats of many local grassland birds. Economists from the University of Rhode Island and EcoAssets Markets Inc. raised money from residents of Jamestown who were willing to help the birds. The range of investments was between $5 and $200 per person for a total of $9,800. This money was enough to compensate three Jamestown farms for the cost of reducing their yearly harvests and getting their hay from another source. In this way, the birds have sufficient time to nest and leave the grounds without being subject to a hay harvest. In this example, the farmers benefit because they only have to harvest their fields once a year instead of twice, and the contributors benefit because they value the lives of the birds more than the money they contributed to the project. 
Salt Lake City, Utah, United States has managed the majority of its watershed since the 1850s through multi-jurisdictional regulatory mechanisms such as specifying allowable uses (and restricting them), and purchasing land or conservation easements. This long-standing, legally-defensible, yet often-overlooked strategy preserves ecosystem services, while still allowing widely utilized recreation including skiing, snowboarding, hiking, mountain biking, and fishing. Existing uses of the land are generally unaffected, and commercial enterprises are restricted to no- or low-impact tourism-related activities. Central and South America Costa Rica Costa Rica's PES program, Pagos por servicios ambientales (PSA) was established in 1997, and was the first PES program to be implemented on a national scale. It came on the back of Forestry Law 7575 of 1996 which prioritized environmental services over other forest activities such as timber production, and which established the national fund for forest financing (Fondo Nacional de Financiamento Forestal), FONAFIFO. The PSA follows several years of different environmental programs in Costa Rica including the Forest Credit Certificate (Certificado de Abono Forestal, CAF) of 1986 and the Forest Protection Certificate (Certificado para la Protección del Bosque, CPB) of 1995. One of the main reasons for establishing the PSA program was to reframe conservation subsidies as payments for services. It explicitly recognized four environmental services: mitigation of greenhouse gas emissions, hydrological services, biodiversity protection, and provision of scenic beauty. During the early years of the PSA program from 2001 to 2006, it was funded by a World Bank loan and a grant from the Global Environment Facility (GEF) under the project name "Ecomarkets." From 2007 to 2014, the World Bank renewed its support for the program through a new project called "Mainstreaming Market-Based Instruments for Environmental Management." This support also generated FONAFIFO's Sustainable Biodiversity Fund (FBS), designed to target PES programs at owners of small pieces of land, indigenous communities, and communities with low development rates. Financing of PSA activities was initially accomplished in part through a fuel tax established by Forestry Law 7575. The tax was used to flexibly target ecologically important areas. In 2006 a water tariff was introduced to provide additional funding. The water tariff has a relatively narrow application when compared to the fuel tax. Under the water tariff, holders of water concessions pay fees, a portion of which is transferred for use in the PSA exclusively within the watershed in which the revenues were generated. This removes the potential for revenues to be distributed as needed and has been criticized for concentrating funding in select areas, despite their relatively low ecological importance. FONAFIFO acts as a semi-autonomous intermediary organization between service buyers and service sellers. As of 2004, FONAFIFO had contracted 11 different companies in agribusiness, hydropower, municipal water supply, and tourism to pay for the water services they receive. Since then, FONAFIFO has reached agreements with several more companies. By the end of 2005, 95% of land enrolled in Costa Rica's PSA was under forest conservation contracts, covering 10% of the country. It is estimated that forest cover area increased from 2.1 million hectares in 1986 to 2.45 million hectares (48% of the country's total land area) in 2005. 
It is also estimated that the PSA prevented 11 million tons of carbon emissions between 1999 and 2005. Despite these successes, the PSA has been criticized for critical shortcomings. As it stands, the PSA payment system employs a flat rate cash payment to all participating landowners. This has resulted in large swaths of ecologically high value areas being left unenrolled in the program due to associated higher opportunity costs for land-use change not being adequately compensated for by the flat rate payment scheme. Los Negros, Bolivia The program in Los Negros, Bolivia is a small user-financed program of combined payments for watershed and biodiversity services started by local NGO Fundación Natura Bolivia in 2003. The target area of the program is the watershed in Los Negros valley servicing the town of Santa Rosa and other downhill towns. By August 2007, 2774 hectares of native vegetation were enrolled in the program under 46 landowners. Funding for the program was initially provided by the US Fish and Wildlife Service, before the Municipality of Pampagrande began making payments for the services One of the program's most unique aspects is the landowners' specific request that they be paid in-kind with beehives. They claimed that they wanted their compensation to last beyond a simple cash transfer. Along with the beehives, payment recipients are able to receive training in apiculture. It also allowed for those who prefer cash to sell their hives. An organizational obstacle to the program is that some farmers fear that the scheme is just a way to dispossess them of their land. This was a major factor in deciding to be paid in-kind as it is perceived as less of an attempt at land appropriation. Natura is addressing this issue by maintaining a constant presence in the community and leveraging social networks to convince farmers of the program's benefits. Another issue regards the service buyer of the program. The Municipality of Pampagrande has received some limited support from irrigators in contributing to the program payments. This structure essentially provides the environmental services to downstream users essentially free of charge. Natura is working to implement a strategy through which beneficiaries of environmental services will directly contribute to their maintenance. Program evaluation has been impeded by two factors, namely a lack of baseline data and insufficient data as the program develops. These are important in order to establish the additionality of the program. This issue is not unique to Los Negros, however, as many programs suffer from a lack of sufficient monitoring and evaluation mechanisms. Honduras In Jesús de Otoro, Honduras, the Cumes River is the town's main source of clean water. Coffee producers were dumping their waste into the river upstream, polluting the source and directly affecting the consumers downstream. To solve this problem, the local Council for Administration of Water and Sewage Disposal (JAPOE) created a payment program to benefit coffee producers upstream and the town's inhabitants who lived downstream. The villagers downstream paid around $0.06 per household per month to JAPOE, who redirected the money toward the upstream farmers. The farmers complied with guidelines, such as construction of irrigation ditches, proper management of waste, and use of organic fertilizers. Pico Bonito Forests, near La Ceiba, Honduras, is a mission-driven, for-profit venture between the Pico Bonito National Park Foundation and the EcoLogic Development Fund. 
Carbon credits are generated by planting native trees to capture, or sequester, carbon dioxide. The credits are then sold though the World Bank's BioCarbon Fund to countries aiming to meet their carbon emissions reduction targets. The project offers a unique business model because it is owned jointly by investors and the communities near the park. Community members earn income and share profits from implementing the sustainable forestry practices that capture carbon. By 2017, the project is expected to sequester from .45-.55 Mt of carbon through reforestation and agroforestry and up to an additional .5 Mt of carbon through avoided deforestation as destructive practices are replaced with sustainable practices. Mexico The Scolel Té program in Chiapas, Mexico, aims to create a market for positive externalities of shade-grown coffee plantations. Designed by the University of Edinburgh's Institute of Ecology and Resource Management along with the Edinburgh Centre for Carbon Management, using the Plan Vivo System, Scolel Té is a PES program under which farmers agree to responsible farming and reforestation practices in exchange for payment for carbon offsets. The NGO Ambio manages Scolel Té. Farmers submit their reforestation plans to Ambio, which judges their financial benefits and the amount of carbon sequestration associated with each plan. The farmers then receive payments from the Fondo BioClimatico, managed by Ambio. Funding for the Fondo BioClimatico comes from the sale of Voluntary Emissions Reduction (VERs) to private groups at a price of $13 per ton of carbon sequestered. Another citizen science project has monitored rainfall data that is linked to a hydrologic payment for ecosystem services project. Movimiento El Campo no Aguanta Más! (MECNAM) is a rural Mexican organization that works for the campo and its representation. The organization was active in the early 200s and contested many of the neoliberal policies taking hold in Mexico, including the implementation of the North America Free Trade Agreement (NAFTA). They also advocated for and won the expansion of national PES programs in Mexico as they believed it was an excellent way of engaging with and forming relationships within the state. It also allowed them to garner recognition of the rural areas and emphasized the value of rural economic stewardship and protection of the environment. MECNAM members were involved with the creation and design of PES programs which allowed them to add provisions for rural communities. This also permitted the discourse of the rights of nature and the implementation of indigenous sociocultural concepts of human-nature configurations. The addition of community input and agency as the result of local actors and activists works to commodify the environment in a bottom-up approach. While the environment becomes commodified, other aspects are also at play. The PES system works not only to monetize natural resources, but gives locals a platform to consider social and community concerns. The increased agency of locals who rely on the forests for survival and livelihood allows for compromise and negotiation between the needs of the community and the environmental concerns. The involvement of regional actors allows for them to tailor policies to their needs in order to best suit both the community and conservation efforts. 
Fundo Monarca (FM), which is funded by federal funds and donations from the World Wildlife Fund (WWF) and Fundación Slim, initiated a PES program in 2001 to protect the “human-free” core forest of the Monarch Butterfly Biosphere Reserve (MBBR). This provided economic incentives for ejidos and comunidades (mestizo and indigenous communities, respectively, who were given communal usufruct land rights) to “conserve land” in their section of the core forest. It also was a method to address the income loss and social discontent caused by the repeal of communal logging permits due to the 2000 rezoning and expansion of the MBBR. However, the creation of the reserve undermined communal management institutions by reducing local land control and restricting human activity in the region, contributing to an increase in organized crime in the region. Some communities were denied their payment because they were categorized as “non-compliant actors,” defined as communities with more than 3% forest change in their core land. This was despite the communities claiming that external logging by organized crime was the cause. Ethnographic evidence demonstrates that penalizing the communities for external logging results in residents patrolling the forest commons. This activity is made life-threatening by the violent actors associated with illegal logging. There are historical bases for community walks/patrols to protect the forest, such as faenas or rondas, but they are now organized to combat organized crime through methods such as performing them armed. There is no public acknowledgment that WWF/FM’s zoning and PES program promotes this patrolling. Africa Hoima and Kibaale, Uganda The Hoima and Kibaale PES intervention took place from 2010 to 2013 and was especially unique because it was the first PES program set up specifically for a randomized control trial to empirically determine its impact on deforestation. In the treatment villages, owners of forested land were paid $28 per year over the course of two years for every hectare of forest land that was left intact, with the possibility of additional payment for planting new trees. The payment scheme amounted to 5% of average annual income for the typical participating landowner. The program evaluation found there to be significantly less deforestation in participating villages (2–5%) than in control villages (7–10%), but the program did not carry on beyond the evaluation period, and it is assumed that previous forest practice would resume once landowners stop receiving program payments. Europe France Beginning in the 1970s, smallholders in Vittel valley of the Vosges mountains adopted increasingly intensive agricultural practices, facilitated in part by new Common Agricultural Policy subsidies. As a result, aquifer nitrate concentrations began to increase, posing an existential threat to the lucrative Vittel mineral water brand. 
However, the company's options were limited for a number of reasons: Mineral water labeling regulations were much stricter than water quality laws, so the brand could not legally enforce its water quality requirements Aquifer nitrification was the result of nonpoint source pollution, and the contribution of any individual farmer could not be reliably quantified The company lacked the agricultural expertise to either identify the management changes needed to protect groundwater quality, or to estimate the opportunity costs of implementing those practices As a result, a unique process of bargaining and negotiations unfolded between the farmers and the Vittel brand (acquired by Nestlé early in the process), which was supported scientifically by the French national institute for agricultural research. The resulting arrangement—in which the company purchased some agricultural land while providing technical support and time-limited payments to farmers switching to more groundwater-friendly practices—is arguably the best-known example in the world of PES based on direct negotiations between ecosystem service providers and beneficiaries. Although widely regarded as case study in PES based on Coasean bargaining, aspects of the arrangement remain controversial, such as the creation of community grievances and the potential overexploitation of water resources. United Kingdom The United Kingdom government set up a Commission on Environmental Markets and Economic Performance in 2006 to make detailed proposals on enhancing the UK's environmental industries, technologies and markets. It was established following publication of the Stern Review (2006) to play an advisory role looking at "how the UK could make the most of the potential economic benefits of [a] transition to a low carbon, sustainable economy". References Further reading Cacho, Oscar; Marshall, Graham; Milne, Mary. "Smallholder Agroforestry Projects: Potential for Carbon Sequestration and Poverty Alleviation" ESA Working Paper #03-06, (2003). Callan, Scott J., Thomas, Janet M., Environmental Economics and Management, Thompson South-Western, Mason, OH, 2007 Jones, Kelly, Muños-Brenes, C.L., Shinbrot X.A., Lopez-Baez, W., and Rivera-Castañeda, A. (2018). The role of cash versus technical assistance in the effectiveness and equity of payments for watershed services programs in Mexico. Ecosystem Services. 31, 208–218. Keohane, Nathaniel O, and Olmstead, Sheila M., Markets and the Environment, Island Press, Washington, DC, 2007. Porras, Ina., Barton, David., Miranda, Mirium., and Chacón-Cascante, Adriana. "Learning from 20 years of Payment for Ecosystem Services in Costa Rica." Publications from the International Institute for Environment and Development (2013). Sanchirico, James, and Juha Siikamaki, "Natural Resource Economics and Policy in the 21st Century: Conservation of Ecosystem Services" Resources, 165 (2007): 8-10. University of Rhode Island, "First U.S. test of Ecological Services Payment Underway." MongaBay.com June 27, 2007 . Ward, Frank A., Environmental and Natural Resource Economics, Prentice-Hall, 2006. Unmüßig, Barbara. "Monetizing Nature: Taking Precaution on a Slippery Slope," Great Transition Initiative (August 2014), https://greattransition.org/publication/monetizing-nature-taking-precaution-on-a-slippery-slope. Wexler, Mark. "The Coffee Connection." National Wildlife 41.1 (2003): 37. Wunder, S. "The efficiency of payments for environmental services in tropical conservation." Conservation Biology, 21(1)48-58. Wunder, S. 
"When payments for environmental services will work for conservation." Conservation Letters 6(4), 230–237. External links Ecosystem Marketplace breaking news and features on payments for ecosystem services Plan Vivo is a standard used to certify PES projects and provides guidance on developing a PES programme PES for Mt. Kalatungan Range Natural Park Ecological economics Market-based environmental policy instruments Environmental social science concepts Ecological economics concepts
Payment for ecosystem services
[ "Environmental_science" ]
5,385
[ "Environmental social science concepts", "Environmental social science" ]
14,674,709
https://en.wikipedia.org/wiki/Trace%20diagram
In mathematics, trace diagrams are a graphical means of performing computations in linear and multilinear algebra. They can be represented as (slightly modified) graphs in which some edges are labeled by matrices. The simplest trace diagrams represent the trace and determinant of a matrix. Several results in linear algebra, such as Cramer's Rule and the Cayley–Hamilton theorem, have simple diagrammatic proofs. They are closely related to Penrose's graphical notation. Formal definition Let V be a vector space of dimension n over a field F (with n≥2), and let Hom(V,V) denote the linear transformations on V. An n-trace diagram is a graph , where the sets Vi (i = 1, 2, n) are composed of vertices of degree i, together with the following additional structures: a ciliation at each vertex in the graph, which is an explicit ordering of the adjacent edges at that vertex; a labeling V2 → Hom(V,V) associating each degree-2 vertex to a linear transformation. Note that V2 and Vn should be considered as distinct sets in the case n = 2. A framed trace diagram is a trace diagram together with a partition of the degree-1 vertices V1 into two disjoint ordered collections called the inputs and the outputs. The "graph" underlying a trace diagram may have the following special features, which are not always included in the standard definition of a graph: Loops are permitted (a loop is an edge that connects a vertex to itself). Edges that have no vertices are permitted, and are represented by small circles. Multiple edges between the same two vertices are permitted. Drawing conventions When trace diagrams are drawn, the ciliation on an n-vertex is commonly represented by a small mark between two of the incident edges (in the figure above, a small red dot); the specific ordering of edges follows by proceeding counter-clockwise from this mark. The ciliation and labeling at a degree-2 vertex are combined into a single directed node that allows one to differentiate the first edge (the incoming edge) from the second edge (the outgoing edge). Framed diagrams are drawn with inputs at the bottom of the diagram and outputs at the top of the diagram. In both cases, the ordering corresponds to reading from left to right. Correspondence with multilinear functions Every framed trace diagram corresponds to a multilinear function between tensor powers of the vector space V. The degree-1 vertices correspond to the inputs and outputs of the function, while the degree-n vertices correspond to the generalized Levi-Civita symbol (which is an anti-symmetric tensor related to the determinant). If a diagram has no output strands, its function maps tensor products to a scalar. If there are no degree-1 vertices, the diagram is said to be closed and its corresponding function may be identified with a scalar. By definition, a trace diagram's function is computed using signed graph coloring. For each edge coloring of the graph's edges by n labels, so that no two edges adjacent to the same vertex have the same label, one assigns a weight based on the labels at the vertices and the labels adjacent to the matrix labels. These weights become the coefficients of the diagram's function. In practice, a trace diagram's function is typically computed by decomposing the diagram into smaller pieces whose functions are known. The overall function can then be computed by re-composing the individual functions. Examples 3-Vector diagrams Several vector identities have easy proofs using trace diagrams. This section covers 3-trace diagrams. 
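Before turning to the diagrammatic arguments below, it may help to state the underlying vector identities concretely. The short NumPy check below is an illustration added here, not part of the diagrammatic machinery itself; the second identity is an assumption about which "well-known identity relating four 3-dimensional vectors" is meant later in this section (the Binet–Cauchy/Lagrange identity), and the random test vectors are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
u, v, w, z = rng.standard_normal((4, 3))

# Scalar triple product identity: u.(v x w) = v.(w x u) = w.(u x v) = det[u v w]
triple = np.dot(u, np.cross(v, w))
assert np.isclose(triple, np.dot(v, np.cross(w, u)))
assert np.isclose(triple, np.dot(w, np.cross(u, v)))
assert np.isclose(triple, np.linalg.det(np.column_stack((u, v, w))))

# Candidate four-vector identity: (u x v).(w x z) = (u.w)(v.z) - (u.z)(v.w)
lhs = np.dot(np.cross(u, v), np.cross(w, z))
rhs = np.dot(u, w) * np.dot(v, z) - np.dot(u, z) * np.dot(v, w)
assert np.isclose(lhs, rhs)
print("both identities hold numerically")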
In the translation of diagrams to functions, it can be shown that the positions of ciliations at the degree-3 vertices have no influence on the resulting function, so they may be omitted. It can be shown that the cross product and dot product of 3-dimensional vectors are represented by particular diagrams, shown in the accompanying picture. In this picture, the inputs to the function are shown as vectors in yellow boxes at the bottom of the diagram. The cross product diagram has an output vector, represented by the free strand at the top of the diagram. The dot product diagram does not have an output vector; hence, its output is a scalar. As a first example, consider the scalar triple product identity u · (v × w) = v · (w × u) = w · (u × v) = det[u v w]. To prove this diagrammatically, note that all of the following figures are different depictions of the same 3-trace diagram (as specified by the above definition): Combining the above diagrams for the cross product and the dot product, one can read off the three leftmost diagrams as precisely the three leftmost scalar triple products in the above identity. It can also be shown that the rightmost diagram represents det[u v w]. The scalar triple product identity follows because each is a different representation of the same diagram's function. As a second example, one can show an equality between two particular diagrams (where the equality indicates that the identity holds for the underlying multilinear functions). One can show that this kind of identity does not change by "bending" the diagram or attaching more diagrams, provided the changes are consistent across all diagrams in the identity. Thus, one can bend the top of the diagram down to the bottom, and attach vectors to each of the free edges, to obtain a diagram which reads as a well-known identity relating four 3-dimensional vectors. Diagrams with matrices The simplest closed diagrams with a single matrix label correspond to the coefficients of the characteristic polynomial, up to a scalar factor that depends only on the dimension of the matrix. One representation of these diagrams is shown below, where equality is understood up to a scalar factor that depends only on the dimension n of the underlying vector space. Properties Let G be the group of invertible n×n matrices. If a closed trace diagram is labeled by k different matrices, it may be interpreted as a function from G^k to an algebra of multilinear functions. This function is invariant under simultaneous conjugation, that is, the function corresponding to (A1, ..., Ak) is the same as the function corresponding to (gA1g^-1, ..., gAkg^-1) for any invertible g. Extensions and applications Trace diagrams may be specialized for particular Lie groups by altering the definition slightly. In this context, they are sometimes called birdtracks, tensor diagrams, or Penrose graphical notation. Trace diagrams have primarily been used by physicists as a tool for studying Lie groups. The most common applications use representation theory to construct spin networks from trace diagrams. In mathematics, they have been used to study character varieties. See also Multilinear map Gain graph References Books: Diagram Techniques in Group Theory, G. E. Stedman, Cambridge University Press, 1990 Group Theory: Birdtracks, Lie's, and Exceptional Groups, Predrag Cvitanović, Princeton University Press, 2008, http://birdtracks.eu/ Multilinear algebra Tensors Linear algebra Matrix theory Diagram algebras Application-specific graphs Diagrams
Trace diagram
[ "Mathematics", "Engineering" ]
1,398
[ "Linear algebra", "Tensors", "Algebra" ]
14,675,239
https://en.wikipedia.org/wiki/Dialogue%20tree
A dialogue tree, or conversation tree, is a gameplay mechanic that is used throughout many adventure games (including action-adventure games) and role-playing video games. When interacting with a non-player character, the player is given a choice of what to say and makes subsequent choices until the conversation ends. Certain video game genres, such as visual novels and dating sims, revolve almost entirely around these character interactions and branching dialogues. History The concept of the dialogue tree has existed long before the advent of video games. The earliest known dialogue tree is described in "The Garden of Forking Paths", a 1941 short story by Jorge Luis Borges, in which the combination book of Ts'ui Pên allows all major outcomes from an event branch into their own chapters. Much like the game counterparts this story reconvenes as it progresses (as possible outcomes would approach n where n is the number of options at each fork and m is the depth of the tree). The first computer dialogue system was featured in ELIZA, a primitive natural language processing computer program written by Joseph Weizenbaum between 1964 and 1966. The program emulated interaction between the user and an artificial therapist. With the advent of video games, interactive entertainment have attempted to incorporate meaningful interactions with virtual characters. Branching dialogues have since become a common feature in visual novels, dating sims, adventure games, and role-playing video games. Game mechanics The player typically enters the gameplay mode by choosing to speak with a non-player character (or when a non-player character chooses to speak to them), and then choosing a line of pre-written dialog from a menu. Upon choosing what to say, the non-player character responds to the player, and the player is given another choice of what to say. This cycle continues until the conversation ends. The conversation may end when the player selects a farewell message, the non-player character has nothing more to add and ends the conversation, or when the player makes a bad choice (perhaps angering the non-player to leave the conversation). Games often offer options to ask non-players to reiterate information about a topic, allowing players to replay parts of the conversation that they did not pay close enough attention to the first time. These conversations are said to be designed as a tree structure, with players deciding between each branch of dialog to pursue. Unlike a branching story, players may return to earlier parts of a conversation tree and repeat them. Each branch point (or node) is essentially a different menu of choices, and each choice that the player makes triggers a response from the non-player character followed by a new menu of choices. In some genres such as role-playing video games, external factors such as charisma may influence the response of the non-player character or unlock options that would not be available to other characters. These conversations can have far-reaching consequences, such as deciding to disclose a valuable secret that has been entrusted to the player. However, these are usually not real tree data structure in programmers sense, because they contain cycles as can be seen on illustration on this page. 
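The node-and-menu structure described above can be made concrete with a small sketch. The following Python example is purely illustrative: the node names and lines of dialogue are invented, and real engines layer conditions, character statistics and scripting on top of a graph like this. Note that the graph deliberately contains cycles, matching the observation above that players can return to earlier parts of a conversation.

# Hypothetical dialogue graph: each node pairs an NPC line with a menu of player choices,
# each choice points to the next node, and None ends the conversation.
DIALOGUE = {
    "greet": {"npc": "Welcome, traveller. What do you want to know?",
              "choices": [("Tell me about the ruins.", "ruins"),
                          ("Goodbye.", None)]},
    "ruins": {"npc": "The ruins lie north of the river. Beware the fog.",
              "choices": [("Could you repeat that?", "ruins"),      # cycle back to the same node
                          ("Let me ask something else.", "greet"),  # return to an earlier menu
                          ("Goodbye.", None)]},
}

def run_conversation(graph, start="greet"):
    node = start
    while node is not None:
        entry = graph[node]
        print("NPC:", entry["npc"])
        for i, (text, _target) in enumerate(entry["choices"], 1):
            print(f"  {i}. {text}")
        pick = int(input("> ")) - 1
        node = entry["choices"][pick][1]

# run_conversation(DIALOGUE)  # uncomment to play the conversation in a terminal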
Certain game genres revolve almost entirely around character interactions, including visual novels such as Ace Attorney and dating sims such as Tokimeki Memorial, usually featuring complex branching dialogues and often presenting the player's possible responses word-for-word as the player character would say them. Games revolving around relationship-building, including visual novels, dating sims such as Tokimeki Memorial, and some role-playing games such as Shin Megami Tensei: Persona, often give choices that have a different number of associated "mood points" which influence a player character's relationship and future conversations with a non-player character. These games often feature a day-night cycle with a time scheduling system that provides context and relevance to character interactions, allowing players to choose when and if to interact with certain characters, which in turn influences their responses during later conversations. Some games use a real-time conversation system, giving the player only a few seconds to respond to a non-player character, such as Sega's Sakura Wars and Alpha Protocol. Another variation of branching dialogues can be seen in the adventure game Culpa Innata, where the player chooses a tactic at the beginning of a conversation, such as using either a formal, casual or accusatory manner, that affects the tone of the conversation and the information gleaned from the interviewee. Value and impact This mechanism allows game designers to provide interactive conversations with nonplayer characters without having to tackle the challenges of natural language processing in the field of artificial intelligence. In games such as Monkey Island, these conversations can help demonstrate the personality of certain characters. See also Digital conversation Sierra Entertainment References Adventure games Trees (data structures) Video game terminology Role-playing game terminology
Dialogue tree
[ "Technology" ]
974
[ "Computing terminology", "Video game terminology" ]
14,675,761
https://en.wikipedia.org/wiki/Birkhoff%E2%80%93Grothendieck%20theorem
In mathematics, the Birkhoff–Grothendieck theorem classifies holomorphic vector bundles over the complex projective line. In particular, every holomorphic vector bundle over CP^1 is a direct sum of holomorphic line bundles. The theorem was proved by Alexander Grothendieck in 1957, and is more or less equivalent to Birkhoff factorization, introduced by George David Birkhoff in 1909. Statement More precisely, the statement of the theorem is as follows. Every holomorphic vector bundle E on CP^1 is holomorphically isomorphic to a direct sum of line bundles: E ≅ O(a1) ⊕ O(a2) ⊕ ⋯ ⊕ O(an), where the ai are integers. The notation implies each summand is a Serre twist, some number of times, of the trivial bundle. The representation is unique up to permuting factors. Generalization The same result holds in algebraic geometry for algebraic vector bundles over the projective line P^1 over any field. It also holds for the projective line with one or two orbifold points, and for chains of projective lines meeting along nodes. Applications One application of this theorem is that it gives a classification of all coherent sheaves on CP^1. We have two cases, vector bundles and coherent sheaves supported along a subvariety, so every coherent sheaf decomposes as a direct sum of line bundles O(ai) and structure sheaves of fat points, where n is the degree of the fat point at a given point. Since the only subvarieties are points, we have a complete classification of coherent sheaves. See also Algebraic geometry of projective spaces Euler sequence Splitting principle K-theory Jumping line References Further reading External links Roman Bezrukavnikov. 18.725 Algebraic Geometry (LEC # 24 Birkhoff–Grothendieck, Riemann-Roch, Serre Duality) Fall 2015. Massachusetts Institute of Technology: MIT OpenCourseWare Creative Commons BY-NC-SA. Vector bundles Theorems in projective geometry Theorems in algebraic geometry Theorems in complex geometry
Birkhoff–Grothendieck theorem
[ "Mathematics" ]
354
[ "Theorems in algebraic geometry", "Theorems in projective geometry", "Theorems in complex geometry", "Topology stubs", "Topology", "Theorems in geometry" ]
14,676,297
https://en.wikipedia.org/wiki/2-amino-4-hydroxy-6-hydroxymethyldihydropteridine%20diphosphokinase
In enzymology, a 2-amino-4-hydroxy-6-hydroxymethyldihydropteridine diphosphokinase () is an enzyme that catalyzes the chemical reaction ATP + 2-amino-4-hydroxy-6-hydroxymethyl-7,8-dihydropteridine ⇌ AMP + (2-amino-4-hydroxy-7,8-dihydropteridin-6-yl)methyl diphosphate Thus, the two substrates of this enzyme are ATP and 2-amino-4-hydroxy-6-hydroxymethyl-7,8-dihydropteridine, whereas its two products are AMP and (2-amino-4-hydroxy-7,8-dihydropteridin-6-yl)methyl diphosphate. This enzyme belongs to the family of transferases, specifically those transferring two phosphorus-containing groups (diphosphotransferases). The systematic name of this enzyme class is ATP:2-amino-4-hydroxy-6-hydroxymethyl-7,8-dihydropteridine 6'-diphosphotransferase. Other names in common use include 2-amino-4-hydroxy-6-hydroxymethyldihydropteridine pyrophosphokinase, H2-pteridine-CH2OH pyrophosphokinase, 7,8-dihydroxymethylpterin-pyrophosphokinase, HPPK, 7,8-dihydro-6-hydroxymethylpterin pyrophosphokinase, and hydroxymethyldihydropteridine pyrophosphokinase. This enzyme participates in folate biosynthesis. This enzyme catalyzes the first step in a three-step pathway leading to 7,8-dihydrofolate. Bacterial HPPK (gene folK or sulD) is a protein of 160 to 270 amino acids. In the lower eukaryote Pneumocystis carinii, HPPK is the central domain of a multifunctional folate synthesis enzyme (gene fas). Structural studies As of late 2007, 23 structures have been solved for this class of enzymes, with PDB accession codes , , , , , , , , , , , , , , , , , , , , , , and . References Further reading Protein domains EC 2.7.6 Enzymes of known structure
2-amino-4-hydroxy-6-hydroxymethyldihydropteridine diphosphokinase
[ "Biology" ]
546
[ "Protein domains", "Protein classification" ]
14,676,311
https://en.wikipedia.org/wiki/2-C-methyl-D-erythritol%204-phosphate%20cytidylyltransferase
In enzymology, a 2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase () is an enzyme that catalyzes the chemical reaction: 2-C-methyl-D-erythritol 4-phosphate + CTP ⇌ diphosphate + 4-(cytidine 5'-diphospho)-2-C-methyl-D-erythritol Thus, the two substrates of this enzyme are CTP and 2-C-methyl-D-erythritol 4-phosphate, whereas its two products are diphosphate and 4-diphosphocytidyl-2-C-methylerythritol. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). This enzyme participates in isoprenoid biosynthesis. It catalyzes the third step of the MEP pathway: the formation of CDP-ME (4-diphosphocytidyl-2C-methyl-D-erythritol) from CTP and MEP (2C-methyl-D-erythritol 4-phosphate). The isoprenoid pathway is a well-known target for anti-infective drug development. Nomenclature The systematic name of this enzyme class is CTP:2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase. This enzyme is also called MEP cytidylyltransferase or CDP-ME synthetase. It is normally abbreviated IspD. It is also referenced by the open reading frame YgbP. Structural studies The crystal structure of the E. coli 2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase, reported by Richard et al. (2001), was the first one for an enzyme involved in the MEP pathway. As of February 2010, 13 other structures have been solved for this class of enzymes, with PDB accession codes , , , , , , , , ,, , and . References Further reading EC 2.7.7 Enzymes of known structure Protein families
2-C-methyl-D-erythritol 4-phosphate cytidylyltransferase
[ "Biology" ]
487
[ "Protein families", "Protein classification" ]
14,676,596
https://en.wikipedia.org/wiki/Openbravo
Openbravo is a Spanish cloud-based SaaS software provider specializing in retail, with headquarters in Pamplona, Spain, and offices in Barcelona and Lille. The company was formerly known for being a horizontal open-source enterprise resource planning (ERP) software vendor for different industries. History Openbravo's roots are in the development of business administration software, first developed by Nicolas Serrano and Ismael Ciordia, employees of the University of Navarra in the mid-1990s who used emerging internet technologies in their work, and subsequently introduced a new approach for building web applications. Their concept was realized in a new company called Tecnicia, founded in August 2001 by Serrano, Ciordia, and Aguinaga. In 2005, two management consultants, Manel Sarasa and Josep Mitjá, were asked by a venture capital company to evaluate Tecnicia and prepare a business plan for its evolution. In 2006, the two consultants joined Tecnicia as the CEO and COO, respectively. Around the same time the Spanish investment company Sodena invested US$6.4 million in the further development of the company. In 2006, the company was renamed Openbravo and launched its first product offering, Openbravo ERP. The code was made available open-source in April that same year. In 2007, the company announced the acquisition of LibrePOS, a Java-based Point-of-Sale (POS) application for retail and hospitality businesses. LibrePOS was rebranded as Openbravo POS (or Openbravo Java POS). In May 2008 Openbravo attracted three more investors, Amadeus (UK), GIMV (Belgium) and Adara (Spain) for a second investment round totaling $12.5 million to further develop its products and services. In July 2012, Openbravo launched Openbravo for Retail, including the Openbravo Web POS, a new point-of-sale platform replacing Openbravo Java POS that was web and mobile-friendly. In March 2014, Openbravo ERP was renamed Openbravo ERP Platform. Openbravo for Retail was renamed to Openbravo Commerce Platform. In May 2015, the Openbravo Commerce Platform and Openbravo ERP Platform were renamed to Openbravo Commerce Suite and Openbravo Business Suite. Openbravo announced its strategic focus on retail. Openbravo also launched Openbravo Subscription Management and Recurring Billing, a specialized solution for recurring transactions-based revenue models. In February 2016, Openbravo launched Openbravo Cloud, its official cloud offering, and started the distribution of Openbravo Commerce Cloud, a cloud-based and mobile-enabled omnichannel platform for midsize to large retail and restaurant chains. In 2018, Openbravo announced a certified SAP connector to facilitate the integration of Openbravo Commerce Cloud for clients running SAP as their central corporate system. In November 2022, Openbravo joined the French group DL Software, today Orisha. As a result, Openbravo became Orisha | Openbravo in October 2023. Business and markets Openbravo today targets mid-sized to large retail chains. Current products Openbravo currently distributes Openbravo Commerce Cloud, a cloud-based SaaS unified commerce platform. The functionality offered by the platform covers both front and back office processes for the integration of all sales channels, with features such as a web and mobile point of sale, an integrated OMS engine, CRM and clienteling functionality, and inventory management, among others.
Previous products (discontinued) Since its appearance in the market in 2006, Openbravo has launched different products that help to describe the evolution of the company. The following information is shown for historical purposes only, since all these products are no longer offered. Openbravo ERP Openbravo ERP was the first product launched by Openbravo. It was a web-based enterprise resource planning software for small and medium-sized companies that was released under the Openbravo Public License, based on the Mozilla Public License. The model for the program was originally based on the Compiere ERP program, which is also open-source and released under the GNU General Public License version 2. As of January 2008, the program was among the top ten most active projects of SourceForge. With Openbravo ERP organizations can automate and register the most common business processes, in the fields: Sales, Procurement, Manufacturing, Projects, Finance, MRP and more. Numerous commercial extensions are available on the Openbravo Exchange which can be procured by users with a commercial edition of Openbravo ERP. This paid-for version offers additional functionality compared to the free Community Edition, including integrated administration tools, a non-technical tool for updates and upgrades, access to Openbravo Exchange and a Service Level Agreement. A characteristic feature of the Openbravo ERP application is its green web interface through which users maintain company data in a web browser. Openbravo can also create and export reports and data to several formats, such as PDF and Microsoft Excel. Openbravo's Java-based architecture focuses on two development models: model-driven development, in which developers describe the application in terms of models rather than code; and model-view-controller, a well-established design pattern in which the presentation logic and the business logic are kept isolated. These two models allow for integration with other programs and for a simple interface. Through the application of open standards, Openbravo ERP can be integrated with other open source applications like Magento webshop, Pentaho Business Intelligence, ProcessMaker BPM, Liferay Portal and SugarCRM. In March 2014, Openbravo ERP was renamed to Openbravo ERP Platform, which was changed again to Openbravo Business Suite in May 2015. The latest version is 3.0.36902 released in April 2020. Openbravo Java POS Openbravo POS was the first POS solution offered by Openbravo. It is a Java Point-of-Sale (POS) application for retail and hospitality businesses. The application originally came into existence as TinaPOS. For legal reasons the application was renamed to LibrePOS. In 2007 LibrePOS was acquired by Openbravo and has since been known by its current name. The program was completely integrated into Openbravo ERP. Through this integration it was possible to update stock levels, financial journals and customer data directly in the central database when a POS sale is executed in the stores. Openbravo POS can be used with PDAs for order intake. In July 2012 Openbravo launched its new POS solution, the Openbravo Web POS, included in the Openbravo Commerce Suite, which replaced Openbravo Java POS. Openbravo Java POS has been discontinued. Openbravo Business Suite The Openbravo Business Suite was launched in May 2015, replacing the previous Openbravo ERP Platform. It is a global management solution built on top of the Openbravo Technology Platform including horizontal ERP, CRM and BI functionality across industries.
Openbravo Commerce Suite The Openbravo Commerce Suite is Openbravo's solution for retailers. It is a multi-channel retail management solution including a responsive web and mobile POS (Openbravo Web POS) backed by comprehensive functionality for Merchandise Management, Supply Chain Management and Enterprise Management. Openbravo Subscription Management and Recurring Billing A commercial solution for companies with recurring billing revenue models, including functionality from pricing definition to automatic revenue recognition and accounting. Openbravo Commerce Cloud The current version of the Openbravo software provides a cloud-based SaaS unified commerce platform for midsize to large retail chains. See also Omnichannel Software as a service Cloud computing OMS Warehouse management system References Business software companies Point of sale companies Retail point of sale systems Cloud computing providers Development software companies Supply chain software companies Companies based in Navarre Software companies of Spain
Openbravo
[ "Technology" ]
1,644
[ "Retail point of sale systems", "Information systems" ]
14,676,653
https://en.wikipedia.org/wiki/Adenylyl-sulfate%20kinase
In enzymology, an adenylyl-sulfate kinase () is an enzyme that catalyzes the chemical reaction ATP + adenylyl sulfate ⇌ ADP + 3'-phosphoadenylyl sulfate Thus, the two substrates of this enzyme are ATP and adenylyl sulfate, whereas its two products are ADP and 3'-phosphoadenylyl sulfate. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with an alcohol group as acceptor. The systematic name of this enzyme class is ATP:adenylyl-sulfate 3'-phosphotransferase. Other names in common use include adenylylsulfate kinase (phosphorylating), 5'-phosphoadenosine sulfate kinase, adenosine 5'-phosphosulfate kinase, adenosine phosphosulfate kinase, adenosine phosphosulfokinase, adenosine-5'-phosphosulfate-3'-phosphokinase, and APS kinase. This enzyme participates in 3 metabolic pathways: purine metabolism, selenoamino acid metabolism, and sulfur metabolism. This enzyme contains an ATP binding P-loop motif. Structural studies As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes , , , , , , , , , , and . References Further reading Protein domains EC 2.7.1 Enzymes of known structure
Adenylyl-sulfate kinase
[ "Biology" ]
330
[ "Protein domains", "Protein classification" ]
14,676,686
https://en.wikipedia.org/wiki/William%20Lawrence%20Tower
William Lawrence Tower (22 December 1872– July 1967) was an American zoologist, born in Halifax, Massachusetts. He was educated at the Lawrence Scientific School (Harvard), the Harvard Graduate School, and the University of Chicago (B. S., 1902), where he taught thereafter, becoming associate professor in 1911. Research Tower was notable for his experimental work in heredity, investigating the inheritance of acquired characteristics and the laws of heredity in beetles and publishing An Investigation of Evolution in Chrysomelid Beetles of the Genus Leptinotarsa (1906). This study is probably the first (albeit possibly discredited) of mutation in animals. He published also The Development of the Colors and Color Patterns of Coleoptera (1903) and, with Coulter, Castle, Davenport and East, an essay on Heredity and Eugenics (1912). Tower was caught up in personal and professional scandals. He resigned from the University of Chicago in 1917 following a very public divorce, but by then he had become a source of discontent among students and faculty. His professed atheism caused offense to some, including graduate student Warder Clyde Allee. Tower caused political friction within the department and many members distrusted his professional ethics. Experimental results which Tower reported in 1906 and 1910 were found to include serious discrepancies which he declined to explain. His claim that experimental results had been lost in a fire increased his colleagues' skepticism. William Bateson, T. D. A. Cockerell, and R. A. Gortner were particularly critical of his work. A more positive reception came from the botanist Henry Chandler Cowles. It was suggested that his research may have been faked. The geneticist William E. Castle who visited Tower's laboratory was not impressed by the experimental conditions. He later concluded that Tower had faked his data. Castle found the fire suspicious and also Tower's claim that a steam leak in his greenhouse had destroyed all his beetle stocks. Publications The Development of the Colors and Color Patterns of Coleoptera (1903) An Investigation of Evolution in Chrysomelid Beetles of the Genus Leptinotarsa (1906) The Mechanism of Evolution in Leptinotarsa (1918) References Further reading Bateson, William. (1913). Problems of Genetics. Yale University Press. Cockerell, T. D. A. (1910). The Modification of Mendelian Inheritance by Extreme Conditions. American Naturalist 44: 747-749. Gortner, R. A. (1911). Studies on Melanin IV. The Origin of the Pigment and the Color Pattern in the Elytra of the Colorado Potato Beetle (Leptinotarsa decemlineata Say). American Naturalist 45: 743-755. Kohler, Robert E. (2002). Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. University of Chicago Press. External links Tower, William Lawrence, – Biodiversity Heritage Library 1872 births 1967 deaths Academic scandals American eugenicists American science writers American zoologists Harvard John A. Paulson School of Engineering and Applied Sciences alumni Lamarckism People from Halifax, Massachusetts People involved in scientific misconduct incidents Biology controversies University of Chicago alumni University of Chicago faculty
William Lawrence Tower
[ "Biology" ]
660
[ "Non-Darwinian evolution", "Biology theories", "Obsolete biology theories", "Lamarckism" ]
14,676,857
https://en.wikipedia.org/wiki/Architectural%20light%20shelf
A light shelf is a horizontal surface that reflects daylight deep into a building. Light shelves are placed above eye-level and have high-reflectance upper surfaces, which reflect daylight onto the ceiling and deeper into the space. Light shelves are typically used in high-rise and low-rise office buildings, as well as institutional buildings. This design is generally used on the equator-facing side of the building, which is where maximum sunlight is found, and as a result is most effective. Not only do light shelves allow light to penetrate through the building, they are also designed to shade near the windows, due to the overhang of the shelf, and help reduce window glare. Exterior shelves are generally more effective shading devices than interior shelves. A combination of exterior and interior shelves will work best in providing an even illumination gradient. Benefits Architectural light shelves have been proven to reduce the need for artificial lighting in buildings. Since they can reflect light deeper into a space, the use of incandescent and fluorescent lighting can be reduced or eliminated, depending on the space. Light shelves make it possible for daylight to penetrate the space up to 2.5 times the distance between the floor and the top of the window. Today, advanced light shelf technology makes it possible to increase the distance up to 4 times. In spaces such as classrooms and offices, light shelves have been proven to increase occupant comfort and productivity. Furthermore, incorporating light shelves in a building design is admissible for the LEED point system, falling under the “Indoor Environment Quality: Daylight & Views” category. Limitations Light shelves may not be suitable for all climates. They are generally used in mild climates and not in tropical or desert climates due to the intense solar heat gain. These hot climates, compared to mild climates, require very small window openings to reduce the amount of heat infiltration. The fact that light shelves extend a fair distance into a room may result in interference with sprinkler systems. In Canada, they cannot exceed 1200 mm (4 ft.) in width if sprinklers are present or the design will require integration with sprinkler system to cover the floor area under the light shelf. They also require a higher than average floor-to-ceiling height in order for them to be effective, or daylight may be inadvertently redirected into occupants' eyes. The distance into a space that light is cast is variable depending on both the time of day and the time of year. Light shelves also increase maintenance requirements and window coverings must be coordinated with light shelf design. Alternatives Alternatives to light shelves for window daylighting include blinds and louver systems, both of which can be interior or exterior. Blinds reduce solar gain, but do little to redirect light into the interior space. Exterior louver systems often rely on adjustments from either complex servo motors or building occupants throughout the day to operate well. Both of these systems can be unreliable at times, reducing the overall benefit of having a daylighting system. See also Architectural lighting design Daylighting References Architectural lighting design Architectural elements
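As a rough illustration of the daylight-penetration rule of thumb quoted above (about 2.5 times the floor-to-window-head distance, and up to about 4 times with advanced light-shelf designs), the short Python sketch below only shows the arithmetic; the window height and the resulting depths are hypothetical values, not measurements from any particular building:

def daylight_penetration(window_head_height_m: float, factor: float = 2.5) -> float:
    """Estimated depth of useful daylight, per the rule of thumb above."""
    return factor * window_head_height_m

# Hypothetical office with a 2.4 m window head height:
print(daylight_penetration(2.4))        # ~6.0 m with a conventional light shelf
print(daylight_penetration(2.4, 4.0))   # ~9.6 m with an advanced light shelf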
Architectural light shelf
[ "Technology", "Engineering" ]
617
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
14,677,231
https://en.wikipedia.org/wiki/Insert%20%28molecular%20biology%29
In molecular biology, an insert is a piece of DNA that is inserted into a larger DNA vector by a recombinant DNA technique, such as ligation or recombination. This allows it to be multiplied, selected, further manipulated or expressed in a host organism. Inserts can range from physical nucleotide additions made with a dedicated technique to the addition of artificial structures on a molecule via mutagenic chemicals, such as ethidium bromide or crystals. Inserts into the genome of an organism normally occur due to natural causes. These causes include environmental conditions and intracellular processes. Environmental inserts range from exposure to radiation such as ultraviolet light, mutagenic chemicals, or DNA viruses. Intracellular inserts can occur through heritable changes in parent cells or errors in DNA replication or DNA repair. Gene insertion techniques can be used to introduce characteristic mutations into an organism to obtain a desired phenotypic gene expression. A gene insert can be expressed in a wide variety of outcomes. These variants can range from the loss, or gain, of protein function to changes in physical traits such as hair or eye color. The goal of such changes in expression is usually a gain of protein function for regulation, or the termination of a cellular function for the prevention of disease. The results of the variations depend on where in the genome the addition, or mutation, is located. The aim is to learn, understand, and possibly predict the expression of genetic material in organisms using physical and chemical analysis. To see the results of genetic mutations, or inserts, techniques such as DNA sequencing, gel electrophoresis, immunoassay, or microscopy can be used to observe the mutation. History The field has expanded significantly since the 1973 publication by biochemists Stanley N. Cohen and Herbert W. Boyer, who used E. coli bacteria to learn how to cut fragments, rejoin different fragments, and insert the new genes. The field has expanded tremendously in terms of precision and accuracy since then. Computers and technology have made it easier to narrow error and expand understanding in this field: their high capacity for data and calculations made processing the large volumes of information practical, for example in ChIP and gene sequencing. Techniques and protocols Homology directed repair (HDR) is a technique that repairs breaks or lesions in DNA molecules. The most common technique for adding inserts to desired sequences is homologous recombination. This technique has a specific requirement: the insert can only be added after it has been introduced into the nucleus of the cell, and it can be added to the genome mostly during the G2 and S phases of the cell cycle. CRISPR gene editing CRISPR gene editing is based on clustered regularly interspaced short palindromic repeats (CRISPR) and the Cas9 enzyme, which uses guide sequences to help control, cleave, and separate specific DNA sequences that are complementary to a CRISPR sequence. These sequences and enzymes were originally derived from bacteriophages. The importance of this technique in the field of genetic engineering is that it allows highly precise, targeted gene editing, and its cost is low compared to other tools. Inserting DNA sequences into an organism with it is easy and fast, although it can run into expression issues in more complex organisms.
Transcription activator-like effector nuclease Transcription activator-like effector nucleases (TALENs) are a set of restriction enzymes that can be engineered to cut out desired DNA sequences. These enzymes are mostly used in combination with CRISPR-Cas9, zinc finger nucleases, or HDR. The main reason for this is that these enzymes have the precision to cut and separate the desired sequence within a gene. Zinc finger nuclease Zinc finger nucleases are genetically engineered enzymes created by fusing a zinc finger DNA-binding domain to a DNA-cleavage domain. These are also combined with CRISPR-Cas9 or TALENs to obtain a sequence-specific addition, or deletion, within the genome of more complex cells and organisms. Gene gun The gene gun, also known as a biolistic particle delivery system, is used to deliver transgenes, proteins, or RNA into the cell. It uses a micro-projectile delivery system that shoots heavy-metal particles coated with the DNA of interest into cells at high speed. The genetic material penetrates the cell and delivers its contents over an area. The use of micro-projectile delivery systems is a technique known as biolistics. References Molecular biology
Insert (molecular biology)
[ "Chemistry", "Biology" ]
939
[ "Biochemistry", "Molecular biology" ]
14,677,455
https://en.wikipedia.org/wiki/Selectfluor
Selectfluor, a trademark of Air Products and Chemicals, is a reagent in chemistry that is used as a fluorine donor. This compound is a derivative of the nucleophilic base DABCO. It is a colourless salt that tolerates air and even water. It has been commercialized for use in electrophilic fluorination. Preparation Selectfluor is synthesized by the N-alkylation of diazabicyclo[2.2.2]octane (DABCO) with dichloromethane in a Menshutkin reaction, followed by ion exchange with sodium tetrafluoroborate (replacing the chloride counterion for the tetrafluoroborate). The resulting salt is treated with elemental fluorine and sodium tetrafluoroborate: The cation is often depicted with one skewed ethylene ((CH2)2) group. In fact, these pairs of CH2 groups are eclipsed so that the cation has idealized C3h symmetry. Mechanism of fluorination Electrophilic fluorinating reagents could in principle operate by electron transfer pathways or an SN2 attack at fluorine. This distinction has not been decided. By using a charge-spin separated probe, it was possible to show that the electrophilic fluorination of stilbenes with Selectfluor proceeds through an SET/fluorine atom transfer mechanism. In certain cases Selectfluor can transfer fluorine to alkyl radicals. Applications The conventional source of "electrophilic fluorine", i.e. the equivalent to the superelectrophile F+, is gaseous fluorine, which requires specialised equipment for manipulation. Selectfluor reagent is a salt, the use of which requires only routine procedures. Like F2, the salt delivers the equivalent of F+. It is mainly used in the synthesis of organofluorine compounds: Specialized applications Selectfluor reagent also serves as a strong oxidant, a property that is useful in other reactions in organic chemistry. Oxidation of alcohols and phenols. As applied to electrophilic iodination, Selectfluor reagent activates the I–I bond in the I2 molecule. Related reagents Similar to Selectfluor are N-fluorosulfonimides: References Patents Reagents for organic chemistry Tetrafluoroborates Fluorinating agents Quaternary ammonium compounds Nitrogen heterocycles Organochlorides Substances discovered in the 1990s
Selectfluor
[ "Chemistry" ]
547
[ "Fluorinating agents", "Reagents for organic chemistry" ]
14,678,069
https://en.wikipedia.org/wiki/Nitronium%20tetrafluoroborate
Nitronium tetrafluoroborate is an inorganic compound with formula NO2BF4. It is a salt of nitronium cation and tetrafluoroborate anion. It is a colorless crystalline solid, which reacts with water to form the corrosive acids HF and HNO3. As such, it must be handled under water-free conditions. It is sparingly soluble in many organic solvents. Preparation Nitronium tetrafluoroborate can be prepared by adding a mixture of anhydrous hydrogen fluoride and boron trifluoride to a nitromethane solution of nitric acid or dinitrogen pentoxide. Applications Nitronium tetrafluoroborate is used in organic synthesis as an electrophilic nitrating agent and a mild oxidant. References Tetrafluoroborates Nitronium compounds
Nitronium tetrafluoroborate
[ "Chemistry" ]
190
[ "Nitronium compounds", "Salts", "Inorganic compounds", "Inorganic compound stubs" ]
14,678,502
https://en.wikipedia.org/wiki/Guanylate%20kinase
In enzymology, a guanylate kinase () is an enzyme that catalyzes the chemical reaction ATP + GMP ⇌ ADP + GDP Thus, the two substrates of this enzyme are ATP and GMP, whereas its two products are ADP and GDP. This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing groups (phosphotransferases) with a phosphate group as acceptor. This enzyme participates in purine metabolism. Guanylate kinase catalyzes the ATP-dependent phosphorylation of GMP into GDP. It is essential for recycling GMP and, indirectly, cGMP. In prokaryotes (such as Escherichia coli), lower eukaryotes (such as yeast) and in vertebrates, GK is a highly conserved monomeric protein of about 200 amino acids. GK has been shown to be structurally similar to protein A57R (or SalG2R) from various strains of Vaccinia virus. Systems biology analyses carried out by the team of Andreas Dräger also identified a pivotal role of this enzyme in the replication of SARS-CoV-2 within the human airways. Nomenclature The systematic name of this enzyme class is ATP:(d)GMP phosphotransferase. Other names in common use include deoxyguanylate kinase, 5'-GMP kinase, GMP kinase, guanosine monophosphate kinase, and ATP:GMP phosphotransferase. References Further reading EC 2.7.4 Enzymes of known structure Protein domains
Guanylate kinase
[ "Biology" ]
338
[ "Protein domains", "Protein classification" ]
14,678,600
https://en.wikipedia.org/wiki/Holo-%28acyl-carrier-protein%29%20synthase
In enzymology and molecular biology, a holo-[acyl-carrier-protein] synthase (ACPS, ) is an enzyme that catalyzes the chemical reaction: CoA-[4'-phosphopantetheine] + apo-acyl carrier protein ⇌ adenosine 3',5'-bisphosphate + holo-acyl carrier protein This enzyme belongs to the family of transferases, specifically those transferring non-standard substituted phosphate groups. It is also known as 4'-phosphopantetheinyl transferase after the group it transfers. Function All ACPS enzymes known so far are evolutionarily related to each other in a single superfamily of proteins. It transfers a 4'-phosphopantetheine (4'-PP) moiety from coenzyme A (CoA) to an invariant serine in an acyl carrier protein (ACP), a small protein responsible for acyl group activation in fatty acid biosynthesis. This post-translational modification renders holo-ACP capable of acyl group activation via thioesterification of the cysteamine thiol of 4'-PP. This superfamily consists of two subtypes: the trimeric ACPS type such as E. coli ACPS and the monomeric Sfp (PCP-synthesizing) type such as B. subtilis SFP. Structures from both families are now known. The active site accommodates a magnesium ion. The most highly conserved regions of the protein are involved in binding the magnesium ion. Nomenclature The systematic name of this enzyme class is CoA-[4'-phosphopantetheine]:apo-[acyl-carrier-protein] 4'-pantetheinephosphotransferase. Other names in common use, disregarding the synthetase/synthase spelling difference, include acyl carrier protein holoprotein synthetase, holo-ACP synthetase, coenzyme A:fatty acid synthetase apoenzyme 4'-phosphopantetheine, acyl carrier protein synthetase (ACPS), PPTase, acyl carrier protein synthase, P-pant transferase, and CoA:apo-[acyl-carrier-protein] pantetheinephosphotransferase. Structural studies As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes , , , , , , , and . References Further reading EC 2.7.8 Enzymes of known structure Protein families
Holo-(acyl-carrier-protein) synthase
[ "Biology" ]
546
[ "Protein families", "Protein classification" ]
14,678,604
https://en.wikipedia.org/wiki/Sindell%20v.%20Abbott%20Laboratories
Sindell v. Abbott Laboratories, (1980), was a landmark products liability decision of the Supreme Court of California which pioneered the doctrine of market share liability. Background The plaintiff in Sindell was a young woman who developed cancer as a result of her mother's use of the drug diethylstilbestrol (DES) during pregnancy. A large number of companies had manufactured DES around the time the plaintiff's mother used the drug. Since the drug was a fungible product and many years had passed, it was impossible for the plaintiff to identify the manufacturer(s) of the particular DES pills her mother had actually consumed. Decision In a 4-3 majority decision by Associate Justice Stanley Mosk, the court decided to impose a new kind of liability, known as market share liability. The doctrine evolved from a line of negligence and strict products liability opinions (most of which had been decided by the Supreme Court of California) that were being adopted as the majority rule in many U.S. states. The essential components of the theory are as follows: All defendants named in the suit are potential tortfeasors (that is, they did produce the harmful product at issue at some point in time) The product involved is fungible The plaintiff cannot identify which defendant produced the fungible product which harmed her in particular, through no fault of her own A substantial share of the manufacturers who produced the product during the relevant time period are named as defendants in the action If these requirements are met, a rebuttable presumption arises in favor of the plaintiff; if she can prove actual damages, then a court may order each defendant to pay a percentage of such damages equal to its share of the market for the product at the time the product was used. A manufacturer may rebut the presumption and reduce its market share damages to zero by showing that its product could not have possibly injured the plaintiff (for example, by demonstrating that it did not manufacture the product during the time period relevant for that particular plaintiff). Mosk later explained in an oral history interview that the court got the idea for market share liability from the Fordham Law Review comment cited extensively in the Sindell opinion. Dissent Associate Justice Frank K. Richardson wrote a dissent in which he accused the majority of judicial activism and argued that the judiciary should defer to the legislature, whose role it was to craft an appropriate solution to the problems presented by the unique nature of DES. Problems of doctrine Courts after Sindell have refused to apply the market share doctrine to products other than drugs such as DES. The argument centers on the fact that a product must be fungible to hold all producers equally liable for any harm. If the product was not fungible, then different production methods or gross negligence in manufacturing might imply that some manufacturers were actually more culpable than others, yet they would only be required to pay up to their share of the market. The time period over which the harm occurred is also an issue: in Skipworth v. Lead Industries Association 690 A.2d 169 (Pa. 1997), a 1997 Pennsylvania case, the plaintiffs complained of the use of lead-based paint in their house and brought suit against Lead Industries Association. 
The court refused to apply the market share theory because the house had stood for over a century and many manufacturers of lead-based paint had since gone out of business, while others named in the suit had not existed at the time the house was painted. The court also noted that lead-based paint was not a fungible product and therefore, some of the manufacturers may not have been responsible for Skipworth's injuries. Notes References Sources Epstein, Richard A. Cases and Materials on Torts, 8th edition. New York: Aspen Publishers, 2004 Gifford, Donald G. Suing the Tobacco and Lead Pigment Industries: Government Litigation as Public Health Prescription Ann Arbor: University of Michigan Press, 2010. External links Product liability case law Supreme Court of California case law United States tort case law 1980 in United States case law 1980 in California Drug safety
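The apportionment rule described above is simple arithmetic: each defendant that cannot rebut the presumption pays damages in proportion to its market share, while a defendant that proves its product could not have injured the plaintiff pays nothing. The Python sketch below uses invented manufacturer names, shares, and a damages figure purely to illustrate the calculation; it is not drawn from the facts of the case:

def apportion_damages(damages: float, market_shares: dict, exculpated=frozenset()) -> dict:
    """Market share liability: each non-exculpated defendant pays its share of the damages.
    market_shares maps defendant name -> fraction of the relevant market (0 to 1)."""
    return {
        name: (0.0 if name in exculpated else damages * share)
        for name, share in market_shares.items()
    }

# Hypothetical: $100,000 in damages, three DES makers named in the suit,
# one of which proves it did not sell the drug during the relevant period.
shares = {"Maker A": 0.45, "Maker B": 0.35, "Maker C": 0.20}
print(apportion_damages(100_000, shares, exculpated={"Maker C"}))
# {'Maker A': 45000.0, 'Maker B': 35000.0, 'Maker C': 0.0}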
Sindell v. Abbott Laboratories
[ "Chemistry" ]
815
[ "Drug safety" ]
14,679,376
https://en.wikipedia.org/wiki/Highly%20accelerated%20life%20test
A highly accelerated life test (HALT) is a stress testing methodology for enhancing product reliability in which prototypes are stressed to a much higher degree than expected from actual use in order to identify weaknesses in the design or manufacture of the product. Manufacturing and research and development organizations in the electronics, computer, medical, and military industries use HALT to improve product reliability. HALT can be effectively used multiple times over a product's life time. During product development, it can find design weakness earlier in the product lifecycle when changes are much less costly to make. By finding weaknesses and making changes early, HALT can lower product development costs and compress time to market. When HALT is used at the time a product is being introduced into the market, it can expose problems caused by new manufacturing processes. When used after a product has been introduced into the market, HALT can be used to audit product reliability caused by changes in components, manufacturing processes, suppliers, etc. Overview Highly accelerated life testing (HALT) techniques are important in uncovering many of the weak links of a new product. These discovery tests rapidly find weaknesses using accelerated stress conditions. The goal of HALT is to proactively find weaknesses and fix them, thereby increasing product reliability. Because of its accelerated nature, HALT is typically faster and less expensive than traditional testing techniques. HALT is a test technique called test-to-fail, where a product is tested until failure. HALT does not help to determine or demonstrate the reliability value or failure probability in field. Many accelerated life tests are test-to-pass, meaning they are used to demonstrate the product life or reliability. It is highly recommended to perform HALT in the initial phases of product development to uncover weak links in a product, so that there is better chance and more time to modify and improve the product. HALT uses several stress factors (decided by a Reliability Test Engineer) and/or the combination of various factors. Commonly used stress factors are temperature, vibration, and humidity for electronics and mechanical products. Other factors can include voltage, current, power cycling and combinations of them. Typical HALT procedures Environmental stresses are applied in a HALT procedure, eventually reaching a level significantly beyond that expected during use. The stresses used in HALT are typically hot and cold temperatures, temperature cycles, random vibration, power margining, and power cycling. The product under test is in operation during HALT and is continuously monitored for failures. As stress-induced failures occur, the cause should be determined, and if possible, the problem should be repaired so that the test can continue to find other weaknesses. Output of the HALT gives you: Multiple failure modes in the product before it is subjected to demonstration testing Operating limits of the product (upper and lower). These can be compared with a designer's margin or supplier specifications Destruct limits of the product (limit at which product functionality is lost and no recovery can be made) Test chambers A specialized environmental chamber is required for HALT. A suitable chamber also has to be capable of applying pseudo-random vibration with a suitable profile in relation to frequency. 
The HALT chamber should be capable of applying random vibration energy from 2 to 10,000 Hz in 6 degrees of freedom and temperatures from -100 to +200°C. Sometimes HALT chambers are called repetitive shock chambers because pneumatic air hammers are used to produce vibration. The chamber should also be capable of rapid changes in temperature, 50°C per minute should be considered a minimum rate of change. Usually high power resistive heating elements are used for heating and liquid nitrogen (LN2) is used for cooling. Fixtures Test fixtures must transmit vibration to the item under test. They must also be open in design or use air circulation to produce rapid temperature change to internal components. Test fixtures can use simple channels to attach the product to the chamber table or more complicated fixtures sometimes are fabricated. Monitoring and failure analysis The equipment under test must be monitored so that if the equipment fails under test, the failure is detected. Monitoring is typically performed with thermocouple sensors, vibration accelerometers, multimeters and data loggers. Common causes of failures during HALT are poor product design, workmanship, and poor manufacturing. Failures to individual components such as resistors, capacitors, diodes, printed circuit boards occur because of these issues. Failure types found during HALT testing are associated with the infant mortality region of the bathtub curve. Military application HALT is conducted before qualification testing. By catching failures early, flaws are found earlier in the acceptance process, eliminating repetitive later-stage reviews. See also Fault injection References Further reading External links HALT AND HASS: The Accepted Quality and Reliability Paradigm What is HALT/HASS Testing? A Beginners Guide to HALT Electronic engineering Product testing Environmental testing
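A common way to run HALT is as a step-stress sequence: hold the product at one stress level, monitor for failures, then step the stress further until the operating and destruct limits are found. The short Python sketch below only illustrates that stepping logic for a cold-temperature leg; the starting point, step size, and floor are made-up values, and unit_still_works() stands in for whatever monitoring (thermocouples, accelerometers, functional test) the real setup uses:

def cold_step_stress(start_c: float, step_c: float, floor_c: float, unit_still_works) -> dict:
    """Step the temperature down, recording the first level at which the unit fails
    (a stand-in for the lower operating limit) and the lowest level tested."""
    temp = start_c
    operating_limit = None
    while temp >= floor_c:
        if not unit_still_works(temp) and operating_limit is None:
            operating_limit = temp          # first temperature at which the unit fails
        temp -= step_c                      # step to the next, harsher level
    return {"operating_limit_c": operating_limit, "lowest_level_tested_c": floor_c}

# Hypothetical run: start at 20 degrees C, step in 10 degree increments down to -80,
# with a stand-in monitor that reports failure below -50 degrees C.
result = cold_step_stress(20, 10, -80, unit_still_works=lambda t: t > -50)
print(result)   # {'operating_limit_c': -50, 'lowest_level_tested_c': -80}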
Highly accelerated life test
[ "Technology", "Engineering" ]
956
[ "Computer engineering", "Reliability engineering", "Environmental testing", "Electronic engineering", "Electrical engineering" ]
14,679,497
https://en.wikipedia.org/wiki/Nicotinamide-nucleotide%20adenylyltransferase
In enzymology, nicotinamide-nucleotide adenylyltransferases (NMNAT) () are enzymes that catalyze the chemical reaction ATP + nicotinamide mononucleotide ⇌ diphosphate + NAD+ Thus, the two substrates of this enzyme are ATP and nicotinamide mononucleotide (NMN), whereas its two products are diphosphate and NAD+. This enzyme participates in nicotinate and nicotinamide metabolism. Humans have three protein isoforms: NMNAT1 (widespread), NMNAT2 (predominantly in brain), and NMNAT3 (highest in liver, heart, skeletal muscle, and erythrocytes). Mutations in the NMNAT1 gene lead to the LCA9 form of Leber congenital amaurosis. Mutations in NMNAT2 or NMNAT3 genes are not known to cause any human disease. NMNAT2 is critical for neurons: loss of NMNAT2 is associated with neurodegeneration. All NMNAT isoforms reportedly decline with age. Belongs to This enzyme belongs to the family of transferases, specifically those transferring phosphorus-containing nucleotide groups (nucleotidyltransferases). The systematic name of this enzyme class is ATP:nicotinamide-nucleotide adenylyltransferase. Other names in common use include NAD+ pyrophosphorylase, adenosine triphosphate-nicotinamide mononucleotide transadenylase, ATP:NMN adenylyltransferase, diphosphopyridine nucleotide pyrophosphorylase, nicotinamide adenine dinucleotide pyrophosphorylase, nicotinamide mononucleotide adenylyltransferase, and NMN adenylyltransferase. Structural studies As of late 2007, 11 structures have been solved for this class of enzymes, with PDB accession codes , , , , , , , , , , and . Isoform cellular localization The three protein isoforms have the following cellular localizations: NMNAT1: nucleus; NMNAT2: cytoplasm; NMNAT3: mitochondrion or cytoplasm. All three NMNATs compete for the NMN produced by NAMPT. Clinical significance Chronic inflammation due to obesity and other causes reduces NMNAT and NAD+ levels in many tissues. References EC 2.7.7 NADH-dependent enzymes Enzymes of known structure Anti-aging substances
Nicotinamide-nucleotide adenylyltransferase
[ "Chemistry", "Biology" ]
517
[ "Senescence", "Anti-aging substances" ]
14,680,221
https://en.wikipedia.org/wiki/Grazing%20marsh
Grazing marsh is a British Isles term for flat, marshy grassland in polders. It consists of large grass fields separated by fresh or brackish ditches, and is often important for its wildlife. History Grazing marshes were created from medieval times by building sea walls (earth banks) across tidal mudflats and salt marsh to make polders (though the term "polder" is little used in Britain). Polders in Britain are mostly drained by gravity, rather than active pumping. The original tidal drainage channels were augmented by new ditches, and flap valves in the sea walls let water drain out at low tide and prevent the sea or tidal river from entering at high tide. Constructing polders in this way is called inning or reclaiming from the sea. Grazing marshes have been made in most lowland estuaries in Britain, often leaving only the river channel and the lowest part of the estuary tidal. In a few cases (such as Newtown Harbour on the Isle of Wight, and Pagham Harbour in West Sussex) the sea walls have been breached, and the estuaries have returned to a tidal state. Grazing marshes have also been made on low-lying open coasts. Many grazing marshes were inned in stages, and the old sea walls (called counter walls) may be found marooned far from the current sea wall. Land levels on either side of a counter wall often differ by several metres. Paradoxically, the lower side is the land inned earlier, because sediment continued to build up on the side that remained tidal. Wildlife Wintering wildfowl are characteristic of grazing marshes, often including large flocks of Eurasian wigeon, brent goose, white-fronted goose and Bewick's swan. Many of these birds are hunted by predators such as peregrine and marsh harrier. In spring, waders such as common redshank, Eurasian curlew, snipe, and northern lapwing breed. The ditches often have a range of salinity, depending on how close to the sea wall they are. The more saline ditches host specialist brackish-water plants and animals. These include, for example, the rare brackish amphipod Gammarus insensibilis and sea club-rush (Bolboschoenus maritimus). Fresher ditches may support rare animals, such as the great silver water beetle (Hydrophilus piceus) and the great raft spider (Dolomedes plantarius), and a wide range of pondweeds (Potamogeton and relatives). The grassland vegetation usually has a fairly small number of species, but those present are often scarce elsewhere, such as sea arrowgrass (Triglochin maritimum), divided sedge (Carex divisa) and strawberry clover (Trifolium fragiferum). Conservation Many grazing marshes have been converted into arable land, often using pumped drainage to lower the water levels enough to grow crops, though most are used for grazing cattle. The low ditch levels and agricultural runoff combine to remove much of the aquatic wildlife, although the arable fields may still be used by some wintering wildfowl. Some areas of grazing marsh and other polder land have been used to recreate tidal habitats by a process of managed retreat. Many of the larger areas of grazing marsh bear nature conservation designations, including Site of Special Scientific Interest, Special Protection Area, Special Area of Conservation and Ramsar Site. 
Examples of grazing marsh Pevensey Levels in East Sussex Romney Marsh in Kent and East Sussex The Somerset Levels The Thames Estuary marshes in Kent and Essex Marshes along the River Wantsum in Kent—formerly the Wantsum Channel separating the Isle of Thanet from the mainland Moss Valley, Derbyshire References Ecology Agriculture in the United Kingdom
Grazing marsh
[ "Biology" ]
767
[ "Ecology" ]
14,680,451
https://en.wikipedia.org/wiki/First%20Alert
First Alert is the retail brand of American safety equipment manufacturer BRK Brands, Inc., established in 1976 and based in Aurora, Illinois, with a production plant in Juarez, Mexico. Products sold with the brand include carbon monoxide detectors, smoke alarms, fire extinguishers, and other safety products like flashlights and fire escape ladders. First Alert supports fire safety in partnership with Safe Kids USA and The United States Fire Administration, providing smoke alarms at reduced cost to low-income families in the United States. History 1958-company created by Burke-Roberts-Kimberlin (BRK) Electronics. The three-man team (Burke-Roberts-Kimberlin) invented the first battery-powered smoke detector 1964- Began commercial manufacturing of the first battery-powered smoke detector 1967- Pittway began manufacturing the alarms 1974-Sears begins selling the BRK model SS-74R battery powered smoke alarm 1992-Sold to T.H. Lee & Associates 1998-Sold to Sunbeam Corporation 2002-American Household, Inc. is formed from Sunbeam Corporation 2005-Jarden Corporation (NYSE: JAH) purchases American Household, Inc. 2006-BRK Brands/ First Alert becomes part of Jarden Branded Consumables. 2016-Newell Rubbermaid acquires Jarden, including BRK Brands, forming Newell Brands. 2022-Resideo Technologies acquires First Alert Awards 2009 DIY, Garden & Housewares "Silver" Industry Award in Security & Safety- Tundra [UK, Europe] 2008 Chicago Innovation Award- Tundra 2007 International Housewares Show "Best of Show"- Tundra 2006 Golden Hammer Gold Level Award Winner 2005 Golden Hammer Gold Level Award Winner 2004 Golden Hammer Gold Level Award Winner 2003 Golden Hammer Gold Level Award Winner 2002 SPARC Award Winner 2001 Popular Mechanics Editor's Choice Award for SA302 2001 Good Housekeeping "Good Buy" Award for SA302 1999 CHAMPS Award for winning marketing strategy in the consumer category 1999 EFFIE Award for "Be Safe...Replace" campaign, most effective advertising campaign in the health aids category 1997 Pinnacle Award for the Standard for Excellence Recalls First Alert branded fire extinguishers model FE1A10G with serial numbers beginning with RH, RK, RL, RP, RT, RU, or RW were recalled. Fire Extinguishers were sold from September 1999 through September 2000. On September 4, 1992, BRK recalled all hardwired smoke alarms under the series 1839I and 2839I due to testing programs determining that corrosion could form on the alarm horn's electrical contacts, causing the piezo to fail to make any noise. In May 2006, First Alert combination smoke alarms were recalled due to draining batteries rapidly. References External links Companies based in DuPage County, Illinois Manufacturing companies established in 1958 Fire protection Manufacturing companies based in Illinois 1958 establishments in Illinois 1992 mergers and acquisitions 1998 mergers and acquisitions 2005 mergers and acquisitions
First Alert
[ "Engineering" ]
596
[ "Building engineering", "Fire protection" ]
14,680,518
https://en.wikipedia.org/wiki/Polyphosphate%20kinase
In enzymology, a polyphosphate kinase (), or polyphosphate polymerase, is an enzyme that catalyzes the formation of polyphosphate from ATP, with chain lengths of up to a thousand or more orthophosphate moieties. ATP + (phosphate)n ⇌ ADP + (phosphate)n+1 Thus, the two substrates of this enzyme are ATP and polyphosphate [(phosphate)n], whereas its two products are ADP and polyphosphate extended by one phosphate moiety [(phosphate)n+1]. This enzyme is a membrane protein and goes through an intermediate stage during the reaction where it is autophosphorylated with a phosphate group covalently linked to a basic amino acyl residue through an N-P bond. Several enzymes catalyze polyphosphate polymerization. Some of these enzymes couple phosphotransfer to transmembrane transport. These enzyme/transporters are categorized in the Transporter Classification Database (TCDB) under the Polyphosphate Polymerase/YidH Superfamily (TC# 4.E.1) and are transferases that transfer phosphoryl groups (phosphotransferases) with polyphosphate as the acceptor. The systematic name of this enzyme class is ATP:polyphosphate phosphotransferase. This enzyme is also called polyphosphoric acid kinase. Families The Polyphosphate Polymerase Superfamily (TC# 4.E.1) includes the following families: 4.E.1 - The Vacuolar (Acidocalcisome) Polyphosphate Polymerase (V-PPP) Family 9.B.51 - The Uncharacterized DUF202/YidH (YidH) Family The Vacuolar (Acidocalcisome) Polyphosphate Polymerase (V-PPP) Family Eukaryotes contain inorganic polyphosphate (polyP) and acidocalcisomes, which sequester polyP and store amino acids and divalent cations. Gerasimaitė et al. showed that polyP produced in the cytosol of yeast is toxic. Reconstitution of polyP translocation with purified vacuoles, the acidocalcisomes of yeast, showed that cytosolic polyP cannot be imported whereas polyP produced by the vacuolar transporter chaperone (VTC) complex, an endogenous vacuolar polyP polymerase, is efficiently imported and does not interfere with growth. PolyP synthesis and import require an electrochemical gradient, probably as a (partial) driving force for polyP translocation. VTC exposes its catalytic domain to the cytosol and has nine vacuolar transmembrane segments (TMSs). Mutations in the VTC transmembrane regions, which may constitute the translocation channel, block not only polyP translocation but also synthesis. Since these mutations are far from the cytosolic catalytic domain of VTC, this suggests that the VTC complex obligatorily couples synthesis of polyP to its vesicular import in order to avoid toxic intermediates in the cytosol. The process therefore conforms to the classical definition of Group Translocation, where the substrate is modified during transport. Sequestration of otherwise toxic polyP may be one reason for the existence of this mechanism in acidocalcisomes. The vacuolar polyphosphate kinase (polymerase) is described in TCDB with family TC# 4.E.1. Function CYTH-like superfamily enzymes, which include polyphosphate polymerases, hydrolyze triphosphate-containing substrates and require metal cations as cofactors. They have a unique active site located at the center of an eight-stranded antiparallel beta barrel tunnel (the triphosphate tunnel). The name CYTH originated from the gene designation for bacterial class IV adenylyl cyclases (CyaB), and from thiamine triphosphatase (THTPA). Class IV adenylate cyclases catalyze the conversion of ATP to 3',5'-cyclic AMP (cAMP) and PPi.
Thiamine triphosphatase is a soluble cytosolic enzyme which converts thiamine triphosphate to thiamine diphosphate. This domain superfamily also contains RNA triphosphatases, membrane-associated polyphosphate polymerases, tripolyphosphatases, nucleoside triphosphatases, nucleoside tetraphosphatases and other proteins with unknown functions. The generalized reaction catalyzed by the vectorial polyphosphate polymerases is: ATP + (phosphate)n in the cytoplasm ADP + (phosphate)n+1 in the vacuolar lumen Structure VTC2 has three recognized domains: an N-terminal SPX domain, a large central CYTH-like domain and a smaller transmembrane VTC1 (DUF202) domain. The SPX domain is found in Syg1, Pho81, XPR1 (SPX), and related proteins. This domain is found at the amino termini of a variety of proteins. In the yeast protein, Syg1, the N-terminus directly binds to the G-protein beta subunit and inhibits transduction of the mating pheromone signal. Similarly, the N-terminus of the human XPR1 protein binds directly to the beta subunit of the G-protein heterotrimer, leading to increased production of cAMP. Thus, this domain is involved in G-protein associated signal transduction. The N-termini of several proteins involved in the regulation of phosphate transport, including the putative phosphate level sensors, Pho81 from Saccharomyces cerevisiae and NUC-2 from Neurospora crassa, have this domain. The SPX domains of the S. cerevisiae low-affinity phosphate transporters, Pho87 and Pho90, auto-regulate uptake and prevent efflux. This SPX-dependent inhibition is mediated by a physical interaction with Spl2. NUC-2 contains several ankyrin repeats. Several members of this family are annotated as XPR1 proteins: the xenotropic and polytropic retrovirus receptor confers susceptibility to infection with xenotropic and polytropic murine leukaemia viruses (MLV). Infection by these retroviruses can inhibit XPR1-mediated cAMP signaling and result in cell toxicity and death. The similarity between Syg1 phosphate regulators and XPR1 sequences has been noted, as has the additional similarity to several predicted proteins of unknown function, from Drosophila melanogaster, Arabidopsis thaliana, Caenorhabditis elegans, Schizosaccharomyces pombe, S. cerevisiae, and many other diverse organisms. As of 2015, several structures have been solved for this class of enzymes, with PDB accession codes , , , , , . The Uncharacterized DUF202/YidH (YidH) Family Members of the YidH Family are found in bacteria, archaea and eukaryotes. Members of this family include YidH of E. coli (TC# 9.B.51.1.1) which has 115 amino acyl residues and 3 TMSs of α-helical nature. The first TMS has a low level of hydrophobicity, the second has a moderate level of hydrophobicity, and the third has very hydrophobic character. These traits appear to be characteristic of all members of this family. A representative list of proteins belonging to this family can be found in the Transporter Classification Database. In fungi, a long homologue of 351 aas has a similar 3 TMS DUF202 domain at its extreme C-terminus. References Further reading EC 2.7.4 Enzymes of known structure Membrane proteins
Polyphosphate kinase
[ "Biology" ]
1,706
[ "Protein classification", "Membrane proteins" ]
14,680,805
https://en.wikipedia.org/wiki/Blohm%20%26%20Voss%20P%20178
The Blohm & Voss P 178 was a German jet-powered dive bomber/fighter-bomber of unusual asymmetric form, proposed during World War II. Overview This asymmetrically designed dive bomber had one Junkers Jumo 004B turbojet located under the wing to the starboard side of the fuselage. The pilot sat in a cockpit in the forward fuselage, with a large fuel tank located to the rear of the cockpit. Beneath the fuel tank, there was a deep recess in which an SC 500 bomb could be carried within the fuselage, or an SC 1000 bomb which would protrude slightly out of the fuselage. Two solid-fuel auxiliary rockets, used for take-off, extended from the rear. Two 15 mm (0.59 in) MG 151 cannons were located in the nose. Specifications See also List of German aircraft projects, 1939–45 References External links Secret Projects; Blohm und Voss P.178 Asymmetrical aircraft P 178 Abandoned military aircraft projects of Germany 1940s German attack aircraft
Blohm & Voss P 178
[ "Physics" ]
208
[ "Asymmetrical aircraft", "Symmetry", "Asymmetry" ]
14,680,977
https://en.wikipedia.org/wiki/Successive%20parabolic%20interpolation
Successive parabolic interpolation is a technique for finding the extremum (minimum or maximum) of a continuous unimodal function by successively fitting parabolas (polynomials of degree two) to a function of one variable at three unique points or, in general, a function of n variables at 1+n(n+3)/2 points, and at each iteration replacing the "oldest" point with the extremum of the fitted parabola. Advantages Only function values are used, and when this method converges to an extremum, it does so with an order of convergence of approximately 1.325. The superlinear rate of convergence is superior to that of other methods with only linear convergence (such as line search). Moreover, not requiring the computation or approximation of function derivatives makes successive parabolic interpolation a popular alternative to other methods that do require them (such as gradient descent and Newton's method). Disadvantages On the other hand, convergence (even to a local extremum) is not guaranteed when using this method in isolation. For example, if the three points are collinear, the resulting parabola is degenerate and thus does not provide a new candidate point. Furthermore, if function derivatives are available, Newton's method is applicable and exhibits quadratic convergence. Improvements Alternating the parabolic iterations with a more robust method (golden-section search is a popular choice) to choose candidates can greatly increase the probability of convergence without hampering the convergence rate. See also Inverse quadratic interpolation is a related method that uses parabolas to find roots rather than extrema. Simpson's rule uses parabolas to approximate definite integrals. References Numerical analysis Optimization algorithms and methods
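The iteration described above can be made concrete with a short sketch. The following Python code is an illustrative minimal implementation, not taken from any reference implementation; the function name, starting points, tolerance, and iteration cap are arbitrary choices, and no safeguard (such as the golden-section fallback mentioned under Improvements) is included, so convergence is not guaranteed.

def parabolic_minimize(f, x0, x1, x2, tol=1e-10, max_iter=100):
    # Three working points, oldest first.
    pts = [x0, x1, x2]
    for _ in range(max_iter):
        a, b, c = pts
        fa, fb, fc = f(a), f(b), f(c)
        # Vertex of the parabola through (a, fa), (b, fb), (c, fc).
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0.0:
            # Collinear points give a degenerate parabola; stop without a new candidate.
            break
        x_new = b - 0.5 * num / den
        if abs(x_new - c) < tol:
            # Successive estimates have stopped moving.
            return x_new
        # Replace the oldest point with the vertex of the fitted parabola.
        pts = [b, c, x_new]
    return pts[-1]

# Example: the minimum of (x - 2)**2 + 1 is at x = 2.
print(parabolic_minimize(lambda x: (x - 2) ** 2 + 1, 0.0, 1.0, 3.0))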
Successive parabolic interpolation
[ "Mathematics" ]
356
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
14,681,053
https://en.wikipedia.org/wiki/Metabolon
In biochemistry, a metabolon is a temporary structural-functional complex formed between sequential enzymes of a metabolic pathway, held together both by non-covalent interactions and by structural elements of the cell, such as integral membrane proteins and proteins of the cytoskeleton. The formation of metabolons allows the intermediate product from one enzyme to be passed (channelling) directly into the active site of the next consecutive enzyme of the metabolic pathway. The citric acid cycle is an example of a metabolon that facilitates substrate channeling. Another example is the dhurrin synthesis pathway in sorghum, in which the enzymes assemble as a metabolon in lipid membranes. During the functioning of metabolons, the amount of water needed to hydrate the enzymes is reduced and enzyme activity is increased. History The concept of structural-metabolic cellular complexes was first conceived in 1970 by A. M. Kuzin of the USSR Academy of Sciences, and adopted in 1972 by Paul A. Srere of the University of Texas for the enzymes of the citric acid cycle. This hypothesis was well accepted in the former USSR and further developed for the complex of glycolytic enzymes (Embden-Meyerhof-Parnas pathway) by B.I. Kurganov and A.E. Lyubarev. In the mid-1970s, the group of F.M. Clarke at the University of Queensland, Australia also worked on the concept. The name "metabolon" was first proposed in 1985 by Paul Srere during a lecture in Debrecen, Hungary. The case of fatty acid synthesis In Chaetomium thermophilum, a metabolon forms between fatty acid synthase and a megadalton-scale carboxylase; it was observed using chemical cross-linking coupled to mass spectrometry and visualized by cryo-electron microscopy. The fatty acid synthesis metabolon in C. thermophilum is highly flexible; although a high-resolution structure of fatty acid synthase itself could be obtained, this flexibility hindered high-resolution structure determination of the metabolon as a whole. Examples See also Enzyme kinetics Enzyme assay Enzyme catalysis References Metabolism Protein complexes
Metabolon
[ "Chemistry", "Biology" ]
453
[ "Biochemistry", "Metabolism", "Cellular processes" ]
14,681,529
https://en.wikipedia.org/wiki/STRETCH%20Assembly%20Program
STRETCH Assembly Program (STRAP) was the assembler for the IBM 7030 Stretch computer. The first version (STRAP-1) was a subset cross assembler that ran on the IBM 704, IBM 709, and IBM 7090 computers. The final version (STRAP-2) ran natively. External links IBM Reference Manual 704-709-7090 Programming Package for the IBM 7030 Data Processing System (PDF) STRAP I - assembler for IBM 7030/709 Assemblers IBM software IBM 700/7000 series
STRETCH Assembly Program
[ "Technology" ]
112
[ "Computing stubs", "Computer hardware stubs" ]
14,681,729
https://en.wikipedia.org/wiki/Child%20pyromaniac
A child pyromaniac is a child with an impulse-control disorder that is primarily distinguished by a compulsion to set fires in order to relieve built-up tension. Child pyromania is the rarest form of fire-setting. Most young children are not diagnosed with pyromania, but rather with conduct disorders. A key feature of pyromania is repeated association with fire without a real motive. Pyromania is not a commonly diagnosed disorder, and only occurs in about one percent of the population. It can occur in children as young as three years old. About ninety percent of the people officially diagnosed with pyromania are male. Pyromaniacs and people with other mental illnesses are responsible for about 14% of fires. Symptoms Many clinical studies have found that fire-setting rarely occurs by itself, but usually occurs in addition to other socially unacceptable behavior. The motives that have earned the most attention are pleasure, a cry for help, retaliation against adults, and a desire to reunite the family. Fire-setting among children and teens can be recurring or periodic. Some children and teens may set fires often to release tension. Others may only seek to set fires during times of great stress. Some of the symptoms of pyromania are depression, conflicts in relationships, and trouble coping with stress and anxiety. Diagnosis The Diagnostic and Statistical Manual of Mental Disorders, also known as the DSM, gives six standards that must be met for a child to be officially diagnosed with pyromania: The child has to have set more than one fire deliberately. Before setting the fire, the child must have felt some feelings of tension or arousal. The child must show that he or she is attracted to fire and anything related to fire. The child must feel a sense of relief or satisfaction from setting the fire and witnessing it. The child does not have other motives like revenge, financial gain, delusions, or brain damage for setting the fire. The fire-setting problem cannot be attributed to other disorders like anti-social personality disorder or conduct disorders. Even though fire-setting and pyromania are prevalent in children, these standards are hard to apply to their age group. Pyromania is rarely diagnosed in practice, largely because health care professionals have little experience with fire-setting. Comparison to child fire-setters There are many important distinctions between a child pyromaniac and a child fire-setter. In general, a fire-setter is any individual who feels the impulse to set a fire for unusual reasons. While a child fire-setter is usually curious about fire and has the desire to learn more about it, a child pyromaniac has an unusually bizarre impulse or desire to set intentional fires. Pyromania, also known as pathological fire-setting, occurs when the desire to set fires is repetitive and destructive to people or property. The most important difference between pyromania and fire-setting is that pyromania is a mental disorder, but fire-setting is simply a behavior and can be more easily fixed. Minor or non-severe fire-setting is defined as "accidental or occasional fire-starting behavior" by unsupervised children. Usually these fires are started when a curious child plays with matches, lighters, or small fires. Juveniles in this minor group average at most 2.5 accidental fires in their lifetime. Most children in this group are between five and ten years of age and do not realize the dangers of playing with fire.
Pathological fire-setting manifests when the action is "a deliberate, planned, and persistent behavior". Juveniles in this severe group set an average of about 5.3 fires. Most young children who set fires are diagnosed with conduct disorders rather than pyromania. Epidemiology There are two basic types of children that start fires. The first type is the curiosity fire-setter, who starts the fire just to find out what will happen. The second type is the problem fire-setter, who usually sets fires in response to changes in their environment or due to a conduct disorder. Causes Fire-setting is made up of five subcategories: the curious fire-setter, the sexually motivated fire-setter, the "cry for help" fire-setter, the "severely disturbed" group, and the rare form of pyromania. Pyromania usually surfaces in childhood, but there is no conclusive data about the average age of onset. Child pyromaniacs are usually filled with an uncontrollable urge to set fires to relieve tension. Not much is known about the genetic causes of pyromania, but many studies have explored the topic. The causes of fire-setting among young children and youths can be attributed to many factors, which are divided into individual and environmental factors: Individual factors Antisocial behaviors and attitudes: Children who set fires usually also commit other offenses, such as vandalism or violence, or display problem behaviors such as anger. Sensation seeking: Some children are attracted to fire-setting because they are bored and are looking for something to do. Attention seeking: Lighting a fire becomes a way to "get back" at adults and, in turn, produce a response from the adults. Lack of social skills: Some children simply have not been taught enough social skills. Many children and adolescents who have been discovered setting fires consider themselves to be "loners". Lack of fire-safety skills and ignorance of danger: This is what drives most children who do not display signs of pyromania, namely natural curiosity and ignorance of the fire's destructive power. Learning difficulties. Parental conflicts like separation, neglect, and abuse. Sexual abuse. Maltreatment. Environmental factors Poor supervision by parents or guardians. Seeing adults use fire inappropriately at an early age. Parental neglect. Parents abusing drugs or acting violently: studies of this factor conclude that fire-setting is more likely among children from homes where the parents abuse them. Peer pressure. Stressful life events: Fire-setting becomes a way to cope with crises. Treatment If a child is diagnosed with pyromania, there are treatment options despite the lack of scientific research on the genetic cause. Studies have shown that children with repeat cases of setting fires tend to respond better to a case-management approach than to a medical approach. The first crucial step for treatment should be parents sitting down with their child and having a one-on-one interview. The interview itself should try to determine which stresses on the family, methods of discipline, or other factors contribute to the child's uncontrollable desire to set fires. Some examples of treatment methods are problem-solving skills, anger management, communication skills, aggression replacement training, and cognitive restructuring.
According to recent studies, the chances that a child will recover from pyromania are very slim, but there are ways to channel the child's desire to set fires to relieve tension, for example through alternative activities such as playing a sport or an instrument. Another method of treatment is fire-safety education. At times, the best method of treatment is child counseling or a residential treatment center. However, since cases of child pyromania are so rare, there has not been enough research done on the success of these treatment methods. The most common and effective treatment of pyromania in children is behavioral modification. The results usually range from fair to poor. Behavioral modification seems to work on children with pyromaniac tendencies about 95% of the time. History Early studies into the causes of pyromania came from Freudian psychoanalysis. Around 1850, there were many arguments about the causes of pyromania. The main dispute was whether pyromania stems from a mental or genetic disorder or from a moral deficiency. Freud reasoned that fire-setting was an archaic desire to gain power over nature. The first study of fire-setting behavior in children was conducted in 1940 and is credited to Helen Yarnall, who compared fire-setting to fears of castration in male children and said that by setting a fire, some young males feel that they have gained power over adults. This 1940 study also introduced the idea that fire-setting and cruelty towards animals in childhood are good predictors of violent behavior in adult life. References Further reading External links Operation Extinguish Juvenile Firesetter Handbook Prevent Youth Firesetting Mental disorders diagnosed in childhood Fire
Child pyromaniac
[ "Chemistry" ]
1,733
[ "Combustion", "Fire" ]
14,682,074
https://en.wikipedia.org/wiki/Cytochrome%20c%20oxidase%20subunit%20III
Cytochrome c oxidase subunit III (COX3) is an enzyme that in humans is encoded by the MT-CO3 gene. It is one of the main transmembrane subunits of cytochrome c oxidase. It is also one of the three mitochondrial DNA (mtDNA) encoded subunits (MT-CO1, MT-CO2, MT-CO3) of respiratory complex IV. Variants of it have been associated with isolated myopathy, severe encephalomyopathy, Leber hereditary optic neuropathy, mitochondrial complex IV deficiency, and recurrent myoglobinuria. Structure The MT-CO3 gene produces a 30 kDa protein composed of 261 amino acids. COX3, the protein encoded by this gene, is a member of the cytochrome c oxidase subunit 3 family. This protein is located on the inner mitochondrial membrane. COX3 is a multi-pass transmembrane protein: in humans, it contains 7 transmembrane domains at positions 15–35, 42–59, 81–101, 127–147, 159–179, 197–217, and 239–259. Function Cytochrome c oxidase is the terminal enzyme of the respiratory chain of mitochondria and many aerobic bacteria. It catalyzes the transfer of electrons from reduced cytochrome c to molecular oxygen:

4 cytochrome c2+ + 4 H+ + O2 → 4 cytochrome c3+ + 2 H2O

This reaction is coupled to the pumping of four additional protons across the mitochondrial or bacterial membrane. Cytochrome c oxidase is an oligomeric enzymatic complex that is located in the mitochondrial inner membrane of eukaryotes and in the plasma membrane of aerobic prokaryotes. The core structure of prokaryotic and eukaryotic cytochrome c oxidase contains three common subunits, I, II and III. In prokaryotes, subunits I and III can be fused and a fourth subunit is sometimes found, whereas in eukaryotes there are a variable number of additional small subunits. As the bacterial respiratory systems are branched, they have a number of distinct terminal oxidases, rather than the single cytochrome c oxidase present in the eukaryotic mitochondrial systems. Although cytochrome o oxidases oxidize quinol (ubiquinol) rather than cytochrome c, they belong to the same haem-copper oxidase superfamily as cytochrome c oxidases. Members of this family share sequence similarities in all three core subunits: subunit I is the most conserved subunit, whereas subunit II is the least conserved. Clinical significance Mutations in mtDNA-encoded cytochrome c oxidase subunit genes have been associated with isolated myopathy, severe encephalomyopathy, Leber hereditary optic neuropathy, mitochondrial complex IV deficiency, and recurrent myoglobinuria. Leber hereditary optic neuropathy (LHON) LHON is a maternally inherited disease resulting in acute or subacute loss of central vision, due to optic nerve dysfunction. Cardiac conduction defects and neurological defects have also been described in some patients. LHON results from primary mitochondrial DNA mutations affecting the respiratory chain complexes. Mutations at positions 9438 and 9804, which result in glycine-78 to serine and alanine-200 to threonine amino acid changes, have been associated with this disease. Mitochondrial complex IV deficiency (MT-C4D) Complex IV deficiency (COX deficiency) is a disorder of the mitochondrial respiratory chain with heterogeneous clinical manifestations, ranging from isolated myopathy to severe multisystem disease affecting several tissues and organs.
Features include hypertrophic cardiomyopathy, hepatomegaly and liver dysfunction, hypotonia, muscle weakness, exercise intolerance, developmental delay, delayed motor development, mental retardation, lactic acidemia, encephalopathy, ataxia, and cardiac arrhythmia. Some affected individuals manifest a fatal hypertrophic cardiomyopathy resulting in neonatal death and a subset of patients manifest Leigh syndrome. The mutations G7970T and G9952A have been associated with this disease. Recurrent myoglobinuria mitochondrial (RM-MT) Recurrent myoglobinuria is characterized by recurrent attacks of rhabdomyolysis (necrosis or disintegration of skeletal muscle) associated with muscle pain and weakness, and followed by excretion of myoglobin in the urine. It has been associated with mitochondrial complex IV deficiency. Subfamilies Cytochrome o ubiquinol oxidase, subunit III Cytochrome aa3 quinol oxidase, subunit III Interactions COX3 has been shown to have 15 binary protein-protein interactions including 8 co-complex interactions. COX3 appears to interact with SNCA, KRAS, RAC1, and HSPB2. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Mitochondrial DNA-Associated Leigh Syndrome and NARP Protein domains Protein families Transmembrane proteins Human mitochondrial genes
Cytochrome c oxidase subunit III
[ "Biology" ]
1,093
[ "Protein families", "Protein domains", "Protein classification" ]
14,682,410
https://en.wikipedia.org/wiki/Leaning%20tower%20illusion
The leaning tower illusion is a visual illusion seen in a pair of identical images of the Leaning Tower of Pisa photographed from below. Although the images are duplicates, one has the impression that the tower on the right leans more, as if photographed from a different angle. The illusion was discovered by Frederick Kingdom, Ali Yoonessi and Elena Gheorghiu at McGill University, and won first prize in the Best Illusion of the Year Contest 2007. The authors suggest that the illusion occurs because of the way the visual system takes into account perspective. When two identical towers rise in parallel but are viewed from below, their corresponding outlines converge in the retinal image due to perspective. The visual system normally "corrects" for the perspective distortion and as a result perceives the towers correctly, i.e. as rising in parallel. However in the case of the two identical images of the Pisa tower, the corresponding outlines of the towers do not converge but run in parallel, and as a result the towers are perceived as non-parallel, i.e. as diverging. The illusion reveals that the visual system is obliged to treat the two images as part of the same scene, in other words as the "Twin Towers of Pisa". Although the Pisa tower demonstrates the illusion and provides a pun for its name, the illusion can be seen in any pair of (identical) images of a receding object. References Optical illusions
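The perspective explanation can be illustrated numerically. The following short Python sketch is not from the original study; the pinhole-camera model, focal length, and tower coordinates are assumed example values, chosen only to show that two towers which are parallel in the scene produce converging outlines in the image, whereas two copies of the same photograph keep a constant separation.

F = 1.0  # focal length of an idealized pinhole camera (arbitrary units)

def project(x, y, z):
    # Central projection of the 3-D point (x, y, z) onto the image plane z = F.
    return F * x / z, F * y / z

# Camera at the origin, tilted up towards the towers, so in camera coordinates
# each tower's top is both higher (y) and farther away (z) than its base.
# The two towers are identical and parallel, 3 units apart horizontally.
towers = {
    "left tower":  {"base": (-1.5, 1.0, 5.0), "top": (-1.5, 9.0, 12.0)},
    "right tower": {"base": ( 1.5, 1.0, 5.0), "top": ( 1.5, 9.0, 12.0)},
}

for name, t in towers.items():
    base_x, _ = project(*t["base"])
    top_x, _ = project(*t["top"])
    print(f"{name}: image x at base = {base_x:+.3f}, image x at top = {top_x:+.3f}")

# The horizontal gap between the towers shrinks from 0.60 at the base to 0.25
# at the top: parallel towers yield converging image outlines, so outlines that
# remain parallel in the image are interpreted as towers that diverge.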
Leaning tower illusion
[ "Physics" ]
288
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]