Galilean moons The Galilean moons (or Galilean satellites) are the four largest moons of Jupiter—Io, Europa, Ganymede, and Callisto. They were first seen by Galileo Galilei in December 1609 or January 1610, and recognized by him as satellites of Jupiter in March 1610. They were the first objects found to orbit a planet other than the Earth. They are among the largest objects in the Solar System: only the Sun and the eight planets are larger, and their radii exceed those of any of the dwarf planets. Ganymede is the largest moon in the Solar System, bigger even than the planet Mercury, though only around half as massive. The three inner moons—Io, Europa, and Ganymede—are in a 4:2:1 orbital resonance with each other. Because of their much smaller size, and therefore weaker self-gravitation, all of Jupiter's remaining moons have irregular forms rather than a spherical shape. The Galilean moons were observed in either 1609 or 1610, when Galileo made improvements to his telescope that enabled him to observe celestial bodies more distinctly than ever before. Galileo's observations showed the importance of the telescope as a tool for astronomers by proving that there were objects in space that cannot be seen by the naked eye. The discovery of celestial bodies orbiting something other than Earth dealt a serious blow to the then-accepted Ptolemaic world system, a geocentric theory in which everything orbits around Earth. Galileo initially named his discovery the Cosmica Sidera ("Cosimo's stars"), but the names that eventually prevailed were chosen by Simon Marius, who discovered the moons independently at nearly the same time as Galileo, on 8 January 1610. In his "Mundus Jovialis", published in 1614, Marius gave the moons their present names, derived from the lovers of Zeus, which had been suggested by Johannes Kepler. These were the only four known moons of Jupiter until the discovery of the "fifth moon of Jupiter" in 1892. 
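The 4:2:1 resonance means the mean motions of Io, Europa, and Ganymede are locked near a 4:2:1 ratio, i.e. their orbital periods are near 1:2:4. A minimal numerical check, using approximate published orbital periods (these values are not from this article):

```python
# Quick check of the 4:2:1 (Laplace) resonance of the three inner moons.
# Orbital periods in days are approximate published values (assumed here).
periods_days = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

p_io = periods_days["Io"]
# Periods relative to Io's: near 1, 2, and 4, so mean motions are near 4:2:1.
ratios = {name: p / p_io for name, p in periods_days.items()}

for name, r in ratios.items():
    print(f"{name}: {r:.3f}")
```

The ratios come out close to, but not exactly, 2 and 4; the resonance is maintained dynamically rather than being a perfect integer lock.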
As a result of improvements Galileo Galilei made to the telescope, with a magnifying capability of 20×, he was able to see celestial bodies more distinctly than was previously possible. This allowed Galileo to observe, in either December 1609 or January 1610, what came to be known as the Galilean moons. On January 7, 1610, Galileo wrote a letter containing the first mention of Jupiter's moons. At the time, he saw only three of them, and he believed them to be fixed stars near Jupiter. He continued to observe these celestial orbs from January 8 to March 2, 1610. In these observations, he discovered a fourth body, and also observed that the four were not fixed stars, but rather were orbiting Jupiter. Galileo's discovery proved the importance of the telescope as a tool for astronomers by showing that there were objects in space to be discovered that until then had remained unseen by the naked eye. More importantly, the discovery of celestial bodies orbiting something other than Earth dealt a blow to the then-accepted Ptolemaic world system, which held that Earth was at the center of the universe and all other celestial bodies revolved around it. Galileo's "Sidereus Nuncius" ("Starry Messenger"), which announced celestial observations through his telescope, does not explicitly mention Copernican heliocentrism, a theory that placed the Sun at the center of the universe. Nevertheless, Galileo accepted the Copernican theory. A Chinese historian of astronomy, Xi Zezong, has claimed that a "small reddish star" observed near Jupiter in 362 BCE by Chinese astronomer Gan De may have been Ganymede. If true, this might predate Galileo's discovery by around two millennia. Simon Marius is another noted early observer; he later reported having observed the moons in 1609. However, because he did not publish these findings until after Galileo, there is a degree of uncertainty around his records. 
In 1605, Galileo had been employed as a mathematics tutor for Cosimo de' Medici. In 1609, Cosimo became Grand Duke Cosimo II of Tuscany. Galileo, seeking patronage from his now-wealthy former student and his powerful family, used the discovery of Jupiter's moons to gain it. On February 13, 1610, Galileo wrote to the Grand Duke's secretary: "God graced me with being able, through such a singular sign, to reveal to my Lord my devotion and the desire I have that his glorious name live as equal among the stars, and since it is up to me, the first discoverer, to name these new planets, I wish, in imitation of the great sages who placed the most excellent heroes of that age among the stars, to inscribe these with the name of the Most Serene Grand Duke." Galileo asked whether he should name the moons the "Cosmian Stars", after Cosimo alone, or the "Medician Stars", which would honor all four brothers in the Medici clan. The secretary replied that the latter name would be best. On March 12, 1610, Galileo wrote his dedicatory letter to the Duke of Tuscany, and the next day sent a copy to the Grand Duke, hoping to obtain the Grand Duke's support as quickly as possible. On March 19, he sent the telescope he had used to first view Jupiter's moons to the Grand Duke, along with an official copy of "Sidereus Nuncius" ("The Starry Messenger") that, following the secretary's advice, named the four moons the Medician Stars. In his dedicatory introduction, Galileo wrote: Scarcely have the immortal graces of your soul begun to shine forth on earth than bright stars offer themselves in the heavens which, like tongues, will speak of and celebrate your most excellent virtues for all time. Behold, therefore, four stars reserved for your illustrious name ... which ... make their journeys and orbits with a marvelous speed around the star of Jupiter ... like children of the same family ... 
Indeed, it appears the Maker of the Stars himself, by clear arguments, admonished me to call these new planets by the illustrious name of Your Highness before all others. Galileo initially called his discovery the Cosmica Sidera ("Cosimo's stars"), in honour of Cosimo II de' Medici (1590–1621). At Cosimo's suggestion, Galileo changed the name to Medicea Sidera ("the Medician stars"), honouring all four Medici brothers (Cosimo, Francesco, Carlo, and Lorenzo). The discovery was announced in the "Sidereus Nuncius" ("Starry Messenger"), published in Venice in March 1610, less than two months after the first observations. The names that eventually prevailed were chosen by Simon Marius, who discovered the moons independently at the same time as Galileo: he named them, at the suggestion of Johannes Kepler, after lovers of the god Zeus (the Greek equivalent of Jupiter): "Io", "Europa", "Ganymede" and "Callisto", in his "Mundus Jovialis", published in 1614. Galileo steadfastly refused to use Marius' names and as a result invented the numbering scheme that is still used today, in parallel with proper moon names. The numbers run from Jupiter outward, thus I, II, III and IV for Io, Europa, Ganymede, and Callisto respectively. Galileo used this system in his notebooks but never actually published it. The numbered names (Jupiter "x") were used until the mid-20th century, when other inner moons were discovered and Marius' names became widely used. Galileo was able to develop a method of determining longitude based on the timing of the orbits of the Galilean moons. The times of the eclipses of the moons could be precisely calculated in advance and compared with local observations on land or on ship to determine the local time and hence longitude. The main problem with the technique was that it was difficult to observe the Galilean moons through a telescope on a moving ship, a problem that Galileo tried to solve with the invention of the celatone. 
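The longitude method reduces to simple arithmetic: an eclipse of a moon is seen at the same instant everywhere, so the difference between the locally observed clock time and the predicted time at a reference meridian gives the observer's longitude, since the Earth rotates 15 degrees per hour. A minimal sketch (the function name and interface are illustrative, not historical):

```python
# Galileo's eclipse-timing idea as arithmetic: Earth turns 360 degrees
# in 24 hours, i.e. 15 degrees of longitude per hour of time difference.

def longitude_from_eclipse(local_time_h: float, reference_time_h: float) -> float:
    """Longitude east of the reference meridian, in degrees.

    local_time_h: local solar time at which the eclipse is observed (hours)
    reference_time_h: predicted time of the same eclipse at the reference meridian (hours)
    """
    return 15.0 * (local_time_h - reference_time_h)

# An observer who sees an eclipse at 22:00 local time that the tables
# predict for 20:00 at the reference meridian is 30 degrees east of it.
print(longitude_from_eclipse(22.0, 20.0))  # -> 30.0
```

The hard part in practice was not the arithmetic but obtaining a steady telescopic view of the moons and an accurate local time, which is why the method worked far better on land than at sea.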
The method was used by Giovanni Domenico Cassini and Jean Picard to re-map France. Some models predict that there may have been several generations of Galilean satellites in Jupiter's early history. Each generation of moons to have formed would have spiraled into Jupiter and been destroyed, due to tidal interactions with Jupiter's proto-satellite disk, with new moons forming from the remaining debris. By the time the present generation formed, the gas in the proto-satellite disk had thinned out to the point that it no longer greatly interfered with the moons' orbits. Other models suggest that the Galilean satellites formed in a proto-satellite disk in which formation timescales were comparable to or shorter than orbital migration timescales. Io is anhydrous and likely has an interior of rock and metal. Europa is thought to contain 8% ice and water by mass, with the remainder rock. The moons are, in increasing order of distance from Jupiter, Io, Europa, Ganymede, and Callisto. Io (Jupiter I) is the innermost of the four Galilean moons of Jupiter and, with a diameter of 3642 kilometers, the fourth-largest moon in the Solar System. It was named after Io, a priestess of Hera who became one of the lovers of Zeus. Nevertheless, it was simply referred to as "Jupiter I", or "the first satellite of Jupiter", until the mid-20th century. With over 400 active volcanoes, Io is the most geologically active object in the Solar System. Its surface is dotted with more than 100 mountains, some of which are taller than Earth's Mount Everest. Unlike most satellites in the outer Solar System (which have a thick coating of ice), Io is primarily composed of silicate rock surrounding a molten iron or iron sulfide core. Although not proven, recent data from the Galileo orbiter indicate that Io might have its own magnetic field. Io has an extremely thin atmosphere made up mostly of sulfur dioxide (SO2). 
If a lander or data-collection vessel were to land on Io in the future, it would have to be extremely tough (similar to the tank-like bodies of the Soviet Venera landers) to survive the radiation and magnetic fields that originate from Jupiter. Europa (Jupiter II), the second of the four Galilean moons, is the second closest to Jupiter and the smallest at 3121.6 kilometers in diameter, which is slightly smaller than the Moon. The name comes from a mythical Phoenician noblewoman, Europa, who was courted by Zeus and became the queen of Crete, though the name did not become widely used until the mid-20th century. It has a smooth and bright surface, with a layer of water surrounding the mantle of the moon, thought to be 100 kilometers thick. The smooth surface includes a layer of ice, while the bottom of the ice is theorized to be liquid water. The apparent youth and smoothness of the surface have led to the hypothesis that a water ocean exists beneath it, which could conceivably serve as an abode for extraterrestrial life. Heat energy from tidal flexing ensures that the ocean remains liquid and drives geological activity. Life may exist in Europa's under-ice ocean. So far, there is no evidence that life exists on Europa, but the likely presence of liquid water has spurred calls to send a probe there. The prominent markings that criss-cross the moon seem to be mainly albedo features, which emphasize low topography. There are few craters on Europa because its surface is tectonically active and young. Some theories suggest that Jupiter's gravity is causing these markings, as one side of Europa is constantly facing Jupiter. Eruptions of water splitting the surface of Europa, and even geysers, have also been considered as a cause. The color of the markings, reddish-brown, is theorized to be caused by sulfur, but scientists cannot confirm that, because no data collection devices have been sent to Europa. 
Europa is primarily made of silicate rock and likely has an iron core. It has a tenuous atmosphere composed primarily of oxygen. Ganymede (Jupiter III), the third Galilean moon, is named after the mythological Ganymede, cupbearer of the Greek gods and Zeus's beloved. Ganymede is the largest natural satellite in the Solar System at 5262.4 kilometers in diameter, which makes it larger than the planet Mercury, although only about half as massive, since Ganymede is an icy world. It is the only satellite in the Solar System known to possess a magnetosphere, likely created through convection within the liquid iron core. Ganymede is composed primarily of silicate rock and water ice, and a salt-water ocean is believed to exist nearly 200 km below Ganymede's surface, sandwiched between layers of ice. The metallic core of Ganymede suggests a greater heat at some time in its past than had previously been proposed. The surface is a mix of two types of terrain—highly cratered dark regions and younger, but still ancient, regions with a large array of grooves and ridges. Ganymede has a high number of craters, but many are gone or barely visible because its icy crust has formed over them. The satellite has a thin oxygen atmosphere that includes O, O2, and possibly O3 (ozone), and some atomic hydrogen. Callisto (Jupiter IV) is the fourth and last Galilean moon, and the second-largest of the four. At 4820.6 kilometers in diameter, it is the third-largest moon in the Solar System, only slightly smaller than Mercury, though only a third of the latter's mass. It is named after the Greek mythological nymph Callisto, a lover of Zeus who was a daughter of the Arkadian king Lykaon and a hunting companion of the goddess Artemis. The moon does not form part of the orbital resonance that affects the three inner Galilean satellites and thus does not experience appreciable tidal heating. 
Callisto is composed of approximately equal amounts of rock and ices, which makes it the least dense of the Galilean moons. It is one of the most heavily cratered satellites in the Solar System, and one major feature is a basin around 3000 km wide called Valhalla. Callisto is surrounded by an extremely thin atmosphere composed of carbon dioxide and probably molecular oxygen. Investigation revealed that Callisto may have a subsurface ocean of liquid water at depths less than 300 kilometres. The likely presence of an ocean within Callisto indicates that it could harbour life, though this is less likely than on nearby Europa. Callisto has long been considered the most suitable place for a human base for future exploration of the Jupiter system, since it is farthest from the intense radiation of Jupiter. Fluctuations in the orbits of the moons indicate that their mean density decreases with distance from Jupiter. Callisto, the outermost and least dense of the four, has a density intermediate between ice and rock, whereas Io, the innermost and densest moon, has a density intermediate between rock and iron. Callisto has an ancient, heavily cratered and unaltered ice surface, and the way it rotates indicates that its density is evenly distributed, suggesting that it has no rocky or metallic core but consists of a homogeneous mix of rock and ice. This may well have been the original structure of all the moons. The rotation of the three inner moons, in contrast, indicates differentiation of their interiors with denser matter at the core and lighter matter above. They also reveal significant alteration of the surface. Ganymede reveals past tectonic movement of the ice surface, which required partial melting of subsurface layers. Europa reveals more dynamic and recent movement of this nature, suggesting a thinner ice crust. Finally, Io, the innermost moon, has a sulfur surface, active volcanism and no sign of ice. 
All this evidence suggests that the nearer a moon is to Jupiter, the hotter its interior. The current model is that the moons experience tidal heating as a result of the gravitational field of Jupiter in inverse proportion to the square of their distance from the giant planet. In all but Callisto this will have melted the interior ice, allowing rock and iron to sink to the interior and water to cover the surface. In Ganymede a thick and solid ice crust then formed. In warmer Europa a thinner, more easily broken crust formed. In Io the heating is so extreme that all the rock has melted and water has long ago boiled out into space. Jupiter's regular satellites are believed to have formed from a circumplanetary disk, a ring of accreting gas and solid debris analogous to a protoplanetary disk. They may be the remnants of a score of Galilean-mass satellites that formed early in Jupiter's history. Simulations suggest that, while the disk had a relatively high mass at any given moment, over time a substantial fraction (several tens of percent) of the mass of Jupiter captured from the Solar nebula was processed through it. However, a disk mass of only 2% of that of Jupiter is required to explain the existing satellites. Thus there may have been several generations of Galilean-mass satellites in Jupiter's early history. Each generation of moons would have spiraled into Jupiter, due to drag from the disk, with new moons then forming from new debris captured from the Solar nebula. By the time the present (possibly fifth) generation formed, the disk had thinned out to the point that it no longer greatly interfered with the moons' orbits. The current Galilean moons were still affected, falling into and being partially protected by an orbital resonance which still exists for Io, Europa, and Ganymede. Ganymede's larger mass means that it would have migrated inward at a faster rate than Europa or Io. 
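The inverse-square scaling described above can be sketched numerically. This is a minimal illustration of that one stated relationship only; real tidal dissipation also depends on eccentricity, internal structure, and the resonance, and the semi-major axes used here are approximate published values assumed for the example, not figures from this article:

```python
# A minimal sketch of the inverse-square heating scaling stated in the text.
# Semi-major axes (km) are approximate published values (assumed).
semi_major_axis_km = {
    "Io": 421_700,
    "Europa": 671_034,
    "Ganymede": 1_070_412,
    "Callisto": 1_882_709,
}

a_io = semi_major_axis_km["Io"]
# Heating relative to Io, taking heating proportional to 1/a**2 as stated.
relative_heating = {name: (a_io / a) ** 2 for name, a in semi_major_axis_km.items()}

for name, h in relative_heating.items():
    print(f"{name}: {h:.3f}")
```

Even under this crude scaling, Callisto receives only a few percent of Io's heating, consistent with its undifferentiated, geologically quiet interior.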
All four Galilean moons are bright enough to be viewed from Earth without a telescope, if only they could appear farther away from Jupiter. (They are, however, easily distinguished with even low-powered binoculars.) They have apparent magnitudes between 4.6 and 5.6 when Jupiter is in opposition with the Sun, and are about one magnitude dimmer when Jupiter is in conjunction. The main difficulty in observing the moons from Earth is their proximity to Jupiter, since they are obscured by its brightness. The maximum angular separations of the moons are between 2 and 10 arcminutes from Jupiter, which is close to the limit of human visual acuity. Ganymede and Callisto, at their maximum separation, are the likeliest targets for potential naked-eye observation.
https://en.wikipedia.org/wiki?curid=12505
Gloria Gaynor Gloria Gaynor (born September 7, 1943) is an American singer, best known for the disco era hits "I Will Survive" (Hot 100 number 1, 1979), "Never Can Say Goodbye" (Hot 100 number 9, 1974), "Let Me Know (I Have a Right)" (Hot 100 number 42, 1980) and "I Am What I Am" (R&B number 82, 1983). Gaynor was born Gloria Fowles in Newark, New Jersey, to Daniel Fowles and Queenie Mae Proctor. Her grandmother lived nearby and was involved in her upbringing. "There was always music in our house", Gaynor wrote in her autobiography, "I Will Survive". She enjoyed listening to the radio, and to records by Nat King Cole and Sarah Vaughan. Her father played the ukulele and guitar and sang professionally in nightclubs with a group called Step 'n' Fetchit. Gloria grew up a tomboy: she had five brothers, and one sister. Her brothers sang gospel and formed a quartet with a friend. Gaynor was not allowed to sing with the all-male group, nor was her younger brother Arthur, because Gloria was a girl and he was too young. Arthur later acted as a tour manager for Gaynor. The family was relatively poor, but Gaynor recalls the house being filled with laughter and happiness, and the dinner table being open to neighborhood friends. They moved to a housing project in 1960, where Gaynor attended South Side High School. She graduated in 1961. "All through my young life I wanted to sing, although nobody in my family knew it", Gaynor wrote in her autobiography. Gaynor began singing in a night club in Newark, where she was recommended to a local band by a neighbor. After several years of performing in local clubs and along the east coast, Gaynor began her recording career in 1971 at Columbia Records. Gaynor was a singer with the Soul Satisfiers, a jazz/R&B music band, in the 1960s. She recorded "She'll Be Sorry/Let Me Go Baby" (for the first time as "Gloria Gaynor") in 1965, for Johnny Nash's Jocida label. 
Her first real success came in 1973, when she was signed to Columbia Records by Clive Davis. The fruit of that deal was the release of the flop single "Honey Bee". Moving on to MGM Records, she finally hit with the album "Never Can Say Goodbye". The first side of the album consisted of three songs ("Honey Bee", "Never Can Say Goodbye" and "Reach Out, I'll Be There") with no break between the songs. This 19-minute dance marathon proved to be enormously popular, especially at dance clubs. All three songs were released as singles via radio edits, and all of them became hits. The album was instrumental in introducing disco music to the public, with "Never Can Say Goodbye" becoming the first song to top "Billboard" magazine's dance chart. It was also a hit on the mainstream pop charts, peaking at #9, and on the R&B charts, reaching #34 (the original version by The Jackson 5 had been a #2 hit on the Hot 100 in 1971). It also marked her first significant chart success internationally, making it into the Top 5 in Australia, Canada, Germany and the UK. The song would go on to be certified silver by the British Phonographic Industry, and subsequently gold in the US. Capitalizing on the success of her first album, she quickly released her second album, "Experience Gloria Gaynor", later that same year. Some of her lesser-known singles (due to a lack of recurrent airplay) — including "Honey Bee" (1974), "Casanova Brown" (1975), and "Let's Make A Deal" (1976), as well as her cover of The Four Tops' "Reach Out, I'll Be There" (which also became a #14 hit on the UK Singles Chart) — became hits in the clubs and reached the Top 5 on "Billboard"'s disco charts. Many charted on the Hot 100 and R&B charts as well, with songs like "(If You Want It) Do It Yourself" — a #1 disco hit — peaking at #98 on the pop charts and #24 on the R&B charts. 
Her cover of "How High the Moon" topped the US dance charts, and made the lower parts of both the pop and R&B charts, as well as achieving some international chart success. After her 1976 album, "I've Got You", she moved on from her hit production team to work with other producers. Gaynor has recorded some 16 albums since, including one in England, one in Germany, and two in Italy. In the next few years, Gaynor released the albums "Glorious" and "Gloria Gaynor's Park Avenue Sound", but would only enjoy a few more moderate hits. However, in late 1978, with the release of her album "Love Tracks", she climbed the pop charts again with her smash hit single "I Will Survive". The lyrics of this song are written from the point of view of a woman, recently dumped, telling her former lover that she can cope without him and does not want anything more to do with him. The song has become something of an anthem of female emancipation. "I Will Survive" was originally the B-side when Polydor Records released it in late 1978. The A-side, a song called "Substitute", then a recent worldwide hit for the South African girl group Clout, was considered more "radio friendly". Boston disco radio DJ Jack King turned the record over and recalled being stunned by what he heard. "I couldn't believe they were burying this monster hit on the B-side", said King. "I played it and played it and my listeners went nuts!" This massive audience response forced the record company to flip the songs, so that subsequent copies of the single listed the more popular song on the A-side. King was honored at New York's "Disco Masters Awards Show" for three consecutive years (1979–1981) in recognition of his relentless push of the song. The song received a Grammy Award for Best Disco Recording in 1980, the only year that award was given (Gaynor had to wait another 40 years for her second Grammy, in the Best Roots Gospel Album category). 
It is ranked #492 on "Rolling Stone" magazine's list of "The 500 Greatest Songs of All Time", and at #97 on "Billboard" magazine's "All-Time Hot 100". In 2000, the song was ranked #1 in VH1's list of the "100 Greatest Dance Songs" of all time. As a disco number, the song was unique for its time by virtue of Gaynor's having no background singers or lush production. And, unlike her first disco hits, the track was not pitched up to make it faster and to render Gaynor's recorded voice in a higher register than that in which she actually sang. Most disco hits at the time were heavily produced, with multiple voices, overdubs, and adjustments to pitch and speed. "I Will Survive" had a much more spare and "clean" sound. Had it originally been planned and released as an A-side, it would almost certainly have undergone a substantially more heavy-handed remix. In late 1979, she released the album "I Have a Right", which contained her next disco hit, "Let Me Know (I Have a Right)", featuring Doc Severinsen of "The Tonight Show" fame on trumpet solo. Gaynor also recorded a disco song called "Love Is Just a Heartbeat Away" in 1979 for the cult vampire movie "", which featured a number of disco songs. In 1980 and again in 1981, Gaynor released two disco albums which were virtually ignored in the United States due to the backlash against disco, which began late in 1979. The albums' singles barely registered on urban contemporary radio, where disco music remained popular. In 1982, having looked into a wide variety of faiths and religious movements, she became a Christian and began to distance herself from a past she considered to be sinful. That year, she released an album entitled "Gloria Gaynor", in which she rejected disco for mid-tempo R&B and pop-style songs. Gaynor would achieve her final success in the 1980s with the release of her album "I Am Gloria Gaynor" in 1984. 
This was mainly due to the song "I Am What I Am", which became a hit at dance clubs, and then on the Club Play chart in late 1983/early 1984. "I Am What I Am" became a gay anthem and made Gaynor a gay icon. Her 1986 album, "The Power of Gloria Gaynor", was almost entirely composed of cover versions of songs that were popular at the time. Gaynor supports LGBT rights. Gaynor's career received a revitalizing spark in the early and mid-1990s with the worldwide disco revival movement. During the late 1990s, she dabbled in acting for a while, guest starring on "The Wayans Bros.", "That '70s Show" (singing "I Will Survive"), and "Ally McBeal", before doing a limited engagement performance in Broadway's "Smokey Joe's Cafe". In 2001, Gaynor performed "I Will Survive" at the 30th Anniversary Concert for Michael Jackson. Gaynor returned to the recording studio in 2002, releasing her first album in over 15 years, entitled "I Wish You Love". The two singles released from the album, "Just Keep Thinking About You" and "I Never Knew", both topped "Billboard"'s Hot Dance Music/Club Play chart. Both singles also secured moderate to heavy dance-format radio airplay. The latter song also charted at #30 on "Billboard"'s Adult Contemporary chart. In 2004, Gaynor re-released her 1997 album "The Answer" (also released under the title "What a Life") as a follow-up to her successful album "I Wish You Love". The album includes her popular club hit "Oh, What a Life". In late 2002, Gaynor appeared with many R&B stars on the "Rhythm, Love, and Soul" edition of the PBS series "American Soundtrack". Her performance of the disco hit "I Will Survive" and new single "I Never Knew" was included on the accompanying live album, released in 2004. On September 19, 2005, Gaynor was honored twice when she and her music were inducted into the Dance Music Hall of Fame, in the "Artist" category along with fellow disco legends Chic and Sylvester. 
Her classic anthem "I Will Survive" was inducted under the "Records" category. In January 2008, the American Diabetes Association named Gaynor the honorary spokesperson of the 2008 NYC Step Out to Fight Diabetes Walk. More television appearances followed in the late 2000s, with 2009 appearances on "The John Kerwin Show", "The Wendy Williams Show", and "The View" to promote the 30th anniversary of "I Will Survive". In 2010, she appeared on "Last Comic Standing" and "The Tonight Show". Forty years after its release, Gaynor continues to ride the success of "I Will Survive", touring the country and the world and performing her signature song on dozens of TV shows. A few successful remixes of the song during the 1990s and 2000s, along with new versions by Lonnie Gordon, Diana Ross, Chantay Savage, the rock group Cake and others, as well as constant recurrent airplay on nearly all soft AC and rhythmic format radio stations, have helped to keep the song in the mainstream. Said Gaynor of her biggest hit in a 2012 interview: "It feels great to have such a song like that because I get kids five and six years old telling me they like the song, and then people seventy-five and eighty. It's quite an honor." The song was revived yet again in 2015 for the movie "The Martian", where it is used at the end as the credits roll. Gaynor released a contemporary Christian album in late 2013. On May 16, 2015, Gaynor was awarded the honorary degree of Doctor of Music by Dowling College. In 2016, Gaynor's "I Will Survive" was selected for induction into the Library of Congress' National Recording Registry. In 2017, Gaynor made a cameo appearance as a flight attendant in a Capital One commercial, while Samuel L. Jackson, Charles Barkley, and Spike Lee sang "I Will Survive". 
On May 6, 2017, Gaynor performed with her band at the Library of Congress' celebration of disco music at Bibliodiscotheque, a disco dance party in the Great Hall of the Thomas Jefferson Building. After the devastation wreaked by Hurricane Harvey on the state of Texas in August 2017, Gaynor rewrote the lyrics to "I Will Survive", changing the title to "Texas Will Survive", and posted a video of herself singing the song on Twitter on August 30, 2017. In January 2020, she won the second Grammy Award of her career, 40 years after her first (for "I Will Survive"), for her roots gospel album "Testimony".
https://en.wikipedia.org/wiki?curid=12509
Gerald Schroeder Gerald Lawrence Schroeder is an Orthodox Jewish physicist, author, lecturer and teacher at Aish HaTorah College of Jewish Studies' Discovery Seminar, Essentials and Fellowships programs and Executive Learning Center, who focuses on what he perceives to be an inherent relationship between science and spirituality. Schroeder received his BSc in 1959, his MSc in 1961, and his PhD in nuclear physics and earth and planetary sciences in 1965, all from the Massachusetts Institute of Technology (MIT). He worked for five years on the staff of the MIT physics department. He was a member of the United States Atomic Energy Commission. After emigrating to Israel in 1971, Schroeder was employed as a researcher at the Weizmann Institute of Science, the Volcani Research Institute, and the Hebrew University of Jerusalem. He currently teaches at Aish HaTorah College of Jewish Studies. His works frequently cite Talmudic, Midrashic and medieval commentaries on Biblical creation accounts, such as those written by the Jewish philosopher Nachmanides. Among other things, Schroeder attempts to reconcile the six-day creation described in Genesis with the scientific evidence that the world is billions of years old, using the idea that the perceived flow of time for a given event in an expanding universe varies with the observer's perspective of that event. He attempts to reconcile the two perspectives numerically by calculating the effect of the stretching of space-time, based on Albert Einstein's general relativity. Namely, from the perspective of the point of origin of the Big Bang, according to Einstein's equations of the 'stretching factor', time dilates by a factor of roughly 1,000,000,000,000, meaning one trillion days on Earth would appear to pass as one day from that point, due to the stretching of space. 
When this is applied to the estimated age of the universe of 13.8 billion years, the universe today, seen from the perspective of the point of origin, would appear to have just begun its sixth day of existence; alternatively, if the universe is 15 billion years old from the perspective of Earth, it would appear to have just completed its sixth day. Antony Flew, an academic philosopher who promoted atheism for most of his adult life, indicated that the arguments of Gerald Schroeder had influenced his decision to become a deist. Schroeder's attempts to reconcile faith and science have drawn criticism from both religious and non-religious scientists, and his works remain controversial in scientific circles. Schroeder's wife, Barbara Sofer, is a popular columnist for the English-language Israeli newspaper "Jerusalem Post". The couple has five children. In 2012, Schroeder was awarded the Trotter Prize by Texas A&M University's College of Science.
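Schroeder's headline figure can be checked with back-of-the-envelope arithmetic. The sketch below is a simplification that applies his round dilation factor of one trillion uniformly across cosmic history (his own account apportions the six "days" unevenly); the function name is illustrative.

```python
# Back-of-the-envelope check of the quoted figure, assuming a single
# uniform dilation factor of ~1e12 (a simplification of Schroeder's model).
DILATION_FACTOR = 1_000_000_000_000  # one trillion
DAYS_PER_YEAR = 365.25

def apparent_cosmic_days(age_in_earth_years: float) -> float:
    """Earth-years elapsed, re-expressed as 'days' as seen from the
    Big Bang's point of origin under the uniform dilation factor."""
    total_earth_days = age_in_earth_years * DAYS_PER_YEAR
    return total_earth_days / DILATION_FACTOR

# 13.8 billion years comes out to about 5.04 'days', i.e. the universe
# would appear to have just begun its sixth day, as stated in the text.
print(apparent_cosmic_days(13.8e9))
```

Note that this constant-factor simplification does not reproduce the "just completed its sixth day" reading for a 15-billion-year age (it gives roughly 5.5 days), which is consistent with Schroeder treating the dilation as varying over cosmic history rather than constant.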
https://en.wikipedia.org/wiki?curid=12511
Ghost In folklore, a ghost (sometimes known as an apparition, haunt, phantom, poltergeist, shade, specter or spectre, spirit, spook, and wraith) is the soul or spirit of a dead person or animal that can appear to the living. In ghostlore, descriptions of ghosts vary widely from an invisible presence to translucent or barely visible wispy shapes, to realistic, lifelike forms. The deliberate attempt to contact the spirit of a deceased person is known as necromancy, or in spiritism as a "séance". The belief in the existence of an afterlife, as well as in manifestations of the spirits of the dead, is widespread, dating back to animism or ancestor worship in pre-literate cultures. Certain religious practices—funeral rites, exorcisms, and some practices of spiritualism and ritual magic—are specifically designed to rest the spirits of the dead. Ghosts are generally described as solitary, human-like essences, though stories of ghostly armies and the ghosts of animals rather than humans have also been recounted. They are believed to haunt particular locations, objects, or people they were associated with in life. According to a 2009 study by the Pew Research Center, 18% of Americans say they have seen a ghost. The overwhelming consensus of science is that ghosts do not exist. Their existence is impossible to falsify, and ghost hunting has been classified as pseudoscience. Despite centuries of investigation, there is no scientific evidence that any location is inhabited by spirits of the dead. Historically, certain toxic and psychoactive plants (such as "Datura" and "Hyoscyamus niger"), whose use has long been associated with necromancy and the underworld, have been shown to contain anticholinergic compounds that are pharmacologically linked to dementia (specifically dementia with Lewy bodies, DLB) as well as to histological patterns of neurodegeneration. Recent research has indicated that ghost sightings may be related to degenerative brain diseases such as Alzheimer's disease. 
Common prescription medication and over-the-counter drugs (such as sleep aids) may also, in rare instances, cause ghost-like hallucinations, particularly zolpidem and diphenhydramine. Older reports linked carbon monoxide poisoning to ghost-like hallucinations. In folklore studies, ghosts fall within the motif index designation E200-E599 ("Ghosts and other revenants"). The English word "ghost" continues Old English "gāst", from Proto-Germanic "*gaistaz". It is common to West Germanic, but lacking in North Germanic and East Germanic (the equivalent word in Gothic is "ahma", Old Norse has "andi" m., "önd" f.). The prior Proto-Indo-European form was "", from the root " " denoting "fury, anger" reflected in Old Norse "geisa" "to rage". The Germanic word is recorded as masculine only, but likely continues a neuter "s"-stem. The original meaning of the Germanic word would thus have been an animating principle of the mind, in particular capable of excitation and fury (compare "óðr"). In Germanic paganism, "Germanic Mercury", and the later Odin, was at the same time the conductor of the dead and the "lord of fury" leading the Wild Hunt. Besides denoting the human spirit or soul, both of the living and the deceased, the Old English word is used as a synonym of Latin "spiritus" also in the meaning of "breath" or "blast" from the earliest attestations (9th century). It could also denote any good or evil spirit, such as angels and demons; the Anglo-Saxon gospel refers to the demonic possession of Matthew 12:43 as "se unclæna gast". Also from the Old English period, the word could denote the spirit of God, viz. the "Holy Ghost". The now-prevailing sense of "the soul of a deceased person, spoken of as appearing in a visible form" only emerges in Middle English (14th century). 
The modern noun does, however, retain a wider field of application, extending on one hand to "soul", "spirit", "vital principle", "mind", or "psyche", the seat of feeling, thought, and moral judgement; on the other hand used figuratively of any shadowy outline, or fuzzy or unsubstantial image; in optics, photography, and cinematography especially, a flare, secondary image, or spurious signal. The synonym "spook" is a Dutch loanword, akin to Low German "spôk" (of uncertain etymology); it entered the English language via American English in the 19th century. Alternative words in modern usage include "spectre" (altn. "specter"; from Latin "spectrum"), the Scottish "wraith" (of obscure origin), "phantom" (via French ultimately from Greek "phantasma", compare "fantasy") and "apparition". The term "shade" in classical mythology translates Greek σκιά, or Latin "umbra", in reference to the notion of spirits in the Greek underworld. "Haint" is a synonym for ghost used in regional English of the southern United States, and the "haint tale" is a common feature of southern oral and literary tradition. The term "poltergeist" is a German word, literally a "noisy ghost", for a spirit said to manifest itself by invisibly moving and influencing objects. "Wraith" is a Scots word for "ghost", "spectre", or "apparition". It appeared in Scottish Romanticist literature, and acquired the more general or figurative sense of "portent" or "omen". In 18th- to 19th-century Scottish literature, it also applied to aquatic spirits. The word has no commonly accepted etymology; the "OED" notes "of obscure origin" only. An association with the verb "writhe" was the etymology favored by J. R. R. Tolkien. Tolkien's use of the word in the naming of the creatures known as the Ringwraiths has influenced later usage in fantasy literature. Bogey or "bogy/bogie" is a term for a ghost, and appears in Scottish poet John Mayne's "Hallowe'en" in 1780. 
A "revenant" is a deceased person returning from the dead to haunt the living, either as a disembodied ghost or alternatively as an animated ("undead") corpse. Also related is the concept of a fetch, the visible ghost or spirit of a person yet alive. A notion of the transcendent, supernatural, or numinous, usually involving entities like ghosts, demons, or deities, is a cultural universal. In pre-literate folk religions, these beliefs are often summarized under animism and ancestor worship. Some people believe the ghost or spirit never leaves Earth until there is no-one left to remember the one who died. In many cultures, malignant, restless ghosts are distinguished from the more benign spirits involved in ancestor worship. Ancestor worship typically involves rites intended to prevent revenants, vengeful spirits of the dead, imagined as starving and envious of the living. Strategies for preventing revenants may either include sacrifice, i.e., giving the dead food and drink to pacify them, or magical banishment of the deceased to force them not to return. Ritual feeding of the dead is performed in traditions like the Chinese Ghost Festival or the Western All Souls' Day. Magical banishment of the dead is present in many of the world's burial customs. The bodies found in many tumuli (kurgan) had been ritually bound before burial, and the custom of binding the dead persists, for example, in rural Anatolia. Nineteenth-century anthropologist James Frazer stated in his classic work "The Golden Bough" that souls were seen as the creature within that animated the body. Although the human soul was sometimes symbolically or literally depicted in ancient cultures as a bird or other animal, it appears to have been widely held that the soul was an exact reproduction of the body in every feature, even down to clothing the person wore. 
This is depicted in artwork from various ancient cultures, including such works as the Egyptian "Book of the Dead", which shows deceased people in the afterlife appearing much as they did before death, including the style of dress. While deceased ancestors are universally regarded as venerable, and often believed to have a continued presence in some form of afterlife, the spirit of a deceased person that persists in the material world (a ghost) is regarded as an unnatural or undesirable state of affairs and the idea of ghosts or revenants is associated with a reaction of fear. This is universally the case in pre-modern folk cultures, but fear of ghosts also remains an integral aspect of the modern ghost story, Gothic horror, and other horror fiction dealing with the supernatural. Another widespread belief concerning ghosts is that they are composed of a misty, airy, or subtle material. Anthropologists link this idea to early beliefs that ghosts were the person within the person (the person's spirit), most noticeable in ancient cultures as a person's breath, which upon exhaling in colder climates appears visibly as a white mist. This belief may have also fostered the metaphorical meaning of "breath" in certain languages, such as the Latin "spiritus" and the Greek "pneuma", which by analogy became extended to mean the soul. In the Bible, God is depicted as synthesising Adam, as a living soul, from the dust of the Earth and the breath of God. In many traditional accounts, ghosts were often thought to be deceased people looking for vengeance (vengeful ghosts), or imprisoned on earth for bad things they did during life. The appearance of a ghost has often been regarded as an omen or portent of death. Seeing one's own ghostly double or "fetch" is a related omen of death. White ladies were reported to appear in many rural areas, and supposed to have died tragically or suffered trauma in life. White Lady legends are found around the world. 
Common to many of them is the theme of losing a child or husband and a sense of purity, as opposed to the Lady in Red ghost that is mostly attributed to a jilted lover or prostitute. The White Lady ghost is often associated with an individual family line or regarded as a harbinger of death similar to a banshee. Legends of ghost ships have existed since the 18th century; most notable of these is the "Flying Dutchman". This theme has been used in literature in "The Rime of the Ancient Mariner" by Coleridge. Ghosts are often depicted as being covered in a shroud and/or dragging chains. A place where ghosts are reported is described as haunted, and often seen as being inhabited by spirits of the deceased who may have been former residents or were familiar with the property. Supernatural activity inside homes is said to be mainly associated with violent or tragic events in the building's past, such as murder, accidental death, or suicide—sometimes in the recent or ancient past. However, not all hauntings are at a place of a violent death, or even on violent grounds. Many cultures and religions believe the essence of a being, such as the 'soul', continues to exist. Some religious views argue that the 'spirits' of those who have died have not 'passed over' and are trapped inside the property where their memories and energy are strong. There are many references to ghosts in Mesopotamian religions – the religions of Sumer, Babylon, Assyria, and other early states in Mesopotamia. Traces of these beliefs survive in the later Abrahamic religions that came to dominate the region. Ghosts were thought to be created at time of death, taking on the memory and personality of the dead person. They traveled to the netherworld, where they were assigned a position, and led an existence similar in some ways to that of the living. Relatives of the dead were expected to make offerings of food and drink to the dead to ease their conditions. 
If they did not, the ghosts could inflict misfortune and illness on the living. Traditional healing practices ascribed a variety of illnesses to the action of ghosts, while others were caused by gods or demons. There was widespread belief in ghosts in ancient Egyptian culture. The Hebrew Bible contains few references to ghosts, associating spiritism with forbidden occult activities (cf. Deuteronomy 18:11). The most notable reference is in the First Book of Samuel (I Samuel 28:3–19 KJV), in which a disguised King Saul has the Witch of Endor summon the spirit or ghost of Samuel. The soul and spirit were believed to exist after death, with the ability to assist or harm the living, and the possibility of a second death. Over a period of more than 2,500 years, Egyptian beliefs about the nature of the afterlife evolved constantly. Many of these beliefs were recorded in hieroglyph inscriptions, papyrus scrolls and tomb paintings. The Egyptian "Book of the Dead" compiles some of the beliefs from different periods of ancient Egyptian history. In modern times, the fanciful concept of a mummy coming back to life and wreaking vengeance when disturbed has spawned a whole genre of horror stories and films. Ghosts appeared in Homer's "Odyssey" and "Iliad", in which they were described as vanishing "as a vapor, gibbering and whining into the earth". Homer's ghosts had little interaction with the world of the living. Periodically they were called upon to provide advice or prophecy, but they do not appear to be particularly feared. Ghosts in the classical world often appeared in the form of vapor or smoke, but at other times they were described as being substantial, appearing as they had been at the time of death, complete with the wounds that killed them. By the 5th century BC, classical Greek ghosts had become haunting, frightening creatures who could work to either good or evil purposes. 
The spirit of the dead was believed to hover near the resting place of the corpse, and cemeteries were places the living avoided. The dead were to be ritually mourned through public ceremony, sacrifice, and libations, or else they might return to haunt their families. The ancient Greeks held annual feasts to honor and placate the spirits of the dead, to which the family ghosts were invited, and after which they were "firmly invited to leave until the same time next year." The 5th-century BC play "Oresteia" includes an appearance of the ghost of Clytemnestra, one of the first ghosts to appear in a work of fiction. The ancient Romans believed a ghost could be used to exact revenge on an enemy by scratching a curse on a piece of lead or pottery and placing it into a grave. Plutarch, in the 1st century AD, described the haunting of the baths at Chaeronea by the ghost of a murdered man. The ghost's loud and frightful groans caused the people of the town to seal up the doors of the building. Another celebrated account of a haunted house from the ancient classical world is given by Pliny the Younger (c. 50 AD). Pliny describes the haunting of a house in Athens, which was bought by the Stoic philosopher Athenodorus, who lived about 100 years before Pliny. Knowing that the house was supposedly haunted, Athenodorus intentionally set up his writing desk in the room where the apparition was said to appear and sat there writing until late at night when he was disturbed by a ghost bound in chains. He followed the ghost outside where it indicated a spot on the ground. When Athenodorus later excavated the area, a shackled skeleton was unearthed. The haunting ceased when the skeleton was given a proper reburial. The writers Plautus and Lucian also wrote stories about haunted houses. 
In the New Testament, according to Luke 24:37–39, following his resurrection, Jesus was forced to persuade the Disciples that he was not a ghost (some versions of the Bible, such as the KJV and NKJV, use the term "spirit"). Similarly, Jesus' followers at first believed he was a ghost (spirit) when they saw him walking on water. One of the first persons to express disbelief in ghosts was Lucian of Samosata in the 2nd century AD. In his satirical novel "The Lover of Lies" (circa 150 AD), he relates how Democritus "the learned man from Abdera in Thrace" lived in a tomb outside the city gates to prove that cemeteries were not haunted by the spirits of the departed. Lucian relates how he persisted in his disbelief despite practical jokes perpetrated by "some young men of Abdera" who dressed up in black robes with skull masks to frighten him. This account by Lucian notes something about the popular classical expectation of how a ghost should look. In the 5th century AD, the Christian priest Constantius of Lyon recorded an instance of the recurring theme of the improperly buried dead who come back to haunt the living, and who can only cease their haunting when their bones have been discovered and properly reburied. Ghosts reported in medieval Europe tended to fall into two categories: the souls of the dead, or demons. The souls of the dead returned for a specific purpose. Demonic ghosts existed only to torment or tempt the living. The living could tell them apart by demanding their purpose in the name of Jesus Christ. The soul of a dead person would divulge its mission, while a demonic ghost would be banished at the sound of the Holy Name. Most ghosts were souls assigned to Purgatory, condemned for a specific period to atone for their transgressions in life. Their penance was generally related to their sin. 
For example, the ghost of a man who had been abusive to his servants was condemned to tear off and swallow bits of his own tongue; the ghost of another man, who had neglected to leave his cloak to the poor, was condemned to wear the cloak, now "heavy as a church tower". These ghosts appeared to the living to ask for prayers to end their suffering. Other dead souls returned to urge the living to confess their sins before their own deaths. Medieval European ghosts were more substantial than ghosts described in the Victorian age, and there are accounts of ghosts being wrestled with and physically restrained until a priest could arrive to hear their confessions. Some were less solid, and could move through walls. Often they were described as paler and sadder versions of the person they had been while alive, and dressed in tattered gray rags. The vast majority of reported sightings were male. There were some reported cases of ghostly armies, fighting battles at night in the forest, or in the remains of an Iron Age hillfort, as at Wandlebury, near Cambridge, England. Living knights were sometimes challenged to single combat by phantom knights, which vanished when defeated. From the medieval period an apparition of a ghost is recorded from 1211, at the time of the Albigensian Crusade. Gervase of Tilbury, Marshal of Arles, wrote that the image of Guilhem, a boy recently murdered in the forest, appeared in his cousin's home in Beaucaire, near Avignon. This series of "visits" lasted all of the summer. Through his cousin, who spoke for him, the boy allegedly held conversations with anyone who wished, until the local priest requested to speak to the boy directly, leading to an extended disquisition on theology. The boy narrated the trauma of death and the unhappiness of his fellow souls in Purgatory, and reported that God was most pleased with the ongoing Crusade against the Cathar heretics, launched three years earlier. 
The time of the Albigensian Crusade in southern France was marked by intense and prolonged warfare, this constant bloodshed and dislocation of populations being the context for these reported visits by the murdered boy. Haunted houses are featured in the 9th-century "Arabian Nights" (such as the tale of "Ali the Cairene and the Haunted House in Baghdad"). Renaissance magic took a revived interest in the occult, including necromancy. In the era of the Reformation and Counter Reformation, there was frequently a backlash against unwholesome interest in the dark arts, typified by writers such as Thomas Erastus. The Swiss Reformed pastor Ludwig Lavater supplied one of the most frequently reprinted books of the period with his "Of Ghosts and Spirits Walking By Night." The Child Ballad "Sweet William's Ghost" (1868) recounts the story of a ghost returning to his fiancée begging her to free him from his promise to marry her. He cannot marry her because he is dead but her refusal would mean his damnation. This reflects a popular British belief that the dead haunted their lovers if they took up with a new love without some formal release. "The Unquiet Grave" expresses a belief even more widespread, found in various locations over Europe: ghosts can stem from the excessive grief of the living, whose mourning interferes with the dead's peaceful rest. In many folktales from around the world, the hero arranges for the burial of a dead man. Soon after, he gains a companion who aids him and, in the end, the hero's companion reveals that he is in fact the dead man. Instances of this include the Italian fairy tale "Fair Brow" and the Swedish "The Bird 'Grip'". Spiritualism is a monotheistic belief system or religion, postulating a belief in God, but with a distinguishing feature of belief that spirits of the dead residing in the spirit world can be contacted by "mediums", who can then provide information about the afterlife. 
Spiritualism developed in the United States and reached its peak growth in membership from the 1840s to the 1920s, especially in English-speaking countries. By 1897, it was said to have more than eight million followers in the United States and Europe, mostly drawn from the middle and upper classes. The corresponding movement in continental Europe and Latin America is known as Spiritism. The religion flourished for a half century without canonical texts or formal organization, attaining cohesion through periodicals, tours by trance lecturers, camp meetings, and the missionary activities of accomplished mediums. Many prominent Spiritualists were women. Most followers supported causes such as the abolition of slavery and women's suffrage. By the late 1880s, the credibility of the informal movement had weakened due to accusations of fraud among mediums, and formal Spiritualist organizations began to appear. Spiritualism is currently practiced primarily through various denominational Spiritualist churches in the United States and United Kingdom. Spiritism, or French spiritualism, is based on the five books of the Spiritist Codification written by French educator Hypolite Léon Denizard Rivail under the pseudonym Allan Kardec, reporting séances in which he observed a series of phenomena that he attributed to incorporeal intelligence (spirits). His assumption of spirit communication was validated by many contemporaries, among them many scientists and philosophers who attended séances and studied the phenomena. His work was later extended by writers like Leon Denis, Arthur Conan Doyle, Camille Flammarion, Ernesto Bozzano, Chico Xavier, Divaldo Pereira Franco, Waldo Vieira, Johannes Greber, and others. Spiritism has adherents in many countries throughout the world, including Spain, United States, Canada, Japan, Germany, France, England, Argentina, Portugal, and especially Brazil, which has the largest proportion and greatest number of followers. 
The physician John Ferriar wrote "An Essay Towards a Theory of Apparitions" in 1813, in which he argued that sightings of ghosts were the result of optical illusions. Later, the French physician Alexandre Jacques François Brierre de Boismont published "On Hallucinations: Or, the Rational History of Apparitions, Dreams, Ecstasy, Magnetism, and Somnambulism" in 1845, in which he claimed sightings of ghosts were the result of hallucinations. David Turner, a retired physical chemist, suggested that ball lightning could cause inanimate objects to move erratically. Joe Nickell of the Committee for Skeptical Inquiry wrote that there was no credible scientific evidence that any location was inhabited by spirits of the dead. Limitations of human perception and ordinary physical explanations can account for ghost sightings; for example, air pressure changes in a home causing doors to slam, humidity changes causing boards to creak, condensation in electrical connections causing intermittent behavior, or lights from a passing car reflected through a window at night. Pareidolia, an innate tendency to recognize patterns in random perceptions, is what some skeptics believe leads people to think they have 'seen ghosts'. Reports of ghosts "seen out of the corner of the eye" may be accounted for by the sensitivity of human peripheral vision. According to Nickell, peripheral vision can easily mislead, especially late at night when the brain is tired and more likely to misinterpret sights and sounds.
https://en.wikipedia.org/wiki?curid=12514
Geneva Geneva is the second-most populous city in Switzerland (after Zürich) and the most populous city of Romandy, the French-speaking part of Switzerland. Situated where the Rhône exits Lake Geneva, it is the capital of the Republic and Canton of Geneva. The municipality ("ville de Genève") has a population () of , and the canton (essentially the city and its inner-ring suburbs) has residents. In 2014, the compact "agglomération du Grand Genève" had 946,000 inhabitants in 212 communities in both Switzerland and France. Within Swiss territory, the commuter area named ""Métropole lémanique"" contains a population of 1.26 million. This area is essentially spread east from Geneva towards the Riviera area (Vevey, Montreux) and north-east towards Yverdon-les-Bains, in the neighbouring canton of Vaud. Geneva is a global city, a financial centre, and a worldwide centre for diplomacy due to the presence of numerous international organizations, including the headquarters of many agencies of the United Nations and the Red Cross. Geneva hosts the highest number of international organizations in the world. It is also where the Geneva Conventions were signed, which chiefly concern the treatment of wartime non-combatants and prisoners of war. In 2017, Geneva was ranked as the world's fifteenth most important financial centre for competitiveness by the Global Financial Centres Index, fifth in Europe behind London, Zürich, Frankfurt and Luxembourg. In 2019, Geneva was ranked among the ten most liveable cities in the world by Mercer, together with Zürich and Basel. The city has been referred to as the world's most compact metropolis and the "Peace Capital". In 2019, Mercer ranked Geneva as the thirteenth most expensive city in the world. In a 2018 global cities ranking by UBS, Geneva was ranked first by gross earnings, second most expensive, and third by purchasing power of gross hourly pay. 
The city was mentioned in Latin texts, by Caesar, with the spelling "Genava", probably from a Celtic stem meaning "bend, knee", in the sense of a bending river or estuary, an etymology shared with the Italian port city of Genoa (in Italian "Genova"). The medieval county of Geneva in Middle Latin was known as "pagus major Genevensis" or "Comitatus Genevensis" (also "Gebennensis"). After 1400 it became the "Genevois" province of Savoy (albeit not extending to the city proper, until the reformation of the seat of the Bishop of Geneva). Geneva was an Allobrogian border town, fortified against the Helvetii tribe, when the Romans took it in 121 BC. It became Christian under the Late Roman Empire, and acquired its first bishop in the 5th century, having been connected to the Bishopric of Vienne in the 4th. In the Middle Ages, Geneva was ruled by a count under the Holy Roman Empire until the late 14th century, when it was granted a charter giving it a high degree of self-governance. Around this time, the House of Savoy came to at least nominally dominate the city. In the 15th century, an oligarchic republican government emerged with the creation of the Grand Council. In the first half of the 16th century, the Protestant Reformation reached the city, causing religious strife, during which Savoy rule was thrown off and Geneva allied itself with the Swiss Confederacy. In 1541, with Protestantism on the rise, John Calvin, the Protestant Reformer and proponent of Calvinism, became the spiritual leader of the city and established the Republic of Geneva. By the 18th century, Geneva had come under the influence of Catholic France, which cultivated the city as its own. France tended to be at odds with the ordinary townsfolk, which inspired the failed Geneva Revolution of 1782, an attempt to win representation in the government for men of modest means. In 1798, revolutionary France under the Directory annexed Geneva. 
At the end of the Napoleonic Wars, on 1 June 1814, Geneva was admitted to the Swiss Confederation. In 1907, the separation of Church and State was adopted. Geneva flourished in the 19th and 20th centuries, becoming the seat of many international organizations. Geneva is located at 46°12' North, 6°09' East, at the south-western end of Lake Geneva, where the Rhône flows out. It is surrounded by three mountain chains, each belonging to the Jura: the Jura main range lies north-westward, the Vuache southward, and the Salève south-eastward. The city covers an area of , while the area of the canton is , including the two small exclaves of Céligny in Vaud. The part of the lake that is attached to Geneva has an area of and is sometimes referred to as "petit lac" (small lake). The canton has only a border with the rest of Switzerland. Of of border, 103 are shared with France, the Département de l'Ain to the north and west and the Département de la Haute-Savoie to the south and east. Of the land in the city, , or 1.5%, is used for agricultural purposes, while , or 3.1%, is forested. The rest of the land, , or 91.8%, is built up (buildings or roads), , or 3.1%, is either rivers or lakes and , or 0.1%, is wasteland. Of the built up area, industrial buildings made up 3.4%, housing and buildings made up 46.2% and transportation infrastructure 25.8%, while parks, green belts and sports fields made up 15.7%. Of the agricultural land, 0.3% is used for growing crops. Of the water in the municipality, 0.2% is composed of lakes and 2.9% is rivers and streams. The altitude of Geneva is and corresponds to the altitude of the largest of the Pierres du Niton, two large rocks emerging from the lake which date from the last ice age. This rock was chosen by General Guillaume Henri Dufour as the reference point for surveying in Switzerland. The second main river of Geneva is the Arve, which flows into the Rhône just west of the city centre. 
Mont Blanc can be seen from Geneva and is an hour's drive from the city. The climate of Geneva is a temperate climate, more specifically an oceanic climate (Köppen climate classification: Cfb). Winters are cool, usually with light frosts at night and thawing conditions during the day. Summers are relatively warm. Precipitation is adequate and is relatively well-distributed throughout the year, although autumn is slightly wetter than other seasons. Ice storms near Lac Léman are normal in the winter: Geneva can be affected by the Bise, a north-easterly wind, which can lead to severe icing. In summer, many people swim in the lake and patronise public beaches such as Genève Plage and the Bains des Pâquis. The city, in certain years, receives snow during colder months. The nearby mountains are subject to substantial snowfall and are suitable for skiing. Many world-renowned ski resorts such as Verbier and Crans-Montana are less than three hours away by car. Mont Salève (), just across the border in France, dominates the southerly view from the city centre, and Mont Blanc, the highest of the Alpine range, is visible from most of the city, towering high above Chamonix, which, along with Morzine, Le Grand Bornand, La Clusaz, and resorts of the Grand Massif such as Samoens, Morillon, and Flaine, are the closest French skiing destinations to Geneva. During the years 2000–2009, the mean yearly temperature was 11 °C and the mean number of sunshine-hours per year was 2003. The highest temperature recorded in Genève–Cointrin was in July 2015, and the lowest temperature recorded was −20.0 °C (−4.0 °F) in February 1956. The city is divided into eight "quartiers", or districts, sometimes composed of several neighbourhoods. On the left bank are: (1) Jonction; (2) Centre, Plainpalais, and Acacias; (3) Eaux-Vives; and (4) Champel. The right bank includes: (1) Saint-Jean and Charmilles; (2) Servette and Petit-Saconnex; (3) Grottes and Saint-Gervais; and (4) Pâquis and Nations. 
The Administrative Council (Conseil administratif) constitutes the executive government of the City of Geneva and operates as a collegiate authority. It is composed of five councilors, each presiding over a department. The president of the executive acts as mayor ("la maire/le maire"); in the governmental year 2019–2020, the Administrative Council is presided over by "Madame la maire de Genève" Sandrine Salerno. Departmental tasks, coordination measures and implementation of laws decreed by the Municipal Council are carried out by the Administrative Council. Elections for the Administrative Council are held every five years; the current term ("la législature") runs from 1 June 2015 to 31 May 2020. The delegates are elected under the Majorz system. The mayor changes each year, while the heads of the other departments are assigned by the collegiate. The executive body holds its meetings in the Palais Eynard, near the Parc des Bastions. Geneva's Administrative Council is made up of two representatives of the Social Democratic Party (PS) and one member each of the Green Party (PES), "Ensemble à Gauche" (an alliance of left-wing parties including the Swiss Party of Labour and solidaritéS) and the Christian Democratic Party (PDC), giving the left-wing parties four of the five seats. The last election was held on 19 April 2015, and all previous members were re-elected. The Municipal Council (Conseil municipal) holds legislative power. It is made up of 80 members, with elections held every five years. The Municipal Council makes regulations and by-laws that are executed by the Administrative Council and the administration. The delegates are elected by means of a system of proportional representation with a seven per cent threshold. The sessions of the Municipal Council are public. Unlike members of the Administrative Council, members of the Municipal Council are not politicians by profession, and they are paid a fee based on their attendance. 
Any resident of Geneva allowed to vote can be elected as a member of the Municipal Council. The Council holds its meetings in the Town Hall ("Hôtel de Ville") in the old city. The last election of the Municipal Council was held on 15 March 2020 for the "législature" of 2020–2025 (which has not yet begun). Currently, the Municipal Council consists of 18 members of the Social Democratic Party (PS), 14 of Les Libéraux-Radicaux (PLR), 10 of the Christian Democratic People's Party (PDC), 8 of the Geneva Citizens' Movement (MCG), 8 of "Ensemble à Gauche" (an alliance of the left parties PST-POP ("Parti Suisse du Travail – Parti Ouvrier et Populaire") and solidaritéS), 8 of the Green Party (PES), 6 of the Swiss People's Party (UDC), and 8 independents. In the 2019 federal election for the Swiss National Council, the most popular party was the Green Party, which received 26% (+14.6) of the vote. The next seven most popular parties were the PS (17.9%, −5.9), the PLR (15.1%, −2.4), the UDC (12.6%, −3.7), the PdA/solidaritéS (10%, +1.3), the PDC (5.4%, −5.3), the pvl (5%, +2.9), and the MCR (4.9%, −2.7). A total of 34,319 votes were cast in the federal election, and voter turnout was 39.6%. In the 2015 federal election for the Swiss National Council, the most popular party was the PS, which received 23.8% of the vote. The next five most popular parties were the PLR (17.6%), the UDC (16.3%), the Green Party (11.4%), the PDC (10.7%), and solidaritéS (8.8%). A total of 36,490 votes were cast in that election, and voter turnout was 44.1%. Geneva intentionally does not have any sister relationships with other cities; it declares itself related to the entire world. Geneva has a population of . The city of Geneva is at the centre of the Geneva metropolitan area, known as "Grand Genève" in French (Greater Geneva). 
Greater Geneva includes the Canton of Geneva in its entirety, as well as the District of Nyon in the Canton of Vaud and several areas in the neighbouring French departments of Haute-Savoie and Ain. In 2011, the "agglomération franco-valdo-genevoise" had 915,000 inhabitants, two-thirds of whom lived on Swiss soil and one-third on French soil. The Geneva metropolitan area is experiencing steady demographic growth of 1.2% a year, and the population of the "agglomération franco-valdo-genevoise" is expected to reach one million people in the near future. The official language of Geneva (both the city and the canton) is French. English is also common, owing to the high number of anglophone expatriates and foreigners working in international institutions and in the banking sector. Some 128,622 people, or 72.3% of the population, speak French as a first language, with English the second most common (7,853 or 4.4%). 7,462 inhabitants speak Spanish (4.2%), 7,320 speak Italian (4.1%), 7,050 speak German (4.0%), and 113 speak Romansh. As a result of immigration flows in the 1960s and 1980s, Portuguese is also spoken by a considerable proportion of the population. In the city of Geneva, 48% of the population are resident foreign nationals; for a list of the largest groups of foreign residents, see the cantonal overview. Over the ten years 1999–2009, the population changed at a rate of 7.2%: 3.4% due to migration and 3.4% due to births and deaths. The gender distribution of the population was 47.8% male and 52.2% female. The male population was made up of 46,284 Swiss men (24.2% of the population) and 45,127 (23.6%) non-Swiss men; there were 56,091 Swiss women (29.3%) and 43,735 (22.9%) non-Swiss women. Approximately 24.3% of the population of the municipality (43,296 people) were born in Geneva and lived there in 2000. 
A further 11,757 (6.6%) were born in the same canton, while 27,359 (15.4%) were born elsewhere in Switzerland, and 77,893 (43.8%) were born outside of Switzerland. There were 1,147 live births to Swiss citizens and 893 births to non-Swiss citizens, and in the same time span there were 1,114 deaths of Swiss citizens and 274 non-Swiss citizen deaths. Ignoring immigration and emigration, the population of Swiss citizens increased by 33, while the foreign population increased by 619. There were 465 Swiss men and 498 Swiss women who emigrated from Switzerland, while 2,933 non-Swiss men and 2,662 non-Swiss women immigrated from another country to Switzerland. The total Swiss population change in 2008 (from all sources, including moves across municipal borders) was an increase of 135, and the non-Swiss population increased by 3,181 people. This represents a population growth rate of 1.8%. Children and teenagers (0–19 years old) make up 18.2% of the population, while adults (20–64 years old) make up 65.8% and seniors (over 64 years old) 16%. There were 78,666 people who were single and never married in the municipality, 74,205 married individuals, 10,006 widows or widowers, and 15,087 individuals who were divorced. There were 86,231 private households in the municipality, with an average of 1.9 persons per household. 44,373 households consisted of only one person, and 2,549 households had five or more people. Of the 89,269 households that answered this question, 49.7% were made up of just one person, and 471 adults lived with their parents. Of the rest of the households, 17,429 were married couples without children and 16,607 were married couples with children, and there were 5,499 single parents with a child or children. 
There were 1,852 households made up of unrelated people and 3,038 households made up of some sort of institution or other collective housing. There were 743 single-family homes (10.6% of the total) out of a total of 6,990 inhabited buildings. There were 2,758 multi-family buildings (39.5%), along with 2,886 multi-purpose buildings mostly used for housing (41.3%) and 603 other-use buildings (commercial or industrial) that also had some housing (8.6%). Of the single-family homes, 197 were built before 1919, while 20 were built between 1990 and 2000; the greatest number of single-family homes (277) were built between 1919 and 1945. There were 101,794 apartments in the municipality. The most common apartment size was three rooms, of which there were 27,084; there were 21,889 single-room apartments and 11,166 apartments with five or more rooms. Of these apartments, 85,330 (83.8% of the total) were permanently occupied, while 13,644 (13.4%) were seasonally occupied and 2,820 (2.8%) were empty. The construction rate of new housing units was 1.3 new units per 1,000 residents. The average price to rent an average apartment in Geneva was 1,163.30 Swiss francs (CHF) per month (approximately US$930, £520, €740 at the 2003 exchange rate). The average rate for a one-room apartment was 641.60 CHF (US$510, £290, €410), a two-room apartment about 874.46 CHF (US$700, £390, €560), a three-room apartment about 1,126.37 CHF (US$900, £510, €720), and an apartment with six or more rooms cost an average of 2,691.07 CHF (US$2,150, £1,210, €1,720). The average apartment price in Geneva was 104.2% of the national average of 1,116 CHF. The vacancy rate for the municipality was 0.25%. In June 2011, the average price of an apartment in and around Geneva was 13,681 CHF per square metre. 
The average can be as high as 17,589 Swiss francs per square metre for a luxury apartment and as low as 9,847 Swiss francs for an older or basic apartment. For houses in and around Geneva, the average price was 11,595 Swiss francs per square metre (June 2011), with a minimum price per square metre of 4,874 Swiss francs and a maximum of 21,966 Swiss francs. William Monter calculates that the city's total population was 12,000–13,000 in 1550, doubling to over 25,000 by 1560. The historical population is given in the following chart: The census recorded 66,491 residents (37.4% of the population) as Roman Catholic, while 41,289 people (23.20%) belonged to no church or were agnostic or atheist, 24,105 (13.5%) belonged to the Swiss Reformed Church, and 8,698 (4.89%) were Muslim. There were also 3,959 members of an Orthodox church (2.22%), 220 individuals (about 0.12% of the population) who belonged to the Christian Catholic Church of Switzerland, 2,422 (1.36%) who belonged to another Christian church, and 2,601 people (1.46%) who were Jewish. There were 707 individuals who were Buddhist, 474 who were Hindu and 423 who belonged to another church. 26,575 respondents (14.93%) did not answer the question. According to 2012 statistics from the Swiss Bundesamt für Statistik, 49.2% of the population are Christian, divided into 34.2% Roman Catholic, 8.8% Swiss Reformed (organized in the Protestant Church of Geneva) and 6.2% other Christian (mostly various other Protestants); 38% of Genevans are non-religious, 6.1% are Muslim and 1.6% are Jewish. Geneva has historically been considered a Protestant city and was known as the "Protestant Rome" because it was the base of John Calvin, William Farel, Theodore Beza and other Protestant reformers. Over the past century, substantial immigration from France and other predominantly Roman Catholic countries, as well as general European secularization, has changed its religious landscape. 
As a result, three times as many Roman Catholics as Protestants lived in the city in 2000, while a large number of residents were members of neither group. Geneva forms part of the Roman Catholic Diocese of Lausanne, Geneva and Fribourg. The World Council of Churches and the Lutheran World Federation both have their headquarters at the Ecumenical Centre in Grand-Saconnex, Geneva. The World Communion of Reformed Churches, a worldwide organization of Presbyterian, Continental Reformed, Congregational and other Reformed churches gathering more than 80 million people around the world, was based here from 1948 until 2013. Its Executive Committee voted in 2012 to move its offices to Hanover, Germany, citing the high costs of running the ecumenical organization in Geneva; the move was completed in 2013. Likewise, the Conference of European Churches has moved its headquarters from Geneva to Brussels. Prior to the Protestant Reformation, the city was "de jure" and "de facto" Roman Catholic. Reaction to the new movement varied across Switzerland. John Calvin went to Geneva in 1536 after William Farel encouraged him to do so; the Catholic bishop had been obliged to go into exile in 1532. Geneva became a stronghold of Calvinism, and some of the tenets created there influenced Protestantism as a whole. St. Pierre Cathedral was where Calvin and his Protestant reformers preached; it constituted the epicentre of the newly developing Protestant thought that would later become known as the Reformed tradition. Many prominent Reformed theologians operated there, including William Farel and Theodore Beza, Calvin's successor, who advanced Reformed thought after Calvin's death. Geneva was a shelter for Calvinists, but at the same time it persecuted Roman Catholics and others considered heretics. The case of Michael Servetus, an early Nontrinitarian, is notable. 
Condemned by Catholics and Protestants alike, he was arrested in Geneva and burnt at the stake as a heretic by order of the city's Protestant governing council. John Calvin and his followers denounced him, and may have contributed to his sentence. In 1802, during Geneva's annexation to France under Napoleon I, the Diocese of Geneva was united with the Diocese of Chambéry, but the 1814 Congress of Vienna and the 1816 Treaty of Turin stipulated that in the territories transferred to a now considerably extended Geneva, the Catholic religion was to be protected and that no changes were to be made to existing conditions without an agreement with the Holy See. Napoleon's usual policy was to emancipate Catholics in Protestant-majority areas, and vice versa, as well as to emancipate Jews. In 1819, the city of Geneva and 20 parishes were united to the Diocese of Lausanne by Pope Pius VII, and in 1822 the non-Swiss territory was made into the Diocese of Annecy. An accommodation with the civil authorities came about as a result of the separation of church and state, enacted with strong Catholic support in 1907. In 2014 the incidence of crimes listed in the Swiss Criminal Code in Geneva was 143.9 per thousand residents. During the same period the rate of drug crimes was 33.6 per thousand residents, and the rate of violations of immigration, visa and work-permit laws was 35.7 per thousand residents. There are 82 buildings or sites in Geneva listed as Swiss heritage sites of national significance, and the entire old city of Geneva is part of the Inventory of Swiss Heritage Sites. 
Religious buildings: Cathédrale St-Pierre and Chapelle des Macchabés, Notre-Dame Church, Russian church, St-Germain Church, Temple de la Fusterie, Temple de l'Auditoire. Civic buildings: Former Arsenal and Archives of the City of Geneva, Former Crédit Lyonnais, Former Hôtel Buisson, Former Hôtel du Résident de France and Bibliothèque de la Société de lecture de Genève, Former École des arts industriels, Archives d'État de Genève (annexe), Bâtiment des forces motrices, Bibliothèque de Genève, Bibliothèque juive de Genève «Gérard Nordmann», Cabinet des estampes, Centre d'iconographie genevoise, Collège Calvin, École Geisendorf, University Hospital of Geneva (HUG), Hôtel de Ville and Tour Baudet, Immeuble Clarté at Rue Saint-Laurent 2 and 4, Maison Rotonde at Rue Charles-Giron 11–19, Immeubles at Rue Beauregard 2, 4, 6, 8, Immeubles at Rue de la Corraterie 10–26, Immeubles at Rue des Granges 2–6, Immeuble at Rue des Granges 8, Immeubles at Rue des Granges 10 and 12, Immeuble at Rue des Granges 14, Immeuble and Former Armory at Rue des Granges 16, Immeubles at Rue Pierre Fatio 7 and 9, Maison de Saussure at Rue de la Cité 24, Maison des arts du Grütli at Rue du Général-Dufour 16, Maison Royale and the two adjoining buildings at Quai Gustave Ador 44–50, Tavel House at Rue du Puits-St-Pierre 6, Turrettini House at Rue de l'Hôtel-de-Ville 8 and 10, Brunswick Monument, Palais de Justice, Palais de l'Athénée, Palais des Nations with the library and archives of the League of Nations and the UN, Palais Eynard and the Archives de la ville de Genève, Palais Wilson, Parc des Bastions with the Mur des Réformateurs, Place de Neuve and the Monument du Général Dufour, Pont de la Machine, Pont sur l'Arve, Poste du Mont-Blanc, Quai du Mont-Blanc, Quai and Hôtel des Bergues, Quai Général-Guisan and the English Gardens, Quai Gustave-Ador and the Jet d'eau, Télévision Suisse Romande, University of Geneva, Victoria Hall. 
Archeological sites: Fondation Baur and Museum of Far Eastern Arts, Parc et campagne de la Grange and library (neolithic shore settlement/Roman villa), Bronze Age shore settlement of Plonjon, Temple de la Madeleine archeological site, Temple Saint-Gervais archeological site, Old City with Celtic, Roman and medieval villages. Museums, theatres, and other cultural sites: Conservatoire de musique at Place Neuve 5, Conservatoire et Jardin botaniques, Fonds cantonal d'art contemporain, Île Rousseau and statue, Institut et Musée Voltaire with library and archives, Mallet House and Musée international de la Réforme, Musée Ariana, Museum of Art and History, Musée d'art moderne et contemporain, Musée d'ethnographie, Museum of the International Red Cross, Musée Rath, Natural History Museum, Plainpalais Commune Auditorium, Pitoëff Theatre, Villa Bartholoni at the Museum of History and Science. International organizations: International Labour Organization (BIT), International Committee of the Red Cross, United Nations High Commissioner for Refugees (UNHCR), World Meteorological Organization, World Trade Organization, International Telecommunication Union, World YMCA. The city's main newspaper is the daily "Tribune de Genève", with a readership of about 187,000. "Le Courrier" focuses mainly on Geneva. Both "Le Temps" (headquartered in Geneva) and "Le Matin" are widely read in Geneva, but cover the whole of Romandy. Geneva is the main media centre for French-speaking Switzerland. It is the headquarters of the numerous French-language radio and television networks of the Swiss Broadcasting Corporation, known collectively as Radio Télévision Suisse. While both networks cover the whole of Romandy, special programmes related to Geneva are sometimes broadcast on some of the local radio frequencies. 
Other local radio stations broadcast from the city, including YesFM (FM 91.8 MHz), Radio Cité (non-commercial radio, FM 92.2 MHz), OneFM (FM 107.0 MHz, also broadcast in Vaud), and World Radio Switzerland (FM 88.4 MHz). Léman Bleu is a local TV channel, founded in 1996 and distributed by cable. Because of the proximity to France, many French television channels are also available. Geneva observes "Jeûne genevois" on the first Thursday following the first Sunday in September. By local tradition, this commemorates the date the news of the St. Bartholomew's Day massacre of Huguenots reached Geneva. Geneva celebrates "L'Escalade" on the weekend nearest 12 December, marking the defeat of the surprise attack by troops sent by Charles Emmanuel I, Duke of Savoy, during the night of 11–12 December 1602. Festive traditions include chocolate cauldrons filled with vegetable-shaped marzipan treats and the Escalade procession on horseback in seventeenth-century armour. Geneva also organizes the "Course de l'Escalade" ("Climbing Race"), which takes place in Geneva's Old Town and is popular across all ages; non-competitive participants walk the course in fancy-dress costumes. Since 1818, a particular chestnut tree has been used as the official "herald of the spring" in Geneva. The "sautier" (secretary of the Parliament of the Canton of Geneva) observes the tree and notes the day of arrival of the first bud. While this event has no practical effect, the sautier issues a formal press release and the local newspaper usually mentions the news. As this is one of the world's oldest records of a plant's reaction to climatic conditions, researchers have noted with interest that the first bud has been appearing earlier and earlier in the year. During the 19th century many dates were in March or April; in recent years, they have usually been in late February, and sometimes earlier. 
In 2002, the first bud appeared unusually early, on 7 February, and then again on 29 December of the same year. The following year, one of the hottest years recorded in Europe, was a year with no bud. In 2008, the first bud also appeared early, on 19 February. The opera house, the Grand Théâtre de Genève, which officially opened in 1876, was partly destroyed by a fire in 1951 and reopened in 1962. It has the largest stage in Switzerland and features opera and dance performances, recitals, concerts and, occasionally, theatre. The Victoria Hall is used for classical music concerts and is the home of the Orchestre de la Suisse Romande. Every summer the Fêtes de Genève (Geneva Festival) are organised in the city. According to Radio Télévision Suisse, in 2013 hundreds of thousands of people came to Geneva to see the annual hour-long grand firework display of the Fêtes de Genève. An annual music festival takes place in June, with groups of artists performing in different parts of the city; in 2016 the festival celebrated its 25th anniversary. Further annual festivals are the "Fête de l'Olivier", a festival of Arabic music organized by the ICAM since 1980, and the "Genevan Brass Festival", founded by Christophe Sturzenegger in 2010. The Canton of Geneva's public school system has "écoles primaires" (ages 4–12) and "cycles d'orientation" (ages 12–15). Students can leave school at 15, but secondary education is provided by "collèges" (ages 15–19), the oldest of which is the Collège Calvin, which could be considered one of the oldest public schools in the world, by "écoles de culture générale" (15–18/19) and by the "écoles professionnelles" (15–18/19). The "écoles professionnelles" offer full-time courses and part-time study as part of an apprenticeship. Geneva also has a number of private schools. 
In 2011, 89,244 (37.0%) of the population had completed non-mandatory upper secondary education, and 107,060 (44.3%) had completed additional higher education (either university or a Fachhochschule). Of the 107,060 who completed tertiary schooling, 32.5% were Swiss men, 31.6% were Swiss women, 18.1% were non-Swiss men and 17.8% were non-Swiss women. During the 2011–2012 school year, there were a total of 92,311 students in the Geneva school system (primary to university). The education system in the Canton of Geneva has eight years of primary school, with 32,716 students. The secondary school program consists of three lower, obligatory years of schooling, followed by three to five years of optional, advanced study. There were 13,146 lower-secondary students who attended schools in Geneva, 10,486 upper-secondary students from the municipality, and 10,330 students in a professional, non-university track program. An additional 11,797 students attended private schools. Geneva is home to the University of Geneva, where approximately 16,500 students are regularly enrolled. In 1559 John Calvin founded the Geneva Academy, a theological and humanist seminary. In the 19th century the Academy lost its ecclesiastic links, and in 1873, with the addition of a medical faculty, it became the University of Geneva. In 2011 it was ranked European university. The Graduate Institute of International and Development Studies was among the first academic institutions in the world to teach international relations. It is one of Europe's most prestigious institutions, offering MA and PhD programmes in law, political science, history, economics, international affairs, and development studies. The oldest international school in the world is the International School of Geneva, founded in 1924 along with the League of Nations. The Geneva School of Diplomacy and International Relations is a private university in the grounds of the Château de Penthes. 
CERN (the European Organization for Nuclear Research) is probably the best known of Geneva's educational and research facilities, most recently for the Large Hadron Collider. Founded in 1954, CERN was one of Europe's first joint ventures and has developed into the world's largest particle physics laboratory. Physicists from around the world travel to CERN to research matter and explore the fundamental forces and materials that form the universe. Geneva is home to five major libraries: the "Bibliothèques municipales Genève", the "Haute école de travail social, Institut d'études sociales", the "Haute école de santé", the "École d'ingénieurs de Genève" and the "Haute école d'art et de design". There were 877,680 books or other media in the libraries, and in the same year 1,798,980 items were loaned. Geneva's economy is services-oriented. The city has an important and long-established finance sector, which specialises in private banking (managing assets of about US$1 trillion) and the financing of international trade. In the September 2017 Global Financial Centres Index, Geneva was ranked the 15th most competitive financial centre in the world (up from 20th in March 2017) and the fifth most competitive in Europe (after London, Zürich, Frankfurt, and Luxembourg). Geneva hosts the international headquarters of companies such as Japan Tobacco International, Mediterranean Shipping Company, Vitol, Gunvor, Mercuria Energy Group, Merck Serono, SITA, Société Générale de Surveillance, STMicroelectronics, and Weatherford International. Many other multinational companies, such as Caterpillar, DuPont, and Cargill, have their international headquarters in the city; Take-Two Interactive, Electronic Arts, INVISTA, Procter & Gamble and Oracle Corporation have their European headquarters in the city. Hewlett-Packard has its Europe, Africa, and Middle East headquarters in Meyrin, near Geneva, as does PrivatAir. 
There is a long tradition of watchmaking in the city, dating back to the 16th century. Many watchmakers have been based in Geneva since their foundation, such as Baume et Mercier, Charriol, Chopard, Franck Muller, Patek Philippe, Rolex, Universal Genève, Raymond Weil, Vacheron Constantin and Frédérique Constant. Two major international producers of flavours and fragrances, Firmenich and Givaudan, have their headquarters and main production facilities in Geneva. The private sector has a number of employers' organizations, including the Fédération des Entreprises Romandes Genève (FER Genève) and the Fédération des métiers du bâtiment (FMB). Many people also work in the numerous offices of international organisations located in Geneva (about 22,233 in March 2012). The Geneva Motor Show is one of the most important international auto shows; it is held at Palexpo, a large convention centre next to the International Airport. In 2009, Geneva was ranked the fourth most expensive city in the world, up four places from eighth the previous year. Geneva had an unemployment rate of 6.3%. There were five people employed in the primary economic sector and about three businesses involved in this sector; 9,783 people were employed in the secondary sector, with 1,200 businesses, and 134,429 people were employed in the tertiary sector, with 12,489 businesses. There were 91,880 residents of the municipality employed in some capacity, with women making up 47.7% of the workforce. The total number of full-time-equivalent jobs was 124,185. The number of jobs in the primary sector was four, all of which were in agriculture. The number of jobs in the secondary sector was 9,363, of which 4,863 (51.9%) were in manufacturing and 4,451 (47.5%) were in construction. The number of jobs in the tertiary sector was 114,818. 
In the tertiary sector, 16,573 (14.4%) were in wholesale or retail sales or the repair of motor vehicles, 3,474 (3.0%) were in the movement and storage of goods, 9,484 (8.3%) were in hotels or restaurants, 4,544 (4.0%) were in the information industry, 20,982 (18.3%) were in the insurance or financial industry, 12,177 (10.6%) were technical professionals or scientists, 10,007 (8.7%) were in education and 15,029 (13.1%) were in health care. There were 95,190 workers who commuted into the municipality and 25,920 who commuted away; the municipality is a net importer of workers, with about 3.7 workers entering for every one leaving. About 13.8% of the workforce coming into Geneva comes from outside Switzerland, while 0.4% of locals commute out of Switzerland for work. Of the working population, 38.2% used public transportation to get to work, and 30.6% used a private car. Ice hockey is the most popular sport in Geneva. The city is home to Genève-Servette HC, which plays in the National League. They play their home games in the 7,135-seat Patinoire des Vernets. In 2008 and 2010 the team reached the league finals but lost to the ZSC Lions and SC Bern respectively. The team is by far the most popular in both the city and the canton of Geneva, drawing three times more spectators than the football team in 2017. The town is home to Servette FC, a football club founded in 1890 and named after a borough on the right bank of the Rhône. The home of Servette FC is the 30,000-seat Stade de Genève, and the club plays in the Raiffeisen Super League. Urania Genève Sport also play in the city. Geneva is home to the basketball team Lions de Genève, 2013 and 2015 champions of the Swiss Basketball League; the team plays its home games in the Pavillon des Sports. The Geneva Jets Australian Football Club has been playing Australian football in the AFL Switzerland league since 2019. The city is served by Geneva Cointrin International Airport. 
The airport is connected by Geneva Airport railway station to both the Swiss Federal Railways network and the French SNCF network, including links to Paris, Lyon, Marseille and Montpellier by TGV. Geneva is connected to the motorway systems of both Switzerland (A1 motorway) and France. Public transport by bus, trolleybus or tram is provided by "Transports Publics Genevois". In addition to extensive coverage of the city centre, the network extends to most of the municipalities of the Canton, with a few lines reaching into France. Public transport by boat is provided by the Mouettes Genevoises, which link the two banks of the lake within the city, and by the "Compagnie Générale de Navigation sur le lac Léman", which serves more distant destinations such as Nyon, Yvoire, Thonon, Évian, Lausanne and Montreux using both modern diesel vessels and vintage paddle steamers. Trains operated by Swiss Federal Railways connect the airport to the main station of Cornavin in six minutes. Regional train services are being developed towards Coppet and Bellegarde. At the city limits, two new railway stations have been opened since 2002: Genève-Sécheron (close to the UN and the Botanical Gardens) and Lancy-Pont-Rouge. In 2011 work started on the CEVA rail project (Cornavin–Eaux-Vives–Annemasse), first planned in 1884, which will connect Cornavin with the Cantonal hospital, Eaux-Vives railway station and Annemasse, in France. The link between the main railway station and the classification yard of La Praille already exists; from there, the line will run mostly underground to the hospital and Eaux-Vives, where it will join the existing line to France. In May 2013, a demonstrator electric bus system with a capacity of 133 passengers entered service between Geneva Airport and Palexpo. The project aims to introduce a new system of mass transport with electric "flash" recharging of the buses at selected stops while passengers are disembarking and embarking. 
Taxis in Geneva can be difficult to find, and may need to be booked in advance, especially in the early morning or at peak hours. Taxis can refuse to take babies and children because of seating legislation. An ambitious project to close 200 streets in the centre of Geneva to cars was approved by the Geneva cantonal authorities in 2010 and was planned to be implemented over a span of four years (2010–2014), though work on the project has yet to be started. Water, natural gas and electricity are provided to the municipalities of the Canton of Geneva by the state-owned Services Industriels de Genève, known as SIG. Most of the drinking water (80%) is extracted from the lake; the remaining 20% is provided by groundwater, originally formed by infiltration from the Arve. 30% of the Canton's electricity needs are locally produced, mainly by three hydroelectric dams on the Rhône (Seujet, Verbois and Chancy-Pougny). In addition, 13% of the electricity produced in the Canton comes from the burning of waste at the waste incineration facility of Les Cheneviers. The remaining needs (57%) are covered by imports from other cantons in Switzerland or other European countries; SIG buys only electricity produced by renewable methods, and in particular does not use electricity produced using nuclear reactors or fossil fuels. Natural gas is available in the City of Geneva, as well as in about two-thirds of the municipalities of the canton, and is imported from Western Europe by the Swiss company Gaznat. SIG also provides telecommunication facilities to carriers, service providers and large enterprises. From 2003 to 2005, "Voisin, voisine", a fibre-to-the-home pilot project with a triple-play offering, was launched to test the end-user market in the Charmilles district. Geneva is the European headquarters of the United Nations, in the Palace of Nations building, which was also the headquarters of the former League of Nations. 
Several agencies are headquartered at Geneva, including the United Nations High Commissioner for Refugees, the UN Office of the High Commissioner for Human Rights, the World Health Organization, the International Labour Organization, International Telecommunication Union, the International Baccalaureate Organization and the World Intellectual Property Organization. Apart from the UN agencies, Geneva hosts many inter-governmental organizations, such as the World Trade Organization, the South Centre, the World Meteorological Organization, the World Economic Forum, the International Organization for Migration, the International Federation of Red Cross and Red Crescent Societies and the International Committee of the Red Cross. The Maison de la Paix building hosts the three Geneva centres supported by the Swiss Confederation: the International Centre for Humanitarian Demining, the Centre for the Democratic Control of Armed Forces and the Geneva Centre for Security Policy, as well as other organisations active in the field of peace, international affairs and sustainable development. Organizations on the European level include the European Broadcasting Union (EBU) and CERN (the European Organization for Nuclear Research) which is the world's largest particle physics laboratory. The Geneva Environment Network (GEN) publishes the Geneva Green Guide, an extensive listing of Geneva-based global organisations working on environment protection and sustainable development. A website, jointly run by the Swiss Government, the World Business Council for Sustainable Development, the United Nations Environment Programme and the International Union for Conservation of Nature, includes accounts of how NGOs, business, government and the UN cooperate. By doing so, it attempts to explain why Geneva has been picked by so many NGOs and UN bodies as their headquarters' location. 
The World Organization of the Scout Movement and the World Scout Bureau Central Office are headquartered in Geneva.
Gerard Manley Hopkins Gerard Manley Hopkins (28 July 1844 – 8 June 1889) was an English poet and Jesuit priest, whose posthumous fame established him among the leading Victorian poets. His manipulation of prosody – particularly his concept of sprung rhythm – established him as an innovative writer of verse, as did his technique of praising God through vivid use of imagery and nature. Only after his death did Robert Bridges begin to publish a few of Hopkins's mature poems in anthologies, hoping to prepare the way for wider acceptance of his style. By 1930 his work was recognised as one of the most original literary accomplishments of his century. It had a marked influence on such leading 20th-century poets as T. S. Eliot, Dylan Thomas, W. H. Auden, Stephen Spender and Cecil Day-Lewis. Gerard Manley Hopkins was born in Stratford, Essex (now in Greater London), as the eldest of probably nine children to Manley and Catherine Hopkins, née Smith. He was christened at the Anglican church of St John's, Stratford. His father founded a marine insurance firm and at one time served as Hawaiian consul-general in London. He was also for a time churchwarden at St John-at-Hampstead. His grandfather was the physician John Simm Smith, a university colleague of John Keats, and close friend of the eccentric philanthropist Ann Thwaytes. As a poet, Hopkins's father published works including "A Philosopher's Stone and Other Poems" (1843), "Pietas Metrica" (1849), and "Spicelegium Poeticum, A Gathering of Verses by Manley Hopkins" (1892). He reviewed poetry for "The Times" and wrote one novel. Catherine (Smith) Hopkins was the daughter of a London physician, particularly fond of music and of reading, especially German philosophy, literature and the novels of Dickens. Both parents were deeply religious high-church Anglicans. Catherine's sister, Maria Smith Giberne, taught her nephew Gerard to sketch. 
The interest was supported by his uncle, Edward Smith, his great-uncle Richard James Lane, a professional artist, and many other family members. Hopkins's first ambition was to be a painter, and he would continue to sketch throughout his life, inspired as an adult by the work of John Ruskin and the Pre-Raphaelites. Hopkins became a skilled draughtsman, and found his early training in visual art supported his later work as a poet. His siblings were greatly inspired by language, religion and the creative arts. Milicent (1849–1946) joined an Anglican sisterhood in 1878. Kate (1856–1933) would go on to help Hopkins publish the first edition of his poetry. Hopkins's youngest sister Grace (1857–1945) set many of his poems to music. Lionel (1854–1952) became a world-famous expert on archaic and colloquial Chinese. Arthur (1848–1930) and Everard (1860–1928) were both highly successful artists. Cyril (1846–1932) was to join his father's insurance firm. Manley Hopkins moved his family to Hampstead in 1852, near where John Keats had lived thirty years before and close to the wide green spaces of Hampstead Heath. When he was ten years old, Gerard was sent to board at Highgate School (1854–1863). While studying Keats's poetry, he wrote "The Escorial" (1860), his earliest extant poem. Here he practised early attempts at asceticism. He once argued that most people drank more liquids than they really needed and bet that he could go without drinking for a week. He persisted until his tongue was black and he collapsed at drill. On another occasion, he abstained from salt for a week. Among his teachers at Highgate was Richard Watson Dixon, who became an enduring friend and correspondent, and among the older pupils Hopkins recalls in his boarding house was the poet Philip Stanhope Worsley, who won the Newdigate Prize. Hopkins studied classics at Balliol College, Oxford (1863–1867). 
He began his time in Oxford as a keen socialite and prolific poet, but he seems to have alarmed himself with the changes in his behaviour that resulted. At Oxford he forged a lifelong friendship with Robert Bridges (eventual Poet Laureate of the United Kingdom), which would be of importance in his development as a poet and in establishing his posthumous acclaim. Hopkins was deeply impressed with the work of Christina Rossetti and she became one of his greatest contemporary influences, meeting him in 1864. During this time he studied with the eminent writer and critic Walter Pater, who tutored him in 1866 and remained a friend until Hopkins left Oxford in September 1879. In a journal entry dated 6 November 1865, Hopkins declared an ascetic intention for his life and work. "On this day by God's grace I resolved to give up all beauty until I had His leave for it." On 18 January 1866, Hopkins composed his most ascetic poem, "The Habit of Perfection". On 23 January, he included poetry in the list of things to be given up for Lent. In July, he decided to become a Roman Catholic and travelled to Birmingham in September to consult the leader of the Oxford converts, John Henry Newman. Newman received him into the Roman Catholic Church on 21 October 1866. The decision to convert estranged him from his family and from a number of his acquaintances. After his graduation in 1867, Hopkins was provided by Newman with a teaching post at the Oratory in Birmingham. While there he began to study the violin. On 5 May 1868 Hopkins firmly "resolved to be a religious." Less than a week later, he made a bonfire of his poems and gave up poetry almost entirely for seven years. He also felt the call to enter the ministry and decided to become a Jesuit. He paused to first visit Switzerland, which officially forbade Jesuits to enter. Hopkins began his Jesuit novitiate at Manresa House, Roehampton, in September 1868. 
Two years later, he moved to St Mary's Hall, Stonyhurst, for his philosophical studies, taking vows of poverty, chastity and obedience on 8 September 1870. He felt his interest in poetry had stopped him devoting himself wholly to religion. However, on reading Duns Scotus in 1872, he saw that the two need not conflict. He continued to write a detailed prose journal between 1868 and 1875. Unable to suppress his desire to describe the natural world, he also wrote music, sketched, and for church occasions wrote some "verses," as he called them. He would later write sermons and other religious pieces. In 1874 Hopkins returned to Manresa House to teach classics. While he was studying in the Jesuit house of theological studies, St Beuno's College, near St Asaph in North Wales, he was asked by his religious superior to write a poem to commemorate the foundering of a German ship in a storm. So in 1875, he was moved to take up poetry once more and write a lengthy poem, "The Wreck of the Deutschland". The work was inspired by the "Deutschland" incident, a maritime disaster in which 157 people died, including five Franciscan nuns who had been leaving Germany due to harsh anti-Catholic laws (see Kulturkampf). The work displays both the religious concerns and some of the unusual metre and rhythms of his subsequent poetry not present in his few remaining early works. It not only depicts the dramatic events and heroic deeds but also tells of the poet's reconciling the terrible events with God's higher purpose. The poem was accepted but not printed by a Jesuit publication. This rejection fed his ambivalence about his poetry. Most of his poetry remained unpublished until after his death. Hopkins chose the austere and restrictive life of a Jesuit and was at times gloomy. Biographer Robert Bernard Martin notes that "the life expectancy of a man becoming a novice at twenty-one was twenty-three more years rather than the forty years of males of the same age in the general population." 
The brilliant student who had left Oxford with a first-class honours degree failed his final theology exam. This failure almost certainly meant that, although ordained in 1877, Hopkins would not progress in the order. In 1877 he wrote a series of sonnets that included "God's Grandeur" and "The Starlight Night". He finished "The Windhover" only a few months before his ordination. Though rigorous, isolated and sometimes unpleasant, his life during Jesuit training had at least some stability; the uncertain and varied work after ordination was even harder on his sensibilities. In October 1877, not long after he completed "The Sea and the Skylark" and only a month after he had been ordained as a priest, Hopkins took up his duties as sub-minister and teacher at Mount St Mary's College near Sheffield. In July 1878 he became curate at the Jesuit church in Mount Street, London, and in December that of St Aloysius's Church, Oxford, then moving to Manchester, Liverpool and Glasgow. While ministering in Oxford, he became a founding member of The Newman Society, a society established in 1878 for Catholic members of the University of Oxford. He taught Greek and Latin at Mount St Mary's College, Sheffield, and Stonyhurst College, Lancashire. In the late 1880s Hopkins met Father Matthew Russell, the Jesuit founder and editor of the "Irish Monthly" magazine, who presented him to Katharine Tynan and W. B. Yeats. In 1884 he became professor of Greek and Latin at University College Dublin. His English roots and his disagreement with the Irish politics of the time, as well as his own small stature (5 feet 2 inches), unprepossessing nature and personal oddities meant that he was not a particularly effective teacher. This, as well as his isolation in Ireland, deepened his gloom. His poems of the time, such as "I Wake and Feel the Fell of Dark, not Day", reflected this. 
They came to be known as the "terrible sonnets", not because of their quality but because according to Hopkins's friend Canon Richard Watson Dixon, they reached the "terrible crystal", meaning that they crystallised the melancholic dejection that plagued the later part of Hopkins's life. Several issues brought about this melancholic state and restricted his poetic inspiration during the last five years of his life. His workload was extremely heavy. He disliked living in Dublin, away from England and friends; he was also disappointed at how far the city had fallen from its Georgian elegance of the previous century. His general health deteriorated as his eyesight began to fail. He felt confined and dejected. As a devout Jesuit, he found himself in an artistic dilemma. To subdue any egotism which would violate the humility required by his religious position, he decided never to publish his poems. But Hopkins realised that any true poet requires an audience for criticism and encouragement. This conflict between his religious obligations and his poetic talent caused him to feel that he had failed them both. After suffering ill health for several years and bouts of diarrhoea, Hopkins died of typhoid fever in 1889 and was buried in Glasnevin Cemetery, following his funeral in Saint Francis Xavier Church on Gardiner Street, located in Georgian Dublin. He is thought to have suffered throughout his life from what today might be diagnosed as either bipolar disorder or chronic unipolar depression, and battled a deep sense of melancholic anguish. However, on his death bed, his last words were, "I am so happy, I am so happy. I loved my life." He was 44 years of age. According to John Bayley, "All his life Hopkins was haunted by the sense of personal bankruptcy and impotence, the straining of 'time's eunuch' with no more to 'spend'... " a sense of inadequacy, graphically expressed in his last sonnets. Toward the end of his life, Hopkins suffered several long bouts of depression. 
His "terrible sonnets" struggle with problems of religious doubt. He described them to Bridges as "[t]he thin gleanings of a long weary while." "Thou Art Indeed Just, Lord" (1889) echoes "Jeremiah" 12:1 in asking why the wicked prosper. It reflects the exasperation of a faithful servant who feels he has been neglected, and is addressed to a divine person ("Sir") capable of hearing the complaint, but seemingly unwilling to listen. Hopkins uses parched roots as a metaphor for despair. The image of the poet's estrangement from God figures in "I wake and feel the fell of dark, not day", in which he describes lying awake before dawn, likening his prayers to "dead letters sent To dearest him that lives alas! away." The opening line recalls Lamentations 3:2: "He hath led me, and brought me into darkness, but not into light." "No Worst, There is None" and "Carrion Comfort" are also counted among the "terrible sonnets". Much of Hopkins's historical importance has to do with the changes he brought to the form of poetry, which ran contrary to conventional ideas of metre. Prior to Hopkins, most Middle English and Modern English poetry was based on a rhythmic structure inherited from the Norman side of English literary heritage. This structure is based on repeating "feet" of two or three syllables, with the stressed syllable falling in the same place on each repetition. Hopkins called this structure "running rhythm", and although he wrote some of his early verse in running rhythm, he became fascinated with the older rhythmic structure of the Anglo-Saxon tradition, of which "Beowulf" is the most famous example. Hopkins called his own rhythmic structure sprung rhythm. Sprung rhythm is structured around feet with a variable number of syllables, generally between one and four syllables per foot, with the stress always falling on the first syllable in a foot. It is similar to the "rolling stresses" of Robinson Jeffers, another poet who rejected conventional metre. 
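As a toy illustration only (Hopkins never formalised the rule this way), the contrast described above can be checked mechanically: running rhythm repeats a fixed two- or three-syllable foot, while a sprung-rhythm foot runs from one to four syllables and always opens on its single stress. The `/` and `x` notation and the `sprung_feet` name are invented for this sketch.

```python
def sprung_feet(pattern: str) -> list[str]:
    """Split a stress pattern into sprung-rhythm feet.

    '/' marks a stressed syllable and 'x' an unstressed one (a notation
    invented for this sketch).  Per the rule described above, each foot
    begins on its stress and contains at most four syllables.
    """
    feet: list[str] = []
    current = ""
    for syllable in pattern:
        if syllable == "/":
            if current:
                feet.append(current)
            current = "/"  # a stress always opens a new foot
        elif syllable == "x":
            if not current:
                raise ValueError("sprung rhythm: a foot must open on a stress")
            current += "x"
        else:
            raise ValueError(f"unknown symbol {syllable!r}")
    if current:
        feet.append(current)
    if any(len(foot) > 4 for foot in feet):
        raise ValueError("sprung rhythm: feet have at most four syllables")
    return feet

# Unlike running rhythm's uniform feet, the foot length here varies freely:
print(sprung_feet("/xx/x//xxx"))  # -> ['/xx', '/x', '/', '/xxx']
```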
Hopkins saw sprung rhythm as a way to escape the constraints of running rhythm, which he said inevitably pushed poetry written in it to become "same and tame". In this way, Hopkins's sprung rhythm can be seen as anticipating much of free verse. His work has no great affinity with either the contemporary Pre-Raphaelite or neo-romantic school, although he does share their descriptive love of nature and he is often seen as a precursor to modernist poetry, or as a bridge between the two poetic eras. The language of Hopkins's poems is often striking. His imagery can be simple, as in "Heaven-Haven", where the comparison is between a nun entering a convent and a ship entering a harbour out of a storm. It can be splendidly metaphysical and intricate, as it is in "As Kingfishers Catch Fire", where he leaps from one image to another to show how each thing expresses its own uniqueness, and how divinity reflects itself through all of them. Hopkins was a supporter of linguistic purism in English. In an 1882 letter to Robert Bridges, Hopkins writes: "It makes one weep to think what English might have been; for in spite of all that Shakespeare and Milton have done... no beauty in a language can make up for want of purity." He took time to learn Old English, which became a major influence on his writing. In the same letter to Bridges he calls Old English "a vastly superior thing to what we have now." He uses many archaic and dialect words but also coins new words. One example of this is "twindles", which seems from its context in "Inversnaid" to mean a combination of "twines" and "dwindles". He often creates compound adjectives, sometimes with a hyphen (such as "dapple-dawn-drawn falcon") but often without, as in "rolling level underneath him steady air". This use of compound adjectives, similar to the Old English use of compound nouns, concentrates his images, communicating to his readers the instress of the poet's perceptions of an inscape. 
Added richness comes from Hopkins's extensive use of alliteration, assonance, onomatopoeia and rhyme, both at the end of lines and internally. Hopkins was influenced by the Welsh language, which he had acquired while studying theology at St Beuno's near St Asaph. The poetic forms of Welsh literature and particularly cynghanedd, with its emphasis on repeating sounds, accorded with his own style and became a prominent feature of his work. This reliance on similar-sounding words with close or differing senses means that his poems are best understood if read aloud. An important element in his work is Hopkins's own concept of "inscape", which was derived in part from the medieval theologian Duns Scotus. Anthony Domestico explains: "Inscape, for Hopkins, is the charged essence, the absolute singularity that gives each created thing its being; instress is both the energy that holds the inscape together and the process by which this inscape is perceived by an observer. We instress the inscape of a tulip, Hopkins would say, when we appreciate the particular delicacy of its petals, when we are enraptured by its specific, inimitable shade of pink." "The Windhover" aims to depict not the bird in general, but instead one instance and its relation to the breeze. This is just one interpretation of Hopkins's most famous poem, one which he felt was his best. During his lifetime, Hopkins published few poems. It was only through the efforts of Robert Bridges that his works were seen. Despite Hopkins burning all his poems on entering the Jesuit novitiate, he had already sent some to Bridges, who, with some other friends, was one of the few people to see many of them for some years. 
After Hopkins's death they were distributed to a wider audience, mostly fellow poets, and in 1918 Bridges, by then poet laureate, published a collected edition; an expanded edition, prepared by Charles Williams, appeared in 1930, and a greatly expanded edition by William Henry Gardner appeared in 1948 (eventually reaching a fourth edition, 1967, with N. H. Mackenzie). Notable collections of Hopkins's manuscripts and publications are in Campion Hall, Oxford; the Bodleian Library, Oxford; and the Foley Library at Gonzaga University in Spokane, Washington. Timothy d'Arch Smith, antiquarian bookseller, ascribes to Hopkins suppressed erotic impulses which he views as taking on a degree of specificity after Hopkins met Robert Bridges's distant cousin, friend, and fellow Etonian Digby Mackworth Dolben, "a Christian Uranian". Robert Martin asserts that when Hopkins first met Dolben, on Dolben's 17th birthday in Oxford in February 1865, it "was, quite simply, the most momentous emotional event of [his] undergraduate years, probably of his entire life." According to Robert Martin, "Hopkins was completely taken with Dolben, who was nearly four years his junior, and his private journal for confessions the following year proves how absorbed he was in imperfectly suppressed erotic thoughts of him." Martin also considers it "probable that [Hopkins] would have been deeply shocked at the reality of sexual intimacy with another person." Hopkins composed two poems about Dolben, "Where art thou friend" and "The Beginning of the End". Robert Bridges, who edited the first edition of Dolben's poems as well as Hopkins's, cautioned that the second poem "must never be printed," though Bridges himself included it in the first edition (1918). Another indication of the nature of his feelings for Dolben is that Hopkins's high Anglican confessor seems to have forbidden him to have any contact with Dolben except by letter. 
Hopkins never saw Dolben again after the latter's short visit to Oxford during which they met, and any continuation of their relationship was abruptly ended by Dolben's drowning two years later in June 1867. Hopkins's feeling for Dolben seems to have cooled by that time, but he was nonetheless greatly affected by his death. "Ironically, fate may have bestowed more through Dolben's death than it could ever have bestowed through longer life ... [for] many of Hopkins's best poems – impregnated with an elegiac longing for Dolben, his lost beloved and his muse – were the result." Hopkins's relationship with Dolben is explored in the novel "The Hopkins Conundrum". Some of Hopkins's poems, such as "The Bugler's First Communion" and "Epithalamion", arguably embody homoerotic themes, although the second poem was arranged by Robert Bridges from extant fragments. One contemporary critic, M. M. Kaylor, argued for Hopkins's inclusion with the Uranian poets, a group whose writings derived, in many ways, from prose works of Walter Pater, Hopkins's academic coach for his Greats exams and later a lifelong friend. Some critics have argued that homoerotic readings are highly tendentious, or that what they identify can be classified under the broader category of "homosociality" rather than under the gender- and sex-specific term "homosexual". Hopkins's journal writings, they argue, offer a clear admiration for feminised beauty. In his book "Hopkins Reconstructed" (2000), Justus George Lawler criticises Robert Martin's controversial biography "Gerard Manley Hopkins: A Very Private Life" (1991) by suggesting that Martin "cannot see the heterosexual beam... for the homosexual biographical mote in his own eye... it amounts to a slanted eisegesis". 
The poems that elicit homoerotic readings can be read not merely as exercises in sublimation but as powerful renditions of religious conviction, a conviction that caused strain in his family and even led him to burn some poems that he felt were unnecessarily self-centred. Julia Saville's book "A Queer Chivalry" views the religious imagery in the poems as Hopkins's way of expressing the tension with homosexual identity and desire. Christopher Ricks notes that Hopkins engaged in a number of penitential practices, "but all of these self-inflictions were not self-inflictions to him, and they are his business – or are his understanding of what it was for him to be about his Father's business." Ricks takes issue with Martin's apparent lack of appreciation of the importance of the role of Hopkins's religious commitment to his writing, and cautions against assigning a priority of influence to any sexual instincts over other factors such as Hopkins's estrangement from his family. Biographer Paul Mariani finds in Hopkins's poems "an irreconcilable tension – on the one hand, the selflessness demanded by Jesuit discipline; on the other, the seeming self-indulgence of poetic creation." Hopkins spent the last five years of his life as a classics professor at University College Dublin. Hopkins's isolation in 1885 was threefold: a Jesuit distanced from his Anglican family and his homeland, an Englishman teaching in Dublin during a time of political strife, and an unpublished poet striving to reconcile his artistic and religious callings. The poem "To seem the stranger" was written in Ireland between 1885 and 1886 and is a poem of isolation and loneliness. Ricks called Hopkins "the most original poet of the Victorian age." Hopkins is considered as influential as T. S. Eliot in initiating the modern movement in poetry. His experiments with elliptical phrasing and double meanings and quirky conversational rhythms turned out to be liberating to poets such as W. H. Auden and Dylan Thomas. 
Well-known works by Hopkins include:
Cis–trans isomerism "Cis"–"trans" isomerism, also known as geometric isomerism or configurational isomerism, is a term used in organic chemistry. The prefixes ""cis"" and ""trans"" are from Latin: "this side of" and "the other side of", respectively. In the context of chemistry, "cis" indicates that the functional groups are on the same side of the carbon chain while "trans" conveys that functional groups are on opposing sides of the carbon chain. Cis-trans isomers are stereoisomers, that is, pairs of molecules which have the same formula but whose functional groups are rotated into a different orientation in three-dimensional space. It is not to be confused with "E"–"Z" isomerism, which is an "absolute" stereochemical description. In general, such isomers contain double bonds that do not rotate, or they may contain ring structures, where the rotation of bonds is restricted or prevented. "Cis" and "trans" isomers occur both in organic molecules and in inorganic coordination complexes. "Cis" and "trans" descriptors are not used for cases of conformational isomerism where the two geometric forms easily interconvert, such as most open-chain single-bonded structures; instead, the terms ""syn"" and ""anti"" are used. The term "geometric isomerism" is considered by IUPAC to be an obsolete synonym of ""cis"–"trans" isomerism". When the substituent groups are oriented in the same direction, the diastereomer is referred to as "cis", whereas, when the substituents are oriented in opposing directions, the diastereomer is referred to as "trans". An example of a small hydrocarbon displaying "cis"–"trans" isomerism is but-2-ene. Alicyclic compounds can also display "cis"–"trans" isomerism; an example of a geometric isomer due to a ring structure is 1,2-dichlorocyclohexane. "Cis" and "trans" isomers often have different physical properties. Differences between isomers, in general, arise from the differences in the shape of the molecule or the overall dipole moment. 
These differences can be very small, as in the case of the boiling point of straight-chain alkenes, such as pent-2-ene, which is 37 °C in the "cis" isomer and 36 °C in the "trans" isomer. The differences between "cis" and "trans" isomers can be larger if polar bonds are present, as in the 1,2-dichloroethenes. The "cis" isomer in this case has a boiling point of 60.3 °C, while the "trans" isomer has a boiling point of 47.5 °C. In the "cis" isomer the two polar C-Cl bond dipole moments combine to give an overall molecular dipole, so that there are intermolecular dipole–dipole forces (or Keesom forces), which add to the London dispersion forces and raise the boiling point. In the "trans" isomer on the other hand, this does not occur because the two C−Cl bond moments cancel and the molecule has a net zero dipole (it does however have a non-zero quadrupole). The two isomers of butenedioic acid have such large differences in properties and reactivities that they were actually given completely different names. The "cis" isomer is called maleic acid and the "trans" isomer fumaric acid. Polarity is key in determining relative boiling point as it causes increased intermolecular forces, thereby raising the boiling point. In the same manner, symmetry is key in determining relative melting point as it allows for better packing in the solid state, even if it does not alter the polarity of the molecule. One example of this is the relationship between oleic acid and elaidic acid; oleic acid, the "cis" isomer, has a melting point of 13.4 °C, making it a liquid at room temperature, while the "trans" isomer, elaidic acid, has the much higher melting point of 43 °C, due to the straighter "trans" isomer being able to pack more tightly, and is solid at room temperature. 
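The example values quoted above can be collected side by side to confirm the stated trends (higher boiling point for the polar "cis" isomer, higher melting point for the better-packing "trans" isomer). The sketch below uses only figures from the text; the dictionary layout is my own.

```python
# Example values quoted in the text, in degrees Celsius.
boiling_points = {  # the polar cis isomer boils higher: its net dipole adds Keesom forces
    "1,2-dichloroethene": {"cis": 60.3, "trans": 47.5},
}
melting_points = {  # the straighter trans isomer packs more tightly and melts higher
    "oleic (cis) / elaidic (trans) acid": {"cis": 13.4, "trans": 43.0},
}

for name, bp in boiling_points.items():
    assert bp["cis"] > bp["trans"], name   # cis boils higher
for name, mp in melting_points.items():
    assert mp["trans"] > mp["cis"], name   # trans melts higher
print("quoted examples follow the stated trends")
```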
Thus, "trans" alkenes, which are less polar and more symmetrical, have lower boiling points and higher melting points, and "cis" alkenes, which are generally more polar and less symmetrical, have higher boiling points and lower melting points. In the case of geometric isomers that are a consequence of double bonds, and, in particular, when both substituents are the same, some general trends usually hold. These trends can be attributed to the fact that the dipoles of the substituents in a "cis" isomer will add up to give an overall molecular dipole. In a "trans" isomer, the dipoles of the substituents will cancel out due to being on opposite sides of the molecule. "Trans" isomers also tend to have lower densities than their "cis" counterparts. As a general trend, "trans" alkenes tend to have higher melting points and lower solubility in inert solvents, as "trans" alkenes, in general, are more symmetrical than "cis" alkenes. Vicinal coupling constants (3"J"HH), measured by NMR spectroscopy, are larger for "trans" (range: 12–18 Hz; typical: 15 Hz) than for "cis" (range: 0–12 Hz; typical: 8 Hz) isomers. Usually for acyclic systems "trans" isomers are more stable than "cis" isomers. This is typically due to the increased unfavorable steric interaction of the substituents in the "cis" isomer. Therefore, "trans" isomers have a less-exothermic heat of combustion, indicating higher thermochemical stability. In the Benson heat of formation group additivity dataset, "cis" isomers suffer a 1.10 kcal/mol stability penalty. Exceptions to this rule exist, such as 1,2-difluoroethylene, 1,2-difluorodiazene (FN=NF), and several other halogen- and oxygen-substituted ethylenes. In these cases, the "cis" isomer is more stable than the "trans" isomer. This phenomenon is called the "cis effect". 
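The vicinal coupling-constant ranges quoted above lend themselves to a simple decision rule. The function below is an illustrative sketch (the name and the treatment of the overlap point are mine; 12 Hz, which falls in both quoted ranges, is reported as ambiguous), not a substitute for full spectral analysis.

```python
def assign_from_coupling(j_hh: float) -> str:
    """Tentative cis/trans assignment of a disubstituted alkene from its
    vicinal 3J(H,H) coupling constant, using the ranges quoted above:
    trans 12-18 Hz (typical 15 Hz), cis 0-12 Hz (typical 8 Hz).
    """
    if 12 < j_hh <= 18:
        return "trans"
    if 0 <= j_hh < 12:
        return "cis"
    # 12 Hz lies in both quoted ranges; values above 18 Hz are atypical.
    return "ambiguous"

print(assign_from_coupling(15))  # typical trans coupling -> trans
print(assign_from_coupling(8))   # typical cis coupling -> cis
```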
The "cis"–"trans" system for naming alkene isomers should generally only be used when there are only two different substituents on the double bond, so there is no confusion about which substituents are being described relative to each other. For more complex cases, the cis/trans designation is generally based on the longest carbon chain, as reflected in the root name of the molecule (i.e. an extension of standard organic nomenclature for the parent structure). The IUPAC standard designations "E"–"Z" are unambiguous in all cases, and therefore are especially useful for tri- and tetrasubstituted alkenes, to avoid any confusion about which groups are being identified as "cis" or "trans" to each other. "Z" (from the German "zusammen") means "together"; "E" (from the German "entgegen") means "opposed" in the sense of "opposite". That is, "Z" has the higher-priority groups "cis" to each other and "E" has the higher-priority groups "trans" to each other. Whether a molecular configuration is designated "E" or "Z" is determined by the Cahn-Ingold-Prelog priority rules; higher atomic numbers are given higher priority. For each of the two atoms in the double bond, it is necessary to determine the priority of each substituent. If both the higher-priority substituents are on the same side, the arrangement is "Z"; if on opposite sides, the arrangement is "E". Because the cis/trans and "E"–"Z" systems compare different groups on the alkene, it is not strictly true that "Z" corresponds to "cis" and "E" corresponds to "trans". For example, "trans"-2-chlorobut-2-ene (the two methyl groups, C1 and C4, on the but-2-ene backbone are "trans" to each other) is ("Z")-2-chlorobut-2-ene (the chlorine and C4 are together because C1 and C4 are opposite). "Cis"–"trans" isomerism can also occur in inorganic compounds, most notably in diazenes (and the related diphosphenes) and in coordination compounds. 
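The E/Z assignment logic described above can be illustrated with a toy sketch. It makes one large simplifying assumption: each substituent's priority is decided by the atomic number of its first atom alone, whereas the real Cahn-Ingold-Prelog rules compare entire substituent trees.

```python
# Toy E/Z assignment from first-atom priorities only (a simplification of
# the full Cahn-Ingold-Prelog rules, which explore whole substituent trees).
ATOMIC_NUMBER = {"H": 1, "C": 6, "N": 7, "O": 8, "F": 9, "Cl": 17, "Br": 35}

def e_or_z(carbon1, carbon2):
    """Each argument is a (side_a, side_b) pair giving the first atom of the
    two substituents on one sp2 carbon; all side_a groups lie on the same
    side of the double bond."""
    def a_side_wins(pair):
        side_a, side_b = pair
        return ATOMIC_NUMBER[side_a] > ATOMIC_NUMBER[side_b]
    # Z when both higher-priority groups lie on the same side of the bond
    return "Z" if a_side_wins(carbon1) == a_side_wins(carbon2) else "E"

# trans-2-chlorobut-2-ene: Cl and the C4 methyl share a side, so the
# higher-priority groups are together -> Z, even though the molecule is
# named "trans" (its two methyl groups are opposite each other).
print(e_or_z(("Cl", "C"), ("C", "H")))  # -> Z
# trans-but-2-ene: the two methyls are on opposite sides -> E
print(e_or_z(("C", "H"), ("H", "C")))   # -> E
```

The first call reproduces the 2-chlorobut-2-ene example from the text, where the "trans"-named molecule carries the "Z" descriptor.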
As with organic compounds, the "cis" isomer of diazene is generally the more reactive of the two; it is the only isomer that can reduce alkenes and alkynes to alkanes, but for a different reason: the "trans" isomer cannot line its hydrogens up suitably to reduce the alkene, while the "cis" isomer, being shaped differently, can. In inorganic coordination complexes with octahedral or square planar geometries, there are also "cis" isomers, in which similar ligands are closer together, and "trans" isomers, in which they are further apart. For example, there are two isomers of square planar Pt(NH3)2Cl2, as explained by Alfred Werner in 1893. The "cis" isomer, whose full name is "cis"-diamminedichloroplatinum(II), was shown in 1969 by Barnett Rosenberg to have antitumor activity, and is now a chemotherapy drug known by the short name cisplatin. In contrast, the "trans" isomer (transplatin) has no useful anticancer activity. Each isomer can be synthesized using the trans effect to control which isomer is produced. For octahedral complexes of formula MX4Y2, two isomers also exist. (Here M is a metal atom, and X and Y are two different types of ligands.) In the "cis" isomer, the two Y ligands are adjacent to each other at 90°, as is true for the two chlorine atoms in "cis"-[Co(NH3)4Cl2]+. In the "trans" isomer, the two Cl atoms are on opposite sides of the central Co atom. A related type of isomerism in octahedral MX3Y3 complexes is facial–meridional (or "fac"/"mer") isomerism, in which different numbers of ligands are "cis" or "trans" to each other. Metal carbonyl compounds can be characterized as "fac" or "mer" using infrared spectroscopy.
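The geometry behind the two MX4Y2 isomers can be checked with a few lines of code. The sketch below is illustrative only: it places the six octahedral ligand sites at the unit vectors along the coordinate axes and classifies every pair of sites as "cis" (90°) or "trans" (180°) by their dot product.

```python
from itertools import combinations

# Six octahedral coordination sites as unit vectors along the axes.
vertices = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# dot == 0  -> sites at 90 deg (cis pair); dot == -1 -> opposite sites (trans pair)
cis = sum(1 for a, b in combinations(vertices, 2) if dot(a, b) == 0)
trans = sum(1 for a, b in combinations(vertices, 2) if dot(a, b) == -1)
print(cis, trans)  # 12 cis pairs, 3 trans pairs
```

Of the 15 possible site pairs, 12 are "cis" and 3 are "trans", which is why placing the two Y ligands of an MX4Y2 complex yields exactly two distinct isomers.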
https://en.wikipedia.org/wiki?curid=12528
George Peppard George Peppard Rohrer Jr. (October 1, 1928 – May 8, 1994) was an American film and television actor. Peppard secured a major role when he starred alongside Audrey Hepburn in "Breakfast at Tiffany's" (1961), and later portrayed a character based on Howard Hughes in "The Carpetbaggers" (1964). On television, he played the title role of millionaire insurance investigator and sleuth Thomas Banacek in the early-1970s mystery series "Banacek". He played Col. John "Hannibal" Smith, the cigar-smoking leader of a renegade commando squad, in the hit 1980s action show "The A-Team". George Peppard, Jr. was born October 1, 1928, in Detroit, Michigan, the son of building contractor George Peppard, Sr. and opera singer and voice teacher Vernelle Rohrer. His mother had five miscarriages before George. His family lost all its money in the Depression, and his father had to leave George and his mother in Detroit while he went looking for work. He graduated from Dearborn High School in Dearborn, Michigan, in 1946. Peppard enlisted in the United States Marine Corps on July 8, 1946, and rose to the rank of corporal, leaving the Corps at the end of his enlistment in January 1948. During 1948 and 1949, he studied civil engineering at Purdue University, where he was a member of the Purdue Playmakers theatre troupe and the Beta Theta Pi fraternity. He became interested in acting, being an admirer of Walter Huston in particular. "I just decided I didn't want to be an engineer," he said later. "It was the best decision I ever made." He then transferred to the Carnegie Institute of Technology (now Carnegie Mellon University) in Pittsburgh, Pennsylvania, where he earned his bachelor's degree in 1955. (It took longer than normal because he dropped out for a year when his father died in 1951 and Peppard had to finish his father's jobs.) He also trained at the Pittsburgh Playhouse. In addition to acting, Peppard was a pilot. 
He spent a portion of his 1966 honeymoon training to fly his Learjet in Wichita, Kansas. Peppard made his stage debut in 1949 at the Pittsburgh Playhouse. After moving to New York City, Peppard enrolled in the Actors Studio, where he studied the Method with Lee Strasberg. He did a variety of jobs to pay his way during this time, such as working as a disc jockey, being a radio station engineer, teaching fencing, driving a taxi and being a mechanic in a motorcycle repair shop. He worked in summer stock in New England and appeared at the open-air Oregon Shakespeare Festival in Ashland, Oregon for two seasons. He worked as a cab driver until getting his first part, on "Lamp Unto My Feet". He appeared with Paul Newman in "The United States Steel Hour" (1956) as the singing, guitar-playing baseball player Piney Woods in "Bang the Drum Slowly", directed by Daniel Petrie. He appeared in an episode of "Kraft Theatre", "Flying Object at Three O'Clock High" (1956). In April 1956 he appeared in a segment of an episode of "Camera Three", performing from "The Shoemaker's Holiday"; the "New York Times" called his performance "beguiling". In July 1956 he signed to make his film debut in "The Strange One", directed by Jack Garfein and based on the play "End as a Man". It was the first film for Garfein as director and Calder Willingham as producer, and the screen debut of Peppard, Ben Gazzara, Geoffrey Horne, Pat Hingle, Arthur Storch and Clifton James. Filming took place in Florida. "I wouldn't say I was nervous," said Peppard, "just excited." On his return to New York he did "Out to Kill" on TV for "Kraft". In September he joined the cast of "Girls of Summer", directed by Jack Garfein, with Shelley Winters, Storch and Hingle, plus a title song by Stephen Sondheim. This reached Broadway in November. Brooks Atkinson said Peppard "expertly plays a sly, malicious dance teacher." It only had a short run. 
The bulk of his work around this stage was for television: "The Kaiser Aluminum Hour" ("A Real Fine Cutting Edge", directed by George Roy Hill), "Studio One in Hollywood" ("A Walk in the Forest"), "The Alcoa Hour" ("The Big Build-Up" with E.G. Marshall), "Matinee Theatre" ("End of the Rope" with John Drew Barrymore, "Thread That Runs So True", "Aftermath"), "Kraft Theatre" ("The Long Flight"), "Alfred Hitchcock Presents" ("The Diplomatic Corpse", with Peter Lorre, directed by Paul Henreid), and "Suspicion" ("The Eye of Truth" with Joseph Cotten, based on a script by Eric Ambler). "The Strange One" came out in April 1957 but despite some strong reviews - the "New York Times" called Peppard "resolute" - it was not a financial success. In September 1957 he appeared in a tryout of "The Minotaur", a play by Robert Thom directed by Sidney Lumet. Peppard played a key role in "Little Moon of Alban" (1958) alongside Christopher Plummer for the Hallmark Hall of Fame; the "Los Angeles Times" called him "excellent". In May, Peppard played his second film role, a supporting part in the Korean War movie "Pork Chop Hill" (1959), directed by Lewis Milestone. He was cast in part because he was unfamiliar to moviegoers. In October 1958 Peppard appeared on Broadway in "The Pleasure of His Company" (1958), starring Cyril Ritchard, who also directed. Peppard played the boyfriend who wants to marry Ritchard's daughter, played by Dolores Hart; the "New York Times" called Peppard "admirable". The play was a hit and ran for a year. During the show's run Peppard auditioned successfully for MGM's "Home from the Hill" (1960), and the studio signed him to a long-term contract - which he had not wanted, but which was a condition for the film. In February 1959 Hedda Hopper announced Peppard would leave "Company" to make two films for MGM: "Home from the Hill" and "The Subterraneans". "Home from the Hill" was a prestigious film directed by Vincente Minnelli and starring Robert Mitchum, who played Peppard's father. 
It featured several young actors MGM were hoping to develop, including Peppard, George Hamilton and Luana Patten. During filming Peppard said that Brando was "a dead talent" after seeing him in "The Young Lions", but called Gregory Peck "a man of integrity as a star and a person. Lee Strasberg is the only person I know who is brilliant." "I want to be an actor and proud of my craft," said Peppard. "I would like to be an actor who is starred, but being a star is something you can't count on, whereas acting is something I can work on." It was a success at the box office, although the film's high cost meant that it was not profitable. Peppard's next film for MGM was "The Subterraneans", an adaptation of the 1958 novel by Jack Kerouac, co-starring Leslie Caron. It flopped, and Peppard said "I couldn't get arrested" afterwards. He had meant to follow "The Subterraneans" by returning to Broadway with Julie Harris in "The Warm Peninsula", but this did not happen. In April 1959 Hedda Hopper said he would be in "Chautauqua", but that was not made until a decade later, starring Elvis Presley, as "The Trouble with Girls" (1969). At the end of 1959 Hopper predicted Peppard would be a big star, saying "he has great emotional power, is a fine athlete, and does offbeat characters such as James Dean excelled in." Sol Siegel announced he would play the lead in "Two Weeks in Another Town" (Kirk Douglas ended up playing it). He was also announced for the role of Arthur Blake in a film about the first Olympics called "And Seven from America", which was never made. Peppard returned to television to star in an episode of the anthology series "Startime", "Incident at a Corner" (1960), under the direction of Alfred Hitchcock, alongside Vera Miles. He played Teddy Roosevelt on television in an episode of "Our American Heritage", "The Invincible Teddy" (1961). 
His good looks, elegant manner and superior acting skills landed Peppard his most famous film role, as Paul Varjak in "Breakfast at Tiffany's" with Audrey Hepburn, based on a story by Truman Capote. Director Blake Edwards had not wanted Peppard, but was overruled by the producers. He was cast in July 1960. During filming Peppard did not get along with Hepburn or Patricia Neal, the latter calling him "cold and conceited". In November 1961 a newspaper article dubbed him "the next big thing". Peppard said he had turned down two TV series and was "concentrating on big screen roles." His contract with MGM was for two pictures a year, allowing for one outside film and six TV appearances a year, plus the right to star in a play every second year. "In a series you don't have time to develop a character," he said. "There's no build up; in the first segment you're already established." He was meant to appear in "Unarmed in Paradise", which was not made. He bought a script by Robert Blees called "Baby Talk", but it was not made either. Instead MGM cast him in the lead of their epic western "How the West Was Won" (1962); his character spanned three sections of the episodic Cinerama extravaganza. It was a massive hit. He followed this with a war story for Carl Foreman, "The Victors" (1963), made in Europe. He was offered $200,000 to appear in "The Long Ships" but did not want to go to Yugoslavia for six months. He was going to do "Next Time We Love" with Ross Hunter, but it was never made. He starred in "The Carpetbaggers", a 150-minute saga of a ruthless, Hughes-like aviation and film mogul, based on a best-selling novel by Harold Robbins. The cast included Elizabeth Ashley, who had an affair with Peppard during filming and later married him. She described him as "some kind of Nordic god - six feet tall with beautiful blond hair, blue eyes and a body out of every high school cheerleader's teenage lust fantasy." 
Ashley claimed Peppard "was never late on set and he had nothing but scorn for actors who weren't professional enough to keep that together." She added that Peppard: "Never was one of those actors who believes his job is to take the money, hit the mark and say the lines and let it go at that. He felt that as an above-the-title star he had the responsibility to use his muscle and power to try and make it better and that has never stopped in him. He was unrelenting about it, to the point where a lot of executives and directors came to feel he was a pain in the ass. But the really talented people loved working with him because of all his wonderful creative energy." "My performances bore me," said Peppard in a 1964 interview, adding that his ambition was to deliver "one great performance. And I must say I feel a little presumptuous to shoot for that. But that's the goal, like a hockey goal. I figure I've got a choice ... not of the outcome but of the objective. And my objective is that one performance." Peppard returned to television to do "Bob Hope Presents the Chrysler Theatre", "The Game with Glass Pieces". For MGM he appeared in "Operation Crossbow" (1965), a war film with Sophia Loren. It was the first film he made under a new contract with MGM to do one movie a year for three years. He was meant to follow this with an adaptation of the play "Merrily We Roll Along", but it was never made. "I'm an actor not a star," he said around this time, adding that he looked for "three things" in a film: "a good director, a good part and a good script. If I get two out of three of those I'm satisfied." Peppard starred in a thriller, "The Third Day" (1965), with Ashley, who had become his second wife. The film was directed by Jack Smight, who claimed Warner Bros. only agreed to finance it because they had a deal with Peppard. Peppard said that when he made the film, "I wasn't just broke, I was up to my ears in debt." 
He was announced for "The Last Night of Don Juan" for Michael Gordon, but it was not made. He was cast as the lead in "Sands of the Kalahari" (1965) at a fee of $200,000, but walked off the set after only a few days of filming in March 1965 and had to be replaced by Stuart Whitman. Paramount sued Peppard for $930,555 in damages and he countersued. Ashley later wrote: "What tormented George so badly was that he was caught between being an actor and a movie star. He did not start off as an untalented pretty nothing who had to be grateful for any piece of meat that was thrown his way. He was intelligent and talented but because he was six foot tall with blond hair and blue eyes he had been put in the slot of being a movie star at a time when the movie studios were still very powerful and expected you to play the game by their rules... I don't think it was possible to be a male movie star who looked like he did and got hot when he did and not be trapped by it." He had a huge hit with "The Blue Max" (1966), playing a German World War One ace alongside James Mason and Ursula Andress, directed by John Guillermin. Peppard played a German Jew fighting for the Allies in "Tobruk" (1967) alongside Rock Hudson. "It's a big mistake to think I'm making a lot of money and turning out a lot of crap," he said in a 1966 interview. Peppard wanted to ensure financial security, so he bought a cattle ranch, but it required funds. This prompted Peppard to sign a multi-million-dollar five-picture non-exclusive contract with Universal in August 1966 - two films for the first year, one a year after that. Ashley claimed this ultimately hurt Peppard's career. The first two films under the contract were "Rough Night in Jericho" (1967), a Western with Dean Martin, and "What's So Bad About Feeling Good?" (1968), a comedy directed by George Seaton with Mary Tyler Moore; it was followed by a private eye film directed by Guillermin, "P.J."
(1968), and "House of Cards" (1968), a thriller directed by Guillermin and shot in Europe. None of these films were particularly successful at the box office. Ashley says doing these films caused Peppard to start drinking. She also claimed Peppard turned down "The Heart is a Lonely Hunter" because he did not want to play a weak or possibly homosexual character. In 1967 he bought a script, "Midnight Fair" by Sheridan Greenway, to produce. In 1968 he announced he had co-written a script, "Watch Them Die", which he planned to direct but not star in. It was never made; neither was a version of "The Most Dangerous Game" for MGM, announced in 1967. He did a thriller, "Pendulum" (1969), directed by George Schaefer with Jean Seberg, and went to England to make "The Executioner" (1970) with Joan Collins. In "Cannon for Cordoba" (1970) he played the steely Captain Rod Douglas, who is put in charge of gathering a group of soldiers for a dangerous mission into Mexico. It was not a success; neither was "One More Train to Rob" (1971), another Western. Ashley wrote that "he became more and more frustrated and disillusioned from hating the kind of pictures he had to do. There were no good scripts, no good directors and at some point it became icily clear that there weren't going to be any." In September 1970 he toured Vietnam with a USO show. In March 1971 Peppard announced his company, Tradewind Productions, had optioned a novel by Stanley Ellin, "The Eighth Circle", but it was not made. Peppard starred in a Western TV movie, "The Bravos" (1972), with Pernell Roberts. He returned to features with "The Groundstar Conspiracy" (1972), co-starring Michael Sarrazin, shot in Canada for Universal; Peppard's fee was $400,000. In August 1971 Peppard signed to star in "Banacek" (1972–1974), part of "The NBC Mystery Movie" series, starring in 90-minute whodunits as a wealthy Boston playboy who solves thefts for insurance companies for a finder's fee. 
Sixteen regular episodes were produced over two seasons. Peppard also did some second-unit directing. "Ever since "The Carpetbaggers" I've played the iron-jawed cold-eyed killer and that gets to be a goddamned bore," he said in 1972. "Acting is not the most creative thing in the world and when you play a man of action it gets to be a long day. Banacek is the best character I've played in a long time." In February 1972 Peppard stood trial in Boston for attempting to rape a stripper in his hotel room. He was cleared of the charges. The same year he and Ashley were divorced, with Peppard to pay her $2,000 a month in alimony plus $350 a month in child support for their son Christopher. Peppard starred in "Newman's Law" (1974), an action film originally called "Newman". When "Banacek" ended, Peppard wanted to take time off to focus on producing and directing, including a project called "The Total Beast". However, alimony and child-support obligations forced him back to acting. He made two TV movies in 1975: "One of Our Own", a medical drama, and a drama in which he played Sam Sheppard, for which his fee was $100,000. "One of Our Own" had been a pilot for a TV series, which was picked up: "Doctors' Hospital" (1975) lasted 15 episodes. Peppard starred in the science-fiction film "Damnation Alley" (1977), which has gone on to attain a substantial cult following. Peppard's role in the film was reportedly turned down by Steve McQueen because of salary issues. The movie cost $8.5 million; Peppard said Jack Smight's original cut was "wonderful" but claimed the film was re-edited by executives. With fewer interesting roles coming his way, he acted in, directed and produced the drama "Five Days from Home" in 1979. Peppard later said the low point of his career came over a three-year period around the time of "Five Days from Home". "It was a bad time," he said in 1983. "I was heavily in debt. My career seemed to be going nowhere. Not much work over a three-year period. 
Every morning I'd wake up and realize I was getting deeper and deeper into debt." He had to sell his car and take out a second mortgage on his home to finance "Five Days from Home". Eventually, he got his money back and was able to concentrate on his career. "I'm quite proud of it," he said in 1979. "I sold many assets to help make it but I don't mind. It was the best time of my life." He had the lead in the TV movies "Crisis in Mid-air" (1979) and "Torn Between Two Lovers" (1979) and went to Europe for "From Hell to Victory" (1979). In a rare game show appearance, Peppard did a week of shows on "Password Plus" in 1979, in which he could often be seen smoking cigarettes while filming. Of the five shows, the first was never broadcast on NBC because of on-camera comments Peppard made about his dissatisfaction with his treatment by the NBC officials who supervised the production; it aired much later on GSN and Buzzr. As a result, Goodson-Todman banned Peppard from ever appearing on their game shows again, and the pulled episode proved costly, as an extra episode had to be filmed two weeks later to replace it. In April 1979 Peppard said "I want to act again - and I need a good role. The Sam Shepherd story I did for TV was the only good role I've had in the last seven to ten years." He added that he was developing two movies and a TV drama series, plus an educational series. In 1980, Peppard was offered, and accepted, the role of Blake Carrington in the television series "Dynasty". During the filming of the pilot episode, which also featured Linda Evans and Bo Hopkins, Peppard repeatedly clashed with the show's producers, Richard and Esther Shapiro; among other things, he felt that his role was too similar to that of J. R. Ewing in the series "Dallas". 
Three weeks later, before filming was to begin on additional episodes, Peppard was fired and the part was offered to John Forsythe; the scenes with Peppard were re-shot and Forsythe became the permanent star of the show. "It was a big blow," Peppard noted subsequently, adding that he felt Forsythe ultimately did "a better job (as Blake Carrington) than I could have done." Ironically, this left him available to be cast in NBC's "The A-Team", which became one of the top-rated shows on television in its first season. "I'm so glad I wasn't drinking," he said later (he had given up in 1979). "I bet a lot of people thought when I did certain things, I had been drinking and now they found out it wasn't the booze at all. It was me." Before then he had an excellent part as a cowboy in "Battle Beyond the Stars" (1980), a popular science fiction film. He travelled to Canada to make "Your Ticket Is No Longer Valid" (1981) with Richard Harris, to New Zealand for "Race for the Yankee Zephyr" (1982) and to Spain for "Hit Man" (1982). "I almost disappeared for a while, between ages 45 and 55," he later reflected. "Nobody wants to work with someone who quits three series. They think you're insane to quit a series with all the millions of dollars to be made there. It gets to be like crossing the mob. You find out some people you thought were your friends aren't really." In 1982, Peppard auditioned for and won the role of Colonel John "Hannibal" Smith in the television action adventure series "The A-Team", acting alongside Mr. T, Dirk Benedict and Dwight Schultz. In the series, the A-Team was a team of renegade commandos on the run from the military for "a crime they did not commit" while serving in the Vietnam War. The A-Team members made their collective living as soldiers of fortune, but they helped only people who came to them with justified grievances. 
As "Hannibal" Smith, Peppard played the leader of the A-Team, distinguished by his cigar smoking, confident smirk, black leather gloves, disguises, and distinctive catchphrase, "I love it when a plan comes together." Peppard was attracted to the role partly because Smith was a master of disguise, enabling Peppard to play a variety of characters. "I love the character of Hannibal," he said. "It inspires my fantasy. And, frankly, I need the money." "I wanted to change from leading man to character actor for years now but have never been given the chance before," he added. The show started filming in late 1982 and premiered in January 1983. It was an instant ratings success, going straight into the top ten most watched shows in the country. The series ran five seasons on NBC from 1983 to 1987, made Peppard known to a new generation and is arguably his best-known role. His fee was reportedly $50,000 an episode, later rising to $65,000, making him one of the best-paid stars on television. Peppard said that in the first year of the show "it was kind of like Monty Python - absolutely ridiculous. It was fresh, it was fun, it was silly - building an airplane out of a lawn-mower engine - fun stuff done very straight." After that, though, "it became very boring to me and not very good." It has been reported that the role was originally written with James Coburn in mind, but Coburn declined, and thus it went to Peppard. Peppard was reportedly annoyed by Mr. T upstaging him in his public image, and at one point in their relationship refused to speak directly to Mr. T, instead sending messages through intermediaries (including at times fellow cast members, particularly Dirk Benedict); for this, Peppard was occasionally portrayed by the press as not a team player. Melinda Culea claimed it was Peppard who got her fired after the first season. "It's the first time I ever had money in the bank," Peppard said later. 
"Four California divorces and 25 years of alimony will see to it you have no money in the bank. It was a giant boost to my career, and made me a viable actor for other roles." During the series' run Peppard guest starred on the "Tales of the Unexpected" episode "The Dirty Detail" (1983). Peppard's last series was an intended occasional run of television movies entitled "Man Against the Mob" (1988), set in the 1940s. In these TV detective films, Peppard played Los Angeles Police Detective Sgt. Frank Doakey. A second film was broadcast in December 1989. A third film in the series was planned, but Peppard died before it was filmed. In his later years Peppard appeared in several stage productions. In 1988 he portrayed Ernest Hemingway in the play "PAPA", which played a number of cities including Boise, Idaho; Atlanta, Georgia; and San Francisco. Peppard financed the production and starred in it. In 1988 he said, "Once I saw this thing, I knew that if I was going to do it, I'd have to stick with it. I've got a couple bucks in the bank, so I'm not working on anything else. I got an adrenalin rush when I first read this play - part joy, part fear." Peppard said he understood Hemingway: "We were both married four times; that's one similarity. Up until 10 years ago I used to drink a lot, as he did. And then, he had to deal with living the life of a famous person." The play was well received. Peppard said of his image, "There's a George Peppard out there that I don't know. He's been written about, and various people have interpreted him various ways. There are people who've made up stories, apocryphal, about me. There are people who didn't like me much." He appeared in "Silence Like Glass" (1989) and "Night of the Fox" (1990). In 1989 he said, "I'm afraid I'm typecast. It was discouraging when it first happened. I was sad. I had hoped to do lots of different kinds of roles. But fear and insecurity guides casting decisions. Movies and TV have to make money. 
And people get used to you playing a part and doing certain things. If you don't do it, they get disappointed and it shows up at the box office." In 1990 he was seeking finance for "The Crystal Contract", a film about an international cocaine cartel that he would produce and star in, but it was never made. "I would like to do another series because it would mean steady work - and because I would like one more hit." In 1992 he toured in "The Lion in Winter", in which he played Henry II to Susan Clark's Eleanor of Aquitaine. "I haven't been as happy as I am for a long time," he said. "When you find a part you are right for and you love, it's a source of happiness, believe me... If I could have my wish come true, I'd spend the next two years doing nothing but this play." His last television role was a guest-starring appearance in "The P.I.", broadcast as an episode of "Matlock". Peppard was married five times and was the father of three children. In 1990 he said, "Getting married and having a bad divorce is just like breaking your leg. The same leg, in the same place. I'm lucky I don't walk with a cane." Peppard resided in a Greek revival-style white cottage in the Hollywood Hills, California, with elegant porches on three sides and a guest house in the back. He was living there at the time of his death. Later owned by designer Brenda Antin, who spent a year renovating it, the small home was purchased by writer/actress Lena Dunham in 2015 for $2.699 million. Peppard, who was raised in Dearborn, Michigan, and remains one of its most famous residents along with Ford Motor Company founder Henry Ford and long-serving Congressman John Dingell, wanted to go home; he is buried simply and plainly with his mother and father in the local Northview Cemetery in Dearborn. In April 2017, Peppard's name resurfaced in the media after the cemetery was vandalized for the third time and 37 stones were overturned. The Peppard family stone was not damaged, and the cemetery was subsequently restored. 
Peppard overcame a serious alcohol problem in 1978 and subsequently became deeply involved in helping other alcoholics. "I knew I had to stop and I did," he said in 1983. "Looking back now I'm ashamed of some of the things I did when I was drinking." He had smoked three packs of cigarettes a day for most of his life until he quit after being diagnosed with lung cancer in 1992, having part of one lung removed in an operation shortly after the formal diagnosis. Despite health problems in his later years he continued acting. In 1994, shortly before his death, Peppard completed a pilot with Tracy Nelson for a new series called "The P.I." It aired as an episode of "Matlock" and was to be spun off into a new television series with Peppard playing an aging detective and Nelson his daughter/sidekick. While battling lung cancer, Peppard died of pneumonia on May 8, 1994, in Los Angeles. He was buried in Northview Cemetery, Dearborn, Michigan. In 1990 Peppard said "an enormous amount of my film work has been spent charging up a hill saying, "Follow me, men! This way!" Even though I did "Breakfast at Tiffany's," nobody seemed to think I could do comedy. I always played the man of action. And men of action are not terribly deep characters, and not real vocal characters." He added, "I trained for seven years before I started getting screen work as a stage actor. I love working for an audience. Aside from that, despite all the uniforms and the guns, I think I am at my base a character actor... Being a star has never interested me. Stars are a pain. Stars to me are in the sky. The important question is, "How good an actor are you?" And now I have some hope, because I'm of an age where I could be considered for character roles." Shortly before he died, he said, "If you look at my movie list, you'll see some really good movies and then the start of ones that were not so good. 
But I was making enough money to send my children to good schools, have a house for them and give them a center in their lives."
https://en.wikipedia.org/wiki?curid=12531
Geocaching Geocaching is an outdoor recreational activity, in which participants use a Global Positioning System (GPS) receiver or mobile device and other navigational techniques to hide and seek containers, called "geocaches" or "caches", at specific locations marked by coordinates all over the world. A typical cache is a small waterproof container containing a logbook and sometimes a pen or pencil. The geocacher signs the log with their established code name and dates it, in order to prove that they found the cache. After signing the log, the cache must be placed back exactly where the person found it. Larger containers such as plastic storage containers (Tupperware or similar) or ammunition boxes can also contain items for trading, such as toys or trinkets, usually of more sentimental worth than financial. Geocaching shares many aspects with benchmarking, trigpointing, orienteering, treasure-hunting, letterboxing, waymarking and Munzee. Geocaching was originally similar to the 160-year-old game letterboxing, which uses clues and references to landmarks embedded in stories. Geocaching was conceived shortly after the removal of Selective Availability from the Global Positioning System on May 2, 2000, because the improved accuracy of the system allowed for a small container to be specifically placed and located. The first documented placement of a GPS-located cache took place on May 3, 2000, by Dave Ulmer of Beavercreek, Oregon. The location was posted on the Usenet newsgroup sci.geo.satellite-nav at . Within three days, the cache had been found twice, first by Mike Teague. According to Dave Ulmer's message, this cache was a black plastic bucket that was partially buried and contained software, videos, books, money, a can of beans, and a slingshot. The geocache and most of its contents were eventually destroyed by a lawn mower; the can of beans was the only item salvaged and was turned into a trackable item called the "Original Can of Beans". 
Another geocache and a plaque called the Original Stash Tribute Plaque now sit at the site. Groundspeak allows extraterrestrial caches, e.g. on the Moon or Mars, although at present the website supports only earthbound coordinates. Only one geocache has ever been extraterrestrial: GC1BE91, which was aboard the International Space Station between 2008 and 2017 and used its launch site, Baikonur in Kazakhstan, as its listed position. The activity was originally referred to as the "GPS stash hunt" or "gpsstashing". This was changed shortly after the original hide, when it was suggested in the gpsstash eGroup that "stash" could have negative connotations, and the term "geocaching" was adopted. Over time, a variety of hide-and-seek-type activities have been created or abandoned, so that "geocaching" may now refer to hiding and seeking containers, or locations or information without containers. An independent accounting of the early history documents several controversial actions taken by Irish and Grounded, Inc., a predecessor to Groundspeak, to increase "commercialization and monopolistic control over the hobby". More recently, similar hobbies such as Munzee have attracted some geocachers by rapidly adopting smartphone technology, which has caused "some resistance from geocaching organizers about placing caches along with Munzees". For the traditional geocache, a geocacher will place a waterproof container holding a log book (with pen and/or pencil) and trade items or trackables, then record the cache's coordinates. These coordinates, along with other details of the location, are posted on a listing site (see the list of sites below). Other geocachers obtain the coordinates from that listing site and seek out the cache using their handheld GPS receivers. The finding geocachers record their exploits in the logbook and online, but must then return the cache to the same coordinates so that other geocachers may find it. 
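Seeking a cache comes down to comparing the receiver's current GPS fix with the posted coordinates. A minimal sketch of that distance check, using a spherical haversine approximation (the coordinates and the mean Earth radius here are illustrative assumptions, not values from any real cache listing):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points,
    using a spherical Earth of mean radius 6,371,000 m (an approximation;
    GPS receivers use the WGS-84 ellipsoid internally)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical example: how far is a searcher at 45.45800, -122.41800
# from a cache posted at 45.45861, -122.41833?
print(round(haversine_m(45.45800, -122.41800, 45.45861, -122.41833)), "m")
```

At the tens-of-metres range reported here, the searcher is close enough that consumer GPS error (typically a few metres) starts to matter, which is why the final approach is usually done by eye rather than by receiver.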
Geocachers are free to take objects (except the logbook, pencil, or stamp) from the cache in exchange for leaving something of similar or higher value. Typical cache "treasures", also known in the geocaching world as swag, are not high in monetary value but may hold personal value to the finder. Aside from the logbook, common cache contents are unusual coins or currency, small toys, ornamental buttons, CDs, or books. Although not required, many geocachers decide to leave behind signature items, such as personal Geocoins, pins, or craft items, to mark their presence at the cache location. Disposable cameras are popular because they allow anyone who finds the cache to take a picture, which can be developed and uploaded to a geocaching website listed below. Also common are objects that are moved from cache to cache, called "hitchhikers", such as Travel Bugs or Geocoins, whose travels may be logged and followed online. Cachers who initially place a Travel Bug or Geocoin often assign specific goals for their trackable items. Examples of goals are to be placed in a certain cache a long distance from home, to travel to a certain country, or to travel faster and farther than other hitchhikers in a race. Less common are site-specific information pages about the historic significance of the site, types of trees, birds in the area, or other such information. Higher-value items are occasionally included in geocaches as a reward for the First to Find (called "FTF"), or in locations which are harder to reach. Dangerous or illegal items, weapons, food, and drugs are not allowed and are specifically against the rules of most geocache listing sites. If a geocache has been vandalized or stolen, it is said to have been "muggled". The term plays off the fact that those not familiar with geocaching are called muggles, a word borrowed from the "Harry Potter" series of books, which was rising in popularity at the same time geocaching started. 
Traditional geocaching gave birth to GeoCaching, an active urban game of the Encounter project. The game is quite similar to geocaching but has time limitations and hints. Geocaches vary in size, difficulty, and location. Simple caches placed near a roadside are often called "drive-bys", "park 'n grabs" (PNGs), or "cache and dash". Geocaches may also be complex, involving lengthy searches, significant travel, or the use of specialist equipment for SCUBA diving, kayaking, or abseiling. Different geocaching websites list different variations per their own policies. Container sizes range from "nanos", particularly magnetic nanos, which can be smaller than the tip of a finger and have only enough room to store the log sheet, to 20-liter (5-gallon) buckets or even larger containers, such as entire trucks. The most common cache containers in rural areas are lunch-box-sized plastic storage containers or surplus military ammunition cans. Ammo cans are considered the gold standard of containers because they are sturdy, waterproof, animal- and fire-resistant, and relatively cheap, and have plenty of room for trade items. Smaller containers are more common in urban areas because they can be more easily hidden. Over time, many variations of geocaches have developed. Different platforms often have their own rules on which types are allowed or how they are classified. The following cache types are supported by both geocaching.com and opencaching.us unless stated otherwise. A Traditional cache is the most common type and consists of a container with a logbook; the exact coordinates of the cache are given. A Multi-cache consists of one or more stages, each containing the coordinates for the next; the final stage contains a physical container with the logbook. An Offset cache is a multi-cache in which the initial coordinates are for a location containing information that encodes the final cache coordinates. 
An example would be to direct the finder to a plaque where the digits of a date on the plaque correspond to coordinates of the final cache. Mystery/puzzle caches require one to discover information or solve a puzzle to find the cache. Some mystery caches provide a puzzle that must be solved to determine the physical cache location. Caches which do not fit into other categories are classified as mystery caches. Challenge caches require a geocacher to complete a reasonably attainable geocaching-related task before being able to log the find. Examples include finding a number of caches that meet a category, completing a number of cache finds within a period of time, or finding a cache for every calendar day. On geocaching.com it is considered a subtype of the Mystery cache, while it is a type of its own on opencaching.us. A Night cache is multi-stage and intended to be found at night by following a series of reflectors with a flashlight to the final cache location. It is considered a variant of the Mystery cache on geocaching.com. A Chirp cache is a Garmin-developed variant of the multi-cache that uses a wireless beacon. The Chirp stores hints and multi-cache coordinates, counts visitors, and confirms that the cache is nearby. These caches were fully supported at OpenCaching.com, but they caused considerable discussion and some controversy at Groundspeak, where they were given a new "attribute". A Wherigo cache is a multi-stage cache hunt that uses a Wherigo "cartridge" to guide the player to find a physical cache sometime during cartridge play, usually at the end. Not all Wherigo cartridges incorporate geocaches into game play. Wherigo caches are unique to the geocaching.com website. Wherigo is a GPS location-aware software platform initially released in January 2008. Authors can develop self-enclosed story files (called "cartridges") that are read by the Wherigo player software, installed on either a GPS unit or a smartphone. 
The player and story take advantage of the location information provided by the GPS to trigger in-game events, such as using a virtual object or interacting with characters. Completing an adventure can require reaching different locations and solving puzzles. Cartridges are coded in Lua. Lua may be used directly, but a builder application is usually used. The Wherigo site offers a builder application and a database of adventures free for download, though the builder has remained in its alpha version since its last release in May 2008. The official player is only available for Pocket PC; a built-in player is available on Garmin Colorado and Oregon GPS models. The Wherigo Foundation was organized in December 2012. The group is composed of the Wherigo application developers who, up until that time, had been acting and developing separately. Their goal is to provide a consistent Wherigo experience across platforms, connect Wherigo applications via an API, and add modern features to the Wherigo platform. While Groundspeak is aware of this project, the company has yet to take a position. A Letterbox cache or Letterbox hybrid cache is a combination of a geocache and a letterbox in the same container. A letterbox has a rubber stamp and a logbook instead of tradable items. Letterboxers carry their own stamp with them, to stamp the letterbox's logbook, and inversely stamp their personal logbook with the letterbox's stamp. The hybrid cache contains the materials for this and may or may not include trade items. Moving/travelling caches are found at a listed set of coordinates. The finder hides the cache in a different location and updates the listing, essentially becoming the hider, and the next finder continues the cycle. This cache type is supported at opencaching.us but has been discontinued at geocaching.com. Guest Book caches use guest books often found in museums, tourist information centers, and the like. They are listed exclusively at opencaching.us. 
The following cache types don't contain a physical logbook. A BIT cache is a laminated card with a QR code, similar to Munzee. The BIT cache also contains a URL and a password for logging purposes. They are listed exclusively on opencaching.us. Virtual caches are coordinates for a location at which some described object can be found. Validation for finding a virtual cache generally requires one to email the cache hider with information such as a date or a name on a plaque, or to post a picture of oneself at the site with GPS receiver in hand. New virtual caches are no longer allowed by Groundspeak, but they remain supported by other sites. The Groundspeak website no longer lists new caches without a physical container, including virtual and webcam caches; however, older caches of these types have been grandfathered in (except for locationless/reverse caches, which are completely archived). On August 24, 2017, Groundspeak announced "Virtual Rewards", allowing 4,000 new virtual caches to be placed during the following year. Earthcaches are one of the two exceptions to the no-container rule; they are caches in which players must answer geological questions to complete the cache. The other exception is for event caches; for an event to qualify, it must be specifically or mainly for geocachers, and must have a minimum duration dependent upon its category (CITO, regular, Mega, or Giga). Attendees of event caches can log that they "attended", which will increment their number of found caches. Groundspeak created a waymarking website to handle all other non-physical caches. Earthcaches are organized by the Geological Society of America; the cacher usually has to perform a task which teaches an educational lesson about the earth science of the cache area. They are listed at geocaching.com. Locationless/reverse caches are similar to a scavenger hunt. 
A description is given for something to find, such as a one-room schoolhouse, and the finder locates an example of this object. The finder records the location using their GPS receiver and often takes a picture at the location showing the named object and his or her GPS receiver. Typically, others are not allowed to log that same location as a find. Webcam caches are virtual caches whose coordinates fall within view of a public webcam. The finder is often required to capture their image from the webcam for verification of the find. New webcam caches are no longer allowed by Groundspeak, but they remain supported by opencaching.us. Finally, a USB cache or Dead Drop cache uses a USB drive embedded in a wall or other structure. The cache is retrieved by connecting a device that has a USB port and is able to read standard text files. This type is available at opencaching.us. There are a few kinds of events. An Event Cache is a gathering organized and attended by geocachers. It is not a true cache, but is treated as such by geocaching platforms: it can be "found" upon attending the event. Cache-In Trash-Out (CITO) Events are coordinated activities of trash pickup and other maintenance tasks (such as constructing footpaths, planting trees, and removing invasive species) to improve the environment. CITO is an ongoing environmental initiative created by Groundspeak Inc. related to geocaching, which encourages geocachers to clean up parks and other areas. This is done chiefly through specific events, traditionally held around the time of Earth Day each year, in which groups pick up litter and maintain the landscape while finding geocaches. Finally, a GPS Adventures Maze Exhibit is an exhibit at various museums and science centers in which participants in the maze learn about geocaching. These "events" have their own cache type on geocaching.com and include many non-geocachers. 
Geodashing is an outdoor sport in which teams of players use GPS receivers to find and visit randomly selected "dashpoints" (also called "waypoints") around the world and report what they find. The objective is to visit as many dashpoints as possible. Unlike geocaching, nothing is left at the dashpoints; the sole objective is to visit them within the time limit. The first game, organized by gpsgames.org, ran for two months (June and July 2001); each subsequent game has run for one month. Players are often encouraged to take pictures at the dashpoints and upload them to the site. Geocaching from space is a combination of a flight to near space, the geocaching game, and a unique science experiment. The first Stratocaching event was held on 16 November 2013 in Prague and was successful. Ten caches and two "radioseeds" went up into the stratosphere on a gondola called the Dropion module, carried by a high-altitude balloon. The caches and seeds then fell to earth for people to find. GPX files containing information such as a cache description and information about recent visitors to the cache are available from various listing sites. Geocachers may download geocache data (also known as waypoints) from various websites in various formats, most commonly in the GPX file type, which uses XML. Some websites allow geocachers to search (build queries) for multiple caches within a geographic area based on criteria such as ZIP code or coordinates, downloading the results as an email attachment on a schedule. In recent years, Android and iPhone users have been able to download apps such as GeoBeagle that allow them to use their 3G- and GPS-enabled devices to actively search for and download new caches. A variety of geocaching applications are available for geocache data management, file-type translation, and personalization. Geocaching software can assign special icons or search (filter) for caches based on certain criteria (e.g. 
distance from an assigned point, difficulty, date last found). Paperless geocaching means hunting a geocache without a physical printout of the cache description. Traditionally, this means that the seeker has an electronic means of viewing the cache information in the field, such as pre-downloading the information to a PDA or other electronic device. Various applications are able to directly upload and read GPX files without further conversion. Newer GPS devices released by Garmin, DeLorme and Magellan have the ability to read GPX files directly, thus eliminating the need for a PDA. Other methods include viewing real-time information on a portable computer with internet access or with an Internet-enabled smart phone. The latest advancement of this practice involves installing dedicated applications on a smart phone with a built-in GPS receiver. Seekers can search for and download caches in their immediate vicinity directly to the application and use the on-board GPS receiver to find the cache. A more controversial version of paperless caching involves mass-downloading only the coordinates and cache names (or waypoint IDs) for hundreds of caches into older receivers. This is a common practice of some cachers and has been used successfully for years. In many cases, however, the cache description and hint are never read by the seeker before hunting the cache. This means they are unaware of potential restrictions such as limited hunt times, park open/close times, off-limit areas, and suggested parking locations. The website geocaching.com now sells mobile applications which allow users to view caches through a variety of different devices. Currently, the Android, iPhone, and Windows Phone mobile platforms have applications in their respective stores. The apps also allow for a trial version with limited functionality. 
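Because GPX is plain XML, waypoint data of the kind described above can be read with any XML parser. A minimal sketch in Python, using a hypothetical namespace-free fragment (real GPX files declare an XML namespace and carry far more metadata, so a production parser would handle those as well):

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical GPX fragment of the kind a listing site might
# export; the waypoint name and coordinates are illustrative only.
GPX_SAMPLE = """<gpx version="1.0">
  <wpt lat="45.45861" lon="-122.41833">
    <name>GCXXXX</name>
    <desc>Example cache description</desc>
  </wpt>
</gpx>"""

def parse_waypoints(gpx_text):
    """Return (name, lat, lon, description) tuples for each <wpt> element."""
    root = ET.fromstring(gpx_text)
    waypoints = []
    for wpt in root.iter("wpt"):
        waypoints.append((
            wpt.findtext("name"),
            float(wpt.get("lat")),   # coordinates are attributes of <wpt>
            float(wpt.get("lon")),
            wpt.findtext("desc"),
        ))
    return waypoints

print(parse_waypoints(GPX_SAMPLE))
```

Parsed tuples like these are what a paperless-caching application loads onto the device, pairing each cache name with the coordinates the receiver will navigate to.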
The site promotes mobile applications, and lists over two dozen applications (both mobile and browser/desktop based) that use its proprietary but royalty-free public API. Developers at c:geo have criticised Groundspeak for being incompatible with open-source development. Additionally, "c:geo - opensource" is a popular free, open-source, full-featured application for Android phones. This app includes features similar to the official Geocaching mobile application, such as viewing caches on a live map (Google Maps or OpenStreetMap), navigating using a compass, map, or other applications, and logging finds online and offline. Geocaching enthusiasts have also made their own hand-held GPS devices using a Lego Mindstorms NXT GPS sensor. Geocache listing websites have their own guidelines for acceptable geocache publications. Government agencies and others responsible for public use of land often publish guidelines for geocaching, and a "Geocacher's Creed" posted on the Internet asks participants to "avoid causing disruptions or public alarm". Generally accepted rules are to not endanger others, to minimize the impact on nature, to respect private property, and to avoid public alarm. The reception from authorities and the general public outside geocache participants has been mixed to hostile. Cachers have been approached by police and questioned when they were seen as acting suspiciously. At other times, investigation of a cache location after suspicious activity was reported has resulted in police and bomb-squad discovery of the geocache, as in the evacuation of a busy street in Wetherby, Yorkshire, England, in 2011, and of a street in Alvaston, Derby, in 2020. Schools have also been evacuated when a cache has been seen by teachers or police, as in the case of Fairview High School in Boulder, Colorado, in 2009. A number of caches have been destroyed by bomb squads. 
Diverse locations, from rural cemeteries to Disneyland, have been locked down as a result of such scares. The placement of geocaches has occasional critics among some government personnel and the public at large who consider it littering. Some geocachers act to mitigate this perception by picking up litter while they search for geocaches, a practice referred to in the community as "Cache In Trash Out". Events and caches are often organized revolving around this practice, with many areas seeing significant cleanup that would otherwise not take place, or would instead require federal, state or local funds to accomplish. Geocachers are also encouraged to clean up after themselves by retrieving old containers once a cache has been removed from play. Geocaching is legal in every country except North Korea (where GPS and all other mobile devices are illegal to possess) and is usually positively received when explained to law enforcement officials. However, certain types of placements can be problematic. Although generally disallowed, hiders could place caches on private property without adequate permission (intentionally or otherwise), which encourages cache finders to trespass. Caches might also be hidden in places where the act of searching can make a finder look suspicious (e.g. near schools, children's playgrounds, banks, courthouses, or in residential neighborhoods), or where the container placement could be mistaken for a drug stash or a bomb (especially in urban settings, under bridges, near banks, courthouses, or embassies). As a result, geocachers are strongly advised to label their geocaches where possible, so that they are not mistaken for a harmful object if discovered by non-geocachers. As well as concerns about littering and bomb threats, some geocachers hide their caches in inappropriate locations, such as electrical boxes, that may encourage risky behaviour, especially amongst children. 
Hides in these areas are discouraged, and cache listing websites enforce guidelines that disallow certain types of placements. However, as cache reviewers typically cannot see exactly where and how every particular cache is hidden, problematic hides can slip through. Ultimately it is up to cache finders to use discretion when attempting to search for a cache, and to report any problems. Regional rules for placement of caches have become quite complex. For example, in Virginia, the Virginia Department of Transportation and the Wildlife Management Agency now forbid the placement of geocaches on all land controlled by those agencies. Some cities, towns, and recreation areas allow geocaches with few or no restrictions, but others require compliance with lengthy permitting procedures. The South Carolina House of Representatives passed Bill 3777 in 2005, stating, "It is unlawful for a person to engage in the activity of geocaching or letterboxing in a cemetery or in an historic or archeological site or property publicly identified by an historical marker without the express written consent of the owner or entity which oversees that cemetery site or property." The bill was referred to committee on first reading in the Senate and has remained there ever since. The Illinois Department of Natural Resources requires geocachers who wish to place a geocache at any Illinois state park to submit the location on a USGS 7.5-minute topographic map, the name and contact information of the person(s) wishing to place the geocache, a list of the original items to be included in the geocache, and a picture of the container that is to be placed. In April 2020, during the COVID-19 pandemic, the township of Highlands East, Ontario, Canada, temporarily banned geocaching over concerns that geocache containers cannot be properly disinfected between finds. Several deaths have occurred while geocaching. 
The death of a 21-year-old experienced cacher in December 2011, "while attempting a Groundspeak cache that does not look all that dangerous," led to discussion in Groundspeak forums of whether changes should be made, and whether cache owners or Groundspeak could be held liable. Groundspeak has since updated its geocaching.com Terms of Use Agreement to specify that geocachers find geocaches at their own risk. In 2008, two lost hikers on Mount Hood, Oregon, United States, after spending the night in a snow cave, stumbled across a geocache and were able to phone this information out to rescuers, resulting in their timely rescue. Three adult geocachers, a 24-year-old woman and her parents, were trapped in a cave and rescued by firefighters in Rochester, New York, United States, while searching for an ammo can in 2012. Rochester Fire Department spokesman Lt. Ted Kuppinger said, "It's difficult because you're invested in it; you want to find something like that, so people will probably try to push themselves more than they should, but you need to be prudent about what you're capable of doing." In 2015, the coastguard was called to a group of geocachers who were spotted walking into the Severn Estuary off the coast of Clevedon, England, in search of clues to a multi-cache. Although they felt they were safe and were able to return to land, they were considered to be in danger and were airlifted back to the shore. In October 2016, four people out geocaching discovered a crashed car at the bottom of a ravine in Benton, Washington, United States. They spotted the driver still trapped inside and alerted the emergency services, who effected a rescue. On 9 June 2018, four people in Prague, Czech Republic, were surprised by a sudden, strong storm while searching for a cache in a 4 km-long tunnel. They were carried by the flood wave for almost the whole length of the tunnel to the Vltava River, where the tunnel ends. 
One woman was found dead in the river a few hours later. Six days later a second body, that of a man in the group, was found in the river. Two other members of the group, exhausted and close to drowning, were rescued from the river, suffering mostly from numerous bruises and blunt trauma. Numerous websites list geocaches around the world. Geocaching websites vary in many ways, including control of data. The first website to list geocaches was announced by Mike Teague on May 8, 2000. On September 2, 2000, Jeremy Irish emailed the gpsstash mailing list that he had registered the domain name geocaching.com and had set up his own website. He copied the caches from Mike Teague's database into his own. On September 6, Mike Teague announced that Jeremy Irish was taking over cache listings. Teague had logged only 5 caches. The largest site is Geocaching.com, owned by Groundspeak Inc., which began operating in late 2000. With a worldwide membership and a freemium business model, the website claims millions of caches and members in over 200 countries. Hides and events are reviewed by volunteer regional cache reviewers before publication. Free membership allows users access to coordinates, descriptions, and logs for some caches; for a fee, users get additional search tools, the ability to download large amounts of cache information onto their GPS at once, instant email notifications about new caches, and access to premium-member-only caches. Geocaching Headquarters are located in Seattle, Washington, United States. The Opencaching Network provides independent, non-commercial listing sites based in the cacher's country or region. The Opencaching Network lists the most types of caches, including traditional, virtual, moving, multi, quiz, webcam, BIT, guest book, USB, event, and MP3. The Opencaching Network is less restrictive than many sites and does not charge for the use of the sites, the service being community driven. 
Depending on the node, listings may or may not be reviewed by community volunteers before being published; cross-listing is permitted but discouraged. Some listings also appear on other sites, but many are unique to the Opencaching Network. Features include the ability to organize one's favourite caches, build custom searches, be instantly notified of new caches in one's area, seek and create caches of all types, export GPX queries, generate statpics, etc. Each Opencaching node provides the same free API (called "OKAPI") for developers who want to create third-party applications able to use the Opencaching Network's content. Countries with associated opencaching websites include the United States at www.opencaching.us; Germany at www.opencaching.de; Sweden at www.opencaching.se; Poland at www.opencaching.pl; the Czech Republic at www.opencaching.cz; the Netherlands at www.opencaching.nl; Romania at www.opencaching.ro; and the United Kingdom at www.opencache.uk. The main difference between opencaching and traditional listing sites is that all services are open to users at no cost. Generally, most geocaching services or websites offer some basic information for free, but users may have to pay for premium membership that allows access to more information or advanced searching capabilities. This is not the case with opencaching: every geocache is listed and accessible to everyone for free. Additionally, Opencaching sites allow users to rate and report on existing geocaches. This allows users to see what other cachers think of a cache, and it encourages participants to place higher-quality caches. The rating system also greatly reduces the problem of abandoned or unsatisfactory caches remaining listed after repeated negative comments or posts in the cache logs. OpenCaching.com (short: OX) was a site created and run by Garmin from 2010 to 2015, which had the stated aim of being as free and open as possible, with no paid content. 
Caches were approved by a community process, and coordinates were available without an account. The service closed on 14 August 2015. In many countries there are regional geocaching sites, but these mostly compile lists of caches in the area from the three main sites. Many of them also accept unique listings of caches for their site, but these listings tend to be less popular than those on the international sites, although occasionally a regional site may have more caches than the international sites. There are some exceptions: for instance, in the former Soviet Union the site Geocaching.su remains popular because it accepts listings in the Cyrillic script. Additional international sites include Geocaching.de, a German website, and Geocaching Australia, which accepts listings of cache types deprecated by geocaching.com, such as TrigPoint and Moveable caches, as well as traditional geocache types. GPSgames.org is an online community dedicated to all kinds of games involving Global Positioning System receivers. GPSgames.org allows traditional geocaches as well as virtual, locationless, and traveler geocaches. Geodashing, Shutterspot, GeoVexilla, MinuteWar, GeoPoker, and GeoGolf are among the GPS games available. GPSgames.org has been 100% free since 2001, funded through donations. Navicache.com started as a regional listing service in 2001. While many of the website's listings have been posted to other sites, it also offers unique listings. The website lists nearly any type of geocache and does not charge to access any of the caches listed in its database. All submissions are reviewed and approved. Navicache is in transition to new owners, who said they "plan to develop a site that geocachers want, with rules that geocachers think are suitable. Geocaching.com and OX are both backed by large enterprises, and while that means they have more funding and people, we're a much smaller team – so our advantage is the ability to be dynamic and listen to the users." 
Terracaching.com seeks to provide high-quality caches, whether through the difficulty of the hide or the quality of the location. Membership is managed through a sponsorship system, and each cache is under continual peer review from other members. Terracaching.com embraces virtual caches alongside traditional or multi-stage caches and includes many locationless caches among the thousands of caches in its database. It is increasingly attracting members who like the point system. In Europe, TerraCaching is supported by Terracaching.eu. The site is translated into several European languages and offers an extended FAQ and additional supporting tools for TerraCaching. TerraCaching strongly discourages caches that are listed on other sites (so-called double-listing). Extremcaching is a German private database for alternative geocaches with a focus on T5/climbing caches, night caches and lost-place caches. To participate, all that is needed is an Extremcaching account and a GPS device, or a GPS-enabled smartphone with geocaching or outdoor navigation software, e.g. c:geo. Geocaching Australia is a community website for geocachers in Australia and New Zealand as well as many other countries. Geocaching Australia also has many unique cache types, such as Burke and Wills, Moveable and Podcache geocaches.
https://en.wikipedia.org/wiki?curid=12533
Geographical mile The geographical mile is a unit of length determined by 1 minute of arc along the Earth's equator. For the 1924 International Spheroid this equalled 1855.4 metres. "The American Practical Navigator" 2017 defines the geographical mile as . Greater precision depends more on choice of ellipsoid than on more careful measurement: the length of the equator in the World Geodetic System WGS-84 is which makes the geographical mile 1855.3248 m, while the "IERS Conventions (2010)" takes the equator to be making the geographical mile 1855.3250 m, 0.2 millimetres longer. In any ellipsoid, the length of a degree of longitude at the equator is thus exactly 60 geographical miles. The shape of the Earth is a slightly flattened sphere, which results in the Earth's circumference being 0.168% larger when measured around the equator as compared to through the poles. The geographical mile is slightly larger than the nautical mile (which was historically linked to the circumference measured through both poles); one geographic mile is equivalent to approximately . It was closely related to the nautical mile, which was originally determined as 1 minute of arc along a great circle of the Earth, but is nowadays defined as exactly 1852 metres. The US National Institute of Standards and Technology (NIST) notes that: "The international nautical mile of 1 852 meters (6 076.115 49...feet) was adopted effective July 1, 1954, for use in the United States. The value formerly used in the United States was 6 080.20 feet = 1 nautical (geographical or sea) mile." A separate reference identifies the geographic mile as being identical to the international nautical mile of 1852 metres (and slightly shorter than the British nautical mile of ). The unit is not used much, but is cited in some United States laws (e.g., Section 1301(a) of the Submerged Lands Act, which defines state seaward boundaries in terms of geographic miles). 
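The WGS-84 figure above follows from simple division: the equator subtends 360 × 60 = 21,600 minutes of arc, so one geographical mile is the equatorial circumference divided by 21,600. A quick sketch using the published WGS-84 equatorial radius (6 378 137 m) reproduces the 1855.3248 m value:

```python
import math

# WGS-84 equatorial radius in metres (a defining constant of the ellipsoid)
WGS84_EQUATORIAL_RADIUS_M = 6378137.0

# Equatorial circumference of the ellipsoid
equator_m = 2 * math.pi * WGS84_EQUATORIAL_RADIUS_M

# One geographical mile = one minute of arc along the equator,
# i.e. the circumference divided by 360 degrees * 60 minutes.
geographical_mile_m = equator_m / (360 * 60)

print(f"{geographical_mile_m:.4f} m")  # 1855.3248 m, matching the figure above
```

The same calculation with the slightly longer IERS equator yields 1855.3250 m, which is why the article notes that the choice of ellipsoid matters more than measurement precision.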
While debating what became the Land Ordinance of 1785, Thomas Jefferson's committee wanted to divide the public lands in the west into "hundreds of ten geographical miles square, each mile containing 6086 and 4-10ths of a foot" and "sub-divided into lots of one mile square each, or 850 and 4-10ths of an acre". The Danish and German geographical mile ("geografisk mil" and "geographische Meile" or "geographische Landmeile", respectively) is 4 minutes of arc, and was defined as approximately 7421.5 metres by the astronomer Ole Rømer of Denmark. In Norway and Sweden, this 4-minute geographical mile was mainly used at sea ("sjømil"), up to the beginning of the 20th century.
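Jefferson's figures are internally consistent: a lot one geographical mile (6,086.4 feet) on a side contains 6,086.4² square feet, and at the standard 43,560 square feet to the acre that works out to his "850 and 4-10ths of an acre". A quick check, using only plain arithmetic and the standard acre:

```python
# Jefferson's geographical mile: 6,086.4 feet on a side.
mile_ft = 6086.4

# One acre is 43,560 square feet (a 66 ft by 660 ft rectangle).
acre_sq_ft = 43560.0

# Area of a one-mile-square lot, converted to acres.
lot_acres = mile_ft ** 2 / acre_sq_ft

print(f"{lot_acres:.1f} acres")  # 850.4 acres, Jefferson's "850 and 4-10ths"
```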
https://en.wikipedia.org/wiki?curid=12534
Golden Heroes Golden Heroes is a superhero role-playing game that was originally written and published on an amateur basis in 1982. Games Workshop then published a more complete version in 1984. It was written by Simon Burley and Peter Haines and was illustrated by a group of artists who were working for "2000 AD" at the time. The game was published in a box, and the rulebooks feature fake bar codes and Comics Code approval badges. The character generation system is a combination of random rolling and design. Players roll some random superpowers, which they can then customise and develop in various ways to create a character they want to play. A character can only keep their full set of powers if they can justify them all in a plausible origin story. The system strives to recreate the feel of comics, with the action occurring in "frames" and many classic comics assumptions written into the rules. Characters are "rated" after each game and are more likely to succeed in future games if they behave in ways consistent with comic-book tropes. Simon Burley, one of the original authors of Golden Heroes, has since revisited the genre and issued a new game informed by the same aesthetic, called Squadron UK. "Golden Heroes" was first published in an amateur, photocopied format by the original authors in 1982. The more professional and complete version was published by Games Workshop in 1984. Pete Tamlyn reviewed "Golden Heroes" for "Imagine" magazine, and stated that "For younger players, and if you just want the Superhero game for light relief and one-off scenarios, then MSH is the best, but if you are planning to run an extended Superhero campaign then Golden Heroes wins hands down." Tony Johnston did a retrospective review of "Golden Heroes" for "Arcane" magazine, stating that "A superb system, and one which some referees I know still use today, adapted for other games." 
"Golden Heroes" was ranked 41st in the 1996 reader poll of "Arcane" magazine to determine the 50 most popular roleplaying games of all time. The UK magazine's editor Paul Pettengale commented: "The gameplay reflects a refined approach to the superhero genre, and roleplaying tends to take priority over combat."
https://en.wikipedia.org/wiki?curid=12535
Guangzhou Guangzhou, also known as Canton and formerly romanized as Kwangchow, is the capital and most populous city of the province of Guangdong in southern China. On the Pearl River about north-northwest of Hong Kong and north of Macau, Guangzhou has a history of over 2,200 years and was a major terminus of the maritime Silk Road, and continues to serve as a major port and transportation hub, as well as one of China's three largest cities. Guangzhou is at the heart of the most-populous built-up metropolitan area in mainland China, which extends into the neighboring cities of Foshan, Dongguan, Zhongshan and Shenzhen, forming one of the largest urban agglomerations on Earth, the Pearl River Delta Economic Zone. Administratively, the city holds sub-provincial status and is one of China's nine National Central Cities. At the end of 2018, the population of the city's expansive administrative area was estimated at 14,904,400 by city authorities, up 3.8% from the previous year. Guangzhou is ranked as an Alpha global city. There is a rapidly increasing number of foreign temporary residents and immigrants from Southeast Asia, the Middle East, Eastern Europe and Africa. This has led to it being dubbed the "Capital of the Third World". The domestic migrant population from other provinces of China in Guangzhou was 40% of the city's total population in 2008. Together with Shanghai, Beijing and Shenzhen, Guangzhou has one of the most expensive real estate markets in China. In the late 1990s and early 2000s, nationals of sub-Saharan Africa who had initially settled in the Middle East and other parts of Southeast Asia moved in unprecedented numbers to Guangzhou in response to the 1997/98 Asian financial crisis. Long the only Chinese port accessible to most foreign traders, Guangzhou fell to the British during the First Opium War. 
No longer enjoying a monopoly after the war, it lost trade to other ports such as Hong Kong and Shanghai, but continued to serve as a major entrepôt. In modern commerce, Guangzhou is best known for its annual Canton Fair, the oldest and largest trade fair in China. For three consecutive years (2013–2015), Forbes ranked Guangzhou as the best commercial city in mainland China. "Guǎngzhōu" is the pinyin romanisation of the Chinese name , which was simplified in mainland China to in the 1950s. The name of the city is taken from the ancient Guang Province (Guang Zhou), after it had become the prefecture's seat of government, which is how some other Chinese cities, including Hangzhou, Suzhou, and Fuzhou got their names. The character or —which also appears in the names of the provinces Guangdong and Guangxi, together called the Liangguang—means "broad" or "expansive" and refers to the intention to dispense imperial grace broadly in the region with the founding of the county of Guangxin in the Han Dynasty. Before acquiring its current name, the town was known as Panyu (), a name still borne by one of Guangzhou's districts not far from the main city. The origin of the name is still uncertain, with 11 different explanations offered, including that it may have referred to two local mountains. The city has also sometimes been known as Guangzhou Fu or Guangfu after its status as the capital of a prefecture. From this latter name, Guangzhou was known to medieval Persians such as Al-Masudi and Ibn Khordadbeh as Khanfu (). Under the Southern Han, the city was renamed Xingwang Fu (). The Chinese abbreviation for Guangzhou is "" (although the abbreviation on car license plates, as with the rest of the province, is ), after its nickname "Rice City" (). 
The city has long borne the nickname () or () from the five stones at the old Temple of the Five Immortals said to have been the sheep or goats ridden by the Taoist culture heroes credited with introducing rice cultivation to the area around the time of the city's foundation. The former name "City of the Immortals" (/) came from the same story. The more recent () is usually taken as a simple reference to the area's fine greenery. The English name "Canton" derived from Portuguese ' or ', a muddling of dialectical pronunciations of "Guangdong" (e.g., Hakka "Kóng-tûng"). Although it originally and chiefly applied to the walled city, it was occasionally conflated with Guangdong by some authors. It was adopted as the Postal Map Romanization of Guangzhou and remained in common use until the gradual adoption of pinyin. As an adjective, it is still used in describing the people, language, cuisine and culture of Guangzhou and the surrounding Liangguang region. The 19th-century name "" derived from Nanjing dialect of Mandarin and the town's status as a prefectural capital. A settlement now known as Nanwucheng was present in the area by . Some traditional Chinese histories placed Nanwucheng's founding during the reign of Ji Yan, king of Zhou from 314–256 . It was said to have consisted of little more than a stockade of bamboo and mud. Panyu was established on the east bank of the Pearl River in 214  to serve as a base for the Qin Empire's first failed invasion of the Baiyue lands in southern China. Legendary accounts claimed the soldiers at Panyu were so vigilant that they did not remove their armor for three years. Upon the fall of the Qin, General Zhao Tuo established his own kingdom of Nanyue and made Panyu its capital in 204 . It remained independent through the Chu-Han Contention, although Zhao negotiated recognition of his independence in exchange for his nominal submission to the Han in 196 . 
Archaeological evidence shows that Panyu was an expansive commercial centre: in addition to items from central China, archaeologists have found remains originating from Southeast Asia, India, and even Africa. Zhao Tuo was succeeded by Zhao Mo and then Zhao Yingqi. Upon Zhao Yingqi's death in , his younger son Zhao Xing was named as his successor in violation of Chinese primogeniture. By , his Chinese mother, the Empress Dowager Jiu () had prevailed upon him to submit Nanyue as a formal part of the Han Empire. The native prime minister Lü Jia () launched a coup, killing Han ambassadors along with the king, his mother, and their supporters. A successful ambush then annihilated a Han force which had been sent to arrest him. The enraged Emperor Wu launched a massive river- and sea-borne invasion: six armies under Lu Bode and Yang Pu took Panyu and annexed Nanyue by the end of 111 . Incorporated into the Han Dynasty, Panyu became a provincial capital. In  226, it became the seat of Guang Prefecture, which gave it its modern name. The "Old Book of Tang" described Guangzhou as an important port in southern China. Direct routes connected the Middle East and China, as shown in records of a Chinese prisoner returning home from Iraq twelve years after his capture at Talas. Relations were not always peaceful: Arab and Persian pirates sacked the city on 30 October 758 and were massacred by the Chinese rebel Huang Chao in 878, along with the city's Jews, Christians, and Parsis. The port was reportedly closed for fifty years after the chaos. Amid the Five Dynasties and Ten Kingdoms that followed the collapse of the Tang dynasty, the Later Liang governor Liu Yan used his base at Panyu to establish a "Great Yue" or "Southern Han" empire, which lasted from 917 to 971. The region enjoyed considerable cultural and economic success in this period. From the 10th to 12th century, there are records that the large foreign communities were not exclusively male, but included "Persian women". 
According to Odoric of Pordenone, Guangzhou was as large as three Venices and rivalled all of Italy in the amount of crafts it produced. He also noted the large amount of ginger available as well as large geese and snakes. Guangzhou was visited by the Moroccan traveler Ibn Battuta during his 14th-century journey around the world; he detailed the process by which the Chinese constructed their large ships in the port's shipyards. Shortly after the Hongwu Emperor's declaration of the Ming dynasty, he reversed his earlier support of foreign trade and imposed the first of a series of sea bans ("haijin"). These banned private foreign trade upon penalty of death for the merchant and exile for his family and neighbors. The Yuan-era maritime intendancies of Guangzhou, Quanzhou, and Ningbo were closed in 1384 and legal trade became limited to the tribute delegations sent to or by official representatives of foreign governments. Following the Portuguese conquest of Malacca, Rafael Perestrello travelled to Guangzhou as a passenger on a native junk in 1516. His report induced Fernão Pires de Andrade to sail to the city with eight ships the next year, but De Andrade's exploration was understood as spying and his brother Simão and others began attempting to monopolize trade, enslaving Chinese women and children, engaging in piracy, and fortifying the island of Tamão. Rumors even circulated that Portuguese were eating the children. The Guangzhou administration was charged with driving them off: they bested the Portuguese at the Battle of Tunmen and in Xicao Bay; held a diplomatic mission hostage in a failed attempt to pressure the restoration of the sultan of Malacca, who had been accounted a Ming vassal; and, after placing them in cangues and keeping them for most of a year, ultimately executed 23 by lingchi. 
With the help of local pirates, the "Folangji" then carried out smuggling at Macao, Lampacau, and Island (now Shangchuan), until Leonel de Sousa legalized their trade with bribes to Admiral Wang Bo () and the 1554 Luso-Chinese Accord. The Portuguese undertook not to raise fortifications and to pay customs dues; three years later, after providing the Chinese with assistance suppressing their former pirate allies, the Portuguese were permitted to warehouse their goods at Macau instead of Guangzhou itself. In October 1646, the Longwu Emperor's brother Zhu Yuyue fled by sea to Guangzhou, the last seat of the Ming empire. On 11 December, he declared himself the Shaowu Emperor, borrowing his imperial regalia from local theatre troupes. He led a successful offensive against his cousin Zhu Youlang but was deposed and executed on 20 January 1647 when the Ming turncoat Li Chengdong () sacked the city on behalf of the Qing. The Qing became somewhat more open to foreign trade after gaining control of Taiwan in 1683. The Portuguese from Macau and Spaniards from Manila returned, as did private Muslim, Armenian, and English traders. From 1699 to 1714, the French and British East India Companies sent a ship or two each year; the Austrian Ostend General India Co. arrived in 1717, the Dutch East India Co. in 1729, the Danish Asiatic Co. in 1731, and the Swedish East India Co. the next year. These were joined by the occasional Prussian or Trieste Company vessel. The first independent American ship arrived in 1784 and the first colonial Australian one in 1788. By that time, Guangzhou was one of the world's great ports, organised under the Canton System. The main exports were tea and porcelain. As a meeting place of merchants from all over the world, Guangzhou became a major contributor to the rise of the modern global economy. In the 19th century, most of the city's buildings were still only one or two storeys. 
The major structures were the Plain Minaret of the Huaisheng Mosque, the Flower Pagoda of the Temple of the Six Banyan Trees, and the guard tower known as the 5-Storey Pagoda. The northern hills, since urbanized, were bare and covered with traditional graves. The brick city walls were about in circumference, high, and wide. Its eight main gates and two water gates all held guards during the day and were closed at night. The wall rose to incorporate a hill on its northern side and was surrounded on the other three by a moat which, along with the canals, functioned as the city's sewer, emptied daily by the river's tides. A partition wall with four gates divided the northern "old town" from the southern "new town" closer to the river; the suburb of Xiguan ("West Gate") stretched beyond and the boats of fishers, traders, and Tanka ("boat people") almost entirely concealed the riverbank for about . It was common for homes to have a storefront facing the street and to treat their courtyards as a kind of warehouse. The city was part of a network of signal towers so effective that messages could be relayed to Beijing—about away—in less than 24 hours. The Canton System was maintained until the outbreak of the First Opium War in 1839. Following a series of battles in the Pearl River Delta, the British captured Guangzhou itself on 18 March 1841. The Second Battle of Canton was fought two months later. Following the Qing's 1842 treaty with Great Britain, Guangzhou lost its privileged trade status as more and more treaty ports were opened to more and more countries, usually including extraterritorial enclaves. Amid the decline of Qing prestige and the chaos of the Red Turban Rebellion (1854–1856), the Punti and Hakka waged a series of clan wars from 1855 to 1867 in which one million people died. The concession for the Guangzhou–Hankou railway was awarded to the American China Development Co. in 1898. 
It completed its branch line west to Foshan and Sanshui before being engulfed in a diplomatic crisis after a Belgian consortium bought a controlling interest and the Qing cancelled its concession. J.P. Morgan was awarded millions in damages; the line to Wuchang was not completed until 1936, and a unified Beijing–Guangzhou Railway had to wait for the completion of Wuhan's Yangtze River Bridge in 1957. During the late Qing Dynasty, Guangzhou was the site of failed revolts such as the Uprisings of 1895 and 1911 to overthrow the Qing; the 72 rebels whose bodies were found after the latter uprising are remembered and honoured as the city's 72 Martyrs in the Huanghuagang ("Yellow Flower Mound") Mausoleum. All these failed revolutionary attempts would eventually lead to the Xinhai Revolution, which successfully overthrew the Qing Dynasty to establish a new Han Chinese republic. After the assassination of Song Jiaoren and Yuan Shikai's attempts to remove the Nationalist Party of China from power, the leader of Guangdong Hu Hanmin joined the 1913 Second Revolution against him but was forced to flee to Japan with Sun Yat-sen after its failure. The city came under the national spotlight again in 1917, when Prime Minister Duan Qirui's abrogation of the constitution triggered the Constitutional Protection Movement. Sun Yat-sen came to head the Guangzhou Military Government, supported by the members of the dissolved parliament and the Southwestern warlords. The Guangzhou government fell apart as the warlords withdrew their support. Sun fled to Shanghai in November 1918 until the Guangdong warlord Chen Jiongming restored him in October 1920 during the Yuegui Wars. On 16 June 1922, Sun was ousted in a coup and fled on the warship Yongfeng after Chen sided with the Zhili Clique's Beijing government. In the following months Sun mounted a counterattack into Guangdong by rallying supporters from Yunnan and Guangxi, and in January established a government in the city for the third time. 
From 1923 to 1926 Sun and the Kuomintang used the city as a base to prosecute a renewed revolution in China by conquering the warlords in the north. Although Sun was previously dependent on opportunistic warlords who hosted him in the city, with the leadership of Chiang Kai-shek, the KMT developed its own military power to serve its ambition. The Canton years saw the evolution of the KMT into a revolutionary movement with a strong military focus and ideological commitment, setting the tone of the KMT rule of China beyond 1927. In 1924 the KMT made the momentous decision to ally with the Communist Party and the USSR. With Soviet help, KMT reorganized itself along the Leninist line and adopted a pro-labor and pro-peasant stance. The Kuomintang-CCP cooperation was confirmed in the First Congress of the KMT and the communists were instructed to join the KMT. The allied government set up the Peasant Movement Training Institute in the city, of which Mao Zedong was a director for one term. Sun and his military commander Chiang used Soviet funds and weapons to build an armed force staffed by communist commissars, training its cadres in the Whampoa Military Academy. In August, the fledgling army suppressed the Canton Merchants' Corps Uprising. The next year the anti-imperialist May Thirtieth Movement swept the country, and the KMT government called for strikes in Canton and Hong Kong. The tensions of the massive strikes and protests led to the Shakee Massacre. After the death of Sun Yat-sen in 1925 the mood was changing in the party toward the communists. In August the left-wing KMT leader Liao Zhongkai was assassinated and the right-wing leader Hu Hanmin, the suspected mastermind, was exiled to the Soviet Union, leaving the pro-communist Wang Jingwei in charge. Opposing communist encroachment, the right-wing Western Hills Group vowed to expel the communists from the KMT. 
The "Canton Coup" on 20 March 1926 saw Chiang solidify his control over the Nationalists and their army against Wang Jingwei, the party's left wing, its Communist allies, and its Soviet advisors. By May, he had ended civilian control of the military and begun his Northern Expedition against the warlords of the north. Its success led to the split of the KMT between Wuhan and Nanking and the purge of the communists in the Shanghai Massacre. Immediately afterwards Canton joined the purge under the auspice of Li Jishen, resulting in the arrest of communists and the suspension of left wing KMT apparatuses and labor groups. Later in 1927 when Zhang Fakui, a general supportive of the Wuhan faction seized Canton and installed Wang Jingwei's faction in the city, the communists saw an opening and launched the Guangzhou Uprising. Prominent communist military leaders Ye Ting and Ye Jianying led the failed defense of the city. Soon, control of the city reverted to Li Jishen. Li was deposed in the War between Chiang and Guangxi Clique. By 1929, Chen Jitang had established himself as the powerholder of Guangdong. In 1931 he threw his weight behind the anti-Chiang schism by hosting a separate Nationalist government in Guangzhou. Opposing Chiang's alleged dictatorship, the separatists included KMT leaders like Wang Jingwei, Sun Fo and others from diverse factions. The peace negotiations amidst the armed stand-off led to the 4th National Congress of Kuomintang being held separately by three factions in Nanjing, Shanghai and Canton. Resigning all his posts, Chiang pulled off a political compromise that reunited all factions. While the intraparty division was resolved, Chen kept his power until he was defeated by Chiang in 1936. During World War II, the "Canton Operation" subjected the city to Japanese occupation by the end of December 1938. 
Amid the closing months of the Chinese Civil War, Guangzhou briefly served as the capital of the Republic of China after the taking of Nanjing by the PLA in April 1949. The People's Liberation Army entered the city on 14 October 1949. Amid a massive exodus to Hong Kong and Macau, the Nationalists blew up the Haizhu Bridge across the Pearl River in retreat. The Cultural Revolution had a large effect on the city, with many of its temples, churches and other monuments destroyed during this chaotic period. The People's Republic of China initiated building projects including new housing on the banks of the Pearl River to adjust the city's boat people to life on land. Since the 1980s, the city's close proximity to Hong Kong and Shenzhen and its ties to overseas Chinese have made it one of the first beneficiaries of China's opening up under Deng Xiaoping. Beneficial tax reforms in the 1990s have also helped the city's industrialisation and development. The municipality was expanded in the year 2000, with Huadu and Panyu joining the city as urban districts and Conghua and Zengcheng as more rural counties. The former districts of Dongshan and Fangcun were abolished in 2005, merged into Yuexiu and Liwan respectively. The city acquired Nansha and Luogang. The former was carved out of Panyu, the latter from parts of Baiyun, Tianhe, Zengcheng, and an exclave within Huangpu. The National People's Congress approved a development plan for the Pearl River Delta in January 2009; on March 19 the same year, the Guangzhou and Foshan municipal governments agreed to establish a framework to merge the two cities. In 2014, Luogang merged into Huangpu and both Conghua and Zengcheng counties were upgraded to districts. The old town of Guangzhou was near Baiyun Mountain on the east bank of the Pearl River (Zhujiang) about from its junction with the South China Sea and about below its head of navigation. 
It commanded the rich alluvial plain of the Pearl River Delta, with its connection to the sea protected at the Humen Strait. The present city spans on both sides of the river from to longitude and to latitude in south-central Guangdong. The Pearl is the 4th-largest river of China. Baiyun Mountain is now locally referred to as the city's "lung" (). The elevation of the prefecture generally increases from southwest to northeast, with mountains forming the backbone of the city and the ocean comprising the front. Tiantang Peak (, "Heavenly Peak") is the highest point of elevation at above sea level. There are 47 different types of minerals and 820 ore fields in Guangzhou, including 18 large and medium-sized oil deposits. The major minerals are granite, cement limestone, ceramic clay, potassium, albite, salt mine, mirabilite, nepheline, syenite, fluorite, marble, mineral water, and geothermal mineral water. Since Guangzhou is located in the water-rich area of southern China, it has extensive waterways, with rivers and water systems accounting for 10% of the total land area. The rivers and streams improve the landscape and keep the ecological environment of the city stable. Despite being located just south of the Tropic of Cancer, Guangzhou has a humid subtropical climate (Köppen "Cfa") influenced by the East Asian monsoon. Summers are wet with high temperatures, high humidity, and a high heat index. Winters are mild and comparatively dry. Guangzhou has a lengthy monsoon season, spanning from April through September. Monthly averages range from in January to in July, while the annual mean is . Autumn, from October to December, is mild, cool and windy, and is the best time to visit. The relative humidity is approximately 68 percent, whereas annual rainfall in the metropolitan area is over . 
With monthly percent possible sunshine ranging from 17 percent in March and April to 52 percent in November, the city receives 1,628 hours of bright sunshine annually, considerably less than nearby Shenzhen and Hong Kong. Extreme temperatures have ranged from to . The last recorded snowfall in the city was on 24 January 2016, 87 years after the second-last recorded snowfall. Guangzhou is a sub-provincial city. It has direct jurisdiction over eleven districts: Guangzhou is the main manufacturing hub of the Pearl River Delta, one of mainland China's leading commercial and manufacturing regions. In 2017, the GDP reached ¥2,150 billion (US$318 billion), and GDP per capita was ¥150,678 (US$22,317). Guangzhou is considered one of the most prosperous cities in China. Owing to rapid industrialisation, it was once also considered one of the most polluted cities; as its development has become greener, it is now one of the most livable cities in China. The Canton Fair, formally the "China Import and Export Fair", is held every year in April and October by the Ministry of Trade. Inaugurated in the spring of 1957, the fair is a major event for the city. It is the trade fair with the longest history, the highest level, and the largest scale in China. From the 104th session onwards, the fair moved to the new Guangzhou International Convention and Exhibition Center () in Pazhou, from the older complex in Liuhua. The GICEC is served by two stations on Line 8 and three stations on Tram Line THZ1. Since the 104th session, the Canton Fair has been arranged in three phases instead of two phases. Guangzhou Peugeot Automobile Company produced the Peugeot 504 and Peugeot 505 automobiles from 1989 to 1997. The 2010 census found Guangzhou's population to be 12.78 million. , it was estimated at 13,080,500, with 11,264,800 urban residents. Its population density is thus around 1,800 people per km2. The built-up area of the Guangzhou proper connects directly to several other cities. 
The built-up area of the Pearl River Delta Economic Zone covers around and has been estimated to house 22 million people, including Guangzhou's nine urban districts, Shenzhen (5.36m), Dongguan (3.22m), Zhongshan (3.12m), most of Foshan (2.2m), Jiangmen (1.82m), Zhuhai (890k), and Huizhou's Huiyang District (760k). The total population of this agglomeration is over 28 million after including the population of the adjacent Hong Kong Special Administrative Region. The area's fast-growing economy and high demand for labour has produced a huge "floating population" of migrant workers. Up to 10 million migrants reside in the area at least six months each year. In 2008, about five million of Guangzhou's permanent residents were hukouless migrants. Most of Guangzhou's population is Han Chinese. Almost all of the local Cantonese people speak Cantonese as their first language, while most migrants speak forms of Mandarin. In 2010, each language was the native tongue of roughly half of the city's population, although smaller but substantial numbers speak other varieties as well. In 2018, He Huifeng of the "South China Morning Post" stated that younger residents have increasingly favored using Mandarin instead of Cantonese in their daily lives, causing their Cantonese-speaking grandparents and parents to use Mandarin to communicate with them. He Huifeng stated that factors included local authorities discouraging the use of Cantonese in schools and the rise in prestige of Mandarin-speaking Shenzhen. Guangzhou has a very unbalanced gender ratio, and its province has a higher imbalance than the rest of the country. While most areas of China have 112–120 boys per 100 girls, the Guangdong province that houses Guangzhou has more than 130 boys for every 100 girls. The influx of internal migrants to Guangzhou also brought an estimated 300,000 prostitutes, mostly from Sichuan in central China. 
Guangzhou now has a huge influx of migrants, with up to 30 million additional migrants living in the area for at least six months out of every year. This huge influx of people from other areas, called the floating population, is due to the city's fast-growing economy and high labor demands. Guangzhou Mayor Wan Qingliang told an urban planning seminar that Guangzhou is facing a very serious population problem: while the city had 10.33 million registered residents at the time, with targets and scales of land use based on this number, its actual population, including migrants, was nearly 15 million. According to the Guangzhou Academy of Social Sciences researcher Peng Peng, the city is almost at its maximum capacity of about 15 million, which means it is facing great strain, mostly due to its large population of unregistered people. According to the 2000 National Census, marriage is one of the top two reasons for permanent migration, and it is particularly important for women: 29.3% of permanent female migrants migrate for marriage (Liang et al., 2004). Many female economic migrants marry men from Guangzhou in hopes of a better life, but like elsewhere in the People's Republic of China, the household registration system ("hukou") limits migrants' access to residences, educational institutions and other public benefits. In May 2014, legally employed migrants in Guangzhou were permitted to receive a "hukou" card allowing them to marry and obtain permission for their pregnancies in the city, rather than having to return to their official hometowns as previously. Historically, the Cantonese people have made up a sizeable part of the 19th- and 20th-century Chinese diaspora, and many overseas Chinese have ties to Guangzhou. This is particularly true in the United States, Canada, and Australia. 
Demographically, the only significant immigration into China has been by overseas Chinese, but Guangzhou sees many foreign tourists, workers, and residents from the usual locations, such as the United States. Notably, it is also home to thousands of African immigrants, including people from Nigeria, Somalia, Angola and the Democratic Republic of Congo. The encompassing metropolitan area was estimated by the OECD (Organisation for Economic Co-operation and Development) to have a population of 25 million. When the first line of the Guangzhou Metro opened in 1997, Guangzhou was the fourth city in Mainland China to have an underground railway system, behind Beijing, Tianjin, and Shanghai. Currently the metro network is made up of thirteen lines, covering a total length of . A long-term plan is to expand the city's metro system to over by 2020, with 15 lines in operation. In addition to the metro system, there is also the Haizhu Tram line, which opened on 31 December 2014. The Guangzhou Bus Rapid Transit (GBRT) system was introduced in 2010 along Zhongshan Road. It has several connections to the metro and is the world's second-largest bus rapid transit system, with 1,000,000 passenger trips daily. It handles 26,900 passengers per hour per direction (pphpd) during the peak hour, a capacity second only to the TransMilenio BRT system in Bogotá. The system averages one bus every 10 seconds, or 350 per hour, in a single direction and contains the world's longest BRT stations, around including bridges. In the 19th century, the city already boasted over 600 long, straight streets; these were mostly paved but still very narrow. In June 1919, work began on demolishing the city wall to make way for wider streets and the development of tramways. The demolition took three years in total. 
In 2009, it was reported that all 9,424 buses and 17,695 taxis in Guangzhou would be operating on LPG fuel by 2010 to promote clean energy for transport and improve the environment ahead of the 2010 Asian Games, which were held in the city. At present, Guangzhou uses more LPG-fueled vehicles than any other city in the world; at the end of 2006, 6,500 buses and 16,000 taxis were using LPG, accounting for 85 percent of all buses and taxis. Effective January 1, 2007, the municipal government banned motorcycles in Guangzhou's urban areas; motorcycles found violating the ban are confiscated. The Guangzhou traffic bureau claimed to have reported reduced traffic problems and accidents in the downtown area since the ban. Guangzhou's main airport is the Baiyun International Airport in Baiyun District; it opened on August 5, 2004. It is China's second-busiest airport in terms of traffic movements. It replaced the old Baiyun International Airport, which was very close to the city centre but failed to meet the city's fast-growing air traffic demand. The old Baiyun International Airport was in operation for 72 years. Guangzhou Baiyun International Airport now has three runways, with two more planned, and Terminal 2 opened on April 26, 2018. Another airport, in Zengcheng District, is being planned. Guangzhou is also served by Hong Kong International Airport; ticketed passengers can take ferries from the Lianhuashan Ferry Terminal and Nansha Ferry Port in Nansha District to the HKIA Skypier. There are also coach bus services connecting Guangzhou with HKIA. Guangzhou is the terminus of the Beijing–Guangzhou, Guangzhou–Shenzhen, Guangzhou–Maoming and Guangzhou–Meizhou–Shantou conventional-speed railways. In late 2009, the Wuhan–Guangzhou high-speed railway started service, with multiple-unit trains covering at a top speed of . 
In December 2014, the Guiyang–Guangzhou high-speed railway and Nanning–Guangzhou railway began service, with trains running at top speeds of and , respectively. The Guangdong Through Train departs from the Guangzhou East railway station and arrives at Hung Hom Station in Kowloon, Hong Kong; the route is approximately in length and the ride takes less than two hours. Frequent coach services are also provided, with coaches departing every day from different locations (mostly major hotels) around the city. A number of regional railways radiating from Guangzhou have begun operating, such as the Guangzhou–Zhuhai intercity railway and the Guangzhou–Foshan–Zhaoqing intercity railway. There are daily high-speed catamaran services between Nansha Ferry Terminal and Lianhua Shan Ferry Terminal in Guangzhou and the Hong Kong China Ferry Terminal, as well as between Nansha Ferry Terminal and the Macau Ferry Pier in Hong Kong. Within China, the culture of the Cantonese people is a subset of the larger "Southern" or "Lingnan" cultural areas. Notable aspects of Guangzhou's cultural heritage include: The Guangzhou Opera House and Symphony Orchestra also perform classical Western music and Chinese compositions in that style. Cantonese music is a traditional style of Chinese instrumental music, while Cantopop is the local form of pop music and rock-and-roll, which developed from neighbouring Hong Kong. Qing-era Guangzhou had around 124 religious pavilions, halls, and temples. Today, in addition to the Buddhist Association, Guangzhou also has a Taoist Association, a Jewish community, and a history with Christianity and Islam. Taoism and Chinese folk religion are still represented at a few of the city's temples. Among the most important is the Temple of the Five Immortals, honoring the five immortals credited with introducing rice cultivation at the foundation of the city. 
The five rams they rode were supposed to have turned into stones upon their departure and gave the city several of its nicknames. Another place of worship is the City God Temple. Guangzhou, like most of southern China, is also notably observant concerning ancestral veneration during occasions like the Tomb Sweeping and Ghost Festivals. Buddhism is the most prominent religion in Guangzhou. The Zhizhi Temple was founded in  233 from the estate of a Wu official; it is said to comprise the residence of Zhao Jiande, the last of the Nanyue kings, and has been known as the Guangxiao Temple ("Temple of Bright Filial Piety") since the Ming. The missionary Bodhidharma is traditionally said to have visited Panyu during the Liu Song or Liang dynasties (5th or 6th century). Around  520, Emperor Wu of the Liang ordered the construction of the Baozhuangyan Temple and the Xilai Monastery to store the relics of Cambodian Buddhist saints which had been brought to the city and to house the monks beginning to assemble there. The Baozhuangyan is now known as the Temple of the Six Banyan Trees, after a famous poem composed by Su Shi after a visit during the Northern Song. The Xilai Monastery was renamed the Hualin Temple ("Flowery Forest Temple") after its reconstruction during the Qing. The temples were damaged by both the Republican campaign to "Promote Education with Temple Property" () and the Maoist Cultural Revolution but have been renovated since the opening up that began in the 1980s. The Ocean Banner Temple on Henan Island, once famous in the west as the only tourist spot in Guangzhou accessible to foreigners, has been reopened as the Hoi Tong Monastery. Nestorian Christians first arrived in China via the overland Silk Road, but suffered during Emperor Wuzong's 845 persecution and were essentially extinct by the year 1000. 
The Qing-era ban on foreigners limited missionaries until it was abolished following the First Opium War, although the Protestant Robert Morrison was able to perform some work through his service with the British factory. The Catholic Archdiocese of Guangzhou is housed at Guangzhou's Sacred Heart Cathedral, known locally as the "Stone House". A Gothic Revival edifice built by hand from 1861 to 1888 under French direction, it lost its original Latin and French stained-glass windows during the wars and the Cultural Revolution; they have since been replaced by English ones. The Canton Christian College (1888) and Hackett Medical College for Women (1902) were both founded by missionaries and now form part of Guangzhou's Lingnan. Since the opening up of China in the 1980s, there has been renewed interest in Christianity, but Guangzhou maintains pressure on underground churches which avoid registration with government officials. The Catholic archbishop Dominic Tang was imprisoned without trial for 22 years, but his present successor is recognised by both the Vatican and China's Patriotic Church. Guangzhou has had a Muslim community since very early in the history of Islam; the native or nativised adherents of the faith are known as the Hui. Huaisheng Mosque was probably built during the Tang dynasty. Arab and Persian pirates sacked the city in 758, and the city's Muslims were massacred by the Chinese rebel Huang Chao in 878, along with its Jews, Christians, and Parsis. The modern city includes numerous halal restaurants. The 18,000-seat Guangzhou International Sports Arena will be one of the venues for the 2019 FIBA Basketball World Cup. From 12 to 27 November 2010, Guangzhou hosted the 16th Asian Games. The same year, it hosted the first Asian Para Games from December 12 to 19. Combined, these were the largest sporting events the city has ever hosted. 
Guangzhou also hosted the following major sporting events: Current professional sports clubs based in Guangzhou include: In the 2010s, Guangzhou Evergrande Taobao F.C. rose to be a powerhouse of association football in China, winning eight of the nine national titles between 2011 and 2019. The team has also won the AFC Champions League in 2013 and 2015. The club competed at the 2013 and 2015 FIFA Club World Cups, where it lost 3–0 in the semi-final stage to the 2012–13 UEFA Champions League winners FC Bayern Munich and the 2014–15 UEFA Champions League winners FC Barcelona, respectively. The Eight Views of Ram City are Guangzhou's eight most famous tourist attractions. They have varied over time since the Song dynasty, with some being named or demoted by emperors. The following modern list was chosen through public appraisal in 2011: Guangzhou attracts more than 223 million visitors each year, and total tourism revenue exceeded ¥400 billion in 2018. There are many tourist attractions, including: In every district there are many shopping areas where people can walk on the sidewalks, though most of them are not designated as pedestrian streets. The popular pedestrian streets are: There are many malls and shopping centers in Guangzhou; the majority of the new malls are located in the Tianhe district. Guangzhou has two local radio stations: the provincial Radio Guangdong and the municipal Radio Guangzhou. Together they broadcast on more than a dozen channels. The primary language of both stations is Cantonese. Traditionally, only one channel of Radio Guangdong is dedicated to Mandarin Chinese. However, in recent years there has been an increase in Mandarin programmes on most Cantonese channels. Radio stations from cities around Guangzhou mainly broadcast in Cantonese and can be received in different parts of the city, depending on the radio stations' locations and transmission power. 
The Beijing-based China National Radio also broadcasts Mandarin programmes in the city. Radio Guangdong has a 30-minute weekly English programme, "Guangdong Today", which is broadcast globally through the World Radio Network. Daily English news programmes are also broadcast by Radio Guangdong. Guangzhou has some of the most notable Chinese-language newspapers and magazines in mainland China, most of which are published by the three major newspaper groups in the city: the Guangzhou Daily Press Group, the Nanfang Press Corporation, and the Yangcheng Evening News Group. The two leading newspapers of the city are "Guangzhou Daily" and "Southern Metropolis Daily". The former, with a circulation of 1.8 million, has been China's most successful newspaper for 14 years in terms of advertising revenue, while "Southern Metropolis Daily" is considered one of the most liberal newspapers in mainland China. In addition to Guangzhou's Chinese-language publications, there are a few English-language magazines and newspapers. The most successful is "That's Guangzhou", which started more than a decade ago and has since blossomed into "That's PRD", producing expatriate magazines in Beijing and Shanghai as well. It also produces "In the Red". The Guangzhou Higher Education Mega Centre, also known as Guangzhou University Town, is a large tertiary education complex located in the southeast suburbs of Guangzhou. It occupies the entirety of Xiaoguwei Island in Panyu District, covering an area of about . The complex accommodates campuses from ten higher education institutions and can eventually accommodate up to 200,000 students, 20,000 teachers, and 50,000 staff. The Guangzhou Higher Education Mega Centre hosts the following higher education campuses: Guangzhou's other fully accredited and degree-granting universities and colleges include: The two main comprehensive libraries are Guangzhou Library and the Sun Yat-sen Library of Guangdong Province. Guangzhou Library is a public library in Guangzhou. 
The library has moved to a new building in Zhujiang New Town, which fully opened on 23 June 2013. Sun Yat-sen Library of Guangdong Province has the largest collection of ancient books in Southern China. Guangzhou currently maintains sister city agreements with the following foreign cities.
https://en.wikipedia.org/wiki?curid=12537
Genitive case In grammar, the genitive case (abbreviated ), also called the second case, is the grammatical case that marks a word, usually a noun, as modifying another word, also usually a noun—thus, indicating an attributive relationship of one noun to the other noun. A genitive can also serve purposes indicating other relationships. For example, some verbs may feature arguments in the genitive case; and the genitive case may also have adverbial uses (see adverbial genitive). Genitive construction includes the genitive case, but is a broader category. Placing a modifying noun in the genitive case is one way of indicating that it is related to a head noun, in a genitive construction. However, there are other ways to indicate a genitive construction. For example, many Afroasiatic languages place the head noun (rather than the modifying noun) in the construct state. Possessive grammatical constructions, including the possessive case, may be regarded as a subset of genitive construction. For example, the genitive construction "pack of dogs" is similar, but not identical in meaning to the possessive case "dogs' pack" (and neither of these is entirely interchangeable with "dog pack", which is neither genitive nor possessive). Modern English is an example of a language that has a possessive case rather than a "conventional" genitive case. That is, Modern English indicates a genitive construction with either the possessive clitic suffix "-'s", or a prepositional genitive construction such as "x of y". However, some irregular English pronouns do have possessive forms which may more commonly be described as genitive (see English possessive). Many languages have a genitive case, including Albanian, Arabic, Armenian, Basque, Czech, Dutch, Estonian, Finnish, Georgian, German, Greek, Hungarian, Icelandic, Irish, Latin, Latvian, Lithuanian, Romanian, Sanskrit, Scottish Gaelic, Kannada, Tamil, Telugu, Turkish and all Slavic languages except Bulgarian and Macedonian. 
Depending on the language, specific varieties of genitive-noun–main-noun relationships may include: Depending on the language, some of the relationships mentioned above have their own distinct cases different from the genitive. Possessive pronouns are distinct pronouns, found in Indo-European languages such as English, that function like pronouns inflected in the genitive. They are considered separate pronouns in contrast to languages where pronouns are regularly inflected in the genitive. For example, English "my" is either a separate possessive adjective or an irregular genitive of "I", while in Finnish, for example, "minun" is regularly agglutinated from "minu-" "I" and "-n" (genitive). In some languages, nouns in the genitive case also agree in case with the nouns they modify (that is, it is marked for two cases). This phenomenon is called Suffixaufnahme. In some languages, nouns in the genitive case may be found in inclusio – that is, between the main noun's article and the noun itself. In Cantonese, the particle 嘅 ("ge") or the possessed noun's classifier is used to denote possession for singular nouns, while the particle 啲 ("dī") is used for plural nouns. Examples (in Yale transcription): The Hokkien possessive is constructed by using the suffix "ê" (的 or 个 or 兮) to make the genitive case. For example: It also uses the suffix "chi" (之) for classical or official cases. For example: Some of the Hokkien singular pronouns play the roles of possessive determiners with their nasalized forms. For example: (see Hokkien pronouns) Still, the suffix "ê" is available for pronouns to express the genitive. For example: In Mandarin Chinese, the genitive case is made by use of the particle 的 (de). For instance: 我的猫 "wǒ de māo" (my cat). However, for persons in close relation to oneself, 的 is often dropped when the context allows it to be easily understood. For instance: 我妈妈 "wǒ māmā" and 我的妈妈 "wǒ de māmā" both mean "my mother". 
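The Mandarin 的 pattern just described can be sketched as a small rule. This is a minimal illustration, and the pronoun and kinship-term sets below are illustrative assumptions, not an exhaustive list:

```python
# Sketch of the Mandarin genitive pattern: possessor + 的 (de) + possessed,
# with 的 optionally dropped for pronoun + close-kin combinations.
# The word sets here are illustrative assumptions, not exhaustive.
PRONOUNS = {"我", "你", "他", "她"}
KINSHIP_TERMS = {"妈妈", "爸爸", "哥哥", "姐姐"}

def genitive(possessor: str, possessed: str) -> str:
    if possessor in PRONOUNS and possessed in KINSHIP_TERMS:
        return possessor + possessed        # e.g. 我妈妈 "my mother"
    return possessor + "的" + possessed     # e.g. 我的猫 "my cat"

print(genitive("我", "猫"))    # 我的猫
print(genitive("我", "妈妈"))  # 我妈妈
```

Note that in real usage the particle is merely optional with kin terms, as the article states; the sketch simply always drops it in that case.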
Old English had a genitive case, which has left its mark in modern English in the form of the possessive ending "'s" (now sometimes referred to as the "Saxon genitive"), as well as possessive pronoun forms such as "his", "theirs", etc., and in certain words derived from adverbial genitives such as "once" and "afterwards". (Other Old English case markers have generally disappeared completely.) The modern English possessive forms are not normally considered to represent a grammatical case, although they are sometimes referred to as genitives or as belonging to a possessive case. One of the reasons that the status of "’s" as a case ending is often rejected is that it does not behave as such, but rather as a clitic marking that indicates that a dependency relationship exists between phrases. One can say "the Queen’s dress", but also "the Queen of England’s dress", where the genitive marker is completely separated from the actual possessor. If it were a genitive case as many other languages have (including Old English), one would expect something like "*the Queen’s of England dress" or, to emulate languages with a single consistent genitive case, "*the England’s queen’s dress". Finnic languages (Finnish, Estonian) have genitive cases. In Finnish, prototypically the genitive is marked with "-n", e.g. "maa – maan" "country – of the country". The stem may change, however, with consonant gradation and other reasons. For example, in certain words ending in consonants, "-e-" is added, e.g. "mies – miehen" "man – of the man", and in some, but not all words ending in "-i", the "-i" is changed to an "-e-", to give "-en", e.g. "lumi – lumen" "snow – of the snow". The genitive is used extensively, with animate and inanimate possessors. In addition to the genitive, there is also a partitive case (marked "-ta/-tä" or "-a/-ä") used for expressing that something is a part of a larger mass, e.g. "joukko miehiä" "a group of men". 
In Estonian, the genitive marker "-n" has been elided relative to Finnish. Thus, the genitive always ends with a vowel, and the singular genitive is sometimes (in a subset of words ending with a vowel in the nominative) identical in form to the nominative. In Finnish, in addition to the uses mentioned above, there is a construct where the genitive is used to mark a surname. For example, "Juhani Virtanen" can also be expressed as "Virtasen Juhani" ("Juhani of the Virtanens"). A complication in Finnic languages is that the accusative case "-(e)n" is homophonic to the genitive case. This case does not indicate possession, but is a syntactic marker for the object, additionally indicating that the action is telic (completed). In Estonian, it is often said that only a "genitive" exists. However, the cases have completely different functions, and the form of the accusative has developed from *"-(e)m". (The same sound change has developed into a synchronic mutation of a final "m" into "n" in Finnish, e.g. genitive "sydämen" vs. nominative "sydän".) This homophony has exceptions in Finnish, where a separate accusative "-(e)t" is found in pronouns, e.g. "kenet" "who (telic object)", vs. "kenen" "whose". A difference is also observed in some of the related Sámi languages, where the pronouns and the plural of nouns in the genitive and accusative are easily distinguishable from each other, e.g., "kuä'cǩǩmi" "eagles' (genitive plural)" and "kuä'cǩǩmid" "eagles (accusative plural)" in Skolt Sami. In German, the genitive singular definite article for masculine and neuter nouns is "des", while the feminine and plural definite article is "der". The indefinite articles are "eines" for masculine and neuter nouns, and "einer" for feminine and plural nouns (although the bare form cannot be used in the plural, it manifests in "keiner", "meiner", etc.) Singular masculine and neuter nouns of the strong declension in the genitive case are marked with "-(e)s". 
Generally, one-syllable nouns favour the "-es" ending, and it is obligatory with nouns ending with a sibilant such as "s" or "z". Otherwise, a simple "-s" ending is usual. Feminine and plural nouns remain uninflected: Singular masculine nouns (and one neuter noun) of the weak declension are marked with an "-(e)n" (or rarely "-(e)ns") ending in the genitive case: The declension of adjectives in the genitive case is as follows: The genitive personal pronouns are quite rare and either very formal, literary, or outdated. They are as follows (with comparison to the nominative pronouns): Some examples: Unlike the personal ones, the genitive relative pronouns are in regular use and are as follows (with comparison to the nominative relative pronouns): Some examples: The genitive case is often used to show possession or the relation between nouns: A simple "s" is added to the end of a name: The genitive case is also commonly found after certain prepositions: The genitive case can sometimes be found in connection with certain adjectives: The genitive case is occasionally found in connection with certain verbs (some of which require an accusative before the genitive); they are mostly either formal or legal: The ablative case of Indo-European was absorbed into the genitive in Classical Greek. This added the usages of the "ablatival genitive" to those of the "genitive proper". The genitive occurs with verbs, adjectives, adverbs and prepositions. The Hungarian genitive is constructed using the suffix "-é" or "-e" depending on vowel harmony. It is a noun case formed from a standalone external-possession construction, contracting the owner and the possessed into one word. It serves the role of "mine", "yours", "hers", etc. For example: In addition, the suffix "-i" ('of') is also used. For example: The Japanese possessive is constructed by using the suffix "-no" 〜の to make the genitive case. 
For example: It also uses the suffix "-na" 〜な for adjectival nouns; in some analyses adjectival nouns are simply nouns that take "-na" in the genitive, forming a complementary distribution ("-no" and "-na" being allomorphs). The archaic genitive case particle "-ga" ~が is still retained in certain expressions, place names, and dialects. Typically, languages convert nominative-case nouns into the genitive case. It has been found, however, that Japanese will in rare cases allow the accusative case to convert to the genitive, if specific conditions are met in the clause in which the conversion appears. This is referred to as "Accusative-Genitive conversion." The genitive in Korean can be formed using the particle "-ui" '의', although this particle is normally elided in Modern Korean, which leaves the genitive unmarked. (If it is not elided, it is usually pronounced "-e" '에'.) Only some personal pronouns retain a distinctive genitive, which comes from the amalgamation of the pronoun plus "-ui" '의'. In Modern Korean, however: "igeoseun geu namja jadongchayeyo." 이것은 그 남자 자동차예요. 의 is used to mark possession, relation, origination, containment, description/limitation, partition, being an object of a metaphor, or modification. The genitive is one of the cases of nouns and pronouns in Latin. Latin genitives still have certain modern scientific uses: The Irish language also uses a genitive case ("tuiseal ginideach"). For example, in the phrase "bean an tí" (woman of the house), "tí" is the genitive case of "teach", meaning "house". Another example is "barr an chnoic", "top of the hill", where "cnoc" means "hill" but is changed to "chnoic", which also incorporates lenition. Old Persian had a true genitive case inherited from Proto-Indo-European. By the time of Middle Persian, the genitive case had been lost and replaced by an analytical construction which is now called Ezāfe. 
This construction was inherited by New Persian, and was also later borrowed into numerous other Iranic, Turkic and Indo-Aryan languages of Western and South Asia. Genitive case marking existed in Proto-Semitic, Akkadian, and Ugaritic. It indicated possession, and it is preserved today only in Arabic. Called المجرور "al-majrūr" (meaning "dragged") in Arabic, the genitive case functions both as an indication of ownership (e.g., the door of the house) and for nouns following a preposition. The Semitic genitive should not be confused with the pronominal possessive suffixes that exist in all the Semitic languages. With the exception of Bulgarian and Macedonian, all Slavic languages decline nouns and adjectives in the genitive case using a variety of endings depending on the word's lexical category, its gender, and number (singular or plural). To indicate possession, the ending of the noun indicating the possessor changes depending on the word's ending in the nominative case: for example, to "a, u, i or y" in Polish, to "а, я, ы or и" in Russian, and similarly in other Slavic languages. Possessives can also be formed by the construction (pol.) "u [subject] jest [object]" / (rus.) "У [subject] есть [object]". In sentences where the possessor includes an associated pronoun, the pronoun also changes: And in sentences denoting negative possession, the ending of the object noun also changes: Note that the Polish phrase "nie ma [object]" can work both as a negation of having [object] and as a negation of the existence of [object], but the meaning and structure of the two sentences differ. (In the first case the [subject] is Irene, and in the second the [subject] is virtual: "the space" at Irene's place, not Irene herself.) Note that the Russian word "нет" is a contraction of "не" + "есть". 
In Russian there is no distinction between [subject] not having an [object] and [object] not being present at [subject]'s. The genitive case is also used in sentences expressing negation, even when no possessive relationship is involved. The ending of the subject noun changes just as it does in possessive sentences. The genitive, in this sense, can only be used to negate nominative, accusative and genitive sentences, and not other cases. Use of the genitive for negation is obligatory in Slovene, Polish and Old Church Slavonic. The East Slavic languages (Russian, Ukrainian, and Belarusian) employ either the accusative or genitive for negation, although the genitive is more commonly used. In Czech, Slovak and Serbo-Croatian, negating with the genitive case is perceived as rather archaic and the accusative is preferred, but genitive negation in these languages is still not uncommon, especially in music and literature. The genitive case is used with some verbs and mass nouns to indicate that the action covers only a part of the direct object (serving the function of a partitive case, which these languages lack), whereas similar constructions using the accusative case denote full coverage. Compare the sentences: In Russian, a special partitive case or sub-case is observed for some uncountable nouns, which in some contexts have a preferred alternative form in -у/-ю instead of the standard genitive in -а/-я: выпил чаю ('drank some tea'), but сорта чая ('sorts of tea'). The genitive case is also used in many prepositional constructions (usually when some movement or change of state is involved and when describing the source or destination of the movement, and sometimes also when describing the manner of acting). The Turkish possessive is constructed using two suffixes: a genitive case for the possessor and a possessive suffix for the possessed object. For example: The genitive in Albanian is formed with the help of clitics. For example: If the possessed object is masculine, the clitic is "i". 
If the possessed object is feminine, the clitic is "e". If the possessed object is plural, the clitic is "e" regardless of the gender. The genitive is used with some prepositions: "me anë" ('by means of'), "nga ana" ('on behalf of', 'from the side of'), "për arsye" ('due to'), "për shkak" ('because of'), "me përjashtim" ('with the exception of'), "në vend" ('instead of'). In Kannada, the genitive case endings are: ನ (na) for masculine or feminine nouns ending in "ಅ" (a); ದ (da) for neuter nouns ending in "ಅ" (a); ಅ (a) for all nouns ending in "ಇ" (i), "ಈ" (ī), "ಎ" (e), or "ಏ" (ē); and ಇನ (ina) for all nouns ending in "ಉ" (u), "ಊ" (ū), "ಋ" (r̥), or "ೠ" (r̥̄). Most postpositions in Kannada take the genitive case. In Tamil, the genitive case ending is the word உடைய, which signifies possession. Depending on the last letter of the noun, the genitive case endings may vary. If the last letter is a consonant (மெய் எழுத்து), like க், ங், ச், ஞ், ட், ண், த், ந், ப், ம், ய், ர், ல், வ், ழ், then the suffix உடைய is added. Examples: His: அவன் + உடைய = அவனுடைய, Doctor's: மருத்துவர் + உடைய = மருத்துவருடைய, Kumar's: குமார் + உடைய = குமாருடைய
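Returning to the Turkish construction mentioned earlier (a genitive suffix on the possessor plus a possessive suffix on the possessed object), the vowel of each suffix is chosen by fourfold vowel harmony. The following is a minimal sketch of that mechanism only; it deliberately ignores Turkish consonant alternations such as kitap → kitabı:

```python
# Sketch of the Turkish two-suffix possessive: possessor + -(n)In (genitive),
# possessed + -(s)I (3rd-person possessive), with the suffix vowel selected
# by fourfold vowel harmony. Consonant alternations are ignored.
VOWELS = "aeıioöuü"

def harmony_vowel(word: str) -> str:
    """Pick ı/i/u/ü to agree with the word's last vowel in frontness and rounding."""
    last = next(c for c in reversed(word) if c in VOWELS)
    front, rounded = last in "eiöü", last in "oöuü"
    return {(False, False): "ı", (True, False): "i",
            (False, True): "u", (True, True): "ü"}[(front, rounded)]

def possessive(owner: str, owned: str) -> str:
    # Buffer consonants "n" / "s" appear only after vowel-final stems.
    genitive = owner + ("n" if owner[-1] in VOWELS else "") + harmony_vowel(owner) + "n"
    possessed = owned + ("s" if owned[-1] in VOWELS else "") + harmony_vowel(owned)
    return genitive + " " + possessed

print(possessive("ev", "kapı"))     # evin kapısı ("the door of the house")
print(possessive("araba", "kapı"))  # arabanın kapısı ("the door of the car")
```

The fourfold choice (ı/i/u/ü) reflects that these high-vowel suffixes copy both the frontness and the rounding of the stem's final vowel.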
https://en.wikipedia.org/wiki?curid=12539
Gematria Gematria (; or Gimatria , plural or , "gematriot") is an alphanumeric code of assigning a numerical value to a name, word or phrase based on its letters. A single word can yield multiple values depending on the cipher used. Gematria originated as an Assyro-Babylonian-Greek system of alphanumeric code or cipher that was later adopted into Jewish culture. Similar systems have been used in other languages and cultures: earlier, the Greek isopsephy, and later, derived from or inspired by Hebrew gematria, Arabic abjad numerals, and English gematria. A well-known example of Hebrew gematria is the word "chai" ("alive"), which is composed of two letters that (using the assignments in the "Mispar gadol" table shown below) add up to 18. This has made 18 a "lucky number" among the Jewish people. Gifts of money in multiples of 18 are very popular. Although the term is Hebrew, it may be derived from the Greek γεωμετρία "geōmetriā", "geometry", which was used as a translation of gēmaṭriyā, though some scholars believe it to derive from Greek γραμματεια "grammateia" "knowledge of writing". It is likely that both Greek words had an influence on the formation of the Hebrew word. Some also hold it to derive from the order of the Greek alphabet, gamma being the third letter of the Greek alphabet. The word has been extant in English since at least the 17th century from translations of works by Giovanni Pico della Mirandola. Although ostensibly derived from Greek isopsephy, it is largely used in Jewish texts, notably in those associated with the Kabbalah. The term does not appear in the Hebrew Bible itself. Some identify two forms of gematria: the "revealed" form, which is prevalent in many hermeneutic methods found throughout rabbinic literature, and the "mystical" form, a largely Kabbalistic practice. 
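The "chai" example above can be checked directly from the standard letter values; only the two relevant letters (chet = 8, yod = 10) are included in this sketch:

```python
# The two letters of "chai" (חי) under the standard (Mispar hechrechi)
# assignments: chet = 8, yod = 10, summing to 18.
values = {"ח": 8, "י": 10}
chai_value = sum(values[letter] for letter in "חי")
print(chai_value)  # 18
```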
A few instances of gematria in Arabic, Spanish and Greek, spelled with the Hebrew letters, are mentioned in the works of Abraham Abulafia; some Hasidic rebbes also used it, although rarely, for Yiddish. Some scholars think it possible that gematria is encoded in the Hebrew Bible. For example, Israel Knohl noted that "it is not out of the question that this technique was already known in the biblical period and was used specifically in religious contexts", and has hypothesized that "the fact that the representation of the numerical values of letters is not demonstrated in mundane use in ancient Israel before the Hellenistic period may point to the possibility that this method was first a sacred secret knowledge that was kept in closed circles". Victor Hurowitz points out "numerological principles in the organization of the book [of Proverbs]" and demonstrates that gematria has Mesopotamian precedents. Stephen J. Lieberman writes, "we must admit that it is possible such techniques were employed in biblical texts. The means were available, and if the desire was present, it was certainly possible for hidden messages to be put into the Bible". A Mishnaic textual source makes clear that the word "gematria" dates to at least the Tannaitic period; Herbert Danby explains "gematriot" in his Mishnah translation, and also notes a case of gematria already in Mishnah Uktzin 3:12. In the Talmud, the term "gematria" is also used to refer to a different concept: the atbash cipher. A Biblical commentary incorporating "gematria" is "Baal ha-Turim" by Jacob ben Asher. The Anglican scholar E. W. Bullinger wrote "Number in Scripture" (1894), analyzing both Hebrew and Greek gematria in the Old and New Testaments, although his conclusions have since been challenged on the grounds of confirmation bias and mathematical inconsistencies. 
In the standard ("Mispar hechrechi") version of "gematria", each letter is given a numerical value between 1 and 400, as shown in the following table. In the "Mispar gadol" variation, the five final letters are given their own values, ranging from 500 to 900. A mathematical formula for finding a letter's corresponding number in Mispar gadol is: formula_1 where "x" is the position of the letter in the regular order of the alphabet, and the floor and modulo functions are used. The value of the Hebrew vowels is not usually counted, but some lesser-known methods include the vowels as well. The most common vowel values are as follows (a less common alternative value, based on digit sum, is given in parentheses): Sometimes the names of the vowels are spelled out and their gematria is calculated using standard methods. There are many different methods used to calculate the numerical value of individual Hebrew/Aramaic words, phrases or whole sentences. More advanced methods are usually reserved for the most significant Biblical verses, prayers, names of God and angels, etc. These methods include: Within the wider topic of gematria are included the various alphabet transformations, in which one letter is substituted for another based on a logical scheme: Most of the above-mentioned methods and ciphers are listed by Rabbi Moshe Cordovero. Some authors provide lists of as many as 231 various replacement ciphers, related to the 231 mystical Gates of the Sefer Yetzirah. Dozens of other, far more advanced methods are used in Kabbalistic literature, without any particular names. In Ms. Oxford 1,822, one article lists 75 different forms of gematria. Some known methods are recursive in nature and are reminiscent of graph theory, or make heavy use of combinatorics. Rabbi Elazar Rokeach often used multiplication, instead of addition, for the above-mentioned methods. 
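The Mispar gadol formula itself is elided above (formula_1), but one closed form consistent with the description — floor and modulo applied to the letter's position "x", with the five finals following ת at positions 23–27 — is the following. Treat this particular formula as an assumption, not the article's own:

```python
def mispar_gadol(x: int) -> int:
    """Mispar gadol value of the letter at position x (1-27), assuming an
    ordering in which the five final forms follow ת at positions 23-27.
    Positions 1-9 map to 1-9, 10-18 to 10-90, and 19-27 to 100-900."""
    tier = (x - 1) // 9            # floor division: units, tens, or hundreds
    return (x - 9 * tier) * 10 ** tier

# aleph, yod, tav, final kaf, final tsadi:
print([mispar_gadol(x) for x in (1, 10, 22, 23, 27)])
```

The `tier` term is the floor function from the description, and `x - 9 * tier` plays the role of the modulo-like reduction of the position into the 1–9 range.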
For example, spelling out the letters of a word and then multiplying the squares of each letter value in the resulting string produces very large numbers, on the order of trillions. The spelling process can be applied recursively, until a certain pattern (e.g., all the letters of the word "Talmud") is found; the gematria of the resulting string is then calculated. The same author also used sums of all possible unique letter combinations, which add up to the value of a given letter. For example, the letter Hei, which has the standard value of 5, can be produced by combining formula_6, formula_7, formula_8, formula_9, formula_10, or formula_11, which adds up to formula_12. Sometimes combinations of repeating letters are not allowed (e.g., formula_11 is valid, but formula_8 is not). The original letter itself can also be viewed as a valid combination. Variant spellings of some letters can be used to produce sets of different numbers, which can be added up or analyzed separately. Many complex formal systems and recursive algorithms, based on graph-like structural analysis of the letter names and their relations to each other, modular arithmetic, pattern search and other highly advanced techniques, are found in the "Sefer ha-Malchuth" by Rabbi David ha-Levi of Draa Valley, a Spanish-Moroccan Kabbalist of the 15th–16th centuries. Rabbi David ha-Levi's methods take into consideration the numerical values and other properties of the vowels as well. Kabbalistic astrology uses some specific methods to determine the astrological influences on a particular person. According to one method, the gematria of the person's name is added to the gematria of his or her mother's name; the result is then divided by 7 and 12. The remainders signify a particular planet and Zodiac sign. The most common form of Hebrew "gematria" is used in the Talmud and Midrash, and elaborately by many post-Talmudic commentators. 
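The Hei example above can be made concrete under one reading of the elided formulas (an assumption): each combination is a multiset of smaller letter values summing to 5, which gives exactly six combinations, two of which repeat no letter. A recursive sketch:

```python
def combos(n: int, max_part: int) -> list[list[int]]:
    """All multisets of positive letter values (each <= max_part) that
    sum to n, with parts listed in non-increasing order."""
    if n == 0:
        return [[]]
    out = []
    for k in range(min(n, max_part), 0, -1):
        for rest in combos(n - k, k):      # keep parts non-increasing
            out.append([k] + rest)
    return out

# Hei has the standard value 5; combine strictly smaller values (1-4):
all_combos = combos(5, 4)
print(len(all_combos))                                    # 6 combinations
print([c for c in all_combos if len(set(c)) == len(c)])   # no repeated letter
```

With repetition disallowed, only the two combinations 4+1 and 2+3 survive, matching the text's remark that some combinations with repeating letters are excluded.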
It involves reading words and sentences as numbers, assigning numerical instead of phonetic value to each letter of the Hebrew alphabet. When read as numbers, they can be compared and contrasted with other words or phrases – cf. the Hebrew proverb ("", lit. "wine entered, secret went out", i.e. ""in vino veritas""). The gematric value of ("wine") is 70 (=10; =10; =50) and this is also the gematric value of ("secret", =60; =6; =4). The first attested use of gematria occurs in an inscription of the Assyrian ruler Sargon II (727–705 BCE) stating that the king built the wall of Khorsabad 16,283 cubits long to correspond with the numerical value of his name. Gematria or isopsephy was borrowed from the Greeks, probably soon after their adoption of the Semitic writing system. The extant examples of use in Greek come primarily from Christian literature and, unlike in rabbinic sources, the technique is always explicitly identified as being used. It has been asserted that Plato (c. 427–347 BC) offers a discussion of gematria "in its simplest forms" in the "Cratylus", where he is said to have claimed that "the 'essential force' of a thing's name is to be found in its numerical value, and that words and phrases of the same numerical value may be substituted in context without loss in meaning". A direct review of the "Cratylus", however, shows that Plato made no such claim and that gematria is not discussed in it either explicitly or implicitly. What can be more accurately stated is that Plato's discussion in the "Cratylus" involves a view of words and names as referring (more or less accurately) to the "essential nature" of a person or object, and that this view may have influenced – and is central to – "Greek gematria". The Latin-script languages exhibit borrowing of gematria methods dating from the early Middle Ages, after the practice lapsed following the collapse of the Roman Empire in the 5th century. "Simple(6,74) English(7,74) Gematria(8,74)" uses 'the key'(74) of A=1, B2, C3...O15 or zero...Z26. 
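The wine/secret equivalence above can be verified numerically using just the letter values the text quotes (wine: yod = 10, yod = 10, final nun = 50; secret: samekh = 60, vav = 6, dalet = 4):

```python
# Letter values quoted in the text for the two words of the proverb.
yayin_values = {'י': 10, 'ן': 50}        # "wine":   yod, yod, final nun
sod_values = {'ס': 60, 'ו': 6, 'ד': 4}   # "secret": samekh, vav, dalet

wine = sum(yayin_values[ch] for ch in 'יין')
secret = sum(sod_values[ch] for ch in 'סוד')
print(wine, secret)  # both sums come to 70
```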
Many researchers connect the "Number of the Beast", referred to in the Book of Revelation of the New Testament, with Hebrew gematria as used by the early Jewish Christians. According to such interpretations, the number in question, "six hundred sixty-six" (666; see ), was originally derived from the Latin name of the Roman emperor at the time, Nero Caesar, via the Greek 'Neron Kaisar', transliterated into Hebrew and counted by gematria. The result of this operation is six hundred sixty-six (formula_15). ( "").
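The usual reconstruction spells 'Neron Kaisar' in Hebrew as נרון קסר; summing the standard letter values (with the final nun counted as 50, as in Mispar hechrechi) reproduces 666:

```python
# Standard values of the letters appearing in the Hebrew spelling.
values = {'נ': 50, 'ר': 200, 'ו': 6, 'ן': 50, 'ק': 100, 'ס': 60}

name = 'נרון קסר'  # Neron Kaisar
total = sum(values.get(ch, 0) for ch in name)  # the space contributes 0
print(total)
```

The first word sums to 306 (50 + 200 + 6 + 50) and the second to 360 (100 + 60 + 200), giving 666.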
https://en.wikipedia.org/wiki?curid=12541
Grateful Dead The Grateful Dead was an American rock band formed in 1965 in Palo Alto, California. The band is known for its eclectic style, which fused elements of rock, folk, country, jazz, bluegrass, blues, gospel, and psychedelic rock; for live performances of lengthy instrumental jams; and for its devoted fan base, known as "Deadheads." "Their music," writes Lenny Kaye, "touches on ground that most other groups don't even know exists." These various influences were distilled into a diverse and psychedelic whole that made the Grateful Dead "the pioneering Godfathers of the jam band world". The band was ranked 57th by "Rolling Stone" magazine in its The Greatest Artists of All Time issue. The band was inducted into the Rock and Roll Hall of Fame in 1994 and a recording of their May 8, 1977 performance at Cornell University's Barton Hall was added to the National Recording Registry of the Library of Congress in 2012. The Grateful Dead have sold more than 35 million albums worldwide. The Grateful Dead was founded in the San Francisco Bay Area amid the rise of the counterculture of the 1960s. The founding members were Jerry Garcia (lead guitar, vocals), Bob Weir (rhythm guitar, vocals), Ron "Pigpen" McKernan (keyboards, harmonica, vocals), Phil Lesh (bass, vocals), and Bill Kreutzmann (drums). Members of the Grateful Dead had played together in various San Francisco bands, including Mother McCree's Uptown Jug Champions and the Warlocks. Lesh was the last member to join the Warlocks before they became the Grateful Dead; he replaced Dana Morgan Jr., who had played bass for a few gigs. Drummer Mickey Hart and non-performing lyricist Robert Hunter joined in 1967. With the exception of McKernan, who died in 1973, and Hart, who took time off from 1971 to 1974, the core of the band stayed together for its entire 30-year history. 
The other official members of the band are Tom Constanten (keyboards; 1968–1970), John Perry Barlow (nonperforming lyricist; 1971–1995), Keith Godchaux (keyboards; 1971–1979), Donna Godchaux (vocals; 1972–1979), Brent Mydland (keyboards, vocals; 1979–1990), and Vince Welnick (keyboards, vocals; 1990–1995). Bruce Hornsby (accordion, piano, vocals) was a touring member from 1990 to 1992, as well as a guest with the band on occasion before and after the tours. After the death of Garcia in 1995, former members of the band, along with other musicians, toured as the Other Ones in 1998, 2000, and 2002, and the Dead in 2003, 2004, and 2009. In 2015, the four surviving core members marked the band's 50th anniversary in a series of concerts that were billed as their last performances together. There have also been several spin-offs featuring one or more core members, such as Dead & Company, Furthur, the Rhythm Devils, Phil Lesh and Friends, RatDog, and Billy & the Kids. The Grateful Dead began their career as the Warlocks, a group formed in early 1965 from the remnants of a Palo Alto, California jug band called Mother McCree's Uptown Jug Champions. The band's first show was at Magoo's Pizza Parlor, located at 639 Santa Cruz Avenue in suburban Menlo Park, on May 5, 1965. They continued playing bar shows as the Warlocks, but quickly changed their name after finding out that the Velvet Underground had put out a record under the same name. The first show under the name Grateful Dead was in San Jose on December 4, 1965, at one of Ken Kesey's Acid Tests. Earlier demo tapes have survived, but the first of over 2,000 concerts known to have been recorded by the band's fans was a show at the Fillmore Auditorium in San Francisco on January 8, 1966. Later that month, the Grateful Dead played at the Trips Festival, an early psychedelic rock concert. The name "Grateful Dead" was chosen from a dictionary. According to Phil Lesh, "[Jerry Garcia] picked up an old "Britannica World Language Dictionary" ... 
[and] ... In that silvery elf-voice he said to me, 'Hey, man, how about the Grateful Dead?'" The definition there was "the soul of a dead person, or his angel, showing gratitude to someone who, as an act of charity, arranged their burial". According to Alan Trist, director of the Grateful Dead's music publisher company Ice Nine, Garcia found the name in the Funk & Wagnalls "Folklore Dictionary", when his finger landed on that phrase while playing a game of Fictionary. In the Garcia biography, "Captain Trips", author Sandy Troy states that the band was smoking the psychedelic DMT at the time. The term "grateful dead" appears in folktales of a variety of cultures. Other supporting personnel who signed on early included Rock Scully, who heard of the band from Kesey and signed on as manager after meeting them at the Big Beat Acid Test; Stewart Brand, "with his side show of taped music and slides of Indian life, a multimedia presentation" at the Big Beat and then, expanded, at the Trips Festival; and Owsley Stanley, the "Acid King" whose LSD supplied the tests and who, in early 1966, became the band's financial backer, renting them a house on the fringes of Watts and buying them sound equipment. "We were living solely off of Owsley's good graces at that time. ... [His] trip was he wanted to design equipment for us, and we were going to have to be in sort of a lab situation for him to do it", said Garcia. One of the group's earliest major performances in 1967 was the Mantra-Rock Dance—a musical event held on January 29, 1967, at the Avalon Ballroom by the San Francisco Hare Krishna temple. The Grateful Dead performed at the event along with the Hare Krishna founder Bhaktivedanta Swami, poet Allen Ginsberg, bands Moby Grape and Big Brother and the Holding Company with Janis Joplin, donating proceeds to the Krishna temple. The band's first LP, "The Grateful Dead", was released on Warner Brothers in 1967. Classically trained trumpeter Phil Lesh performed on bass guitar. 
Bob Weir, the youngest original member of the group, played rhythm guitar. Ron "Pigpen" McKernan played keyboards and harmonica until shortly before his death in 1973 at the age of 27. Garcia, Weir, and McKernan shared the lead vocal duties more or less equally; Lesh only sang a few leads, but his tenor was a key part of the band's three-part vocal harmonies. Bill Kreutzmann played drums, and in September 1967 was joined by a second drummer, New York City native Mickey Hart, who also played a wide variety of other percussion instruments. 1970 included tour dates in New Orleans, Louisiana, where the band performed at The Warehouse for two nights. On January 31, 1970, the local police raided their hotel on Bourbon Street, and arrested and charged a total of 19 people with possession of various drugs. The second night's concert was performed as scheduled after bail was posted. Eventually, the charges were dismissed, except those against sound engineer Owsley Stanley, who was already facing charges in California for manufacturing LSD. This event was later memorialized in the lyrics of the song "Truckin'", a single from "American Beauty" which reached number 64 on the charts. Mickey Hart took time off from the Grateful Dead beginning in February 1971, leaving Kreutzmann once again as the sole percussionist. Hart rejoined the Grateful Dead for good in October 1974. Tom "TC" Constanten was added as a second keyboardist from 1968 to 1970, while Pigpen also played various percussion instruments and sang. After Constanten's departure, Pigpen reclaimed his position as sole keyboardist. Less than two years later, in late 1971, Pigpen was joined by another keyboardist, Keith Godchaux, who played grand piano alongside Pigpen's Hammond B-3 organ. In early 1972, Keith's wife, Donna Jean Godchaux, joined the Grateful Dead as a backing vocalist. Following the Grateful Dead's "Europe '72" tour, Pigpen's health had deteriorated to the point that he could no longer tour with the band. 
His final concert appearance was June 17, 1972, at the Hollywood Bowl in Los Angeles; he died on March 8, 1973, of complications from liver damage. The death of Pigpen did not slow the band down, and they continued with their new members. They soon formed their own record label, Grateful Dead Records. Later that year, they released their next studio album, the jazz-influenced "Wake of the Flood", which became their biggest commercial success thus far. Capitalizing on "Flood"'s success, the band soon went back to the studio and, the next year, 1974, released another album, "From the Mars Hotel". Not long after that album's release, however, the Dead decided to take a hiatus from live touring. Before embarking on the hiatus, the band performed a series of five concerts at the Winterland Ballroom in San Francisco in October 1974. The concerts were filmed, and Garcia compiled the footage into "The Grateful Dead Movie", a feature-length concert film that would be released in 1977. In September 1975, the Dead released their eighth studio album, "Blues for Allah". They resumed touring in June 1976. That same year, they signed with Arista Records. Their new contract soon produced "Terrapin Station" in 1977. The band's tour in the spring of that year is held in high regard by their fans, and their concert of May 8 at Cornell University in Ithaca, New York, is often considered to be one of the best performances of their career. Keith and Donna Jean Godchaux left the band in February 1979. Following the departure of the Godchauxs, Brent Mydland joined as keyboardist and vocalist and was considered "the perfect fit". The Godchauxs then formed the Heart of Gold Band before Keith died in a car accident in 1980. Mydland was the keyboardist for the Grateful Dead for 11 years until his death by narcotics overdose in July 1990, becoming the third keyboardist to die. Shortly after Mydland found his place in the early 1980s, Garcia's health began to decline. 
His drug habits caused him to lose his liveliness on stage. After gradually beginning to curtail his opiate usage in 1985, Garcia slipped into a diabetic coma for several days in July 1986. After he recovered, the band released "In the Dark" in July 1987, which became their best-selling studio album and produced their only top-10 single, "Touch of Grey". Also that year, the group toured with Bob Dylan, as heard on the album "Dylan & the Dead". Mydland died after the summer tour in 1990, and Vince Welnick, former keyboardist for the Tubes, joined as a band member, while Bruce Hornsby, who had a successful career with his band the Range, joined as a touring member. Both performed on keyboards and vocals—Welnick until the band's end, and Hornsby mainly from 1990 to 1992. Jerry Garcia died on August 9, 1995, and a few months later the remaining members of the Grateful Dead decided to disband. Since that time, there have been a number of reunions by the surviving members involving various combinations of musicians. Additionally, the former members have also begun or continued their individual projects. In 1998, Bob Weir, Phil Lesh, and Mickey Hart, along with several other musicians, formed a band called the Other Ones, and performed a number of concerts that year, releasing a live album, "The Strange Remain", the following year. In 2000, the Other Ones toured again, this time with Kreutzmann but without Lesh. After taking another year off, the band toured again in 2002 with Lesh. That year, the Other Ones included all four living former Grateful Dead members who had been in the band for most or all of its history. At different times the shifting lineup of the Other Ones also included guitarists Mark Karan, Steve Kimock, and Jimmy Herring, keyboardists Bruce Hornsby, Jeff Chimenti, and Rob Barraco, saxophonist Dave Ellis, drummer John Molo, bassist Alphonso Johnson, and vocalist Susan Tedeschi. 
In 2003, the Other Ones, still including Weir, Lesh, Hart, and Kreutzmann, changed their name to the Dead. The Dead toured the United States in 2003, 2004 and 2009. The band's lineups included Jimmy Herring and Warren Haynes on guitar, Jeff Chimenti and Rob Barraco on keyboards, and Joan Osborne on vocals. In 2008, members of the Dead played two concerts, called "Deadheads for Obama" and "Change Rocks". Following the 2009 Dead tour, Lesh and Weir formed the band Furthur, which debuted in September 2009. Joining Lesh and Weir in Furthur were John Kadlecik (guitar), Jeff Chimenti (keyboards), Joe Russo (drums), Jay Lane (drums), Sunshine Becker (vocals), and Zoe Ellis (vocals). Lane and Ellis left the band in 2010, and vocalist Jeff Pehrson joined later that year. Furthur disbanded in 2014. In 2010, Hart and Kreutzmann re-formed the Rhythm Devils, and played a summer concert tour. Since 1995, the former members of the Grateful Dead have also pursued solo music careers. Both Bob Weir & RatDog and Phil Lesh and Friends have performed many concerts and released several albums. Mickey Hart and Bill Kreutzmann have also each released a few albums. Hart has toured with his world music percussion ensemble Planet Drum as well as the Mickey Hart Band. Kreutzmann has led several different bands, including BK3, 7 Walkers (with Papa Mali), and Billy & the Kids. Donna Godchaux has returned to the music scene, with the Donna Jean Godchaux Band, and Tom Constanten also continues to write and perform music. All of these groups continue to play Grateful Dead music. In October 2014, it was announced that Martin Scorsese would produce a documentary film about the Grateful Dead, to be directed by Amir Bar-Lev. David Lemieux supervised the musical selection, and Weir, Hart, Kreutzmann and Lesh agreed to new interviews for the film. Bar-Lev's four-hour documentary, titled "Long Strange Trip", was released in 2017. 
In 2015, Weir, Lesh, Kreutzmann, and Hart reunited for five concerts called "Fare Thee Well: Celebrating 50 Years of the Grateful Dead". The shows were performed on June 27 and 28 at Levi's Stadium in Santa Clara, California, and on July 3, 4 and 5 at Soldier Field in Chicago. The band stated that this would be the final time that Weir, Lesh, Hart, and Kreutzmann would perform together. They were joined by Trey Anastasio of Phish on guitar, Jeff Chimenti on keyboards, and Bruce Hornsby on piano. Demand for tickets was very high. The concerts were simulcast via various media. The Chicago shows have been released as a box set of CDs and DVDs. In the fall of 2015, Mickey Hart, Bill Kreutzmann and Bob Weir joined with guitarist John Mayer, keyboardist Jeff Chimenti, and bassist Oteil Burbridge to tour in a band called Dead & Company. Mayer recounts that in 2011 he was listening to Pandora and happened upon the Grateful Dead song "Althea", and that soon Grateful Dead music was all he would listen to. Dead & Company played tours in October–December 2015, June–July 2016, and May–July 2017; a summer tour for 2019 was announced in November 2018 and began in May 2019. Barlow died in 2018 and Hunter in 2019. The Grateful Dead formed during the era when bands such as the Beatles, the Beach Boys and the Rolling Stones were dominating the airwaves. "The Beatles were why we turned from a jug band into a rock 'n' roll band", said Bob Weir. "What we saw them doing was impossibly attractive. I couldn't think of anything else more worth doing." Former folk-scene star Bob Dylan had recently put out a couple of records featuring electric instrumentation. Grateful Dead members have said that it was after attending a concert by the touring New York City band the Lovin' Spoonful that they decided to "go electric" and look for a dirtier sound. 
Jerry Garcia and Bob Weir (each of whom had been immersed in the American folk music revival of the late 1950s and early 1960s) were open to electric guitars. The Grateful Dead's early music (in the mid-1960s) was part of the process of establishing what "psychedelic music" was, but theirs was essentially a "street party" form of it. They developed their "psychedelic" playing as a result of meeting Ken Kesey in Palo Alto, California, and subsequently becoming the house band for the Acid Tests he staged. They did not fit their music to an established category such as pop rock, blues, folk rock, or country & western. Individual tunes within their repertoire could be identified under one of these stylistic labels, but overall their music drew on all of these genres and, more frequently, melded several of them. Bill Graham said of the Grateful Dead, "They're not the best at what they do, they're the only ones that do what they do." Often (both in performance and on recording) the Dead left room for exploratory, spacey soundscapes. Their live shows, fed by an improvisational approach to music, were different from those of most touring bands. While rock and roll bands often rehearse a standard set, played with minor variations, the Grateful Dead did not prepare in this way. Garcia stated in a 1966 interview, "We don't make up our sets beforehand. We'd rather work off the tops of our heads than off a piece of paper." They maintained this approach throughout their career. For each performance, the band drew material from an active list of a hundred or so songs. The 1969 live album "Live/Dead" did capture the band in form, but commercial success did not come until "Workingman's Dead" and "American Beauty", both released in 1970. These records largely featured the band's laid-back acoustic musicianship and more traditional song structures. With their rootsy, eclectic stylings, particularly evident on the latter two albums, the band pioneered the hybrid Americana genre. 
As the band and its sound matured over thirty years of touring, playing, and recording, each member's stylistic contribution became more defined, consistent, and identifiable. Lesh, who was originally a classically trained trumpet player with an extensive background in music theory, did not tend to play traditional blues-based bass forms, but more melodic, symphonic and complex lines, often sounding like a second lead guitar. Weir, too, was not a traditional rhythm guitarist, but tended to play jazz-influenced, unique inversions at the upper end of the Dead's sound. The two drummers, Mickey Hart and Kreutzmann, developed a unique, complex interplay, balancing Kreutzmann's steady beat with Hart's interest in percussion styles outside the rock tradition. Hart incorporated an 11-count measure to his drumming, bringing a dimension to the band's sound that became an important part of its style. Garcia's lead lines were fluid, supple and spare, owing a great deal of their character to his training in fingerpicking and banjo. The band's primary lyricists, Robert Hunter and John Perry Barlow, commonly used themes involving love and loss, life and death, gambling and murder, beauty and horror, chaos and order, God and other religious themes, travelling and touring. In a retrospective, "The New Yorker" described Hunter's verses as "elliptical, by turns vivid and gnomic", which were often "hippie poetry about roses and bells and dew", and critic Robert Christgau described them as "American myths" that later gave way to "the old karma-go-round". Hal Kant was an entertainment industry attorney who specialized in representing musical groups. He spent 35 years as principal lawyer and general counsel for the Grateful Dead, a position in the group that was so strong that his business cards with the band identified his role as "Czar". Kant brought the band millions of dollars in revenue through his management of the band's intellectual property and merchandising rights. 
At Kant's recommendation, the group was one of the few rock 'n' roll pioneers to retain ownership of their music masters and publishing rights. In 2006, the Grateful Dead signed a ten-year licensing agreement with Rhino Entertainment to manage the band's business interests, including the release of musical recordings, merchandising, and marketing. The band retained creative control and kept ownership of its music catalog. A Grateful Dead video game titled "Grateful Dead Game – The Epic Tour" was released in April 2012 and was created by Curious Sense. After Lithuania gained its independence from the USSR, the country announced its withdrawal from the 1992 Olympics because it lacked the money to sponsor participants. But NBA star Šarūnas Marčiulionis, a native Lithuanian, wanted to help his country's team compete. His efforts resulted in a call from representatives of the Grateful Dead, who set up a meeting with the band members. The band agreed to fund transportation costs for the team (about five thousand dollars), along with Grateful Dead-designed basketball jerseys and shorts for the team to wear in the competition. The Lithuanian basketball team won the bronze medal in the 1992 Olympics, and the Lithuanian basketball/Grateful Dead T-shirts became part of pop culture, especially in Lithuania. The incident was covered by the documentary "The Other Dream Team". The Grateful Dead toured constantly throughout their career, playing more than 2,300 concerts. They promoted a sense of community among their fans, who became known as "Deadheads", many of whom followed their tours for months or years on end. Around concert venues, an impromptu communal marketplace known as 'Shakedown Street' was created by Deadheads to serve as a center of activity where fans could buy and sell anything from grilled cheese sandwiches to home-made T-shirts and recordings of Grateful Dead concerts. 
In their early career, the band also dedicated their time and talents to their community, the Haight-Ashbury area of San Francisco, making available free food, lodging, music, and health care to all. It has been said that the band performed "more free concerts than any band in the history of music". With the exception of 1975, when the band was on hiatus and played only four concerts together, the Grateful Dead performed many concerts every year, from their formation in April 1965 until July 9, 1995. Initially all their shows were in California, principally in the San Francisco Bay Area and in or near Los Angeles. They also performed, in 1965 and 1966, with Ken Kesey and the Merry Pranksters, as the house band for the Acid Tests. They toured nationally starting in June 1967 (their first foray to New York), with a few detours to Canada, Europe and three nights at the Great Pyramid of Giza in Egypt in 1978. They appeared at the Monterey Pop Festival in 1967, the Woodstock Festival in 1969 and the Festival Express train tour across Canada in 1970. They were scheduled to appear as the final act at the infamous Altamont Free Concert on December 6, 1969, after the Rolling Stones, but withdrew because of security concerns. "That's the way things went at Altamont—so badly that the Grateful Dead, prime organizers and movers of the festival, didn't even get to play", staff at "Rolling Stone" magazine wrote in a detailed narrative on the event. Their first UK performance was at the Hollywood Music Festival in 1970. Their largest concert audience came in 1973 when they played, along with the Allman Brothers Band and the Band, before an estimated 600,000 people at the Summer Jam at Watkins Glen. The 1998 edition of the Guinness Book of World Records recognized them with a listing under the heading "most rock concerts performed" (2,318 concerts). They played to an estimated total of 25 million people, more than any other band, with audiences of up to 80,000 attending a single show. 
Many of these concerts were preserved in the band's tape vault, and several dozen have since been released on CD and as downloads. The Dead were known for the tremendous variation in their setlists from night to night—the list of songs documented to have been played by the band exceeds 500. The band has released four concert videos under the name "View from the Vault". In the 1990s, the Grateful Dead earned a total of $285 million in revenue from their concert tours, the second-highest during the 1990s, with the Rolling Stones earning the most. This figure is representative of tour revenue through 1995, as touring stopped after the death of Jerry Garcia. In a 1991 PBS documentary, segment host Buck Henry attended an August 1991 concert at Shoreline Amphitheatre and gleaned some information from some band members about the Grateful Dead phenomenon and its success. At the time, Jerry Garcia stated, "We didn't really invent the Grateful Dead, the "crowd" invented the Grateful Dead, you know what I mean? We were sort of standing in line, and uh, it's gone way past our expectations, "way" past, so it's, we've been going along with it to see what it's gonna do next." Furthermore, Mickey Hart stated, "This is one of the last places in America that you can really have this kind of fun, you know, considering the political climate and so forth." Hart also stated that "the transformative power of the Grateful Dead is really the essence of it; it's what it can do to your consciousness. We're more into "transportation" than we are into music, "per se", I mean, the business of the Grateful Dead is transportation." Their numerous studio albums were generally collections of new songs that they had first played in concert. The band was also famous for its extended musical improvisations, having been described as having never played the same song the same way twice. Their concert sets often blended songs, one into the next (a segue). 
The Wall of Sound was a large sound system designed specifically for the band. The band was never satisfied with the house system anywhere they played. After the Monterey Pop Festival, the band's crew 'borrowed' some of the other performers' sound equipment and used it to host some free shows in San Francisco. In their early days, soundman Owsley "Bear" Stanley designed a public address (PA) and monitor system for them. Stanley was the Grateful Dead's soundman for many years; he was also one of the largest suppliers of LSD. Stanley's sound systems were delicate and finicky, and frequently brought shows to a halt with technical breakdowns. After Stanley went to jail for manufacturing LSD in 1970, the group briefly used house PAs, but found them to be even less reliable than those built by their former soundman. On February 2, 1970, the group contacted Bob Heil to use his system. In 1971, the band purchased their first solid-state sound system from Alembic Inc Studios. Because of this, Alembic would play an integral role in the research, development, and production of the Wall of Sound. The band also welcomed Dan Healy into the fold on a permanent basis that year. Healy would mix the Grateful Dead's live sound until 1993. Like several other bands during this time, the Grateful Dead allowed their fans to record their shows. For many years the tapers set up their microphones wherever they could, and the eventual forest of microphones became a problem for the sound crew. Eventually, this was solved by having a dedicated taping section located behind the soundboard, which required a special "tapers" ticket. The band allowed sharing of tapes of their shows, as long as no profits were made on the sale of their show tapes. Sometimes the sound crew would allow the tapers to connect directly to the soundboard, which created exceptional recordings. Recently, there have been some disputes over which recordings archive.org could host on their site. 
Although all the recordings are hosted at present, the soundboard recordings can only be streamed and not downloaded. Of the approximately 2,350 shows the Grateful Dead played, almost 2,200 were taped, and most of these are available online. The band began collecting and cataloging tapes early on, and Dick Latvala was their keeper. "Dick's Picks" is named after Latvala. After his death in 1999, David Lemieux gradually took over the post. Concert set lists from a subset of 1,590 Grateful Dead shows were used to perform a comparative analysis between how songs were played in concert and how they are listened to online by Last.fm members. In their book "Marketing Lessons from the Grateful Dead: What Every Business Can Learn From the Most Iconic Band in History", David Meerman Scott and Brian Halligan identify the taper section as a crucial contributor to increasing the Grateful Dead's fan base. Over the years, a number of iconic images have come to be associated with the Grateful Dead. Many of these images originated as artwork for concert posters or album covers. Fans and enthusiasts of the band are commonly referred to as Deadheads. While the origin of the term may be unclear, "Dead Heads" were made canon by a notice placed inside the "Skull and Roses" (1971) album by manager Jon McIntire. Many of the Dead Heads would go on tour with the band. As a group, the Dead Heads were considered very mellow. "I'd rather work nine Grateful Dead concerts than one Oregon football game," Police Det. Rick Raynor said. "They don't get belligerent like they do at the games." On April 24, 2008, members Bob Weir and Mickey Hart, along with Nion McEvoy, CEO of Chronicle Books, UC Santa Cruz chancellor George Blumenthal, and UC Santa Cruz librarian Virginia Steel, held a press conference announcing UCSC's McHenry Library would be the permanent home of the Grateful Dead Archive, which includes a complete archival history from 1965 to the present. 
The archive includes correspondence, photographs, fliers, posters, and several other forms of memorabilia and records of the band. Also included are unreleased videos of interviews and TV appearances that will be installed for visitors to view, as well as stage backdrops and other props from the band's concerts. Blumenthal stated at the event, "The Grateful Dead Archive represents one of the most significant popular cultural collections of the 20th century; UC Santa Cruz is honored to receive this invaluable gift. The Grateful Dead and UC Santa Cruz are both highly innovative institutions—born the same year—that continue to make a major, positive impact on the world." Guitarist Bob Weir stated, "We looked around, and UC Santa Cruz seems the best possible home. If you ever wrote the Grateful Dead a letter, you'll probably find it there!" Professor of music Fredric Lieberman, who had collaborated with Mickey Hart on three books, "Planet Drum" (1990), "Drumming at the Edge of Magic" (1991), and "Spirit into Sound" (2006), was the key contact between the band and the university, and let the university know about the search for a home for the archive. The first large-scale exhibition of materials from the Grateful Dead Archive was mounted at the New-York Historical Society in 2010. In 2004, "Rolling Stone" ranked the Grateful Dead No. 57 on their list of the 100 Greatest Artists of All Time. On February 10, 2007, the Grateful Dead received a Grammy Lifetime Achievement Award. The award was accepted on behalf of the band by Mickey Hart and Bill Kreutzmann. In 2011, a recording of the Grateful Dead's May 8, 1977, concert at Cornell University's Barton Hall was selected for induction into the National Recording Registry of the Library of Congress. Twelve members of the Grateful Dead (the eleven official performing members plus Robert Hunter) were inducted into the Rock and Roll Hall of Fame in 1994, and Bruce Hornsby was their presenter. 
Lead guitarist Jerry Garcia was often viewed both by the public and the media as the leader or primary spokesperson for the Grateful Dead, but was reluctant to be perceived that way, especially since he and the other group members saw themselves as equal participants and contributors to their collective musical and creative output. Garcia, a native of San Francisco, grew up in the Excelsior District. One of his main influences was bluegrass music, and he also performed—on banjo, one of his other great instrumental loves, along with the pedal steel guitar—in bluegrass bands, notably Old & In the Way with mandolinist David Grisman. Bruce Hornsby never officially joined the band full-time because of his other commitments, but he did play keyboards at most Dead shows between September 1990 and March 1992, and sat in with the band over 100 times in all between 1988 and 1995. He added several Dead songs to his own live shows and Jerry Garcia referred to him as a "floating member" who could come and go as he pleased. Robert Hunter and John Perry Barlow were the band's primary lyricists, starting in 1967 and 1971, respectively, and continuing until the band's dissolution. Hunter collaborated mostly with Garcia and Barlow mostly with Weir, though each wrote with other band members as well. Both are listed as official members at Dead.net, the band's website, alongside the performing members. Barlow was the only member not inducted into the Rock and Roll Hall of Fame.
https://en.wikipedia.org/wiki?curid=12542
Groupoid In mathematics, especially in category theory and homotopy theory, a groupoid (less often Brandt groupoid or virtual group) generalises the notion of group in several equivalent ways. A groupoid can be seen as a: In the presence of dependent typing, a category in general can be viewed as a typed monoid, and similarly, a groupoid can be viewed as simply a typed group. The morphisms take one from one object to another, and form a dependent family of types, thus morphisms might be typed formula_1, formula_2, say. Composition is then a total function: formula_3, so that formula_4. Special cases include: Groupoids are often used to reason about geometrical objects such as manifolds. Heinrich Brandt introduced groupoids implicitly via Brandt semigroups. A groupoid is an algebraic structure formula_6 consisting of a non-empty set formula_5 and a binary partial function 'formula_8' defined on formula_5. A groupoid is a set formula_5 with a unary operation formula_11 and a partial function formula_12. Here * is not a binary operation because it is not necessarily defined for all pairs of elements of formula_5. The precise conditions under which formula_14 is defined are not articulated here and vary by situation. formula_8 and −1 have the following axiomatic properties: For all formula_16, formula_17, and formula_18 in formula_5, Two easy and convenient properties follow from these axioms: Proof of first property: from 2. and 3. we obtain "a"−1 = "a"−1 * "a" * "a"−1 and ("a"−1)−1 = ("a"−1)−1 * "a"−1 * ("a"−1)−1. Substituting the first into the second and applying 3. two more times yields ("a"−1)−1 = ("a"−1)−1 * "a"−1 * "a" * "a"−1 * ("a"−1)−1 = ("a"−1)−1 * "a"−1 * "a" = "a". ✓ Proof of second property: since "a" * "b" is defined, so is ("a" * "b")−1 * "a" * "b". Therefore ("a" * "b")−1 * "a" * "b" * "b"−1 = ("a" * "b")−1 * "a" is also defined. Moreover since "a" * "b" is defined, so is "a" * "b" * "b"−1 = "a". Therefore "a" * "b" * "b"−1 * "a"−1 is also defined. From 3. 
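The algebraic definition can be made concrete in code. The sketch below (my own illustration, not from the text) implements the simplest nontrivial example, the pair groupoid on a set X: elements are ordered pairs, * is a partial operation defined only when the pairs are composable, and −1 swaps the coordinates, so the identities above and the two derived properties can be checked by brute force.

```python
# Pair groupoid on X: elements are ordered pairs drawn from X x X.
# (The set X and all names below are illustrative choices, not from the text.)

def inv(a):
    """Unary inverse: swap the coordinates of a pair."""
    x, y = a
    return (y, x)

def star(a, b):
    """Partial binary operation: (x, y) * (y, z) = (x, z), else undefined."""
    if a[1] != b[0]:
        return None            # not composable
    return (a[0], b[1])

X = [0, 1, 2]
G = [(x, y) for x in X for y in X]

for a in G:
    # Axiom-style identities: a * a^{-1} * a = a and a^{-1} * a * a^{-1} = a^{-1}.
    assert star(star(a, inv(a)), a) == a
    assert star(star(inv(a), a), inv(a)) == inv(a)
    # First derived property: (a^{-1})^{-1} = a.
    assert inv(inv(a)) == a

for a in G:
    for b in G:
        ab = star(a, b)
        if ab is not None:
            # Second derived property: (a * b)^{-1} = b^{-1} * a^{-1}.
            assert inv(ab) == star(inv(b), inv(a))
```

Note that `star((0, 1), (0, 2))` is undefined (returns `None`), which is exactly what makes * a partial rather than total operation.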
we obtain ("a" * "b")−1 = ("a" * "b")−1 * "a" * "a"−1 = ("a" * "b")−1 * "a" * "b" * "b"−1 * "a"−1 = "b"−1 * "a"−1. ✓ A groupoid is a small category in which every morphism is an isomorphism, i.e. invertible. More precisely, a groupoid "G" is: satisfying, for any "f" : "x" → "y", "g" : "y" → "z", and "h" : "z" → "w": If "f" is an element of "G"("x","y") then "x" is called the source of "f", written "s"("f"), and "y" is called the target of "f", written "t"("f"). More generally, one can consider a groupoid object in an arbitrary category admitting finite fiber products. The algebraic and category-theoretic definitions are equivalent, as we now show. Given a groupoid in the category-theoretic sense, let "G" be the disjoint union of all of the sets "G"("x","y") (i.e. the sets of morphisms from "x" to "y"). Then formula_46 and formula_47 become partial operations on "G", and formula_47 will in fact be defined everywhere. We define ∗ to be formula_46 and −1 to be formula_47, which gives a groupoid in the algebraic sense. Explicit reference to "G"0 (and hence to formula_51) can be dropped. Conversely, given a groupoid "G" in the algebraic sense, define an equivalence relation formula_52 on its elements by formula_53 iff "a" ∗ "a"−1 = "b" ∗ "b"−1. Let "G"0 be the set of equivalence classes of formula_52, i.e. formula_55. Denote "a" ∗ "a"−1 by formula_56 if formula_57 with formula_58. Now define formula_59 as the set of all elements "f" such that formula_60 exists. Given formula_61 and formula_62 their composite is defined as formula_63. To see that this is well defined, observe that since formula_64 and formula_65 exist, so does formula_66. The identity morphism on "x" is then formula_56, and the category-theoretic inverse of "f" is "f"−1. "Sets" in the definitions above may be replaced with classes, as is generally the case in category theory. 
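The passage from the algebraic definition back to the category-theoretic one can also be sketched in code. The following illustration (my own; it reuses the pair groupoid on {0, 1, 2} as the algebraic groupoid, and the source/target conventions are one possible choice) recovers the objects as the idempotents a * a−1 and reads off source and target as described above.

```python
# Reuse the pair groupoid on {0, 1, 2} as an algebraic groupoid and
# recover the category-theoretic data from * and inverse alone.
# (Illustrative sketch; names and conventions are my assumptions.)

def inv(a):
    return (a[1], a[0])

def star(a, b):
    return (a[0], b[1]) if a[1] == b[0] else None

G = [(x, y) for x in (0, 1, 2) for y in (0, 1, 2)]

# Objects correspond to the idempotents a * a^{-1}: here (0,0), (1,1), (2,2).
objects = {star(a, inv(a)) for a in G}

def source(a):
    return star(a, inv(a))        # s(a) = a * a^{-1}

def target(a):
    return star(inv(a), a)        # t(a) = a^{-1} * a

# Composition a followed by b is defined exactly when t(a) == s(b).
a, b = (0, 1), (1, 2)
assert target(a) == source(b)
assert star(a, b) == (0, 2)
```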
Given a groupoid "G", the vertex groups or isotropy groups or object groups in "G" are the subsets of the form "G"("x","x"), where "x" is any object of "G". It follows easily from the axioms above that these are indeed groups, as every pair of elements is composable and inverses are in the same vertex group. A subgroupoid is a subcategory that is itself a groupoid. A groupoid morphism is simply a functor between two (category-theoretic) groupoids. The category whose objects are groupoids and whose morphisms are groupoid morphisms is called the groupoid category, or the category of groupoids, denoted Grpd. It is useful that this category is, like the category of small categories, Cartesian closed. That is, we can construct for any groupoids formula_68 a groupoid formula_69 whose objects are the morphisms formula_70 and whose arrows are the natural equivalences of morphisms. Thus if formula_71 are just groups, then such arrows are the conjugacies of morphisms. The main result is that for any groupoids formula_72 there is a natural bijection formula_73 This result is of interest even if all the groupoids formula_72 are just groups. Particular kinds of morphisms of groupoids are of interest. A morphism formula_75 of groupoids is called a fibration if for each object formula_76 of formula_77 and each morphism formula_17 of formula_79 starting at formula_80 there is a morphism formula_81 of formula_77 starting at formula_76 such that formula_84. A fibration is called a covering morphism or covering of groupoids if further such an formula_81 is unique. The covering morphisms of groupoids are especially useful because they can be used to model covering maps of spaces. It is also true that the category of covering morphisms of a given groupoid formula_79 is equivalent to the category of actions of the groupoid formula_79 on sets. Given a topological space formula_88, let formula_89 be the set formula_88. 
The morphisms from the point formula_91 to the point formula_92 are equivalence classes of continuous paths from formula_91 to formula_92, with two paths being equivalent if they are homotopic. Two such morphisms are composed by first following the first path, then the second; the homotopy equivalence guarantees that this composition is associative. This groupoid is called the fundamental groupoid of formula_88, denoted formula_96 (or sometimes, formula_97). The usual fundamental group formula_98 is then the vertex group for the point formula_76. An important extension of this idea is to consider the fundamental groupoid formula_100 where formula_101 is a chosen set of "base points". Here, one considers only paths whose endpoints belong to formula_102. formula_100 is a sub-groupoid of formula_96. The set formula_102 may be chosen according to the geometry of the situation at hand. If formula_88 is a set with an equivalence relation denoted by infix formula_52, then a groupoid "representing" this equivalence relation can be formed as follows: If the group formula_5 acts on the set formula_88, then we can form the action groupoid (or transformation groupoid) representing this group action as follows: More explicitly, the "action groupoid" is a small category with formula_127 and formula_128 with source and target maps formula_129 and formula_130. It is often denoted formula_131 (or formula_132). Multiplication (or composition) in the groupoid is then formula_133 which is defined provided formula_134. For formula_76 in formula_88, the vertex group consists of those formula_137 with formula_138, which is just the isotropy subgroup at formula_76 for the given action (which is why vertex groups are also called isotropy groups). Another way to describe formula_5-sets is the functor category formula_141, where formula_142 is the groupoid (category) with one element and isomorphic to the group formula_5. 
Indeed, every functor formula_144 of this category defines a set formula_145 and for every formula_123 in formula_5 (i.e. for every morphism in formula_142) induces a bijection formula_149 : formula_150. The categorical structure of the functor formula_144 assures us that formula_144 defines a formula_5-action on the set formula_5. The (unique) representable functor formula_144 : formula_142→formula_157 is the Cayley representation of formula_5. In fact, this functor is isomorphic to formula_159 and so sends formula_160 to the set formula_161 which is by definition the "set" formula_5 and the morphism formula_123 of formula_142 (i.e. the element formula_123 of formula_5) to the permutation formula_149 of the set formula_5. We deduce from the Yoneda embedding that the group formula_5 is isomorphic to the group formula_170, a subgroup of the group of permutations of formula_5. Consider the finite set formula_172; we can form the group action formula_173 acting on formula_88 by taking each number to its negative, so formula_175 and formula_176. The quotient groupoid formula_177 is the set of equivalence classes from this group action formula_178, and formula_179 has a group action of formula_173 on it. On formula_181, any finite group formula_182 which maps to formula_183 gives a group action on formula_181 (since this is the group of automorphisms). Then a quotient groupoid formula_185 can be formed, which has one point with stabilizer formula_182 at the origin. Given a diagram of groupoids with groupoid morphisms where formula_188 and formula_189, we can form the groupoid formula_190 whose objects are triples formula_191, where formula_192, formula_193, and formula_194 in formula_195. Morphisms can be defined as a pair of morphisms formula_196 where formula_197 and formula_198 such that for triples formula_199, there is a commutative diagram in formula_195 of formula_201, formula_202 and the formula_203. 
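The finite example above can be written out explicitly. The following sketch is my own, under the assumption (indicated by the surrounding text) that the elided formulas refer to Z/2 = {1, −1} acting on X = {−2, −1, 0, 1, 2} by negation: arrows of the action groupoid are pairs (g, x), the orbits give the quotient, and the isotropy subgroup is computed per point.

```python
# Action groupoid for Z/2 = {1, -1} acting on X = {-2, -1, 0, 1, 2}
# by negation.  (Illustrative sketch; the finite sets are assumptions
# matching the text's description, not quoted formulas.)

group = [1, -1]                      # Z/2, written multiplicatively
X = [-2, -1, 0, 1, 2]

# An arrow is a pair (g, x) with source x and target g*x.
arrows = [(g, x) for g in group for x in X]

def source(arrow):
    g, x = arrow
    return x

def target(arrow):
    g, x = arrow
    return g * x

# Orbits of the action: here {-2, 2}, {-1, 1}, and {0}.
orbits = {frozenset({x, -x}) for x in X}
assert len(orbits) == 3

def isotropy(x):
    """Vertex (isotropy) group at x: all g with g*x == x."""
    return [g for g in group if g * x == x]

assert isotropy(0) == [1, -1]        # the origin is fixed by all of Z/2
assert isotropy(2) == [1]            # trivial stabilizer elsewhere
```

This matches the statement in the text that the quotient has one point whose stabilizer is the whole group at the origin.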
A two term complex of objects in a concrete Abelian category can be used to form a groupoid. It has as objects the set formula_205 and arrows formula_206 where the source morphism is just the projection onto formula_205 while the target morphism is the addition of projection onto formula_208 composed with formula_209 and projection onto formula_205. That is, given formula_211 we have Of course, if the abelian category is the category of coherent sheaves on a scheme, then this construction can be used to form a presheaf of groupoids. While puzzles such as the Rubik's Cube can be modeled using group theory (see Rubik's Cube group), certain puzzles are better modeled as groupoids. The transformations of the fifteen puzzle form a groupoid (not a group, as not all moves can be composed). This groupoid acts on configurations. The Mathieu groupoid is a groupoid introduced by John Horton Conway acting on 13 points such that the elements fixing a point form a copy of the Mathieu group M12. If a groupoid has only one object, then the set of its morphisms forms a group. Using the algebraic definition, such a groupoid is literally just a group. Many concepts of group theory generalize to groupoids, with the notion of functor replacing that of group homomorphism. If formula_76 is an object of the groupoid formula_5, then the set of all morphisms from formula_76 to formula_76 forms a group formula_217 (called the vertex group, defined above). If there is a morphism formula_218 from formula_76 to formula_110, then the groups formula_217 and formula_222 are isomorphic, with an isomorphism given by the mapping formula_223. Every connected groupoid - that is, one in which any two objects are connected by at least one morphism - is isomorphic to an action groupoid (as defined above) formula_224. By connectedness, there will only be one orbit under the action. 
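The conjugation isomorphism between vertex groups can be illustrated concretely. The sketch below is my own (both the example groupoid and the direction of conjugation are assumptions, since the cited formula is elided): it uses a two-object groupoid whose arrows carry a sign in Z/2, and checks that g ↦ f−1 * g * f carries the vertex group at one object onto the vertex group at the other.

```python
# A small groupoid with two objects {0, 1} and a Z/2 "sign" on each arrow:
# an element (s, (x, y)) is an arrow from x to y with sign s in {1, -1}.
# (Illustrative construction, not from the text.)

def star(a, b):
    s, (x, y) = a
    t, (y2, z) = b
    if y != y2:
        return None              # composition undefined
    return (s * t, (x, z))       # signs multiply, endpoints compose

def inv(a):
    s, (x, y) = a
    return (s, (y, x))           # each sign in Z/2 is its own inverse

vertex0 = [(1, (0, 0)), (-1, (0, 0))]   # vertex group G(0,0), a copy of Z/2
f = (-1, (0, 1))                         # some morphism from object 0 to object 1

# Conjugation g |-> f^{-1} * g * f lands in G(1,1) and is a bijection.
image = [star(star(inv(f), g), f) for g in vertex0]
assert image == [(1, (1, 1)), (-1, (1, 1))]
```

The check confirms that the two vertex groups are isomorphic copies of Z/2, with the isomorphism depending on the chosen connecting morphism f.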
If the groupoid is not connected, then it is isomorphic to a disjoint union of groupoids of the above type (possibly with different groups formula_5 and sets formula_88 for each connected component). Note that the isomorphism described above is not unique, and there is no natural choice. Choosing such an isomorphism for a connected groupoid essentially amounts to picking one object formula_227, a group isomorphism formula_228 from formula_229 to formula_5, and for each formula_76 other than formula_227, a morphism in formula_5 from formula_227 to formula_76. In category-theoretic terms, each connected component of a groupoid is equivalent (but not isomorphic) to a groupoid with a single object, that is, a single group. Thus any groupoid is equivalent to a multiset of unrelated groups. In other words, for equivalence instead of isomorphism, one need not specify the sets formula_88, only the groups formula_237. The collapse of a groupoid into a mere collection of groups loses some information, even from a category-theoretic point of view, because it is not natural. Thus when groupoids arise in terms of other structures, as in the above examples, it can be helpful to maintain the full groupoid. Otherwise, one must choose a way to view each formula_217 in terms of a single group, and this choice can be arbitrary. In our example from topology, one would have to make a coherent choice of paths (or equivalence classes of paths) from each point formula_91 to each point formula_92 in the same path-connected component. As a more illuminating example, the classification of groupoids with one endomorphism does not reduce to purely group theoretic considerations. This is analogous to the fact that the classification of vector spaces with one endomorphism is nontrivial. Morphisms of groupoids come in more kinds than those of groups: we have, for example, fibrations, covering morphisms, universal morphisms, and quotient morphisms. 
Thus a subgroup formula_248 of a group formula_5 yields an action of formula_5 on the set of cosets of formula_248 in formula_5 and hence a covering morphism formula_91 from, say, formula_254 to formula_5, where formula_254 is a groupoid with vertex groups isomorphic to formula_248. In this way, presentations of the group formula_5 can be "lifted" to presentations of the groupoid formula_254, and this is a useful way of obtaining information about presentations of the subgroup formula_248. For further information, see the books by Higgins and by Brown in the References. The inclusion formula_261 has both a left and a right adjoint: Here, formula_264 denotes the localization of a category that inverts every morphism, and formula_265 denotes the subcategory of all isomorphisms. The nerve functor formula_266 embeds Grpd as a full subcategory of the category of simplicial sets. The nerve of a groupoid is always a Kan complex. The nerve has a left adjoint. Here, formula_96 denotes the fundamental groupoid of the simplicial set X. When studying geometrical objects, the arising groupoids often carry some differentiable structure, turning them into Lie groupoids. These can be studied in terms of Lie algebroids, in analogy to the relation between Lie groups and Lie algebras.
https://en.wikipedia.org/wiki?curid=12543
General surgery General surgery is a surgical specialty that focuses on abdominal contents including the esophagus, stomach, small intestine, large intestine, liver, pancreas, gallbladder, appendix and bile ducts, and often the thyroid gland (depending on local referral patterns). General surgeons also deal with diseases involving the skin, breast, soft tissue, trauma, peripheral artery disease and hernias, and perform endoscopic procedures such as gastroscopy and colonoscopy. General surgeons may sub-specialize into one or more of the following disciplines: In many parts of the world, including North America, Australia and the United Kingdom, the overall responsibility for trauma care falls under the auspices of general surgery. Some general surgeons obtain advanced training in this field (most commonly surgical critical care) and specialty certification in surgical critical care. General surgeons must be able to deal initially with almost any surgical emergency. Often, they are the first port of call for critically ill or gravely injured patients, and must perform a variety of procedures to stabilize such patients, such as thoracostomy, cricothyroidotomy, compartment fasciotomies and emergency laparotomy or thoracotomy to stanch bleeding. They are also called upon to staff surgical intensive care units or trauma intensive care units. All general surgeons are trained in emergency surgery. Bleeding, infections, bowel obstructions and organ perforations are the main problems they deal with. Cholecystectomy, the surgical removal of the gallbladder, is one of the most common surgical procedures done worldwide. This is most often done electively, but the gallbladder can become acutely inflamed and require an emergency operation. Infections and rupture of the appendix and small bowel obstructions are other common emergencies. This is a relatively new specialty dealing with minimal access techniques using cameras and small instruments inserted through 3 to 15 mm incisions. 
Robotic surgery is now evolving from this concept (see below). Gallbladders, appendices, and colons can all be removed with this technique. Hernias can also be repaired laparoscopically. Bariatric surgery can be performed laparoscopically, and there are benefits to doing so, such as reduced wound complications in obese patients. General surgeons trained today are expected to be proficient in laparoscopic procedures. General surgeons treat a wide variety of major and minor colon and rectal diseases including inflammatory bowel diseases (such as ulcerative colitis or Crohn's disease), diverticulitis, colon and rectal cancer, gastrointestinal bleeding and hemorrhoids. General surgeons perform a majority of all non-cosmetic breast surgery from lumpectomy to mastectomy, especially pertaining to the evaluation, diagnosis and treatment of breast cancer. General surgeons can perform vascular surgery if they receive special training and certification in vascular surgery. Otherwise, these procedures are typically performed by vascular surgery specialists. However, general surgeons are capable of treating minor vascular disorders. General surgeons are trained to remove all or part of the thyroid and parathyroid glands in the neck and the adrenal glands just above each kidney in the abdomen. In many communities, they are the only surgeon trained to do this. In communities that have a number of subspecialists, other subspecialty surgeons may assume responsibility for these procedures. Transplant surgeons are responsible for all aspects of pre-operative, operative, and post-operative care of abdominal organ transplant patients. Transplanted organs include liver, kidney, pancreas, and more rarely small bowel. Surgical oncologist refers to a general surgical oncologist (a specialty of a general surgeon), but thoracic surgical oncologists, gynecologic oncologists and so forth can all be considered surgeons who specialize in treating cancer patients. 
The importance of training surgeons who sub-specialize in cancer surgery lies in evidence, supported by a number of clinical trials, that outcomes in surgical cancer care are positively associated with surgeon volume—i.e., the more cancer cases a surgeon treats, the more proficient he or she becomes, and his or her patients experience improved survival rates as a result. This is another controversial point, but it is generally accepted—even as common sense—that a surgeon who performs a given operation more often will achieve superior results when compared with a surgeon who rarely performs the same procedure. This is particularly true of complex cancer resections such as pancreaticoduodenectomy for pancreatic cancer, and gastrectomy with extended (D2) lymphadenectomy for gastric cancer. Surgical oncology is generally a 2-year fellowship following completion of a general surgery residency (5–7 years). Most cardiothoracic surgeons in the U.S. (D.O. or M.D.) first complete a general surgery residency (typically 5–7 years), followed by a cardiothoracic surgery fellowship (typically 2–3 years). Pediatric surgery is a subspecialty of general surgery. Pediatric surgeons operate on patients younger than 18 years of age. Pediatric surgery requires 5–7 years of residency and a 2–3 year fellowship. In the 2000s, minimally invasive surgery became more prevalent. Considerable enthusiasm has been built around robot-assisted surgery (also known as "robotic surgery"), despite a lack of data suggesting it has significant benefits that justify its cost. In Canada, Australia, New Zealand, and the United States, general surgery is a five- to seven-year residency that follows completion of medical school with an MD, MBBS, MBChB, or DO degree. In Australia and New Zealand, a residency leads to eligibility for Fellowship of the Royal Australasian College of Surgeons. 
In Canada, residency leads to eligibility for certification by and Fellowship of the Royal College of Physicians and Surgeons of Canada, while in the United States, completion of a residency in general surgery leads to eligibility for board certification by the American Board of Surgery or the American Osteopathic Board of Surgery, which is also required for a general surgeon to have operating privileges at most hospitals in the United States. In the United Kingdom, surgical trainees enter training after five years of medical school and two years of the Foundation Programme. During the two- to three-year core training programme, doctors sit the Membership of the Royal College of Surgeons (MRCS) examination. On award of the MRCS, surgeons may hold the title 'Mister' or 'Miss/Ms./Mrs' rather than doctor. This is a tradition dating back hundreds of years in the United Kingdom, from when only physicians attended medical school and surgeons did not, but were instead associated with barbers in the Barber Surgeon's Guild. The tradition is also present in many Commonwealth countries, including New Zealand and some states of Australia. Trainees then go on to Higher Surgical Training (HST), lasting a further five to six years, during which they may choose to subspecialise. Before the end of HST, the examination of Fellow of the Royal College of Surgeons (FRCS) must be taken in general surgery plus the subspeciality. Upon completion of training, the surgeon becomes a consultant surgeon, is eligible for entry on the GMC Specialist Register, and may work both in the NHS and the independent sector as a consultant general surgeon. The implementation of the European Working Time Directive limited UK surgical trainees to a 48-hour working week. The introduction of a sub-consultant grade may be necessary to accommodate those who have recently received a UK Certificate of Completion of Training.
https://en.wikipedia.org/wiki?curid=12545
Gorilla Gorillas are ground-dwelling, predominantly herbivorous apes that inhabit the forests of central Sub-Saharan Africa. The genus "Gorilla" is divided into two species, the eastern gorilla and the western gorilla (both critically endangered), and either four or five subspecies. They are the largest living primates. The DNA of gorillas is highly similar to that of humans, from 95 to 99% depending on what is included, and they are the next closest living relatives to humans after the chimpanzees and bonobos. Gorillas' natural habitats cover tropical or subtropical forest in Sub-Saharan Africa. Although their range covers a small percentage of Sub-Saharan Africa, gorillas cover a wide range of elevations. The mountain gorilla inhabits the Albertine Rift montane cloud forests of the Virunga Volcanoes, ranging in altitude from . Lowland gorillas live in dense forests and lowland swamps and marshes as low as sea level, with western lowland gorillas living in Central West African countries and eastern lowland gorillas living in the Democratic Republic of the Congo near its border with Rwanda. The word "gorilla" comes from the history of Hanno the Navigator (c. 500 BC), a Carthaginian explorer on an expedition on the west African coast to the area that later became Sierra Leone. Members of the expedition encountered "savage people, the greater part of whom were women, whose bodies were hairy, and whom our interpreters called Gorillae". It is unknown whether what the explorers encountered were what we now call gorillas, another species of ape or monkeys, or humans. Skins of gorillai women, brought back by Hanno, are reputed to have been kept at Carthage until Rome destroyed the city 350 years later at the end of the Punic Wars, 146 BC. The American physician and missionary Thomas Staughton Savage and naturalist Jeffries Wyman first described the western gorilla (they called it "Troglodytes gorilla") in 1847 from specimens obtained in Liberia. 
The name was derived from the "Gorillae" described by Hanno. The closest relatives of gorillas are the other two Homininae genera, chimpanzees and humans, all of them having diverged from a common ancestor about 7 million years ago. Human gene sequences differ only 1.6% on average from the sequences of corresponding gorilla genes, but there is further difference in how many copies each gene has. Until recently, gorillas were considered to be a single species, with three subspecies: the western lowland gorilla, the eastern lowland gorilla and the mountain gorilla. There is now agreement that there are two species, each with two subspecies. More recently, a third subspecies has been claimed to exist in one of the species. The separate species and subspecies developed from a single type of gorilla during the Ice Age, when their forest habitats shrank and became isolated from each other. Primatologists continue to explore the relationships between various gorilla populations. The species and subspecies listed here are the ones upon which most scientists agree. The proposed third subspecies of "Gorilla beringei", which has not yet received a trinomen, is the Bwindi population of the mountain gorilla, sometimes called the Bwindi gorilla. Variations that distinguish the gorilla classifications include body size and density, hair colour and length, culture, and facial width. Population genetics of the lowland gorillas suggest that the western and eastern lowland populations diverged ~261 thousand years ago. 
Gorillas move around by knuckle-walking, although they sometimes walk bipedally for short distances while carrying food or in defensive situations, and some mountain gorillas use other parts of their hand to aid locomotion (studies of 77 mountain gorillas published in 2018 showed that 61% used only knuckle walking, but the remainder used knuckle walking plus other parts of their hand—fist walking in ways that do not use the knuckles, using the backs of their hands, and using their palms). There is individual variation in the frequency and ease with which gorillas walk upright; certain gorillas at Philadelphia Zoo and London Zoo have been observed walking upright more frequently and for longer distances than normal, with one gorilla at the Philadelphia Zoo often doing so to avoid getting mud on his hands. Wild male gorillas weigh , while adult females weigh . Adult males are tall, with an arm span that stretches from . Female gorillas are shorter at , with smaller arm spans. Groves (1970) calculated the average weight of 47 wild adult male gorillas at 143 kg, while Smith and Jungers (1997) found the average weight of 19 wild adult male gorillas to be 170 kg. Adult male gorillas are known as silverbacks due to the characteristic silver hair on their backs reaching to the hips. The tallest gorilla recorded was a silverback with an arm span of , a chest of , and a weight of , shot in Alimbongo, northern Kivu in May 1938. The heaviest gorilla recorded was a silverback shot in Ambam, Cameroon, which weighed . Males in captivity can be overweight and reach weights up to . Gorilla facial structure is described as mandibular prognathism, that is, the mandible protrudes farther out than the maxilla. Adult males also have a prominent sagittal crest. The eastern gorilla is more darkly coloured than the western gorilla, with the mountain gorilla being the darkest of all. The mountain gorilla also has the thickest hair. 
The western lowland gorilla can be brown or grayish with a reddish forehead. In addition, gorillas that live in lowland forest are more slender and agile than the more bulky mountain gorillas. The eastern gorilla also has a longer face and broader chest than the western gorilla. Studies have shown gorilla blood is not reactive to anti-A and anti-B monoclonal antibodies, which would, in humans, indicate type O blood. Due to novel sequences, though, it is different enough to not conform with the human ABO blood group system, into which the other great apes fit. Like humans, gorillas have individual fingerprints. Their eye colour is dark brown, framed by a black ring around the iris. A gorilla's lifespan is normally between 35 and 40 years, although zoo gorillas may live for 50 years or more. Colo, a female western gorilla at the Columbus Zoo and Aquarium, was the oldest known gorilla at 60 years of age when she died on 17 January 2017. Gorillas have a patchy distribution. The range of the two species is separated by the Congo River and its tributaries. The western gorilla lives in west central Africa, while the eastern gorilla lives in east central Africa. Between the species, and even within the species, gorillas live in a variety of habitats and elevations. Gorilla habitat ranges from montane forest to swampland. Eastern gorillas inhabit montane and submontane forest between above sea level. Mountain gorillas live in montane forest at the higher end of the elevation range, while eastern lowland gorillas live in submontane forest at the lower end. In addition, eastern lowland gorillas live in montane bamboo forest, as well as lowland forest ranging from in elevation. Western gorillas live in both lowland swamp forest and montane forest, at elevations ranging from sea level to . Western lowland gorillas live in swamp and lowland forest ranging up to , and Cross River gorillas live in low-lying and submontane forest ranging from . 
Gorillas construct nests for daytime and night use. Nests tend to be simple aggregations of branches and leaves about in diameter and are constructed by individuals. Gorillas, unlike chimpanzees or orangutans, tend to sleep in nests on the ground. The young nest with their mothers, but construct nests after three years of age, initially close to those of their mothers. Gorilla nests are distributed arbitrarily and use of tree species for site and construction appears to be opportunistic. Nest-building by great apes is now considered to be not just animal architecture, but as an important instance of tool use. A gorilla's day is divided between rest periods and travel or feeding periods. Diets differ between and within species. Mountain gorillas mostly eat foliage, such as leaves, stems, pith, and shoots, while fruit makes up a very small part of their diets. Mountain gorilla food is widely distributed and neither individuals nor groups have to compete with one another. Their home ranges vary from 3 to 15 km2 (1.16 to 5.79 mi2), and their movements range around or less on an average day. Despite eating a few species in each habitat, mountain gorillas have flexible diets and can live in a variety of habitats. Eastern lowland gorillas have more diverse diets, which vary seasonally. Leaves and pith are commonly eaten, but fruits can make up as much as 25% of their diets. Since fruit is less available, lowland gorillas must travel farther each day, and their home ranges vary from 2.7–6.5 km2 (1.04 to 2.51 mi2), with day ranges . Eastern lowland gorillas will also eat insects, preferably ants. Western lowland gorillas depend on fruits more than the others and they are more dispersed across their range. They travel even farther than the other gorilla subspecies, at per day on average, and have larger home ranges of 7–14 km2 (2.70–5.41 mi2). Western lowland gorillas have less access to terrestrial herbs, although they can access aquatic herbs in some areas. 
Termites and ants are also eaten. Gorillas rarely drink water "because they consume succulent vegetation that is almost half water as well as morning dew", although both mountain and lowland gorillas have been observed drinking. Gorillas live in groups called troops. Troops tend to be made of one adult male or silverback, multiple adult females and their offspring. However, multiple-male troops also exist. A silverback is typically more than 12 years of age, and is named for the distinctive patch of silver hair on his back, which comes with maturity. Silverbacks also have large canine teeth that also come with maturity. Both males and females tend to emigrate from their natal groups. For mountain gorillas, females disperse from their natal troops more than males. Mountain gorillas and western lowland gorillas also commonly transfer to second new groups. Mature males also tend to leave their groups and establish their own troops by attracting emigrating females. However, male mountain gorillas sometimes stay in their natal troops and become subordinate to the silverback. If the silverback dies, these males may be able to become dominant or mate with the females. This behaviour has not been observed in eastern lowland gorillas. In a single male group, when the silverback dies, the females and their offspring disperse and find a new troop. Without a silverback to protect them, the infants will likely fall victim to infanticide. Joining a new group is likely to be a tactic against this. However, while gorilla troops usually disband after the silverback dies, female eastern lowlands gorillas and their offspring have been recorded staying together until a new silverback transfers into the group. This likely serves as protection from leopards. 
The silverback is the center of the troop's attention, making all the decisions, mediating conflicts, determining the movements of the group, leading the others to feeding sites, and taking responsibility for the safety and well-being of the troop. Younger males subordinate to the silverback, known as blackbacks, may serve as backup protection. Blackbacks are aged between 8 and 12 years and lack the silver back hair. The bond that a silverback has with his females forms the core of gorilla social life. Bonds between them are maintained by grooming and staying close together. Females form strong relationships with males to gain mating opportunities and protection from predators and infanticidal outside males. However, aggressive behaviours between males and females do occur, but rarely lead to serious injury. Relationships between females may vary. Maternally related females in a troop tend to be friendly towards each other and associate closely. Otherwise, females have few friendly encounters and commonly act aggressively towards each other. Females may fight for social access to males and a male may intervene. Male gorillas have weak social bonds, particularly in multiple-male groups with apparent dominance hierarchies and strong competition for mates. Males in all-male groups, though, tend to have friendly interactions and socialise through play, grooming, and staying together, and occasionally they even engage in homosexual interactions. Severe aggression is rare in stable groups, but when two mountain gorilla groups meet the two silverbacks can sometimes engage in a fight to the death, using their canines to cause deep, gaping injuries. One possible predator of gorillas is the leopard. Gorilla remains have been found in leopard scat, but this may be the result of scavenging. When the group is attacked by humans, leopards, or other gorillas, an individual silverback will protect the group, even at the cost of his own life. 
Females mature at 10–12 years (earlier in captivity), and males at 11–13 years. A female's first ovulatory cycle occurs when she is six years of age, and is followed by a two-year period of adolescent infertility. The estrous cycle lasts 30–33 days, with outward ovulation signs subtle compared to those of chimpanzees. The gestation period lasts 8.5 months. Female mountain gorillas first give birth at 10 years of age and have four-year interbirth intervals. Males can be fertile before reaching adulthood. Gorillas mate year round. Females will purse their lips and slowly approach a male while making eye contact. This serves to urge the male to mount her. If the male does not respond, then she will try to attract his attention by reaching towards him or slapping the ground. In multiple-male groups, solicitation indicates female preference, but females can be forced to mate with multiple males. Males incite copulation by approaching a female and displaying at her or touching her and giving a "train grunt". Recently, gorillas have been observed engaging in face-to-face sex, a trait once considered unique to humans and bonobos. Gorilla infants are vulnerable and dependent, thus mothers, their primary caregivers, are important to their survival. Male gorillas are not active in caring for the young, but they do play a role in socialising them to other youngsters. The silverback has a largely supportive relationship with the infants in his troop and shields them from aggression within the group. Infants remain in contact with their mothers for the first five months and mothers stay near the silverback for protection. Infants suckle at least once per hour and sleep with their mothers in the same nest. Infants begin to break contact with their mothers after five months, but only for a brief period each time. By 12 months old, infants move up to from their mothers. 
At around 18–21 months, the distance between mother and offspring increases and they regularly spend time away from each other. In addition, nursing decreases to once every two hours. Infants spend only half of their time with their mothers by 30 months. They enter their juvenile period at their third year, and this lasts until their sixth year. At this time, gorillas are weaned and they sleep in a separate nest from their mothers. After their offspring are weaned, females begin to ovulate and soon become pregnant again. The presence of play partners, including the silverback, minimizes conflicts in weaning between mother and offspring. Twenty-five distinct vocalisations are recognised, many of which are used primarily for group communication within dense vegetation. Sounds classified as grunts and barks are heard most frequently while traveling, and indicate the whereabouts of individual group members. They may also be used during social interactions when discipline is required. Screams and roars signal alarm or warning, and are produced most often by silverbacks. Deep, rumbling belches suggest contentment and are heard frequently during feeding and resting periods. They are the most common form of intragroup communication. For this reason, conflicts are most often resolved by displays and other threat behaviours that are intended to intimidate without becoming physical. The ritualized charge display is unique to gorillas. The entire sequence has nine steps: (1) progressively quickening hooting, (2) symbolic feeding, (3) rising bipedally, (4) throwing vegetation, (5) chest-beating with cupped hands, (6) one leg kick, (7) sideways running, two-legged to four-legged, (8) slapping and tearing vegetation, and (9) thumping the ground with palms to end display. Gorillas are considered highly intelligent. A few individuals in captivity, such as Koko, have been taught a subset of sign language. 
Like the other great apes, gorillas can laugh, grieve, have "rich emotional lives", develop strong family bonds, make and use tools, and think about the past and future. Some researchers believe gorillas have spiritual feelings or religious sentiments. They have been shown to have cultures in different areas revolving around different methods of food preparation, and will show individual colour preferences. The following observations were made by a team led by Thomas Breuer of the Wildlife Conservation Society in September 2005. Gorillas are now known to use tools in the wild. A female gorilla in the Nouabalé-Ndoki National Park in the Republic of Congo was recorded using a stick as if to gauge the depth of water whilst crossing a swamp. A second female was seen using a tree stump as a bridge and also as a support whilst fishing in the swamp. This means all of the great apes are now known to use tools. In September 2005, a two-and-a-half-year-old gorilla in the Republic of Congo was discovered using rocks to smash open palm nuts inside a game sanctuary. While this was the first such observation for a gorilla, over 40 years previously, chimpanzees had been seen using tools in the wild 'fishing' for termites. Great apes are endowed with semiprecision grips, and have been able to use both simple tools and even weapons, such as improvising a club from a convenient fallen branch. American physician and missionary Thomas Staughton Savage obtained the first specimens (the skull and other bones) during his time in Liberia. The first scientific description of gorillas dates back to an article by Savage and the naturalist Jeffries Wyman in 1847 in "Proceedings of the Boston Society of Natural History", where "Troglodytes gorilla" is described, now known as the western gorilla. Other species of gorilla were described in the next few years. 
The explorer Paul Du Chaillu was the first westerner to see a live gorilla during his travels through western equatorial Africa from 1856 to 1859. He brought dead specimens to the UK in 1861. The first systematic study was not conducted until the 1920s, when Carl Akeley of the American Museum of Natural History traveled to Africa to hunt for an animal to be shot and stuffed. On his first trip, he was accompanied by his friends Mary Bradley, a mystery writer, her husband, and their young daughter Alice, who would later write science fiction under the pseudonym James Tiptree Jr. After their trip, Mary Bradley wrote "On the Gorilla Trail". She later became an advocate for the conservation of gorillas, and wrote several more books (mainly for children). In the late 1920s and early 1930s, Robert Yerkes and his wife Ada helped further the study of gorillas when they sent Harold Bingham to Africa. Yerkes also wrote a book in 1929 about the great apes. After World War II, George Schaller was one of the first researchers to go into the field and study primates. In 1959, he conducted a systematic study of the mountain gorilla in the wild and published his work. Years later, at the behest of Louis Leakey and the "National Geographic", Dian Fossey conducted a much longer and more comprehensive study of the mountain gorilla. When she published her work, many misconceptions and myths about gorillas were finally disproved, including the myth that gorillas are violent. Western lowland gorillas ("G. g. gorilla") are believed to be one of the zoonotic origins of HIV/AIDS. The SIVgor Simian immunodeficiency virus that infects them is similar to a certain strain of HIV-1. The gorilla became the next-to-last great ape genus to have its genome sequenced. The first gorilla genome was generated with short read and Sanger sequencing using DNA from a female western lowland gorilla named Kamilah. This gave scientists further insight into the evolution and origin of humans. 
Despite the chimpanzees being the closest extant relatives of humans, 15% of the human genome was found to be more like that of the gorilla. In addition, 30% of the gorilla genome "is closer to human or chimpanzee than the latter are to each other; this is rarer around coding genes, indicating pervasive selection throughout great ape evolution, and has functional consequences in gene expression." Analysis of the gorilla genome has cast doubt on the idea that the rapid evolution of hearing genes gave rise to language in humans, as it also occurred in gorillas. Since coming to the attention of western society in the 1860s, gorillas have been a recurring element of many aspects of popular culture and media. For example, gorillas have featured prominently in monstrous fantasy films such as "King Kong". Additionally, pulp fiction stories such as "Tarzan" and "Conan the Barbarian" have featured gorillas as physical opponents of the titular protagonists. All species (and subspecies) of gorilla are listed as endangered or critically endangered on the IUCN Red List. Now, over 100,000 western lowland gorillas are thought to exist in the wild, with 4,000 in zoos, thanks to conservation; eastern lowland gorillas have a population of under 5,000 in the wild and 24 in zoos. Mountain gorillas are the most severely endangered, with an estimated population of about 880 left in the wild and none in zoos. Threats to gorilla survival include habitat destruction and poaching for the bushmeat trade. In 2004, a population of several hundred gorillas in the Odzala National Park, Republic of Congo was essentially wiped out by the Ebola virus. A 2006 study published in "Science" concluded more than 5,000 gorillas may have died in recent outbreaks of the Ebola virus in central Africa. The researchers indicated in conjunction with commercial hunting of these apes, the virus creates "a recipe for rapid ecological extinction". 
Conservation efforts include the Great Apes Survival Project, a partnership between the United Nations Environment Programme and UNESCO, and an international treaty, the Agreement on the Conservation of Gorillas and Their Habitats, concluded under the UNEP-administered Convention on Migratory Species. The Gorilla Agreement is the first legally binding instrument exclusively targeting gorilla conservation; it came into effect on 1 June 2008.
https://en.wikipedia.org/wiki?curid=12546
Gallifrey Gallifrey () is a fictional planet in the long-running British science fiction television series "Doctor Who". It is the original home world of the Time Lords, the civilisation to which the main protagonist, the Doctor, belongs. It is located in a binary star system 250 million light years from Earth. It was first shown in "The War Games" (1969) during the Second Doctor's trial, though it was not identified by name until "The Time Warrior" (1973–74). In the revived series (2005 onwards), Gallifrey was originally referred to as having been destroyed in the Time War, which was fought between the Time Lords and the Daleks. It was depicted in a flashback in "The Sound of Drums" (2007) and appeared prominently in "The End of Time" (2009–10). At the conclusion of "The Day of the Doctor" (2013), Gallifrey is revealed to have actually survived the Time War, though it was frozen in time and transported into a bubble dimension, which was revealed to have been situated at the end of the universe at a chronological point before "Hell Bent" (2015). By "Spyfall" (2020), it had been reduced to ruin by the Master, who described the planet as hiding in its bubble universe again. At the end of "The Timeless Children," one of the Doctor's allies, Ko Sharmus, detonates the "Death Particle" on Gallifrey, wiping out all organic life on the planet, the Master and his army of CyberMasters presumably included. It is never definitively stated when the appearances of Gallifrey take place. As the planet is often reached by means of time travel, its relative present could conceivably exist almost anywhere in the Earth's past or future as well as anywhere in the conceivable universe. From space, Gallifrey is seen as a yellow-orange planet and was close enough to central space lanes for spacecraft to require clearance from Gallifreyan Space Traffic Control as they pass through its system. 
The planet was protected from physical attack by an impenetrable barrier called the quantum force field, and from teleportation incursions by the transduction barrier—which could be reinforced to repel most levels of this type of technological attack. The Time Lords' principal city, named The Capitol, consists of shining towers protected by a mighty glass dome. Outside The Capitol is a wilderness with plains of red grass, as mentioned by the Doctor in "Gridlock" as well as "The End of Time". The planet's so-called "second city" is Arcadia, and is seen falling to the Daleks in the 2013 minisode "The Last Day." The Doctor's granddaughter Susan first describes her home world (not named as "Gallifrey" at the time) as having bright, silver-leafed trees and a burnt orange sky at night in the serial "The Sensorites" (1964). This casts an amber tint on anything outside the city, as seen in "The Invasion of Time". However, Gallifrey's sky appears blue and Earth-like in "The Five Doctors" (1983) within the isolated Death Zone. In "The Time Monster", the Third Doctor says that "When I was a little boy, we used to live in a house that was perched halfway up the top of a mountain", explaining, "I ran down that mountain and I found that the rocks weren't grey at all—but they were red, brown and purple and gold. And those pathetic little patches of sludgy snow were shining white. Shining white in the sunlight." In "Gridlock", the Tenth Doctor echoes Susan's description of the world now named as Gallifrey and goes further by mentioning the vast mountain ranges "with fields of deep red grass, capped with snow". He then elaborates how Gallifrey's second sun would "rise in the south and the mountains would shine", with the silver-leafed trees looking like "a forest on fire" in the mornings. 
Outer Gallifrey's wastelands are where the "Outsiders" reside. "The Doctor Who Role Playing Game", released by FASA, equates the Outsiders with the "Shobogans", who are briefly mentioned in the serial "The Deadly Assassin". The wastes of Gallifrey include the Death Zone, an area that was used as a gladiatorial arena by the first Time Lords, pitting various species kidnapped from their respective time zones against each other (although Daleks and Cybermen were considered too dangerous to use). Inside the Death Zone stands the Tomb of Rassilon, the founder of Time Lord society. Somewhere on Gallifrey there is also an institute called the Academy, which the Doctor and various other Time Lords have attended. "The Last Day" mentions birds as something expected in Gallifrey's skies. Gallifrey appeared in the "Doctor Who" 50th anniversary special, "The Day of the Doctor", which aired on November 23, 2013. Several of the spin-off novels have further information about Gallifrey. It is said to have at least two moons, one being the copper-coloured Pazithi Gallifreya (first named in ""); the novel "Lungbarrow" also places Karn (setting of "The Brain of Morbius", 1976) in Gallifrey's solar system, along with a frozen gas giant named Polarfrey and an "astrological figure" of "Kasterborous the Fibster". "Cat's Cradle: Time's Crucible" also mentions edible rodent-like mammals called tafelshrews. "The Three Doctors" (1972–73) seemed to set Gallifrey's relative present contemporary with the events of the story set on Earth, with its sequel "Arc of Infinity" (1983) setting it in the 1980s. Alternatively, "The Trial of a Time Lord" (1986) seems to imply that the planet's relative present is in the Earth's far future. This is also the position taken by "The Doctor Who Role Playing Game" released by FASA, although the information in it is not usually considered canon. 
In the episode "Utopia" (2007), the Tenth Doctor describes the year 100 trillion as a time period not even the Time Lords travelled to. The present of Gallifrey from after it returns to the universe from another dimension is stated to be around the end of the universe, "several billion years in the future", in "Hell Bent" (2015). Both the Virgin New Adventures and the BBC Books "Doctor Who" novels seem to take the stance that Gallifrey's relative present is far in the Earth's relative past. In "Spyfall," the Master states that Gallifrey is once again in the bubble universe it was sent to in "The Day of the Doctor." The Master causes a wormhole called "The Boundary" to connect to Gallifrey from the far future in "Ascension of the Cybermen" and "The Timeless Children" though he refuses to answer the Doctor's questions about how he accomplished this. The television series and people involved in its production repeatedly refer to the Time Lords as a species. In "The War Games" (1969), the Second Doctor says the Time Lords are "an immensely civilised race". Writer Malcolm Hulke and writer and script editor Terrance Dicks repeat this description in their 1972 book "The Making of Doctor Who". In "The Time Warrior" (1973–74), Commander Linx quotes Sontaran military intelligence as describing the Time Lords as "A race of great technical achievement, but lacking the morale to withstand a determined assault." In "Pyramids of Mars" (1975), Sutekh calls the Time Lords "a perfidious species". In "School Reunion" (2006), the alien Krillitane Mr Finch calls the Time Lords "such a pompous race". In "Smith and Jones" (2007), the Tenth Doctor answers "what sort of species [he is]" with "I'm a Time Lord." In "Human Nature" (2007), Tim Latimer hears a voice saying, "Last of the Time Lords, the last of that wise and ancient race." In "Utopia" (2007), the Tenth Doctor answers "what species are you" with "Time Lord, last of." 
In "The Sound of Drums" (2007), the Tenth Doctor calls the Time Lords "the oldest and most mighty race in the universe". In "Planet of the Dead" (2009), the Tenth Doctor says, "I come from a race of people called Time Lords." About sixteen minutes into "Let's Kill Hitler" (2011), a computer readout describes the "Pilot Species" of the Doctor's ship the TARDIS as "Time Lord". In "The Witch's Familiar" (2015), Davros describes a prophecy that spoke of a "hybrid creature [of] two great warrior races forced together to create a warrior greater than either," and believes that the creature being referred to was "half Dalek, half Time Lord." In "Before the Flood" (2015), the Fisher King describes the Time Lords as "the most warlike race in the galaxy". In "Hell Bent" (2015), the General describes the Hybrid as being "crossbred between two warrior races". He says it is supposed that these races are "the Daleks and the Time Lords". In "Knock Knock" (2017), the Twelfth Doctor tells Bill Potts the Time Lords are "my species". Writing in issue #356 of "Doctor Who Magazine", head writer and executive producer Russell T Davies describes the Time Lords as "the Doctor's race". In a special feature on the DVD set of "The Invasion of Time", script editor Anthony Read called the Time Lords "an interesting species to somebody coming in". In a special feature on the DVD and Blu-ray set of series 8, actor Peter Capaldi said the Doctor belongs to "a race called the Time Lords". Terrance Dicks recalled in a 2016 interview in "The Essential Doctor Who" magazine that during a discussion of "The War Games", producer Derrick Sherwin said the Doctor belongs to "this mysterious race called Time Lords". The Time Lords are also referred to as a race by media outlets including the "Radio Times" and "The New York Times", while a correction in a 2014 NPR article claims that "not all Gallifreyans are Time Lords". 
"The Stolen Tardis" (1979), a spin-off comic printed in issue #9 of "Doctor Who Weekly" (the original name of "Doctor Who Magazine") also claims that "not everyone on Gallifrey is a Time Lord", while a feature in issue #21 instead states that the Doctor is "a member of a race called the Time Lords". Few details on the history of the planet itself emerge from the original series run from 1963–1989. In "The End of the World" (2005), the Ninth Doctor states that his home planet has been destroyed in a war and the Time Lords with it. The episode also indicates that the Time Lords are remembered in the far future. Subsequently, in "Dalek" (2005), it is revealed that the last great Time War was fought between the Time Lords and the Daleks, ending in the obliteration of both sides and with only two apparent survivors; the Doctor and a lone Dalek that had somehow fallen through time and crashed on Earth. At the conclusion of that episode, that surviving Dalek self-destructs, leaving the Ninth Doctor believing that he was the sole survivor of the Time War. However, the Daleks return in "Bad Wolf"/"The Parting of the Ways" (2005), and subsequently in several other stories. The Tenth Doctor's reference to Gallifrey in "The Runaway Bride" (2006) is the first time the name of his homeworld has been given onscreen since the new series began. The Doctor's revelation that he is from Gallifrey elicits terror from the Empress of the Racnoss. The Tenth Doctor in human form (as "John Smith") mentions Gallifrey in "Human Nature" (2007) and is asked if it was in Ireland; this is the same question asked in the 1970s stories "The Hand of Fear" and "The Invisible Enemy". The planet makes its first appearance in the revived series in "The Sound of Drums" (2007), where the Citadel, enclosed in a glass dome (as described by the Doctor in "Gridlock", 2007), is seen in flashback as the Doctor describes it. 
Also seen is a ceremony initiating 8-year-old Gallifreyans — in particular the Master — into the Time Lord Academy. "The End of Time" (2009–10) once again featured Gallifrey. By the end of the war, Gallifrey is depicted in ruins. The dome of the main city, the Time Lord capital, the Citadel, is shattered and dozens of Dalek saucers have crashed on the plains below. The Master releases Gallifrey from its time lock. However, Gallifrey's reemergence is eventually stopped and reversed after it is made clear that the release of Gallifrey would lead to the Time Lords destroying time — in effect destroying the universe — in order to defeat the Daleks and ultimately to preserve the Time Lords at the expense of all creation. Lord President Rassilon also believes that this action would elevate them to a higher form of existence, becoming "creatures of consciousness". Upon realising the scope of Rassilon's plan for self-preservation, the Tenth Doctor recalls that the Doctor that fought in the Time War had attempted to stop them during the war. Eventually, the Master comes to the aid of the Tenth Doctor and prevents Rassilon and the rest of Gallifrey from coming through, breaking the link that held Gallifrey in relative time to 21st-century Earth. It is stated by the Tenth Doctor in "The End of Time" that Gallifrey is not how he and the Master knew it in their youth. Implying that the Time Lords had resorted to desperate and deplorable measures to fight the Daleks, the Doctor is willing to break his code of non-violence to stop the return of the Time Lords. This is reinforced within a short feature that discloses the hitherto unknown circumstances of the Eighth Doctor's regeneration into the War Doctor, entitled "The Night of the Doctor" (2013). A young pilot rejects assistance from the Eighth Doctor due to her fear of the Time Lords. 
In the series seven finale, "The Name of the Doctor", two Gallifreyan mechanics were seen on Gallifrey watching the First Doctor and Susan steal a TARDIS. In the 50th anniversary special of the television series, "The Day of the Doctor" (2013), scenes are shown during the fall of Arcadia, Gallifrey's second city. Subsequently, it is shown that Gallifrey was not actually destroyed. In the final scenes, three of the Doctor's incarnations (the Tenth Doctor, the Eleventh Doctor and the War Doctor), together with the interface of the Moment, the weapon that was supposed to destroy the planet, and the Eleventh Doctor's companion Clara Oswald, decide against destroying it. Instead, they freeze the planet in time within a parallel pocket universe. This occurs with the help of the other Doctors, including the Twelfth Doctor, whose eyes alone are seen at this point. Instead of destroying Gallifrey, the Dalek fleet echelons open fire on and destroy each other. In "The Time of the Doctor" (2013), the Time Lords are depicted as trying to re-enter the universe through a crack in the universe on the planet Trenzalore. They broadcast a message throughout space and time, the question "Doctor Who?", a question which only the Doctor could answer; when the Doctor answers, they will know that it is safe to leave. However, this message inadvertently attracts various races, including the Daleks and Cybermen, to lay siege to Trenzalore. The Eleventh Doctor remains to protect the inhabitants, but is unwilling to release the Time Lords, as this would mean the destruction of Trenzalore and the initiation of another Time War. Hundreds of years later, Clara convinces the Time Lords to help the Doctor, who is dying of old age in his final regeneration, and the crack closes, before reopening in the sky above Trenzalore. The Time Lords give the Doctor a new regeneration cycle, before the crack seals for good with the Time Lords still lost, but a newly regenerated Twelfth Doctor ready to find them.
In "Death in Heaven" (2014), Missy tells the Twelfth Doctor that Gallifrey is located at its original coordinates. These claims prove to be false, leaving the Doctor distraught. The Twelfth Doctor returns to Gallifrey in "Heaven Sent" (2015), having taken the "long way around" by breaking through a wall of azbantium after he is teleported inside his confession dial. In "Hell Bent" (2015), Gallifrey is revealed to be in a bubble universe, and is now situated "several billion years in the future, and [when] the universe is pretty much over" for its own protection. After Rassilon is deposed and exiled by the Twelfth Doctor, Gallifrey is seen in a much later period in the episode, abandoned apart from the immortal human Ashildr, who is protected in the Cloisters beneath the Citadel by a reality bubble. The Thirteenth Doctor returns to Gallifrey at the end of the events of "Spyfall" to find everything on it destroyed and a recording of the Master revealing that he orchestrated the planet's destruction. Gallifrey is revealed to have moved to a hitherto unknown wormhole called the Boundary, guarded by Ko Sharmus, at the end of the episode "Ascension of the Cybermen", with the Master reappearing from the planet as well. The Doctor goes through to Gallifrey, where she is captured by the Master; through the Matrix, the Master shows her that she is really the Timeless Child, a figure from Gallifrey's past from whom the secret of regeneration was gained. The Master converts the corpses of Time Lords into Cybermasters, with the power to regenerate. The Doctor threatens him with the Death Particle, a weapon created by the Cyberium capable of destroying all organic life. She is unable to activate it herself, but Ko Sharmus does so for her, apparently destroying all life on the planet. Various spin-off novels have expanded on the history and nature of Gallifrey. Marc Platt's novels "" and "Lungbarrow" provide a detailed backstory for the civilisation seen in the main series.
In the Dark Times (occasionally mentioned in televised serials such as "The Five Doctors"), Gallifrey was at the centre of an empire covering dozens of worlds, continually being extended by heroes such as Prydonius (after whom the Prydonian chapter of Time Lords is named). Ancient Gallifreyans were all telepathic and were ruled by a female cult centred on a figure called the Pythia, who controlled the population through mysticism and prophecies. When the prophetic powers of the last of the Pythias failed her, Rassilon, Omega and a shadowy figure known as the Other seized power in the name of science and rationality. Seeing this, the Pythia committed suicide and cursed Gallifrey, killing all children in their wombs and making the world sterile. To combat this, Rassilon restructured society and used genetic looms to create new generations of Gallifreyans, who emerge from the looms as fully grown adults. Each of the Great Houses is allotted a total of forty-five cousins and given a regeneration cycle of thirteen lives. The Houses themselves are to some degree alive, in the same way TARDISes are, and the furniture can move about, occasionally growing into 'Drudges' who function as servants for the family. The Doctor was loomed in the House of Lungbarrow in the mountains of South Gallifrey, but, unique among the house's cousins, he has a belly button. This sterility backstory is contradicted by on-screen depictions and descriptions of the Master and the Doctor as children, the Eleventh Doctor stating he slept in the cot he brings out of his TARDIS, and references made to the Doctor and the Master having parents, the Doctor himself being a father and the Master having a daughter. The Virgin New Adventures establish a religion on Gallifrey centred around three main gods: Time, Death and Pain. The Time Lords use these figures to understand the concepts they represent and in some cases make deals with them and become their chosen champions.
The Seventh Doctor is Time's Champion (as well as someone who makes frequent deals with, or wagers against, Death to save his friends), and the audio play "Master" states that the Master is Death's champion. It is also briefly implied in "Vampire Science" that the Eighth Doctor is Life's champion, implying the existence of another unseen figure. "Happy Endings" and other books imply that these gods are Eternals, as seen in the serial "Enlightenment". In the Eighth Doctor Adventures novel "The Ancestor Cell" by Peter Anghelides and Stephen Cole, Gallifrey is destroyed as a result of the Eighth Doctor's desire to prevent the voodoo cult Faction Paradox from starting a war between the Time Lords and an unnamed Enemy. Hints about this future war are dropped in several earlier books in the series, beginning with Lawrence Miles' "Alien Bodies". In order to have boltholes or decoys in case of attack, the Time Lords have created nine separate planet Gallifreys (it is even hinted that the original Gallifrey may at some point be reduced to ruins) and special looms to constantly produce new soldiers. By this time TARDISes have evolved to the point where they appear human and reproduce sexually (the Doctor's companion Compassion is the first such TARDIS). It is also hinted that the Celestial Intervention Agency will evolve into the beings of pure thought known as the Celestis, who observe the war from outside this dimension (the Last Parliament in which they sit resembles the Panopticon on Gallifrey, and the closest anyone gets to describing them is similar to the Time Lords' robes). Faction Paradox itself is a counter to Time Lord society, dedicated to creating time-travel paradoxes, in contrast to the Time Lords' web of time. It was founded by a mysterious figure, Grandfather Paradox, who is believed to have once been a Time Lord from the House of Lungbarrow.
"The Ancestor Cell" suggests that he is a future version of the Doctor, but this is retconned in "The Gallifrey Chronicles" to him being everyone's potential future self. When the Doctor destroys Gallifrey, the war no longer happens, and his actions also apparently (and retroactively) wipe the Time Lords from history. In the last regular Eighth Doctor novel, "The Gallifrey Chronicles" by Lance Parkin, it is revealed that while Gallifrey was destroyed, the Time Lords were not erased from history. However, the cataclysm sets up an event horizon in time that prevents anyone from entering Gallifrey's relative past or travelling from it to the present or future. The Time Lords also survive within the Matrix, which has been downloaded into the Eighth Doctor's mind, but their reconstruction requires a sufficiently advanced computer. At the novel's end, the question of whether or not the Time Lords will be restored remains unanswered. Television series executive producer Russell T Davies wrote in "Doctor Who Magazine" #356 that there is no connection between the War of the books and the Time War of the television series. In the same "Doctor Who Magazine" column, Davies compared Gallifrey being destroyed twice with Earth's two World Wars. He also said that he was "usually happy for old and new fans to invent the Complete History of the Doctor in their heads, completely free of the production team's hot and heavy hands".
https://en.wikipedia.org/wiki?curid=12550
Gymnastics Gymnastics is a sport that includes physical exercises requiring balance, strength, flexibility, agility, coordination, and endurance. The movements involved in gymnastics contribute to the development of the arms, legs, shoulders, back, chest, and abdominal muscle groups. Alertness, precision, daring, self-confidence, and self-discipline are mental traits that can also be developed through gymnastics. Gymnastics evolved from exercises used by the ancient Greeks that included skills for mounting and dismounting a horse, and from circus performance skills. The most common form of competitive gymnastics is artistic gymnastics, which consists of (for women [WAG]) the events floor, vault, uneven bars and beam. For men [MAG], it consists of the events floor, vault, rings, pommel horse, parallel bars, and horizontal bar. The governing body for gymnastics throughout the world is the Fédération Internationale de Gymnastique (FIG). Eight sports are governed by the FIG: Gymnastics for All, Men's and Women's Artistic Gymnastics, Rhythmic Gymnastics, Trampoline (including Double Mini-trampoline), Tumbling, Acrobatic Gymnastics, and Aerobic Gymnastics. Disciplines not currently recognized by FIG include wheel gymnastics, aesthetic group gymnastics, men's rhythmic gymnastics, TeamGym, and mallakhamba. Participants in gymnastics-related sports can include young children, recreational-level athletes, and competitive athletes at varying levels of skill, including world-class athletes. The word gymnastics derives from the common Greek adjective γυμνός ("gymnos"), by way of the related verb γυμνάζω ("gymnazo"), whose meaning is to "train naked", "train in gymnastic exercise", generally "to train, to exercise". The verb had this meaning because athletes in ancient times exercised and competed without clothing. Gymnastics can be traced to exercise in ancient Greece, in Sparta and Athens; exercise of that period was documented in Philostratus' work "Gymnasticus".
Exercise in the gymnasium in later dates prepared men for war. The original term for the practice of gymnastics comes from the related verb γυμνάζω (gymnazo), which translates as "to exercise naked", because young men in training exercised without clothing. In ancient Greece, physical fitness was a highly valued attribute in both men and women. It was not until after the Romans conquered Greece in 146 BC that gymnastics became more formalized and was used to train men in warfare. Based on Philostratus' claim that gymnastics is a form of wisdom, comparable to philosophy, poetry, music, geometry, and astronomy, Athens combined this physical training with the education of the mind. At the palaestra, a physical education training center, the disciplines of educating the body and educating the mind were combined, allowing for a form of gymnastics that was more aesthetic and individual, and which left behind the form that focused on strictness, discipline, the emphasis on defeating records, and focus on strength. Don Francisco Amorós y Ondeano was born on February 19, 1770, in Valencia and died on August 8, 1848, in Paris. He was a Spanish colonel and the first person to introduce educative gymnastics in France. The German Friedrich Ludwig Jahn started the German gymnastics movement in 1811, which led to the invention of the parallel bars, rings, high bar, pommel horse and vault horse. The Fédération Internationale de Gymnastique (FIG) was founded in Liège in 1881. By the end of the nineteenth century, men's gymnastics competition was popular enough to be included in the first modern Olympic Games in 1896. From then on until the early 1950s, both national and international competitions involved a changing variety of exercises gathered under the rubric "gymnastics" that included, for example, synchronized team floor calisthenics, rope climbing, high jumping, running, and horizontal ladder. During the 1920s, women organized and participated in gymnastics events.
The first women's Olympic competition was limited, involving only synchronized calisthenics and track and field. These games were held in 1928 in Amsterdam. By 1954, Olympic Games apparatus and events for both men and women had been standardized in modern format, and uniform grading structures (including a point system from 1 to 15) had been agreed upon. At this time, Soviet gymnasts astounded the world with highly disciplined and difficult performances, setting a precedent that continues. Television has helped publicize and initiate a modern age of gymnastics. Both men's and women's gymnastics now attract considerable international interest, and excellent gymnasts can be found on every continent. In 2006, a new points system for artistic gymnastics was put into play. The A Score (or D score) is the difficulty score, which as of 2009 is based on the top eight highest-scoring elements in a routine (excluding vault). The B Score (or E score) is the execution score, given for how well the skills are performed. The following disciplines are governed by FIG. Artistic Gymnastics is usually divided into Men's and Women's Gymnastics. Men compete on six events: Floor Exercise, Pommel Horse, Still Rings, Vault, Parallel Bars, and Horizontal Bar, while women compete on four: Vault, Uneven Bars, Balance Beam, and Floor Exercise. In some countries, women at one time competed on the rings, high bar, and parallel bars (for example, in the 1950s in the USSR). In 2006, FIG introduced a new point system for Artistic gymnastics in which scores are no longer limited to 10 points. The system is used in the US for elite-level competition. Unlike the old code of points, there are two separate scores, an execution score and a difficulty score. In the previous system, the execution score was the only score. It was, and still is, out of 10.00, except for short exercises. During the gymnast's performance, the judges deduct from this score only.
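As a rough illustration of how the two scores combine, the sketch below pairs an open-ended difficulty (D) score with an execution (E) score that starts from 10.00 and loses only deductions. The function name and all the numbers are hypothetical assumptions for illustration, not values from any actual Code of Points.

```python
# Minimal sketch of the post-2006 two-score model: the difficulty (D)
# score is open-ended, while the execution (E) score starts at 10.00
# and only deductions are taken from it. All names and numbers here
# are illustrative assumptions, not actual Code of Points values.

def final_score(d_score, deductions):
    """Return D + E, where E is 10.00 minus the judges' deductions
    (e.g. small form errors, or 1.00 for a fall at the elite level).
    The E score is floored at zero."""
    e_score = max(10.0 - sum(deductions), 0.0)
    return d_score + e_score

# Hypothetical routine: D = 5.8, two small form errors and one fall.
print(round(final_score(5.8, [0.1, 0.3, 1.0]), 2))  # 14.4
```

Because the difficulty term has no upper bound, totals under this model can exceed the old "perfect 10" ceiling whenever a routine packs in harder skills.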
A fall, on or off the event, is a 1.00 deduction in elite-level gymnastics. The introduction of the difficulty score is a significant change. The gymnast's difficulty score is based on which elements they perform and is subject to change if they do not perform or complete all the skills, or do not connect a skill meant to be connected to another. Connection bonuses are where deviation between the intended and actual difficulty scores most commonly occurs, as it can be difficult to connect multiple flight elements. It is very hard to connect skills if the first skill is not performed correctly. The new code of points allows the gymnasts to gain higher scores based on the difficulty of the skills they perform as well as their execution. There is no maximum score for difficulty, as it can keep increasing as the difficulty of the skills increases. In the vaulting events, gymnasts sprint down a runway, jump onto a springboard (or perform a roundoff or handspring entry onto a springboard), land momentarily inverted on the hands on the vaulting horse or vaulting table (pre-flight segment), then propel themselves forward or backward off that platform to a two-footed landing (post-flight segment). Every gymnast starts at a different point on the vault runway depending on their height and strength. The post-flight segment may include one or more multiple saltos, somersaults, or twisting movements. A round-off entry vault, called a Yurchenko, is the most common vault in the higher levels in gymnastics. When performing a Yurchenko, gymnasts round off so their hands are on the runway while their feet land on the springboard. From the round-off position, the gymnast travels backwards and executes a back handspring so that the hands land on the vaulting table. The gymnast then blocks off the vaulting platform into various twisting and/or somersaulting combinations. The post-flight segment brings the gymnast to her feet.
In the lower levels of gymnastics, the gymnasts do not perform this move. These gymnasts will jump onto the springboard with both feet at the same time and either do a front handspring onto the vault or a roundoff onto the vault. In 2001, the traditional vaulting horse was replaced with a new apparatus, sometimes known as a tongue, horse or vaulting table. The new apparatus is more stable, wider, and longer than the older vaulting horse, approximately 1 m in length and 1 m in width, giving gymnasts a larger blocking surface. This apparatus is thus considered safer than the vaulting horse used in the past. With the addition of this new, safer vaulting table, gymnasts are attempting more difficult and dangerous vaults. On the uneven bars, the gymnast performs a timed routine on two parallel horizontal bars set at different heights. These bars are made of fiberglass covered in wood laminate, to prevent them from breaking. In the past, bars were made of wood, but the bars were prone to breaking, providing an incentive to switch to newer technologies. The width and height of the bars may be adjusted to the size needed by individual gymnasts. In the past, the uneven parallel bars were closer together. The bars have been moved increasingly further apart, allowing gymnasts to perform swinging, circling, transitional, and release moves that may pass over, under, and between the two bars. At the Elite level, movements must pass through the handstand. Gymnasts often mount the uneven bars using a springboard or a small mat. Gymnasts may use chalk (MgCO3) and grips (a leather strip with holes for fingers to protect hands and improve performance) when performing this event. The chalk helps take the moisture out of gymnasts' hands to decrease friction and prevent rips (tears to the skin of the hands); dowel grips help gymnasts grip the bar. 
The gymnast performs a choreographed routine of up to 90 seconds in length consisting of leaps, acrobatic skills, somersaults, turns and dance elements on a padded beam. The beam is 1.25 m from the ground, 5 m long, and 10 cm wide. This stationary object can also be adjusted to be raised higher or lower. The event requires balance, flexibility, grace, poise, and strength. The event in gymnastics performed on the floor is called floor exercise. The English abbreviation for the event in gymnastics scoring is FX. In the past, the floor exercise event was executed on the bare floor or on mats such as wrestling mats. The floor event now occurs on a carpeted 12 m × 12 m square, usually consisting of hard foam over a layer of plywood, which is supported by springs and generally called a spring floor. This provides a firm surface that gives extra bounce or spring when compressed, allowing gymnasts to achieve greater height and a softer landing after the performed skill. Gymnasts perform a choreographed routine of up to 90 seconds in the floor exercise event. Depending on the level, the gymnast may choose their own routine; however, some levels have compulsory routines, in which default music must be played. In levels three to six, the music and the skills within the routine are the same for each level. However, recently the levels have switched: levels 6–10 are now optional levels, in which gymnasts have custom routines made. In the optional levels (levels six to ten) there are skill requirements for the routine, but the athlete is able to pick her own music without any words. The routine should consist of tumbling passes, series of jumps, leaps, dance elements, acrobatic skills, and turns, or pivots, on one foot. A gymnast can perform up to four tumbling passes, each of which usually includes at least one flight element without hand support. Each level of gymnastics requires the athlete to perform a different number of tumbling passes.
In level 7 in the United States, a gymnast is required to do 2–3 tumbling passes, and in levels 8–10, at least 3–4 tumbling passes are required. Scoring for both Junior Olympic and NCAA level gymnastics uses a 10.0 scale. Levels below level 9 start from a 10.0 automatically if all requirements for an event are met. Levels 9 and 10, and NCAA gymnastics, all start below a 10.0 and require gymnasts to acquire bonus points through connections and skills to increase their start value to a 10.0. During a routine, deductions will be made by the judges for flaws in the form or technique of a skill. For example, steps on landings or flexed feet can range from 0.05 to 0.10 off, depending on the severity of the mistake. Male gymnasts also perform on a 12 m × 12 m spring floor. A series of tumbling passes are performed to demonstrate flexibility, strength, and balance. Strength skills include circles, scales, and press handstands. Men's floor routines usually have multiple passes that have to total between 60 and 70 seconds and are performed without music, unlike the women's event. Rules require that male gymnasts touch each corner of the floor at least once during their routine. A typical pommel horse exercise involves both single leg and double leg work. Single leg skills are generally found in the form of scissors, an element often done on the pommels. Double leg work, however, is the main staple of this event. The gymnast swings both legs in a circular motion (clockwise or counterclockwise depending on preference) and performs such skills on all parts of the apparatus. To make the exercise more challenging, gymnasts will often include variations on a typical circling skill by turning (moores and spindles) or by straddling their legs (flares). Routines end when the gymnast performs a dismount, either by swinging his body over the horse or landing after a handstand variation. The rings are suspended on wire cable from a point 5.75 meters above the floor.
The gymnasts must perform a routine demonstrating balance, strength, power, and dynamic motion while preventing the rings themselves from swinging. At least one static strength move is required, but some gymnasts may include two or three. A routine ends with a dismount. Gymnasts sprint down a runway, which is a maximum of 25 meters in length, before hurdling onto a springboard. The gymnast is allowed to choose where they start on the runway. The body position is maintained while punching (blocking using only a shoulder movement) the vaulting platform. The gymnast then rotates to a standing position. In advanced gymnastics, multiple twists and somersaults may be added before landing. Successful vaults depend on the speed of the run, the length of the hurdle, the power the gymnast generates from the legs and shoulder girdle, kinesthetic awareness in the air, how well they stick the landing, and the speed of rotation in the case of more difficult and complex vaults. On the parallel bars, men perform on two bars, executing a series of swings, balances, and releases that require great strength and coordination. The width between the bars is adjustable depending upon the needs of the gymnasts, and the bars are usually 2 m high. On the horizontal bar, a 2.8 cm thick steel or fiberglass bar raised 2.5 m above the landing area is all the gymnast has to hold onto as he performs giant swings or "giants" (forward or backward revolutions around the bar in the handstand position), release skills, twists, and changes of direction. By using all of the momentum from giants and then releasing at the proper point, enough height can be achieved for spectacular dismounts, such as a triple-back salto. Leather grips are usually used to help maintain a grip on the bar. As with women, male gymnasts are also judged on all of their events, including their execution, degree of difficulty, and overall presentation skills. According to FIG rules, only women compete in rhythmic gymnastics.
This is a sport that combines elements of ballet, gymnastics, dance, and apparatus manipulation. The sport involves the performance of five separate routines with the use of five apparatus (ball, ribbon, hoop, clubs, and rope) on a floor area, with a much greater emphasis on the aesthetic than the acrobatic. There are also group routines consisting of five gymnasts and five apparatus of their choice. Rhythmic routines are scored out of a possible 30 points; the score for artistry (choreography and music) is averaged with the score for difficulty of the moves and then added to the score for execution. International competitions are split between Juniors, under sixteen by their year of birth, and Seniors, for women sixteen and over, again by their year of birth. Gymnasts in Russia and Europe typically start training at a very young age, and those at their peak are typically in their late teens (15–19) or early twenties. The largest events in the sport are the Olympic Games, World Championships, European Championships, World Cup and Grand Prix Series. The first World Championships were held in 1963, with the sport's first appearance at the Olympics in 1984. Trampolining and tumbling consists of four events: individual and synchronized trampoline, double mini trampoline, and tumbling (also known as power tumbling or rod floor). Since 2000, individual trampoline has been included in the Olympic Games. The first World Championships were held in 1964. Individual routines in trampolining involve a build-up phase, during which the gymnast jumps repeatedly to achieve height, followed by a sequence of ten bounces without pause during which the gymnast performs a sequence of aerial skills. Routines are marked out of a maximum score of 10 points. Additional points (with no maximum at the highest levels of competition) can be earned depending on the difficulty of the moves and the length of time taken to complete the ten skills, which is an indication of the average height of the jumps.
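The trampoline marking just described, an execution mark out of 10 plus open-ended bonuses for difficulty and for the time taken to complete the ten skills, can be sketched roughly as below. The function name and the sample figures are assumptions for illustration only, not official values.

```python
# Rough sketch of the trampoline marking described above: a routine
# is marked out of 10 for execution, and additional open-ended points
# are earned for difficulty and for the time taken to complete the
# ten skills (a proxy for average jump height). Names and numbers
# here are illustrative assumptions.

def trampoline_mark(execution, difficulty_bonus, flight_time_bonus):
    """execution is capped at 10; the two bonus terms have no fixed
    maximum at the highest levels of competition."""
    return min(execution, 10.0) + difficulty_bonus + flight_time_bonus

# Hypothetical routine: 8.5 execution, 14.2 difficulty points,
# 17.8 points from total flight time.
print(round(trampoline_mark(8.5, 14.2, 17.8), 2))  # 40.5
```

The open-ended bonus terms mean that, as with artistic gymnastics after 2006, there is no theoretical maximum mark at the top level; only the execution component is bounded.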
In high-level competitions, there are two preliminary routines, one which has only two moves scored for difficulty and one where the athlete is free to perform any routine. This is followed by a final routine, which is optional. Some competitions restart the score from zero for the finals; others add the final score to the preliminary results. Synchronized trampoline is similar, except that both competitors must perform the routine together, and marks are awarded for synchronization as well as the form and difficulty of the moves. Double mini trampoline involves a smaller trampoline with a run-up; two scoring moves are performed per routine. Moves cannot be repeated in the same order on the double-mini during a competition. Skills can be repeated if a skill is competed as a mounter in one routine and as a dismount in another. The scores are marked in a similar manner to individual trampoline. In tumbling, athletes perform an explosive series of flips and twists down a sprung tumbling track. Scoring is similar to trampolining. Tumbling was originally contested as one of the events in Men's Artistic Gymnastics at the 1932 Summer Olympics, and in 1955 and 1959 at the Pan American Games. From 1974 to 1998 it was included as an event for both genders at the Acrobatic Gymnastics World Championships. The event has also been contested since 1976 at the Trampoline and Tumbling World Championships. Tumbling is competed along a 25-metre sprung track with a 10-metre run-up. A tumbling pass or run is a combination of eight skills, with an entry skill, normally a round-off, into whips and then an end skill. Usually the end skill is the hardest skill of the pass. At the highest level, gymnasts perform transition skills: these are skills which are not whips but double or triple somersaults, normally competed at the end of the run but now competed in the middle of the run, connected before and after by either a whip or a flick.
Competition is made up of a qualifying round and a finals round. There are two different types of competition in tumbling: individual and team. In the team event, three gymnasts out of a team of four compete one run each; if one run fails, the final member of the team is allowed to compete, with the three highest scores being counted. In the individual event qualification, the competitor will compete two runs, one a straight pass (including double and triple somersaults) and one a twisting pass (including full-twisting whips and combination skills such as a full-twisting double straight, a 'full in back'). In the final of the individual event, the competitor must compete two different runs, which can be either twisting or straight, but each run normally uses both types (using transition skills). Acrobatic gymnastics (formerly Sport Acrobatics), often referred to as acro if involved with the sport, acrobatic sports or simply sports acro, is a group gymnastic discipline for both men and women. Acrobats in groups of two, three and four perform routines with the heads, hands and feet of their partners. They may, subject to regulations (e.g. no lyrics), pick their own music. There are four international age categories: 11–16, 12–18, 13–19, and Senior (15+), which are used in the World Championships and many other events around the world, including the European Championships and the World Games. All levels require a balance and a dynamic routine; 12–18, 13–19, and Seniors are also required to perform a final (combined) routine. Currently, the acrobatic gymnastics score is marked out of 30.00 for juniors, and can be higher at Senior FIG level based on difficulty. There are five competitive event categories. The World Championships have been held since 1974. Aerobic gymnastics (formerly Sport Aerobics) involves the performance of routines by individuals, pairs, trios, groups with five people, and aerobic dance and aerobic step (eight people).
Strength, flexibility, and aerobic fitness, rather than acrobatic or balance skills, are emphasized. Routines are performed on a 7×7 m floor by all individuals, as well as by 12–14 and 15–17 trios and mixed pairs. From 2009, all senior trios and mixed pairs were required to compete on the larger floor (10×10 m); all groups also perform on this floor. Routines generally last 60–90 seconds, depending on the age of the participant and the routine category. The World Championships have been held since 1995. On January 28, 2018, parkour was given the go-ahead to begin development as a FIG sport. The FIG planned to run World Cup competitions from 2018 onwards and to hold the first Parkour World Championships in 2020. The following disciplines are not currently recognized by the Fédération Internationale de Gymnastique. Aesthetic group gymnastics (AGG) was developed from the Finnish "naisvoimistelu". It differs from rhythmic gymnastics in that body movement is large and continuous and teams are larger. Athletes do not use apparatus in international AGG competitions, unlike rhythmic gymnastics, in which ball, ribbon, hoop and clubs are used on the floor area. The sport requires physical qualities such as flexibility, balance, speed, strength, coordination and a sense of rhythm, with movements of the body emphasized through flow, expression and aesthetic appeal. A good performance is characterized by uniformity and simultaneity. The competition program consists of versatile and varied body movements, such as body waves, swings, balances, pivots, jumps and leaps, dance steps, and lifts. The International Federation of Aesthetic Group Gymnastics (IFAGG) was established in 2003. The first Aesthetic Group Gymnastics World Championships were held in 2000. Men's rhythmic gymnastics is related to both men's artistic gymnastics and wushu martial arts. It emerged in Japan from stick gymnastics. 
Stick gymnastics has been taught and performed for many years with the aim of improving physical strength and health. Male athletes are judged on some of the same physical abilities and skills as their female counterparts, such as hand/body–eye co-ordination, but tumbling, strength, power, and martial arts skills are the main focus, as opposed to the flexibility and dance emphasized in women's rhythmic gymnastics. There are a growing number of participants, competing alone and on teams; the sport is most popular in Asia, especially in Japan, where high school and university teams compete fiercely. There were 1000 men's rhythmic gymnasts in Japan. The technical rules for the Japanese version of men's rhythmic gymnastics were established around the 1970s. For individuals, only four types of apparatus are used: the double rings, the stick, the rope, and the clubs. Groups do not use any apparatus. The Japanese version includes tumbling performed on a spring floor. Points are awarded based on a 10-point scale that measures the level of difficulty of the tumbling and apparatus handling. On November 27–29, 2003, Japan hosted the first edition of the Men's Rhythmic Gymnastics World Championship. TeamGym is a form of competition created by the European Union of Gymnastics, originally named EuroTeam. The first official competition was held in Finland in 1996. TeamGym events consist of three sections: women's, men's and mixed teams. Athletes compete in three different disciplines: floor, tumbling and trampette. Common to all performances are effective teamwork, good technique in the elements and spectacular acrobatic skills. There are no World Championships, but a European Championships has been held since 2010. Wheel gymnasts do exercises in a large wheel known as the Rhönrad, gymnastics wheel, gym wheel, or German wheel, in the beginning also known as the ayro wheel, aero wheel, or Rhon rod. There are four core categories of exercise: straight line, spiral, vault and cyr wheel. 
The first World Championships were held in 1995. Mallakhamba (Marathi: मल्लखम्ब) is a traditional Indian sport in which a gymnast performs feats and poses in concert with a vertical wooden pole or rope; the word also refers to the pole used in the sport. Mallakhamba derives from the terms "malla", denoting a wrestler, and "khamba", meaning a pole, and can therefore be translated into English as "pole gymnastics". On April 9, 2013, the Indian state of Madhya Pradesh declared mallakhamba its state sport. General gymnastics, also known as Gymnastics for All, enables people of all ages and abilities to participate in performance groups of 6 to more than 150 athletes, performing synchronized, choreographed routines. Troupes may consist of both genders and are separated into age divisions. The largest general gymnastics exhibition is the quadrennial World Gymnaestrada, first held in 1939. In 1984 Gymnastics for All was officially recognized first as a Sport Program by the FIG (International Gymnastic Federation), and subsequently by national gymnastic federations worldwide; participants now number 30 million. In the US, the women's gymnastics levels, called the Junior Olympic Program, begin at 1 and go to 10; Elite can follow level 10 and is generally considered Olympic level. The men's Junior Olympic Program likewise consists of ten levels of training or competition, with multiple age groups at each level, creating opportunities for athletes and coaches to participate and/or compete. An artistic gymnast's score comes from deductions taken from the start value of a routine's elements. The start value of a routine is based on the difficulty of the elements the gymnast attempts and whether or not the gymnast meets the composition requirements, which are different for each apparatus. This score is called the D score. Deductions in execution and artistry are taken from a maximum of 10.0. This score is called the E score. 
The final score is calculated by adding the D and E scores. This method has been in place since 2006 and is called "open-ended" scoring because there is no theoretical cap (although there is a practical cap) on the D score, and hence on the total possible score for a routine. Before 2006, a gymnast's final score was obtained by making deductions from a possible maximum of 10 for a routine. A Code of Points, the guidelines for scoring a routine's difficulty and execution, is slightly revised for each quadrennium, the four-year period culminating in an Olympic year. In a tumbling pass, dismount or vault, landing is the final phase, following take-off and flight. This is a critical skill in terms of execution in competition scores, general performance, and injury occurrence. Without sufficient energy dissipation during impact, the risk of sustaining injuries during somersaulting increases. These injuries commonly occur at the lower extremities, such as cartilage lesions, ligament tears, and bone bruises/fractures. To avoid such injuries, and to receive a high performance score, the gymnast must use proper technique: "The subsequent ground contact or impact landing phase must be achieved using a safe, aesthetic and well-executed double foot landing." A successful landing in gymnastics is classified as soft, meaning the knee and hip joints are at greater than 63 degrees of flexion. A higher flight phase results in a higher vertical ground reaction force, an external force which the gymnasts have to overcome with their muscle force and which affects their linear and angular momentum. Another important variable that affects linear and angular momentum is the time the landing takes: gymnasts can decrease the impact force by increasing the time taken to perform the landing, which they achieve by increasing hip, knee and ankle amplitude. 
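The open-ended D-plus-E arithmetic described above can be sketched in a few lines of code. This is an illustrative simplification, not the official FIG procedure; the element values, deduction amounts, and function names below are hypothetical examples.

```python
# Minimal sketch of post-2006 "open-ended" artistic gymnastics scoring.
# All numbers and names here are hypothetical, not official FIG values.

def d_score(element_values, composition_credit=0.0):
    """Difficulty score: sum of element values plus any composition credit.
    There is no theoretical cap on this value."""
    return sum(element_values) + composition_credit

def e_score(deductions):
    """Execution score: deductions taken from a maximum of 10.0."""
    return max(0.0, 10.0 - sum(deductions))

def final_score(element_values, deductions, composition_credit=0.0):
    """Final score = D score + E score."""
    return d_score(element_values, composition_credit) + e_score(deductions)

# Example: six counted elements, composition credit, and a few execution errors.
d = d_score([0.5, 0.4, 0.4, 0.3, 0.3, 0.2], composition_credit=2.0)  # 4.1
e = e_score([0.1, 0.3, 0.1])                                         # 9.5
print(round(d + e, 2))  # 13.6
```

Under the pre-2006 system, by contrast, the same routine would simply have started from 10.0 and had the execution deductions subtracted.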
Generally, competitors climbed either a 6 m (20 ft in the US, i.e. 6.1 m) or an 8 m (25 ft in the US, i.e. 7.6 m) natural-fiber rope of 38 mm (1.5 in) diameter for speed, starting from a seated position on the floor and using only the hands and arms. Kicking the legs in a kind of "stride" was normally permitted. Many gymnasts can do this in the straddle or pike position, which eliminates the help generated from the legs, though it can be done with the legs as well. Flying rings was an event similar to still rings, but with the performer executing a series of stunts while swinging. It was a gymnastic event sanctioned by both the NCAA and the AAU until the early 1960s. Club swinging, a.k.a. Indian clubs, was an event in men's artistic gymnastics until sometime in the 1950s. It was similar to the clubs in both women's and men's rhythmic gymnastics, but much simpler, with few throws allowed. It was included in the 1904 and 1932 Summer Olympic Games. Gymnastics is one of the most dangerous sports, with a very high injury rate seen in girls aged 11 to 18. Compared to athletes who play other sports, gymnasts are at higher-than-average risk of overuse injuries and injuries caused by early sports specialization among children and young adults. Gymnasts are at particular risk of foot and wrist injuries. Strength training can help prevent injuries. Gymnasts tend to have short stature, but it is unlikely that the sport affects their growth; parents of gymnasts tend also to be shorter than average.
https://en.wikipedia.org/wiki?curid=12551
Great auk The great auk ("Pinguinus impennis") is a species of flightless alcid that became extinct in the mid-19th century. It was the only modern species in the genus Pinguinus. It is not closely related to the birds now known as penguins, which were discovered later and so named by sailors because of their physical resemblance to the great auk. It bred on rocky, isolated islands with easy access to the ocean and a plentiful food supply, a rarity in nature that provided only a few breeding sites for the great auks. When not breeding, they spent their time foraging in the waters of the North Atlantic, ranging as far south as northern Spain and along the coastlines of Canada, Greenland, Iceland, the Faroe Islands, Norway, Ireland, and Great Britain. The great auk was tall and weighed about , making it the largest alcid to survive into the modern era, and the second-largest member of the alcid family overall (the prehistoric "Miomancalla" was larger). It had a black back and a white belly. The black beak was heavy and hooked, with grooves on its surface. During summer, great auk plumage showed a white patch over each eye. During winter, the great auk lost these patches, instead developing a white band stretching between the eyes. The wings were only long, rendering the bird flightless. Instead, the great auk was a powerful swimmer, a trait that it used in hunting. Its favourite prey were fish, including Atlantic menhaden and capelin, and crustaceans. Although agile in the water, it was clumsy on land. Great auk pairs mated for life. They nested in extremely dense and social colonies, laying one egg on bare rock. The egg was white with variable brown marbling. Both parents participated in the incubation of the egg for around 6 weeks before the young hatched. The young left the nest site after 2–3 weeks, although the parents continued to care for it. The great auk was an important part of many Native American cultures, both as a food source and as a symbolic item. 
Many Maritime Archaic people were buried with great auk bones. One burial discovered included someone covered by more than 200 great auk beaks, which are presumed to be the remnants of a cloak made of great auk skins. Early European explorers to the Americas used the great auk as a convenient food source or as fishing bait, reducing its numbers. The bird's down was in high demand in Europe, a factor that largely eliminated the European populations by the mid-16th century. Scientists soon began to realize that the great auk was disappearing and it became the beneficiary of many early environmental laws, but this proved ineffectual. Its growing rarity increased interest from European museums and private collectors in obtaining skins and eggs of the bird. On 3 June 1844, the last two confirmed specimens were killed on Eldey, off the coast of Iceland, ending the last known breeding attempt. Later reports of roaming individuals being seen or caught are unconfirmed. A record of one great auk in 1852 is considered by some to be the last sighting of a member of the species. The great auk is mentioned in several novels and the scientific journal of the American Ornithologists' Union is named "The Auk" in honour of this bird. Analysis of mtDNA sequences has confirmed morphological and biogeographical studies suggesting that the razorbill is the closest living relative of the great auk. The great auk also was related closely to the little auk (dovekie), which underwent a radically different evolution compared to "Pinguinus". Due to its outward similarity to the razorbill (apart from flightlessness and size), the great auk often was placed in the genus "Alca", following Linnaeus. The fossil record (especially the sister species, "Pinguinus alfrednewtoni") and molecular evidence show that the three closely related genera diverged soon after their common ancestor, a bird probably similar to a stout Xantus's murrelet, had spread to the coasts of the Atlantic. 
Apparently, by that time, the murres, or Atlantic guillemots, already had split from the other Atlantic alcids. Razorbill-like birds were common in the Atlantic during the Pliocene, but the evolution of the little auk is sparsely documented. The molecular data are compatible with either possibility, but the weight of evidence suggests placing the great auk in a distinct genus. Some ornithologists still believe it is more appropriate to retain the species in the genus "Alca". It is the only recorded British bird made extinct in historic times. The following cladogram shows the placement of the great auk among its closest relatives, based on a 2004 genetic study: "Pinguinus alfrednewtoni" was a larger, and also flightless, member of the genus "Pinguinus" that lived during the Early Pliocene. Known from bones found in the Yorktown Formation of the Lee Creek Mine in North Carolina, it is believed to have split, along with the great auk, from a common ancestor. "Pinguinus alfrednewtoni" lived in the western Atlantic, while the great auk lived in the eastern Atlantic. After the former died out following the Pliocene, the great auk took over its territory. The great auk was not closely related to the other extinct genera of flightless alcids, "Mancalla", "Praemancalla", and "Alcodes". The great auk was one of the 4,400 animal species formally described by Carl Linnaeus in his eighteenth-century work "Systema Naturae", in which it was given the binomial "Alca impennis". The name "Alca" is a Latin derivative of the Scandinavian word for razorbills and their relatives. The bird was known in literature even before this and was described by Charles d'Ecluse in 1605 as "Mergus Americanus". This description also included a woodcut that represents the oldest unambiguous visual depiction of the bird. The species was not placed in its own scientific genus, "Pinguinus", until 1791. 
The generic name is derived from the Spanish and Portuguese name for the species, in turn from Latin "pinguis" meaning 'plump', and the specific name, "impennis", is from Latin and refers to the lack of flight feathers or pennae. The Irish name for the great auk is 'falcóg mhór', meaning 'big seabird/auk'. The Basque name is "arponaz", meaning "spearbill". Its early French name was "apponatz". The Norse called the great auk, "geirfugl", which means "spearbird". This has led to an alternative English common name for the bird, "garefowl" or "gairfowl". The Inuit name for the great auk was "isarukitsok", which meant "little wing". The word "penguin" first appears in the sixteenth century as a synonym for "great auk". Although the etymology is debated, the generic name, "penguin", may be derived from the Welsh "pen gwyn" "white head", either because the birds lived in Newfoundland on White Head Island (Pen Gwyn in Welsh), or, because the great auk had such large white circles on its head. When European explorers discovered what today are known as penguins in the Southern Hemisphere, they noticed their similar appearance to the great auk and named them after this bird, although biologically, they are not closely related. Whalers also lumped the northern and southern birds together under the common name "woggins". Standing about tall and weighing approximately as adult birds, the flightless great auk was the second-largest member of both its family and the order Charadriiformes overall, surpassed only by the mancalline "Miomancalla". It is, however, the largest species to survive into modern times. The great auks that lived farther north averaged larger in size than the more southerly members of the species. Males and females were similar in plumage, although there is evidence for differences in size, particularly in the bill and femur length. The back was primarily a glossy black, and the belly was white. The neck and legs were short, and the head and wings small. 
During summer, it developed a wide white eye patch over each eye, which had a hazel or chestnut iris. During winter the great auk moulted and lost this eye patch, which was replaced with a wide white band and a gray line of feathers that stretched from the eye to the ear. During the summer, its chin and throat were blackish-brown and the inside of the mouth was yellow. In winter, the throat became white. Some individuals reportedly had grey plumage on their flanks, but the purpose, seasonal duration, and frequency of this variation are unknown. The bill was large at long and curved downward at the top; the bill also had deep white grooves in both the upper and lower mandibles, up to seven on the upper mandible and twelve on the lower mandible in summer, although there were fewer in winter. The wings were only in length and the longest wing feathers were only long. Its feet and short claws were black, while the webbed skin between the toes was brownish black. The legs were far back on the bird's body, which gave it powerful swimming and diving abilities. Hatchlings were described as grey and downy, but their exact appearance is unknown, since no skins exist today. Juvenile birds had fewer prominent grooves in their beaks than adults, and they had mottled white and black necks, while the eye spot found in adults was not present; instead, a grey line ran through the eyes (which still had white eye rings) to just below the ears. Great auk calls included low croaking and a hoarse scream. A captive great auk was observed making a gurgling noise when anxious. It is not known what its other vocalizations were, but it is believed that they were similar to those of the razorbill, only louder and deeper. The great auk was found in the cold North Atlantic coastal waters along the coasts of Canada, the northeastern United States, Norway, Greenland, Iceland, the Faroe Islands, Ireland, Great Britain, France, and the Iberian Peninsula. 
Pleistocene fossils indicate the great auk also inhabited Southern France, Italy, and other coasts of the Mediterranean basin. The great auk left the North Atlantic waters for land only to breed, even roosting at sea when not breeding. The rookeries of the great auk were found from Baffin Bay to the Gulf of St. Lawrence, across the far northern Atlantic, including Iceland, and in Norway and the British Isles in Europe. For their nesting colonies the great auks required rocky islands with sloping shorelines that provided access to the sea. These were very limiting requirements and it is believed that the great auk never had more than 20 breeding colonies. The nesting sites also needed to be close to rich feeding areas and to be far enough from the mainland to discourage visitation by predators such as humans and polar bears. The localities of only seven former breeding colonies are known: Papa Westray in the Orkney Islands, St. Kilda off Scotland, Grimsey Island, Eldey Island, Geirfuglasker near Iceland, Funk Island near Newfoundland, and the Bird Rocks (Rochers-aux-Oiseaux) in the Gulf of St. Lawrence. Records suggest that this species may have bred on Cape Cod in Massachusetts. By the late eighteenth and early nineteenth centuries, the breeding range of the great auk was restricted to Funk Island, Grimsey Island, Eldey Island, the Gulf of St. Lawrence, and St. Kilda Island. Funk Island was the largest known breeding colony. After the chicks fledged, the great auk migrated north and south away from the breeding colonies and they tended to go southward during late autumn and winter. It also was common on the Grand Banks of Newfoundland. In recorded history, the great auk typically did not go farther south than Massachusetts Bay in the winter. Archeological excavations have found great auk remains in New England and Southern Spain. 
Great auk bones have been found as far south as Florida, where it may have been present during four periods: approximately 1000 BC and 1000 AD, as well as during the fifteenth century and the seventeenth century. (It has been suggested that some of the bones discovered in Florida may be the result of aboriginal trading.) The great auk was never observed and described by modern scientists during its existence and is known only from the accounts of laymen, such as sailors, so its behaviour is not well known and is difficult to reconstruct. Much may be inferred from its close living relative, the razorbill, as well as from remaining soft tissue. Great auks walked slowly and sometimes used their wings to help them traverse rough terrain. When they did run, it was awkwardly and with short steps in a straight line. They had few natural predators, mainly large marine mammals such as the orca, and white-tailed eagles. Polar bears preyed on nesting colonies of the great auk. Reportedly, this species had no innate fear of human beings, and their flightlessness and awkwardness on land compounded their vulnerability. Humans preyed upon them as food, for feathers, and as specimens for museums and private collections. Great auks reacted to noises, but were rarely frightened by the sight of something. They used their bills aggressively both in the dense nesting sites and when threatened or captured by humans. These birds are believed to have had a life span of approximately 20 to 25 years. During the winter, the great auk migrated south, either in pairs or in small groups, but never with the entire nesting colony. The great auk was generally an excellent swimmer, using its wings to propel itself underwater. While swimming, the head was held up but the neck was drawn in. This species was capable of banking, veering, and turning underwater. 
The great auk was known to dive to depths of and it has been claimed that the species was able to dive to depths of . To conserve energy, most dives were shallow. It also could hold its breath for 15 minutes, longer than a seal. Its ability to dive so deeply reduced competition with other alcid species. The great auk was capable of accelerating underwater, then shooting out of the water to land on a rocky ledge above the ocean's surface. This alcid typically fed in shoaling waters that were shallower than those frequented by other alcids, although after the breeding season, they had been sighted as far as from land. They are believed to have fed cooperatively in flocks. Their main food was fish, usually in length and weighing , but occasionally their prey was up to half the bird's own length. Based on remains associated with great auk bones found on Funk Island and on ecological and morphological considerations, it seems that Atlantic menhaden and capelin were their favoured prey. Other fish suggested as potential prey include lumpsuckers, shorthorn sculpins, cod, sand lance, as well as crustaceans. The young of the great auk are believed to have eaten plankton and, possibly, fish and crustaceans regurgitated by adults. Historical descriptions of the great auk breeding behaviour are somewhat unreliable. Great Auks began pairing in early and mid-May. They are believed to have mated for life (although some theorize that great auks could have mated outside their pair, a trait seen in the razorbill). Once paired, they nested at the base of cliffs in colonies, likely where they copulated. Mated pairs had a social display in which they bobbed their heads and displayed their white eye patch, bill markings, and yellow mouth. These colonies were extremely crowded and dense, with some estimates stating that there was a nesting great auk for every of land. These colonies were very social. 
When the colonies included other species of alcid, the great auks were dominant due to their size. Female great auks would lay only one egg each year, between late May and early June, although they could lay a replacement egg if the first one was lost. In years when there was a shortage of food, the great auks did not breed. A single egg was laid on bare ground up to from shore. The egg was ovate and elongate in shape, and it averaged in length and across at the widest point. The egg was yellowish white to light ochre with a varying pattern of black, brown, or greyish spots and lines that often were congregated on the large end. It is believed that the variation in the egg streaks enabled the parents to recognize their egg among those in the vast colony. The pair took turns incubating the egg in an upright position for the 39 to 44 days before the egg hatched, typically in June, although eggs could be present at the colonies as late as August. The parents also took turns feeding their chick. According to one account, the chick was covered with grey down. The young bird took only two or three weeks to mature enough to abandon the nest and land for the water, typically around the middle of July. The parents cared for their young after they fledged, and adults would be seen swimming with their young perched on their backs. Great auks matured sexually when they were four to seven years old. The great auk was a food source for Neanderthals more than 100,000 years ago, as evidenced by well-cleaned bones found by their campfires. Images believed to depict the great auk also were carved into the walls of the El Pendo Cave in Camargo, Spain, and Paglicci, Italy, more than 35,000 years ago, and cave paintings 20,000 years old have been found in France's Grotte Cosquer. Native Americans valued the great auk as a food source during the winter and as an important cultural symbol. Images of the great auk have been found in bone necklaces. 
A person buried at the Maritime Archaic site at Port au Choix, Newfoundland, dating to about 2000 BC, was found surrounded by more than 200 great auk beaks, which are believed to have been part of a suit made from their skins, with the heads left attached as decoration. Nearly half of the bird bones found in graves at this site were of the great auk, suggesting that it had great cultural significance for the Maritime Archaic people. The extinct Beothuks of Newfoundland made pudding out of the eggs of the great auk. The Dorset Eskimos also hunted it. The Saqqaq in Greenland overhunted the species, causing a local reduction in range. Later, European sailors used the great auks as a navigational beacon, as the presence of these birds signalled that the Grand Banks of Newfoundland were near. This species is estimated to have had a maximum population in the millions. The great auk was hunted on a significant scale for food, eggs, and its down feathers from at least the eighth century. Prior to that, hunting by local natives may be documented from Late Stone Age Scandinavia and eastern North America, as well as from early fifth-century Labrador, where the bird seems to have occurred only as a straggler. Early explorers, including Jacques Cartier, and numerous ships attempting to find gold on Baffin Island were not provisioned with food for the journey home and therefore used great auks as both a convenient food source and bait for fishing. Reportedly, some of the later vessels anchored next to a colony and ran out planks to the land. The sailors then herded hundreds of great auks onto the ships, where they were slaughtered. Some authors have questioned the reports of this hunting method and whether it was successful. Great auk eggs were also a valued food source, as the eggs were three times the size of a murre's and had a large yolk. These sailors also introduced rats onto the islands, which preyed upon the nests. 
The Little Ice Age may have reduced the population of the great auk by exposing more of their breeding islands to predation by polar bears, but massive exploitation by humans for their down drastically reduced the population, with recent evidence indicating the latter alone is likely the primary driver of its extinction. By the mid-sixteenth century, the nesting colonies along the European side of the Atlantic were nearly all eliminated by humans killing this bird for its down, which was used to make pillows. In 1553, the great auk received its first official protection, and in 1794 Great Britain banned the killing of this species for its feathers. In St. John's, those violating a 1775 law banning hunting the great auk for its feathers or eggs were publicly flogged, though hunting for use as fishing bait was still permitted. On the North American side, eider down initially was preferred, but once the eiders were nearly driven to extinction in the 1770s, down collectors switched to the great auk at the same time that hunting for food, fishing bait, and oil decreased. The great auk had disappeared from Funk Island by 1800. An account by Aaron Thomas of HMS "Boston" from 1794 described how the bird had been slaughtered systematically until then: With its increasing rarity, specimens of the great auk and its eggs became collectible and highly prized by rich Europeans, and the loss of a large number of its eggs to collection contributed to the demise of the species. Eggers, individuals who visited the nesting sites of the great auk to collect their eggs, quickly realized that the birds did not all lay their eggs on the same day, so they could make return visits to the same breeding colony. Eggers only collected the eggs without embryos and typically, discarded the eggs with embryos growing inside of them. On the islet of Stac an Armin, St. Kilda, Scotland, in July 1840, the last great auk seen in Britain was caught and killed. Three men from St. 
Kilda caught a single "garefowl", noticing its little wings and the large white spot on its head. They tied it up and kept it alive for three days, until a large storm arose. Believing that the bird was a witch and was causing the storm, they then killed it by beating it with a stick. The last colony of great auks lived on Geirfuglasker (the "Great Auk Rock") off Iceland. This islet was a volcanic rock surrounded by cliffs that made it inaccessible to humans, but in 1830 the islet submerged after a volcanic eruption, and the birds moved to the nearby island of Eldey, which was accessible from a single side. When the colony was first discovered in 1835, nearly fifty birds were present. Museums, desiring the skins of the great auk for preservation and display, quickly began collecting birds from the colony. The last pair, found incubating an egg, was killed there on 3 June 1844, at the request of a merchant who wanted specimens, with Jón Brandsson and Sigurður Ísleifsson strangling the adults and Ketill Ketilsson smashing the egg with his boot. Great auk specialist John Wolley interviewed the two men who killed the last birds, and Sigurður described the act as follows: A later claim of a live individual sighted in 1852 on the Grand Banks of Newfoundland has been accepted by the International Union for Conservation of Nature and Natural Resources (IUCN). The possibility of recreating the great auk using DNA from collected specimens has been discussed, and remains a subject of scientific debate. Today, 78 skins of the great auk remain, mostly in museum collections, along with approximately 75 eggs and 24 complete skeletons. All but four of the surviving skins are in summer plumage, and only two of these are immature. No hatchling specimens exist. Each egg and skin has been assigned a number by specialists. 
Although thousands of isolated bones have been collected, from nineteenth-century Funk Island to Neolithic middens, only a few complete skeletons exist. Natural mummies also are known from Funk Island, and the eyes and internal organs of the last two birds from 1844 are stored in the Zoological Museum, Copenhagen. The whereabouts of the skins from the last two individuals have been unknown for more than a hundred years, but that mystery has been partly resolved using DNA extracted from the organs of the last individuals and the skins of the candidate specimens suggested by Errol Fuller (those in Übersee-Museum Bremen, Royal Belgian Institute of Natural Sciences, Zoological Museum of Kiel University, Los Angeles County Museum of Natural History, and Landesmuseum Natur und Mensch Oldenburg). A positive match was found between the organs from the male individual and the skin now in the RBINS in Brussels. No match was found between the female organs and a specimen from Fuller's list, but the authors speculate that the skin in Cincinnati Museum of Natural History and Science may be a potential candidate due to a common history with the L.A. specimen. Following the bird's extinction, remains of the great auk increased dramatically in value, and auctions of specimens created intense interest in Victorian Britain, where 15 specimens are now located, the largest number of any country. A specimen was bought in 1971 by the Icelandic Museum of Natural History for £9000, which placed it in the Guinness Book of Records as the most expensive stuffed bird ever sold. The price of its eggs sometimes reached up to 11 times the amount earned by a skilled worker in a year. The present whereabouts of six of the eggs are unknown. Several other eggs have been destroyed accidentally. Two mounted skins were destroyed in the twentieth century, one in the Mainz Museum during the Second World War, and one in the Museu Bocage, Lisbon, by a fire in 1978. 
The great auk is one of the more frequently referenced extinct birds in literature, much like the famous dodo. It appears in many works of children's literature. Charles Kingsley's "The Water-Babies, A Fairy Tale for a Land Baby" depicts a great auk telling the tale of the extinction of its species. Enid Blyton's "The Island of Adventure" features the bird's extinction, sending the protagonist on a failed search for what he believes is a lost colony of the species. The great auk also is present in a wide variety of other works of fiction. In the short story "The Harbor-Master" by Robert W. Chambers, the discovery and attempted recovery of the last known pair of great auks is central to the plot (which also involves a proto-Lovecraftian element of suspense). The story first appeared in "Ainslee's Magazine" (August 1899) and was slightly revised to become the first five chapters of Chambers' episodic novel "In Search of the Unknown" (Harper and Brothers Publishers, New York, 1904). In his novel "Ulysses", James Joyce mentions the bird while the novel's main character is drifting into sleep. He associates the great auk with the mythical roc as a method of formally returning the main character to a sleepy land of fantasy and memory. "Penguin Island", a 1908 French satirical novel by the Nobel Prize-winning author Anatole France, narrates the fictional history of a great auk population that is mistakenly baptized by a nearsighted missionary. A great auk is collected by fictional naturalist Stephen Maturin in the Patrick O'Brian historical novel "The Surgeon's Mate". This work also details the harvesting of a colony of auks. The great auk is the subject of "The Last Great Auk", a novel by Allen Eckert, which tells of the events leading to the extinction of the great auk as seen from the perspective of the last one alive. It also appears in Farley Mowat's "Sea of Slaughter". 
Ogden Nash warns that humans could suffer the same fate as the great auk in his short poem "A Caution to Everybody". W.S. Merwin mentions the great auk in a short litany of extinct animals in his poem "For a Coming Extinction", one of the seminal poems from his 1967 collection, "The Lice". "Night of the Auk", a 1956 Broadway drama by Arch Oboler, depicts a group of astronauts returning from the moon to discover that a full-blown nuclear war has broken out. Oboler draws a parallel between the human-caused extinction of the great auk and the story's nuclear extinction of humankind. The Doctor Who audio drama "Last Chance", produced by Big Finish Productions, depicts the killing of the final breeding pair in 1844. This bird also is featured in a variety of other media. It is the subject of a ballet, "Still Life at the Penguin Café", and a song, "A Dream Too Far", in the ecological musical "Rockford's Rock Opera". A great auk appears as a prized possession of Baba the Turk in Igor Stravinsky's opera "The Rake's Progress" (libretto by W. H. Auden and Chester Kallman). The great auk is the mascot of the Archmere Academy in Claymont, Delaware, and the Adelaide University Choral Society (AUCS) in Australia. The great auk was formerly the mascot of the Lindsay Frost campus of Sir Sandford Fleming College in Ontario. In 2012, the two separate sports programs of Fleming College were combined and the great auk mascot went extinct. The student-owned bar, student center, and lounge at the Lindsay Frost campus is still known as the Auk's Lodge. It was also the mascot of the now-ended Knowledge Masters educational competition. The scientific journal of the American Ornithologists' Union is named "The Auk" in honour of this bird. According to Homer Hickam's memoir, "Rocket Boys", and its film adaptation, "October Sky", the early rockets he and his friends built were ironically named "Auk". A cigarette company, the British Great Auk Cigarettes, was named after this bird. 
Walton Ford, the American painter, has featured great auks in two paintings: "The Witch of St. Kilda" and "Funk Island". The English painter and writer Errol Fuller produced "Last Stand" for his monograph on the species. The great auk also appeared on one stamp in a set of five depicting extinct birds issued by Cuba in 1974.
https://en.wikipedia.org/wiki?curid=12552
Glorantha Glorantha is a fantasy world created by Greg Stafford. It was first introduced in the board game "White Bear and Red Moon" (1975) by Chaosium, and thereafter in a number of other board, roleplaying and computer games, including "RuneQuest" and "HeroQuest", as well as several works of fiction and the computer strategy game "King of Dragon Pass". The Gloranthan world is characterised by its complex use of mythology, heavily influenced by the universalist approaches of Joseph Campbell and Mircea Eliade, its sword and sorcery ethos, its long and distinctive history as a setting for role-playing games, its community development and expansion, and (unusual among early American fantasy role-playing games) its relative lack of Tolkienesque influence. Stafford began imagining Glorantha in 1966 as a way to deepen his own understanding of mythology. He founded the company Chaosium to publish the board wargame "White Bear and Red Moon" in 1974 (other sources say 1975), which was set in Glorantha. Chaosium later published other games in the setting, including the critically acclaimed "RuneQuest". Various later editions of "RuneQuest", the narrative role-playing game "HeroQuest" (the first edition of which was published as "Hero Wars"), and the video game "King of Dragon Pass" were also set in Glorantha, as were several prominent fan efforts. Stafford has also explored the Gloranthan setting in the fantasy novel "King of Sartar" and a number of unfinished works published under the collective name of "the Stafford Library". In Glorantha, magic operates from the everyday level of prayers and charms to the creation and maintenance of the world. Heroes make their way in the world, and may also venture into metaphysical realms to gain knowledge and power, at the risk of body and soul. In the more recent material competing magical outlooks, such as theism, shamanism and mysticism, exist to explain the world. 
Within each metaphysical system, adherents may also compete - such as when theistic worshipers of rival gods battle each other. The world is flat, with a dome-like sky, and it has been shaped in large and small ways by the mythic actions of the gods. The 'historical' world of Glorantha is in a more or less fallen state, having recovered only partially from a universal battle against Chaos in the mythic Godtime. Humans are the dominant race, but other sentient beings abound. Some, such as the mystic dragonewts, are unique to Glorantha. Familiar nonhuman races, such as elves and dwarves, are distinct from their common, Tolkienesque portrayals. Glorantha has been the background for: two board-games ("White Bear and Red Moon"/"Dragon Pass" and "Nomad Gods"), two role-playing games ("RuneQuest" and "HeroQuest"), two video games ("King of Dragon Pass" & "Six Ages: Ride Like The Wind"), one comic book series ("Path of the Damned"), five novels or collections of fiction ("King of Sartar" by "Greg Stafford", "The Collected/Complete Griselda" by Oliver Dickinson, "Gloranthan Visions" by various authors, and "The Widow's Tale" and "Eurhol's Vale & Other Tales" by Penelope Love), and numerous pieces of myth and fiction created by the Glorantha community, featuring in magazines such as "Tales of the Reaching Moon". Several hundred gaming miniatures by various licensees and about a dozen plush toys have also been produced at various times. Glorantha's origins lie in experiments with mythology, storytelling, and recreation and blending of ancient societies, unlike its contemporary, "Dungeons & Dragons" which has its roots in wargaming. Stafford's first imaginings of Glorantha date back to 1966, when he began his studies at Beloit College, as a vehicle for him to deepen his own understanding of mythology by creating his own mythology. 
Stafford was greatly influenced by the ideas on mythology of Joseph Campbell, and echoes of Campbell's work are to be found in many aspects of Glorantha; for instance the story of the "God Learners" can be seen as an exercise on the implications of Campbell's idea of a unifying monomyth, and the story of Prince Argrath an exploration of Campbell's "The Hero with a Thousand Faces" (1949). More abstractly, Campbell's idea that myths are how we shape our lives deeply informs the picture of life in Glorantha throughout the game world's publication history. The first game system set in Glorantha was the board game "White Bear and Red Moon". Stafford first tried to sell the game to established publishers, but despite being accepted by three different game companies, each attempt ended in failure; eventually he founded his own game company in 1974, the influential Chaosium, to publish his game. The game detailed the rise of the barbarian Prince Argrath to defend his homeland of Sartar against the red tide of the civilized Lunar Empire, and filled out the area of "Dragon Pass"; the game has undergone several reissues since that time. The next publication was also a board game, "Nomad Gods", published by Chaosium in 1978, which detailed the raids and wars between the beast-riding spirit-worshiping tribes of Prax, a cursed land to the east of Dragon Pass. A French language edition was published by Oriflam under license from Chaosium under the name "Les Dieux Nomades" in 1994. The first edition of the role-playing game "RuneQuest" was released in 1978. On the back cover of this edition, the game world was called ""Glorontha"" (sic). Several later editions were made; the second edition ("RuneQuest 2") in 1979 introduced many sophisticated game aids, such as "Cults of Prax" and "Cults of Terror", and highly polished campaign packs, such as "Griffin Mountain". 
Using materials such as "Cults of Prax", players aligned their characters with any of several distinct religions, grounding those characters in the political, cultural, and metaphysical conflicts of Glorantha. Each religion offered a distinct worldview and cultural outlook, none of which are objectively correct. This approach of offering competing mythical histories and value systems continues in current Glorantha material. "Cults of Terror" focused on the worship of evil gods and adversaries, such as Vivamort, a vampire cult, and Lunar and Chaos cults. In an attempt to leverage the power of a much bigger gaming company, a third edition, "RuneQuest 3", was published with Avalon Hill in 1984. This edition both loosened the connection between "RuneQuest" and Glorantha, introducing 'Fantasy Europe' as the principal game world for "RuneQuest", and much broadened the scope of Glorantha, which was treated as an alternative game world. Unfortunately, "RuneQuest" did not prosper with its association with Avalon Hill, and the relationship between Chaosium, who held the rights to Glorantha, and Avalon Hill, who held the rights to "RuneQuest", finally broke down completely in 1995. A draft of the "RuneQuest 4" rules, called "", was written but never published. It has since been renamed "RuneSlayers" and released as a free download; it is not set in Glorantha. During this period of breakdown, Glorantha continued to evolve. The advent of popular use of the internet caused a boom in fan creations in Glorantha, supported by some unofficial business ventures, such as Reaching Moon Megacorp, and a lively convention scene. Loren Miller proposed his Maximum Game Fun principle as a basis for gaming, which soon became a game system in its own right, David Dunham proposed his "PenDragon Pass" system, a nearly freeform game system, and several ambitious freeform games were played at conventions, such as "Home of the Bold" with up to eighty participants. 
The video game "King of Dragon Pass" was released by A Sharp, allowing the player to play an Orlanthi hero who seeks to unite the clans and tribes of Dragon Pass in a kingdom; the game features exceptional depth of coverage of the area of Dragon Pass, and featured the first compelling public view of Stafford's ideas about the hero quest. Also Stafford was at this time publishing material about the history and mythology of Glorantha in non-game form as books such as "King of Sartar" and "The Glorious (Re)Ascent of Yelm". In recent years, Gloranthan gaming has been supported by two lines of game systems: the "Hero Wars"/"HeroQuest" game system, originally published by Issaries, Inc. in 2000 and now available from Moon Design Publications, and "RuneQuest", originally published by Mongoose Publishing in 2006 and now available from The Design Mechanism. The "HeroQuest" game system, written by Robin Laws in collaboration with Greg Stafford, is radically different from "RuneQuest" in that it emphasises narrativist aspects of role-playing; in contrast, "RuneQuest" emphasised simulationist aspects. Because of this change in approach some "RuneQuest" fans found it difficult to adjust to "HeroQuest". However, other long-term fans felt that the game fit Glorantha far better than "RuneQuest". A rewritten second edition was published in Spring 2009 by Moon Design Publications, and is being supported with comprehensive Gloranthan sourcebooks: "Sartar: Kingdom of Heroes" (2009), "Sartar Companion" (2010) and "Pavis: Gateway to Adventure" (2012). The "RuneQuest" game system was licensed to Mongoose Publishing by Issaries, allowing Mongoose to publish material concerning the little-explored Second Age of Gloranthan history. Their new fourth edition ("Mongoose RuneQuest" / MRQ) debuted in August 2006, with many Gloranthan supplements following. 
A fifth edition ("Mongoose RuneQuest II" / MRQ2) was published in January 2010, but Mongoose Publishing's licence for Gloranthan material lapsed in May 2011. The "RuneQuest II" game system has been retitled "Legend", and contains no Gloranthan material. A new company, The Design Mechanism, was formed by the authors of "RuneQuest II", and ownership of the Gloranthan supplements produced for the "RuneQuest II" line was transferred to them (PDF versions continue to be sold). There are close links between The Design Mechanism and Moon Design Publications, with The Design Mechanism founders writing material for both companies. A new sixth edition of the RuneQuest rules, "RuneQuest 6" (RQ6), was published in August 2012, but no Gloranthan material has yet been published for it. The latest publication is "RuneQuest: Roleplaying in Glorantha", published on 21 September 2018. For the first time, the rulebook is set entirely in Glorantha, and it serves as a basic encyclopedia of the setting. A couple of new PDF publications already support the rule set, and the rules are close enough to "RuneQuest" second edition to be used with the older material. There are a variety of cultures in Glorantha that have strikingly different perceptions of their world, the magic that pervades it and the major events that have shaped it. The Glorantha website introduces Glorantha as: "Glorantha is an action-packed world of adventure. Gods and Goddesses struggle here, with nations of people nothing but their pawns. The stormy barbarians with their brutal but honest Storm God struggle against the Lunar Empire, led by the imperial Sun God and devious Moon Goddess. Glorantha is an exciting world of heroes. Legends are being made by great individuals, many who are not even human beings. Some work with the deities, other heroes and heroines fight against them. Glorantha is colorful and full of magic. 
Supernatural animals are found, ranging from unicorns to seven types of merfolk and the Goddess of Lions. Glorantha is immense. If explored, it has different worlds and dimensions, whole realms where Gods, spirits and sorcerous powers come from. Unlike many fantasy settings, Glorantha emphasises religion, myth and belief to a level rarely seen in role-playing or fantasy fiction elsewhere. Glorantha shares some fantasy tropes such as dwarves, elves, trolls, giants, but has developed them differently to the more conventional versions based on the work of Tolkien. Dwarves are literally made of stone and exist as manifest rigid inflexible laws of creation, while elves are intelligent, mobile plants. Glorantha is full of surprises. Glorantha is as deep as you want it to be, or not. Hackers and choppers have what they want, while scholars and mythologists have a vast playground of new stories, legends and myths to enjoy." The world of Glorantha has various cultures analogous to Earth spread over two major landmasses and a widespread archipelago. The northern continent of Genertela has a caste society of roughly Vedic type to the west, an autocratic Oriental society to the east and a classical style Bronze Age culture in the center. The southern continent of Pamaltela is somewhat like Africa, but with many differences. Broos are creatures of chaos. As a result of their ability to mate with anything, they have the body of a man and features of their animal parent, often deer, goats, antelope, cattle, and sheep. The animal parent normally dies with the child eating its way out of the host at full gestation. They worship Malia, the Mother of Disease, and Thed, the goddess of rape and mother of Chaos. Scorpionmen are belligerent folk, who look like a sort of scorpion-human centaur. They are described as stupid, vicious and live in violent matriarchies with a religious emphasis on devouring. They are chaotic in nature. 
Ducks or "Durulz" are large intelligent ducks with arms instead of wings (or men cursed with feathers and webbed feet, depending on your point of view). They reside around rivers, mainly in Sartar, and have an unexplained mystical affinity with Death. Aldryami are Gloranthan plantmen, nature- and sun-worshipping—mainly worshipping Aldrya, deity of plants. Unlike Tolkienesque elves, they are alien, physically plant-like and often hostile to normal humans ("meat men"). Like many other fictional elf races, they are excellent archers. Mostali are a machine-like dwarf race, extremely xenophobic, orthodox and insular. They are the inventors of iron, which has many extraordinary magical properties in Glorantha, in contrast to bronze, the primary metal in use. Uz, the trolls, are the race of darkness: large, intelligent, astoundingly omnivorous, with a highly developed sonar-like sense ("darksense"). Their societies are matriarchal, and they worship, among others, a goddess of darkness called Kyger Litor, mother of the trolls, and the more violent and sinister Zorak Zoran. Dragonewts are a magical race who comprise several forms of neotenic dragons, engaged in a cycle of self-improving reincarnation. They are extremely alien, with an incomprehensible mindset, and must undergo oral surgery in order to speak human languages.
https://en.wikipedia.org/wiki?curid=12553
Garnet Bailey Garnet Edward "Ace" Bailey (June 13, 1948 – September 11, 2001) was a Canadian professional ice hockey player and scout who was a member of Stanley Cup and Memorial Cup winning teams. He died at the age of 53 while aboard United Airlines Flight 175, which crashed into the South Tower of the World Trade Center in New York City during the September 11 attacks. Garnet Edward "Ace" Bailey was born June 13, 1948 in Lloydminster, Saskatchewan. He was not related to Irvine "Ace" Bailey, who played for the Toronto Maple Leafs from 1926 to 1933. Bailey played junior hockey with the Edmonton Oil Kings from 1964 to 1967, during which time the Oil Kings won the Memorial Cup in 1966. He joined the Boston Bruins in 1968 and was a member of their Stanley Cup championship teams in 1970 and 1972. He later played for the Detroit Red Wings, St. Louis Blues and the Washington Capitals. In 1978-79, Bailey returned to Edmonton to play with the Edmonton Oilers of the World Hockey Association, where he took rookie Wayne Gretzky under his wing. He was head coach of the Wichita Wind, the Oilers' Central Hockey League affiliate, in the 1980–81 season. Bailey then worked as a scout with the Oilers from 1981 to 1994, during which time the team won five Stanley Cups (1984, 1985, 1987, 1988 and 1990); his name was engraved on three of them (1985, 1987 and 1990). In an NHL career spanning 10 seasons and 568 games, Bailey scored 107 goals and 171 assists with 633 penalty minutes. His most productive season offensively was 1974–75, when he scored 19 goals and 58 points for the Blues and the Capitals. In his sole WHA season, he scored 5 goals and 4 assists with 22 penalty minutes in 38 games. At the time of his death, he was the Los Angeles Kings' director of pro scouting. Bailey died when the plane in which he was traveling, United Airlines Flight 175, was hijacked and deliberately crashed into the South Tower of the World Trade Center in New York City during the September 11 attacks. 
Bailey and amateur scout Mark Bavis were traveling from Boston to Los Angeles when the flight was hijacked. They had been in Manchester visiting the Los Angeles Kings' AHL affiliate, the Monarchs. Bailey and Bavis are mentioned in the Boston-based Dropkick Murphys song "Your Spirit's Alive." Denis Leary wore a Bailey memorial T-shirt as the character Tommy Gavin in the season 1 episode "" and the fourth-season episode "" in the TV series "Rescue Me". In his memory, the Los Angeles Kings named their new mascot "Bailey". Bailey's family founded the Ace Bailey Children's Foundation in his memory. The foundation raises funds to benefit hospitalized children, infants and their families. At the National September 11 Memorial, Bailey and Bavis are memorialized at the South Pool, on Panel S-3. On October 14, 2012 the Kings brought the Stanley Cup to the Memorial and placed it on panels featuring Bailey and Bavis' names, so that the families of Bailey and Bavis could "[have] their day with the Stanley Cup". Kings general manager Dean Lombardi was also in attendance.
https://en.wikipedia.org/wiki?curid=12556
Mean value theorem In mathematics, the mean value theorem states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval. More precisely, if $f$ is a continuous function on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$, then there exists a point $c$ in $(a, b)$ such that the tangent at $c$ is parallel to the secant line through the endpoints $(a, f(a))$ and $(b, f(b))$, that is, $f'(c) = \frac{f(b) - f(a)}{b - a}$. It is one of the most important results in real analysis. A special case of this theorem was first described by Parameshvara (1370–1460), from the Kerala School of Astronomy and Mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II. A restricted form of the theorem was proved by Michel Rolle in 1691; the result was what is now known as Rolle's theorem, and was proved only for polynomials, without the techniques of calculus. The mean value theorem in its modern form was stated and proved by Augustin Louis Cauchy in 1823. Let $f : [a, b] \to \mathbb{R}$ be a continuous function on the closed interval $[a, b]$, and differentiable on the open interval $(a, b)$, where $a < b$. Then there exists some $c$ in $(a, b)$ such that $f'(c) = \frac{f(b) - f(a)}{b - a}$. The mean value theorem is a generalization of Rolle's theorem, which assumes $f(a) = f(b)$, so that the right-hand side above is zero. The mean value theorem is still valid in a slightly more general setting. One only needs to assume that $f$ is continuous on $[a, b]$, and that for every $x$ in $(a, b)$ the limit $\lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$ exists as a finite number or equals $+\infty$ or $-\infty$. If finite, that limit equals $f'(x)$. 
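As a concrete numerical illustration (not from the article): for a strictly convex function such as f(x) = x³ on [0, 2], a simple bisection locates the point c where f′(c) equals the secant slope. The helper name `find_mvt_point` is invented for this sketch.

```python
def find_mvt_point(f, df, a, b, tol=1e-12):
    """Locate a point c in (a, b) with df(c) equal to the secant slope
    (f(b) - f(a)) / (b - a), by bisection.  Assumes df(x) - slope
    changes sign exactly once on (a, b), which holds for strictly
    convex f such as x**3 on [0, 2]."""
    slope = (f(b) - f(a)) / (b - a)
    g = lambda x: df(x) - slope
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:   # sign change on [lo, mid]: root is there
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**3
df = lambda x: 3 * x**2           # derivative of f

c = find_mvt_point(f, df, 0.0, 2.0)
print(c)                          # sqrt(4/3) ≈ 1.1547, where f'(c) = 4
```

Here the secant slope is (8 − 0)/2 = 4, and solving 3c² = 4 by hand confirms c = √(4/3).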
An example where this version of the theorem applies is given by the real-valued cube root function $x \mapsto x^{1/3}$, whose derivative tends to infinity at the origin. Note that the theorem, as stated, is false if a differentiable function is complex-valued instead of real-valued. For example, define $f(x) = e^{ix}$ for all real $x$. Then $f(2\pi) - f(0) = 0 = 0 \cdot (2\pi - 0),$ while $|f'(x)| = 1$, so $f'(x) \neq 0$, for any real $x$. These formal statements are also known as Lagrange's Mean Value Theorem. The expression $\frac{f(b) - f(a)}{b - a}$ gives the slope of the line joining the points $(a, f(a))$ and $(b, f(b))$, which is a chord of the graph of $f$, while $f'(c)$ gives the slope of the tangent to the curve at the point $(c, f(c))$. Thus the mean value theorem says that given any chord of a smooth curve, we can find a point lying between the end-points of the chord such that the tangent at that point is parallel to the chord. The following proof illustrates this idea. Define $g(x) = f(x) - rx$, where $r$ is a constant. Since $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, the same is true for $g$. We now want to choose $r$ so that $g$ satisfies the conditions of Rolle's theorem. Namely $g(a) = g(b) \iff f(a) - ra = f(b) - rb \iff r(b - a) = f(b) - f(a) \iff r = \frac{f(b) - f(a)}{b - a}.$ By Rolle's theorem, since $g$ is differentiable and $g(a) = g(b)$, there is some $c$ in $(a, b)$ for which $g'(c) = 0$, and it follows from the equality $g(x) = f(x) - rx$ that $f'(c) = g'(c) + r = r = \frac{f(b) - f(a)}{b - a}.$ Assume that $f$ is a continuous, real-valued function, defined on an arbitrary interval $I$ of the real line. If the derivative of $f$ at every interior point of the interval $I$ exists and is zero, then $f$ is constant in the interior. Proof: Assume the derivative of $f$ at every interior point of the interval $I$ exists and is zero. Let $(a, b)$ be an arbitrary open interval in $I$. By the mean value theorem, there exists a point $c$ in $(a, b)$ such that $0 = f'(c) = \frac{f(b) - f(a)}{b - a}.$ This implies that $f(a) = f(b)$. Thus, $f$ is constant on the interior of $I$ and thus is constant on $I$ by continuity. 
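The complex-valued counterexample above, f(x) = e^{ix} on [0, 2π], is easy to verify numerically; this sketch checks that the endpoint difference vanishes while the derivative has modulus 1 at every sampled point, so no c can satisfy the mean value equation.

```python
import cmath

f = lambda x: cmath.exp(1j * x)          # f(x) = e^{ix}
df = lambda x: 1j * cmath.exp(1j * x)    # f'(x) = i e^{ix}

two_pi = 2 * cmath.pi

# The secant "slope" over [0, 2*pi] is zero ...
endpoint_diff = f(two_pi) - f(0)
print(abs(endpoint_diff))                # ~0 (up to floating-point rounding)

# ... but |f'(x)| = 1 everywhere, so f'(c)*(2*pi - 0) can never equal 0.
samples = [k * two_pi / 100 for k in range(101)]
print(min(abs(df(x)) for x in samples))  # ~1.0 at every sample
```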
(See below for a multivariable version of this result.) Remarks: Cauchy's mean value theorem, also known as the extended mean value theorem, is a generalization of the mean value theorem. It states: If functions "f" and "g" are both continuous on the closed interval ["a", "b"], and differentiable on the open interval ("a", "b"), then there exists some "c" ∈ ("a", "b"), such that $(f(b) - f(a))\, g'(c) = (g(b) - g(a))\, f'(c).$ Of course, if $g(a) \neq g(b)$ and if $g'(c) \neq 0$, this is equivalent to: $\frac{f'(c)}{g'(c)} = \frac{f(b) - f(a)}{g(b) - g(a)}.$ Geometrically, this means that there is some tangent to the graph of the curve $t \mapsto (f(t), g(t))$ which is parallel to the line defined by the points ("f"("a"), "g"("a")) and ("f"("b"), "g"("b")). However Cauchy's theorem does not claim the existence of such a tangent in all cases where ("f"("a"), "g"("a")) and ("f"("b"), "g"("b")) are distinct points, since it might be satisfied only for some value "c" with $f'(c) = g'(c) = 0$, in other words a value for which the mentioned curve is stationary; in such points no tangent to the curve is likely to be defined at all. An example of this situation is the curve given by $t \mapsto (t^3, 1 - t^2),$ which on the interval [−1, 1] goes from the point (−1, 0) to (1, 0), yet never has a horizontal tangent; however it has a stationary point (in fact a cusp) at $t = 0$. Cauchy's mean value theorem can be used to prove l'Hôpital's rule. The mean value theorem is the special case of Cauchy's mean value theorem when $g(t) = t$. The proof of Cauchy's mean value theorem is based on the same idea as the proof of the mean value theorem. Assume that $f$, $g$, and $h$ are differentiable functions on $(a, b)$ that are continuous on $[a, b]$. Define $D(x) = \begin{vmatrix} f(x) & g(x) & h(x) \\ f(a) & g(a) & h(a) \\ f(b) & g(b) & h(b) \end{vmatrix}.$ There exists $c \in (a, b)$ such that $D'(c) = 0$. Notice that $D'(x) = \begin{vmatrix} f'(x) & g'(x) & h'(x) \\ f(a) & g(a) & h(a) \\ f(b) & g(b) & h(b) \end{vmatrix},$ and if we place $h(x) = 1$, we get Cauchy's mean value theorem. If we place $h(x) = 1$ and $g(x) = x$ we get Lagrange's mean value theorem. The proof of the generalization is quite simple: each of $D(a)$ and $D(b)$ is a determinant with two identical rows, hence $D(a) = D(b) = 0$. Rolle's theorem then implies that there exists $c \in (a, b)$ such that $D'(c) = 0$. 
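Cauchy's mean value theorem can likewise be checked numerically. In this sketch (functions chosen for the example, not from the article), f(x) = x³ and g(x) = x² on [1, 2] give the Cauchy point c = 14/9, found by bisection on the defect h(x) = f′(x)(g(b) − g(a)) − g′(x)(f(b) − f(a)).

```python
f, df = lambda x: x**3, lambda x: 3 * x**2
g, dg = lambda x: x**2, lambda x: 2 * x
a, b = 1.0, 2.0

# h(c) = 0 is exactly Cauchy's conclusion; here h(x) = 9x^2 - 14x.
h = lambda x: df(x) * (g(b) - g(a)) - dg(x) * (f(b) - f(a))

# Bisection: h(a) = -5 < 0 and h(b) = 8 > 0, so a root lies in (a, b),
# and h is increasing there.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
c = (lo + hi) / 2

print(c)  # 14/9 ≈ 1.5556
```

Solving by hand confirms it: f′(c)/g′(c) = 3c/2 must equal (f(b) − f(a))/(g(b) − g(a)) = 7/3, so c = 14/9, which indeed lies in (1, 2).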
The mean value theorem generalizes to real functions of multiple variables. The trick is to use parametrization to create a real function of one variable, and then apply the one-variable theorem. Let $G$ be an open convex subset of $\mathbb{R}^n$, and let $f : G \to \mathbb{R}$ be a differentiable function. Fix points $x, y \in G$, and define $g(t) = f\big((1 - t)x + ty\big)$. Since $g$ is a differentiable function in one variable, the mean value theorem gives: $g(1) - g(0) = g'(c)$ for some $c$ between 0 and 1. But since $g(1) = f(y)$ and $g(0) = f(x)$, computing $g'(c)$ explicitly we have: $f(y) - f(x) = \nabla f\big((1 - c)x + cy\big) \cdot (y - x),$ where $\nabla$ denotes a gradient and $\cdot$ a dot product. Note that this is an exact analog of the theorem in one variable (in the case $n = 1$ this "is" the theorem in one variable). By the Cauchy–Schwarz inequality, the equation gives the estimate: $|f(y) - f(x)| \le \big|\nabla f\big((1 - c)x + cy\big)\big|\, |y - x|.$ In particular, when the partial derivatives of $f$ are bounded, $f$ is Lipschitz continuous (and therefore uniformly continuous). Note that $f$ is not assumed to be continuously differentiable or continuous on the closure of $G$. However, in order to use the chain rule to compute $g'$, we really do need to know that $f$ is differentiable on $G$; the existence of the partial derivatives by itself is not sufficient for the theorem to be true. As an application of the above, we prove that $f$ is constant if $G$ is open and connected and every partial derivative of $f$ is 0. Pick some point $x_0 \in G$, and let $g(x) = f(x) - f(x_0)$. We want to show $g(x) = 0$ for every $x \in G$. For that, let $E = \{x \in G : g(x) = 0\}$. Then $E$ is closed and nonempty. It is open too: for every $x \in E$, $g(y) = g(x) = 0$ for every $y$ in some neighborhood of $x$, since by the equation above $g(y) - g(x) = \nabla g(\xi) \cdot (y - x) = 0$ for some $\xi$ on the segment from $x$ to $y$. (Here, it is crucial that $x$ and $y$ are sufficiently close to each other.) Since $G$ is connected, we conclude $E = G$. 
The above arguments are made in a coordinate-free manner; hence, they generalize to the case when $G$ is a subset of a Banach space. There is no exact analog of the mean value theorem for vector-valued functions. In "Principles of Mathematical Analysis", Rudin gives an inequality which can be applied to many of the same situations to which the mean value theorem is applicable in the one dimensional case: Theorem. "For a continuous vector-valued function $f : [a, b] \to \mathbb{R}^k$ differentiable on $(a, b)$, there exists $c \in (a, b)$ such that $|f(b) - f(a)| \le (b - a)\, |f'(c)|$." Jean Dieudonné in his classic treatise "Foundations of Modern Analysis" discards the mean value theorem and replaces it by the mean inequality, since the proof is not constructive, one cannot find the mean value, and in applications one only needs the mean inequality. Serge Lang in "Analysis I" uses the mean value theorem, in integral form, as an instant reflex, but this use requires the continuity of the derivative. If one uses the Henstock–Kurzweil integral one can have the mean value theorem in integral form without the additional assumption that the derivative should be continuous, as every derivative is Henstock–Kurzweil integrable. The problem is roughly speaking the following: If "f" : "U" → R"m" is a differentiable function (where "U" ⊂ R"n" is open) and if "x" + "th", "x, h" ∈ R"n", "t" ∈ [0, 1] is the line segment in question (lying inside "U"), then one can apply the above parametrization procedure to each of the component functions "fi" ("i" = 1, ..., "m") of "f" (in the above notation set "y" = "x" + "h"). In doing so one finds points "x" + "tih" on the line segment satisfying $f_i(x + h) - f_i(x) = \nabla f_i(x + t_i h) \cdot h.$ But generally there will not be a "single" point "x" + "t*h" on the line segment satisfying $f_i(x + h) - f_i(x) = \nabla f_i(x + t^* h) \cdot h$ for all "i" "simultaneously". For example, define: $f : [0, 2\pi] \to \mathbb{R}^2$ by $f(x) = (\cos x, \sin x)$. Then $f(2\pi) - f(0) = (0, 0)$, but $f_1'(x) = -\sin x$ and $f_2'(x) = \cos x$ are never simultaneously zero as $x$ ranges over $[0, 2\pi]$. 
However, a certain type of generalization of the mean value theorem to vector-valued functions is obtained as follows: Let $f$ be a continuously differentiable real-valued function defined on an open interval $I$, and let $x$ as well as $x + h$ be points of $I$. The mean value theorem in one variable tells us that there exists some $t^*$ between 0 and 1 such that $f(x + h) - f(x) = f'(x + t^* h)\,h$. On the other hand, we have, by the fundamental theorem of calculus followed by a change of variables, $f(x + h) - f(x) = \int_x^{x+h} f'(u)\,du = \left( \int_0^1 f'(x + th)\,dt \right) h$. Thus, the value $f'(x + t^* h)$ at the particular point $t^*$ has been replaced by the mean value $\int_0^1 f'(x + th)\,dt$. This last version can be generalized to vector-valued functions: Lemma 1. "Let $U \subset \mathbb{R}^n$ be open, $f : U \to \mathbb{R}^m$ continuously differentiable, and $x \in U$, $h \in \mathbb{R}^n$ vectors such that the line segment $x + th$, $0 \le t \le 1$, remains in $U$. Then we have $f(x + h) - f(x) = \left( \int_0^1 Df(x + th)\,dt \right) h$, where $Df$ denotes the Jacobian matrix and the integral of a matrix is to be understood componentwise." Proof. Let $f_1, \ldots, f_m$ denote the components of $f$ and define $g_i(t) = f_i(x + th)$. Then we have $f_i(x + h) - f_i(x) = g_i(1) - g_i(0) = \int_0^1 g_i'(t)\,dt = \int_0^1 \left( \sum_{j=1}^n \frac{\partial f_i}{\partial x_j}(x + th)\,h_j \right) dt$. The claim follows since $Df$ is the matrix consisting of the components $\frac{\partial f_i}{\partial x_j}$. Lemma 2. "Let $v : [a, b] \to \mathbb{R}^m$ be a continuous function defined on the interval $[a, b]$. Then we have $\left| \int_a^b v(t)\,dt \right| \le \int_a^b |v(t)|\,dt$." Proof. Let $u \in \mathbb{R}^m$ denote the value of the integral $u = \int_a^b v(t)\,dt$. Now we have (using the Cauchy–Schwarz inequality) $|u|^2 = \langle u, u \rangle = \int_a^b \langle u, v(t) \rangle\,dt \le \int_a^b |u|\,|v(t)|\,dt = |u| \int_a^b |v(t)|\,dt$. Now cancelling the norm of $u$ from both ends gives us the desired inequality. Proof. From Lemma 1 and 2 it follows that $|f(x + h) - f(x)| = \left| \int_0^1 Df(x + th)\,h\,dt \right| \le \int_0^1 |Df(x + th)|\,|h|\,dt \le M\,|h|$, where $M$ bounds $|Df(x + th)|$ for $t \in [0, 1]$. Let $f : [a, b] \to \mathbb{R}$ be a continuous function. Then there exists $c$ in $[a, b]$ such that $\int_a^b f(x)\,dx = f(c)(b - a)$. Since the mean value of $f$ on $[a, b]$ is defined as $\frac{1}{b - a} \int_a^b f(x)\,dx$, we can interpret the conclusion as: $f$ achieves its mean value at some $c$ in $(a, b)$. In general, if $f : [a, b] \to \mathbb{R}$ is continuous and $g$ is an integrable function that does not change sign on $[a, b]$, then there exists $c$ in $(a, b)$ such that $\int_a^b f(x)g(x)\,dx = f(c) \int_a^b g(x)\,dx$. Proof. Suppose $f : [a, b] \to \mathbb{R}$ is continuous and $g$ is a nonnegative integrable function on $[a, b]$. By the extreme value theorem, there exist $m$ and $M$ such that for each $x$ in $[a, b]$, $m \le f(x)$ and $f(x) \le M$.
Since $g$ is nonnegative, $m \int_a^b g(x)\,dx \le \int_a^b f(x)g(x)\,dx \le M \int_a^b g(x)\,dx$. Now let $I = \int_a^b g(x)\,dx$. If $I = 0$, we're done, since $0 \le \int_a^b f(x)g(x)\,dx \le 0$ means $\int_a^b f(x)g(x)\,dx = 0$, so for any $c$ in $(a, b)$, $\int_a^b f(x)g(x)\,dx = f(c)\,I = 0$. If $I \ne 0$, then $m \le \frac{1}{I} \int_a^b f(x)g(x)\,dx \le M$. By the intermediate value theorem, $f$ attains every value of the interval $[m, M]$, so for some $c$ in $[a, b]$, $f(c) = \frac{1}{I} \int_a^b f(x)g(x)\,dx$, that is, $\int_a^b f(x)g(x)\,dx = f(c) \int_a^b g(x)\,dx$. Finally, if $g$ is negative on $[a, b]$, then $M \int_a^b g(x)\,dx \le \int_a^b f(x)g(x)\,dx \le m \int_a^b g(x)\,dx$, and we still get the same result as above. QED There are various slightly different theorems called the second mean value theorem for definite integrals. A commonly found version is as follows: if $G : [a, b] \to \mathbb{R}$ is a positive monotonically decreasing function and $\varphi : [a, b] \to \mathbb{R}$ is an integrable function, then there exists a number $x$ in $(a, b]$ such that $\int_a^b G(t)\varphi(t)\,dt = G(a^+) \int_a^x \varphi(t)\,dt$. Here $G(a^+)$ stands for $\lim_{x \to a^+} G(x)$, the existence of which follows from the conditions. Note that it is essential that the interval $(a, b]$ contains $b$. A variant not having this requirement is: if $G : [a, b] \to \mathbb{R}$ is a monotonic (not necessarily decreasing and positive) function and $\varphi : [a, b] \to \mathbb{R}$ is an integrable function, then there exists a number $x$ in $(a, b)$ such that $\int_a^b G(t)\varphi(t)\,dt = G(a^+) \int_a^x \varphi(t)\,dt + G(b^-) \int_x^b \varphi(t)\,dt$. If the function $f$ returns a multi-dimensional vector, then the MVT for integration is not true, even if the domain of $f$ is also multi-dimensional. For example, consider the following 2-dimensional function defined on an $n$-dimensional cube: $f(x_1, \ldots, x_n) = (\sin(x_1 + \cdots + x_n), \cos(x_1 + \cdots + x_n))$ for $x_i \in [0, 2\pi]$. Then, by symmetry it is easy to see that the mean value of $f$ over its domain is $(0, 0)$. However, there is no point at which $f = (0, 0)$, because $|f| = 1$ everywhere. Let $X$ and $Y$ be non-negative random variables such that $E[X] < E[Y] < \infty$ and $X \le_{st} Y$ (i.e. $X$ is smaller than $Y$ in the usual stochastic order). Then there exists an absolutely continuous non-negative random variable $Z$ having probability density function $f_Z(z) = \frac{\Pr[Y > z] - \Pr[X > z]}{E[Y] - E[X]}$ for $z \ge 0$. Let $g$ be a measurable and differentiable function such that $E[g(X)], E[g(Y)] < \infty$, and let its derivative $g'$ be measurable and Riemann-integrable on the interval $[x, y]$ for all $y \ge x \ge 0$. Then $E[g'(Z)]$ is finite and $E[g(Y)] - E[g(X)] = E[g'(Z)]\,(E[Y] - E[X])$. As noted above, the theorem does not hold for differentiable complex-valued functions. Instead, a generalization of the theorem is stated as follows: Let $f : \Omega \to \mathbb{C}$ be a holomorphic function on the open convex set $\Omega$, and let $a$ and $b$ be distinct points in $\Omega$.
Then there exist points $u$, $v$ on $L_{ab}$ (the line segment from $a$ to $b$) such that $\operatorname{Re}(f'(u)) = \operatorname{Re}\left(\frac{f(b) - f(a)}{b - a}\right)$ and $\operatorname{Im}(f'(v)) = \operatorname{Im}\left(\frac{f(b) - f(a)}{b - a}\right)$, where $\operatorname{Re}(\cdot)$ is the real part and $\operatorname{Im}(\cdot)$ is the imaginary part of a complex-valued function.
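The complex statement can be verified on a small worked example. The example below is an assumption of mine, not from the source: $f(z) = z^2$, so $f'(z) = 2z$, with $a = 0$ and $b = 1 + i$; on the segment $u = t(b - a)$ one has $\operatorname{Re} f'(u) = \operatorname{Im} f'(u) = 2t$, so $t = 1/2$ works for both parts (here $u$ and $v$ happen to coincide):

```python
# Worked check (assumed example) of the complex-valued statement with
# f(z) = z**2, f'(z) = 2z, a = 0, b = 1 + 1j. The difference quotient
# (f(b) - f(a)) / (b - a) equals 1 + 1j, and u = v = (b - a)/2 matches
# both its real and imaginary parts.
a, b = 0j, 1 + 1j
fprime = lambda z: 2 * z
quotient = (b * b - a * a) / (b - a)

u = v = a + 0.5 * (b - a)
re_err = abs(fprime(u).real - quotient.real)
im_err = abs(fprime(v).imag - quotient.imag)
```

In general $u$ and $v$ are distinct points; this example is special in that a single point satisfies both conditions.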
https://en.wikipedia.org/wiki?curid=19662
Marc Bloch Marc Léopold Benjamin Bloch (; ; 6 July 1886 – 16 June 1944) was a French historian. A founding member of the Annales School of French social history, he specialised in medieval history and published widely on Medieval France over the course of his career. As an academic, he worked at the University of Strasbourg (1920 to 1936), the University of Paris (1936 to 1939), and the University of Montpellier (1941 to 1944). Born in Lyon to an Alsatian Jewish family, Bloch was raised in Paris, where his father—the classical historian Gustave Bloch—worked at Sorbonne University. Bloch was educated at various Parisian lycées and the École Normale Supérieure, and from an early age was affected by the anti-semitism of the Dreyfus affair. During the First World War, he served in the French Army and fought at the First Battle of the Marne and the Somme. After the war, he was awarded his doctorate in 1918 and became a lecturer at the University of Strasbourg. There, he formed an intellectual partnership with modern historian Lucien Febvre. Together they founded the Annales School and began publishing the journal "Annales d'histoire économique et sociale" in 1929. Bloch was a modernist in his historiographical approach, and repeatedly emphasised the importance of a multidisciplinary engagement towards history, particularly blending his research with that on geography, sociology and economics, which was his subject when he was offered a post at the University of Paris in 1936. During the Second World War Bloch volunteered for service, and was a logistician during the Phoney War. Involved in the Battle of Dunkirk and spending a brief time in Britain, he unsuccessfully attempted to secure passage to the United States. Back in France, where his ability to work was curtailed by new anti-Semitic regulations, he applied for and received one of the few permits available allowing Jews to continue working in the French university system. 
He had to leave Paris, and complained that the Nazi German authorities looted his apartment and stole his books; he was also forced to relinquish his position on the editorial board of "Annales". Bloch worked in Montpellier until November 1942 when Germany invaded Vichy France. He then joined the French Resistance, acting predominantly as a courier and translator. In 1944, he was captured in Lyon and executed by firing squad. Several works—including influential studies like "The Historian's Craft" and "Strange Defeat"—were published posthumously. His historical studies and his death as a member of the Resistance together made Bloch highly regarded by generations of post-war French historians; he came to be called "the greatest historian of all time". By the end of the 20th century, historians were making a more sober assessment of Bloch's abilities, influence, and legacy, arguing that there were flaws to his approach. Marc Bloch was born in Lyon on 6 July 1886, one of two children to Gustave and Sarah Bloch, née Ebstein. Bloch's family were Alsatian Jews: secular, liberal and loyal to the French Republic. They "struck a balance", says the historian Carole Fink, between both "fierce Jacobin patriotism and the antinationalism of the left". His family had lived in Alsace for five generations under French rule. In 1871, France was forced to cede the region to Germany following its defeat in the Franco-Prussian War. The year after Bloch's birth, his father was appointed professor of Roman History at the Sorbonne, and the family moved to Paris—"the glittering capital of the Third Republic". Marc had a brother, Louis Constant Alexandre, seven years his senior. The two were close, although Bloch later described Louis as being occasionally somewhat intimidating. The Bloch family lived at 72, Rue d'Alésia, in the 14th arrondissement of Paris. 
Gustave began teaching Marc history while he was still a boy, with a secular, rather than Jewish, education intended to prepare him for a career in professional French society. Bloch's later close collaborator, Lucien Febvre, visited the Bloch family at home in 1902; although the reason for Febvre's visit is now unknown, he later wrote of Bloch that "from this fleeting meeting, I have kept the memory of a slender adolescent with eyes brilliant with intelligence and timid cheeks—a little lost then in the radiance of his older brother, future doctor of great prestige". Bloch's biographer Karen Stirling ascribed significance to the era in which Bloch was born: the middle of the French Third Republic, so "after those who had founded it and before the generation that would aggressively challenge it". When Bloch was nine years old, the Dreyfus affair broke out in France. As the first major display of political antisemitism in Europe, it was probably a formative event of Bloch's youth, along with, more generally, the atmosphere of "fin de siècle" Paris. Bloch was 11 when Émile Zola published "J'Accuse…!", his indictment of the French establishment's antisemitism and corruption. Bloch was greatly affected by the Dreyfus affair, as was nineteenth-century France generally; his father's employer, the École Normale Supérieure, saw existing divides in French society reinforced in every debate. Gustave Bloch was closely involved in the Dreyfusard movement and his son agreed with the cause. Bloch was educated at the prestigious Lycée Louis-le-Grand for three years, where he was consistently head of his class and won prizes in French, history, Latin and natural history. He passed his "baccalauréat", in Letters and Philosophy, in July 1903, being graded "très bien" (very good).
The following year, he received a scholarship and undertook postgraduate study at the École normale supérieure (ÉNS), where his father had been appointed "maître de conférences" in 1887. His father had been nicknamed "le Méga" by his students at the ÉNS and the moniker "Microméga" was bestowed upon Bloch. Here he was taught history by Christian Pfister and Charles Seignobos, who led a relatively new school of historical thought which saw history as broad themes punctuated by tumultuous events. Another important influence on Bloch from this period was his father's contemporary, the sociologist Émile Durkheim, who pre-figured Bloch's own later emphasis on cross-disciplinary research. The same year, Bloch visited England; he later recalled being struck more by the number of homeless people on the Victoria Embankment than the new Entente Cordiale relationship between the two countries. The Dreyfus affair had soured Bloch's views of the French Army, and he considered it laden with "snobbery, anti-semitism and anti-republicanism". National service had been made compulsory for all French adult males in 1905, with an enlistment term of two years. Bloch joined the 46th Infantry Regiment based at Pithiviers from 1905 to 1906. By this time, changes were taking place in French academia. In Bloch's own speciality of history, attempts were being made at instilling a more scientific methodology. In other, newer departments such as sociology, efforts were made at establishing an independent identity. Bloch graduated in 1908 with degrees in both geography and history (Davies notes, given Bloch's later divergent interests, the significance of the two qualifications). He had a high respect for historical geography, then a speciality of French historiography, as practised by his tutor Vidal de la Blache (whose "Tableau de la géographie" Bloch had studied at the ÉNS) and by Lucien Gallois. Bloch applied unsuccessfully for a fellowship at the "Fondation Thiers".
As a result, he travelled to Germany in 1909 where he studied demography under Karl Bücher in Leipzig and religion under Adolf Harnack in Berlin; he did not, however, particularly socialise with fellow students while in Germany. He returned to France the following year and again applied to the "Fondation", this time successfully. Bloch researched the medieval Île-de-France in preparation for his thesis. This research was Bloch's first focus on rural history. His parents had moved house and now resided at the Avenue d'Orléans, not far from Bloch's quarters. Bloch's research at the Fondation—especially his research into the Capetian kings—laid the groundwork for his career. He began by creating maps of the Paris area illustrating where serfdom had thrived and where it had not. He also investigated the nature of serfdom, the culture of which, he discovered, was founded almost completely on custom and practice. His studies of this period formed Bloch into a mature scholar and first brought him into contact with other disciplines whose relevance he was to emphasise for most of his career. Serfdom as a topic was so broad that he touched on commerce, currency, popular religion, the nobility, as well as art, architecture and literature. His doctoral thesis—a study of 10th-century French serfdom—was titled "Rois et Serfs, un Chapitre d'Histoire Capétienne". Although it helped mould Bloch's ideas for the future, it did not, says Bryce Lyon, give any indication of the originality of thought that Bloch would later be known for, and was not vastly different from what others had written on the subject. Following his graduation, he taught at two lycées, first in Montpellier, a minor university town of 66,000 inhabitants. With Bloch working over 16 hours a week on his classes, there was little time for him to work on his thesis. He also taught at the University of Amiens. While there, he wrote a review of Febvre's first book, "Histoire de Franche-Comté".
Bloch intended to turn his thesis into a book, but the First World War intervened. Both Marc and Louis Bloch volunteered for service in the French Army. Although the Dreyfus Affair had soured Bloch's views of the French Army, he later wrote that his criticisms were only of the officers; he "had respect only for the men". Bloch was one of over 800 ÉNS students who enlisted; 239 were to be killed in action. On 2 August 1914 he was assigned to the 272nd Reserve Regiment. Within eight days he was stationed on the Belgian border where he fought in the Battle of the Meuse later that month. His regiment took part in the general retreat on the 25th, and the following day they were in Barricourt, in the Argonne. The march westward continued towards the River Marne—with a temporary recuperative halt in Termes—which they reached in early September. During the First Battle of the Marne, Bloch's troop was responsible for the assault and capture of Florent before advancing on La Gruerie. Bloch led his troop with shouts of "Forward the 18th!" They suffered heavy casualties: 89 men were either missing or known to be dead. Bloch enjoyed the early days of the war; like most of his generation, he had expected a short but glorious conflict. Gustave Bloch remained in France, wishing to be close to his sons at the front. Except for two months in hospital followed by another three recuperating, he spent the war in the infantry; he joined as a sergeant and rose to become the head of his section. Bloch kept a war diary from his enlistment. Very detailed in the first few months, it rapidly became more general in its observations. However, says the historian Daniel Hochedez, Bloch was aware of his role as both a "witness and narrator" to events and wanted as detailed a basis for his historiographical understanding as possible. 
The historian Rees Davies notes that although Bloch served in the war with "considerable distinction", it had come at the worst possible time both for his intellectual development and his study of medieval society. For the first time in his life, Bloch later wrote, he worked and lived alongside people he had never had close contact with before, such as shop workers and labourers, with whom he developed a great camaraderie. It was a completely different world to the one he was used to, being "a world where differences were settled not by words but by bullets". His experiences made him rethink his views on history, and influenced his subsequent approach to the world in general. He was particularly moved by the collective psychology he witnessed in the trenches. He later declared he knew of no better men than "the men of the Nord and the Pas de Calais" with whom he had spent four years in close quarters. His few references to the French generals were sparse and sardonic. Apart from the Marne, Bloch fought at the battles of the Somme, the Argonne, and the final German assault on Paris. He survived the war, which he later described as having been an "honour" to have served through. He had, however, lost many friends and colleagues. Among the closest of them, all killed in action, were Maxime David (died 1914), Antoine-Jules Bianconi (died 1915) and Ernest Babut (died 1916). Bloch himself was wounded twice and decorated for courage, receiving the Croix de Guerre and the Légion d'Honneur. He had joined as a non-commissioned officer, received an officer's commission after the Marne, and had been promoted to warrant officer and finally a captain in the fuel service ("Service des essences") before the war ended. He was clearly, says Lyon, both a good and a brave soldier; he later wrote, "I know only one way to persuade a troop to brave danger: brave it yourself".
While on front-line service, Bloch contracted severe arthritis which required him to retire regularly to the thermal baths of Aix-les-Bains for treatment. He later remembered very little of the historical events he found himself in, writing only that his memories were "a discontinuous series of images, vivid in themselves, but badly arranged, like a reel of motion picture film containing some large gaps and some reversals of certain scenes". Bloch later described the war, in a detached style, as having been a "gigantic social experience, of unbelievable richness". For example, he had a habit of noting the different coloured smoke that different shells made: percussion bombs gave off black smoke, timed bombs brown. He also remembered both the "friends killed at our side" and "the intoxication which had taken hold of us when we saw the enemy in flight". He also considered it to have been "four years of fighting idleness". Following the Armistice in November 1918, Bloch was demobilised on 13 March 1919. The war was fundamental in re-arranging Bloch's approach to history, although he never acknowledged it as a turning point. In the years following the war, a disillusioned Bloch rejected the ideas and the traditions that had formed his scholarly training. He rejected the political and biographical history which up until that point was the norm, along with what the historian George Huppert has described as a "laborious cult of facts" that accompanied it. In 1920, with the opening of the University of Strasbourg, Bloch was appointed "chargé de cours" (assistant lecturer) of medieval history. Alsace-Lorraine had been returned to France with the Treaty of Versailles; the status of the region was a contentious political issue in Strasbourg, its capital, which had a large German population. Bloch, however, refused to take either side in the debate; indeed, he appears to have avoided politics entirely.
Under Wilhelmine Germany, Strasbourg had rivalled Berlin as a centre for intellectual advancement, and the University of Strasbourg possessed the largest academic library in the world. Thus, says Stephan R. Epstein of the London School of Economics, "Bloch's unrivalled knowledge of the European Middle Ages was ... built on and around the French University of Strasbourg's inherited German treasures". Bloch also taught French to the few German students who were still at the Centre d'Études Germaniques at the University of Mainz during the Occupation of the Rhineland. He refrained from taking a public position when France occupied the Ruhr in 1923 over Germany's perceived failure to pay war reparations. Bloch began working energetically, and later said that the most productive years of his life were spent at Strasbourg. In his teaching, his delivery was halting. His approach sometimes appeared cold and distant—caustic enough to be upsetting—but conversely, he could also be both charismatic and forceful. Durkheim died in 1917, but the movement he began against the "smugness" that pervaded French intellectual thinking continued. Bloch had been greatly influenced by him, as Durkheim also considered the connections between historians and sociologists to be greater than their differences. Not only did he openly acknowledge Durkheim's influence, but Bloch "repeatedly seized any opportunity to reiterate" it, according to R. C. Rhodes. At Strasbourg he again met Febvre, who was now a leading historian of the 16th century. Modern and medieval seminars were adjacent to each other at Strasbourg, and attendance often overlapped. Their meeting has been called a "germinal event for 20th-century historiography", and they were to work closely together for the rest of Bloch's life. Febvre was some years older than Bloch and was probably a great influence on him.
They lived in the same area of Strasbourg and became kindred spirits, often going on walking trips across the Vosges and other excursions. Bloch's fundamental views on the nature and purpose of the study of history were established by 1920. That same year he defended, and subsequently published, his thesis. It was not as extensive a work as had been intended due to the war. There was a provision in French further education allowing doctoral candidates whose research the war had interrupted to submit only a small portion of the full-length thesis usually required. It sufficed, however, to demonstrate his credentials as a medievalist in the eyes of his contemporaries. He began publishing articles in Henri Berr's "Revue de Synthèse Historique". Bloch also published his first major work, "Les Rois Thaumaturges", which he later described as "ce gros enfant" (this big child). In 1928, Bloch was invited to lecture at the Institute for the Comparative Study of Civilizations in Oslo. Here he first expounded publicly his theories on total, comparative history: "it was a compelling plea for breaking out of national barriers that circumscribed historical research, for jumping out of geographical frameworks, for escaping from a world of artificiality, for making both horizontal and vertical comparisons of societies, and for enlisting the assistance of other disciplines". His Oslo lecture, called "Towards a Comparative History of Europe", formed the basis of his next book, "Les Caractères Originaux de l'Histoire Rurale Française". The following year he founded the historical journal "Annales" with Febvre. One of its aims was to counteract the administrative school of history, which Davies says had "committed the arch error of emptying history of human element". As Bloch saw it, it was his duty to correct that tendency.
Both Bloch and Febvre were keen to refocus French historical scholarship on social rather than political history and to promote the use of sociological techniques. The journal avoided narrative history almost completely. The inaugural issue of the "Annales" stated the editors' basic aims: to counteract the arbitrary and artificial division of history into periods, to re-unite history and social science as a single body of thought, and to promote the acceptance of all other schools of thought into historiography. As a result, the "Annales" often contained commentary on contemporary, rather than exclusively historical, events. Editing the journal led to Bloch forming close professional relationships with scholars in different fields across Europe. The "Annales" was the only academic journal to boast a preconceived methodological perspective. Neither Bloch nor Febvre wanted to present a neutral facade. During its first decade of publication, it maintained a staunchly left-wing position. Henri Pirenne, a Belgian historian who wrote comparative history, closely supported the new journal. Before the war he had acted in an unofficial capacity as a conduit between French and German schools of historiography. Fernand Braudel—who was himself to become an important member of the Annales School after the Second World War—later described the journal's management as being a chief executive officer—Bloch—with a minister of foreign affairs—Febvre. The comparative method allowed Bloch to discover instances of uniqueness within aspects of society, and he advocated it as a new kind of history. According to Bryce Lyon, Bloch and Febvre, "promising to perform all the burdensome tasks" themselves, asked Pirenne to become editor-in-chief of "Annales", to no avail. Pirenne remained a strong supporter, however, and had an article published in the first volume in 1929. He became close friends with both Bloch and Febvre.
He was particularly influential on Bloch, who later said that Pirenne's approach should be the model for historians and that "at the time his country was fighting beside mine for justice and civilisation, [he] wrote in captivity a history of Europe". The three men kept up a regular correspondence until Pirenne's death in 1935. In 1923, Bloch attended the inaugural meeting of the International Congress on Historical Studies (ICHS) in Brussels, which was opened by Pirenne. Bloch was a prolific reviewer for "Annales", and during the 1920s and 1930s he contributed over 700 reviews. These were criticisms of specific works but, more generally, also represented his own fluid thinking during this period. The reviews demonstrate the extent to which he shifted his thinking on particular subjects. In 1930, both keen to make a move to Paris, Febvre and Bloch applied to the "École pratique des hautes études" for a position: both failed. Three years later Febvre was elected to the Collège de France. He moved to Paris, and in doing so, says Fink, became all the more aloof. This placed a strain on his relations with Bloch, although they communicated regularly by letter and much of their correspondence has been preserved. In 1934, Bloch was invited to speak at the London School of Economics. There he met Eileen Power, R. H. Tawney and Michael Postan, among others. While in London, he was asked to write a section of the "Cambridge Economic History of Europe"; at the same time, he also attempted to foster interest in the "Annales" among British historians. He later told Febvre that in some ways he felt he had a closer affinity with academic life in England than that of France; for example, he drew a comparison between the "Bibliothèque Nationale" and the British Museum. During this period he supported the Popular Front politically. Although he did not believe it would do any good, he signed Alain's—Émile Chartier's pseudonym—petition against Paul Boncour's militarisation laws in 1935.
While he was opposed to the growth of European fascism, he also objected to "demagogic appeals to the masses" to fight it, as the Communist Party was doing. Febvre and Bloch were both firmly on the left, although with different emphases. Febvre, for example, was more militantly Marxist than Bloch, while the latter criticised both the pacifist left and corporate trade unionism. In 1934, Étienne Gilson sponsored Bloch's candidacy for a chair at the Collège de France. The College, says the historian Eugen Weber, was Bloch's "dream" appointment, although one never to be realised, as it was one of the few (possibly the only) institutions in France where personal research was central to lecturing. Camille Jullian had died the previous year, and his position was now available. While he had lived, Jullian had wished for his chair to go to one of his students, Albert Grenier, and after his death, his colleagues generally agreed with him. However, Gilson proposed that not only should Bloch be appointed, but that the position be redesignated for the study of comparative history. Bloch, says Weber, enjoyed and welcomed new schools of thought and ideas, but mistakenly believed the College should do so also. The College did not. The contest between Bloch and Grenier was not just the struggle for one post between two historians, but the path that historiography within the College would take for the next generation. To complicate the situation further, the country was in both political and economic crises, and the College had had its budget slashed by 10%. This made another new chair financially unviable, no matter who filled it. By the end of the year, and with further retirements, the College had lost four professors: it could replace only one, and Bloch was not appointed. Bloch personally suspected his failure was due to anti-Semitism and Jewish quotas.
At the time, Febvre blamed it on a distrust of Bloch's approach to scholarship by the academic establishment, although Epstein has argued that this could not have been an over-riding fear, as Bloch's next appointment indicated. Henri Hauser retired from the Sorbonne in 1936, and his chair in economic history was up for appointment. Bloch—"distancing himself from the encroaching threat of Nazi Germany"—applied for and was appointed to the position. This was a more demanding position than the one he had applied for at the College. Weber has suggested Bloch was appointed because, unlike at the College, he had not come into conflict with many faculty members. Weber researched the archives of the College in 1991 and discovered that Bloch had indicated an interest in working there as early as 1928, even though that would have meant him being appointed to the chair in numismatics rather than history. In a letter to the recruitment board written the same year, Bloch indicated that although he was not officially applying, he felt that "this kind of work (which he claimed to be alone in doing) deserves to have its place one day in our great foundation of free scientific research". H. Stuart Hughes says of Bloch's Sorbonne appointment: "In another country, it might have occasioned surprise that a medievalist like Bloch should have been named to such a chair with so little previous preparation. In France it was only to be expected: no one else was better qualified". His first lecture was on the theme of history as a never-ending process, a thing never to be finished. Davies says his years at the Sorbonne were to be "the most fruitful" of Bloch's career, and according to Epstein he was by now the most significant French historian of his age. Friedman says that in 1936 he considered using Marx in his teaching, with the intention of bringing "some fresh air" into the Sorbonne.
The same year, Bloch and his family visited Venice, where they were chaperoned by the Italian historian Gino Luzzatto. During this period they were living in the Sèvres – Babylone area of Paris, next to the Hôtel Lutetia. By now, "Annales" was being published six times a year to keep on top of current affairs; however, its "outlook was gloomy". In 1938, the publishers withdrew support and, experiencing financial hardship, the journal moved to cheaper offices, raised its prices and returned to publishing quarterly. Febvre increasingly opposed the direction in which Bloch wanted to take the journal. Febvre wanted it to be a "journal of ideas", whereas Bloch saw it as a vehicle for the exchange of information to different areas of scholarship. By early 1939, war was known to be imminent. Bloch, in spite of his age, which automatically exempted him, had a reserve commission for the army holding the rank of captain. He had already been mobilised twice in false alarms. In August 1939, he and his wife Simonne intended to travel to the ICHS in Bucharest. In autumn 1939, just before the outbreak of war, Bloch published the first volume of "Feudal Society". On 24 August 1939, at the age of 53, Bloch was mobilised for a third time, now as a fuel supply officer. He was responsible for the mobilisation of the French Army's massive motorised units. This involved him undertaking such a detailed assessment of the French fuel supply that he later wrote he was able to "count petrol tins and ration every drop" of fuel he obtained. During the first few months of the war, called the Phoney War, he was stationed in Alsace. He possessed none of the eager patriotism with which he had approached the First World War. Instead, Carole Fink suggests that because Bloch felt himself to have been discriminated against, he had "begun to distance himself intellectually and emotionally from his comrades and leaders".
Back in Strasbourg, his main duty was the evacuation of civilians to behind the Maginot Line. Further transfers occurred, and Bloch was re-stationed to Molsheim, Saverne, and eventually to the 1st Army headquarters in Picardy, where he joined the Intelligence Department, in liaison with the British. Bloch was largely bored between 1939 and May 1940, as he often had little work to do. To pass the time and occupy himself, he decided to begin writing a history of France. To this end, he purchased notebooks and began to work out a structure for the work. Although never completed, the pages he managed to write, "in his cold, poorly lit rooms", eventually became the kernel of "The Historian's Craft". At one point he expected to be invited to neutral Belgium to deliver a series of lectures in Liège. These never took place, however, disappointing Bloch very much; he had planned to speak on Belgian neutrality. He also turned down the opportunity to travel to Oslo as an attaché to the French Military Mission there. He was considered an excellent candidate for the position due to his fluency in Norwegian and knowledge of the country. Bloch considered it and came close to accepting; ultimately, though, it was too far from his family, whom he already saw too rarely. Some academics had escaped France for The New School in New York City, and the School also invited Bloch. He refused, possibly because of difficulties in obtaining visas: the US government would not grant visas to every member of his family. In May 1940, the German army outflanked the French and forced them to withdraw. Facing capture in Rennes, Bloch disguised himself in civilian clothes and lived under German occupation for a fortnight before returning to his family at their country home in Fougères. 
He fought at the Battle of Dunkirk in May–June 1940 and was evacuated to England with the British Expeditionary Force on the requisitioned steamer MV "Royal Daffodil", a crossing he later described as taking place "under golden skies coloured by the black and fawn smoke". Before the evacuation, Bloch ordered the immediate burning of fuel supplies. Although he could have remained in Britain, he chose to return to France the day he arrived because his family was still there. Bloch felt that the French Army lacked the "esprit de corps" or "fervent fraternity" of the French Army in the First World War. He saw the French generals of 1940 as behaving as unimaginatively as Joseph Joffre had in the first war. He did not, however, believe that the earlier war was an indication of how the next would progress: "no two successive wars", he wrote in 1940, "are ever the same war". To Bloch, France collapsed because her generals failed to capitalise on the best qualities humanity possessed—character and intelligence—owing to their own "sluggish and intractable" progress since the First World War. He was horrified by the defeat which, Carole Fink has suggested, he saw as being worse, for both France and the world, than her previous defeats at Waterloo and Sedan. Bloch located the reasons for France's sudden defeat not in the rumours of British betrayal, communist fifth columns or fascist plots, but in her failure to motorise and, perhaps more importantly, her failure to understand what motorisation meant. It was the latter, he believed, that allowed the French army to become bogged down in Belgium, compounded by its slow retreat. He wrote in "Strange Defeat" that a fast, motorised retreat might have saved the army. Two-thirds of France was occupied by Germany. 
Bloch, one of the only elderly academics to volunteer, was demobilised soon after Philippe Pétain's government signed the Armistice of 22 June 1940, forming Vichy France in the remaining southern third of the country. Bloch moved south, where in January 1941 he applied for and received one of only ten exemptions from the Vichy government's ban on employing Jewish academics. This was probably due to Bloch's pre-eminence in the field of history. He was allowed to work at the "University of Strasbourg-in-exile" at Clermont-Ferrand, and later at Montpellier. The latter, further south, was beneficial to his wife's health, which was in decline. The dean of faculty at Montpellier was Augustin Fliche, an ecclesiastical historian of the Middle Ages, who, according to Weber, "made no secret of his antisemitism". He disliked Bloch all the more because Bloch had once given him a poor review. Fliche not only opposed Bloch's transfer to Montpellier but made his life uncomfortable once he was there. The Vichy government was attempting to promote itself as a return to traditional French values. Bloch condemned this as propaganda; the rural idyll to which Vichy said it would return France was impossible, he said, "because the idyllic, docile peasant life of the French right had never existed". Bloch's professional relationship with Febvre was also under strain. The Nazis wanted French editorial boards to be stripped of Jews in accordance with German racial policies; Bloch advocated disobedience, while Febvre was passionate about the survival of "Annales" at any cost. He believed that it was worth making concessions to keep the journal afloat and to keep France's intellectual life alive. Bloch rejected out of hand any suggestion that he should, in his words, "fall into line". Febvre also asked Bloch to resign as joint editor of the journal. Febvre feared that Bloch's involvement, as a Jew in Nazi-occupied France, would hinder the journal's distribution. 
Bloch, forced to accede, turned the "Annales" over to the sole editorship of Febvre, who then changed the journal's name to "Mélanges d'Histoire Sociale". Bloch was forced to write for it under the pseudonym Marc Fougères. The journal's bank account was also in Bloch's name; this too had to go. Henri Hauser supported Febvre's position, and Bloch was offended when Febvre intimated that Hauser had more to lose than either of them. This was because, whereas Bloch had been allowed to retain his research position, Hauser had not. Bloch interpreted Febvre's comment as implying that Bloch was not a victim. Bloch, alluding to his ethnicity, replied that the difference between them was that, whereas he feared for his children because of their Jewishness, Febvre's children were in no more danger than those of any other man in the country. The Annalist historian André Burguière suggests Febvre did not really understand the position Bloch, or any French Jew, was in. Already damaged by this disagreement, Bloch's and Febvre's relationship declined further after the former was forced to leave his library and papers in his Paris apartment following his move to the Vichy zone. He had attempted to have them transported to his Creuse residence, but the Nazis—who had made their headquarters in the hotel next to Bloch's apartment—looted his rooms and confiscated his library in 1942. Bloch held Febvre responsible for the loss, believing he could have done more to prevent it. Bloch's mother had recently died, and his wife was ill; furthermore, although he was permitted to work and live, he faced daily harassment. On 18 March 1941, Bloch made his will in Clermont-Ferrand. The Polish social historian Bronisław Geremek suggests that this document hints that Bloch in some way foresaw his death, as he emphasised that nobody had the right to avoid fighting for their country. 
In March 1942, Bloch and other French academics, such as Georges Friedmann and Émile Benveniste, refused to join or condone the establishment of the Union Générale des Israélites de France by the Vichy government, a group intended to include all Jews in France, both native-born and immigrant. In November 1942, as part of an operation known as Case Anton, the German Army crossed the demarcation line and occupied the territory previously under direct Vichy rule. This was the catalyst for Bloch's decision to join the French Resistance sometime between late 1942 and March 1943. Bloch was careful not to join simply because of his ethnicity or the laws that were passed against it. As Burguière has pointed out, and Bloch would have known, taking such a position would effectively "indict all Jews who did not join". Burguière has located Bloch's motive for joining the Resistance in his characteristic refusal to mince his words or to play half a role. Bloch had previously expressed the view that "there can be no salvation where there is not some sacrifice". He sent his family away, and returned to Lyon to join the underground. In spite of knowing a number of "francs-tireurs" around Lyon, Bloch still found it difficult to join them because of his age. Although the Resistance recruited heavily among university lecturers—and indeed, Bloch's alma mater, the École Normale Supérieure, provided it with many members—he commented in exasperation to Simonne that he "did not know it was so difficult to offer one's life". The French historian and philosopher François Dosse quotes a member of the "francs-tireurs" active with Bloch as later describing how "that eminent professor came to put himself at our command simply and modestly". Bloch used his professional and military skills on their behalf, writing propaganda for them and organising their supplies and materiel, eventually becoming a regional organiser. 
Bloch also joined the "Mouvements Unis de la Résistance" (Unified Resistance Movement, or MUR), section R1, and edited the underground newsletter "Cahiers Politiques". He went under various pseudonyms: Arpajon, Chevreuse, Narbonne. Often on the move, Bloch used archival research as his excuse for travelling. The journalist-turned-resistance fighter Georges Altman later told how the man he had known—originally "a man, made for the creative silence of gentle study, with a cabinet full of books"—was now "running from street to street, deciphering secret letters in some Lyonnais Resistance garret"; all Bloch's notes were kept in code. For the first time, suggests Lyon, Bloch was forced to consider the role of the individual in history, rather than the collective; perhaps by then even realising he should have done so earlier. Bloch was arrested at the Place du Pont, Lyon, during a major roundup by the Vichy "milice" on 8 March 1944, and handed over to Klaus Barbie of the Lyon Gestapo. Bloch was using the pseudonym "Maurice Blanchard", and in appearance was "an ageing gentleman, rather short, grey-haired, bespectacled, neatly dressed, holding a briefcase in one hand and a cane in the other". He was renting a room above a dressmaker's on the rue des Quatre Chapeaux; the Gestapo raided the place the following day. It is possible Bloch had been denounced by a woman working in the shop. In any case, they found a radio transmitter and many papers. Bloch was imprisoned in Montluc prison, during which time his wife died. He was tortured with, among other methods, ice-cold baths which knocked him out. His ribs and a wrist were broken, and he was returned to his cell unconscious. He eventually caught bronchopneumonia and fell seriously ill. It was later claimed that he gave away no information to his interrogators, and that while incarcerated he taught French history to other inmates. In the meantime, the Allies had invaded Normandy on 6 June 1944. 
As a result, the Nazi regime was keen to evacuate and wanted to "liquidate their holdings" in France; this meant disposing of as many prisoners as they could. Between May and June 1944, around 700 prisoners were shot in scattered locations around southern France, to avoid the risk of the killings becoming common knowledge and inviting Resistance reprisals. Among those killed was Bloch, one of a group of 26 Resistance prisoners picked out in Montluc and driven along the Saône towards Trévoux on the night of 16 June 1944. Driven to a field near Saint-Didier-de-Formans, they were shot by the Gestapo in groups of four. According to Lyon, Bloch spent his last moments comforting a 16-year-old beside him who was worried that the bullets might hurt. Bloch fell first, reputedly shouting "Vive la France!" before being shot. A coup de grâce was delivered. One man managed to crawl away and later provided a detailed report of events; the bodies were discovered on 26 June. For some time Bloch's death was merely a "dark rumour" until it was confirmed to Febvre. At his burial, his own words were read at the graveside. In them, Bloch proudly acknowledged his Jewish ancestry while denying religion in favour of being, foremost, a Frenchman. He described himself thus: "a stranger to any formal religious belief as well as any supposed racial solidarity, I have felt myself to be, quite simply, French before anything else". According to his instructions, no orthodox prayers were said over his grave, and on it was to be carved his epitaph "dilexi veritatem" ("I have loved the truth"). In 1977, his ashes were transferred from St-Didier to Fougères and the gravestone was inscribed as he requested. Febvre had not approved of Bloch's decision to join the Resistance, believing it to be a waste of his brain and talents, although, as Davies points out, "such a fate befell many other French intellectuals". 
Febvre continued publishing "Annales" ("if in a considerably modified form", comments Beatrice Gottlieb), dividing his time between his country château in the Franche-Comté and working at the École Normale in Paris. This caused some outrage, and, after liberation, when classes were returning to a degree of normality, he was booed by his students at the Sorbonne. Bloch's first book was "L'Île de France", published in 1913. A small book, described by Lyon as "light, readable and far from trivial", it shows the influence of H. J. Fleure in how Bloch combined discussion of geography, language and archaeology. It was translated into English in 1971. Davies says 1920's "Rois et Serfs" ("Kings and Serfs") is a "long and rather meandering essay", although it had the potential to be Bloch's definitive monograph on the single topic that "might have evoked his genius at his fullest": the transition from antiquity to the Middle Ages. Loyn also describes it as a "loose-knit monograph", more a programme for moving forward than a full-length academic text. Bloch's most important early work—based on his doctoral dissertation—was published in 1924 as "Les Rois Thaumaturges"; it was published in English as "The Royal Touch: Monarchy and Miracles in France and England" in 1973. Here he examined medieval belief in the royal touch, and the degree to which kings used such a belief for propaganda purposes. It was also the first example of Bloch's interdisciplinary approach, as he used research from the fields of anthropology, medicine, psychology and iconography. It has been described as Bloch's first masterwork. It comprises a 500-page descriptive analysis of the medieval view that royalty effectively possessed supernatural powers. 
Verging on the antiquarian in his microscopic approach, and much influenced by the work of Raymond Crawfurd—who saw it as a "dubious if exotic" aspect of medicine, rather than history—Bloch makes diverse use of evidence from different disciplines and periods, assessing the King's Evil as far forward as the 19th century. The book had originally been inspired by discussions Bloch had with Louis, who acted as a medical consultant while his brother worked on it. Bloch concluded that the royal touch involved a degree of mass delusion among those who witnessed it. 1931 saw the publication of "Les caractères originaux de l'histoire rurale française". In this—what Bloch called "mon petit livre"—he used both the traditional techniques of historiographical analysis (for example, scrutinising documents, manuscripts, accounts and rolls) and his newer, multi-faceted approach, with a heavy emphasis on maps as evidence. Bloch did not allow his new methods to detract from the former: he knew, says the historian Daniel Chirot, that the traditional methods of research were "the bread and butter of historical work. One had to do it well to be a minimally accepted historian". The first of "two classic works", says Hughes, and possibly his finest, it studies the relationship between physical geographical location and the development of political institutions. Loyn has called Bloch's assessment of medieval French rural law great, but with the addendum that "he is not so good at describing ordinary human beings. He is no Eileen Power, and his peasants do not come to life as hers do". In this study, Chirot says, Bloch "entirely abandoned the concept of linear history, and wrote, instead, from the present or near past into the distant past, and back towards the present". Febvre wrote the introduction to the book for its publication, and described the technique as "reading the past from the present", or what Bloch saw as starting with the known and moving into the unknown. 
"La Société Féodale" was published in two volumes ("The Growth of Ties of Dependence" and "Social Classes and Political Organisation") in 1939, and was translated into English as "Feudal Society" in 1961. Bloch described the study as something of a sketch, although Stirling has called it his "most enduring work ... still a cornerstone of medieval curricula" in 2007 and representative of Bloch at the peak of his career. In "Feudal Society" he used research from his broadest range of disciplines yet to examine feudalism as comprehensively as possible—most notably including a study of feudal Japan. He also compared areas where feudalism was imposed rather than organically developed (such as England after the Norman conquest) with those where it was never established (such as Scotland and Scandinavia). Bloch defined feudal society as, "from the peasants' point of view", politically fragmentary, with the peasants ruled by an aristocratic upper class. Daniel Chirot has described "The Royal Touch", "French Rural History" and "Feudal Society"—all of which concentrate on the French Middle Ages—as Bloch's most significant works. Conversely, his last two—"The Historian's Craft" and "Strange Defeat"—have been described as unrepresentative of his historical approach, in that they discuss contemporary events in which Bloch was personally involved, and were written without access to primary sources. "Strange Defeat" was uncompleted at the time of his death, and both were published posthumously in 1949. Davies has described "The Historian's Craft" as "beautifully sensitive and profound"; the book was written in response to his son, Étienne, asking his father, "what is history?". In his introduction, Bloch addressed a dedication to Febvre. Likewise, "Strange Defeat", in the words of R. R. Davies, is a "damning and even intolerant analysis" of the long- and short-term reasons France fell in 1940. Bloch affirmed that the book was more than a personal memoir; rather, he intended it as a deposition and a testament. 
It contains, "uncomfortably and honestly", Bloch's own self-appraisal. Bloch emphasises failures in the French mindset: the loss of morale among the soldiery and the failed education of the officers—effectively a failure of both character and intelligence on the part of both. He condemns the "mania" for testing in education which, he felt, treated testing as an end in itself, draining generations of Frenchmen and Frenchwomen of originality, initiative and thirst for knowledge, and instilling an "appreciation only of successful cheating and sheer luck". "Strange Defeat" has been called Bloch's autopsy of the France of the inter-war years. A collection of essays was published in English in 1961 as "Land and Work in Medieval Europe". The long essay was a favoured medium of Bloch's, including, Davies says, "the famous essay on the water mill and the much-challenged one on the problem of gold in medieval Europe". In the former, Bloch saw one of the most important technological advances of the era; in the latter, the effective creation of a European currency. One of his best essays, according to Davies—"Liberté et servitude personnelles au Moyen Âge, particulièrement en France"—was not published when it could have been; this, he remarked, was "an unpardonable omission". Davies says Bloch was "no mean disputant" in historiographical debate, often reducing an opponent's argument to its most basic weaknesses. His approach was a reaction against the prevailing ideas within French historiography of the day which, when he was young, were still very much based on those of the German School, pioneered by Leopold von Ranke. Within French historiography this led to a forensic focus on administrative history, as expounded by historians such as Ernest Lavisse. While he acknowledged his and his generation of historians' debt to their predecessors, he considered that they treated historical research as being little more meaningful than detective work. 
Bloch later wrote how, in his view, "There is no waste more criminal than that of erudition running ... in neutral gear, nor any pride more vainly misplaced than that in a tool valued as an end in itself". He believed it was wrong for historians to focus on the evidence rather than the human condition of whatever period they were discussing. Administrative historians, he said, understood every element of a government department without understanding anything of those who worked in it. Bloch was very much influenced by Ferdinand Lot, who had already written comparative history; by the work of Jules Michelet and Fustel de Coulanges, with their emphasis on social history; by Durkheim's sociological methodology; by François Simiand's social economics; and by Henri Bergson's philosophy of collectivism. Bloch's emphasis on using comparative history harked back to the Enlightenment, when writers such as Voltaire and Montesquieu decried the notion that history was a linear narrative of individuals and pushed for a greater use of philosophy in studying the past. Bloch condemned the "German-dominated" school of political economy, which he considered "analytically unsophisticated and riddled with distortions". Equally condemned were then-fashionable racial theories of national identity. Bloch believed that political history on its own could not explain deeper socioeconomic trends and influences. Bloch did not see social history as a separate field within historical research. Rather, he saw all aspects of history as inherently part of social history. By definition, all history was social history—an approach he and Febvre termed "histoire totale"—rather than a focus on points of fact such as dates of battles, reigns, and changes of leaders and ministries, with the historian confined to what he could identify and verify. 
Bloch explained in a letter to Pirenne that, in Bloch's eyes, the historian's most important quality was the ability to be surprised by what he found—"I am more and more convinced of this", he said; "damn those of us who believe everything is normal!" Bloch identified two types of historical era, the generational era and the era of civilisation, defined by the speed with which they underwent change and development. In the latter type of period, which changed gradually, Bloch included physical, structural and psychological aspects of society, while the generational era could experience fundamental change over relatively few generations. Bloch founded what modern French historians call the "regressive method" of historical scholarship. This method avoids the necessity of relying solely on historical documents as a source, by looking at the issues visible in later historical periods and drawing from them what they may have looked like centuries earlier. Davies says this was particularly useful in Bloch's study of village communities, as "the strength of communal traditions often preserves earlier customs in a more or less fossilized state". Bloch studied peasant tools in museums and in action, and discussed their use with the people themselves. He believed that in observing a plough or an annual harvest one was observing history, as more often than not both the technology and the technique were much the same as they had been hundreds of years earlier. However, the individuals themselves were not his focus, which was on "the collectivity, the community, the society". He wrote about the peasantry rather than the individual peasant; says Lyon, "he roamed the provinces to become familiar with French agriculture over the long term, with the contours of peasant villages, with agrarian routine, its sounds and smells". 
Bloch claimed that both fighting alongside the peasantry in the war and his historical research into their past had shown him "the vigorous and unwearied quickness" of their minds. Bloch described his area of study as the comparative history of European society and explained why he did not identify himself as a medievalist: "I refuse to do so. I have no interest in changing labels, nor in clever labels themselves, or those that are thought to be so." He did not leave a full study of his methodology, although it can be effectively reconstructed piecemeal. He believed that history was the "science of movement", but did not accept, for example, the aphorism that one could protect against the future by studying the past. He did not take a revolutionary approach to historiography; rather, he wished to combine the schools of thinking that preceded him into a new, broad approach to history and, as he wrote in 1926, to bring to history "ce murmure qui n'était pas de la mort" ("the whisper that was not death"). He criticised what he called the "idol of origins", where historians concentrate too heavily on the formation of something to the detriment of studying the thing itself. Bloch's comparative history led him to tie his researches in with those of many other schools: the social sciences, linguistics, philology, comparative literature, folklore, geography and agronomy. Similarly, he did not restrict himself to French history. At various points in his writings Bloch commented on medieval Corsican, Finnish, Japanese, Norwegian and Welsh history. R. R. Davies has compared Bloch's intelligence with what he calls that of "the Maitland of the 1890s", regarding his breadth of reading, use of language and multidisciplinary approach. Unlike Maitland, however, Bloch also wished to synthesise scientific history with narrative history. According to Stirling, he managed to achieve "an imperfect and volatile imbalance" between them. 
Bloch did not believe that it was possible to understand or recreate the past by the mere act of compiling facts from sources; rather, he described a source as a witness, "and like most witnesses", he wrote, "it rarely speaks until one begins to question it". Likewise, he viewed historians as detectives who gathered evidence and testimony, as "juges d'instruction" (examining magistrates) "charged with a vast enquiry of the past". Bloch was interested not only in particular periods or aspects of history but in the importance of history itself, regardless of period, as a subject of intellectual exercise. Davies writes, "he was certainly not afraid of repeating himself; and, unlike most English historians, he felt it his duty to reflect on the aims and purposes of history". Bloch considered it a mistake for the historian to confine himself overly rigidly to his own discipline. Much of his editorialising in "Annales" emphasised the importance of parallel evidence to be found in neighbouring fields of study, especially archaeology, ethnography, geography, literature, psychology, sociology, technology, air photography, ecology, pollen analysis and statistics. In Bloch's view, this provided not just a broader field of study, but also a far more comprehensive understanding of the past than would be possible from relying solely on historical sources. Bloch's favourite example of how technology impacts society was the watermill. This can be summed up thus: the watermill was known but little used in the classical period; became an economic necessity in the early medieval period; and finally, in the later Middle Ages, represented a scarce resource increasingly concentrated in the nobility's hands. Bloch also emphasised the importance of geography in the study of history, and particularly in the study of rural history. 
He suggested that, fundamentally, history and geography were the same subjects, although he criticised geographers for failing to take historical chronology or human agency into account. Using a farmer's field as an example, he described it as "fundamentally, a human work, built from generation to generation". Bloch also condemned the view that rural life was immobile. He believed that the Gallic farmer of the Roman period was inherently different from his 18th-century descendants, cultivating different plants in a different way. He saw England's and France's agricultural history as developing similarly, and indeed identified an enclosure movement in France across the 15th, 16th and 17th centuries, on the basis that one had occurred in England in similar circumstances. Bloch also took a deep interest in linguists' use of the comparative method. He believed that using the method in historical research could prevent the historian from ignoring the broader context in the course of his detailed local researches: "a simple application of the comparative method exploded the ethnic theories of historical institutions, beloved of so many German historians". Bloch was not a tall man, but he was an elegant dresser. Eugen Weber has described Bloch's handwriting as "impossible". He had expressive blue eyes, which could be "mischievous, inquisitive, ironic and sharp". Febvre later said that when he first met Bloch in 1902, he found a slender young man with "a timid face". Bloch was proud of his family's history of defending France: he later wrote, "My great-grandfather was a serving soldier in 1793; ... my father was one of the defenders of Strasbourg in 1870 ... I was brought up in the traditions of patriotism which found no more fervent champions than the Jews of the Alsatian exodus". Bloch was a committed supporter of the Third Republic and politically left wing. 
He was not a Marxist, although he was impressed by Karl Marx himself, who he thought was a great historian, if possibly "an unbearable man" personally. He viewed contemporary political questions purely as moral decisions to be made. He did not, however, let politics enter into his work; indeed, he questioned the very idea of a historian studying politics. He believed that society should be governed by the young, and, although politically he was a moderate, he noted that revolutions generally promote the young over the old: "even the Nazis had done this, while the French had done the reverse, bringing to power a generation of the past". According to Epstein, following the First World War, Bloch presented a "curious lack of empathy and comprehension for the horrors of modern warfare", while John Lewis Gaddis has found Bloch's failure to condemn Stalinism in the 1930s "disturbing". Gaddis suggests that Bloch "had ample evidence of Stalin's crimes and yet sought to shroud them in utilitarian calculations about the price of what he called 'progress'". Although Bloch was very reserved—and later acknowledged that he had generally been old-fashioned and "timid" with women—he was good friends with Lucien Febvre and Christian Pfister. In July 1919 he married Simonne Vidal, a "cultivated and discreet, timid and energetic" woman, at a Jewish wedding. Her father was the "Inspecteur-Général des Ponts et Chaussées", and a very prosperous and influential man. Undoubtedly, says Friedman, his wife's family wealth allowed Bloch to focus on his research without having to depend on the income he made from it. Bloch was later to say he had found great happiness with her, and that he believed her to have also found it with him. They had six children together, four sons and two daughters. The eldest two were a daughter, Alice, and a son, Étienne. As his father had done with him, Bloch took a great interest in his children's education, and regularly helped with their homework. 
He could, though, be "caustically critical" of his children, particularly Étienne. Bloch accused him in one of his wartime letters of having poor manners, being lazy and stubborn, and of being possessed occasionally by "evil demons". Regarding the facts of life, Bloch told Étienne to attempt always to avoid what Bloch termed "contaminated females". Bloch was certainly agnostic, if not atheist, in matters of religion. His son Étienne later said of his father, "in his life as well as his writings not even the slightest trace of a supposed Jewish identity" can be found. "Marc Bloch was simply French". Some of his pupils believed him to be an Orthodox Jew, but Loyn says this is incorrect. While Bloch's Jewish roots were important to him, this was the result of the political tumult of the Dreyfus years, said Loyn: "it was only anti-semitism that made him want to affirm his Jewishness". Bloch's brother Louis became a doctor, and eventually the head of the diphtheria section of the Hôpital des Enfants-Malades. Louis died prematurely in 1922. Their father died in March the following year. Following these deaths, Bloch took on responsibility for his ageing mother as well as his brother's widow and children. Eugen Weber has suggested that Bloch was probably a monomaniac who, in Bloch's own words, "abhorred falsehood". He also abhorred, as a result of both the Franco-Prussian War and more recently the First World War, German nationalism. This extended to that country's culture and scholarship, and is probably the reason he never debated with German historians. Indeed, in Bloch's later career, he rarely mentioned even those German historians with whom he must, professionally, have felt an affinity, such as Karl Lamprecht. Lyon says Lamprecht had denounced what he saw as the German obsession with political history and had focused on art and comparative history, thus "infuriat[ing] the "Rankianer"". 
Bloch once commented, on English historians, that "en Angleterre, rien qu'en Angleterre" ("in England, only England"). He was not, though, particularly critical of English historiography, and respected the long tradition of rural history in that country as well as, more materially, the government funding that went into historical research there. It is possible, argues Weber, that had Bloch survived the war, he would have stood to be appointed Minister of Education in a post-war government and reformed the education system he had condemned for losing France the war in 1940. Instead, in 1948, his son Étienne offered his father's papers to the Archives Nationales for deposit, but they rejected the offer. As a result, the material was placed in the vaults of the École Normale Supérieure, "where it lay untouched for decades". Intellectual historian Peter Burke named Bloch the leader of what he called the "French Historical Revolution", and Bloch became an icon for the post-war generation of new historians. Although he has been described as being, to some extent, the object of a cult in both England and France—"one of the most influential historians of the twentieth century" by Stirling, and "the greatest historian of modern times" by John H. Plumb—this is a reputation mostly acquired posthumously. Henry Loyn suggests it is also one which would have amused and amazed Bloch. According to Stirling, this posed a particular problem within French historiography when Bloch effectively had martyrdom bestowed upon him after the war, leading to much of his work being overshadowed by the last months of his life. This led to "indiscriminate heaps of praise under which he is now almost hopelessly buried". This is at least partly the fault of historians themselves, who have not critically re-examined Bloch's work but rather treat him as a fixed and immutable aspect of the historiographical background. 
At the turn of the millennium "there is a woeful lack of critical engagement with Marc Bloch's writing in contemporary academic circles", according to Stirling. His legacy has been further complicated by the fact that the second generation of Annalistes, led by Fernand Braudel, has "co-opted his memory", combining Bloch's academic work and Resistance involvement to create "a founding myth". The aspects of his life which made Bloch easy to beatify have been summed up by Henry Loyn as "Frenchman and Jew, scholar and soldier, staff officer and Resistance worker ... articulate on the present as well as the past". The first critical biography of Bloch did not appear until Carole Fink's "Marc Bloch: A Life in History" was published in 1989. This, wrote S. R. Epstein, was the "professional, extensively researched and documented" story of Bloch's life, and, he commented, probably had to "overcome a strong sense of protectiveness among the guardians of Bloch's and the "Annales" memory". Since then, continuing scholarship—such as that by Stirling, who calls Bloch a visionary, although a "flawed" one—has been more critically objective about Bloch's recognisable weaknesses. For example, although he was a keen advocate for chronological precision and textual accuracy, his only major work in this area, a discussion of Osbert of Clare's "Life of Edward the Confessor", was subsequently "seriously criticised" by later experts in the field such as R. W. Southern and Frank Barlow; Epstein later suggested Bloch was "a mediocre theoretician but an adept artisan of method". Colleagues who worked with him occasionally complained that Bloch's manner could be "cold, distant, and both timid and hypocritical" due to the strong views he had held on the failure of the French education system. Bloch's reduction of the role of individuals, and their personal beliefs, in changing society or making history has been challenged. 
Even Febvre, reviewing "Feudal Society" on its post-war publication, suggested that Bloch had unnecessarily ignored the individual's role in societal development. Bloch has also been accused of ignoring unanswered questions and presenting complete answers when they are perhaps not deserved, and of sometimes ignoring internal inconsistencies. Andrew Wallace-Hadrill has also criticised Bloch's division of the feudal period into two distinct times as artificial. He also says Bloch's theory on the transformation of blood-ties into feudal bonds does not match either the chronological evidence or what is known of the nature of the early family unit. Bloch seems to have occasionally ignored, whether accidentally or deliberately, important contemporaries in his field. Richard Lefebvre des Noëttes, for example, founded the history of technology as a new discipline, built new harnesses from medieval illustrations, and drew historiographical conclusions. Bloch, though, does not seem to have acknowledged the similarities between his and Lefebvre's approaches to physical research, even though he cited much earlier historians. Davies argued that there was a sociological aspect to Bloch's work which often neutralised the precision of his historical writing; as a result, he says, those of Bloch's works with a sociological conception, such as "Feudal Society", have not always "stood the test of time". Comparative history, too, still proved controversial many years after Bloch's death, and Bryce Lyon has posited that, had Bloch survived the war, it is very likely that his views on history—already changing in the early years of the second war, just as they had done in the aftermath of the first—would have re-adjusted themselves against the very school he had founded. 
Stirling suggests what distinguished Bloch from his predecessors was that he effectively became a new kind of historian, who "strove primarily for transparency of methodology where his predecessors had striven for transparency of data" while continuously critiquing himself at the same time. Davies suggests his legacy lies not so much in the body of work he left behind him, which is not always as definitive as it has been made out to be, but in the influence he had on "a whole generation of French historical scholarship". Bloch's emphasis on how rural and village society had been neglected by historians in favour of the lords and manorial courts that ruled them influenced later historians such as R. H. Hilton in the study of the economics of peasant society. Bloch's combination of economics, history and sociology came "forty years before it became fashionable", argues Daniel Chirot, which he says could make Bloch a founding father of post-war sociology scholarship. The English-language journal "Past & Present", published by Oxford University Press, was a direct successor to the "Annales", suggests Loyn. Michel Foucault said of the Annales School, "what Bloch, Febvre and Braudel have shown for history, we can show, I believe, for the history of ideas". Bloch's influence spread beyond historiography after his death. In the 2007 French presidential election, Bloch was quoted many times. For example, candidates Nicolas Sarkozy and Marine Le Pen both cited Bloch's lines from "Strange Defeat": "there are two categories of Frenchmen who will never really grasp the significance of French history: those who refuse to be thrilled by the Consecration of our Kings at Reims, and those who can read unmoved the account of the Festival of Federation". In 1977, Bloch received a state reburial; streets, schools and universities have been named after him, and the centennial of Bloch's birth was celebrated at a conference held in Paris in June 1986. 
It was attended by academics of various disciplines, particularly historians and anthropologists.
https://en.wikipedia.org/wiki?curid=19665
Michael Ventris Michael George Francis Ventris (12 July 1922 – 6 September 1956) was an English architect, classicist and philologist who deciphered Linear B, the ancient Mycenaean Greek script. A student of languages, Ventris had pursued the decipherment as a personal vocation since his adolescence. After creating a new field of study, Ventris died in a car crash a few weeks before the publication, with John Chadwick, of "Documents in Mycenaean Greek". Ventris was born into a traditional army family. His grandfather, Francis Ventris, was a major-general and Commander of British Forces in China. His father, Edward Francis Vereker Ventris, was a lieutenant-colonel in the Indian Army, who retired early due to ill health. Edward Ventris married Anna Dorothea Janasz (Dora), who was from a wealthy Jewish Polish paternal background. Michael Ventris was their only child. The family moved to Switzerland for eight years, seeking a healthy environment for Colonel Ventris. Young Michael started school in Gstaad, where classes were taught in French and German. He was soon fluent in both languages and showed proficiency in Swiss German. He was capable of learning a language within a matter of weeks, which allowed him to acquire fluency in a dozen languages. His mother often spoke Polish to him, and he was fluent by the age of eight. At this time, he was reading Adolf Erman's "Die Hieroglyphen" in German. In 1931, the Ventris family returned home. From 1931 to 1935 Ventris was sent to Bickley Hill School in Stowe. His parents divorced in 1935. At this time, he secured a scholarship to Stowe School. At Stowe he learned some Latin and Ancient Greek. He did not do outstanding work there; by then he was spending most of his spare time learning as much as he could about Linear B, some of his study time being spent under the covers at night with a flashlight. 
When he was not boarding at school, Ventris lived with his mother, before 1935 in coastal hotels, and then in Berthold Lubetkin's avant-garde modernist Highpoint apartments in Highgate. His mother's acquaintances, who frequented the house, included many sculptors, painters, and writers of the day. The money for her sophisticated lifestyle came from Polish estates. Ventris's father died in 1938 and his mother Dora became administrator of the estate. With the German invasion of Poland in 1939, Mrs Ventris lost her private income, and in 1940 Dora's father died. Ventris then lost his mother, who had fallen into clinical depression, to an overdose of barbiturates. He never spoke of her, assuming instead an ebullient and energetic manner in whatever he decided to do, a trait which won him numerous friends. A friend of the family, Russian sculptor Naum Gabo, took Ventris under his wing. Ventris later said that Gabo was the most family he had ever had. It may have been at Gabo's house that he began the study of Russian. He decided on architecture as a career, and enrolled in the Architectural Association School of Architecture. There he met his wife-to-be, Lois Knox-Niven, daughter of Lois Butler. Her social background was similar to Ventris's: her family was well-to-do, she had travelled in Europe, and she was interested in architecture. She was also popular and considered very beautiful. Ventris did not complete his architecture studies, being conscripted in 1942. He chose the Royal Air Force (RAF). His preference was for navigator rather than pilot, and he completed the extensive training in the UK and Canada, qualifying early in 1944 and being commissioned. While training, he studied Russian intensively for several weeks, the purpose of which is not clear. He took part in the bombing of Germany, as aircrew on the Handley Page Halifax with No. 76 Squadron RAF, initially at RAF Breighton and then at RAF Holme-on-Spalding Moor. 
After the conclusion of the war he served out the rest of his term on the ground in Germany, for which he was chosen because of his knowledge of Russian. His duties are unclear. His friends assumed he was on intelligence duties, interpreting his denials as part of a legal gag. No evidence of such assignments has emerged in the decades since. There is also no evidence that he was ever part of any code-breaking unit, as was Chadwick, even though the public has readily believed this explanation of his genius and success with Linear B. After the war he worked briefly in Sweden, learning enough Swedish to communicate with scholars. Then he came home to complete his architectural education with honours in 1948 and settled down with Lois, working as an architect. He designed schools for the Ministry of Education. He and his wife personally designed their family home, 19 North End, Hampstead. Ventris and his wife had two children, a son, Nikki (1942–1984), and a daughter, Tessa (born 1946). Ventris continued with his efforts on Linear B, discovering in 1952 that it was an archaic form of Greek. He was awarded an OBE in 1955 for "services to Mycenaean paleography". In 1956 Ventris died instantly, aged 34, when his car collided late at night with a parked truck in Hatfield, Hertfordshire, as he was driving home; the coroner's verdict was accidental death. In 1959 he was posthumously awarded the British Academy's Kenyon Medal. An English Heritage blue plaque commemorates Ventris at his home in Hampstead. At the beginning of the 20th century, archaeologist Arthur Evans began excavating an ancient site at Knossos, on the island of Crete. In doing so he uncovered a great many clay tablets inscribed with two unknown scripts, Linear A and Linear B. Evans attempted to decipher both in the following decades, with little success. In 1936, Evans hosted an exhibition on Cretan archaeology at Burlington House in London, home of the Royal Academy. 
It was the fiftieth (jubilee) anniversary of the British School of Archaeology at Athens, which by then owned and managed the Knossos site. Evans had given the site to the school some years previously. Villa Ariadne, Evans's home there, was now part of the school. Boys from Stowe School attended one lecture and tour conducted by Evans himself, then aged 85. Ventris, 14 years old, was present and remembered Evans walking with a stick. The stick was undoubtedly the cane named Prodger which Evans carried all his life to assist him with his short-sightedness and night blindness. Evans held up tablets of the unknown scripts for the audience to see. In the question period following the lecture, Ventris confirmed that Linear B was as yet undeciphered, and he determined to decipher it. In 1940, the 18-year-old Ventris had an article, "Introducing the Minoan Language", published in the American Journal of Archaeology. Ventris's initial theory was that Etruscan and Linear B were related and that this might provide a key to decipherment. Although this proved incorrect, it was a link he continued to explore until the early 1950s. Shortly after Evans died, Alice Kober noted that certain words in Linear B inscriptions had changing word endings — perhaps declensions in the manner of Latin or Greek. Using this clue, Ventris constructed a series of grids associating the symbols on the tablets with consonants and vowels. While "which" consonants and vowels these were remained mysterious, Ventris learned enough about the structure of the underlying language to begin guessing. Alice Kober was a classics professor at Brooklyn College and had done extensive work on Linear B. Ventris acknowledged her work as having made a significant contribution to his own. Shortly before World War II, American archaeologist Carl Blegen discovered a further 600 or so tablets of Linear B in the Mycenaean palace of Pylos. 
Photographs of these tablets by archaeologist Alison Frantz facilitated Ventris's later decipherment of the Linear B script. Comparing the Linear B tablets discovered on the Greek mainland with those from Crete, and noting that certain symbol groups appeared only in the Cretan texts, Ventris made the inspired guess that those groups were the names of places on the island. This proved to be correct. Armed with the symbols he could decipher from this, Ventris soon unlocked much text and determined that the underlying language of Linear B was in fact Greek. This overturned Evans's theories of Minoan history by establishing that Cretan civilization, at least in the later periods associated with the Linear B tablets, had been part of Mycenaean Greece.
https://en.wikipedia.org/wiki?curid=19667
Maniac Mansion Maniac Mansion is a 1987 graphic adventure video game developed and published by Lucasfilm Games. It follows teenage protagonist Dave Miller as he attempts to rescue his girlfriend Sandy Pantz from a mad scientist, whose mind has been enslaved by a sentient meteor. The player uses a point-and-click interface to guide Dave and two of his six playable friends through the scientist's mansion while solving puzzles and avoiding dangers. Gameplay is non-linear, and the game must be completed in different ways based on the player's choice of characters. Initially released for the Commodore 64 and Apple II, "Maniac Mansion" was Lucasfilm Games' first self-published product. The game was conceived in 1985 by Ron Gilbert and Gary Winnick, who sought to tell a comedic story based on horror film and B-movie clichés. They mapped out the project as a paper-and-pencil game before coding commenced. While earlier adventure titles had relied on command lines, Gilbert disliked such systems, and he developed "Maniac Mansion"s simpler point-and-click interface as a replacement. To speed up production, he created a game engine called SCUMM, which was used in many later LucasArts titles. After its release, "Maniac Mansion" was ported to several platforms. A port for the Nintendo Entertainment System had to be reworked heavily, in response to complaints by Nintendo of America that the game was inappropriate for children. "Maniac Mansion" was critically acclaimed: reviewers lauded its graphics, cutscenes, animation, and humor. Writer Orson Scott Card praised it as a step toward "computer games [becoming] a valid storytelling art". It influenced numerous graphic adventure titles, and its point-and-click interface became a standard feature in the genre. The game's success solidified Lucasfilm as a serious rival to adventure game studios such as Sierra On-Line. 
In 1990, "Maniac Mansion" was adapted into a three-season television series of the same name, written by Eugene Levy and starring Joe Flaherty. A sequel to the game, "Day of the Tentacle", was released in 1993. "Maniac Mansion" is a graphic adventure game in which the player uses a point-and-click interface to guide characters through a two-dimensional game world and to solve puzzles. Fifteen action commands, such as "Walk To" and "Unlock", may be selected by the player from a menu on the screen's lower half. The player starts the game by choosing two out of six characters to accompany protagonist Dave Miller: Bernard, Jeff, Michael, Razor, Syd, and Wendy. Each character possesses unique abilities: for example, Syd and Razor can play musical instruments, while Bernard can repair appliances. The game may be completed with any combination of characters; but, since many puzzles are solvable only by certain characters, different paths must be taken based on the group's composition. "Maniac Mansion" features cutscenes, a word coined by Ron Gilbert, that interrupt gameplay to advance the story and inform the player about offscreen events. The game takes place in the mansion of the fictional Edison family: Dr. Fred, a mad scientist; Nurse Edna, his wife; and their son Weird Ed. Living with the Edisons are two large, disembodied tentacles, one purple and the other green. The intro sequence shows that a sentient meteor crashed near the mansion twenty years earlier; it brainwashed the Edisons and directed Dr. Fred to obtain human brains for use in experiments. The game begins as Dave Miller prepares to enter the mansion to rescue his girlfriend, Sandy Pantz, who had been kidnapped by Dr. Fred. With the exception of the green tentacle, the mansion's inhabitants are hostile, and will throw the player characters into the dungeon—or, in some situations, kill them—if they see them. 
When a character dies, the player must choose a replacement from the unselected characters; and the game ends if all characters are killed. "Maniac Mansion" has five possible endings, based on which characters are chosen, which survive, and what the characters accomplish. "Maniac Mansion" was conceived in 1985 when Lucasfilm Games employees Ron Gilbert and Gary Winnick were assigned to create an original game. Gilbert had been hired the previous year as a programmer for the game "Koronis Rift". He befriended Winnick over their similar tastes in humor, film, and television. Company management provided little oversight in the creation of "Maniac Mansion", a trend to which Gilbert credited the success of several of his games for Lucasfilm. Gilbert and Winnick co-wrote and co-designed the project, but they worked separately as well: Gilbert on programming and Winnick on visuals. As both of them enjoyed B horror films, they decided to make a comedy-horror game set in a haunted house. They drew inspiration from a film whose name Winnick could not recall. He described it as "a ridiculous teen horror movie", in which teenagers inside a building were killed one by one without any thought of leaving. This film, combined with clichés from popular horror movies such as "Friday the 13th" and "A Nightmare on Elm Street", became the basis for the game's setting. Early work on the game progressed organically: according to Gilbert, "Very little was written down. Gary and I just talked and laughed a lot, and out it came." Lucasfilm Games relocated to the Stable House at Skywalker Ranch during "Maniac Mansion"s conception period, and the ranch's Main House was used as a model for the mansion. Several rooms from the Main House received exact reproductions in the game, such as a library with a spiral staircase and a media room with a large-screen TV and grand piano. Story and characters were a primary concern for Gilbert and Winnick. 
The pair based the game's cast on friends, family members, acquaintances, and stereotypes. For example, Winnick's girlfriend Ray was the inspiration for Razor, while Dave and Wendy were based, respectively, on Gilbert and a fellow Lucasfilm employee named Wendy. According to Winnick, the Edison family was shaped after characters from EC Comics and Warren Publishing magazines. The sentient meteor that brainwashes Dr. Fred was inspired by a segment from the 1982 anthology film "Creepshow". A man-eating plant, similar to that of "Little Shop of Horrors", was included as well. The developers sought to strike a balance between tension and humor with the game's story. Initially, Gilbert and Winnick struggled to choose a gameplay genre for "Maniac Mansion". While visiting relatives over Christmas, Gilbert saw his cousin play "King's Quest: Quest for the Crown", an adventure game by Sierra On-Line. Although he was a fan of text adventures, this was Gilbert's first experience with a graphic adventure, and he used the holiday to play the game and familiarize himself with the format. As a result, he decided to develop his and Winnick's ideas into a graphic adventure game. "Maniac Mansion"s story and structure were designed before coding commenced. The project's earliest incarnation was a paper-and-pencil board game, in which the mansion's floor plan was used as a game board, and cards represented events and characters. Lines connected the rooms to illustrate pathways by which characters could travel. Strips of cellulose acetate were used to map out the game's puzzles by tracking which items worked together when used by certain characters. Impressed by the map's complexity, Winnick included it in the final game as a poster hung on a wall. Because each character contributes different skills and resources, the pair spent months working on the event combinations that could occur. 
This extended the game's production time beyond that of previous Lucasfilm Games projects, which almost led to Gilbert's firing. The game's dialogue, written by David Fox, was not created until after programming had begun. Gilbert started programming "Maniac Mansion" in 6502 assembly language, but he quickly decided that the project was too large and complex for this method. He decided that a new game engine would have to be created. Its coding language was initially planned to be Lisp-inspired, but Gilbert opted for one similar to C, built with the parser generator Yacc. Lucasfilm employee Chip Morningstar contributed the base code for the engine, which Gilbert then built on. Gilbert hoped to create a "system that could be used on many adventure games, cutting down the time it took to make them". "Maniac Mansion"s first six-to-nine months of production were dedicated largely to engine development. The game was developed around the Commodore 64 home computer, an 8-bit system with only 64 KB of memory. The team wanted to include scrolling screens, but as it was normally impossible to scroll bitmap graphics on the Commodore 64, they had to use lower-detail tile graphics. Winnick gave each character a large head made of three stacked sprites to make them recognizable. Although Gilbert wrote much of the foundational code for "Maniac Mansion", the majority of the game's events were programmed by Lucasfilm employee David Fox. Fox was between projects and planned to work on the game only for a month, but he remained with the team for six months. With Gilbert, he wrote the characters' dialog and choreographed the action. Winnick's concept art inspired him to add new elements to the game: for example, Fox allowed the player to place a hamster inside the kitchen's microwave. The team wanted to avoid punishing the player for applying everyday logic in "Maniac Mansion". 
Fox noted that one Sierra game features a scene in which the player, without prior warning, may encounter a game over screen simply by picking up a shard of glass. He characterized such game design as "sadistic", and he commented, "I know that in the real world I can successfully pick up a broken piece of mirror without dying". Because of the project's nonlinear puzzle design, the team struggled to prevent no-win scenarios, in which the player unexpectedly became unable to complete the game. As a result of this problem, Gilbert later explained, "We were constantly fighting against the desire just to rip out all the endings and just go with three characters, or even sometimes just one character". Lucasfilm Games had only one playtester, and many dead-ends went undetected as a result. Further playtesting was provided by Gilbert's uncle, to whom Gilbert mailed a floppy disk of the game's latest version each week. The "Maniac Mansion" team wanted to retain the structure of a text-based adventure game, but without the standard command-line interface. Gilbert and Winnick were frustrated by the genre's text parsers and frequent game over screens. While in college, Gilbert had enjoyed "Colossal Cave Adventure" and the games of Infocom, but he disliked their lack of visuals. He found the inclusion of graphics in Sierra On-Line games, such as "King's Quest", to be a step in the right direction, but these games still required the player to type, and to guess which commands could be input. In response, Gilbert programmed a point-and-click graphical user interface that displays every possible command. Fox had made a similar attempt to streamline the interface of an earlier Lucasfilm title, and, according to Gilbert, he conceived the entirety of "Maniac Mansion"s interface. Forty input commands were planned at first, but the number was gradually reduced to 12. 
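The verb-object command model behind such an interface can be sketched briefly. The following Python is an illustrative reconstruction only, not Lucasfilm's SCUMM code; all names in it (the Obj class, the door/key objects, and their responses) are invented for the example. The key idea is that every verb is always visible on screen, and any verb-object pairing the designers did not anticipate yields a harmless refusal rather than a parser error or a game over.

```python
# Illustrative sketch of a point-and-click verb-object command model
# (hypothetical names; not actual SCUMM code). A subset of the verbs
# mentioned in the article is shown on screen; the player composes a
# command like "Unlock door with key" by clicking, never by typing.

VERBS = ["Walk To", "Pick Up", "Open", "Unlock", "Use", "Read"]

class Obj:
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # maps (verb, instrument-name) -> response

    def apply(self, verb, instrument=None):
        key = (verb, instrument.name if instrument else None)
        # Unanticipated combinations fall back to a refusal, not an error:
        return self.actions.get(key, f"You can't {verb.lower()} the {self.name}.")

door = Obj("door", {("Open", None): "It's locked.",
                    ("Unlock", "key"): "The door creaks open."})
key = Obj("key", {("Pick Up", None): "Taken."})

print(door.apply("Open"))         # It's locked.
print(door.apply("Unlock", key))  # The door creaks open.
print(door.apply("Read"))         # You can't read the door.
```

Because the menu enumerates every legal verb, the player never has to guess vocabulary, which is the design problem with text parsers that the passage above describes.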
Gilbert finished the "Maniac Mansion" engine—which he later named "Script Creation Utility for Maniac Mansion" (SCUMM)—after roughly one year of work. Although the game was designed for the Commodore 64, the SCUMM engine allowed it to be ported easily to other platforms. After 18 to 24 months of development, "Maniac Mansion" debuted at the 1987 Consumer Electronics Show in Chicago. The game was released for the Commodore 64 and Apple II in October 1987. While previous Lucasfilm Games products had been published by outside companies, "Maniac Mansion" was self-published. This became a trend at Lucasfilm. The company hired Ken Macklin, an acquaintance of Winnick's, to design the game's packaging artwork. Gilbert and Winnick collaborated with the marketing department to design the back cover. The two also created an insert that includes hints, a backstory, and jokes. An MS-DOS port was released in early 1988, developed in part by Lucasfilm employees Aric Wilmunder and Brad Taylor. Ports for the Amiga, Atari ST and Nintendo Entertainment System (NES) followed, with the Amiga and Atari ST ports in 1989 and the NES port in 1990. The 16-bit versions of "Maniac Mansion" featured a copy protection system requiring the user to enter graphical symbols from a code book included with the game. This was not present in the Commodore 64 and Apple versions due to lack of disk space, so those instead used an on-disk copy protection. There were two separate versions of the game developed for the NES. The first port was handled and published by Jaleco only in Japan. Released on June 23, 1988, it featured characters redrawn in a cute art style and generally shrunken rooms. No scrolling is present, so rooms larger than a single screen are displayed via flip-screens. Many of the background details are missing, and instead of a save feature, a password over 100 characters long is used to save progress. 
In September 1990 Jaleco released an American version of "Maniac Mansion" as the first NES title developed by Lucasfilm Games in cooperation with Realtime Associates. Generally, this port is regarded as being far closer to the original game than the Japanese effort. Company management was occupied with other projects, and so the port received little attention until employee Douglas Crockford volunteered to direct it. The team used a modified version of the SCUMM engine called "NES SCUMM" for the port. According to Crockford, "[One] of the main differences between the NES and PCs is that the NES can do certain things much faster". The graphics had to be entirely redrawn to match the NES's display resolution. Tim Schafer, who later designed "Maniac Mansion"s sequel "Day of the Tentacle", received his first professional credit as a playtester for the NES version of "Maniac Mansion". During "Maniac Mansion"s development for the Commodore 64, Lucasfilm had censored profanity in the script: for instance, the early line of dialogue "Don't be a shit head" became "Don't be a tuna head". Additional content was removed from the NES version to make it suitable for a younger audience, and to conform with Nintendo's policies. Jaleco USA president Howie Rubin warned Crockford about content to which Nintendo might object, such as the word "kill". After reading the NES Game Standards Policy for himself, Crockford suspected that further elements of "Maniac Mansion" could be problematic, and he sent a list of questionable content to Jaleco. When the company replied that the content was reasonable, Lucasfilm Games submitted "Maniac Mansion" for approval. One month later, Nintendo of America contacted Lucasfilm Games to request the removal of offensive text and nude graphics. Crockford censored this content but attempted to leave the game's essence intact. For example, Nintendo wanted graffiti in one room—which provided an important hint to players—removed from the game. 
Unable to comply without simultaneously removing the hint, the team simply shortened it. Sexually suggestive and otherwise "graphic" dialogue was edited, including a remark from Dr. Fred about "pretty brains [being] sucked out". The nudity described by Nintendo encompassed a swimsuit calendar, a classical sculpture and a poster of a mummy in a Playmate pose. After a brief fight to keep the sculpture, the team ultimately removed all three. The phrase "NES SCUMM" in the credits sequence was censored as well. Lucasfilm Games re-submitted the edited version of "Maniac Mansion" to Nintendo, which then manufactured 250,000 cartridges. Each cartridge was fitted with a battery-powered back-up to save data. Nintendo announced the port through its official magazine in early 1990, and it provided further coverage later that year. The ability to microwave a hamster remained in the game, which Crockford cited as an example of the censors' contradictory criteria. Nintendo later noticed it, and after the first batch of cartridges was sold, Jaleco was forced to remove the content from future shipments. Late in development, Jaleco commissioned Realtime Associates to provide background music, which no previous version of "Maniac Mansion" had featured. Realtime Associates' founder and president David Warhol noted that "video games at that time had to have 'wall to wall' music". He brought in George "The Fat Man" Sanger and his band, along with David Hayes, to compose the score. Their goal was to create songs that suited each character, such as a punk rock theme for Razor, an electronic rock theme for Bernard and a version of Thin Lizzy's "The Boys Are Back in Town" for Dave Miller. Warhol translated their work into NES chiptune music. According to Stuart Hunt of "Retro Gamer", "Maniac Mansion" received highly positive reviews from critics. Nevertheless, Ron Gilbert noted that "it wasn't a huge hit" commercially. 
In 2011, Hunt wrote that "as so often tends to be the way with cult classics, the popularity it saw was slow in coming." Keith Farrell of "Compute!'s Gazette" was struck by "Maniac Mansion"s similarity to film, particularly in its use of cutscenes to impart "information or urgency". He lauded the game's graphics, animation and high level of detail. "Commodore User"s Bill Scolding and three reviewers from "Zzap!64" compared the game to "The Rocky Horror Picture Show". Further comparisons were drawn to "Psycho", "Friday the 13th", "The Texas Chain Saw Massacre", "The Addams Family" and "Scooby-Doo". Russ Ceccola of "Commodore Magazine" found the cutscenes to be creative and well made, and he commented that the "characters are distinctively Lucasfilm's, bringing facial expressions and personality to each individual character". In "Compute!", Orson Scott Card praised the game's humor, cinematic storytelling and lack of violence. He called it "compellingly good" and evidence of Lucasfilm's push "to make computer games a valid storytelling art". German magazine "Happy-Computer" commended the point-and-click interface and likened it to that of "Uninvited" by ICOM Simulations. The publication highlighted "Maniac Mansion"s graphics, originality, and overall enjoyability: one of the writers called it the best adventure title yet released. "Happy-Computer" later reported that "Maniac Mansion" was the highest-selling video game in West Germany for three consecutive months. The game's humor received praise from "Zzap!64", whose reviewers called the point-and-click controls "tremendous" and the total package "innovative and polished". Shay Addams of "Questbusters: The Adventurer's Newsletter" preferred "Maniac Mansion"s interface to that of "Labyrinth: The Computer Game". He considered the game to be Lucasfilm's best, and he recommended it to Commodore 64 and Apple II users unable to run titles with better visuals, such as those from Sierra On-Line. 
A writer for "ACE" enjoyed the game's animation and depth, but he noted that fans of text-based adventures would dislike the game's simplicity. "Entertainment Weekly" picked the game as the #20 greatest game available in 1991, saying: "The graphics are merely okay and the music is Nintendo at its tinniest, but Maniac Mansion's plot is enough to overcome these faults. In this command-driven game — adapted from the computer hit — three buddies venture into a sinister haunted mansion and wind up juggling a bunch of wacky story lines." Reviewing the MS-DOS and Atari ST ports, a critic from "The Games Machine" called "Maniac Mansion" "an enjoyable romp" that was structurally superior to later LucasArts adventure games. The writer noticed poor pathfinding and disliked the limited audio. Reviewers for "The Deseret News" lauded the audiovisuals and considered the product "wonderful fun". "Computer Gaming World"s Charles Ardai praised the game for attaining "the necessary and precarious balance between laughs and suspense that so many comic horror films and novels lack". Although he faulted the control system's limited options, he hailed it as "one of the most comfortable ever devised". Writing for "VideoGames & Computer Entertainment", Bill Kunkel and Joyce Worley stated that the game's plot and premise were typical of the horror genre; but they praised the interface and execution. Reviewing "Maniac Mansion"s Amiga version four years after its release, Simon Byron of "The One Amiga" praised the game for retaining "charm and humour", but suggested that its art direction had become "tacky" compared to more recent titles. Stephen Bradly of "Amiga Format" found the game derivative, but he encountered "loads of visual humour" in it; and he added, "Strangely, it's quite compelling after a while." Michael Labiner of Germany's "Amiga Joker" considered "Maniac Mansion" to be one of the best adventure games for the system. 
He noted minor graphical flaws, such as a limited color palette, but he argued that the gameplay made up for such shortcomings. Writing for "Datormagazin" in Sweden, Ingela Palmér commented that the Amiga and Commodore 64 versions of "Maniac Mansion" were nearly identical. She criticized the graphics and gameplay of both releases but felt the game to be highly enjoyable regardless. Reviewing the NES release, British magazine "Mean Machines" commended the game's presentation, playability, and replay value. The publication also noted undetailed graphics and "ear-bashing tunes". The magazine's Julian Rignall compared "Maniac Mansion" to the title "Shadowgate", but he preferred the former's controls and lack of "death-without-warning situations". Writers for Germany's "Video Games" referred to the NES version as a "classic". Co-reviewer Heinrich Lenhardt stated that "Maniac Mansion" was unlike any other NES adventure game, and that it was no less enjoyable than its home computer releases. Co-reviewer Winnie Forster found it to be "one of the most original representatives of the [adventure game] genre". In retrospective features, "Edge" magazine called the NES version "somewhat neutered" and "GamesTM" referred to it as "infamous" and "heavily censored". Lucasfilm conceived the idea for a television adaptation of "Maniac Mansion", the rights to which were purchased by The Family Channel in 1990. The two companies collaborated with Atlantis Films to produce a sitcom named after the game, which debuted in September of that year. It aired on YTV in Canada and The Family Channel in the United States. Based in part on the video game, the series focuses on the Edison family's life and stars Joe Flaherty as Dr. Fred. Its writing staff was led by Eugene Levy. Gilbert later said that the premise of the series changed during production until it differed heavily from the game's original plot. 
Upon its debut, the adaptation received positive reviews from "Variety", "Entertainment Weekly" and the "Los Angeles Times". "Time" named it one of the year's best new series. Ken Tucker of "Entertainment Weekly" questioned the decision to air the series on The Family Channel, given Flaherty's subversive humor. Discussing the series in retrospect, Richard Cobbett of "PC Gamer" criticized its generic storylines and lack of relevance to the game. The series lasted for three seasons; sixty-six episodes were filmed. In the early 1990s, LucasArts tasked Dave Grossman and Tim Schafer, both of whom had worked on the "Monkey Island" series, with designing a sequel to "Maniac Mansion". Gilbert and Winnick initially assisted with the project's writing. The team included voice acting and more detailed graphics, which Gilbert had originally envisioned for "Maniac Mansion". The first game's nonlinear design was discarded, and the team implemented a Chuck Jones-inspired visual style, alongside numerous puzzles based on time travel. Bernard and the Edison family were retained. The sequel "Day of the Tentacle" was released in 1993, and came with a fully playable copy of "Maniac Mansion" hidden as an easter egg within the game. In 2010, the staff of "GamesTM" dubbed "Maniac Mansion" a "seminal" title that overhauled the gameplay of the graphic adventure genre. Removing the need to guess syntax allowed players to concentrate on the story and puzzles, which created a smoother and more enjoyable experience, according to the magazine. Eurogamer's Kristan Reed agreed: he believed that the design was "infinitely more elegant and intuitive" than its predecessors and that it freed players from "guessing-game frustration". Designer Dave Grossman, who worked on Lucasfilm Games' later "Day of the Tentacle" and "The Secret of Monkey Island", felt that "Maniac Mansion" had revolutionized the adventure game genre. 
Although 1986's "Uninvited" had featured a point-and-click interface, it was not influential. "Maniac Mansion"s implementation of the concept was widely imitated in other adventure titles. Writing in the game studies journal "Kinephanos", Jonathan Lessard argued that "Maniac Mansion" led a "Casual Revolution" in the late 1980s, which opened the adventure genre to a wider audience. Similarly, Christopher Buecheler of GameSpy called the game a contributor to its genre's subsequent critical adoration and commercial success. Reed highlighted the "wonderfully ambitious" design of "Maniac Mansion", in reference to its writing, interface, and cast of characters. Game designer Sheri Graner Ray believed that the game challenged "damsel in distress" stereotypes through its inclusion of female protagonists. Conversely, writer Mark Dery argued that the goal of rescuing a kidnapped cheerleader reinforced negative gender roles. The Lucasfilm team built on their experiences from "Maniac Mansion" and became increasingly ambitious in subsequent titles. Gilbert admitted to making mistakes—such as the inclusion of no-win situations—in "Maniac Mansion", and he applied these lessons to future projects. For example, the game relies on timers rather than events to trigger cutscenes, which occasionally results in awkward transitions; Gilbert worked to avoid this flaw in the "Monkey Island" series. Because of "Maniac Mansion"s imperfections, Gilbert considers it his favorite of his games. According to writers Mike and Sandie Morrison, Lucasfilm Games became "serious competition" in the adventure genre after the release of "Maniac Mansion". The game's success solidified Lucasfilm as one of the leading producers of adventure games: authors Rusel DeMaria and Johnny Wilson described it as a "landmark title" for the company. 
In their view, "Maniac Mansion"—along with "Space Quest: The Sarien Encounter" and "Leisure Suit Larry in the Land of the Lounge Lizards"—inaugurated a "new era of humor-based adventure games". This belief was shared by Reed, who wrote that "Maniac Mansion" "set in motion a captivating chapter in the history of gaming" that encompassed wit, invention, and style. The SCUMM engine was reused by Lucasfilm in eleven later titles; improvements were made to its code with each game. Over time, rival adventure game developers adopted this paradigm in their own software. "GamesTM" attributed the change to a desire to streamline production and create enjoyable games. Following his 1992 departure from LucasArts—a conglomeration of Lucasfilm Games, ILM and Skywalker Sound formed in 1990—Gilbert used SCUMM to create adventure games and "Backyard Sports" titles for Humongous Entertainment. In 2011, Richard Cobbett summarized "Maniac Mansion" as "one of the most intricate and important adventure games ever made". "Retro Gamer" ranked it as one of the ten best Commodore 64 games in 2006, and IGN later named it one of the ten best LucasArts adventure games. Seven years after the NES version's debut, "Nintendo Power" named it the 61st best game ever. The publication dubbed it the 16th best NES title in 2008. The game's uniqueness and clever writing were praised by "Nintendo Power": in 2010, the magazine's Chris Hoffman stated that the game is "unlike anything else out there — a point-and-click adventure with an awesome sense of humor and multiple solutions to almost every puzzle." In its retrospective coverage, "Nintendo Power" several times noted the ability to microwave a hamster, which the staff considered to be an iconic scene. In March 2012, "Retro Gamer" listed the hamster incident as one of the "100 Classic Gaming Moments". 
"Maniac Mansion" enthusiasts have drawn fan art of its characters, participated in tentacle-themed cosplay and produced a trailer for a fictitious film adaptation of the game. German fan Sascha Borisow created a fan game remake, titled "Maniac Mansion Deluxe", with enhanced audio and visuals. He used the Adventure Game Studio engine to develop the project, which he distributed free of charge on the Internet. By the end of 2004, the remake had over 200,000 downloads. A remake with three-dimensional graphics called "Meteor Mess" was created by the German developer Vampyr Games, and, as of 2011, another group in Germany is producing one with art direction similar to that of "Day of the Tentacle". Fans have created an episodic series of games based on "Maniac Mansion" as well. Gilbert has said that he would like to see an official remake, similar in its graphics and gameplay to "The Secret of Monkey Island: Special Edition" and "Monkey Island 2 Special Edition: LeChuck's Revenge". He also expressed doubts about its potential quality, in light of George Lucas's enhanced remakes of the original "Star Wars" trilogy. In December 2017, Disney, which gained the rights to the LucasArts games following its acquisition of Lucasfilm, published "Maniac Mansion" running atop the ScummVM virtual machine to various digital storefronts.
https://en.wikipedia.org/wiki?curid=19668
Marx Brothers The Marx Brothers were an American family comedy act that was successful in vaudeville, on Broadway, and in motion pictures from 1905 to 1949. Five of the Marx Brothers' thirteen feature films were selected by the American Film Institute (AFI) as among the top 100 comedy films, with two of them, "Duck Soup" (1933) and "A Night at the Opera" (1935), in the top fifteen. They are widely considered by critics, scholars, and fans to be among the greatest and most influential comedians of the 20th century. The brothers were included in AFI's 100 Years... 100 Stars list of the 25 greatest male stars of Classical Hollywood cinema, the only performers to be inducted collectively. The brothers are almost universally known today by their stage names: Chico, Harpo, Groucho, Gummo, and Zeppo. There was a sixth brother, the first born, named Manfred (Mannie), who died aged seven months; Zeppo was given the middle name Manfred in his memory. The core of the act was the three elder brothers: Chico, Harpo, and Groucho, each of whom developed a highly distinctive stage persona. After the group essentially disbanded in 1950, Groucho went on to begin a successful second career in television, while Harpo and Chico appeared less prominently. The two younger brothers, Gummo and Zeppo, never developed their stage characters to the same extent as the elder three. Both of them left the act to pursue business careers at which they were successful, and for a time ran a large theatrical agency through which they represented their brothers and others. Gummo was not in any of the movies; Zeppo appeared in the first five films in relatively straight (non-comedic) roles. The early performing lives of the brothers owed much to their mother Minnie Marx (the sister of vaudeville comic Al Shean), who acted as their manager until her death in 1929. The Marx Brothers were born in New York City, the sons of Jewish immigrants from Germany and France. 
Their mother Miene "Minnie" Schoenberg (professionally known as Minnie Palmer, later the brothers' manager) was from Dornum in East Frisia, and their father Samuel ("Sam"; born Simon) Marx was a native of Mertzwiller, a small Alsatian village, and worked as a tailor. (His name was changed to Samuel Marx, and he was nicknamed "Frenchy".) The family lived in the Yorkville district of New York City's Upper East Side, centered in the Irish, German and Italian quarters. Another brother, Manfred ("Mannie"), the first-born son of Sam and Minnie, was born in 1886 and died in infancy. Family lore told privately of the firstborn son, Manny, born in 1886 but surviving for only three months, and carried off by tuberculosis. Even some members of the Marx family wondered if he was pure myth. But Manfred can be verified: a death certificate of the Borough of Manhattan reveals that he died, aged seven months, on 17 July 1886, of enterocolitis, with "asthenia" contributing, i.e., probably a victim of influenza. He is buried at New York's Washington Cemetery, beside his grandmother, Fanny Sophie Schönberg (née Salomons), who died on 10 April 1901. The Marx Brothers also had an older sister, actually a cousin, born in January 1885 who had been adopted by Minnie and Frenchie. Her name was Pauline, or "Polly". Groucho talked about her in his 1972 Carnegie Hall concert. Minnie Marx came from a family of performers. Her mother was a yodeling harpist and her father a ventriloquist; both were funfair entertainers. Around 1880, the family emigrated to New York City, where Minnie married Sam in 1884. 
During the early 20th century, Minnie helped her younger brother Abraham Elieser Adolf Schönberg (stage name Al Shean) to enter show business; he became highly successful in vaudeville and on Broadway as half of the musical comedy double act Gallagher and Shean, and, at Minnie's instigation, this gave the brothers an entree to musical comedy, vaudeville and Broadway. Minnie also acted as the brothers' manager, using the name Minnie Palmer so that agents did not realize that she was also their mother. All the brothers confirmed that Minnie Marx had been the head of the family and the driving force in getting the troupe launched, the only person who could keep them in order; she was said to be a hard bargainer with theatre management. Gummo and Zeppo both became successful businessmen: Gummo gained success through his agency activities and a raincoat business, and Zeppo became a multi-millionaire through his engineering business. The brothers were from a family of artists, and their musical talent was encouraged from an early age. Harpo was particularly talented, learning to play an estimated six different instruments throughout his career. He became a dedicated harpist, which gave him his nickname. Chico was an excellent pianist, Groucho a guitarist and singer, and Zeppo a vocalist. They got their start in vaudeville, where their uncle Albert Schönberg performed as Al Shean of Gallagher and Shean. Groucho's debut was in 1905, mainly as a singer. By 1907, he and Gummo were singing together as "The Three Nightingales" with Mabel O'Donnell. The next year, Harpo became the fourth Nightingale and by 1910, the group briefly expanded to include their mother Minnie and their Aunt Hannah. The troupe was renamed "The Six Mascots". One evening in 1912, a performance at the Opera House in Nacogdoches, Texas, was interrupted by shouts from outside about a runaway mule. The audience hurried out to see what was happening. 
Groucho was angered by the interruption and, when the audience returned, he made snide comments at their expense, including "Nacogdoches is full of roaches" and "the jackass is the flower of Tex-ass". Instead of becoming angry, the audience laughed. The family then realized that it had potential as a comic troupe. (However, in his autobiography "Harpo Speaks", Harpo Marx stated that the runaway mule incident occurred in Ada, Oklahoma. A 1930 article in the "San Antonio Express" newspaper stated that the incident took place in Marshall, Texas.) The act slowly evolved from singing with comedy to comedy with music. The brothers' sketch "Fun in Hi Skule" featured Groucho as a German-accented teacher presiding over a classroom that included students Harpo, Gummo, and Chico. The last version of the school act was titled "Home Again" and was written by their uncle Al Shean. The "Home Again" tour reached Flint, Michigan in 1915, where 14-year-old Zeppo joined his four brothers for what is believed to be the only time that all five Marx Brothers appeared together on stage. Gummo then left to serve in World War I, reasoning that "anything is better than being an actor!" Zeppo replaced him in their final vaudeville years and in the jump to Broadway, and then to Paramount films. During World War I, anti-German sentiments were common, and the family tried to conceal its German origin. Mother Minnie learned that farmers were excluded from the draft rolls, so she purchased a poultry farm near Countryside, Illinois — but the brothers soon found that chicken ranching was not in their blood. During this time, Groucho discontinued his "German" stage personality. By this time, "The Four Marx Brothers" had begun to incorporate their unique style of comedy into their act and to develop their characters. Both Groucho's and Harpo's memoirs say that their now-famous on-stage personae were created by Al Shean. 
Groucho began to wear his trademark greasepaint mustache and to use a stooped walk. Harpo stopped speaking onstage and began to wear a red fright wig and carry a taxi-cab horn. Chico spoke with a fake Italian accent, developed off-stage to deal with neighborhood toughs, while Zeppo adopted the role of the romantic (and "peerlessly cheesy", according to James Agee) straight man. The on-stage personalities of Groucho, Chico, and Harpo were said to have been based on their actual traits. Zeppo, on the other hand, was considered the funniest brother offstage, despite his straight stage roles. He was the youngest and had grown up watching his brothers, so he could fill in for and imitate any of the others when illness kept them from performing. "He was so good as Captain Spaulding [in "Animal Crackers"] that I would have let him play the part indefinitely, if they had allowed me to smoke in the audience," Groucho recalled. (Zeppo stood in for Groucho in the film version of "Animal Crackers". Groucho was unavailable to film the scene in which the Beaugard painting is stolen, so the script was contrived to include a power failure, which allowed Zeppo to play the Spaulding part in near-darkness.) In December 1917, the Marx Brothers were noted in an advertisement as playing in the musical comedy act "Home Again". By the 1920s, the Marx Brothers had become one of America's favorite theatrical acts, with their sharp and bizarre sense of humor. They satirized high society and human hypocrisy, and they became famous for their improvisational comedy in free-form scenarios. A famous early instance was when Harpo arranged to chase a fleeing chorus girl across the stage during the middle of a Groucho monologue to see if Groucho would be thrown off. However, to the audience's delight, Groucho merely reacted by commenting, "First time I ever saw a taxi hail a passenger". 
When Harpo chased the girl back in the other direction, Groucho calmly checked his watch and ad-libbed, "The 9:20's right on time. You can set your watch by the Lehigh Valley." The brothers' vaudeville act had made them stars on Broadway under Chico's management and with Groucho's creative direction—first with the musical revue "I'll Say She Is" (1924–1925) and then with two musical comedies: "The Cocoanuts" (1925–1926) and "Animal Crackers" (1928–1929). Playwright George S. Kaufman worked on the last two and helped sharpen the brothers' characterizations. Out of their distinctive costumes, the brothers looked alike, even down to their receding hairlines. Zeppo could pass for a younger Groucho, and played the role of his son in "Horse Feathers". A scene in "Duck Soup" finds Groucho, Harpo, and Chico all appearing in the famous greasepaint eyebrows, mustache, and round glasses while wearing nightcaps. The three are indistinguishable, enabling them to carry off the "mirror scene" perfectly. The stage names of the brothers (except Zeppo) were coined by monologist Art Fisher during a poker game in Galesburg, Illinois, based both on the brothers' personalities and Gus Mager's "Sherlocko the Monk", a popular comic strip of the day that included a supporting character named "Groucho". As Fisher dealt each brother a card, he addressed them, for the very first time, by the names they kept for the rest of their lives. The reasons behind Chico's and Harpo's stage names are undisputed, and Gummo's is fairly well established. Groucho's and Zeppo's are far less clear. Arthur was named Harpo because he played the harp, and Leonard became Chico (pronounced "Chick-o") because he was, in the slang of the period, a "chicken chaser". ("Chickens"—later "chicks"—was period slang for women. "In England now," said Groucho, "they were called 'birds'.") In his autobiography, Harpo explained that Milton became Gummo because he crept about the theater like a gumshoe detective. 
Other sources reported that Gummo was the family's hypochondriac, having been the sickliest of the brothers in childhood, and therefore wore rubber overshoes, called gumshoes, in all kinds of weather. Still others reported that Milton was the troupe's best dancer, and dance shoes tended to have rubber soles. Groucho stated that the source of the name was Gummo wearing galoshes. Whatever the details, the name relates to rubber-soled shoes. The reason that Julius was named Groucho is perhaps the most disputed. There are three explanations; Groucho himself offered one: "I kept my money in a 'grouch bag'. This was a small chamois bag that actors used to wear around their neck to keep other hungry actors from pinching their dough. Naturally, you're going to think that's where I got my name from. But that's not so. Grouch bags were worn on manly chests long before there was a Groucho." Herbert was not nicknamed by Art Fisher, since he did not join the act until Gummo had departed. As with Groucho, three explanations exist for Herbert's name "Zeppo". Maxine Marx reported in "The Unknown Marx Brothers" that the brothers listed their "real" names (Julius, Leonard, Adolph, Milton, and Herbert) on playbills and in programs, and only used the nicknames behind the scenes, until Alexander Woollcott overheard them calling one another by the nicknames. He asked them why they used their real names publicly when they had such wonderful nicknames, and they replied, "That wouldn't be dignified." Woollcott answered with a belly laugh. Woollcott did not meet the Marx Brothers until the premiere of "I'll Say She Is", which was their first Broadway show, so this would mean that they used their real names throughout their vaudeville days, and that the name "Gummo" never appeared in print during his time in the act. 
Other sources reported that the Marx Brothers went by their nicknames during their vaudeville era, but briefly listed themselves by their given names when "I'll Say She Is" opened because they were worried that a Broadway audience would reject a vaudeville act if they were perceived as low class. The Marx Brothers' stage shows became popular just as motion pictures were evolving to "talkies". They signed a contract with Paramount Pictures and embarked on their film career at Paramount's studios in New York City's Astoria section. Their first two released films (after an unreleased short silent film titled "Humor Risk") were adaptations of the Broadway shows "The Cocoanuts" (1929) and "Animal Crackers" (1930). Both were written by George S. Kaufman and Morrie Ryskind. Production then shifted to Hollywood, beginning with a short film that was included in Paramount's twentieth anniversary documentary, "The House That Shadows Built" (1931), in which they adapted a scene from "I'll Say She Is". Their third feature-length film, "Monkey Business" (1931), was their first movie not based on a stage production. "Horse Feathers" (1932), in which the brothers satirized the American college system and Prohibition, was their most popular film yet, and won them the cover of "Time" magazine. It included a running gag from their stage work, in which Harpo produces a ludicrous array of props from inside his coat, including a wooden mallet, a fish, a coiled rope, a tie, a poster of a woman in her underwear, a cup of hot coffee, a sword; and, just after Groucho warns him that he "can't burn the candle at both ends," a candle burning at both ends. During this period Chico and Groucho starred in a radio comedy series, "Flywheel, Shyster and Flywheel". Though the series was short lived, much of the material developed for it was used in subsequent films. The show's scripts and recordings were believed lost until copies of the scripts were found in the Library of Congress in the 1980s. 
After the scripts were published in a book, they were performed with Marx Brothers impersonators for BBC Radio. Their last Paramount film, "Duck Soup" (1933), directed by the highly regarded Leo McCarey, is the highest rated of the five Marx Brothers films on the American Film Institute's "100 years ... 100 Movies" list. It did not do as well financially as "Horse Feathers", but was the sixth-highest grosser of 1933. The film sparked a dispute between the Marxes and the village of Fredonia, New York. "Freedonia" was the name of a fictional country in the script, and the city fathers wrote to Paramount and asked the studio to remove all references to Freedonia because "it is hurting our town's image". Groucho fired back a sarcastic retort asking them to change the name of their town, because "it's hurting our picture." After the expiration of the Paramount contract, Zeppo left the act to become an agent. He and his brother Gummo went on to build one of the biggest talent agencies in Hollywood, helping the likes of Jack Benny and Lana Turner get their starts. Groucho and Chico did radio, and there was talk of returning to Broadway. At a bridge game with Chico, Irving Thalberg began discussing the possibility of the Marxes joining Metro-Goldwyn-Mayer. They signed, now billed as "Groucho, Chico, Harpo, Marx Bros." Unlike the free-for-all scripts at Paramount, Thalberg insisted on a strong story structure that made the brothers more sympathetic characters, interweaving their comedy with romantic plots and non-comic musical numbers, and targeting their mischief-making at obvious villains. Thalberg was adamant that scripts include a "low point", where all seems lost for both the Marxes and the romantic leads. He instituted the innovation of testing the film's script before live audiences before filming began, to perfect the comic timing, and to retain jokes that earned laughs and replace those that did not. 
Thalberg restored Harpo's harp solos and Chico's piano solos, which had been omitted from "Duck Soup". The first Marx Brothers/Thalberg film was "A Night at the Opera" (1935), a satire on the world of opera, where the brothers help two young singers in love by throwing a production of "Il Trovatore" into chaos. The film—including its famous scene where an absurd number of people crowd into a tiny stateroom on a ship—was a great success, and was followed two years later by an even bigger hit, "A Day at the Races" (1937), in which the brothers cause mayhem in a sanitarium and at a horse race. The film features Groucho and Chico's famous "Tootsie Frootsie Ice Cream" sketch. In a 1969 interview with Dick Cavett, Groucho said that the two movies made with Thalberg were the best that they ever produced. Despite the Thalberg films' success, the brothers left MGM in 1937; Thalberg had died suddenly on September 14, 1936, two weeks after filming began on "A Day at the Races", leaving the Marxes without an advocate at the studio. After a short experience at RKO ("Room Service", 1938), the Marx Brothers returned to MGM and made three more films: "At the Circus" (1939), "Go West" (1940) and "The Big Store" (1941). Prior to the release of "The Big Store" the team announced they were retiring from the screen. Four years later, however, Chico persuaded his brothers to make two additional films, "A Night in Casablanca" (1946) and "Love Happy" (1949), to alleviate his severe gambling debts. Both pictures were released by United Artists. From the 1940s onward Chico and Harpo appeared separately and together in nightclubs and casinos. Chico fronted a big band, the Chico Marx Orchestra (with 17-year-old Mel Tormé as a vocalist). Groucho made several radio appearances during the 1940s and starred in "You Bet Your Life", which ran from 1947 to 1961 on NBC radio and television. 
He authored several books, including "Groucho and Me" (1959), "Memoirs of a Mangy Lover" (1964) and "The Groucho Letters" (1967). Groucho and Chico briefly appeared in a 1957 color short film promoting "The Saturday Evening Post" entitled "Showdown at Ulcer Gulch," directed by animator Shamus Culhane, Chico's son-in-law. Groucho, Chico, and Harpo worked together (in separate scenes) in "The Story of Mankind" (1957). In 1959, the three began production of "Deputy Seraph", a TV series starring Harpo and Chico as blundering angels, and Groucho (in every third episode) as their boss, the "Deputy Seraph." The project was abandoned when Chico was found to be uninsurable (and incapable of memorizing his lines) due to severe arteriosclerosis. On March 8 of that year, Chico and Harpo starred as bumbling thieves in "The Incredible Jewel Robbery", a half-hour pantomimed episode of the "General Electric Theater" on CBS. Groucho made a cameo appearance—uncredited, because of constraints in his NBC contract—in the last scene, and delivered the only line of dialogue ("We won't talk until we see our lawyer!"). According to a September 1947 article in "Newsweek", Groucho, Harpo, Chico and Zeppo all signed to appear as themselves in a biopic entitled "The Life and Times of the Marx Brothers". In addition to being a non-fiction biography of the Marxes, the film would have featured the brothers reenacting much of their previously unfilmed material from both their vaudeville and Broadway eras. The film, had it been made, would have been the first performance by the Brothers as a quartet since 1933. The five brothers made only one television appearance together, in 1957, on an early incarnation of "The Tonight Show" called "Tonight! America After Dark", hosted by Jack Lescoulie. Five years later (October 1, 1962) after Jack Paar's tenure, Groucho made a guest appearance to introduce the "Tonight Show's" new host, Johnny Carson. 
Around 1960, the acclaimed director Billy Wilder considered writing and directing a new Marx Brothers film. Tentatively titled "A Day at the U.N.", it was to be a comedy of international intrigue set around the United Nations building in New York. Wilder had discussions with Groucho and Gummo, but the project was put on hold because of Harpo's ill-health and abandoned when Chico died in 1961. He was 74. Three years later, on September 28, 1964, Harpo died at the age of 75 of a heart attack one day after heart surgery. In 1966 Filmation produced a pilot for a Marx Brothers cartoon. Groucho's voice was supplied by Pat Harrington Jr. and other voices were done by Ted Knight and Joe Besser. In 1969, audio excerpts of dialogue from all five of the Marx Brothers' Paramount films were collected and released on an LP album, "The Original Voice Tracks from Their Greatest Movies", by Decca Records. The excerpts were interspersed with voice-over introductions by disc jockey and voice actor Gary Owens. The album was praised by "Billboard" as "a program of zany antics"; the magazine highlighted the excerpts of Groucho, who was "way ahead of his time in spoofing the 'establishment,' [and] at his hilarious biting best with his film soundtrack one-line zingers on his love life, his son, politics, big business, society, etc." "Village Voice" critic Robert Christgau was less enthusiastic, however, grading the LP a C-plus and recommending it only to fanatics of the comedy group. "This is the sort of record you buy out of duty and then never play, not because it's a comedy record but because it isn't funny out of context", Christgau wrote, while also expressing displeasure with the interspersing of small portions of "annoying music" and Owens's commentary throughout. In 1970, the four Marx Brothers had a brief reunion of sorts in the animated ABC television special "The Mad, Mad, Mad Comedians", produced by Rankin-Bass animation (of "Rudolph the Red-Nosed Reindeer" fame). 
The special featured animated reworkings of various famous comedians' acts, including W. C. Fields, Jack Benny, George Burns, Henny Youngman, the Smothers Brothers, Flip Wilson, Phyllis Diller, Jack E. Leonard, George Jessel and the Marx Brothers. Most of the comedians provided their own voices for their animated counterparts, except for Fields and Chico Marx (both had died), and Zeppo Marx (who had left show business in 1933). Voice actor Paul Frees filled in for all three (no voice was needed for Harpo). The Marx Brothers' segment was a reworking of a scene from their Broadway play "I'll Say She Is", a parody of Napoleon that Groucho considered among the brothers' funniest routines. The sketch featured animated representations, if not the voices, of all four brothers. Romeo Muller is credited as having written special material for the show, but the script for the classic "Napoleon Scene" was probably supplied by Groucho. On January 16, 1977, the Marx Brothers were inducted into the Motion Picture Hall of Fame. With the deaths of Gummo in April 1977, Groucho in August 1977, and Zeppo in November 1979, the brothers were gone, but their impact on the entertainment community continues well into the 21st century. Famous comedians who have cited them as influences on their style include Woody Allen, Alan Alda, Judd Apatow, Mel Brooks, John Cleese, Elliott Gould, Spike Milligan, Monty Python, Carl Reiner, David Zucker, Jerry Zucker and Jim Abrahams. Comedian Frank Ferrante has made a career of impersonating Groucho. Other celebrity fans of the comedy ensemble include Antonin Artaud, The Beatles, Anthony Burgess, Alice Cooper, Robert Crumb, Salvador Dalí, Eugene Ionesco, George Gershwin (who once dressed up as Groucho), René Goscinny, Cédric Klapisch, J. D. Salinger and Kurt Vonnegut. Dalí once made a drawing depicting Harpo. Peter Sellers imitates Groucho in "Let's Go Crazy" (1951). 
In "The Way We Were" (1973) the main characters attend a party dressed as the Marx Brothers. The real Groucho Marx also visited the set, where he was photographed by David F. Smith. In Woody Allen's "Take the Money and Run" (1969) Virgil's parents give an interview while wearing Groucho masks. "Annie Hall" (1977) starts off with a Groucho Marx joke, which is referred to again later. In "Manhattan" (1979), Allen's character names the Marx Brothers as the first thing that makes life worth living. In "Stardust Memories" there is a huge Groucho poster in the main character's flat. In Allen's film "Hannah and Her Sisters" (1986), Woody's character, after a suicide attempt, is inspired to go on living after seeing a revival showing of "Duck Soup". In "Everyone Says I Love You" (1996), the title itself a reference to a song from "Horse Feathers" (a version of which plays over the opening credits), Woody Allen and Goldie Hawn dress as Groucho for a Marx Brothers celebration in France, and the song "Hooray for Captain Spaulding", from "Animal Crackers", is performed, with various actors dressed as the brothers, striking poses familiar to Marx fans. In "Mighty Aphrodite" Woody's character suggests Harpo and Groucho as names for his son. In Terry Gilliam's "Brazil" (1985) a woman in a bathtub is watching "The Cocoanuts" when troops break into her house. In "Twelve Monkeys" (1995) the inmates of an insane asylum watch "Monkey Business" on TV. In the 1989 film "Indiana Jones and the Last Crusade", Professor Henry Jones (Sean Connery) mails his diary to his son Indiana Jones (Harrison Ford) to keep it out of Nazi hands. When Indy misunderstands why it was sent and returns it to his father instead, his father berates him by saying "I should have mailed it to the Marx Brothers!". In Rob Zombie's 2003 film "House of 1000 Corpses", the clown Captain Spaulding is named after the Marx Brothers character, and this is mentioned in the movie. 
In the Fleischer Brothers' "Betty Boop" cartoon "Betty in Blunderland" (1934) Betty sings "Everyone Says I Love You", a song owned by Paramount Pictures, which also owned Betty's cartoons as well as the Marx Brothers film it was taken from: "Horse Feathers". The Marx Brothers have cameos in the Disney cartoons "The Bird Store" (1932), "Mickey's Gala Premier" (1933), "Mickey's Polo Team" (1936), "Mother Goose Goes Hollywood" (1938) and "The Autograph Hound" (1939). Dopey in "Snow White and the Seven Dwarfs" was inspired by Harpo's mute performances. Tex Avery's cartoon "Hollywood Steps Out" (1941) features appearances by Harpo and Groucho. Bugs Bunny impersonated Groucho Marx in the 1947 cartoon "Slick Hare" (with Elmer Fudd dressing up as Harpo and chasing him with a cleaver), and in "Wideo Wabbit" (1956) he again impersonated Groucho, hosting a TV show called "You Beat Your Wife" and asking Elmer Fudd if he had stopped beating his wife. Many television shows and movies have used Marx Brothers references. "Animaniacs" and "Tiny Toons", for example, have featured Marx Brothers jokes and skits. The Genie imitates the Marx Brothers in "Aladdin and the King of Thieves". An episode of "Histeria!" about Communism portrays Groucho and Chico, respectively, as Karl Marx and Friedrich Engels. Harpo Marx appeared as himself on a 1955 episode of "I Love Lucy" in which he first performed "Take Me Out to the Ball Game" on his harp, and then he and Lucille Ball reprised the mirror routine from "Duck Soup", with Lucy dressed up as Harpo. Lucy had worked with the Marxes when she appeared in a supporting role in an earlier Marx Brothers film, "Room Service". Chico once appeared on "I've Got a Secret" dressed up as Harpo; his secret was shown in a caption reading, "I'm pretending to be Harpo Marx (I'm Chico)". Hawkeye Pierce (Alan Alda) on "M*A*S*H" occasionally put on a fake nose and glasses, and, holding a cigar, did a Groucho impersonation to amuse patients recovering from surgery. 
Early episodes also featured a singing, off-screen character named Captain Spaulding as a tribute. In the second episode of "The Muppet Show" Kermit the Frog sings "Lydia the Tattooed Lady". In the "Airwolf" episode "Condemned", four anti-virus formulae for a deadly plague were named after the four Marx Brothers. In "All in the Family", Rob Reiner often did imitations of Groucho, and Sally Struthers dressed as Harpo in one episode in which she (as Gloria Stivic) and Rob (as Mike Stivic) were going to a Marx Brothers film festival, with Reiner dressing as Groucho. Gabe Kaplan did many Groucho imitations on "Welcome Back, Kotter", and Robert Hegyes sometimes imitated both Chico and Harpo on the show. In an episode of "The Mary Tyler Moore Show", Murray puts his job on the line by calling the new station owner at home late at night to complain when the song "Hooray for Captain Spaulding" is cut from a showing of "Animal Crackers" under the new owner's policy of cutting more and more from shows to sell more ad time. In 1990 three puppets were made of Groucho, Harpo and Chico for the satirical TV show "Spitting Image". They were later used to portray the hunters in a 1994 TV production of "Peter and the Wolf", with Sting as narrator and puppets from the series as characters. The Marx Brothers were spoofed in the second act of the Broadway revue "A Day in Hollywood/A Night in the Ukraine". In the 1996 musical "By Jeeves", based on the Jeeves stories by P.G. Wodehouse, during "The Hallo Song", Gussie Fink-Nottle suggests "You're either Pablo Picasso", to which Cyrus Budge III replies "or maybe Harpo Marx!" Jacques Brel's song "Le Gaz" was inspired by the cabin scene in "A Night at the Opera". Rock band Queen named two of their albums after Marx Brothers films: "A Night at the Opera" (1975) and "A Day at the Races" (1976). On Freddie Mercury's solo album "Mr. Bad Guy", in the song "Living on My Own", he sings "I ain't got no time for no Monkey Business." 
In 2002 the band Blind Guardian also named an album "A Night at the Opera". Groucho Marx can be seen on the cover of "Alice Cooper's Greatest Hits" by Alice Cooper. English punk band The Damned titled their 1980 single "There Ain't No Sanity Clause" in reference to a famous line from "A Night at the Opera". On the 1988 album "Modern Lovers '88" by Modern Lovers there is a track called "When Harpo Played His Harp". The band Karl and the Marx Brothers takes its name from them. Jack Kerouac wrote a poem "To Harpo Marx". Ron Goulart wrote six books between 1998 and 2005 in which Groucho Marx appears as a detective. In the Vlasic Pickles commercials, the stork associated with the product holds a pickle the way Groucho held a cigar and, in a Groucho voice, says, "Now that's the best tastin' pickle I ever heard!" and bites into the pickle. In the 1974 Academy Awards telecast, Jack Lemmon presented Groucho with an honorary Academy Award to a standing ovation. The award was also on behalf of Harpo, Chico, and Zeppo, whom Lemmon mentioned by name. It was one of Groucho's final major public appearances. "I wish that Harpo and Chico could be here to share with me this great honor", he said, naming the two deceased brothers (Zeppo was still alive at the time and in the audience). Groucho also praised the late Margaret Dumont as a great straight woman who never understood any of his jokes. The Marx Brothers were collectively named #20 on AFI's list of the Top 25 American male screen legends of Classic Hollywood. They are the only group to be so honored. The "Sweathogs" of the ABC-TV series "Welcome Back, Kotter" (John Travolta, Robert Hegyes, Lawrence Hilton-Jacobs, and Ron Palillo) patterned much of their on-camera banter in that series after the Marx Brothers. Series star Gabe Kaplan was reputedly a big Marx Brothers fan.
https://en.wikipedia.org/wiki?curid=19669
MP3 MP3 (formally MPEG-1 Audio Layer III or MPEG-2 Audio Layer III) is a coding format for digital audio. Originally defined as the third audio format of the MPEG-1 standard, it was retained and further extended—defining additional bit-rates and support for more audio channels—as the third audio format of the subsequent MPEG-2 standard. A third version, known as MPEG 2.5—extended to better support lower bit rates—is commonly implemented, but is not a recognized standard. MP3 (or mp3) as a file format commonly designates files containing an elementary stream of MPEG-1 Audio or MPEG-2 Audio encoded data, without other complexities of the MP3 standard. In regard to audio compression (the aspect of the standard most apparent to end-users, and for which it is best known), MP3 uses lossy data compression to encode data using inexact approximations and the partial discarding of data. This allows a large reduction in file sizes when compared to uncompressed audio. The combination of small size and acceptable fidelity led to a boom in the distribution of music over the Internet in the mid- to late-1990s, with MP3 serving as an enabling technology at a time when bandwidth and storage were still at a premium. The MP3 format soon became associated with controversies surrounding copyright infringement, music piracy, and the file ripping/sharing services MP3.com and Napster, among others. With the advent of portable media players, a product category also including smartphones, MP3 support remains near-universal. MP3 compression works by reducing (or approximating) the accuracy of certain components of sound that are considered (by psychoacoustic analysis) to be beyond the hearing capabilities of most humans. This method is commonly referred to as perceptual coding or as psychoacoustic modeling. The remaining audio information is then recorded in a space-efficient manner, using MDCT and FFT algorithms. 
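The MDCT step mentioned above can be illustrated in a few lines. The sketch below is a naive, illustrative transform (pure Python, O(N²)), not the hybrid filter bank an actual MP3 encoder uses, which couples a 32-band polyphase filter with the MDCT:

```python
import math

def mdct(x):
    """Naive forward MDCT: maps 2N time samples to N frequency coefficients.

    The 2:1 reduction, combined with the 50% overlap between consecutive
    input blocks, keeps the transform critically sampled.
    """
    N = len(x) // 2
    return [
        sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
            for n in range(2 * N))
        for k in range(N)
    ]

# MP3's long blocks feed 36 windowed samples per subband into the MDCT,
# yielding 18 spectral coefficients per subband.
block = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(36)]
coeffs = mdct(block)
```

A production encoder would additionally apply a window satisfying the Princen-Bradley condition and use a fast (FFT-based) MDCT rather than this direct summation.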
Compared to CD-quality digital audio, MP3 compression can commonly achieve a 75 to 95% reduction in size. For example, an MP3 encoded at a constant bitrate of 128 kbit/s would result in a file approximately 9% of the size of the original CD audio. The Moving Picture Experts Group (MPEG) designed MP3 as part of its MPEG-1, and later MPEG-2, standards. MPEG-1 Audio (MPEG-1 Part 3), which included MPEG-1 Audio Layer I, II and III, was approved as a committee draft for an ISO/IEC standard in 1991, finalised in 1992, and published in 1993 as ISO/IEC 11172-3:1993. An MPEG-2 Audio (MPEG-2 Part 3) extension with lower sample- and bit-rates was published in 1995 as ISO/IEC 13818-3:1995. It requires only minimal modifications to existing MPEG-1 decoders (recognition of the MPEG-2 bit in the header and addition of the new lower sample and bit rates). The MP3 lossy audio data compression algorithm takes advantage of a perceptual limitation of human hearing called auditory masking. In 1894, the American physicist Alfred M. Mayer reported that a tone could be rendered inaudible by another tone of lower frequency. In 1959, Richard Ehmer described a complete set of auditory curves regarding this phenomenon. Between 1967 and 1974, Eberhard F. Zwicker did work in the areas of tuning and masking of critical frequency bands, which in turn built on the fundamental research in the area from Harvey Fletcher and his collaborators at Bell Labs. Perceptual coding was first used for speech coding compression with linear predictive coding (LPC), which has origins in the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. In 1978, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs proposed an LPC speech codec, called adaptive predictive coding, that used a psychoacoustic coding algorithm exploiting the masking properties of the human ear. Further optimisation by Schroeder and Atal with J.L. Hall was later reported in a 1979 paper. 
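The reduction figures quoted at the start of this section follow from simple bitrate arithmetic; a quick check against CD audio's standard parameters (44.1 kHz, 2 channels, 16 bits per sample):

```python
# CD audio: 44.1 kHz sampling, 2 channels, 16 bits per sample
cd_bitrate = 44100 * 2 * 16        # 1,411,200 bit/s
mp3_bitrate = 128_000              # constant 128 kbit/s MP3

ratio = mp3_bitrate / cd_bitrate   # fraction of the original size
print(f"{ratio:.1%}")              # about 9% of the original, a ~91% reduction
```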
That same year, a psychoacoustic masking codec was also proposed by M. A. Krasner, who published and produced hardware for speech (not usable as music bit compression), but the publication of his results in a relatively obscure Lincoln Laboratory Technical Report did not immediately influence the mainstream of psychoacoustic codec development. The discrete cosine transform (DCT), a type of transform coding for lossy compression, was proposed by Nasir Ahmed in 1972, and developed by Ahmed with T. Natarajan and K. R. Rao in 1973, publishing their results in 1974. This led to the development of the modified discrete cosine transform (MDCT), proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986. The MDCT later became a core part of the MP3 algorithm. Ernst Terhardt "et al." created an algorithm describing auditory masking with high accuracy in 1982. This work added to a variety of reports from authors dating back to Fletcher, and to the work that initially determined critical ratios and critical bandwidths. In 1985, Atal and Schroeder presented code-excited linear prediction (CELP), an LPC-based perceptual speech coding algorithm with auditory masking that achieved a significant compression ratio for its time. A wide variety of (mostly perceptual) audio compression algorithms were reported in IEEE's refereed "Journal on Selected Areas in Communications" in 1988. The "Voice Coding for Communications" edition published in February 1988 reported on a wide range of established, working audio bit compression technologies, some of them using auditory masking as part of their fundamental design, and several showing real-time hardware implementations. The genesis of the MP3 technology is fully described in a paper from Professor Hans Musmann, who chaired the ISO MPEG Audio group for several years. In December 1988, MPEG called for an audio coding standard. 
In June 1989, 14 audio coding algorithms were submitted. Because of certain similarities between these coding proposals, they were clustered into four development groups. The first group was MUSICAM, by Matsushita, CCETT, ITT and Philips. The second group was ASPEC, by AT&T, France Telecom, Fraunhofer Gesellschaft and Deutsche Thomson-Brandt. The third group was ATAC, by Fujitsu, JVC, NEC and Sony. And the fourth group was SB-ADPCM, by NTT and BTRL. The immediate predecessors of MP3 were "Optimum Coding in the Frequency Domain" (OCF) and Perceptual Transform Coding (PXFM). These two codecs, along with block-switching contributions from Thomson-Brandt, were merged into a codec called ASPEC, which was submitted to MPEG and won the quality competition, but was mistakenly rejected as too complex to implement. The first practical implementation of an audio perceptual coder (OCF) in hardware (Krasner's hardware was too cumbersome and slow for practical use) was an implementation of a psychoacoustic transform coder based on Motorola 56000 DSP chips. Another predecessor of the MP3 format and technology is the perceptual codec MUSICAM, based on an integer-arithmetic 32-sub-band filter bank driven by a psychoacoustic model. It was primarily designed for Digital Audio Broadcasting (digital radio) and digital TV, and its basic principles were disclosed to the scientific community by CCETT (France) and IRT (Germany) in Atlanta during an IEEE-ICASSP conference in 1991, after they had worked on MUSICAM with Matsushita and Philips since 1989. This codec, incorporated into a broadcasting system using COFDM modulation, was demonstrated on air and in the field together with Radio Canada and CRC Canada during the NAB show (Las Vegas) in 1991. The implementation of the audio part of this broadcasting system was based on a two-chip encoder (one for the subband transform, one for the psychoacoustic model designed by the team of G. 
Stoll (IRT Germany), later known as psychoacoustic model I) and a real-time decoder using one Motorola 56001 DSP chip running integer-arithmetic software designed by Y.F. Dehery's team (CCETT, France). The simplicity of the corresponding decoder, together with the high audio quality of this codec (the first to use a 48 kHz sampling frequency and a 20-bit-per-sample input format, the highest available sampling standard in 1991, compatible with the AES/EBU professional digital studio input standard), were the main reasons for later adopting the characteristics of MUSICAM as the basic features of an advanced digital music compression codec. During the development of the MUSICAM encoding software, Stoll and Dehery's team made thorough use of a set of high-quality audio assessment material selected by a group of audio professionals from the European Broadcasting Union and later used as a reference for the assessment of music compression codecs. The subband coding technique was found to be efficient not only for the perceptual coding of high-quality sound materials but especially for the encoding of critical percussive sound materials (drums, triangle, etc.), due to the specific temporal masking effect of the MUSICAM sub-band filter bank (this advantage being a specific feature of short transform coding techniques). As a doctoral student at Germany's University of Erlangen-Nuremberg, Karlheinz Brandenburg began working on digital music compression in the early 1980s, focusing on how people perceive music. He completed his doctoral work in 1989. MP3 is directly descended from OCF and PXFM, representing the outcome of the collaboration of Brandenburg—working as a postdoc at AT&T-Bell Labs with James D. 
Johnston ("JJ") of AT&T-Bell Labs—with the Fraunhofer Institute for Integrated Circuits, Erlangen (where he worked with Bernhard Grill and four other researchers – "The Original Six"), with relatively minor contributions from the MP2 branch of psychoacoustic sub-band coders. In 1990, Brandenburg became an assistant professor at Erlangen-Nuremberg. While there, he continued to work on music compression with scientists at the Fraunhofer Society's Heinrich Hertz Institute (in 1993 he joined the staff of Fraunhofer HHI). The song "Tom's Diner" by Suzanne Vega was the first song used by Karlheinz Brandenburg to develop the MP3 format. Brandenburg adopted the song for testing purposes, listening to it again and again, each time refining the scheme to make sure it did not adversely affect the subtlety of Vega's voice. In 1991, there were two available proposals that were assessed for an MPEG audio standard: MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) and ASPEC (Adaptive Spectral Perceptual Entropy Coding). The MUSICAM technique, proposed by Philips (Netherlands), CCETT (France), the Institute for Broadcast Technology (Germany), and Matsushita (Japan), was chosen due to its simplicity and error robustness, as well as for its high level of computational efficiency. The MUSICAM format, based on sub-band coding, became the basis for the MPEG Audio compression format, incorporating, for example, its frame structure, header format, sample rates, etc. While much of MUSICAM's technology and ideas were incorporated into the definition of MPEG Audio Layer I and Layer II, the filter bank alone and the data structure based on 1152-sample framing (file format and byte-oriented stream) of MUSICAM remained in the Layer III (MP3) format, as part of the computationally inefficient hybrid filter bank. 
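The 1152-sample framing that survived into Layer III fixes the frame-size arithmetic: each MPEG-1 Layer III frame carries 1152 samples, i.e. 1152/8 = 144 bytes per unit of bitrate divided by sample rate, plus an optional padding byte. A small illustration (the helper name below is ours, not from the standard):

```python
def layer3_frame_bytes(bitrate, sample_rate, padding=0):
    """Frame length in bytes for an MPEG-1 Layer III frame.

    1152 samples per frame / 8 bits per byte = 144.
    """
    return 1152 // 8 * bitrate // sample_rate + padding

# A 128 kbit/s, 44.1 kHz stream alternates 417- and 418-byte frames,
# since 144 * 128000 / 44100 is not an integer.
print(layer3_frame_bytes(128_000, 44100))              # 417
print(layer3_frame_bytes(128_000, 44100, padding=1))   # 418
```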
Under the chairmanship of Professor Musmann of the Leibniz University Hannover, the editing of the standard was delegated to Leon van de Kerkhof (Netherlands), Gerhard Stoll (Germany), and Yves-François Dehery (France), who worked on Layer I and Layer II. ASPEC was the joint proposal of AT&T Bell Laboratories, Thomson Consumer Electronics, Fraunhofer Society and CNET. It provided the highest coding efficiency. A working group consisting of van de Kerkhof, Stoll, Leonardo Chiariglione (CSELT VP for Media), Yves-François Dehery, Karlheinz Brandenburg (Germany) and James D. Johnston (United States) took ideas from ASPEC, integrated the filter bank from Layer II, added some of their own ideas such as the joint stereo coding of MUSICAM, and created the MP3 format, which was designed to achieve the same quality at 128 kbit/s as MP2 at 192 kbit/s. The algorithms for MPEG-1 Audio Layer I, II and III were approved in 1991 and finalized in 1992 as part of MPEG-1, the first standard suite by MPEG, which resulted in the international standard ISO/IEC 11172-3 (a.k.a. "MPEG-1 Audio" or "MPEG-1 Part 3"), published in 1993. Files or data streams conforming to this standard must use sample rates of 48, 44.1 or 32 kHz, and continue to be supported by current MP3 players and decoders. The first generation of MP3 thus defined the interpretation of MP3 frame data structures and size layouts. Further work on MPEG audio was finalized in 1994 as part of the second suite of MPEG standards, MPEG-2, more formally known as international standard ISO/IEC 13818-3 (a.k.a. "MPEG-2 Part 3", or the backwards-compatible "MPEG-2 Audio"/"MPEG-2 Audio BC"), originally published in 1995. MPEG-2 Part 3 (ISO/IEC 13818-3) defined 42 additional bit rates and sample rates for MPEG-1 Audio Layer I, II and III. The new sampling rates are exactly half of those originally defined in MPEG-1 Audio. 
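Continuing this halving pattern, the unofficial MPEG-2.5 extension mentioned earlier halves the MPEG-2 rates once more, giving the format family nine sample rates in total; they can be enumerated directly (a sketch, with MPEG-1's three rates taken from ISO/IEC 11172-3):

```python
MPEG1_RATES = [32000, 44100, 48000]   # Hz, defined in ISO/IEC 11172-3

SAMPLE_RATES = {
    "MPEG-1":   MPEG1_RATES,
    "MPEG-2":   [r // 2 for r in MPEG1_RATES],   # 16000, 22050, 24000
    "MPEG-2.5": [r // 4 for r in MPEG1_RATES],   # 8000, 11025, 12000
}

for generation, rates in SAMPLE_RATES.items():
    print(generation, rates)
```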
This reduction in sampling rate serves to cut the available frequency fidelity in half while likewise cutting the bitrate by 50%. MPEG-2 Part 3 also enhanced MPEG-1's audio by allowing the coding of audio programs with more than two channels, up to 5.1 multichannel. An MP3 coded with MPEG-2 reproduces half the audio bandwidth of an MPEG-1 coding, which is appropriate for piano and singing. A third generation of "MP3" style data streams (files) extended the "MPEG-2" ideas and implementation but was named "MPEG-2.5" audio, since MPEG-3 already had a different meaning. This extension was developed at Fraunhofer IIS, the registered patent holders of MP3, by reducing the frame sync field in the MP3 header from 12 to 11 bits. As in the transition from MPEG-1 to MPEG-2, MPEG-2.5 adds additional sampling rates exactly half of those available using MPEG-2. It thus widens the scope of MP3 to include human speech and other applications, yet requires only 25% of the bandwidth (frequency reproduction) possible using MPEG-1 sampling rates. While not an ISO-recognized standard, MPEG-2.5 is widely supported by inexpensive Chinese and brand-name digital audio players alike, as well as by computer software MP3 encoders (LAME), decoders (FFmpeg) and players (MPC) that add the additional MP3 frame types. Each generation of MP3 thus supports 3 sampling rates exactly half those of the previous generation, for a total of 9 varieties of MP3 format files. The sample rate comparison table between MPEG-1, 2 and 2.5 is given later in the article. MPEG-2.5 is supported by LAME (since 2000), Media Player Classic (MPC), iTunes, and FFmpeg. MPEG-2.5 was not developed by MPEG (see above) and was never approved as an international standard. MPEG-2.5 is thus an unofficial or proprietary extension to the MP3 format. It is nonetheless ubiquitous and especially advantageous for low-bit-rate human speech applications. The ISO standard ISO/IEC 11172-3 (a.k.a. 
MPEG-1 Audio) defined three formats: the MPEG-1 Audio Layer I, Layer II and Layer III. The ISO standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Audio) defined an extended version of MPEG-1 Audio: MPEG-2 Audio Layer I, Layer II and Layer III. MPEG-2 Audio (MPEG-2 Part 3) should not be confused with MPEG-2 AAC (MPEG-2 Part 7 – ISO/IEC 13818-7). Compression efficiency of encoders is typically defined by the bit rate, because compression ratio depends on the bit depth and sampling rate of the input signal. Nevertheless, compression ratios are often published. They may use the Compact Disc (CD) parameters as references (44.1 kHz, 2 channels at 16 bits per channel, or 2×16 bit), or sometimes the Digital Audio Tape (DAT) SP parameters (48 kHz, 2×16 bit). Compression ratios with this latter reference are higher, which demonstrates the problem with use of the term "compression ratio" for lossy encoders. Karlheinz Brandenburg used a CD recording of Suzanne Vega's song "Tom's Diner" to assess and refine the MP3 compression algorithm. This song was chosen because of its nearly monophonic nature and wide spectral content, making it easier to hear imperfections in the compression format during playback. Some refer to Suzanne Vega as "The mother of MP3". This particular track has an interesting property in that the two channels are almost, but not completely, the same, leading to a case where Binaural Masking Level Depression causes spatial unmasking of noise artifacts unless the encoder properly recognizes the situation and applies corrections similar to those detailed in the MPEG-2 AAC psychoacoustic model. Some more critical audio excerpts (glockenspiel, triangle, accordion, etc.) were taken from the EBU V3/SQAM reference compact disc and have been used by professional sound engineers to assess the subjective quality of the MPEG Audio formats. LAME is the most advanced MP3 encoder. 
LAME includes VBR (variable bit rate) encoding, which uses a quality parameter rather than a bit rate goal. Later versions (2008+) support an n.nnn quality goal which automatically selects MPEG-2 or MPEG-2.5 sampling rates as appropriate for human speech recordings, which need only 5512 Hz of bandwidth. A reference simulation software implementation, written in the C language and later known as "ISO 11172-5", was developed (in 1991–1996) by the members of the ISO MPEG Audio committee in order to produce bit-compliant MPEG Audio files (Layer 1, Layer 2, Layer 3). It was approved as a committee draft of an ISO/IEC technical report in March 1994 and printed as document CD 11172-5 in April 1994. It was approved as a draft technical report (DTR/DIS) in November 1994, finalized in 1996 and published as international standard ISO/IEC TR 11172-5:1998 in 1998. The reference software in C language was later published as a freely available ISO standard. Working in non-real time on a number of operating systems, it was able to demonstrate the first real-time hardware decoding (DSP-based) of compressed audio. Some other real-time implementations of MPEG Audio encoders and decoders were available for the purpose of digital broadcasting (radio DAB, television DVB) towards consumer receivers and set-top boxes. On 7 July 1994, the Fraunhofer Society released the first software MP3 encoder, called l3enc. The filename extension ".mp3" was chosen by the Fraunhofer team on 14 July 1995 (previously, the files had been named ".bit"). With the first real-time software MP3 player, WinPlay3 (released 9 September 1995), many people were able to encode and play back MP3 files on their PCs. 
Because of the relatively small hard drives of the era (≈500–1000 MB), lossy compression was essential to store multiple albums' worth of music on a home computer as full recordings (as opposed to MIDI notation, or tracker files which combined notation with short recordings of instruments playing single notes). As sound scholar Jonathan Sterne notes, "An Australian hacker acquired l3enc using a stolen credit card. The hacker then reverse-engineered the software, wrote a new user interface, and redistributed it for free, naming it "thank you Fraunhofer"". A hacker named SoloH discovered the source code of the "dist10" MPEG reference implementation shortly after its release on the servers of the University of Erlangen. He developed a higher-quality version and spread it on the internet. This code sparked the widespread CD ripping and digital music distribution in MP3 format over the internet. In the second half of the 1990s, MP3 files began to spread on the Internet, often via underground pirated song networks. The first known experiment in Internet distribution was organized in the early 1990s by the Internet Underground Music Archive, better known by the acronym IUMA. After some experiments using uncompressed audio files, this archive started to deliver, over the low-speed Internet of the time, compressed MPEG Audio files using the MP2 (Layer II) format, and later used MP3 files once the standard was fully completed. The popularity of MP3s began to rise rapidly with the advent of Nullsoft's audio player Winamp, released in 1997. In 1998, the first portable solid-state digital audio player, MPMan, developed by SaeHan Information Systems, headquartered in Seoul, South Korea, was released, and the Rio PMP300 went on sale later in 1998, despite legal suppression efforts by the RIAA. In November 1997, the website mp3.com was offering thousands of MP3s created by independent artists for free.
The small size of MP3 files enabled widespread peer-to-peer file sharing of music ripped from CDs, which would have previously been nearly impossible. The first large peer-to-peer filesharing network, Napster, was launched in 1999. The ease of creating and sharing MP3s resulted in widespread copyright infringement. Major record companies argued that this free sharing of music reduced sales, and called it "music piracy". They reacted by pursuing lawsuits against Napster (which was eventually shut down and later sold) and against individual users who engaged in file sharing. Unauthorized MP3 file sharing continues on next-generation peer-to-peer networks. Some authorized services, such as Beatport, Bleep, Juno Records, eMusic, Zune Marketplace, Walmart.com, Rhapsody, the recording-industry-approved reincarnation of Napster, and Amazon.com sell unrestricted music in the MP3 format. An MP3 file is made up of MP3 frames, which consist of a header and a data block. This sequence of frames is called an elementary stream. Due to the "bit reservoir", frames are not independent items and cannot usually be extracted on arbitrary frame boundaries. The MP3 data blocks contain the (compressed) audio information in terms of frequencies and amplitudes. The MP3 header consists of a sync word, which is used to identify the beginning of a valid frame. This is followed by a bit indicating the MPEG standard and two bits indicating that layer 3 is used; hence MPEG-1 Audio Layer 3 or MP3. After this, the values will differ, depending on the MP3 file. "ISO/IEC 11172-3" defines the range of values for each section of the header along with the specification of the header. Most MP3 files today contain ID3 metadata, which precedes or follows the MP3 frames. The data stream can contain an optional checksum. Joint stereo is done only on a frame-to-frame basis. The MP3 encoding algorithm is generally split into four parts.
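The header layout described above can be illustrated with a small parser. The following is a minimal sketch for a plain MPEG-1 Layer III header; the function and variable names are illustrative, while the table values and bit positions come from ISO/IEC 11172-3:

```python
# Minimal sketch: parse a 4-byte MPEG-1 Layer III frame header.
# Table values follow ISO/IEC 11172-3; names here are illustrative only.

BITRATES_V1_L3 = [0, 32, 40, 48, 56, 64, 80, 96, 112,
                  128, 160, 192, 224, 256, 320]   # kbit/s (index 0 = "free")
SAMPLE_RATES_V1 = [44100, 48000, 32000]           # Hz

def parse_header(b: bytes) -> dict:
    h = int.from_bytes(b[:4], "big")
    if (h >> 21) & 0x7FF != 0x7FF:                # 11-bit sync word, all ones
        raise ValueError("no sync word")
    version = (h >> 19) & 0x3                     # 3 = MPEG-1
    layer = (h >> 17) & 0x3                       # 1 = Layer III
    if version != 3 or layer != 1:
        raise ValueError("not MPEG-1 Layer III")
    bitrate = BITRATES_V1_L3[(h >> 12) & 0xF] * 1000
    sample_rate = SAMPLE_RATES_V1[(h >> 10) & 0x3]
    padding = (h >> 9) & 0x1
    # Standard Layer III frame length: 144 * bitrate / sample_rate (+ padding)
    frame_bytes = 144 * bitrate // sample_rate + padding
    return {"bitrate": bitrate, "sample_rate": sample_rate,
            "frame_bytes": frame_bytes}

# 0xFFFB9000: MPEG-1 Layer III, 128 kbit/s, 44.1 kHz, no padding
info = parse_header(bytes([0xFF, 0xFB, 0x90, 0x00]))
```

A real parser would also decode the channel mode, CRC flag and emphasis bits, and would skip any ID3 metadata before searching for the sync word.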
Part 1 divides the audio signal into smaller pieces, called frames, and a modified discrete cosine transform (MDCT) filter is then performed on the output. Part 2 passes the sample into a 1024-point fast Fourier transform (FFT), then the psychoacoustic model is applied and another MDCT filter is performed on the output. Part 3 quantizes and encodes each sample, a process known as noise allocation, which adjusts itself in order to meet the bit rate and sound masking requirements. Part 4 formats the bitstream, called an audio frame, which is made up of four parts: the header, error check, audio data, and ancillary data. The MPEG-1 standard does not include a precise specification for an MP3 encoder, but does provide example psychoacoustic models, rate loop, and the like in the non-normative part of the original standard. MPEG-2 doubles the number of sampling rates which are supported and MPEG-2.5 adds 3 more. When this was written, the suggested implementations were quite dated. Implementers of the standard were supposed to devise their own algorithms suitable for removing parts of the information from the audio input. As a result, many different MP3 encoders became available, each producing files of differing quality. Comparisons were widely available, so it was easy for a prospective user of an encoder to research the best choice. Some encoders that were proficient at encoding at higher bit rates (such as LAME) were not necessarily as good at lower bit rates. Over time, LAME evolved on the SourceForge website until it became the de facto CBR MP3 encoder. Later an ABR mode was added. Work progressed on true variable bit rate using a quality goal between 0 and 10. Eventually fractional settings (such as -V 9.600) could generate excellent-quality low-bit-rate voice encoding at only 41 kbit/s using the MPEG-2.5 extensions. During encoding, 576 time-domain samples are taken and are transformed to 576 frequency-domain samples. If there is a transient, 192 samples are taken instead of 576.
This is done to limit the temporal spread of quantization noise accompanying the transient. (See psychoacoustics.) Frequency resolution is limited by the small long block window size, which decreases coding efficiency. Time resolution can be too low for highly transient signals and may cause smearing of percussive sounds. Due to the tree structure of the filter bank, pre-echo problems are made worse, as the combined impulse response of the two filter banks does not, and cannot, provide an optimum solution in time/frequency resolution. Additionally, the combining of the two filter banks' outputs creates aliasing problems that must be handled partially by the "aliasing compensation" stage; however, that creates excess energy to be coded in the frequency domain, thereby decreasing coding efficiency. Decoding, on the other hand, is carefully defined in the standard. Most decoders are "bitstream compliant", which means that the decompressed output that they produce from a given MP3 file will be the same, within a specified degree of rounding tolerance, as the output specified mathematically in the ISO/IEC standard document (ISO/IEC 11172-3). Therefore, comparison of decoders is usually based on how computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process). Over time, this concern has become less of an issue as CPU speeds transitioned from MHz to GHz. Encoder/decoder overall delay is not defined, which means there is no official provision for gapless playback. However, some encoders, such as LAME, can attach additional metadata that will allow players that can handle it to deliver seamless playback. When performing lossy audio encoding, such as creating an MP3 data stream, there is a trade-off between the amount of data generated and the sound quality of the results. The person generating an MP3 selects a bit rate, which specifies how many kilobits per second of audio is desired.
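The MDCT stage of the filter bank described above relies on time-domain aliasing cancellation (TDAC): each block overlaps its neighbours by half, and the aliasing each inverse transform introduces cancels when overlapping blocks are added. A minimal pure-Python sketch of this property follows; the block size N and the sine window here are generic illustrations, not MP3's exact filter-bank configuration:

```python
import math

# Sketch of the MDCT and its time-domain aliasing cancellation (TDAC):
# 2N windowed samples -> N coefficients, overlap-add reconstructs the signal.

def mdct(x, N):
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N)) for k in range(N)]

def imdct(X, N):
    return [(2.0 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                            for k in range(N)) for n in range(2 * N)]

N = 6
w = [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]  # sine window
x = [math.sin(0.3 * n) for n in range(4 * N)]                        # test signal

# Analysis: overlapping 2N-sample blocks, hop size N, windowed before the MDCT
blocks = [mdct([w[n] * x[p * N + n] for n in range(2 * N)], N) for p in range(3)]

# Synthesis: window again and overlap-add; the aliasing terms cancel
y = [0.0] * (4 * N)
for p, X in enumerate(blocks):
    out = imdct(X, N)
    for n in range(2 * N):
        y[p * N + n] += w[n] * out[n]

# Interior samples (covered by two overlapping blocks) are recovered exactly
mid = y[N:3 * N]
```

The sine window satisfies the Princen–Bradley condition (w[n]² + w[n+N]² = 1), which is what makes the cross terms cancel on overlap-add; MP3's window switching between long and short blocks preserves this condition at the transitions.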
The higher the bit rate, the larger the MP3 data stream will be, and, generally, the closer it will sound to the original recording. With too low a bit rate, compression artifacts (i.e., sounds that were not present in the original recording) may be audible in the reproduction. Some audio is hard to compress because of its randomness and sharp attacks. When this type of audio is compressed, artifacts such as ringing or pre-echo are usually heard. A sample of applause or a triangle instrument with a relatively low bit rate provides a good example of compression artifacts. Most subjective tests of perceptual codecs tend to avoid using these types of sound materials; however, the artifacts generated by percussive sounds are barely perceptible due to the specific temporal masking feature of the 32 sub-band filterbank of Layer II on which the format is based. Besides the bit rate of an encoded piece of audio, the quality of MP3-encoded sound also depends on the quality of the encoder algorithm as well as the complexity of the signal being encoded. As the MP3 standard allows quite a bit of freedom with encoding algorithms, different encoders can produce output of quite different quality, even at identical bit rates. As an example, in a public listening test featuring two early MP3 encoders set at about 128 kbit/s, one scored 3.66 on a 1–5 scale, while the other scored only 2.22. Quality is dependent on the choice of encoder and encoding parameters. This observation caused a revolution in audio encoding. Early on, bitrate was the prime and only consideration. At the time, MP3 files were of the very simplest type: they used the same bit rate for the entire file, a process known as Constant Bit Rate (CBR) encoding. Using a constant bit rate makes encoding simpler and less CPU-intensive. However, it is also possible to create files where the bit rate changes throughout the file. These are known as Variable Bit Rate (VBR) files.
The bit reservoir and VBR encoding were actually part of the original MPEG-1 standard. The concept behind them is that, in any piece of audio, some sections are easier to compress, such as silence or music containing only a few tones, while others will be more difficult to compress. So, the overall quality of the file may be increased by using a lower bit rate for the less complex passages and a higher one for the more complex parts. With some advanced MP3 encoders, it is possible to specify a given quality, and the encoder will adjust the bit rate accordingly. Users who desire a particular "quality setting" that is transparent to their ears can use this value when encoding all of their music, and generally speaking need not worry about performing personal listening tests on each piece of music to determine the correct bit rate. Perceived quality can be influenced by the listening environment (ambient noise), listener attention, listener training, and in most cases the listener's audio equipment (such as sound cards, speakers and headphones). Furthermore, sufficient quality may be achieved by a lesser quality setting for lectures and human speech applications, which reduces encoding time and complexity. A test given to new students by Stanford University music professor Jonathan Berger showed that student preference for MP3-quality music has risen each year. Berger said the students seem to prefer the 'sizzle' sounds that MP3s bring to music. In an in-depth study of MP3 audio quality, sound artist and composer Ryan Maguire's project "The Ghost in the MP3" isolates the sounds lost during MP3 compression. In 2015, he released the track "moDernisT" (an anagram of "Tom's Diner"), composed exclusively from the sounds deleted during MP3 compression of the song "Tom's Diner", the track originally used in the formulation of the MP3 standard.
A detailed account of the techniques used to isolate the sounds deleted during MP3 compression, along with the conceptual motivation for the project, was published in the 2014 Proceedings of the International Computer Music Conference. Bitrate is the product of the sample rate and the number of bits per sample used to encode the music. CD audio is 44100 samples per second. The number of bits per sample also depends on the number of audio channels. CD is stereo with 16 bits per channel. So, multiplying 44100 by 32 gives 1411200—the bitrate of uncompressed CD digital audio. MP3 was designed to encode this 1411 kbit/s data at 320 kbit/s or less. When less complex passages are detected by MP3 algorithms, lower bitrates may be employed. When using MPEG-2 instead of MPEG-1, MP3 supports only lower sampling rates (16000, 22050 or 24000 samples per second) and offers choices of bitrate as low as 8 kbit/s but no higher than 160 kbit/s. By lowering the sampling rate, MPEG-2 Layer III removes all frequencies above half the new sampling rate that may have been present in the source audio. Fourteen selected bit rates are allowed in the MPEG-1 Audio Layer III standard: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s, along with the 3 highest available sampling frequencies of 32, 44.1 and 48 kHz. MPEG-2 Audio Layer III also allows 14 somewhat different (and mostly lower) bit rates of 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, 160 kbit/s with sampling frequencies of 16, 22.05 and 24 kHz, which are exactly half those of MPEG-1. MPEG-2.5 Audio Layer III frames are limited to only 8 bit rates of 8, 16, 24, 32, 40, 48, 56 and 64 kbit/s with 3 even lower sampling frequencies of 8, 11.025, and 12 kHz. MPEG-1 frames contain the most detail in 320 kbit/s mode, with silence and simple tones still requiring 32 kbit/s. MPEG-2 frames can capture sound reproduction up to 12 kHz, needing up to 160 kbit/s.
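The arithmetic above can be checked in a few lines:

```python
# Worked check of the uncompressed CD bitrate and an MP3 compression ratio
sample_rate = 44100        # samples per second
bits_per_sample = 16
channels = 2

cd_bitrate = sample_rate * bits_per_sample * channels   # bits per second
assert cd_bitrate == 1_411_200

# Compression ratio of a 128 kbit/s MP3 relative to CD audio
ratio = cd_bitrate / 128_000
print(round(ratio, 1))     # prints 11.0, i.e. about 11:1
```

The same calculation with the 160 and 192 kbit/s settings yields the roughly 9:1 and 7:1 ratios quoted elsewhere in this article.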
MP3 files made with MPEG-2 don't have 20 kHz bandwidth because of the Nyquist–Shannon sampling theorem. Frequency reproduction is always strictly less than half of the sampling frequency, and imperfect filters require a larger margin for error (noise level versus sharpness of filter), so an 8 kHz sampling rate limits the maximum frequency to 4 kHz, while a 48 kHz sampling rate limits an MP3 to a maximum 24 kHz sound reproduction. MPEG-2 uses half and MPEG-2.5 only a quarter of the MPEG-1 sample rates. For the general field of human speech reproduction, a bandwidth of 5512 Hz is sufficient to produce excellent results (for voice) using the 11025 Hz sampling rate and VBR encoding from a standard 44100 Hz WAV file. English speakers average 41–42 kbit/s with the -V 9.6 setting, but this may vary with the amount of silence recorded or the rate of delivery (wpm). Resampling to 12000 Hz (6 kHz bandwidth) is selected by the LAME parameter -V 9.4. Likewise, -V 9.2 selects a 16000 Hz sample rate and a resultant 8 kHz lowpass filter. For more information, see the Nyquist–Shannon sampling theorem. Older versions of LAME and FFmpeg only support integer arguments for the variable bit rate quality selection parameter. The n.nnn quality parameter (-V) is documented at lame.sourceforge.net, but is only supported in LAME with the new-style VBR variable bit rate quality selector—not average bit rate (ABR). A sample rate of 44.1 kHz is commonly used for music reproduction, because this is also used for CD audio, the main source used for creating MP3 files. A great variety of bit rates are used on the Internet. A bit rate of 128 kbit/s is commonly used, at a compression ratio of 11:1, offering adequate audio quality in a relatively small space. As Internet bandwidth availability and hard drive sizes have increased, higher bit rates up to 320 kbit/s are widespread.
Uncompressed audio as stored on an audio CD has a bit rate of 1,411.2 kbit/s (16 bit/sample × 44100 samples/second × 2 channels / 1000 bits/kilobit), so the bitrates 128, 160 and 192 kbit/s represent compression ratios of approximately 11:1, 9:1 and 7:1 respectively. Non-standard bit rates up to 640 kbit/s can be achieved with the LAME encoder and the freeformat option, although few MP3 players can play those files. According to the ISO standard, decoders are only required to be able to decode streams up to 320 kbit/s. Early MPEG Layer III encoders used what is now called Constant Bit Rate (CBR): the software was only able to use a uniform bitrate on all frames in an MP3 file. Later, more sophisticated MP3 encoders were able to use the bit reservoir to target an average bit rate, selecting the encoding rate for each frame based on the complexity of the sound in that portion of the recording. A more sophisticated MP3 encoder can produce variable bitrate audio. MPEG audio may use bitrate switching on a per-frame basis, but only Layer III decoders must support it. VBR is used when the goal is to achieve a fixed level of quality. The final file size of a VBR encoding is less predictable than with constant bitrate. Average bitrate is a type of VBR implemented as a compromise between the two: the bitrate is allowed to vary for more consistent quality, but is controlled to remain near an average value chosen by the user, for predictable file sizes. Although an MP3 decoder must support VBR to be standards compliant, historically some decoders have bugs with VBR decoding, particularly before VBR encoders became widespread. The most evolved LAME MP3 encoder supports the generation of VBR, ABR, and even the older CBR MP3 formats. Layer III audio can also use a "bit reservoir", a partially full frame's ability to hold part of the next frame's audio data, allowing temporary changes in effective bitrate, even in a constant bitrate stream.
Internal handling of the bit reservoir increases encoding delay. There is no scale factor band 21 (sfb21) for frequencies above approximately 16 kHz, forcing the encoder to choose between less accurate representation in band 21 or less efficient storage in all bands below band 21, the latter resulting in wasted bitrate in VBR encoding. The ancillary data field can be used to store user-defined data. The ancillary data is optional and the number of bits available is not explicitly given. The ancillary data is located after the Huffman code bits and extends to where the next frame's main_data_begin points. The encoder mp3PRO used ancillary data to encode extra information which could improve audio quality when decoded with its own algorithm. A "tag" in an audio file is a section of the file that contains metadata such as the title, artist, album, track number or other information about the file's contents. The MP3 standards do not define tag formats for MP3 files, nor is there a standard container format that would support metadata and obviate the need for tags. However, several "de facto" standards for tag formats exist. As of 2010, the most widespread are ID3v1 and ID3v2, and the more recently introduced APEv2. These tags are normally embedded at the beginning or end of MP3 files, separate from the actual MP3 frame data. MP3 decoders either extract information from the tags, or just treat them as ignorable, non-MP3 junk data. Playing and editing software often contains tag-editing functionality, but there are also tag editor applications dedicated to the purpose. Aside from metadata pertaining to the audio content, tags may also be used for DRM. ReplayGain is a standard for measuring and storing the loudness of an MP3 file (audio normalization) in its metadata tag, enabling a ReplayGain-compliant player to automatically adjust the overall playback volume for each file.
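As an illustration of how tags sit outside the frame data, the following is a minimal sketch of reading an ID3v1 tag, the fixed 128-byte block appended at the very end of a file. The field offsets follow the de facto ID3v1 layout; the function name and sample bytes are illustrative:

```python
# Minimal sketch: read an ID3v1 tag, a fixed 128-byte block at the end of the
# file beginning with the ASCII marker "TAG". Field offsets follow the de
# facto ID3v1 layout; real files may instead (or also) carry ID3v2 or APEv2.

def read_id3v1(data: bytes):
    tag = data[-128:]
    if len(tag) < 128 or not tag.startswith(b"TAG"):
        return None                              # no ID3v1 tag present
    text = lambda b: b.split(b"\x00")[0].decode("latin-1").strip()
    return {
        "title":  text(tag[3:33]),               # 30-byte fields, NUL-padded
        "artist": text(tag[33:63]),
        "album":  text(tag[63:93]),
        "year":   text(tag[93:97]),
        "genre":  tag[127],                      # index into the ID3v1 genre list
    }

# Example: fake frame data followed by a padded tag block
fake = (b"\x00" * 100 + b"TAG"
        + b"Tom's Diner".ljust(30, b"\x00")
        + b"Suzanne Vega".ljust(30, b"\x00")
        + b"\x00" * 64 + bytes([52]))
info = read_id3v1(fake)
```

Because the tag lies entirely outside the elementary stream, a decoder that ignores it still plays the audio correctly, which is why the article describes tags as "ignorable, non-MP3 junk data" from the decoder's point of view.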
MP3Gain may be used to reversibly modify files based on ReplayGain measurements so that adjusted playback can be achieved on players without ReplayGain capability. The basic MP3 decoding and encoding technology is patent-free in the European Union, all patents having expired there by 2012 at the latest. In the United States, the technology became substantially patent-free on 16 April 2017 (see below). MP3 patents expired in the US between 2007 and 2017. In the past, many organizations claimed ownership of patents related to MP3 decoding or encoding. These claims led to a number of legal threats and actions from a variety of sources. As a result, uncertainty about which patents had to be licensed in order to create MP3 products without committing patent infringement in countries that allow software patents was a common feature of the early stages of adoption of the technology. The initial near-complete MPEG-1 standard (parts 1, 2 and 3) was publicly available on 6 December 1991 as ISO CD 11172. In most countries, patents cannot be filed after prior art has been made public, and patents expire 20 years after the initial filing date, which can be up to 12 months later for filings in other countries. As a result, patents required to implement MP3 expired in most countries by December 2012, 21 years after the publication of ISO CD 11172. An exception is the United States, where patents in force but filed prior to 8 June 1995 expire after the later of 17 years from the issue date or 20 years from the priority date. A lengthy patent prosecution process may result in a patent issuing much later than normally expected (see submarine patents). The various MP3-related patents expired on dates ranging from 2007 to 2017 in the United States. Patents for anything disclosed in ISO CD 11172 filed a year or more after its publication are questionable.
If only the known MP3 patents filed by December 1992 are considered, then MP3 decoding has been patent-free in the US since 22 September 2015, when , which had a PCT filing in October 1992, expired. If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology became patent-free in the United States on 16 April 2017, when , held and administered by Technicolor, expired. As a result, many free and open-source software projects, such as the Fedora operating system, have decided to start shipping MP3 support by default, and users will no longer have to resort to installing unofficial packages maintained by third party software repositories for MP3 playback or encoding. Technicolor (formerly called Thomson Consumer Electronics) claimed to control MP3 licensing of the Layer 3 patents in many countries, including the United States, Japan, Canada and EU countries. Technicolor had been actively enforcing these patents. MP3 license revenues from Technicolor's administration generated about €100 million for the Fraunhofer Society in 2005. In September 1998, the Fraunhofer Institute sent a letter to several developers of MP3 software stating that a license was required to "distribute and/or sell decoders and/or encoders". The letter claimed that unlicensed products "infringe the patent rights of Fraunhofer and Thomson. To make, sell or distribute products using the [MPEG Layer-3] standard and thus our patents, you need to obtain a license under these patents from us." This led to the situation where the LAME MP3 encoder project could not offer its users official binaries that could run on their computer. The project's position was that as source code, LAME was simply a description of how an MP3 encoder "could" be implemented. Unofficially, compiled binaries were available from other sources. Sisvel S.p.A. and its United States subsidiary Audio MPEG, Inc. 
previously sued Thomson for patent infringement on MP3 technology, but those disputes were resolved in November 2005 with Sisvel granting Thomson a license to their patents. Motorola followed soon after, and signed with Sisvel to license MP3-related patents in December 2005. Except for three patents, the US patents administered by Sisvel had all expired in 2015. The three exceptions are: , expired February 2017; , expired February 2017; and , expired 9 April 2017. In September 2006, German officials seized MP3 players from SanDisk's booth at the IFA show in Berlin after an Italian patents firm won an injunction on behalf of Sisvel against SanDisk in a dispute over licensing rights. The injunction was later reversed by a Berlin judge, but that reversal was in turn blocked the same day by another judge from the same court, "bringing the Patent Wild West to Germany" in the words of one commentator. In February 2007, Texas MP3 Technologies sued Apple, Samsung Electronics and Sandisk in eastern Texas federal court, claiming infringement of a portable MP3 player patent that Texas MP3 said it had been assigned. Apple, Samsung, and Sandisk all settled the claims against them in January 2009. Alcatel-Lucent has asserted several MP3 coding and compression patents, allegedly inherited from AT&T-Bell Labs, in litigation of its own. In November 2006, before the companies' merger, Alcatel sued Microsoft for allegedly infringing seven patents. On 23 February 2007, a San Diego jury awarded Alcatel-Lucent US $1.52 billion in damages for infringement of two of them. The court subsequently revoked the award, however, finding that one patent had not been infringed and that the other was not owned by Alcatel-Lucent; it was co-owned by AT&T and Fraunhofer, who had licensed it to Microsoft, the judge ruled. That defense judgment was upheld on appeal in 2008. See Alcatel-Lucent v. Microsoft for more information. Other lossy formats exist. 
Among these, Advanced Audio Coding (AAC) is the most widely used, and was designed to be the successor to MP3. There also exist other lossy formats such as mp3PRO and MP2. They are members of the same technological family as MP3 and depend on roughly similar psychoacoustic models and MDCT algorithms. Whereas MP3 uses a hybrid coding approach that is part MDCT and part FFT, AAC is purely MDCT, significantly improving compression efficiency. Many of the basic patents underlying these formats are held by Fraunhofer Society, Alcatel-Lucent, Thomson Consumer Electronics, Bell, Dolby, LG Electronics, NEC, NTT Docomo, Panasonic, Sony Corporation, ETRI, JVC Kenwood, Philips, Microsoft, and NTT. There are also open compression formats like Opus and Vorbis that are available free of charge and without any known patent restrictions. Some of the newer audio compression formats, such as AAC, WMA Pro and Vorbis, are free of some limitations inherent to the MP3 format that cannot be overcome by any MP3 encoder. Besides lossy compression methods, lossless formats are a significant alternative to MP3 because they provide unaltered audio content, though with an increased file size compared to lossy compression. Lossless formats include FLAC (Free Lossless Audio Codec), Apple Lossless and many others.
https://en.wikipedia.org/wiki?curid=19673
Mary Rose The Mary Rose is a carrack-type warship of the English Tudor navy of King Henry VIII. She served for 33 years in several wars against France, Scotland, and Brittany. After being substantially rebuilt in 1536, she saw her last action in 1545, when she led the attack on the galleys of a French invasion fleet but sank in the Solent, the straits north of the Isle of Wight. The wreck of the "Mary Rose" was discovered in 1971 and was raised on 11 October 1982 by the Mary Rose Trust in one of the most complex and expensive maritime salvage projects in history. The surviving section of the ship and thousands of recovered artefacts are of great value as a Tudor-era time capsule. The excavation and raising of the "Mary Rose" was a milestone in the field of maritime archaeology, comparable in complexity and cost to the raising of the 17th-century Swedish warship "Vasa" in 1961. The finds include weapons, sailing equipment, naval supplies, and a wide array of objects used by the crew. Many of the artefacts are unique to the "Mary Rose" and have provided insights into topics ranging from naval warfare to the history of musical instruments. The remains of the hull have been on display at the Portsmouth Historic Dockyard since the mid-1980s while undergoing restoration. An extensive collection of well-preserved artefacts is on display at the Mary Rose Museum, built to display the remains of the ship and its artefacts. The "Mary Rose" was one of the largest ships in the English navy through more than three decades of intermittent war, and she was one of the earliest examples of a purpose-built sailing warship. She was armed with new types of heavy guns that could fire through the recently invented gun-ports. She was substantially rebuilt in 1536 and was also one of the earliest ships that could fire a broadside, although the line of battle tactics had not yet been developed.
Several theories have sought to explain the demise of the "Mary Rose", based on historical records, knowledge of 16th-century shipbuilding, and modern experiments. The precise cause of her sinking is still unclear because of conflicting testimonies and a lack of conclusive physical evidence. In the late 15th century, England was still reeling from its dynastic wars, first with France and then among its ruling families at home. The great victories against France in the Hundred Years' War were in the past; only the small enclave of Calais in northern France remained of the vast continental holdings of the English kings. The Wars of the Roses—the civil war between the houses of York and Lancaster—had ended with Henry VII's establishment of the House of Tudor, the new ruling dynasty of England. The ambitious naval policies of Henry V were not continued by his successors, and from 1422 to 1509 only six ships were built for the crown. The marriage alliance of Anne of Brittany with Charles VIII of France in 1491, and with his successor Louis XII in 1499, left England with a weakened strategic position on its southern flank. Despite this, Henry VII managed to maintain a comparatively long period of peace and a small but powerful core of a navy. At the onset of the early modern period, the great European powers were France, the Holy Roman Empire and Spain. All three became involved in the War of the League of Cambrai in 1508. The conflict was initially aimed at the Republic of Venice but eventually turned against France. Through the Spanish possessions in the Low Countries, England had close economic ties with the Spanish Habsburgs, and it was the young Henry VIII's ambition to repeat the glorious martial endeavours of his predecessors. In 1509, six weeks into his reign, Henry married the Spanish princess Catherine of Aragon and joined the League, intent on asserting his historical claim as king of both England and France.
By 1511 Henry was part of an anti-French alliance that included Ferdinand II of Aragon, Pope Julius II and Holy Roman emperor Maximilian. The small navy that Henry VIII inherited from his father had only two sizeable ships, the carracks "Regent" and "Sovereign". Just months after his accession, two large ships were ordered: the "Mary Rose" and the "Peter Pomegranate" (later known as "Peter" after being rebuilt in 1536) of about 500 and 450 tons respectively. Which king ordered the building of the "Mary Rose" is unclear; although construction began during Henry VIII's reign, the plans for naval expansion could have been in the making earlier. Henry VIII oversaw the project and he ordered additional large ships to be built, most notably the "Henry Grace à Dieu" ("Henry by the Grace of God"), or "Great Harry" at more than 1000 tons burthen. By the 1520s the English state had established a "de facto" permanent "Navy Royal", the organizational ancestor of the modern Royal Navy. The construction of the "Mary Rose" began in 1510 in Portsmouth and she was launched in July 1511. She was then towed to London and fitted with rigging and decking, and supplied with armaments. Other than the structural details needed to sail, stock and arm the "Mary Rose", she was also equipped with flags, banners and streamers (extremely elongated flags that were flown from the top of the masts) that were either painted or gilded. Constructing a warship of the size of the "Mary Rose" was a major undertaking, requiring vast quantities of high-quality material. In the case of building a state-of-the-art warship, these materials were primarily oak. The total amount of timber needed for the construction can only be roughly calculated since only about one third of the ship still exists. One estimate for the number of trees is around 600 mostly large oaks, representing about 16 hectares (40 acres) of woodland. 
The huge trees that had been common in Europe and the British Isles in previous centuries were by the 16th century quite rare, which meant that timbers were brought in from all over southern England. The largest timbers used in the construction were of roughly the same size as those used in the roofs of the largest cathedrals in the high Middle Ages. An unworked hull plank would have weighed over 300 kg (660 lb), and one of the main deck beams would have weighed close to three-quarters of a tonne. The common explanation for the ship's name was that it was inspired by Henry VIII's favourite sister, Mary Tudor, and the rose as the emblem of the Tudors. According to historians David Childs, David Loades and Peter Marsden, no direct evidence exists of the ship being named after the King's sister. It was far more common at the time to give ships pious Christian names, a long-standing tradition in Western Europe, or to associate them with their royal patrons. Names like "Grace Dieu" (Thank God) and "Holighost" (Holy Spirit) had been common since the 15th century, and other Tudor navy ships had names like the "Regent" and "Three Ostrich Feathers" (referring to the crest of the Prince of Wales). The Virgin Mary, who was also associated with the Rosa Mystica (mystic rose), is a more likely candidate for a namesake. The sister ship of the "Mary Rose", the "Peter Pomegranate", is believed to have been named in honour of Saint Peter and the pomegranate, the badge of Queen Catherine of Aragon. According to Childs, Loades and Marsden, the two ships, which were built around the same time, were named in honour of the king and queen, respectively. The "Mary Rose" was substantially rebuilt in 1536. The 1536 rebuilding turned a ship of 500 tons into one of 700 tons, and added an entire extra tier of broadside guns to the old carrack-style structure. 
As a consequence, modern research is based mostly on interpretations of the concrete physical evidence of this version of the "Mary Rose"; less is known about the construction of the original 1509 design. The "Mary Rose" was built in the carrack style, with high "castles" in the bow and stern and a low waist of open decking in the middle. The hull has a so-called tumblehome form, reflecting the use of the ship as a platform for heavy guns: above the waterline, it gradually narrows to compensate for the weight of the guns and to make boarding more difficult. Since only part of the hull has survived, it is not possible to determine many of the basic dimensions with any great accuracy. The moulded breadth, the widest point of the ship roughly above the waterline, was about 12 metres (39 ft) and the keel about 32 metres (105 ft), although the ship's overall length is uncertain. The hull had four levels separated by three decks. The terminology for these in the 16th century was still not standardised, so the terms used here are those applied by the Mary Rose Trust. The "hold" lay furthest down in the ship, right above the bottom planking below the waterline. This is where the kitchen, or galley, was situated and the food was cooked. Directly aft of the galley was the mast step, a rebate in the centre-most timber of the keelson, right above the keel, which supported the main mast, and next to it the main bilge pump. To increase the stability of the ship, the ballast was placed in the hold, where much of the supplies were also kept. Right above the hold was the "orlop", the lowest deck. Like the hold, it was partitioned and was also used as a storage area for everything from food to spare sails. Above the orlop lay the "main deck", which housed the heaviest guns. The side of the hull on the main deck level had seven gunports on each side, fitted with heavy lids that would have been watertight when closed. 
This was also the highest deck that was caulked and waterproof. Along the sides of the main deck there were cabins under the forecastle and sterncastle, which have been identified as belonging to the carpenter, barber-surgeon, pilot and possibly also the master gunner and some of the officers. The top deck in the hull structure was the "upper deck" (or weather deck), which was exposed to the elements in the waist. It was a dedicated fighting deck without any known partitions and with a mix of heavy and light guns. Over the open waist the upper deck was entirely covered with a boarding net, coarse netting that served as a defence against boarding. Though very little of the upper deck has survived, it has been suggested that it housed the main living quarters of the crew underneath the sterncastle. A drain located in this area has been identified as a possible "piss-dale", a general urinal to complement the regular toilets that would probably have been located in the bow. The castles of the "Mary Rose" had additional decks, but since virtually nothing of them survives, their design has had to be reconstructed from historical records. Contemporary ships of equal size were consistently listed as having three decks in both castles. Although speculative, this layout is supported by the illustration in the Anthony Roll and the gun inventories. During the early stages of excavation of the wreck, it was believed that the ship had originally been built with clinker (or clench) planking, a technique in which the hull consisted of overlapping planks that bore the structural strength of the ship. Cutting gunports into a clinker-built hull would have meant weakening the ship's structural integrity, and it was assumed that she was later rebuilt to accommodate a hull with carvel edge-to-edge planking and a skeletal structure to support a hull perforated with gunports. 
Later examination indicates that the clinker planking is not present throughout the ship; only the outer structure of the sterncastle is built with overlapping planking, though not with a true clinker technique. Although only the lower fittings of the rigging survive, a 1514 inventory and the only known contemporary depiction of the ship from the Anthony Roll have been used to determine how the propulsion system of the "Mary Rose" was designed. Nine, or possibly ten, sails were flown from four masts and a bowsprit: the foremast and mainmast had two and three square sails respectively; the mizzen mast had a lateen sail and a small square sail; the bonaventure mizzen had at least one lateen sail, and possibly also a square sail; and the bowsprit flew a small square spritsail. According to the Anthony Roll illustration (see top of this section), the yards (the spars from which the sails were set) on the foremast and mainmast were also equipped with sheerhooks, twin curved blades sharpened on the inside that were intended to cut an enemy ship's rigging during boarding actions. The sailing capabilities of the "Mary Rose" were commented on by her contemporaries and were once even put to the test. In March 1513 a contest was arranged in The Downs, off the Kent coast, in which she raced against nine other ships. She won the contest, and Admiral Edward Howard described her enthusiastically as "the noblest ship of sayle [of any] gret ship, at this howr, that I trow [believe] be in Cristendom". Several years later, while sailing between Dover and The Downs, Vice-Admiral William Fitzwilliam noted that both the "Henry Grace à Dieu" and the "Mary Rose" performed very well, riding steadily in rough seas, and that it would have been a "hard chose" between the two. 
Modern experts have been more sceptical about her sailing qualities, believing that ships of this time were almost incapable of sailing close against the wind, and describing the handling of the "Mary Rose" as being like "a wet haystack". The "Mary Rose" represented a transitional ship design in naval warfare. Since ancient times, war at sea had been fought much like that on land: with melee weapons and bows and arrows, but on floating wooden platforms rather than battlefields. Though the introduction of guns was a significant change, it only slowly changed the dynamics of ship-to-ship combat. As guns became heavier and able to take more powerful gunpowder charges, they needed to be placed lower in the ship, closer to the water line. Gunports cut in the hull of ships had been introduced as early as 1501, only about a decade before the "Mary Rose" was built. This made broadsides, coordinated volleys from all the guns on one side of a ship, possible for the first time in history, at least in theory. Naval tactics throughout the 16th century and well into the 17th century focused on countering the oar-powered galleys that were armed with heavy guns in the bow, facing forwards, which were aimed by turning the entire ship against its target. Combined with inefficient gunpowder and the difficulties inherent in firing accurately from moving platforms, this meant that boarding remained the primary tactic for decisive victory throughout the 16th century. As the "Mary Rose" was built and served during a period of rapid development of heavy artillery, her armament was a mix of old designs and innovations. The heavy armament was a mix of older-type wrought iron and cast bronze guns, which differed considerably in size, range and design. 
The large iron guns were made up of staves or bars welded into cylinders and then reinforced with shrunk-on iron hoops; they were breech-loaded and equipped with simpler gun-carriages made from hollowed-out elm logs with only one pair of wheels, or without wheels entirely. The bronze guns were cast in one piece and rested on four-wheel carriages which were essentially the same as those used until the 19th century. The breech-loaders were cheaper to produce and both easier and faster to reload, but could take less powerful charges than cast bronze guns. Generally, the bronze guns used cast iron shot and were more suited to penetrating hull sides, while the iron guns used stone shot that would shatter on impact and leave large, jagged holes, but both could also fire a variety of ammunition intended to destroy rigging and light structure or injure enemy personnel. The majority of the guns were small iron guns with short range that could be aimed and fired by a single person. The two most common types were the "bases", breech-loading swivel guns most likely placed in the castles, and the "hailshot pieces", small muzzle-loaders with rectangular bores and fin-like protrusions that were used to support the guns against the railing and allow the ship structure to take the force of the recoil. Though the design is unknown, there were two "top pieces" in a 1546 inventory (finished after the sinking), which were probably similar to a base but placed in one or more of the fighting tops. The ship went through several changes in her armament throughout her career, most significantly accompanying her "rebuilding" in 1536 (see below), when the number of anti-personnel guns was reduced and a second tier of carriage-mounted long guns fitted. There are three inventories that list her guns, dating to 1514, 1540 and 1546. Together with records from the armoury at the Tower of London, these show how the configuration of guns changed as gun-making technology evolved and new classifications were invented. 
In 1514, the armament consisted mostly of anti-personnel guns like the larger breech-loading iron "murderers" and the small "serpentines", "demi-slings" and stone guns. Only a handful of guns in the first inventory were powerful enough to hole enemy ships, and most would have been supported by the ship's structure rather than resting on carriages. The inventories of both the "Mary Rose" and the Tower had changed radically by 1540. There were now the new cast bronze "cannons", "demi-cannons", "culverins" and "sakers" and the wrought iron "port pieces" (a name that indicated they fired through ports), all of which required carriages, had longer range and were capable of doing serious damage to other ships. Analysis of the 1514 inventory, combined with hints of structural changes in the ship, indicates that the gunports on the main deck were indeed a later addition. Various types of ammunition could be used for different purposes: plain spherical shot of stone or iron smashed hulls, spiked bar shot and shot linked with chains would tear sails or damage rigging, and canister shot packed with sharp flints produced a devastating shotgun effect. Trials made with replicas of culverins and port pieces showed that they could penetrate wood of the same thickness as the "Mary Rose's" hull planking, indicating a stand-off range of at least 90 m (295 ft). The port pieces proved particularly efficient at smashing large holes in wood when firing stone shot and were a devastating anti-personnel weapon when loaded with flakes or pebbles. To defend against being boarded, the "Mary Rose" carried large stocks of melee weapons, including pikes and bills; 150 of each kind were stocked on the ship according to the Anthony Roll, a figure confirmed roughly by the excavations. Swords and daggers were personal possessions and not listed in the inventories, but the remains of both have been found in great quantities, including the earliest dated example of a British basket-hilted sword. 
A total of 250 longbows were carried on board, and 172 of these have so far been found, as well as almost 4,000 arrows, bracers (arm guards) and other archery-related equipment. Longbow archery in Tudor England was mandatory for all able adult men, and despite the introduction of field artillery and handguns, longbows were used alongside the new missile weapons in great quantities. On the "Mary Rose", the longbows could only have been drawn and shot properly from behind protective panels in the open waist or from the top of the castles, as the lower decks lacked sufficient headroom. There were several types of bows of various sizes and ranges. Lighter bows would have been used as "sniper" bows, while the heavier design could possibly have been used to shoot fire arrows. The inventories of both 1514 and 1546 also list several hundred heavy darts and lime pots that were designed to be thrown onto the deck of enemy ships from the fighting tops, although no physical evidence of either of these weapon types has been identified. Of the 50 handguns listed in the Anthony Roll, the complete stocks of five matchlock muskets and fragments of another eleven have been found. They had been manufactured mainly in Italy, with some originating from Germany. Found in storage were several "gunshields", a rare type of firearm consisting of a wooden shield with a small gun fixed in the middle. Throughout her 33-year career, the crew of the "Mary Rose" changed several times and varied considerably in size. She would have had a minimal skeleton crew of 17 men or fewer in peacetime and when she was "laid up in ordinary" (in reserve). The average wartime manning would have been about 185 soldiers, 200 sailors, 20–30 gunners and an assortment of other specialists such as surgeons, trumpeters and members of the admiral's staff, for a total of 400–450 men. 
When taking part in land invasions or raids, such as in the summer of 1512, the number of soldiers could have swelled to just over 400 for a combined total of more than 700. Even with the normal crew size of around 400, the ship was quite crowded, and with additional soldiers she would have been extremely cramped. Little is known of the identities of the men who served on the "Mary Rose", even when it comes to the names of the officers, who would have belonged to the gentry. Two admirals and four captains (including Edward and Thomas Howard, who served in both positions) are known through records, as well as a few ship masters, pursers, master gunners and other specialists. Artists have used forensic science to create reconstructions of the faces of eight crew members; the results were publicised in May 2013. In addition, researchers have extracted DNA from remains in the hope of identifying the origins of the crew, and potentially living descendants. Of the vast majority of the crewmen, soldiers, sailors and gunners alike, nothing has been recorded. The only source of information about these men has been osteological analysis of the human bones found at the wrecksite. An approximate composition of some of the crew has been conjectured based on contemporary records. The "Mary Rose" would have carried a captain, a master responsible for sailing the ship, and deck crew. There would also have been a purser responsible for handling payments, a boatswain (the captain's second in command), at least one carpenter, a pilot in charge of navigation, and a cook, all of whom had one or more assistants (mates). The ship was also staffed by a barber-surgeon who tended to the sick and wounded, along with an apprentice or mate and possibly also a junior surgeon. The only positively identified person who went down with the ship was Vice-Admiral George Carew. 
McKee, Stirland and several other authors have also named Roger Grenville, father of Richard Grenville of the Elizabethan-era "Revenge", as captain during the final battle, although the accuracy of the sourcing for this has been disputed by maritime archaeologist Peter Marsden. The bones of a total of 179 people were found during the excavations of the "Mary Rose", including 92 "fairly complete skeletons", more or less complete collections of bones associated with specific individuals. Analysis of these has shown that the crew members were all male, most of them young adults. Some were no more than 11–13 years old, and the majority (81%) were under 30. They were mainly of English origin and, according to archaeologist Julie Gardiner, most likely came from the West Country, many following their aristocratic masters into maritime service. There were also a few people from continental Europe. An eyewitness testimony from right after the sinking refers to a survivor who was a Fleming, and the pilot may very well have been French. Analysis of oxygen isotopes in teeth indicates that some were also of southern European origin. In general they were strong, well-fed men, but many of the bones also reveal tell-tale signs of childhood diseases and a life of grinding toil. The bones also showed traces of numerous healed fractures, probably the result of on-board accidents. There are no extant written records of the make-up of the broader categories of soldiers and sailors, but since the "Mary Rose" carried some 300 longbows and several thousand arrows, there had to be a considerable proportion of longbow archers. Examination of the skeletal remains has found that there was a disproportionate number of men with a condition known as "os acromiale", affecting their shoulder blades. 
This condition is known among modern elite archery athletes and is caused by placing considerable stress on the arm and shoulder muscles, particularly of the left arm, which is used to hold the bow and brace against the pull on the bowstring. Among the men who died on the ship it is likely that some had practised using the longbow since childhood, and served on board as specialist archers. A group of six skeletons was found close to one of the 2-tonne bronze culverins on the main deck near the bow. Fusing of parts of the spine and ossification (the growth of new bone) on several vertebrae showed that all but one of these crewmen had been strong, well-muscled men engaged in heavy pulling and pushing; the exception was possibly a "powder monkey" not involved in heavy work. These have been tentatively classified as members of a complete gun crew, all having died at their battle station. The "Mary Rose" first saw battle in 1512, in a joint naval operation with the Spanish against the French. The English were to meet the French and Breton fleets in the English Channel while the Spanish attacked them in the Bay of Biscay, and then to attack Gascony. The 35-year-old Sir Edward Howard was appointed Lord High Admiral in April and chose the "Mary Rose" as his flagship. His first mission was to clear the seas of French naval forces between England and the northern coast of Spain to allow for the landing of supporting troops near the French border at Fuenterrabia. The fleet consisted of 18 ships, among them the large ships the "Regent" and the "Peter Pomegranate", carrying over 5,000 men. Howard's expedition led to the capture of twelve Breton ships and a four-day raiding tour of Brittany where English forces successfully fought against local forces and burned numerous settlements. The fleet returned to Southampton in June where it was visited by King Henry. 
In August the fleet sailed for Brest, where it encountered a joint, but ill-coordinated, French-Breton fleet at the battle of St. Mathieu. The English, with one of the great ships in the lead (according to Marsden, the "Mary Rose"), battered the French ships with heavy gunfire and forced them to retreat. The Breton flagship "Cordelière" put up a fight and was boarded by the 1,000-ton "Regent". By accident or through the unwillingness of the Breton crew to surrender, the powder magazine of the "Cordelière" caught fire and blew up in a violent explosion, setting fire to the "Regent" and eventually sinking her. About 180 English crew members saved themselves by leaping into the sea, and only a handful of Bretons survived, only to be captured. The captain of the "Regent", 600 soldiers and sailors, the High Admiral of France and the steward of the town of Morlaix were killed in the incident, making it the focal point of several contemporary chronicles and reports. The English went on to burn 27 French ships, capture another five and land forces near Brest to raid and take prisoners, but storms forced the fleet back to Dartmouth in Devon and then to Southampton for repairs. In early 1513, the "Mary Rose" was once more chosen by Howard as the flagship for an expedition against the French. Before seeing action, she took part in a race against other ships in which she was deemed to be one of the most nimble and the fastest of the great ships in the fleet (see details under "Sails and rigging"). Howard's force arrived off Brest only to see a small enemy force join with the larger force in the safety of Brest harbour and its fortifications. The French had recently been reinforced by a force of galleys from the Mediterranean, which sank one English ship and seriously damaged another. Howard landed forces near Brest, but made no headway against the town and was by now getting low on supplies. 
Attempting to force a victory, he led a small force of oared vessels in a daring frontal attack on the French galleys. Howard himself managed to reach the ship of the French admiral, Prégent de Bidoux, and led a small party to board it. The French fought back fiercely and cut the cables that attached the two ships, separating Howard from his men. This left him at the mercy of the soldiers aboard the galley, who instantly killed him. Demoralised by the loss of its admiral and seriously short of food, the fleet returned to Plymouth. Thomas Howard, elder brother of Edward, was appointed the new Lord Admiral and was set the task of arranging another attack on Brittany. The fleet was not able to mount the planned attack because of adverse winds and great difficulties in supplying the ships adequately, and the "Mary Rose" took up winter quarters in Southampton. In August the Scots joined France in the war against England, but were dealt a crushing defeat at the Battle of Flodden later that year. A follow-up attack in early 1514 was supported by a naval force that included the "Mary Rose", but without any known engagements. The French and English mounted raids on each other throughout that summer, but achieved little, and both sides were by then exhausted. By autumn the war was over and a peace treaty was sealed by the marriage of Henry's sister, Mary, to the French king Louis XII. After the peace the "Mary Rose" was placed in the reserves, "in ordinary". She was laid up for maintenance along with her sister ship the "Peter Pomegranate" in July 1514. In 1518 she received a routine repair and caulking, waterproofing with tar and oakum (old rope fibres), and was then assigned a small skeleton crew who lived on board the ship until 1522. She served briefly on a mission with other warships to "scour the seas" in preparation for Henry VIII's journey across the Channel to the summit with the French king Francis I at the Field of the Cloth of Gold in June 1520. 
In 1522, England was once again at war with France because of a treaty with the Holy Roman Emperor Charles V. The plan was for an attack on two fronts, with an English thrust in northern France. The "Mary Rose" participated in escorting the transport of troops in June 1522, and the Breton port of Morlaix was captured. The fleet sailed home and the "Mary Rose" berthed for the winter in Dartmouth. The war raged on until 1525 and saw the Scots join the French side. Though Charles Brandon came close to capturing Paris in 1523, there was little gained either against France or Scotland throughout the war. With the defeat of the French army and the capture of Francis I by Charles V's forces at the Battle of Pavia in 1525, the war was effectively over without any major gains or major victories for the English side. The "Mary Rose" was kept in reserve from 1522 to 1545. She was once more caulked and repaired in 1527 in a newly dug dock at Portsmouth, and her longboat was repaired and trimmed. Little documentation about the "Mary Rose" between 1528 and 1539 exists. A document written by Thomas Cromwell in 1536 specifies that the "Mary Rose" and six other ships were "made new" during his service under the king, though it is unclear which years he was referring to and what "made new" actually meant. A later document from January 1536 by an anonymous author states that the "Mary Rose" and other ships were "new made", and dating of timbers from the ship confirms some type of repair being done in 1535 or 1536. This would have coincided with the controversial dissolution of the monasteries, which resulted in a major influx of funds into the royal treasury. The nature and extent of this repair is unknown. 
Many experts, including Margaret Rule, the project leader for the raising of the "Mary Rose", have assumed that it meant a complete rebuilding from clinker planking to carvel planking, and that it was only after 1536 that the ship took on the form that it had when it sank and that was eventually recovered in the 20th century. Marsden has speculated that it could even mean that the "Mary Rose" was originally built in a style that was closer to 15th-century ships, with a rounded, rather than square, stern and without the main deck gunports. Henry's complicated marital situation and his high-handed dissolution of the monasteries angered the Pope and Catholic rulers throughout Europe, which increased England's diplomatic isolation. In 1544 Henry had agreed to attack France together with Emperor Charles V, and English forces captured Boulogne at great cost in September, but England was soon left in the lurch after Charles had achieved his objectives and brokered a separate peace. In May 1545, the French assembled a large fleet in the estuary of the Seine with the intent of landing troops on English soil. Estimates of the size of the fleet varied considerably: between 123 and 300 vessels according to French sources, and up to 226 sailing ships and galleys according to the chronicler Edward Hall. In addition to the massive fleet, 50,000 troops were assembled at Havre de Grâce (modern-day Le Havre). An English force of 160 ships and 12,000 troops under Viscount Lisle was ready at Portsmouth by early June, before the French were ready to set sail, and an ineffective pre-emptive strike was made in the middle of the month. In early July the huge French force under the command of Admiral Claude d'Annebault set sail for England and entered the Solent unopposed with 128 ships. The English had around 80 ships with which to oppose the French, including the flagship "Mary Rose". 
But since they had virtually no heavy galleys, the vessels that were at their best in sheltered waters like the Solent, the English fleet promptly retreated into Portsmouth harbour. The English were becalmed in port and unable to manoeuvre. The French galleys advanced on the immobilised English fleet and initially threatened to destroy a force of 13 small galleys, or "rowbarges", the only ships that were able to move against them without a wind. The wind then picked up, and the sailing ships were able to go on the offensive before the oared vessels were overwhelmed. Two of the largest ships, the "Henry Grace à Dieu" and the "Mary Rose", led the attack on the French galleys in the Solent. Early in the battle something went wrong. While engaging the French galleys, the "Mary Rose" suddenly heeled (leaned) heavily over to her starboard (right) side, and water rushed in through the open gunports. The crew was powerless to correct the sudden imbalance and could only scramble for the safety of the upper deck as the ship began to sink rapidly. As she leaned over, equipment, ammunition, supplies and storage containers shifted and came loose, adding to the general chaos. The massive port-side brick oven in the galley collapsed completely and the huge 360-litre (90 gallon) copper cauldron was thrown onto the orlop deck above. Heavy guns came free and slammed into the opposite side, impeding escape or crushing men beneath them. For those who were not injured or killed outright by moving objects, there was little time to reach safety, especially for the men who were manning the guns on the main deck or fetching ammunition and supplies in the hold. The companionways that connected the decks with one another would have become bottlenecks for fleeing men, something indicated by the positioning of many of the skeletons recovered from the wreck. 
What turned the sinking into a major tragedy was the anti-boarding netting that covered the upper decks in the waist (the midsection of the ship) and the sterncastle. With the exception of the men who were stationed in the tops in the masts, most of those who managed to get up from below deck were trapped under the netting; they would have been in view of the surface, and of their colleagues above, but with little or no chance to break through, and were dragged down with the ship. Out of a crew of at least 400, fewer than 35 escaped, a casualty rate of over 90%. Many accounts describing the sinking have been preserved, but the only confirmed eyewitness account is the testimony of a surviving Flemish crewman, written down by the Holy Roman Emperor's ambassador François van der Delft in a letter. According to the unnamed Fleming, the ship had fired all of its guns on one side and was turning to present the guns on the other side to the enemy ship when she was caught in a strong gust of wind, heeled and took in water through the open gunports. In a letter to William Paget, former Lord High Admiral John Russell claimed that the ship had been lost because of "rechenes and great negligence". Three years after the sinking, "Hall's Chronicle" gave the reason for the sinking as being caused by "to[o] much foly ... for she was laden with much ordinaunce, and the portes left open, which were low, & the great ordinaunce unbreached, so that when the ship should turne, the water entered, and sodainly she sanke." Later accounts repeat the explanation that the ship heeled over while going about and was brought down because of the open gunports. A biography of Peter Carew, brother of George Carew, written by John Hooker sometime after 1575, gives the same reason for the sinking, but adds that insubordination among the crew was to blame. 
The biography claims that George Carew noted that the "Mary Rose" showed signs of instability as soon as her sails were raised. George's uncle Gawen Carew had passed by with his own ship, the "Matthew Gonson", during the battle to inquire about the situation of his nephew's ship. In reply he was told "that he had a sorte of knaves whom he could not rule". Contrary to all other accounts, Martin du Bellay, a French cavalry officer who was present at the battle, stated that the "Mary Rose" had been sunk by French guns. The most common explanation for the sinking among modern historians is that the ship was unstable for a number of reasons, and that when a strong gust of wind hit the sails at a critical moment, the open gunports proved fatal: the ship flooded and quickly foundered. Coates offered a variant of this hypothesis, which explains why a ship that had served for several decades without sinking, and had even fought in actions in the rough seas off Brittany, unexpectedly foundered: the ship had accumulated additional weight over her years in service and had finally become unseaworthy. That the ship was turning after firing all the cannons on one side has been questioned by Marsden after examination of guns recovered in both the 19th and 20th centuries; guns from both sides were found still loaded. This has been interpreted to mean that something else could have gone wrong, since it is assumed that an experienced crew would not have failed to secure the gunports before making a potentially risky turn. The most recent surveys of the ship indicate that she was modified late in her career and have lent support to the idea that the "Mary Rose" was altered too much to be properly seaworthy. Marsden has suggested that the weight of additional heavy guns would have increased her draught so much that the waterline was less than one metre (c. 3 feet) from the gunports on the main deck. 
Peter Carew's claim of insubordination has been given support by James Watt, former Medical Director-General of the Royal Navy, based on records of an epidemic of dysentery in Portsmouth which could have rendered the crew incapable of handling the ship properly, while historian Richard Barker has suggested that the crew actually knew that the ship was an accident waiting to happen, and balked, refusing to follow orders. Marsden has noted that the Carew biography is in some details inconsistent with the sequence of events reported by both French and English eyewitnesses. It also reports that there were 700 men on board, an unusually high number. The distance in time to the event it describes may mean that it was embellished to add a dramatic touch. The report of French galleys sinking the "Mary Rose" as stated by Martin du Bellay has been described as "the account of a courtesan" by naval historian Maurice de Brossard. Du Bellay and his two brothers were close to king Francis I, and du Bellay had much to gain from portraying the sinking as a French victory. English sources, even if biased, would have nothing to gain from portraying the sinking as the result of crew incompetence rather than conceding a victory to the much-feared gun galleys. Dominic Fontana, a geographer at the University of Portsmouth, has voiced support for du Bellay's version of the sinking based on the battle as it is depicted in the Cowdray Engraving, and on modern GIS analysis of the scene of the battle. By plotting the fleets and calculating the conjectured final manoeuvres of the "Mary Rose", Fontana reached the conclusion that the ship had been hit low in the hull by the galleys and was destabilised after taking in water. He has interpreted the final heading of the ship straight due north as a failed attempt to reach the shallows at Spitbank only a few hundred metres away. 
This theory has been given partial support by Alexzandra Hildred, one of the experts who has worked with the "Mary Rose", though she has suggested that the close proximity to Spitbank could also indicate that the sinking occurred while trying to make a hard turn to avoid running aground. In 2000, the Channel 4 television programme "What Sank the Mary Rose?" attempted to investigate the suggested causes of her sinking by means of experiments with scale models of the ship and metal weights to simulate the presence of troops on the upper decks. Initial tests showed that the ship was able to make the turn described by eyewitnesses without capsizing. In later tests, a fan was used to create a breeze similar to the one reported to have suddenly sprung up on the day of the sinking as the "Mary Rose" went to make the turn. As the model made the turn, the breeze in the upper works forced it to heel more than in calm conditions, pushing the main deck gun ports below the waterline and causing the model to founder within a few seconds. The sequence of events closely followed what eyewitnesses had reported, particularly the suddenness with which the ship sank. A salvage attempt was ordered by Secretary of State William Paget only days after the sinking, and Charles Brandon, the king's brother-in-law, took charge of practical details. The operation followed the standard procedure for raising ships in shallow waters: strong cables were attached to the sunken ship and fastened to two empty ships, or hulks. At low tide, the ropes were pulled taut with capstans. When the high tide came in, the hulks rose and with them the wreck. It would then be towed into shallower water and the procedure repeated until the whole ship could be raised completely. A list of necessary equipment was compiled by 1 August and included, among other things, massive cables, capstans, pulleys, and 40 pounds of tallow for lubrication. 
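The tidal lifting procedure described above amounts to simple arithmetic: each low-to-high tide cycle can raise the wreck by at most the local tidal range, after which it is towed into shallower water and the cables re-tautened. A minimal sketch, with hypothetical depth and tide figures:

```python
import math

def cycles_needed(wreck_depth_m: float, tidal_range_m: float) -> int:
    """Tide cycles needed to walk a wreck up from depth to the surface,
    gaining at most one tidal range per cycle."""
    return math.ceil(wreck_depth_m / tidal_range_m)

# e.g. a wreck in 11 m of water with a 4 m spring tide:
print(cycles_needed(11, 4))  # 3 cycles
```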
The proposed salvage team comprised 30 Venetian mariners and a Venetian carpenter with 60 English sailors to serve them. The two ships to be used as hulks were "Jesus of Lübeck" and "Samson", each of 700 tons burthen and similar in size to the "Mary Rose". Brandon was so confident of success that he reassured the king that it would only be a matter of days before they could raise the "Mary Rose". The optimism proved unfounded. Since the ship had settled at a 60-degree angle to starboard, much of it was stuck deep into the clay of the seabed. This made it virtually impossible to pass cables under the hull and required far more lifting power than if the ship had settled on a hard seabed. An attempt to secure cables to the main mast appears only to have resulted in its being snapped off. The project succeeded only in raising rigging, some guns and other items. At least two other salvage teams, in 1547 and 1549, received payment for raising more guns from the wreck. Despite the failure of the first salvage operation, there was still lingering belief in the possibility of retrieving the "Mary Rose" at least until 1546, when she was presented as part of the illustrated list of English warships called the Anthony Roll. It is not known when all hope of raising the complete ship was finally abandoned. It could have been after Henry VIII's death in January 1547, or even as late as 1549, when the last guns were brought up. The "Mary Rose" was remembered well into the reign of Elizabeth I, and according to one of the queen's admirals, William Monson (1569–1643), the wreck was visible from the surface at low tide in the late 16th century. After the sinking, the partially buried wreck created a barrier at a right angle to the currents of the Solent. Two scour pits, large underwater ditches, formed on either side of the wreck while silt and seaweed were deposited inside the ship. 
A deep but narrow pit formed on the upward-tilting port side, while a shallower, broader pit formed on the starboard side, which had mostly been buried by the force of the impact. The abrasive action of sand and silt carried by the currents and the activity of fungi, bacteria and wood-boring crustaceans and molluscs, such as the "teredo" shipworm, began to break down the structure of the ship. Eventually the exposed wooden structure was weakened and gradually collapsed. The timbers and contents of the port side were either deposited in the scour pits and the remaining ship structure, or carried off by the currents. Following the collapse of the exposed parts of the ship, the site was levelled with the seabed and gradually covered by layers of sediment, concealing most of the remaining structure. During the 16th century, a hard layer of compacted clay and crushed shells formed over the ship, stabilising the site and sealing the Tudor-era deposits. Further layers of soft silt covered the site during the 18th and 19th centuries, but frequent changes in the tidal patterns and currents in the Solent occasionally exposed some of the timbers, leading to the wreck's accidental rediscovery in 1836 and aiding in locating it in 1971. After the ship had been raised, it was determined that about 40% of the original structure had survived. In mid-1836, a group of five fishermen caught their nets on timbers protruding from the bottom of the Solent. They contacted a diver to help them remove the hindrance, and on , Henry Abbinett became the first person to see the "Mary Rose" in almost 300 years. Later, two other professional divers, John Deane and William Edwards, were employed. Using a recently invented rubber suit and metal diving helmet, Deane and Edwards began to examine the wreck and salvage items from it. 
Along with an assortment of timbers and wooden objects, including several longbows, they brought up several bronze and iron guns, which were sold to the Board of Ordnance for over £220. Initially, this caused a dispute between Deane (who had also brought his brother Charles into the project), Abbinett and the fishermen who had hired them. The matter was eventually settled by allowing the fishermen a share of the proceeds from the sale of the first salvaged guns, while Deane received exclusive salvage rights at the expense of Abbinett. The wreck was soon identified as the "Mary Rose" from the inscription on one of the bronze guns, manufactured in 1537. The identification of the ship led to significant public interest in the salvage operation, and caused a great demand for the objects which were brought up. Though many of the objects could not be properly conserved at the time and subsequently deteriorated, many were documented with pencil sketches and watercolour drawings which survive to this day. John Deane ceased working on the wreck in 1836, but returned in 1840 with new, more destructive methods. With the help of condemned bomb shells filled with gunpowder acquired from the Ordnance Board, he blasted his way into parts of the wreck. Fragments of bombs and traces of blasting craters were found during the modern excavations, but there was no evidence that Deane managed to penetrate the hard layer that had sealed off the Tudor levels. Deane reported retrieving a bilge pump and the lower part of the main mast, both of which would have been located inside the ship. The recovery of small wooden objects like longbows suggests that Deane did manage to penetrate the Tudor levels at some point, though this has been disputed by the excavation project leader Margaret Rule. 
Newspaper reports on Deane's diving operations in October 1840 report that the ship was clinker built, but since the sterncastle is the only part of the ship with this feature, an alternative explanation has been suggested: Deane did not penetrate the hard shelly layer that covered most of the ship, but only managed to get into remains of the sterncastle that today no longer exist. Despite the rough handling by Deane, the "Mary Rose" escaped the wholesale destruction by giant rakes and explosives that was the fate of other wrecks in the Solent (such as ). The modern search for the "Mary Rose" was initiated by the Southsea branch of the British Sub-Aqua Club in 1965 as part of a project to locate shipwrecks in the Solent. The project was under the leadership of historian, journalist and amateur diver Alexander McKee. Another group led by Lieutenant-Commander Alan Bax of the Royal Navy, sponsored by the Committee for Nautical Archaeology in London, also formed a search team. Initially the two teams had differing views on where to find the wreck, but eventually joined forces. In February 1966 a chart from 1841 was found that marked the positions of the "Mary Rose" and several other wrecks. The charted position coincided with a trench (one of the scour pits) that had already been located by McKee's team, and a definite location was finally established at a position 3 km (1.9 mi) south of the entrance to Portsmouth Harbour () in water with a depth of (36 feet) at low tide. Diving on the site began in 1966 and a sonar scan by Harold Edgerton in 1967–68 revealed some type of buried feature. In 1970 a loose timber was located and on 1971, the first structural details of the buried hull were identified after they were partially uncovered by winter storms. A major problem for the team from the start was that wreck sites in the UK lacked any legal protection from plunderers and treasure hunters. 
Sunken ships, having once been moving objects, were legally treated as chattels and were awarded to those who could first raise them. The Merchant Shipping Act of 1894 also stipulated that any objects raised from a wreck should be auctioned off to finance the salvage operations, and there was nothing preventing anyone from "stealing" the wreck and making a profit. The problem was handled by forming an organisation, the Mary Rose Committee, aiming "to find, excavate, raise and preserve for all time such remains of the ship "Mary Rose" as may be of historical or archaeological interest". To keep intruders at bay, the Committee arranged a lease of the seabed where the wreck lay from the Portsmouth authorities, thereby discouraging anyone from trespassing on the underwater property. In hindsight this was only a legalistic charade which had little chance of holding up in a court of law, but in combination with secrecy as to the exact location of the wreck, it saved the project from interference. It was not until the passing of the Protection of Wrecks Act on 1973 that the "Mary Rose" was declared to be of national historic interest and given full legal protection from any disturbance by commercial salvage teams. Despite this, years after the passing of the 1973 act and the excavation of the ship, lingering conflicts with salvage legislation remained a threat to the "Mary Rose" project, as "personal" finds such as chests, clothing and cooking utensils risked being confiscated and auctioned off. Following the discovery of the wreck in 1971, the project became known to the general public and received increasing media attention. This helped bring in more donations and equipment, primarily from private sources. By 1974 the Committee had representatives from the National Maritime Museum, the Royal Navy, the BBC and local organisations. In 1974 the project received royal patronage from Prince Charles, who participated in dives on the site. 
This attracted yet more publicity, and also more funding and assistance. The initial aims of the Mary Rose Committee were now formally confirmed: the Committee had become a registered charity in 1974, which made it easier to raise funds, and the application for excavation and raising of the ship had been officially approved by the UK government. By 1978 the initial excavation work had uncovered a complete and coherent site with an intact ship structure, and the orientation of the hull had been positively identified as being on an almost straight northerly heading with a 60-degree heel to starboard and a slight downward tilt towards the bow. As no records of English shipbuilding techniques used in vessels like the "Mary Rose" survive, excavation of the ship would allow for a detailed survey of her design and shed new light on the construction of ships of the era. A full excavation also meant removing the protective layers of silt that prevented the remaining ship structure from being destroyed through biological decay and the scouring of the currents; the operation had to be completed within a predetermined timespan of a few years or it risked irreversible damage. It was also considered desirable to recover and preserve the remains of the hull if possible. For the first time, the project was faced with the practical difficulties of actually raising, conserving and preparing the hull for public display. To handle this new, considerably more complex and expensive task, it was decided that a new organisation was needed. The Mary Rose Trust, a limited charitable trust with representatives from many organisations, would handle the larger operation and the large infusion of funds it required. In 1979 a new diving vessel was purchased to replace the 12 m (40 ft) catamaran "Roger Greenville", which had been used from 1971. 
The choice fell on the salvage vessel "Sleipner", the same craft that had been used as a platform for diving operations on the "Vasa". The project went from a team of only twelve volunteers working four months a year to over 50 individuals working almost around the clock nine months a year. In addition there were over 500 volunteer divers and a laboratory staff of about 70 that ran the shore base and conservation facilities. During the four diving seasons from 1979 to 1982 over 22,000 diving hours were spent on the site, an effort that amounted to 11.8 man-years. Raising the "Mary Rose" meant overcoming a number of delicate problems that had never been encountered before. The raising of the Swedish warship "Vasa" in 1959–61 was the only comparable precedent, but it had been a relatively straightforward operation since the hull was completely intact and rested upright on the seabed. It had been raised with basically the same methods as were in use in Tudor England: cables were slung under the hull and attached to two pontoons on either side of the ship, which was then gradually raised and towed into shallower waters. Only one third of the "Mary Rose" was intact, and she lay deeply embedded in mud. If the hull were raised in the traditional way, there was no guarantee that it would have enough structural strength to hold together out of water. Many suggestions for raising the ship were discarded, including the construction of a cofferdam around the wreck site, filling the ship with small buoyant objects (such as ping pong balls) or even pumping brine into the seabed and freezing it so that it would float and take the hull with it. After lengthy discussions it was decided in February 1980 that the hull would first be emptied of all its contents and strengthened with steel braces and frames. It would then be lifted to the surface with floating sheerlegs attached to nylon strops passing under the hull, and transferred to a cradle. 
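As a sanity check on the quoted effort figure, the implied working year can be back-derived; the roughly 1,860 hours per man-year is an inference from the two numbers above, not a figure stated in the source.

```python
# 22,000 diving hours over the 1979-82 seasons is said to equal
# 11.8 man-years; the implied hours per man-year:
diving_hours = 22_000
man_years = 11.8
print(f"{diving_hours / man_years:.0f} h per man-year")  # 1864 h
```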
It was also decided that the ship would be recovered before the end of the diving season in 1982. If the wreck stayed uncovered any longer it risked irreversible damage from biological decay and tidal scouring. During the last year of the operation, the massive scope of full excavation and raising was beginning to take its toll on those closely involved in the project. In May 1981, Alexander McKee voiced concerns about the method chosen for raising the timbers and openly questioned Margaret Rule's position as excavation leader. McKee felt ignored in what he viewed as a project where he had always played a central role, both as the initiator of the search for the "Mary Rose" and other ships in the Solent, and as an active member throughout the diving operations. He had several supporters who all pointed to the risk of the project's turning into an embarrassing failure if the ship were damaged during raising operations. To address these concerns it was suggested that the hull should be placed on top of a supporting steel cradle underwater. This would avoid the inherent risks of damaging the wooden structure if it were lifted out of the water without appropriate support. The idea of using nylon strops was also discarded in favour of drilling holes through the hull at 170 points and passing iron bolts through them to allow the attachment of wires connected to a lifting frame. In the spring of 1982, after three intense seasons of archaeological underwater work, preparations began for raising the ship. The operation soon ran into problems: early on there were difficulties with the custom-made lifting equipment; divers on the project belonging to the Royal Engineers had to be pulled because of the outbreak of the Falklands War; and the method of lifting the hull had to be considerably altered as late as June. After the frame was properly attached to the hull, it was slowly jacked up on four legs to pull the ship off the seabed. 
The massive crane of the barge "Tog Mor" then moved the frame and hull, transferring them underwater to the specially designed cradle, which was padded with water-filled bags. On the morning of 1982, the final lift of the entire package of cradle, hull and lifting frame began. It was watched by the team, Prince Charles and other spectators in boats around the site. At 9:03 am, the first timbers of the "Mary Rose" broke the surface. A second set of bags under the hull was inflated with air, to cushion the waterlogged wood. Finally, the whole package was placed on a barge and taken to the shore. Though eventually successful, the operation was close to foundering on two occasions: first when one of the supporting legs of the lifting frame was bent and had to be removed, and later when a corner of the frame, with "an unforgettable crunch", slipped more than a metre (3 feet) and came close to crushing part of the hull. As one of the most ambitious and expensive projects in the history of maritime archaeology, the "Mary Rose" project broke new ground within this field in the UK. Besides becoming one of the first wrecks to be protected under the new Protection of Wrecks Act in 1973, it also created several new precedents. It was the first time that a British privately funded project was able to apply modern scientific standards fully and without having to auction off part of the findings to finance its activities; where previous projects often had to settle for just a partial recovery of finds, everything found in connection with the "Mary Rose" was recovered and recorded. The raising of the vessel made it possible to establish the first historic shipwreck museum in the UK to receive government accreditation and funding. The excavation of the "Mary Rose" wreck site proved that it was possible to achieve a level of exactness in underwater excavations comparable to those on dry land. 
Throughout the 1970s, the "Mary Rose" was meticulously surveyed, excavated and recorded with the latest methods within the field of maritime archaeology. Working in an underwater environment meant that principles of land-based archaeology did not always apply. Mechanical excavators, airlifts and suction dredges were used in the process of locating the wreck, but as soon as it began to be uncovered in earnest, more delicate techniques were employed. Many objects from the "Mary Rose" had been well preserved in form and shape, but many were quite delicate, requiring careful handling. Artefacts of all sizes were supported with soft packing material, such as old plastic ice cream containers, and some of the arrows that were "soft like cream cheese" had to be brought up in special styrofoam containers. The airlifts that sucked up clay, sand and dirt off-site or to the surface were still used, but with much greater precision, since they could potentially disrupt the site. The many layers of sediment that had accumulated on the site could be used to date the artefacts found within them, and had to be recorded properly. The various types of accretions and remnants of chemicals found with artefacts were essential clues to objects that had long since broken down and disappeared, and needed to be treated with considerable care. After the excavation and raising of the ship in the 1970s and early 1980s, diving operations ceased, even though modern scaffolding and part of the bow were left on the seabed. The pressure on conservators to treat tens of thousands of artefacts, and the high costs of conserving, storing and displaying the finds and the ship, meant that there were no funds available for diving. In 2002, the UK Ministry of Defence announced plans to build two new aircraft carriers. Because of the great size of the new vessels, the outlet from Portsmouth needed to be surveyed to make sure that they could sail regardless of the tide. 
The planned route for the underwater channel ran close to the "Mary Rose" wreck site, which meant that funding was supplied to survey and excavate the site once more. Even though the planned carriers were down-sized enough not to require alteration of the Portsmouth outlet, the excavations had already exposed timbers, and were completed in 2005. Among the most important finds was the ten-metre (32 feet) stem, the forward continuation of the keel, which provided more exact details about the original profile of the ship. Over 26,000 artefacts and pieces of timber were raised, along with the remains of about half the crew members. The faces of some crew members have been reconstructed. Analysis of the crew skeletons shows that many had suffered malnutrition, with evidence of rickets, scurvy, and other deficiency diseases. Crew members also developed arthritis through the stresses on their joints from heavy lifting and maritime life generally, and suffered bone fractures. As the ship was intended to function as a floating, self-contained community, it was stocked with victuals (food and drink) that could sustain its inhabitants for extended periods of time. The casks used for storage on the "Mary Rose" have been compared with those from the wreck of a trade vessel from the 1560s; the comparison revealed that the "Mary Rose" casks were of better quality, more robust and reliable, an indication that supplies for the Tudor navy were given high priority, and that naval requirements set a high standard for cask manufacturing at the time. As a miniature society at sea, the wreck of the "Mary Rose" held personal objects belonging to individual crew members. These included clothing, games, various items for spiritual or recreational use, and objects related to mundane everyday tasks such as personal hygiene, fishing, and sewing. The master carpenter's chest, for example, contained a backgammon set, a book, three plates, a sundial, and a tankard, goods suggesting he was relatively wealthy. 
The ship carried several skilled craftsmen and was equipped for handling both routine maintenance and the repair of extensive battle damage. In and around one of the cabins on the main deck under the sterncastle, archaeologists found a "collection of woodworking tools ... unprecedented in its range and size", consisting of eight chests of carpentry tools. Along with loose mallets and tar pots used for caulking, this variety of tools belonged to one or several of the carpenters employed on the "Mary Rose". Many of the cannons and other weapons from the "Mary Rose" have provided invaluable physical evidence about 16th-century weapon technology. The surviving gunshields are almost all from the "Mary Rose", and the four small cast iron hailshot pieces are the only known examples of this type of weapon. Animal remains have been found in the wreck of the "Mary Rose". These include the skeletons of a rat, a frog and a dog. The dog, a mongrel between eighteen months and two years in age, was found near the hatch to the ship's carpenter's cabin and is thought to have been brought aboard as a ratter. Nine barrels have been found to contain bones of cattle, indicating that they held pieces of beef butchered and stored as ship's rations. The bones of pigs and fish, stored in baskets, have also been found. Two fiddles, a bow, a still shawm or "doucaine", three three-hole pipes, and a tabor drum with a drumstick were found throughout the wreck. These would have been used for the personal enjoyment of the crew and to provide a rhythm for work on the rigging and the turning of the capstans on the upper decks. The tabor drum is the earliest known example of its kind, and the drumstick is of a previously unknown design. The tabor pipes are considerably longer than any known examples from the period. Their discovery proved that contemporary illustrations, previously viewed with some suspicion, were in fact accurate depictions of the instruments. 
Before the discovery of the "Mary Rose" shawm, an early predecessor of the oboe, instrument historians had been puzzled by references to "still shawms", or "soft" shawms, that were said to have a sound less shrill than that of earlier shawms. The still shawm disappeared from the musical scene some time in the 16th century, and the instrument found on the "Mary Rose" is the only surviving example. A reproduction has been made and played. Combined with a pipe and tabor, it provides a "very effective bass part" that would have produced a "rich and full sound, which would have provided excellent music for dancing on board ship". Only a few other fiddle-type instruments from the 16th century exist, but none of them is of the type found on the "Mary Rose". Reproductions of both fiddles have been made, though less is known of their design than of the shawm, since the neck and strings were missing. In the remains of a small cabin in the bow of the ship, and in a few other locations around the wreck, was found the earliest dated set of navigation instruments yet discovered in Europe: compasses, divider calipers, a stick used for charting, protractors, sounding leads, tide calculators and a logreel, an instrument for calculating speed. Several of these objects are unique not only in having such an early, definite dating, but also in pre-dating written records of their use; protractors would reasonably have been used to measure bearings and courses on maps, but sea charts are not known to have been used by English navigators during the first half of the 16th century, compasses were not depicted on English ships until the 1560s, and the first mention of a logreel is from 1574. The cabin located on the main deck underneath the sterncastle is thought to have belonged to the barber-surgeon. He was a trained professional who saw to the health and welfare of the crew and acted as the medical expert on board. 
The most important of the medical finds were discovered in an intact wooden chest which contained over 60 objects relating to the barber-surgeon's practice: the wooden handles of a complete set of surgical tools and several shaving razors (although none of the steel blades had survived), a copper syringe for wound irrigation and the treatment of gonorrhoea, and even a skilfully crafted feeding bottle for incapacitated patients. More objects were found around the cabin, such as earscoops, shaving bowls and combs. With this wide selection of tools and medicaments, the barber-surgeon, along with one or more assistants, could set bone fractures, perform amputations and deal with other acute injuries, treat a number of diseases and provide crew members with a minimal standard of personal hygiene. One of the first scientifically confirmed ratters was "Hatch", a terrier and whippet crossbreed who spent his short life on the "Mary Rose". The dog, named Hatch by researchers, was discovered in 1981 during the underwater excavation of the ship. Hatch's main duty was to kill rats on board the ship. DNA work performed on Hatch's teeth shows that he was a young adult male, between 18 and 24 months of age, with a brown coat. Hatch's skeleton is on display in the Mary Rose Museum in Portsmouth Historic Dockyard. Preservation of the "Mary Rose" and her contents was an essential part of the project from the start. Though many artefacts, especially those that were buried in silt, had been well preserved, the long exposure to an underwater environment had rendered most of them sensitive to exposure to air after recovery. Archaeologists and conservators had to work in tandem to prevent deterioration of the artefacts. After recovery, finds were placed in so-called passive storage, which would prevent any immediate deterioration before the active conservation that would allow them to be stored in an open-air environment. 
Passive storage depended on the type of material that the object was made of, and could vary considerably. Smaller objects of the most common material, wood, were sealed in polyethylene bags to preserve moisture. Timbers and other objects that were too large to be wrapped were stored in unsealed water tanks. Growth of the fungi and microbes that could degrade wood was controlled by various techniques, including low-temperature storage, chemicals, and, in the case of large objects, common pond snails that consumed wood-degrading organisms but not the wood itself. Other organic materials such as leather, skin and textiles were treated similarly, by keeping them moist in tanks or sealed plastic containers. Bone and ivory were desalinated to prevent damage from salt crystallisation, as were glass, ceramic and stone. Iron, copper and copper alloy objects were kept moist in a sodium sesquicarbonate solution to prevent oxidisation and reaction with the chlorides that had penetrated the surface. Alloys of lead and pewter are inherently stable in the atmosphere and generally require no special treatment. Silver and gold were the only materials that required no special passive storage. Conserving the hull of the "Mary Rose" was the most complicated and expensive task for the project. In 2002 a donation of from the Heritage Lottery Fund and equivalent monetary support from the Portsmouth City and Hampshire County Councils was needed to keep the conservation work on schedule. During passive conservation, the ship structure could for practical reasons not be completely sealed, so instead it was regularly sprayed with filtered, recycled water that was kept at a temperature of 2 to 5 °C (35 to 41 °F) to keep it from drying out. Drying waterlogged wood that has been submerged for several centuries without appropriate conservation causes considerable shrinkage (20–50%) and leads to severe warping and cracking as water evaporates from the cellular structure of the wood. 
The substance polyethylene glycol (PEG) had been used on archaeological wood before, and during the 1980s it was being used to conserve the "Vasa". After almost ten years of small-scale trials on timbers, an active three-phase conservation programme for the hull of the "Mary Rose" began in 1994. During the first phase, which lasted from 1994 to 2003, the wood was sprayed with low-molecular-weight PEG to replace the water in its cellular structure. From 2003 to 2010, a higher-molecular-weight PEG was used to strengthen the mechanical properties of the outer surface layers. The third phase consisted of controlled air drying, ending in 2016. Researchers plan to use magnetic nanoparticles to remove iron from the ship's wood, reducing the production of the harmful sulfuric acid that is causing deterioration. After the decision to raise the "Mary Rose", discussions ensued as to where she would eventually go on permanent display. The east end of Portsea Island at Eastney emerged as an early alternative, but was rejected because of parking problems and the distance from the dockyard where she was originally built. Placing the ship next to the famous flagship of Horatio Nelson, HMS "Victory", at Portsmouth Historic Dockyard was proposed in July 1981. A group called the Maritime Preservation Society even suggested Southsea Castle, where Henry VIII had witnessed the sinking, as a final resting place, and there was widespread scepticism about the dockyard location. At one point a county councillor even threatened to withdraw promised funds if the dockyard site became more than an interim solution. As costs for the project mounted, there was a debate in the Council chamber and in the local paper "The News" as to whether the money could be spent more appropriately. 
Although author David Childs writes that in the early 1980s "the debate was a fiery one", the project was never seriously threatened because of the great symbolic importance of the "Mary Rose" to the naval history of both Portsmouth and England. Since the mid-1980s, the hull of the "Mary Rose" has been kept in a covered dry dock while undergoing conservation. Although the hull has been open to the public for viewing, the need to keep the ship saturated first with water and later with a polyethylene glycol (PEG) solution meant that, before 2013, visitors were separated from the hull by a glass barrier. By 2007, the specially built ship hall had been visited by over seven million visitors since it first opened in 1983, just under a year after the ship was successfully raised. A separate Mary Rose Museum was housed in a structure called No. 5 Boathouse near the ship hall and was opened to the public in 1984, containing displays explaining the history of the ship and a small number of conserved artefacts, from entire bronze cannons to household items. In September 2009 the temporary "Mary Rose" display hall was closed to visitors to facilitate construction of the new museum building, which opened to the public on 31 May 2013. The new Mary Rose Museum was designed by architects Wilkinson Eyre and Perkins+Will and built by construction firm Warings. The construction was challenging because the museum was built over the ship in the dry dock, which is a listed monument. During construction of the museum, conservation of the hull continued inside a sealed "hotbox". In April 2013 the polyethylene glycol sprays were turned off and the process of controlled air drying began. In 2016 the "hotbox" was removed and, for the first time since 1545, the ship was revealed dry. The new museum displays most of the artefacts recovered from within the ship in context with the conserved hull. Since opening it has been visited by over 500,000 people.
https://en.wikipedia.org/wiki?curid=19679
Mario Kart With six "Mario Kart" games released on home consoles, three on portable handheld consoles, four arcade games co-developed with Namco and one for mobile phones, the "Mario Kart" series includes a total of fourteen entries. The latest game in the main series, "Mario Kart Tour", was released on iOS and Android in September 2019. The series has sold over 150 million copies worldwide to date. The first title in the series, "Super Mario Kart", was released for the Super Nintendo Entertainment System in 1992. The development of the first game was overseen by Shigeru Miyamoto, the Japanese video game designer who created the original "Super Mario Bros." as well as many other successful games for Nintendo. Darran Jones of NowGamer suggests that the original success of "Super Mario Kart" was the result of including characters previously seen in "Mario Bros." games, while also being a new type of racing game. In the "Mario Kart" series, players compete in go-kart races, controlling one of a selection of characters, typically from the "Mario" franchise. Up to twelve characters can compete in each race; the exact number varies between games. One of the features of the series is the use of various power-up items obtained by driving into item boxes laid out on the course. These power-ups include mushrooms to give players a speed boost, Koopa Shells to be thrown at opponents, and banana peels and fake item boxes that can be laid on the course as hazards. The type of item received from an item box is influenced by the player's current position in the race: players lagging far behind may receive more powerful items, such as Bullet Bills, which give a larger speed boost the further back the player is, while the leader may only receive small defensive items, such as shells or bananas. Called rubber banding, this gameplay mechanism gives trailing players, whether human or computer-controlled, a realistic chance to catch up to the leader. 
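The position-weighted item distribution described above can be sketched as a simple weighted random draw. The item names, brackets, and weights below are illustrative assumptions for the sake of the sketch, not the actual probability tables used in any "Mario Kart" game:

```python
import random

# Hypothetical item weights by race position bracket (1 = leader).
# The leader mostly draws weak defensive items; trailing racers draw
# stronger catch-up items -- the "rubber banding" idea.
ITEM_WEIGHTS = {
    "leader":   {"banana": 5, "green_shell": 4, "mushroom": 1},
    "midfield": {"green_shell": 3, "red_shell": 3, "mushroom": 3, "banana": 1},
    "trailing": {"mushroom": 3, "triple_mushroom": 3, "bullet_bill": 2, "star": 2},
}

def position_bracket(position: int, racers: int) -> str:
    """Map a race position to a coarse bracket of the field."""
    if position == 1:
        return "leader"
    return "trailing" if position > racers * 2 // 3 else "midfield"

def draw_item(position: int, racers: int = 12) -> str:
    """Draw one item from the weight table for the given position."""
    table = ITEM_WEIGHTS[position_bracket(position, racers)]
    items, weights = zip(*table.items())
    return random.choices(items, weights=weights, k=1)[0]

# A racer in last place can receive a Bullet Bill; the leader never does.
print(draw_item(12))  # e.g. "bullet_bill"
print(draw_item(1))   # e.g. "banana"
```

Skewing the weight tables rather than forcing specific items keeps the outcome random while still biasing stronger items toward the back of the pack.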
They can also perform driving techniques during the race such as rocket starts, slipstreaming, and mini-turbos. As the series has progressed, each new installment has introduced new gameplay elements, such as new circuits, items, modes, and playable characters. "Mario Kart" mainly features characters from the "Mario" franchise. The "Mario Kart Arcade GP" series features Bandai Namco characters such as Pac-Man. The DLC for "Mario Kart 8" added Link from "The Legend of Zelda", and Villager and Isabelle from "Animal Crossing". "Mario Kart 8 Deluxe" features 42 characters, including the Inklings from "Splatoon". Many course themes recur throughout the series. Most are based on existing areas in the "Mario" franchise (Bowser's Castle being among the most prominent), but there are a number of courses that have not appeared elsewhere, yet still belong in the Mushroom Kingdom, such as Rainbow Road, which usually takes place above a city or in space. Each game in the series (following the original) includes at least sixteen original courses and up to six original battle arenas. Each game's tracks are divided into four "cups", groups in which the player must finish with the highest overall ranking to win: the Mushroom Cup, the Flower Cup, the Star Cup, and the Special Cup. Most courses are completed in three laps; exceptions include the original game, where all circuits required five laps to finish, Baby Park in "Mario Kart: Double Dash!!", which takes seven, and Wario Colosseum in "Double Dash!!" as well as the courses in "Mario Kart Tour", which take two. The first game to feature courses from previous games was "Mario Kart: Super Circuit", which contained all of the tracks from the original Super NES game. Starting with "Mario Kart DS", each entry in the series has featured sixteen "nitro" tracks (brand new courses introduced for that game) and sixteen "retro" tracks (reappearing courses from previous "Mario Kart" games), spread across four cups each with four races. 
The four Retro Grand Prix cups are the Shell Cup, the Banana Cup, the Leaf Cup, and the Lightning Cup. In "Mario Kart 8", sixteen additional tracks are available across two downloadable packages of eight tracks each, comprising seven retro courses, four original courses, and five courses based on other Nintendo franchises, including "Excitebike", "F-Zero", "The Legend of Zelda", and "Animal Crossing", divided into four additional cups: the Egg Cup, the Triforce Cup, the Crossing Cup, and the Bell Cup. "Mario Kart Tour" introduced courses from around the world, including New York City, Tokyo, Paris, London, and Vancouver, as well as variants of courses where drivers race in reverse (R), with additional ramps and elevation (T), or a combination of the two (R/T). Each installment features a variety of different modes; five of them recur most often in the series. Several "Mario Kart"-related items appear in the "Super Smash Bros." series: "Super Smash Bros. Brawl" features a Mario Circuit stage based on Figure-8 Circuit from "Mario Kart DS", "Super Smash Bros. for Nintendo 3DS" features a Rainbow Road stage based on its appearance in "Mario Kart 7", "Super Smash Bros. for Wii U" features a Mario Circuit stage based on its appearance in "Mario Kart 8" along with the returning Mario Circuit stage from "Brawl", and "Super Smash Bros. Ultimate" features Spirits and songs based on the series along with the returning stages. Certain courses from the series have also appeared in "F-Zero X", "Fortune Street", the "Mario & Sonic" series, "", and the "WarioWare" series. Various items from the series can also be seen in games such as "Nintendogs" and "Animal Crossing". The "Mario Kart" series has had a range of merchandise released. Among the products is a slot car racer series based on "Mario Kart DS", which comes with Mario and Donkey Kong figures, while Wario and Luigi are available separately. 
A line of radio-controlled karts has also been marketed, which are controlled by Game Boy Advance-shaped controllers and feature Mario, Donkey Kong, and Yoshi. There are additional, larger karts that depict the same trio and are radio-controlled by a GameCube-shaped controller. Japanese figurines of Mario, Luigi, Peach, Toad, Yoshi, Wario, Donkey Kong, and Bowser are also available for purchase, and for "Mario Kart 64", figures of Mario, Luigi, Wario, Bowser, Donkey Kong, and Yoshi were made by Toybiz. There are also Sound Drops inspired by "Mario Kart Wii", with eight sounds taken from the game including the Spiny Shell and the Item Box. A landline telephone featuring Mario holding a lightning bolt while seated in his kart has also been marketed. K'Nex released "Mario Kart Wii" sets, with Mario, Luigi, Yoshi, Donkey Kong, and Bowser in karts and bikes, as well as tracks from the game. "Mario Kart 7" and "Mario Kart 8" K'Nex sets have also been released. LINE has released a sticker set with 24 stickers based on "Mario Kart 8" and "Mario Kart 8 Deluxe". Nintendo's own customer rewards program Club Nintendo released merchandise from the series as well, including a "Mario Kart 8" soundtrack, a "Mario Kart Wii"-themed stopwatch, and three gold trophies modeled after those in "Mario Kart 7". Before Club Nintendo, a "Mario Kart 64" soundtrack was offered by mail. In 2014, McDonald's released "Mario Kart 8" toys with Happy Meals, featuring eight of the characters in karts that were customizable with stickers. In 2018, Monopoly Gamer released a "Mario Kart"-themed board game with courses from "Mario Kart 8" serving as properties, ten playable characters as tokens (Mario, Luigi, Peach, Toad, Donkey Kong, Shy Guy, Metal Mario, Rosalina, Bowser, and Yoshi) and a special die with power-ups taken from the series. In 2019, Hot Wheels teamed up with "Mario Kart" to release cars and track sets based on the series. 
The "Mario Kart" series has received acclaim from critics. "Nintendo Power" listed the series as one of the greatest multiplayer experiences, citing the diversity in game modes as well as the entertainment value they provide. "Guinness World Records" listed six records set by the "Mario Kart" series, including "First Console Kart Racing Game", "Best Selling Racing Game" and "Longest Running Kart Racing Franchise". "Guinness World Records" also ranked the original "Super Mario Kart" number 1 on its list of the top 50 console games of all time, based on initial impact and lasting legacy. "Super Mario Kart" was inducted into the World Video Game Hall of Fame in 2019. Like the "Super Mario" series, the "Mario Kart" series has achieved successful sales, with over 150 million copies sold in total. "Super Mario Kart" has sold 8.76 million copies and is the fourth best-selling game on the Super Nintendo Entertainment System. "Mario Kart 64" is the second best-selling game for the Nintendo 64 (behind "Super Mario 64"), selling a total of 9.87 million copies. "" has sold 6.96 million copies and is the second best-selling game on the GameCube (next to "Super Smash Bros. Melee"). "Mario Kart Wii" has achieved highly successful numbers, selling a total of 37.32 million copies; it is the best-selling installment in the series and the second best-selling game for the Wii (next to "Wii Sports"). "Mario Kart 8", released for the Wii U, shipped 1.2 million copies in North America and Europe combined in its first few days after launch, making it the console's fastest-selling game until the record was beaten by "Super Smash Bros. for Wii U". It sold a total of 8.45 million copies and is the Wii U's best-selling game. In contrast, the enhanced port for the Nintendo Switch, "Mario Kart 8 Deluxe", sold 459,000 units in the United States within one day of its launch, making it the fastest-selling game in the series to date. 
"Deluxe" sold a total of 24.77 million copies worldwide, outperforming the original Wii U version, and is the best-selling Nintendo Switch game of all time. Both versions sold a combined total of 33.22 million copies, making it the second best-selling game in the series. The portable entries have also posted outstanding sales. "" has sold a total of 5.9 million copies, making it the fourth best-selling game on the Game Boy Advance. The second portable game, "Mario Kart DS", has sold a total of 23.60 million copies; the third best-selling game for the Nintendo DS, it is also the best-selling portable game in the series. "Mario Kart 7", released for the Nintendo 3DS, has sold 18.71 million copies and is the best-selling 3DS game as of March 2020. In September 2016, Nintendo filed an objection against the Japanese company MariCar, which rents go-karts modified for use on public roads in Tokyo along with costumes resembling Nintendo characters. MariCar's English website warned customers not to throw "banana peels" or "red turtle shells". The service is popular with tourists. Nintendo argued that the MariCar name was "intended to be mistaken for or confused with" "Mario Kart", citing games commonly known by abbreviations in Japan, such as "Pokémon" (for "Pocket Monsters") and Sumabura ("Super Smash Bros."). In January 2017, the Japan Patent Office dismissed the objection, ruling that MariCar was not widely recognized as an abbreviation of "Mario Kart". In February 2017, Nintendo sued MariCar over copyright infringement for renting unauthorized costumes of Nintendo characters and using their pictures to promote its business. In September 2018, MariCar was ordered to stop using the characters and pay Nintendo ¥10 million in damages. In their most recent announcement of the Super Nintendo World theme park, Universal Parks & Resorts and Nintendo revealed plans for a "Mario Kart"-themed ride at Universal Studios Japan. 
There are also plans to build the ride in Singapore, Orlando and California, with announcements expected either at the first Nintendo Direct of 2020 or at the 2020 Summer Olympics in Tokyo, Japan. Nintendo and Universal have confirmed that their new theme park in Florida, Universal's Epic Universe, will be the home of Super Nintendo World in Florida.
https://en.wikipedia.org/wiki?curid=19680
Mind map A mind map is a diagram used to visually organize information. A mind map is hierarchical and shows relationships among pieces of the whole. It is often created around a single concept, drawn as an image in the center of a blank page, to which associated representations of ideas such as images, words and parts of words are added. Major ideas are connected directly to the central concept, and other ideas branch out from those major ideas. Mind maps can also be drawn by hand, either as "notes" during a lecture, meeting or planning session, for example, or as higher quality pictures when more time is available. Mind maps are considered to be a type of spider diagram. A similar concept in the 1970s was "idea sun bursting". Although the term "mind map" was first popularized by British popular psychology author and television personality Tony Buzan, the use of diagrams that visually "map" information using branching and radial maps traces back centuries. These pictorial methods record knowledge and model systems, and have a long history in learning, brainstorming, memory, visual thinking, and problem solving by educators, engineers, psychologists, and others. Some of the earliest examples of such graphical records were developed by Porphyry of Tyros, a noted thinker of the 3rd century, as he graphically visualized the concept categories of Aristotle. Philosopher Ramon Llull (1235–1315) also used such techniques. The semantic network was developed in the late 1950s as a theory to understand human learning and developed further by Allan M. Collins and M. Ross Quillian during the early 1960s. Mind maps are similar in structure to concept maps, developed by learning experts in the 1970s, but differ in that mind maps are simplified by focusing around a single central key concept. Buzan's specific approach, and the introduction of the term "mind map", arose during a 1974 BBC TV series he hosted, called "Use Your Head". 
In this show, and its companion book series, Buzan promoted his conception of the radial tree, diagramming key words in a colorful, radiant, tree-like structure. Buzan says the idea was inspired by Alfred Korzybski's general semantics as popularized in science fiction novels, such as those of Robert A. Heinlein and A. E. van Vogt. He argues that while "traditional" outlines force readers to scan left to right and top to bottom, readers actually tend to scan the entire page in a non-linear fashion. Buzan's treatment also uses then-popular assumptions about the functions of the cerebral hemispheres to explain the claimed increased effectiveness of mind mapping over other forms of note making. Cunningham (2005) conducted a user study in which 80% of the students thought "mindmapping helped them understand concepts and ideas in science". Other studies also report some subjective positive effects from the use of mind maps. Positive opinions on their effectiveness, however, were much more prominent among students of art and design than among students of computer and information technology, with 62.5% vs 34% (respectively) agreeing that they were able to understand concepts better with mind mapping software. Farrand, Hussain, and Hennessy (2002) found that spider diagrams (similar to concept maps) had a limited but significant impact on memory recall in undergraduate students (a 10% increase over baseline for a 600-word text only) as compared to preferred study methods (a 6% increase over baseline). This improvement was only robust after a week for those in the diagram group, and there was a significant decrease in motivation compared to the subjects' preferred methods of note taking. A meta study about concept mapping concluded that concept mapping is more effective than "reading text passages, attending lectures, and participating in class discussions". 
The same study also concluded that concept mapping is slightly more effective "than other constructive activities such as writing summaries and outlines". However, results were inconsistent, with the authors noting that "significant heterogeneity was found in most subsets". In addition, they concluded that low-ability students may benefit more from mind mapping than high-ability students. Joeran Beel and Stefan Langer conducted a comprehensive analysis of the content of mind maps. They analysed 19,379 mind maps from 11,179 users of the mind mapping applications Docear and MindMeister. Results include that average users create only a few mind maps (mean = 2.7), that average mind maps are rather small (31 nodes), and that each node contains about three words (median). However, there were exceptions: one user created more than 200 mind maps, the largest mind map consisted of more than 50,000 nodes, and the largest node contained ~7,500 words. The study also showed that significant differences exist between different mind mapping applications (Docear vs MindMeister) in how users create mind maps. There have been some attempts to create mind maps automatically. Brucks & Schommer created mind maps automatically from full-text streams. Rothenberger et al. extracted the main story of a text and presented it as a mind map. And there is a patent about automatically creating sub-topics in mind maps. Mind-mapping software can be used to organize large amounts of information, combining spatial organization, dynamic hierarchical structuring and node folding. Software packages can extend the concept of mind-mapping by allowing individuals to map more than thoughts and ideas, linking in information on their computers and the Internet, such as spreadsheets, documents, Internet sites and images. It has been suggested that mind-mapping can improve learning/study efficiency up to 15% over conventional note-taking.
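The dynamic hierarchical structuring and node folding that mind-mapping software provides can be sketched with a minimal tree type. This is an illustrative data structure under assumed names, not the model used by Docear, MindMeister, or any other application:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One idea in the map; child ideas branch out from it."""
    text: str
    children: list["Node"] = field(default_factory=list)
    folded: bool = False  # a folded node hides its whole subtree

    def add(self, text: str) -> "Node":
        """Attach a new child idea and return it for chaining."""
        child = Node(text)
        self.children.append(child)
        return child

    def visible_count(self) -> int:
        """Count nodes shown once folded subtrees are collapsed."""
        if self.folded:
            return 1
        return 1 + sum(c.visible_count() for c in self.children)

# A tiny map built around a single central concept.
root = Node("Trip planning")
travel = root.add("Travel")
travel.add("Flights")
travel.add("Trains")
root.add("Budget")

travel.folded = True         # collapse the "Travel" branch
print(root.visible_count())  # 3: root, the folded "Travel" node, "Budget"
```

Folding simply prunes the traversal at a node, which is how mind-mapping tools let users hide detail while keeping the central concept and major branches in view.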
https://en.wikipedia.org/wiki?curid=19688
Machine gun A machine gun is a fully automatic mounted or portable firearm designed to fire rifle cartridges in rapid succession from an ammunition belt or magazine. Not all fully automatic firearms are machine guns. Submachine guns, rifles, assault rifles, battle rifles, shotguns, pistols or cannons may be capable of fully automatic fire, but are not designed for sustained fire. As a class of military rapid-fire guns, machine guns are fully automatic weapons designed to be used as support weapons and generally used when attached to a mount or fired from the ground on a bipod or tripod. Many machine guns also use belt feeding and open bolt operation, features not normally found on rifles. Unlike semi-automatic firearms, which require one trigger pull per round fired, a machine gun is designed to fire for as long as the trigger is held down. Nowadays the term is restricted to relatively heavy weapons, able to provide continuous or frequent bursts of automatic fire for as long as ammunition lasts. Machine guns are normally used against personnel, aircraft and light vehicles, or to provide suppressive fire, either directly or indirectly. They are commonly mounted on fast attack vehicles such as technicals to provide heavy mobile firepower, armored vehicles such as tanks for engaging targets too small to justify use of the primary weaponry or too fast to effectively engage with it, and on aircraft as defensive armament or for strafing ground targets, though on fighter aircraft true machine guns have mostly been supplanted by large-caliber rotary guns. Some machine guns have in practice sustained fire almost continuously for hours; other automatic weapons overheat after less than a minute of use. Because they become very hot, the great majority of designs fire from an open bolt, to permit air cooling from the breech between bursts. 
They also usually have either a barrel cooling system, slow-heating heavyweight barrel, or removable barrels which allow a hot barrel to be replaced. Although subdivided into "light", "medium", "heavy" or "general-purpose", even the lightest machine guns tend to be substantially larger and heavier than standard infantry arms. Medium and heavy machine guns are either mounted on a tripod or on a vehicle; when carried on foot, the machine gun and associated equipment (tripod, ammunition, spare barrels) require additional crew members. Light machine guns are designed to provide mobile fire support to a squad and are typically air-cooled weapons fitted with a box magazine or drum and a bipod; they may use full-size rifle rounds, but modern examples often use intermediate rounds. Medium machine guns use full-sized rifle rounds and are designed to be used from fixed positions mounted on a tripod. "Heavy machine gun" is a term that originated in World War I to describe heavyweight medium machine guns and persisted into World War II with Japanese Hotchkiss M1914 clones; today, however, it is used to refer to automatic weapons with a caliber of at least .50 in (12.7 mm) but less than 20 mm. A general-purpose machine gun is usually a lightweight medium machine gun that can either be used with a bipod and drum in the light machine gun role or a tripod and belt feed in the medium machine gun role. Machine guns usually have simple iron sights, though the use of optics is becoming more common. A common aiming system for direct fire is to alternate solid ("ball") rounds and tracer ammunition rounds (usually one tracer round for every four ball rounds), so shooters can see the trajectory and "walk" the fire into the target, and direct the fire of other soldiers. Many heavy machine guns, such as the Browning M2 .50 caliber machine gun, are accurate enough to engage targets at great distances. 
During the Vietnam War, Carlos Hathcock set the record for a long-distance shot using a .50 caliber heavy machine gun he had equipped with a telescopic sight. This led to the introduction of .50 caliber anti-materiel sniper rifles, such as the Barrett M82. Other automatic weapons are subdivided into several categories based on the size of the bullet used, whether the cartridge is fired from a closed bolt or an open bolt, and whether the action used is locked or is some form of blowback. Fully automatic firearms using pistol-calibre ammunition are called machine pistols or submachine guns largely on the basis of size; those using shotgun cartridges are almost always referred to as automatic shotguns. The term personal defense weapon (PDW) is sometimes applied to weapons firing dedicated armor-piercing rounds which would otherwise be regarded as machine pistols or SMGs, but it is not particularly strongly defined and has historically been used to describe a range of weapons from ordinary SMGs to compact assault rifles. Selective fire rifles firing a full-power rifle cartridge from a closed bolt are called automatic rifles or battle rifles, while rifles that fire an intermediate cartridge are called assault rifles. Assault rifles are a compromise between the size and weight of a pistol-calibre submachine gun and a full-size battle rifle, firing intermediate cartridges and allowing semi-automatic and burst or full-automatic fire options (selective fire), sometimes with both of the latter present. Many machine guns are of the locked breech type and follow a broadly similar cycle of operation: feeding a round, chambering and locking it, firing, then unlocking, extracting and ejecting the spent case before the cycle repeats. The operation is basically the same for all locked breech automatic firearms, regardless of the means of activating these mechanisms. There are also multi-chambered formats, such as the revolver cannon, and some automatic weapons, including many submachine guns and the Schwarzlose machine gun, that do not lock at all but instead use simple blowback or some type of delayed blowback. 
Most modern machine guns are of the locking type, and of these, most utilize the principle of gas-operated reloading, which taps off some of the propellant gas from the fired cartridge, using its mechanical pressure to unlock the bolt and cycle the action. The Russian PK machine gun is an example. Another efficient and widely used format is the recoil-actuated type, which uses the gun's recoil energy for the same purpose; machine guns such as the M2 Browning and MG42 are of this second kind. A cam, lever or actuator absorbs part of the energy of the recoil to operate the gun mechanism. An externally actuated weapon uses an external power source, such as an electric motor or hand crank, to move its mechanism through the firing sequence. Modern weapons of this type are often referred to as Gatling guns, after the original inventor (not only of the well-known hand-cranked 19th-century proto-machine gun, but also of the first electrically powered version). They have several barrels, each with an associated chamber and action, on a rotating carousel, and a system of cams that load, cock, and fire each mechanism progressively as it rotates through the sequence; essentially, each barrel is a separate bolt-action rifle using a common feed source. The continuous nature of the rotary action and its relative immunity to overheating allow for an extremely high cyclic rate of fire, often several thousand rounds per minute. Rotary guns are less prone to jamming than a gun operated by gas or recoil, as the external power source will eject misfired rounds with no further trouble, but this is not possible in the rare cases of self-powered rotary guns. Rotary designs are intrinsically comparatively bulky and expensive, and are therefore generally used with large rounds, 20 mm in diameter or more, in which case they are often referred to as rotary cannons – though the rifle-calibre Minigun is an exception. 
While such weapons are highly reliable and formidably effective, one drawback is that the weight and size of the power source and driving mechanism make them usually impractical for use outside of a vehicle or aircraft mount. Revolver cannons, such as the Mauser MK 213, were developed in World War II by the Germans to provide high-caliber cannons with a reasonable rate of fire and reliability. In contrast to the rotary format, such weapons have a single barrel and a recoil-operated carriage holding a revolving chamber with typically five chambers. As each round is fired electrically, the carriage moves back, rotating the chamber, which ejects the spent case, indexes the next live round into line with the barrel, and loads the next round into the chamber. The action is very similar to that of the revolver pistols common in the 19th and 20th centuries, giving this type of weapon its name. A chain gun is a specific, patented type of revolver cannon, the name in this case deriving from its driving mechanism. Firing a machine gun for prolonged periods produces large amounts of heat. In a worst-case scenario this may cause a cartridge to overheat and detonate even when the trigger is not pulled, potentially leading to damage or causing the gun to cycle its action and keep firing until it has exhausted its ammunition supply or jammed (this is known as "cooking off", distinct from "runaway fire", where the sear fails to re-engage when the trigger is released). To prevent this, some kind of cooling system is required. Early machine guns were often water-cooled; while this technology was very effective (and was indeed one of the sources of the notorious efficiency of machine guns during the First World War), the water jackets added considerable weight to an already bulky design, and they were also vulnerable to bullets themselves. Armour could of course be provided, and in World War I the Germans in particular often did this, but this added yet more weight to the guns. 
Air-cooled machine guns often feature quick-change barrels (often carried by a crew member), passive cooling fins, or, in some designs, forced-air cooling, such as that employed by the Lewis Gun. Advances in metallurgy and the use of special composites in barrel liners allow for greater heat absorption and dissipation during firing. The higher the rate of fire, the more often barrels must be changed and allowed to cool. To minimize this, most air-cooled guns are fired only in short bursts or at a reduced rate of fire. Some designs – such as the many variants of the MG42 – are capable of rates of fire in excess of 1,200 rounds per minute. Gatling guns are capable of the fastest firing rates of all, partly because this format involves extra energy being injected into the system from outside, instead of depending on energy derived from the propellant contained within the cartridges, and partly because this design deals with the unwanted heat most efficiently – effectively quick-changing the barrel and chamber after every shot. Because the multiple barrels of a Gatling gun comprise a much larger bulk of metal than other, single-barreled guns, they are much slower to rise in temperature for a given amount of heat. At the same time they are much better at shedding the excess, as the extra barrels not only provide a larger surface area from which to dissipate it, but are also spun at very high speed, which produces enhanced air-cooling as a side effect. In weapons where the round seats and fires at the same time, mechanical timing is essential for operator safety, to prevent the round from firing before it is seated properly. Machine guns are controlled by one or more mechanical sears. When a sear is in place, it effectively stops the bolt at some point in its range of motion. Some sears stop the bolt when it is locked to the rear. Other sears stop the firing pin from going forward after the round is locked into the chamber. 
Almost all machine guns have a "safety" sear, which simply keeps the trigger from engaging. The first successful machine-gun designs were developed in the mid-19th century. The key characteristic of modern machine guns, their relatively high rate of fire and more importantly mechanical loading, first appeared in the Model 1862 Gatling gun, which was adopted by the United States Navy. These weapons were still powered by hand; however, this changed with Hiram Maxim's idea of harnessing recoil energy to power reloading in his Maxim machine gun. Dr. Gatling also experimented with electric-motor-powered models; as discussed above, this externally powered machine reloading has seen use in modern weapons as well. While technical use of the term "machine gun" has varied, the modern definition used by the Sporting Arms and Ammunition Manufacturers' Institute of America is "a fully automatic firearm that loads, fires and ejects continuously when the trigger is held to the rear until the ammunition is exhausted or pressure on the trigger is released." This definition excludes most early manually operated repeating arms such as volley guns and the Gatling gun. The first known ancestors of multi-shot weapons were medieval organ guns, while the first to have the ability to fire multiple shots from a single barrel without a full manual reload were revolvers made in Europe in the late 1500s. One is a shoulder-gun-length weapon made in Nuremberg, Germany, circa 1580. Another is a revolving arquebus, produced by Hans Stopler of Nuremberg in 1597. True repeating long arms were difficult to manufacture prior to the development of the unitary firearm cartridge; nevertheless, lever-action repeating rifles such as the Kalthoff repeater and Cookson repeater were made in small quantities in the 17th century. Perhaps the earliest examples of predecessors to the modern machine gun are to be found in East Asia. 
According to the Wu-Pei-Chih, a booklet examining Chinese military equipment produced during the first quarter of the 17th century, the Chinese army had in its arsenal the 'Po-Tzu Lien-Chu-P'ao' or 'string-of-100-bullets cannon'. This was a repeating cannon fed by a hopper which fired its charges sequentially. Another repeating gun was produced by a Chinese commoner in the late 17th century. This weapon was also hopper-fed and never went into mass production. The first mention of the automatic principle behind machine guns came in 1663, in a paper presented to the Royal Society of England by an Englishman named Palmer, who described a volley gun capable of being operated by either recoil or gas. Another early revolving gun was created by James Puckle, a London lawyer, who patented what he called "The Puckle Gun" on May 15, 1718. It was a design for a manually operated 1.25 in. (32 mm) caliber flintlock cannon with a revolver cylinder able to fire 6–11 rounds before reloading by swapping out the cylinder, intended for use on ships. It was one of the earliest weapons to be referred to as a machine gun, being called such in 1722, though its operation does not match the modern usage of the term. According to Puckle, it was able to fire round bullets at Christians and square bullets at Turks. However, it was a commercial failure and was not adopted or produced in any meaningful quantity. In 1777, Philadelphia gunsmith Joseph Belton offered the Continental Congress a "new improved gun", which was capable of firing up to twenty shots in five seconds; unlike older repeaters using complex lever-action mechanisms, it used a simpler system of superposed loads, and was loaded with a single large paper cartridge. Congress requested that Belton modify 100 flintlock muskets to fire eight shots in this manner, but rescinded the order when Belton's price proved too high.
In the early and mid-19th century, a number of rapid-firing weapons appeared which offered multi-shot fire, mostly volley guns. Volley guns (such as the Mitrailleuse) and double-barreled pistols relied on duplicating all parts of the gun, though the Nock gun used the otherwise-undesirable "chain fire" phenomenon (where multiple chambers are ignited at once) to propagate a spark from a single flintlock mechanism to multiple barrels. Pepperbox pistols also did away with needing multiple hammers but used multiple manually operated barrels. Revolvers further reduced this to only needing a pre-prepared cylinder and linked advancing the cylinder to cocking the hammer. However, these were still manually operated. In the 1830s a machine gun was designed by a Swiss named Jacob Steuble, who tried to sell it to the Russian, English and French governments. The English and Russian governments showed interest, but the former refused to pay Steuble, who later sued them, and the latter tried to imprison him. The French government showed interest at first, and while its examiners noted that mechanically there was nothing wrong with Steuble's invention, they turned him down, stating that the machine both lacked novelty and could not be usefully employed by the army. In 1847 a short description of a prototype electrically-ignited mechanical machine gun was published in Scientific American by a J. R. Nichols. The model described is small in scale and works by rotating a series of barrels vertically, so that each feeds at the top from a "tube" or hopper and can, according to the author, be discharged immediately at any elevation after having received a charge. In 1848 an Italian by the name of Cesar Rosaglio announced his invention of a machine gun capable of being operated by a single man and firing 300 shots a minute, or 12,000 in an hour after taking into account the time needed to reload the "tanks" of ammunition.
In June 1851 a model of a 'war engine' allegedly capable of firing 10,000 ball cartridges in 10 minutes was demonstrated by a British inventor called Francis McGetrick. In 1852 a rotary cannon using a unique form of wheellock ignition was designed by an Irish immigrant to America by the name of Delany. In 1854 a British patent for a mechanically operated machine gun was filed by Henry Clarke. This weapon used multiple barrels arranged side by side, fed by a revolving cylinder that was in turn fed by hoppers, similar to the system used by Nichols. The gun could be fired by percussion or electricity, according to the author. Unlike other mechanically operated machine guns of the era, this weapon didn't use any form of self-contained cartridge, with firing being carried out by separate percussion caps. In the same year, water cooling was proposed for machine guns by Henry Bessemer, along with a water cleaning system, though he later abandoned this design. In his patent, Bessemer describes a hydropneumatic blowback-operated, fully automatic cannon. Part of the patent also refers to a steam-operated piston to be used with firearms but the bulk of the patent is spent detailing the former system. In America, a patent for a machine gun type weapon was filed by John Andrus Reynolds in 1855. Another early American patent for a manually operated machine gun was filed by C. E. Barnes in 1856. In France and Britain, a mechanically operated machine gun was patented in 1856 by the Frenchman Francois Julien. This weapon was a cannon that fed from a type of open-ended tubular magazine, only using rollers and an endless chain in place of springs. The Agar Gun, otherwise known as a "coffee-mill gun" because of its resemblance to a coffee mill, was invented by Wilson Agar at the beginning of the US Civil War. The weapon featured mechanized loading using a hand crank linked to a hopper above the weapon. 
The weapon featured a single barrel and fired through the turning of the same crank; it operated using paper cartridges fitted with percussion caps and inserted into metal tubes which acted as chambers; it was therefore functionally similar to a revolver. The weapon was demonstrated to President Lincoln in 1861. He was so impressed with the weapon that he purchased 10 on the spot for $1,500 apiece. The Union Army eventually purchased a total of 54 of the weapons. However, due to the antiquated views of the Ordnance Department, the weapon, like its more famous counterpart the Gatling gun, saw only limited use. The Gatling gun, patented in 1861 by Richard Jordan Gatling, was the first to offer controlled, sequential fire with mechanical loading. The design's key features were machine loading of prepared cartridges and a hand-operated crank for sequential high-speed firing. It first saw very limited action in the American Civil War; it was subsequently improved and used in the Franco-Prussian War and North-West Rebellion. Many were sold to other armies in the late 19th century and continued to be used into the early 20th century, until they were gradually supplanted by Maxim guns. Early multi-barrel guns were approximately the size and weight of contemporary artillery pieces, and were often perceived as a replacement for cannon firing grapeshot or canister shot. The large wheels required to move these guns around required a high firing position, which increased the vulnerability of their crews. Sustained firing of gunpowder cartridges generated a cloud of smoke, making concealment impossible until smokeless powder became available in the late 19th century. Gatling guns were targeted by artillery they could not reach, and their crews were targeted by snipers they could not see. The Gatling gun was used most successfully to expand European colonial empires, since against poorly equipped indigenous armies it did not face such threats. In 1870 a Lt. D. H.
Friberg of the Swedish army patented a fully automatic recoil-operated firearm action and may have produced firing prototypes of a derived design around 1882: this was the forerunner to the 1907 Kjellman machine gun, though, due to rapid residue buildup from the use of black powder, Friberg's design was not a practical weapon. Also in 1870, the Bavarian regiment of the Prussian army used a unique mitrailleuse-style weapon in the Franco-Prussian War. The weapon was made up of four barrels placed side by side that replaced the manual loading of the French mitrailleuse with a mechanical loading system featuring a hopper containing 41 cartridges at the breech of each barrel. Although it was used effectively at times, mechanical difficulties hindered its operation and it was ultimately abandoned shortly after the war ended. The first practical self-powered machine gun was invented in 1884 by Sir Hiram Maxim. The Maxim machine gun used the recoil power of the previously fired bullet to reload rather than being hand-powered, enabling a much higher rate of fire than was possible using earlier designs such as the Nordenfelt and Gatling weapons. Maxim also introduced the use of water cooling, via a water jacket around the barrel, to reduce overheating. Maxim's gun was widely adopted, and derivative designs were used on all sides during the First World War. The design required fewer crew and was lighter and more usable than the Nordenfelt and Gatling guns. First World War combat experience demonstrated the military importance of the machine gun. The United States Army issued four machine guns per regiment in 1912, but that allowance increased to 336 machine guns per regiment by 1919. Heavy guns based on the Maxim such as the Vickers machine gun were joined by many other machine weapons, which mostly had their start in the early 20th century, such as the Hotchkiss machine gun.
Submachine guns (e.g., the German MP 18) as well as lighter machine guns (the first light machine gun deployed in any significant number being the Madsen machine gun, with the Chauchat and Lewis gun soon following) saw their first major use in World War I, along with heavy use of large-caliber machine guns. The biggest single cause of casualties in World War I was actually artillery, but combined with wire entanglements, machine guns earned a fearsome reputation. Another fundamental development occurring before and during the war was the incorporation by gun designers of machine gun auto-loading mechanisms into handguns, giving rise to semi-automatic pistols such as the Borchardt (1890s), automatic machine pistols and later submachine guns (such as the Beretta 1918). Machine guns were mounted in aircraft for the first time in World War I. Immediately this raised a problem. The most effective position for guns in a single-seater fighter was clearly, for the purpose of aiming, directly in front of the pilot; but this placement would obviously result in bullets striking the moving propeller. Early solutions, aside from simply hoping that luck was on the pilot's side with an unsynchronized forward-firing gun, involved either aircraft with pusher props like the Vickers F.B.5, Royal Aircraft Factory F.E.2 and Airco DH.2, wing mounts like that of the Nieuport 10 and Nieuport 11 which avoided the propeller entirely, or armored propeller blades such as those mounted on the Morane-Saulnier L which would allow the propeller to deflect unsynchronized gunfire. By mid 1915, the introduction of a reliable gun synchronizer by the Imperial German Flying Corps made it possible to fire a closed-bolt machine gun forward through a spinning propeller by timing the firing of the gun to miss the blades. 
The Allies had no equivalent system until 1916 and their aircraft suffered badly as a result, a period known as the Fokker Scourge, after the Fokker Eindecker, the first German plane to incorporate the new technology. As better materials became available following the First World War, light machine guns became more readily portable; designs such as the Bren light machine gun replaced bulky predecessors like the Lewis gun in the squad support weapon role, while the modern division between medium machine guns like the M1919 Browning machine gun and heavy machine guns like the Browning M2 became clearer. New designs largely abandoned water jacket cooling systems as both undesirable, due to a greater emphasis on mobile tactics, and unnecessary, thanks to the alternative and superior technique of preventing overheating by swapping barrels. The interwar years also produced the first widely used and successful general-purpose machine gun, the German MG 34. While this machine gun was equally able in the light and medium roles, it proved difficult to manufacture in quantity, and experts on industrial metalworking were called in to redesign the weapon for modern tooling, creating the MG 42. This weapon was simpler, cheaper to produce, fired faster, and replaced the MG 34 in every application except vehicle mounts, since the MG 42's barrel changing system could not be operated when it was mounted. Experience with the MG 42 led to the US issuing a requirement to replace the aging Browning Automatic Rifle with a similar weapon, which would also replace the M1919; simply using the MG 42 itself was not possible, as the design brief required a weapon which could be fired from the hip or shoulder like the BAR. The resulting design, the M60 machine gun, was issued to troops during the Vietnam War.
As it became clear that a high-volume-of-fire weapon would be needed for fast-moving jet aircraft to reliably hit their opponents, Gatling's work with electrically powered weapons was recalled and the 20 mm M61 Vulcan designed; a miniaturized 7.62 mm version, initially known as the "mini-Vulcan" and quickly shortened to "minigun", was soon in production for use on helicopters, where the volume of fire could compensate for the instability of the helicopter as a firing platform. The most common interface on machine guns is a pistol grip and trigger. On earlier manual machine guns, the most common type was a hand crank. On externally powered machine guns, such as miniguns, an electronic button or trigger on a joystick is commonly used. Light machine guns often have a butt stock attached, while vehicle and tripod mounted machine guns usually have spade grips. In the late 20th century, scopes and other complex optics became more common, as opposed to the more basic iron sights. Loading systems in early manual machine guns were often from a hopper of loose (un-linked) cartridges. Manually operated volley guns usually had to be reloaded all at once (each barrel reloaded by hand, or with a set of cartridges affixed to a plate that was inserted into the weapon). With hoppers, the rounds could often be added while the weapon was firing. This gradually changed to belt-fed types. Belts were either held in the open by the person, or in a bag or box. Some modern vehicle machine guns, however, use linkless feed systems. Modern machine guns are commonly mounted in one of four ways. The first is a bipod; often these are integrated with the weapon. This is common on light machine guns and some medium machine guns. Another is a tripod, where the person holding it does not form a "leg" of support. Medium and heavy machine guns usually use tripods.
On ships, vehicles and aircraft machine guns are usually mounted on a pintle mount – basically a steel post that is connected to the frame or body. Tripod and pintle mounts are usually used with spade grips. The last major mounting type is one that is disconnected from humans, as part of an armament system, such as a tank coaxial or part of an aircraft's armament. These are usually electrically fired and have complex sighting systems, for example, the US Helicopter Armament Subsystems.
https://en.wikipedia.org/wiki?curid=19690
Monopoly (game) Monopoly is a board game currently published by Hasbro. In the game, players roll two six-sided dice to move around the game board, buying and trading properties, and developing them with houses and hotels. Players collect rent from their opponents, with the goal being to drive them into bankruptcy. Money can also be gained or lost through Chance and Community Chest cards, and tax squares; players can end up in jail, which they cannot move from until they have met one of several conditions. The game has numerous house rules, and hundreds of different editions exist, as well as many spin-offs and related media. "Monopoly" has become a part of international popular culture, having been licensed locally in more than 103 countries and printed in more than 37 languages. "Monopoly" is derived from "The Landlord's Game" created by Lizzie Magie in the United States in 1903 as a way to demonstrate that an economy which rewards wealth creation is better than one where monopolists work under few constraints, and to promote the economic theories of Henry George, in particular his ideas about taxation. It was first published by Parker Brothers in 1935; that firm was eventually absorbed into Hasbro in 1991. The game is named after the economic concept of monopoly, the domination of a market by a single entity. The history of "Monopoly" can be traced back to 1903, when American anti-monopolist Lizzie Magie created a game which she hoped would explain the single tax theory of Henry George. It was intended as an educational tool to illustrate the negative aspects of concentrating land in private monopolies. She took out a patent in 1904. Her game, "The Landlord's Game", was self-published beginning in 1906. Magie created two sets of rules: an anti-monopolist set in which all were rewarded when wealth was created, and a monopolist set in which the goal was to create monopolies and crush opponents.
Several variant board games, based on her concept, were developed from 1906 through the 1930s; they involved both the process of buying land for its development and the sale of any undeveloped property. Cardboard houses were added, and rents increased as they were added to a property. Magie patented the game again in 1923. According to an advertisement placed in "The Christian Science Monitor", Charles Todd of Philadelphia recalled the day in 1932 when his childhood friend, Esther Jones, and her husband Charles Darrow came to their house for dinner. After the meal, the Todds introduced Darrow to "The Landlord's Game", which they then played several times. The game was entirely new to Darrow, and he asked the Todds for a written set of the rules. After that night, Darrow copied the game and went on to distribute it himself as "Monopoly". Parker Brothers bought the game's copyrights from Darrow. When the company learned Darrow was not the sole inventor of the game, it bought the rights to Magie's patent for just $500. Parker Brothers began marketing the game on November 5, 1935. Cartoonist F. O. Alexander contributed the design. U.S. patent number US 2026082 A was issued to Charles Darrow on December 31, 1935, for the game board design and was assigned to Parker Brothers Inc. The original version of the game in this format was based on the streets of Atlantic City, New Jersey. In 1936, Parker Brothers began licensing the game for sale outside the United States. In 1941, the British Secret Intelligence Service had John Waddington Ltd., the licensed manufacturer of the game in the United Kingdom, create a special edition for World War II prisoners of war held by the Nazis. Hidden inside these games were maps, compasses, real money, and other objects useful for escaping. They were distributed to prisoners by fake charity organizations created by the British Secret Service.
In the Nazi-occupied Netherlands, the German government and its collaborators were displeased with Dutch people using Monopoly game sets with American or British locales, and developed a version with Dutch locations. Since that version had in itself no specific pro-Nazi elements, it continued in use after the war, and formed the base for Monopoly games used in the Netherlands up to the present. Economics professor Ralph Anspach published a game "Anti-Monopoly" in 1973, and was sued for trademark infringement by Parker Brothers in 1974. The case went to trial in 1976. Anspach won on appeal in 1979, as the 9th Circuit Court determined that the trademark "Monopoly" was generic and therefore unenforceable. The United States Supreme Court declined to hear the case, allowing the appellate court ruling to stand. This decision was overturned by the passage of Public Law 98-620 in 1984. With that law in place, Parker Brothers and its parent company, Hasbro, continue to hold valid trademarks for the game "Monopoly". However, "Anti-Monopoly" was exempted from the law, and Anspach later reached a settlement with Hasbro and markets his game under license from them. The research Anspach conducted during the litigation helped bring the game's history before Charles Darrow into the spotlight. In 1991, Hasbro acquired Parker Bros. and thus "Monopoly". Before the Hasbro acquisition, Parker Bros. acted as a publisher only, issuing just two versions at a time: a regular and a deluxe edition. Hasbro moved to create and license many other versions of "Monopoly" and sought public input in varying the game. A new wave of licensed products began in 1994, when Hasbro granted a license to USAopoly to begin publishing a San Diego Edition of "Monopoly", which has since been followed by more than a hundred more licensees, including Winning Moves Games (since 1995) and Winning Solutions, Inc. (since 2000) in the United States.
In 2003, the company held a national tournament on a chartered train going from Chicago to Atlantic City. Also in 2003, Hasbro sued the maker of Ghettopoly and won. In February 2005, the company sued RADGames over their Super Add-On accessory board game that fit in the center of the board. The judge initially issued an injunction on February 25, 2005, to halt production and sales before ruling in RADGames' favor in April 2005. In 2008, the Speed Die was added to all regular Monopoly sets. After polling their Facebook followers, Hasbro Gaming took the top house rules and added them to a House Rule Edition released in the fall of 2014, and added them as optional rules in 2015. In January 2017, Hasbro invited Internet users to vote on a new set of game pieces, with this new regular edition to be issued in March 2017. On May 1, 2018, the Monopoly Mansion hotel agreement was announced by Hasbro's managing director for South-East Asia, Hong Kong and Taiwan, Jenny Chew Yean Nee, with M101 Holdings Sdn Bhd. M101's five-star, 225-room hotel, then under construction at M101 Bukit Bintang in Kuala Lumpur, would have a 1920s Gatsby feel. M101's Sirocco Group would manage the hotel when it opened in 2019. The "Monopoly" game-board consists of forty spaces containing twenty-eight properties—twenty-two streets (grouped into eight color groups), four railroads, and two utilities—three Chance spaces, three Community Chest spaces, a Luxury Tax space, an Income Tax space, and the four corner squares: GO, (In) Jail/Just Visiting, Free Parking, and Go to Jail. There have since been some changes to the board. Not all of the Chance and Community Chest cards as shown in the 1935 patent were used in editions from 1936/1937 onwards. Graphics with the Mr. Monopoly character (then known as "Rich Uncle Pennybags") were added in that same time-frame.
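As a quick check, the forty-space layout just described tallies up exactly (a sketch; the category names below are ours, not official rulebook terms):

```python
# Tally of the standard board layout described above:
# 40 spaces = 28 properties (22 streets + 4 railroads + 2 utilities)
#           + 3 Chance + 3 Community Chest + 2 tax spaces + 4 corners.
spaces = {
    "streets": 22,
    "railroads": 4,
    "utilities": 2,
    "chance": 3,
    "community_chest": 3,
    "taxes": 2,        # Income Tax and Luxury Tax
    "corners": 4,      # GO, Jail/Just Visiting, Free Parking, Go to Jail
}

properties = spaces["streets"] + spaces["railroads"] + spaces["utilities"]
assert properties == 28
assert sum(spaces.values()) == 40
```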
A graphic of a chest containing coins was added to the Community Chest spaces, as were the flat purchase prices of the properties. Traditionally, the Community Chest cards were yellow (although they were sometimes printed on blue stock) with no decoration or text on the back; the Chance cards were orange with no text or decoration on the back. Hasbro commissioned a major graphic redesign to the U.S. Standard Edition of the game in 2008, along with some minor revisions. Among the changes: the colors of Mediterranean and Baltic Avenues changed from purple to brown, and the color of the GO square changed from red to black. A flat $200 Income Tax was imposed (formerly the player's choice of $200 or 10% of their total holdings, which they could not calculate until after making their final decision). Originally the amount was $300, but it was changed a year after the game's debut, and the Luxury Tax amount increased to $100 from $75. There were also changes to the Chance and Community Chest cards; for example, the "poor tax" and "grand opera opening" cards became "speeding fine" and "it is your birthday", respectively, though their effects remained the same. The school tax was reduced: the player now pays only $50 instead of $150. In addition, a player now gets $50 instead of $45 for sale of stock, and the Advance to Illinois Avenue card now has added text indicating a player collects $200 if they pass Go on the way there. All the Chance and Community Chest cards received a graphic upgrade in 2008 as part of the graphic refresh of the game. Mr. Monopoly's classic line illustration was also now usually replaced by renderings of a 3D Mr. Monopoly model. The backs of the cards have their respective symbols, with Community Chest cards in blue, and Chance cards in orange. Additionally, recent versions of "Monopoly" replace the dollar sign ($) with an M with two horizontal strokes through it. In the U.S.
versions shown below, the properties are named after locations in (or near) Atlantic City, New Jersey. Atlantic City's Illinois Avenue was renamed Martin Luther King Jr. Blvd. in the 1980s. St. Charles Place no longer exists, as the Showboat Atlantic City was developed where it once ran. Different versions have been created based on various current consumer interests, such as "Dog-opoly", "Cat-opoly", "Bug-opoly", and TV/movie games, among others. It was not until 1995 that Parker Brothers acknowledged the misspelling of "Marvin Gardens", formally apologizing to the residents of Marven Gardens. Short Line refers to the Shore Fast Line, a streetcar line that served Atlantic City. The B&O Railroad did not serve Atlantic City. A booklet included with the reprinted 1935 edition states that the four railroads that served Atlantic City in the mid-1930s were the Jersey Central, the Seashore Lines, the Reading Railroad, and the Pennsylvania Railroad. The Baltimore & Ohio (now part of CSX) was the parent of the Reading. There is a tunnel in Philadelphia where the track to the south was B&O and the track to the north was Reading. The Central of N.J. did not have a track to Atlantic City but was the daughter of the Reading (and granddaughter of the B&O); its track ran from the New York City area to Delaware Bay, and some trains ran on the Reading-controlled track to Atlantic City. The actual "Electric Company" and "Water Works" serving the city are respectively Atlantic City Electric Company (a subsidiary of Exelon) and the Atlantic City Municipal Utilities Authority. In the 1930s, John Waddington Ltd. (Waddingtons) was a printing company in Leeds that had begun to branch out into packaging and the production of playing cards. Waddingtons had sent the card game "Lexicon" to Parker Brothers, hoping to interest them in publishing the game in the United States.
In a similar fashion, Parker Brothers sent over a copy of "Monopoly" to Waddingtons early in 1935 before the game had been put into production in the United States. Victor Watson, the managing director of Waddingtons, gave the game to his son Norman, head of the card games division, to test over the weekend. Norman was impressed by the game and persuaded his father to call Parker Brothers on Monday morning – transatlantic calls then being almost unheard of. This call resulted in Waddingtons obtaining a license to produce and market the game outside the United States. Watson felt that for the game to be a success in the United Kingdom, the American locations would have to be replaced, so Victor and his secretary, Marjory Phillips, went to London to scout out locations. The Angel, Islington is not a street in London but a building (and the name of the road intersection where it is located). It had been a coaching inn that stood on the Great North Road. By the 1930s, the inn had become a J. Lyons and Co. tea room (today The Co-operative Bank). Some accounts say that Marjory and Victor met at the Angel to discuss the selection and celebrated the fact by including it on the "Monopoly" board. In 2003, a plaque commemorating the naming was unveiled at the site by Victor Watson's grandson, who is also named Victor. During World War II, the British Secret Service contacted Waddington (who could also print on silk) to make "Monopoly" sets that included escape maps, money, a compass and file, all hidden in copies of the game sent by fake POW relief charities to prisoners of war. The standard British board, produced by Waddingtons, was for many years the version most familiar to people in countries in the Commonwealth (except Canada, where the U.S. edition with Atlantic City-area names was reprinted), although local variants of the board are now also found in several of these countries. 
In 1998, Winning Moves procured the "Monopoly" license from Hasbro and created new UK city and regional editions with sponsored squares. Initially, in December 1998, the game was sold in just a few W H Smith stores, but demand was high, with almost fifty thousand games shipped in the four weeks leading to Christmas. Winning Moves still produces new editions annually. The original income tax choice from the 1930s U.S. board is replaced by a flat rate on the UK board, and the $75 Luxury Tax space is replaced with the £100 Super Tax space, the same as the current German board. In 2008, the U.S. Edition was changed to match the UK and various European editions, including a flat $200 Income Tax value and an increased $100 Luxury Tax amount. In cases where a national company produced the game, the $ (dollar) sign was replaced with the £ (pound), but the place names were unchanged. Beginning in the U.K. in 2005, a revised version of the game, titled "Monopoly Here and Now", was produced, replacing game scenarios, properties, and tokens with newer equivalents. Similar boards were produced for Germany and France. Variants of these first editions appeared with Visa-branded debit cards taking the place of cash; the later U.S. "Electronic Banking" edition has unbranded debit cards. The success of the first "Here and Now" editions prompted Hasbro U.S. to allow online voting for twenty-six landmark properties across the United States to take their places along the game-board. The popularity of this voting, in turn, led to the creation of similar websites, and secondary game-boards per popular vote to be created in the U.K., Canada, France, Germany, Australia, New Zealand, Ireland, and other nations. In 2006, Winning Moves Games released the "", with a 30% larger game-board and revised game play. Other streets from Atlantic City (eight, one per color group) were included, along with a third "utility", the Gas Company.
In addition, $1,000 denomination notes (first seen in Winning Moves' "Monopoly: The Card Game") are included. Game play is further changed with bus tickets (allowing non-dice-roll movement along one side of the board), a speed die (itself adopted into variants of the "Atlantic City standard edition"; see below), skyscrapers (after houses and hotels), and train depots that can be placed on the Railroad spaces. This edition was adapted for the U.K. market in 2007, and is sold by Winning Moves U.K. After the initial U.S. release, critiques of some of the rules caused the company to issue revisions and clarifications on their website. In September 2006, the U.S. edition of "Monopoly Here and Now" was released. This edition features top landmarks across the U.S. The properties were decided by votes over the Internet in the spring of 2006. Monetary values are multiplied by 10,000 (e.g., one collects $2,000,000 instead of $200 for passing GO and pays that much for Income Tax (or 10% of their total, as this edition was launched prior to 2008), each player starts with $15,000,000 instead of $1,500, etc.). Also, the Chance and Community Chest cards are updated, the Railroads are replaced by Airports (Chicago O'Hare, Los Angeles International, New York City's JFK, and Atlanta's Hartsfield-Jackson), and the Utilities (Electric Company and Water Works) are replaced by Service Providers (Internet Service Provider and Cell Phone Service Provider). The houses and hotels are blue and silver, not green and red as in most editions of "Monopoly". The board uses the traditional U.S. layout; the cheapest properties are purple, not brown, and "Interest on Credit Card Debt" replaces "Luxury Tax". Despite the updated Luxury Tax space, and the Income Tax space no longer using the 10% option, this edition uses paper "Monopoly" money, and not an electronic banking unit like the "Here and Now World Edition". 
However, a similar edition of "Monopoly", the "Electronic Banking" edition, does feature an electronic banking unit and bank cards, as well as a different set of tokens. Both "Here and Now" and "Electronic Banking" feature an updated set of tokens from the Atlantic City edition. It is also notable that three states (California, Florida, and Texas) are represented by two cities each (Los Angeles and San Francisco, Miami and Orlando, and Dallas and Houston). No other state is represented by more than one city (not including the airports). One landmark, Texas Stadium, has been demolished and no longer exists. Another landmark, Jacobs Field, still exists, but was renamed Progressive Field in 2008. In 2015, in honor of the game's 80th birthday, Hasbro held an online vote to determine which cities would make it into an updated version of the Here and Now edition of the game. This second edition is more of a spin-off, as the winning condition changed to completing one's passport instead of bankrupting one's opponents. Community Chest is replaced with Here and Now cards, while the Here and Now space replaces the railroads. Houses and hotels have been removed. Hasbro released a World edition with the top-voted cities from all around the world, as well as at least a Here & Now edition with the voted-on U.S. cities. "Monopoly Empire" has uniquely branded tokens and places based on popular brands. Instead of buying properties, players buy popular brands one by one and slide their billboards onto their Empire towers. Instead of building houses and hotels, players collect rent from their rivals based on their tower height. A player wins by being the first to fill his or her tower with billboards. Every space on the board is a brand name, including Xbox, Coca-Cola, McDonald's and Samsung. Monopoly Token Madness: This version of Monopoly contains an extra eight "golden" tokens, including a penguin, a television, a race car, a Mr. Monopoly emoji, a rubber duck, a watch, a wheel and a bunny slipper. Monopoly Jackpot: During the game, players travel around the gameboard buying properties and collecting rent. If they land on a Chance space, or roll the Chance icon on a die, they can spin the Chance spinner to try to make more money. Players may hit the "Jackpot", go bankrupt, or be sent to Jail. The player who has the most cash when the bank crashes wins. Monopoly: Ultimate Banking Edition: In this version, there is no cash. The Monopoly Ultimate Banking game features an electronic ultimate banking piece with touch technology. Players can buy properties instantly and set rents by tapping. Each player has a bankcard, and their cash is tracked by the Ultimate Banking unit. It can scan the game's property cards and boost or crash the market. Event cards and Location spaces replace Chance and Community Chest cards. On an Event Space, rents may be raised or lowered, a player may earn or lose money, or someone could be sent to Jail. Location Spaces allow players to pay and move to any property space on the gameboard. Monopoly Voice Banking: In this version, there is no cash or cards. The Voice Banking game lets players issue commands by voice to the Top Hat, which responds by purchasing properties, paying rent, and erecting buildings. "Ms. Monopoly" is a version of the game released in 2019, in which female players earn more than male players. All property deeds, houses, and hotels are held by the bank until bought by the players. A standard set of "Monopoly" pieces includes: A deck of thirty-two Chance and Community Chest cards (sixteen each), which players draw when they land on the corresponding squares of the track, and follow the instructions printed on them.
A title deed for each property is given to a player to signify ownership, and specifies purchase price, mortgage value, the cost of building houses and hotels on that property, and the various rents depending on how developed the property is. Purchase prices for properties vary from $60 to $400 on a U.S. Standard Edition set. A pair of six-sided dice is included, with a "Speed Die" added for variation in 2007. The 1999 Millennium Edition featured two jewel-like dice, which were the subject of a lawsuit from Michael Bowling, owner of dice maker Crystal Caste. Hasbro lost the suit in 2008 and had to pay $446,182 in royalties. Subsequent printings of the game reverted to normal six-sided dice. 32 houses and 12 hotels made of wood or plastic (the original and current "Deluxe Edition" have wooden houses and hotels; the current "base set" uses plastic buildings). Unlike money, houses and hotels have a finite supply. If no more are available, no substitute is allowed. In most editions, houses are green and hotels red. Older U.S. standard editions of the game included a total of $15,140 in seven denominations. Newer (September 2008 and later) U.S. editions provide a total of $20,580 (30 of each denomination) instead. The colors of some of the bills are also changed: $10s are now blue instead of yellow, $20s are a brighter green than before, and $50s are now purple instead of blue. Each player begins the game with his or her token on the Go square, and $1,500 (or 1,500 of a localized currency) in play money ($2,500 with the Speed Die). Before September 2008, the money was divided with greater numbers of $20 and $10 bills. Since then, the U.S. version has taken on the British version's initial cash distribution. Although the U.S. version is indicated as allowing eight players, the standard initial cash distribution is not possible with all eight players, since it requires 32 $100 bills and 40 $1 bills.
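The arithmetic behind this eight-player limitation can be checked directly. The per-player breakdown below (2×$500, 4×$100, 1×$50, 1×$20, 2×$10, 1×$5, 5×$1) is an assumption for illustration, chosen because it sums to $1,500 and produces exactly the shortage of 32 $100 bills and 40 $1 bills described above:

```python
# Assumed post-2008 starting cash per player (denomination: count); sums to $1,500.
PER_PLAYER = {500: 2, 100: 4, 50: 1, 20: 1, 10: 2, 5: 1, 1: 5}
BANK_BILLS_EACH = 30  # post-2008 sets ship 30 bills of each denomination

# Sanity checks against the figures stated in the text:
assert sum(d * n for d, n in PER_PLAYER.items()) == 1500    # starting cash
assert sum(d * BANK_BILLS_EACH for d in PER_PLAYER) == 20580  # bank total

def shortfalls(players: int) -> dict:
    """Denominations whose required bill count exceeds the bank's stock."""
    return {d: n * players for d, n in PER_PLAYER.items()
            if n * players > BANK_BILLS_EACH}

# With eight players, the $100 and $1 bills run out exactly as stated:
print(shortfalls(8))  # {100: 32, 1: 40}
print(shortfalls(6))  # {} -- six players fit within 30 bills of each
```

Note how the $20,580 bank total also falls out of the same 30-of-each assumption: 30 × ($500 + $100 + $50 + $20 + $10 + $5 + $1) = 30 × $686 = $20,580.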
However, the amount of cash contained in the game is enough for eight players with a slight alteration of bill distribution. Pre-Euro German editions of the game started with 30,000 "Spielmark" in eight denominations (abbreviated as "M."), and later used seven denominations of the "Deutsche Mark" ("DM."). In the classic Italian game, each player received L. 350,000 ($3500) in a two-player game, but L. 50,000 ($500) less for each player more than two. Only in a six-player game does a player receive the equivalent of $1,500. The classic Italian games were played with only four denominations of currency. Both Spanish editions (the Barcelona and Madrid editions) started the game with 150,000 in play money, with a breakdown identical to that of the American version. According to the Parker Brothers rules, Monopoly money is theoretically unlimited; if the bank runs out of money it may issue as much as needed "by merely writing on any ordinary paper". However, Hasbro's published Monopoly rules make no mention of this. Additional paper money can be bought at certain locations, notably game and hobby stores, or downloaded from various websites and printed and cut by hand. One such site has created a $1,000 bill; while a $1,000 bill can be found in "" and "Monopoly: The Card Game", both published by Winning Moves Games, this note is not a standard denomination for "classic" versions of Monopoly. In several countries there is also a version of the game that features electronic banking. Instead of receiving paper money, each player receives a plastic bank card that is inserted into a calculator-like electronic device that keeps track of the player's balance. Each player is represented by a small metal or plastic token that is moved around the edge of the board according to the roll of two six-sided dice. 
The number of tokens (and the tokens themselves) has changed over the history of the game, with many appearing only in special editions and some available only with non-game purchases. After early 1937 printings with wooden tokens, a set of eight metal tokens was introduced. Two more were added in late 1937, and the tokens changed again in 1942. During World War II, the game tokens were switched back to wood. Early localized editions of the standard edition (including some Canadian editions, which used the U.S. board layout) did not include pewter tokens but instead had generic wooden pawns identical to those used in "Sorry!". Many of the early tokens were created by companies such as Dowst Miniature Toy Company, which made metal charms and tokens designed to be used on charm bracelets. The battleship and cannon were also used briefly in the Parker Brothers war game "Conflict" (released in 1940), but after the game failed on the market, the premade pieces were recycled for "Monopoly" usage. By 1943, there were ten tokens: the Battleship, Boot, Cannon, Horse and rider, Iron, Racecar, Scottie Dog, Thimble, Top hat, and Wheelbarrow. These tokens remained the same until the late 1990s, when Parker Brothers was sold to Hasbro. In 1998, a Hasbro advertising campaign asked the public to vote on a new playing piece to be added to the set. The candidates were a "bag of money", a bi-plane, and a piggy bank. The money bag won with 51 percent of the vote; neither of the other two candidates exceeded 30 percent. This new token was added to the set in 1999, bringing the number of tokens to eleven. Another 1998 campaign poll asked people which "Monopoly" token was their favorite. The most popular was the Race Car at 18%, followed by the Dog (16%), Cannon (14%) and Top Hat (10%). The least favorite in the poll was the Wheelbarrow at 3%, followed by the Thimble (7%) and the Iron (7%). The Cannon and the Horse and rider were both retired in 2000, with no new tokens taking their place.
Another retirement came in 2007, when the money bag was dropped, bringing the total token count down to eight again. In 2013, a similar promotional campaign was launched encouraging the public to vote on one of several possible new tokens to replace an existing one. The choices were a guitar, a diamond ring, a helicopter, a robot, and a cat. This campaign differed from the 1998 one in that one piece was retired and replaced with a new one, both chosen by a vote that ran on Facebook from January 8 to February 5, 2013. The cat took the top spot with 31% of the vote, over the iron, which was replaced. In January 2017, Hasbro put the regular edition's token lineup to another public vote, this time with a total of 64 options. The eight playable tokens at the time were the Battleship, Boot, Cat, Racecar, Scottie Dog, Thimble, Top hat, and Wheelbarrow. By March 17, 2017, Hasbro had retired three tokens, the thimble, the wheelbarrow, and the boot; these were replaced by a penguin, a Tyrannosaurus and a rubber duck. Over the years, Hasbro has released tokens for special or collector's editions of the game. One of the first was a Steam Locomotive, released only in Deluxe Editions. A Director's Chair token was released in 2011 in limited edition copies of "". Shortly after the 2013 Facebook voting campaign, a limited-edition Golden Token set was released exclusively at various national retailers, such as Target in the U.S. and Tesco in the U.K. The set contained the Battleship, Boot, Iron, Racecar, Scottie Dog, Thimble, Top hat and Wheelbarrow, as well as the iron's potential replacements: the cat, the guitar, the diamond ring, the helicopter, and the robot. Hasbro released a 64-token limited edition set in 2017, called the Monopoly Signature Token Collection, that included all of the candidates that were not chosen in the vote held that year.
Players take turns in order with the initial player determined by chance before the game. A typical turn begins with the rolling of the dice and advancing a piece clockwise around the board the corresponding number of squares. If a player rolls doubles, they roll again after completing that portion of their turn. A player who rolls three consecutive sets of doubles on one turn has been "caught speeding" and is immediately sent to jail instead of moving the amount shown on the dice for the third roll. A player who lands on or passes the Go space collects $200 from the bank. Players who land on either Income Tax or Luxury Tax pay the indicated amount to the bank. In older editions of the game, two options were given for Income Tax: either pay a flat fee of $200 or 10% of total net worth (including the current values of all the properties and buildings owned). No calculation could be made before the choice, and no latitude was given for reversing an unwise calculation. In 2008, the calculation option was removed from the official rules, and simultaneously the Luxury Tax was increased to $100 from its original $75. No reward or penalty is given for landing on Free Parking. Properties can only be developed once a player owns all the properties in that color group. They then must be developed equally. A house must be built on each property of that color before a second can be built. Each property within a group must be within one house level of all the others within that group. If a player lands on a Chance or Community Chest space, they draw the top card from the respective deck and follow its instructions. This may include collecting or paying money to the bank or another player or moving to a different space on the board. Two types of cards that involve jail, "Go to Jail" and "Get Out of Jail Free", are explained below. 
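The pre-2008 Income Tax decision reduces to simple arithmetic, even though the official rules barred players from computing their net worth before choosing. A minimal sketch of the break-even point, assuming the classic $200 flat fee versus 10% of net worth:

```python
def income_tax_options(net_worth: int) -> tuple:
    """Return (flat fee, 10% of net worth) for the pre-2008 Income Tax space."""
    return 200, net_worth // 10  # Monopoly values are whole dollars

def best_choice(net_worth: int) -> int:
    # The rational pick is whichever is smaller; below a net worth of
    # $2,000 the 10% option is cheaper, above it the flat $200 wins.
    return min(income_tax_options(net_worth))

print(best_choice(1500))  # 150 -- a player still at the starting cash level
print(best_choice(5000))  # 200 -- the flat fee is cheaper past $2,000
```

In actual pre-2008 play the choice had to be made blind, which is exactly why the rule generated unwise, irreversible picks; the 2008 revision removed the calculation option entirely.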
A player is sent to jail for landing on the "Go to Jail" space, drawing a "Go to Jail" card, or rolling three consecutive doubles, as described above. When a player is sent to jail, they move directly to the Jail space and their turn ends ("Do not pass Go. Do not collect $200."). If an ordinary dice roll (not one of the above events) ends with the player's token on the Jail corner, they are "Just Visiting" and can move ahead on their next turn without incurring any penalty. If a player is in jail, they do not take a normal turn and must either pay a fine of $50 to be released, use a Chance or Community Chest Get Out of Jail Free card, or attempt to roll doubles on the dice. If a player fails to roll doubles, they lose their turn. Failing to roll doubles for three consecutive turns requires the player to either pay the $50 fine or use a Get Out of Jail Free card, after which they move ahead according to the total rolled. Players in jail may not buy properties directly from the bank, since they are unable to move. They can engage in all other transactions, such as mortgaging properties, selling/trading properties to other players, buying/selling houses and hotels, collecting rent, and bidding on property auctions. A player who rolls doubles to leave jail does not roll again; however, if the player pays the fine or uses a card to get out and then rolls doubles, they do take another turn. If the player lands on an unowned property, whether street, railroad, or utility, they can buy the property for its listed purchase price. If they decline this purchase, the property is auctioned off by the bank to the highest bidder, including the player who declined to buy. If the property landed on is already owned and unmortgaged, they must pay the owner a given rent; the amount depends on whether the property is part of a set and on its level of development. When a player owns all the properties in a color group and none of them are mortgaged, they may develop them during their turn or in between other players' turns.
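The jail-escape procedure above can be sketched as a small decision function. This is an illustrative reading of the rules text, not official code: a jailed player may pay $50 or use a card up front, or attempt doubles for up to three turns, after which paying up (or a card) becomes mandatory and the player moves by the total just rolled.

```python
def jail_turn(turns_failed: int, die1: int, die2: int,
              pay_fine: bool = False, use_card: bool = False):
    """Resolve one turn in jail; returns (released, spaces_moved).

    turns_failed counts prior turns spent failing to roll doubles.
    Simplified sketch of the rules described in the text.
    """
    if pay_fine or use_card:
        # Paying the $50 fine (or using a card) frees the player,
        # who then moves normally by the roll.
        return True, die1 + die2
    if die1 == die2:
        # Rolling doubles releases the player (no extra roll is granted).
        return True, die1 + die2
    if turns_failed >= 2:
        # Third consecutive failure: the fine (or a card) is now
        # mandatory, and the player moves by the total just rolled.
        return True, die1 + die2
    return False, 0  # still in jail; the turn ends

assert jail_turn(0, 3, 3) == (True, 6)   # doubles: out, move 6
assert jail_turn(0, 2, 5) == (False, 0)  # no doubles: stay put
assert jail_turn(2, 2, 5) == (True, 7)   # third failure: pay and move
```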
Development involves buying miniature houses or hotels from the bank and placing them on the property spaces; this must be done uniformly across the group. That is, a second house cannot be built on any property within a group until all of them have one house. Once the player owns an entire group, they can collect double rent for any undeveloped properties within it. Although houses and hotels cannot be built on railroads or utilities, the given rent increases if a player owns more than one of either type. If there is a housing shortage (more demand for houses than remain in the bank), then a housing auction is conducted to determine who will get to purchase each house. Properties can also be mortgaged, although all developments on a monopoly must be sold before any property of that color can be mortgaged or traded. The player receives half the purchase price from the bank for each mortgaged property. This must be repaid with 10% interest to clear the mortgage. Houses and hotels can be sold back to the bank for half their purchase price. Players cannot collect rent on mortgaged properties and may not give improved property away to others; however, trading mortgaged properties is allowed. The player receiving the mortgaged property must immediately pay the bank the mortgage price plus 10% or pay just the 10% amount and keep the property mortgaged; if the player chooses the latter, they must pay the 10% again when they pay off the mortgage. A player who cannot pay what they owe is bankrupt and eliminated from the game. If the bankrupt player owes the bank, they must turn all their assets over to the bank, which then auctions off their properties (if they have any), except buildings. If the debt is owed to another player instead, all assets are given to that opponent, except buildings, which must be returned to the bank.
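The mortgage arithmetic above can be made concrete with a worked example. Using Boardwalk's $400 list price purely for illustration: mortgaging yields half the purchase price, lifting the mortgage repays that principal plus 10% interest, and a player receiving a mortgaged property in a trade owes the bank 10% immediately if they keep it mortgaged.

```python
def mortgage_value(price: int) -> int:
    """Mortgaging returns half the purchase price from the bank."""
    return price // 2

def unmortgage_cost(price: int) -> int:
    """Clearing a mortgage repays the principal plus 10% interest."""
    principal = mortgage_value(price)
    return principal + principal // 10

def transfer_fee(price: int) -> int:
    """A player receiving a mortgaged property in a trade owes the bank
    10% of the mortgage value at once to keep it mortgaged (and pays
    the 10% again when finally lifting the mortgage)."""
    return mortgage_value(price) // 10

# For a $400 property such as Boardwalk:
assert mortgage_value(400) == 200   # cash received when mortgaging
assert unmortgage_cost(400) == 220  # $200 principal plus $20 interest
assert transfer_fee(400) == 20      # immediate fee on a traded mortgage
```

Keeping a traded property mortgaged is therefore a deferral, not a discount: the new owner pays the 10% twice ($20 now and $220 later, versus $220 once).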
The new owner must either pay off any mortgages held by the bank on such properties received or pay a fee of 10% of the mortgaged value to the bank if they choose to leave the properties mortgaged. The winner is the remaining player left after all of the others have gone bankrupt. If a player runs out of money but still has assets that can be converted to cash, they can do so by selling buildings, mortgaging properties, or trading with other players. To avoid bankruptcy, the player must be able to raise enough cash to pay the full amount owed. A player cannot choose to go bankrupt; if there is any way to pay what they owe, even by returning all their buildings at a loss, mortgaging all their real estate, and giving up all their cash, they must do so, even knowing they are likely to go bankrupt the next time. From 1936, the rules booklet included with each Monopoly set contained a short section at the end providing rules for making the game shorter, including dealing two Title Deed cards to each player before starting the game, setting a time limit, or ending the game after the second player goes bankrupt. A later version of the rules included the card-dealing variant, along with the time-limit game, in the main rules booklet, omitting the second-bankruptcy method as a third short game. Many house rules have emerged for the game throughout its history. The best known is the "Free Parking jackpot rule", where all the money collected from Income Tax, Luxury Tax, Chance and Community Chest goes to the center of the board instead of the bank. Many people add $500 to start each pile of Free Parking money, guaranteeing a minimum payout. When a player lands on Free Parking, they may take the money. Another rule is that if a player lands directly on Go, they collect double the amount, or $400, instead of $200.
Since these rules provide additional cash to players regardless of their property management choices, they can lengthen the game considerably and limit the role of strategy. Video game and computer game versions of "Monopoly" have options where popular house rules can be used. In 2014, Hasbro determined five popular house rules by public Facebook vote, and released a "House Rules Edition" of the board game. Selected rules include a "Free Parking" house rule without additional money and a requirement that players traverse the board once before buying properties. According to Jim Slater in "The Mayfair Set", the Orange property group is the best to own because players land on them more often, as a result of the Chance cards "Go to Jail", "Advance to St. Charles Place (Pall Mall)", "Advance to Reading Railroad (Kings Cross Station)" and "Go Back Three Spaces". In all, during game play, Illinois Avenue (Trafalgar Square) (Red), New York Avenue (Vine Street) (Orange), B&O Railroad (Fenchurch Street Station), and Reading Railroad (Kings Cross Station) are the most frequently landed-upon properties. Mediterranean Avenue (Old Kent Road) (brown), Baltic Avenue (Whitechapel Road) (brown), Park Place (Park Lane) (blue), and Oriental Avenue (The Angel, Islington) (light blue) are the least-landed-upon properties. Among the property groups, the Railroads are most frequently landed upon, as no other group has four properties; Orange has the next highest frequency, followed by Red. One common criticism of "Monopoly" is that although it has carefully defined termination conditions, it may take an unlimited amount of time to reach them. Edward P. Parker, a former president of Parker Brothers, is quoted as saying, "We always felt that forty-five minutes was about the right length for a game, but "Monopoly" could go on for hours. Also, a game was supposed to have a definite end somewhere. In "Monopoly" you kept going around and around."
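The landing-frequency claims can be illustrated with a deliberately simplified Monte Carlo walk: two dice, a 40-square loop, and only the Go to Jail redirect (square 30 sends the token to square 10), ignoring Chance cards, doubles, and jail-escape rules. Even this stripped-down model shows Jail accumulating the most visits and the squares six to nine spaces beyond it, where the orange group sits at positions 16, 18, and 19, outdrawing squares far from Jail.

```python
import random

def landing_counts(rolls: int = 200_000, seed: int = 1) -> list:
    """Count landings per square in a simplified Monopoly random walk.

    Models only two-dice movement and the Go-to-Jail redirect; Chance
    cards and jail-escape rules are omitted, so this understates the
    real effect, which Chance cards amplify further.
    """
    rng = random.Random(seed)
    counts = [0] * 40
    pos = 0
    for _ in range(rolls):
        pos = (pos + rng.randint(1, 6) + rng.randint(1, 6)) % 40
        if pos == 30:        # the "Go to Jail" square...
            pos = 10         # ...sends the token straight to Jail
        counts[pos] += 1
    return counts

counts = landing_counts()
assert counts[30] == 0            # no token ever rests on Go to Jail
assert counts[10] == max(counts)  # Jail is the most-visited square
# Orange squares sit 6-9 spaces past Jail, in the sweet spot of
# two-dice sums, so they outdraw squares far from Jail:
assert counts[16] > counts[4]
```

The two most common dice totals, 7 and 6, carry a freshly released player onto New York Avenue and St. James Place respectively, which is the mechanism behind Slater's advice above.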
Hasbro states that the longest game of "Monopoly" ever played lasted 70 days. Numerous add-ons have been produced for "Monopoly", sold independently from the game both before and after its commercialization, with three official ones discussed below: The original "Stock Exchange" add-on was published by Capitol Novelty Co. of Rensselaer, New York in early 1936. It was marketed as an add-on for "Monopoly", "Finance", or "Easy Money" games. Shortly after Capitol Novelty introduced "Stock Exchange", Parker Brothers bought it from them and then marketed its own, slightly redesigned version as an add-on specifically for their "new" "Monopoly" game; the Parker Brothers version was available in June 1936. The Free Parking square is covered over by a new Stock Exchange space, and the add-on included three Chance and three Community Chest cards directing the player to "Advance to Stock Exchange". The "Stock Exchange" add-on was later redesigned and re-released in 1992 under license by Chessex, this time including a larger number of new Chance and Community Chest cards. This version included ten new Chance cards (five "Advance to Stock Exchange" and five other related cards) and eleven new Community Chest cards (five "Advance to Stock Exchange" and six other related cards; the regular Community Chest card "From sale of stock you get $45" is removed from play when using these cards). Many of the original rules applied to this new version (in fact, one optional play choice allows for playing in the original form by only adding the "Advance to Stock Exchange" cards to each deck). A "Monopoly Stock Exchange Edition" was released in 2001 (although not in the U.S.), this time adding an electronic calculator-like device to keep track of the complex stock figures. This was a full edition, not just an add-on, that came with its own board, money and playing pieces.
Properties on the board were replaced by companies on which shares could be floated, and offices and home offices (instead of houses and hotels) could be built. Playmaster, another official add-on, released in 1982, is an electronic device that keeps track of all player movement and dice rolls as well as what properties are still available. It then uses this information to call random auctions and mortgages, making it easier to free up cards of a color group. It also plays eight short tunes when key game functions occur; for example, when a player lands on a railroad it plays "I've Been Working on the Railroad", and a police car's siren sounds when a player goes to Jail. In 2009, Hasbro released two minigames that can be played as stand-alone games or combined with the "Monopoly" game. In "Get Out of Jail", the goal is to manipulate a spade under a jail cell to flick out various colored prisoners. The game can be used as an alternative to rolling doubles to get out of jail. In "Free Parking", players attempt to balance taxis on a wobbly board. The "Free Parking" add-on can also be used with the Monopoly game. When a player lands on Free Parking, the player can take the Taxi Challenge, and if successful, can move to any space on the board. First included in Winning Moves' "Monopoly: The Mega Edition" variant, the Speed Die is a third, six-sided die that is rolled with the other two and accelerates game-play when in use. In 2007, Parker Brothers began releasing its standard version (also called the Speed Die Edition) of "Monopoly" with the same die (originally in blue, later in red). Its faces are: 1, 2, 3, two "Mr. Monopoly" sides, and a bus. The numbers behave as normal, adding to the other two dice, unless a "triple" is rolled, in which case the player can move to any space on the board. If "Mr. Monopoly" is rolled while there are unowned properties, the player advances forward to the nearest one. Otherwise, the player advances to the nearest property on which rent is owed.
In the "Monopoly: Mega Edition", rolling the bus allows the player to take the regular dice move, then either take a bus ticket or move to the nearest draw card space. The Mega rules specify that triples do not count as doubles for going to jail, as the player does not roll again. Used in a regular edition, the bus (properly "get off the bus") allows the player to use only one of the two numbered dice or the sum of both; thus a roll of 1, 5, and bus would let the player choose between moving 1, 5, or 6 spaces. The Speed Die is used throughout the game in the "Mega Edition", while in the "Regular Edition" it is used by any player who has passed GO at least once. In these editions it remains optional, although use of the Speed Die was made mandatory in the 2009 U.S. and World Monopoly Championships, as well as the 2015 World Championship. Parker Brothers and its licensees have also sold several spin-offs of "Monopoly". These are not add-ons, as they do not function as an addition to the "Monopoly" game, but are simply additional games with the flavor of "Monopoly". Besides the many variants of the actual game (and the "Monopoly Junior" spin-off) released in either video game or computer game formats (e.g., Commodore 64, Macintosh, Windows-based PC, Game Boy, Game Boy Advance, Nintendo Entertainment System, iPad, Genesis, Super NES, etc.), two spin-off computer games have been created. An electronic hand-held version was marketed from 1997 to 2001. "Monopoly"-themed slot machines and lotteries have been produced by WMS Gaming in conjunction with International Game Technology for land-based casinos. WagerWorks, which holds the online rights to "Monopoly", has created online "Monopoly"-themed games. London's Gamesys Group has also developed "Monopoly"-themed gambling games. The British quiz machine brand itbox also supports a "Monopoly" trivia and chance game. There was also a live, online version of "Monopoly".
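The regular-edition Speed Die options described above lend themselves to a short sketch. Representing a roll as two numbered dice plus a speed-die face (the face names here are illustrative), the legal move distances are:

```python
def speed_die_moves(d1: int, d2: int, speed) -> set:
    """Possible move distances for a regular-edition Speed Die roll.

    speed is 1-3, 'bus', or 'mr_monopoly' (names assumed for this sketch).
    Triples (move anywhere) and the Mr. Monopoly advance-to-property step
    are resolved separately; this covers only the basic distances.
    """
    if speed == 'bus':
        # "Get off the bus": use one numbered die or the sum of both.
        return {d1, d2, d1 + d2}
    if speed == 'mr_monopoly':
        # Move the two numbered dice, then advance per the Mr. Monopoly
        # rule (nearest unowned property, else nearest rent-owed one).
        return {d1 + d2}
    return {d1 + d2 + speed}  # a numbered face simply adds to the total

# The example from the rules text: 1, 5, and bus gives a choice of 1, 5, or 6.
assert speed_die_moves(1, 5, 'bus') == {1, 5, 6}
assert speed_die_moves(2, 3, 1) == {6}
```

The bus face is the only roll in standard "Monopoly" play that gives the mover a choice of distances, which is what makes the Speed Die strategically interesting rather than just faster.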
Six painted taxis drive around London picking up passengers. When the taxis reach their final destination, the region of London that they are in is displayed on the online board. This version takes far longer to play than board-game "Monopoly", with one game lasting 24 hours. Results and position are sent to players via e-mail at the conclusion of the game. The "McDonald's Monopoly" game is a sweepstakes advertising promotion of McDonald's and Hasbro that has been offered in Argentina, Australia, Austria, Brazil, Canada, France, Germany, Hong Kong, Ireland, the Netherlands, New Zealand, Poland, Portugal, Romania, Russia, Singapore, South Africa, Spain, Switzerland, Taiwan, the United Kingdom and the United States. A short-lived "Monopoly" game show aired on Saturday evenings from June 16 to September 1, 1990, on ABC. The show was produced by Merv Griffin and hosted by Mike Reilly. The show was paired with a summer-long "Super Jeopardy!" tournament, which also aired during this period on ABC. From 2010 to 2014, The Hub aired the game show "Family Game Night" with Todd Newton. For the first two seasons, teams earned cash in the form of "Monopoly Crazy Cash Cards" from the "Monopoly Crazy Cash Corner", which were then inserted into the "Monopoly Crazy Cash Machine" at the end of the show. In addition, beginning with Season 2, teams won "Monopoly Party Packages" for winning the individual games. For Season 3, there was a Community Chest. Each card on Mr. Monopoly had a combination of three colors. Teams used the combination card to unlock the chest. If it was the right combination, they advanced to the Crazy Cash Machine for a brand-new car. For the show's fourth season, a new game was added called Monopoly Remix, featuring Park Place and Boardwalk, as well as Income Tax and Luxury Tax. To honor the game's 80th anniversary, a game show called "Monopoly Millionaires' Club" was launched in syndication on March 28, 2015.
It was connected with a multi-state lottery game of the same name and hosted by comedian Billy Gardell from "Mike & Molly". The game show was filmed at the Rio All Suite Hotel and Casino and at Bally's Las Vegas in Las Vegas, with players having a chance to win up to $1,000,000. However, the lottery game connected with the game show (which provided the contestants) went through multiple complications and variations, and the game show last aired at the end of April 2016. In November 2008, Ridley Scott was announced to direct Universal Pictures' film version of the game, based on a script written by Pamela Pettler. The film was being co-produced by Hasbro's Brian Goldner as part of a deal with Hasbro to develop movies based on the company's line of toys and games. The story was being developed by author Frank Beddor. However, Universal eventually halted development in February 2012, then opted out of the agreement, and the rights reverted to Hasbro. In October 2012, Hasbro announced a new partnership with production company Emmett/Furla Films, and said they would develop a live-action version of "Monopoly", along with Action Man and Hungry Hungry Hippos. Emmett/Furla/Oasis later dropped out of the production of this satire version, which was to be directed by Ridley Scott. In July 2015, Hasbro announced that Lionsgate would distribute a "Monopoly" film, written by Andrew Niccol as a family-friendly action adventure, co-financed and produced by Lionsgate and Hasbro's Allspark Pictures. In January 2019, it was announced that Allspark Pictures would now be producing an untitled "Monopoly" film in conjunction with Kevin Hart's company HartBeat Productions and The Story Company. Hart is attached to star in the film and Tim Story is attached to direct. No logline or writer for this iteration of the long-gestating project has been announced.
The documentary "", covering the history and players of the game, won an Audience Award for Best Documentary at the 2010 Anaheim International Film Festival. The film played theatrically in the U.S. beginning in March 2011 and was released on Amazon and iTunes on February 14, 2012. The television version of the film won four regional Emmy Awards from the Pacific Southwest Chapter of NATAS. The film is directed by Kevin Tostado and narrated by Zachary Levi. Until 1999, U.S. entrants had to win a state/district/territory competition to represent that state/district/territory at the national championship, held every four years. The 1999 U.S. National Tournament had 50 contestants: 49 state champions (Oklahoma was not represented) and the reigning national champion. Qualifying for the National Championship has been conducted online since 2003. For the 2003 Championship, qualification was limited to the first fifty people who correctly completed an online quiz. Out of concern that such methods might not always ensure a competition of the best players, qualifying for the 2009 Championship was expanded to include an online multiple-choice quiz (a score of 80% or better was required to advance), followed by an online five-question essay test, followed by a two-game online tournament at Pogo.com. The process was to have produced a field of 23 plus one: Matt McNally, the 2003 national champion, who received a bye and was not required to qualify. However, at the end of the online tournament, there was an eleven-way tie for the last six spots, and the decision was made to invite all of those who had tied for them. In fact, two players who had tied and would otherwise have been eliminated, Dale Crabtree of Indianapolis, Indiana, and Brandon Baker of Tuscaloosa, Alabama, played in the final game and finished third and fourth respectively. The 2009 "Monopoly" U.S. National Championship was held on April 14–15 in Washington, D.C. 
In his first tournament ever, Richard Marinaccio, an attorney from Sloan, New York (a suburb of Buffalo), prevailed over a field that included two previous champions to be crowned the 2009 U.S. National Champion. In addition to the title, Marinaccio took home $20,580—the amount of money in the bank of the board game—and competed in the 2009 World Championship in Las Vegas, Nevada, on October 21–22, where he finished in third place. In 2015, Hasbro used a competition held solely online to determine who would be the U.S. representative at the 2015 "Monopoly" World Championship. Interested players took a twenty-question quiz on "Monopoly" strategy and rules and submitted a hundred-word essay on how to win a "Monopoly" tournament. Hasbro then selected Brian Valentine of Washington, D.C., to be the U.S. representative. Hasbro conducts a worldwide "Monopoly" tournament. The first "Monopoly" World Championships took place at Grossinger's Resort in New York in November 1973, but they did not include competitors from outside the United States until 1975. The tournament has been aired in the United States by ESPN. In 2009, forty-one players competed for the title of "Monopoly" World Champion and a cash prize of $20,580 (USD)—the total amount of Monopoly money in the set used in the tournament. The most recent World Championship took place in September 2015 in Macau, where Italian Nicolò Falcone defeated the defending world champion and players from twenty-six other countries. Monopoly Dreams at The Peak in Hong Kong has stated that it will be the site of the next world championship in March 2021. Because "Monopoly" evolved in the public domain before its commercialization, it has seen many variant games. The game is licensed in 103 countries and printed in thirty-seven languages. Most of the variants are exact copies of the "Monopoly" games with the street names replaced with locales from a particular town, university, or fictional place. 
National boards have been released as well. Over the years, many specialty "Monopoly" editions, licensed by Parker Brothers/Hasbro and produced by them or their licensees (including USAopoly and Winning Moves Games), have been sold to local and national markets worldwide. Two well-known "families" of -opoly-like games have also been produced without licenses from Parker Brothers/Hasbro. Unlicensed editions include "BibleOpoly", "HomoNoPolis" and "Petropolis", among others, and there have been a large number of localized editions. Unauthorized, unlicensed games based on "Monopoly" include "Ghettopoly", "Middopoly" and "Memeopolis" (an Android app). In 2008, Hasbro released "Monopoly Here and Now: The World Edition". This world edition features top locations of the world, decided by votes over the Internet; the result of the voting was announced on August 20, 2008. Of these, Gdynia is especially notable, as it is by far the smallest city featured and won the vote thanks to a spontaneous, large-scale mobilization of support started by its citizens. The new game uses its own currency unit, the Monopolonian (a game-based take on the Euro; designated by M), in millions and thousands. There is no dark purple color-group, as it is replaced by brown, as in the European version of the game. It is also notable that three cities (Montreal, Toronto, and Vancouver) are from Canada and three others (Beijing, Hong Kong, and Shanghai) are from the People's Republic of China; no other country is represented by more than one city. Of the 68 cities listed on Hasbro Inc.'s website for the vote, Jerusalem was chosen as one of the 20 cities to be featured in the newest "Monopoly" World Edition. 
Before the vote took place, a Hasbro employee in the London office eliminated the country signifier "Israel" after the city, in response to pressure from pro-Palestinian advocacy groups. After the Israeli government protested, Hasbro Inc. issued a statement that read: "It was a bad decision, one that we rectified relatively quickly. This is a game. We never wanted to enter into any political debate. We apologize to our "Monopoly" fans." A similar online vote was held in early 2015 for an updated version of the game, to be released worldwide in late 2015. Lima, Peru, won the vote and holds the Boardwalk space. Hasbro sells a "Deluxe Edition", which is mostly identical to the classic edition but has wooden houses and hotels and gold-toned tokens, including one token in addition to the standard eleven: a railroad locomotive. Other additions to the "Deluxe Edition" include a card carousel, which holds the title deed cards, and money printed with two colors of ink. In 1978, retailer Neiman Marcus manufactured and sold an all-chocolate edition of "Monopoly" through its "Christmas Wish Book" for that year. The entire set was edible, including the money, dice, hotels, properties, tokens and playing board, and retailed for $600. In 2000, the FAO Schwarz store in New York City sold a custom version called "One-Of-A-Kind Monopoly" for $100,000. This special edition comes in a locking attaché case made with Napolino leather and lined in suede. The "Guinness Book of World Records" states that the most expensive "Monopoly" set ever produced is worth $2,000,000 and made of 23-carat gold, with rubies and sapphires atop the chimneys of the houses and hotels. This set was designed by artist Sidney Mobell to honor the game's 50th anniversary in 1985, and is now in the Smithsonian Institution. "Wired" magazine believes "Monopoly" is a poorly designed game. 
Former Wall Streeter Derk Solko explains, "Monopoly has you grinding your opponents into dust. It's a very negative experience. It's all about cackling when your opponent lands on your space and you get to take all their money." Most of the three-to-four-hour average playing time is spent waiting for other players to play their turn. "Board game enthusiasts disparagingly call this a 'roll your dice, move your mice' format." The hobby-gaming community BoardGameGeek is especially critical. User reviews of Monopoly rank the game among the 20 worst games out of nearly 10,000 ranked in the database with an average rating of 4.36 out of 10 from over 25,000 reviews. 
Max Steiner Maximilian Raoul Steiner (May 10, 1888 – December 28, 1971) was an Austrian-born American music composer for theatre and films, as well as a conductor. He was a child prodigy who conducted his first operetta when he was twelve and became a full-time professional, either composing, arranging, or conducting, when he was fifteen. Steiner worked in England, then on Broadway, and in 1929 he moved to Hollywood, where he became one of the first composers to write music scores for films. He is referred to as "the father of film music", as Steiner played a major part in creating the tradition of writing music for films, along with composers Dimitri Tiomkin, Franz Waxman, Erich Wolfgang Korngold, Alfred Newman, Bernard Herrmann, and Miklós Rózsa. Steiner composed over 300 film scores with RKO Pictures and Warner Bros., and was nominated for 24 Academy Awards, winning three: "The Informer" (1935); "Now, Voyager" (1942); and "Since You Went Away" (1944). Besides his Oscar-winning scores, some of Steiner's popular works include "King Kong" (1933), "Little Women" (1933), "Jezebel" (1938), and "Casablanca" (1942), though he did not compose its love theme, "As Time Goes By". In addition, Steiner scored "The Searchers" (1956), "A Summer Place" (1959), and "Gone with the Wind" (1939), which ranked second on the AFI's list of best American film scores, and is the film score for which he is best known. He was also the first recipient of the Golden Globe Award for Best Original Score, which he won for his score for "Life with Father". Steiner was a frequent collaborator with some of the best known film directors in history, including Michael Curtiz, John Ford, and William Wyler, and scored many films starring Errol Flynn, Bette Davis, Humphrey Bogart, and Fred Astaire. Many of his film scores are available as separate soundtrack recordings. 
Max Steiner was born on May 10, 1888, in Austria-Hungary, the only child in a wealthy business and theatrical family of Jewish heritage. He was named after his paternal grandfather, Maximilian Steiner (1839–1880), who was credited with first persuading Johann Strauss II to write for the theater and was the influential manager of Vienna's historic Theater an der Wien. His mother was Marie Josefine/Mirjam (Hasiba); his Hungarian-Jewish father (1858–1944, born in Temesvár, Kingdom of Hungary, Austrian Empire) was a Viennese impresario, carnival exposition manager, and inventor, responsible for building the Wiener Riesenrad. His father encouraged Steiner's musical talent and allowed him to conduct an American operetta, "The Belle of New York", at the age of twelve, which earned Steiner early recognition from the operetta's author, Gustave Kerker. Steiner's mother Marie was a dancer in stage productions put on by his grandfather when she was young, but later became involved in the restaurant business. His godfather was the composer Richard Strauss, who strongly influenced Steiner's future work. Steiner often credited his family for inspiring his early musical abilities. As early as age six, Steiner was taking three or four piano lessons a week, yet often became bored with them. Because of this, he would practice improvising on his own, his father encouraging him to write his music down. Steiner cited his early improvisation as an influence on his taste in music, particularly his interest in the music of Claude Debussy, which was avant-garde for the time. In his youth, he began his composing career with marches for regimental bands and hit songs for a show put on by his father. His parents sent Steiner to the Vienna University of Technology, but he expressed little interest in scholastic subjects. 
He enrolled in the Imperial Academy of Music in 1904, where, due to his precocious musical talents and private tutoring by Robert Fuchs and Gustav Mahler, he completed a four-year course in only one year, winning a gold medal from the academy at the age of fifteen. He studied various instruments including piano, organ, violin, double bass, and trumpet. His preferred and best instrument was the piano, but he acknowledged the importance of being familiar with what the other instruments could do. He also had courses in harmony, counterpoint, and composition. Along with Mahler and Fuchs, he cited Felix Weingartner and Edmund Eysler as his teachers. The music of Edmund Eysler was an early influence on Max Steiner; however, one of his first introductions to operettas came from Franz Lehár, who worked for a time as a military bandmaster for Steiner's father's theatre. Steiner paid tribute to Lehár through an operetta modeled after Lehár's "Die lustige Witwe", which Steiner staged in 1907 in Vienna. Eysler was well known for his operettas, though, as critiqued by Richard Traubner, the libretti were poor, with a fairly simple style and music that often relied too heavily on the Viennese waltz style. As a result, when Steiner started writing pieces for the theater, he was interested in writing libretti as his teacher had, but had minimal success. However, many of his future film scores, such as "Dark Victory" (1939), "In This Our Life" (1941) and "Now, Voyager" (1942), featured frequent waltz melodies as influenced by Eysler. According to Kate Daubney, author of "Max Steiner's 'Now, Voyager'", Steiner may also have been influenced by Felix Weingartner, who conducted the Vienna Opera from 1908 to 1911. Although he took composition classes from Weingartner, Steiner had wanted since boyhood to be a great conductor. Between 1907 and 1914, Steiner traveled between Britain and Europe to work on theatrical productions. 
Steiner first entered the world of professional music when he was fifteen. He wrote and conducted the operetta "The Beautiful Greek Girl", but his father refused to stage it, saying it was not good enough. Steiner took the composition to competing impresario Carl Tuschl, who offered to produce it. Much to Steiner's pleasure, it ran at the Orpheum Theatre for a year. This led to opportunities to conduct other shows in various cities around the world, including Moscow and Hamburg. Upon returning to Vienna, Steiner found his father in bankruptcy. Having difficulty finding work, he moved to London (in part to follow an English showgirl he had met in Vienna). In London, he was invited to conduct Lehár's "The Merry Widow". He stayed in London for eight years, conducting musicals at Daly's Theatre, the Adelphi, the Hippodrome, the London Pavilion and the Blackpool Winter Gardens. Steiner married Beatrice Stilt on September 12, 1912; the exact date of their divorce is unknown. In England, Steiner wrote and conducted theater productions and symphonies, but the outbreak of World War I in 1914 led to his internment as an enemy alien. Fortunately, he was befriended by the Duke of Westminster, who was a fan of his work, and was given exit papers to go to America, although his money was impounded. He arrived in New York City in December 1914 with only $32. Unable to find work, he resorted to menial jobs such as copyist for Harms Music Publishing, which quickly led to jobs orchestrating stage musicals. In New York, Steiner quickly acquired employment and worked for fifteen years as a musical director, arranger, orchestrator, and conductor of Broadway productions. These productions included operettas and musicals written by Victor Herbert, Jerome Kern, Vincent Youmans, and George Gershwin, among others. Steiner's credits include "George White's Scandals" (1922) (director), "Peaches" (1923) (composer), and "Lady, Be Good" (1924) (conductor and orchestrator). 
Twenty-seven-year-old Steiner became Fox Film's musical director in 1915. At the time there was no specially written music for films, and Steiner pitched studio founder William Fox his idea of writing an original score for "The Bondman" (1916). Fox agreed, and they put together a 110-piece orchestra to accompany the screenings. During his time working on Broadway, Steiner married Audree van Lieu on April 27, 1927; they divorced on December 14, 1933. In 1927, Steiner orchestrated and conducted Harry Tierney's "Rio Rita". Tierney himself later requested that RKO Pictures in Hollywood hire Steiner to work in its music production department. William LeBaron, RKO's head of production, traveled to New York to watch Steiner conduct and was impressed by Steiner and his musicians, who each played several instruments. Eventually, Steiner became a Hollywood asset. Steiner's final production on Broadway was "Sons O' Guns" in 1929. By request of Harry Tierney, RKO hired Max Steiner as an orchestrator, and his first film job consisted of composing music for the main and end titles and occasional "on screen" music. According to Steiner, the general opinion of filmmakers at the time was that film music was a "necessary evil" that would often slow a film's production and release after it was shot. Steiner's first assignment was the film "Dixiana"; however, after a while RKO decided to let him go, feeling they were not using him. His agent found him a job as musical director on an operetta in Atlantic City. Before he left RKO, they offered him a month-to-month contract as head of the music department with the promise of more work in the future, and he agreed. Because the few composers in Hollywood were unavailable, Steiner composed his first film score, for "Cimarron". The score was well received and was partially credited for the success of the film. He turned down several offers to teach film scoring technique in Moscow and Peking in order to stay in Hollywood. 
In 1932, Steiner was asked by David O. Selznick, the new producer at RKO, to add music to "Symphony of Six Million." Steiner composed a short segment; Selznick liked it so much that he asked him to compose the theme and underscoring for the entire picture. Selznick was proud of the film, feeling it gave a realistic view of Jewish family life and tradition. "Music until then had not been used very much for underscoring." Steiner "pioneered the use of original composition as background scoring for films." The successful scoring in "Symphony of Six Million" was a turning point for Steiner's career and for the film industry. After the underscoring of "Symphony of Six Million," a third to half of the success of most films was "attributed to the extensive use of music." The score for "King Kong" (1933) became Steiner's breakthrough and represented a paradigm shift in the scoring of fantasy and adventure films. The score was an integral part of the film, because it added realism to an unrealistic film plot. The studio's bosses were initially skeptical about the need for an original score; however, since they disliked the film's contrived special effects, they let Steiner try to improve the film with music. The studio suggested using old tracks in order to save on the cost of the film; however, "King Kong" producer Merian C. Cooper asked Steiner to score the film and said he would pay for the orchestra. Steiner took advantage of this offer and used an eighty-piece orchestra, explaining the film "was made for music." According to Steiner, "it was the kind of film that allowed you to do anything and everything, from weird chords and dissonances to pretty melodies." Steiner additionally scored the wild tribal music which accompanied the ceremony to sacrifice Ann to Kong. He wrote the score in two weeks and the music recording cost around $50,000. The film became a "landmark of film scoring," as it showed the power music has to manipulate audience emotions. 
Steiner constructed the score on the Wagnerian leitmotif principle, which calls for special themes for leading characters and concepts. The theme of the monster is recognizable as a descending three-note chromatic motif. After the death of King Kong, the Kong theme and the Fay Wray theme converge, underlining the "Beauty and the Beast" type relationship between the characters. The music in the film's finale helped express the tender feelings Kong had for the woman without the film having to state it explicitly. The majority of the music is heavy and loud, but some of it is lighter. For example, when the ship sails into Skull Island, Steiner keeps the music calm and quiet, with a small amount of texture in the harps, to help characterize the ship as it cautiously moves through the misty waters. Steiner received a bonus for his work, as Cooper credited 25 percent of the film's success to the film score. Before he died, Steiner admitted "King Kong" was one of his favorite scores. "King Kong" quickly made Steiner one of the most respected names in Hollywood. He continued as RKO's music director for two more years, until 1936. Steiner married Louise Klos, a harpist, in 1936; they had a son, Ron, and divorced in 1946. Steiner composed, arranged and conducted another 55 films, including most of Fred Astaire and Ginger Rogers's dance musicals. Additionally, Steiner wrote a sonata used in Katharine Hepburn's first film, "A Bill of Divorcement" (1932). RKO producers, including Selznick, often came to him when they had problems with films, treating him as if he were a music "doctor." Steiner was asked to compose a score for "Of Human Bondage" (1934), which originally lacked music; he added musical touches to significant scenes. Director John Ford called on Steiner to score his film "The Lost Patrol" (1934), which lacked tension without music. 
John Ford hired Steiner again to compose for his next film, "The Informer" (1935), before Ford began production. Ford even asked his screenwriter to meet with Steiner during the writing phase to collaborate. This was unusual for Steiner, who typically refused to compose a score from anything earlier than a rough cut of the film. Because Steiner scored the music before and during film production, Ford would sometimes shoot scenes in synchronization with the music Steiner composed, rather than the usual practice of film composers synchronizing music to the film's scenes. Consequently, Steiner directly influenced the development of the protagonist, Gypo. Victor McLaglen, who played Gypo, rehearsed his walking in order to match the fumbling leitmotif Steiner had created for the character. This unique production practice was successful; the film was nominated for six Academy Awards and won four, including Steiner's first Academy Award for Best Score. This score exemplified Steiner's ability to encompass the essence of a film in a single theme. The main title of the film's soundtrack has three specific aspects. First, the heavy, march-like theme helps describe the oppressive military and main character Gypo's inevitable downfall. Second, the character's theme is stern and sober and puts the audience into the correct mood for the film. Finally, the theme contains some Irish folk song influences, which serve to better characterize the film's Irish historical setting. The theme is not heard consistently throughout the film and serves rather as a framework for the other melodic motifs heard in different parts of the film. The score is made up of many different themes which characterize the different personages and situations in the film. Steiner helps portray the genuine love Katie has for the main character Gypo. In one scene, Katie calls after Gypo as a solo violin echoes the falling cadence of her voice. 
In another scene, Gypo sees an advertisement for a steamship to America and, instead of the advertisement, sees himself holding Katie's hand on the ship. Wedding bells are heard along with organ music, and he sees Katie wearing a veil and holding a bouquet. In a later scene, the Katie theme plays as a drunk Gypo sees a beautiful woman at the bar, insinuating he has mistaken her for Katie. Other musical themes in the score include an Irish folk song on French horns for Frankie McPhilip, a warm string theme for Dan Gallagher and Mary McPhillip, and a sad theme on English horn with harp for the blind man. The most important motif in the film is the theme of betrayal relating to how Gypo betrays his friend Frankie: the "blood-money" motif. The theme is heard as the Captain throws the money on the table after Frankie is killed. It is a four-note descending tune on harp, its first interval a tritone. As the men are deciding who will be the executioner, the motif is repeated quietly and perpetually to establish Gypo's guilt, synchronized with the dripping of water in the prison. At the end of the film, the theme is played at fortissimo as Gypo staggers into the church, the climax ending with a clap of the cymbals indicating Gypo's penitence, his guilt no longer needing to be established. Silent film mannerisms are still seen in Steiner's composition, such as when actions or consequences are accompanied by a "sforzato" chord immediately before them, followed by silence. An example occurs when Frankie confronts Gypo, who is looking at the poster offering a reward for his own arrest. Steiner uses minor "Mickey Mousing" techniques in the film. Through this score, Steiner showed the potential of film music, as he attempted to show the internal struggles of Gypo's mind through the mixing of different themes such as the Irish "Circassian Circle," the "blood-money" motif, and Frankie's theme. 
The score concludes with an original "Sancta Maria" by Steiner. Some writers have erroneously referred to the cue as featuring Franz Schubert's "Ave Maria". In 1937, Steiner was hired by Frank Capra to conduct Dimitri Tiomkin's score for "Lost Horizon" (1937) as a safeguard in case the inexperienced Tiomkin's score needed rewriting; however, according to Hugo Friedhofer, Tiomkin had specifically asked for Steiner, preferring him over the film studio's then music director. Producer David O. Selznick set up his own production company in 1936 and recruited Steiner to write the scores for his next three films. In April 1937, Steiner left RKO and signed a long-term contract with Warner Bros.; he would, however, continue to work for Selznick. The first film he scored for Warner Bros. was "The Charge of the Light Brigade" (1936). Steiner became a mainstay at Warner Bros., scoring 140 of their films over the next 30 years, many featuring Hollywood stars such as Bette Davis, Errol Flynn, Humphrey Bogart, and James Cagney. Steiner frequently worked with Hugo Friedhofer, who was hired as an orchestrator for Warner Bros.; Friedhofer would orchestrate more than 50 of Steiner's pieces during his career. In 1938, Steiner wrote and arranged the first "composed for film" piece, "Symphony Moderne", which a character plays on the piano and which later appears as a theme in "Four Daughters" (1938) and is performed by a full orchestra in "Four Wives" (1939). In 1939, Steiner was borrowed from Warner Bros. by Selznick to compose the score for "Gone with the Wind" (1939), which became one of Steiner's most notable successes. Steiner was the only composer Selznick considered for the film. Steiner was given only three months to complete the score, despite composing twelve other film scores in 1939, more than in any other year of his career. 
Because Selznick was concerned Steiner wouldn't have enough time to finish the score, he had Franz Waxman write an additional score in case Steiner did not finish. To meet the deadline, Steiner sometimes worked 20 hours straight, assisted by doctor-administered Benzedrine to stay awake. When the film was released, it had the longest film score ever composed: nearly three hours, consisting of 16 main themes and nearly 300 musical segments. Due to the score's length, Steiner had help from four orchestrators and arrangers, including Heinz Roemheld. Selznick had asked Steiner to use only pre-existing classical music to help cut down on cost and time, but Steiner tried to convince him that filling the picture with swatches of classic concert music or popular works would not be as effective as an original score, which could be used to heighten the emotional content of scenes. Steiner ignored Selznick's wishes and composed an entirely new score. Selznick's opinion about original scoring may have changed due to the overwhelming reaction to the film, nearly all of which was underscored by Steiner's music; a year later, he even wrote a letter emphasizing the value of original film scores. The best known of Steiner's themes for the score is the "Tara" theme for the O'Hara family plantation. Steiner explained that Scarlett's deep-founded love for her home is why "the 'Tara' theme begins and ends with the picture and permeates the entire score". The film went on to win ten Academy Awards, although not Best Original Score, which instead went to Herbert Stothart for "The Wizard of Oz". The score of "Gone With The Wind" is ranked by the AFI as the second-greatest American film score of all time. "Now, Voyager" would be the film score for which Steiner won his second Academy Award. 
Kate Daubney attributes the success of this score to Steiner's ability to "[balance] the scheme of thematic meaning with the sound of the music." Steiner used motifs and thematic elements in the music to emphasize the emotional development of the narrative. After finishing "Now, Voyager" (1942), Steiner was hired to score "Casablanca" (1942). Steiner would typically wait until a film was edited before scoring it, and after watching "Casablanca", he decided the song "As Time Goes By" by Herman Hupfeld wasn't an appropriate addition to the movie and wanted to replace it with a song of his own composition; however, Ingrid Bergman had just cut her hair short in preparation for filming "For Whom the Bell Tolls" (1943), so she couldn't re-film the scenes with Steiner's song. Stuck with "As Time Goes By", Steiner embraced the song and made it the central theme of his score. Steiner's score for "Casablanca" was nominated for the Academy Award for Best Score, losing to "The Song of Bernadette" (1943). Steiner received his third and final Oscar in 1944 for "Since You Went Away" (1944). Steiner had first composed the theme from "Since You Went Away" while helping counterbalance Franz Waxman's moody score for "Rebecca"; producer David O. Selznick liked the theme so much that he asked Steiner to include it in "Since You Went Away". In 1947, Steiner married Leonette Blair. Steiner also found success with the film noir genre; "The Big Sleep", "Mildred Pierce" and "The Letter" were his best film noir scores of the 1940s. Set in Singapore, "The Letter" opens its tale of murder with the loud main musical theme during the credits, which sets the tense and violent mood of the film. The main theme characterizes Leslie, the main character, by her tragic passion. It is heard in the confrontation between Leslie and the murdered man's wife in the Chinese shop. 
Steiner portrays this scene through the jangling of wind chimes, which crescendos as the wife emerges through opium smoke. The jangling continues until the wife asks Leslie to take off her shawl, after which the theme blasts out, indicating the breaking point of the women's emotions. Steiner's score for "The Letter" was nominated for the 1941 Academy Award for Best Original Score, losing to "Pinocchio". In the score for "The Big Sleep", Steiner uses musical thematic characterization for the characters in the film. The theme for Philip Marlowe is beguiling and ironic, with a playful grace note at the end of the motif, shifting between major and minor. At the end of the film, his theme is played fully in major chords and finishes by abruptly cutting off the final chord as the film terminates (an unusual film music practice in Hollywood at the time). According to Christopher Palmer, the love theme for Humphrey Bogart and Lauren Bacall is one of Steiner's strongest themes; Steiner uses the contrast of high strings against low strings and brass to emphasize Bogart's feelings for Bacall as opposed to the brutality of the criminal world. In 1947, Steiner scored a film noir western, "Pursued". Steiner had more success with the western genre, writing the scores for over twenty large-scale Westerns, most with epic-inspiring scores "about empire building and progress," like "Dodge City" (1939), "The Oklahoma Kid" (1939), and "Virginia City" (1940). "Dodge City", starring Errol Flynn and Olivia de Havilland, is a good example of Steiner's handling of typical scenes of the Western genre: Steiner used a "lifting, loping melody" which reflected the movement and sounds of wagons, horses and cattle. Steiner showed a love for combining Westerns and romance, as he did in "They Died with Their Boots On" (1941), also starring Flynn and de Havilland. "The Searchers" (1956) is today considered his greatest Western. Although his contract ended in 1953, Steiner returned to Warner Bros. 
in 1958 and scored several films such as "Band of Angels", "Marjorie Morningstar", and "John Paul Jones", and later ventured into television. Steiner still preferred large orchestras and leitmotif techniques during this part of his career. Steiner's pace slowed significantly in the mid-1950s, and he began freelancing. In 1954, RCA Victor asked Steiner to prepare and conduct an orchestral suite of music from "Gone with the Wind" for a special LP, which was later issued on CD. There are also acetates of Steiner conducting the Warner Brothers studio orchestra in music from many of his film scores. Composer Victor Young and Steiner were good friends, and Steiner completed the film score for "China Gate" because Young had died before he could finish it. The credit frame reads: "Music by Victor Young, extended by his old friend, Max Steiner." Numerous recordings of Steiner's music have been issued as soundtracks, collections, and recordings by other artists. Steiner wrote into his seventies, ailing and nearly blind, but his compositions "revealed a freshness and fertility of invention." His theme for "A Summer Place" in 1959, written when Steiner was 71, became one of Warner Brothers' biggest hit tunes for years and an often-recorded pop standard. This memorable instrumental theme spent nine weeks at #1 on the "Billboard" Hot 100 singles chart in 1960 (in an instrumental cover version by Percy Faith). Steiner continued to score films produced by Warner until the mid-1960s. In 1963, Steiner began writing his autobiography. Although it was completed, it was never published, and it is the only source available on Steiner's childhood. A copy of the manuscript resides with the rest of the Max Steiner Collection at Brigham Young University in Provo, Utah. Steiner scored his last piece in 1965; however, he claimed he would have scored more films had he been offered the opportunity.
His lack of work in the last years of his life was due to Hollywood's decreased interest in his scores, brought on by new film producers and new tastes in film music. Another contributor to his declining career was his failing eyesight and deteriorating health, which caused him to retire reluctantly. Tony Thomas described Steiner's last few scores as "a weak coda to a mighty career." Steiner died of congestive heart failure in Hollywood, aged 83. He is entombed in the Great Mausoleum at Forest Lawn Memorial Park Cemetery in Glendale, California. In the early days of sound, producers avoided underscoring music behind dialogue, feeling the audience would wonder where the music was coming from. As a result, Steiner noted, "they began to add a little music here and there to support love scenes or silent sequences." But in scenes where music might be expected, such as a nightclub, ballroom or theater, the orchestra fit in more naturally and was used often. To justify the addition of music in scenes where it was not expected, music was integrated into the scene through characters or added more conspicuously. For example, a shepherd boy might play a flute along with the orchestra heard in the background, or a wandering violinist might follow a couple around during a love scene. However, because half of the music was recorded on the set, Steiner says this led to a great deal of inconvenience and cost when scenes were later edited, because the score would often be ruined. As recording technology improved during this period, he was able to record the music synced to the film and could change the score after the film was edited. Steiner often followed his instincts and his own reasoning in creating film scores; for example, he chose to go against Selznick's instruction to use classical music for "Gone with the Wind."
Steiner stated that scores from the classics were sometimes harmful to a picture, especially when they drew unwanted attention to themselves by virtue of their familiarity. For example, films like "2001: A Space Odyssey", "The Sting" and "Manhattan" had scores with recognizable tunes instead of having a preferred "subliminal" effect. Steiner was among the first to acknowledge the need for original scores for each film. Steiner felt knowing when to start and stop was the hardest part of proper scoring, since incorrect placement of music can speed up a scene meant to be slow and vice versa: "Knowing the difference is what makes a film composer." He also noted that many composers, contrary to his own technique, would fail to subordinate the music to the film. Although some scholars cite Steiner as the inventor of the click track technique, he and Roy Webb were only the first to use the technique in film scoring; Carl W. Stalling and Scott Bradley had used it earlier, in cartoon music. The click track allows the composer to sync music and film together more precisely. The technique involves punching holes into the soundtrack film based on the mathematics of metronome speed. As the holes pass through a projector, the orchestra and conductor hear a clicking sound through headphones, allowing them to record the music to the exact timing of the film. This technique let conductors and orchestras match the music perfectly to the timing of the film, eliminating the previous necessity of cutting off or stopping music in the middle of recording. Popularized by Steiner in film music, the technique allowed him to "catch the action," creating sounds for small details on screen. In fact, Steiner reportedly spent more of his time matching the action to the music than composing the melodies and motifs, as creating and composing came easily to him.
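The click-track timing described above is simple arithmetic: at a fixed film frame rate, a metronome tempo maps directly to a spacing of punched holes measured in frames. A minimal sketch of that mapping (the function names are mine, for illustration; sound film runs at 24 frames per second):

```python
# Illustrative click-track arithmetic, not Steiner's actual charts:
# a tempo in beats per minute converts to a hole spacing in film frames.

def frames_per_click(fps: float, bpm: float) -> float:
    """Spacing between clicks, in film frames, for a given tempo."""
    return fps * 60.0 / bpm

def click_frames(fps: float, bpm: float, n_clicks: int) -> list[int]:
    """Frame numbers (rounded) at which the first n clicks fall."""
    spacing = frames_per_click(fps, bpm)
    return [round(i * spacing) for i in range(n_clicks)]

# At sound film's standard 24 fps, a march tempo of 120 bpm
# puts a click every 12 frames, i.e. every half second.
print(frames_per_click(24, 120))   # 12.0
print(click_frames(24, 90, 5))     # [0, 16, 32, 48, 64]
```

Because the spacing is fixed by the tempo, the conductor hearing the clicks stays locked to the film's timing for the whole cue, which is what eliminated the need to stop and restart recordings.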
Steiner's European musical training, largely consisting of operas and operettas, and his experience with stage music gave him a store of old-fashioned techniques that he contributed to the development of the Hollywood film score. Although Steiner has been called "the man who invented modern film music," he himself claimed that "the idea originated with Richard Wagner ... If Wagner had lived in this century, he would have been the No. 1 film composer." Wagner was the inventor of the leitmotif, and this influenced Steiner's composition: in his music, Steiner relied heavily on leitmotifs. He would also quote pre-existing, recognizable melodies in his scores, such as national anthems. Steiner was known, and often criticized, for his use of Mickey Mousing or "catching the action," a technique characterized by the precise matching of music with the actions or gestures on screen. Critics felt he used the technique too frequently. For example, in "Of Human Bondage", Steiner created a limping effect with his music whenever the clubfooted character walked. One of the important principles that guided Steiner whenever possible was his rule: "Every character should have a theme." "Steiner creates a musical picture that tells us all we need to know about the character." To accomplish this, Steiner synchronized the music, the narrative action and the leitmotif as a structural framework for his compositions. How characters and music worked together is exemplified by his scores for "The Glass Menagerie" (1950) and "The Fountainhead" (1949), in which the character of Roark, an idealist architect (played by Gary Cooper), receives his own theme. In the same way that Steiner created a theme for each character in a film, he developed themes to express emotional aspects of general scenes which originally lacked emotional content.
When adding a music score to a picture, Steiner used a "spotting process," in which he and the director of the film would watch the film in its entirety and discuss where underscoring would begin and end. Another technique Steiner used was the mixing of realistic and background music. For example, a character humming to himself is realistic music, and the orchestra might then play his tune, creating a background-music effect that ties into the film. Steiner was criticized for this technique, as awareness of the film music can ruin the narrative illusion of the film; however, Steiner understood the importance of letting the film take the spotlight, making the music "subordinate ... to the picture," and stating that "if it gets too decorative, it loses its emotional appeal." Before 1932, producers of sound films tried to avoid the use of background music because viewers would wonder where the music was coming from. Steiner was known for writing atmospheric music without melodic content for certain neutral scenes. He designed a melodic motion to create normal-sounding music without taking too much attention away from the film. In contrast, Steiner sometimes used diegetic, or narrative-based, music to emphasize certain emotions or to contradict them. According to Steiner, there is "no greater counterpoint ... than gay music underlying a tragic scene or vice versa." Three of Max Steiner's scores won the Academy Award for Best Original Score: "The Informer" (1935), "Now, Voyager" (1942), and "Since You Went Away" (1944). Steiner received a certificate for "The Informer". He originally received plaques for "Now, Voyager" and "Since You Went Away", but those plaques were replaced with Academy Award statuettes in 1946. As an individual, Steiner was nominated for a total of 20 Academy Awards, and won two.
Prior to 1939, the Academy recognized a studio's music department, rather than the individual composer, with a nomination in the scoring category. During this time, five of Steiner's scores, including "The Lost Patrol" and "The Charge of the Light Brigade", were nominated, but the Academy does not consider these nominations to belong to Steiner himself. Consequently, even though Steiner's score for "The Informer" won the Academy Award in 1936, the Academy does not officially consider Steiner the individual winner of the award, as he accepted it on behalf of RKO's music department, of which he was the head. Steiner's 20 nominations make him the third most nominated individual in the history of the scoring categories, behind John Williams and Alfred Newman. He also won a Golden Globe for Best Original Score for "Life with Father" (1947). The United States Postal Service issued its "American Music Series" stamps on September 16, 1999, to pay tribute to renowned Hollywood composers, including Steiner. After Steiner's death, Charles Gerhardt conducted the National Philharmonic Orchestra in an RCA Victor album of highlights from Steiner's career, titled "Now Voyager". Additional selections of Steiner's scores were included on other RCA classic film albums during the early 1970s. The quadraphonic recordings were later digitally remastered for Dolby surround sound and released on CD. In 1975, Steiner was honored with a star at 1551 Vine Street on the Hollywood Walk of Fame for his contribution to motion pictures. In 1995, Steiner was inducted posthumously into the Songwriters Hall of Fame. In commemoration of Steiner's 100th birthday, a memorial plaque was unveiled in 1988 by Helmut Zilk, then mayor of Vienna, at Steiner's birthplace, the "Hotel Nordbahn" (now "Austria Classic Hotel Wien") at Praterstraße 72. In 1990, Steiner was one of the first to be recognized for Lifetime Achievement by an online awards site.
In "Film Music", Kurt London expressed the opinion that American film music was inferior to European film music because it lacked originality of composition; he cited the music of Steiner as an exception to the rule. Steiner, along with contemporaries Erich Wolfgang Korngold and Alfred Newman, set the style and forms of film music of the period and of film scores to come. Roy Webb, known for a similar musical style, was also Steiner's contemporary, and the two were friends until Steiner's death; Webb's score for "Mighty Joe Young" is reminiscent of Steiner. James Bond composer John Barry cited Steiner as an influence on his work. James Newton Howard, who composed the score for the 2005 remake of "King Kong", stated that he was influenced by Steiner's score; his descending theme when Kong first appears is reminiscent of Steiner's. In fact, during the tribal sacrifice scene of the 2005 version, the music playing is from Steiner's score for the same scene in the 1933 version. John Williams, composer of the "Star Wars" film score, cited Steiner along with other European émigré composers of the 1930s and 1940s "Golden Age" of film music as influences on his work. In fact, George Lucas wanted Williams to use the scores of Steiner and Korngold as models for the music of "Star Wars", despite the rarity of grandiose film music and of leitmotifs and full orchestration in the 1970s. Steiner's music was often compared to that of his contemporary Erich Wolfgang Korngold, his rival and friend at Warner Bros., and was often seen by critics as inferior to Korngold's. Composer David Raksin stated that the music of Korngold was "of a higher order with a much wider sweep;" however, according to William Darby and Jack Du Bois's "American Film Music", even though other film score composers may have produced greater individual scores than Steiner, no composer ever created as many "very good" ones as he did.
Whatever the relative merits of his individual scores, Steiner's influence was largely historical. Steiner was one of the first composers to reintroduce music into films after the invention of talking pictures, and his score for "King Kong" became the model for adding background music to a movie. Some of his contemporaries did not like his music. Miklós Rózsa criticized Steiner for his use of Mickey Mousing and did not care for his music, though Rózsa conceded that Steiner had a successful career and a good "melodic sense." Now referred to as the "father of film music" or the "dean of film music," Steiner had written or arranged music for over three hundred films by the end of his career. George Korngold, son of Erich Korngold, produced the Classic Film Score Series albums, which included the music of Steiner. Albert K. Bender established the Max Steiner Music Society, with international membership, journals and newsletters, and a library of audio recordings. When the Steiner collection went to Brigham Young University in 1981, the organization disbanded. The Max Steiner Memorial Society was later formed in the United Kingdom to continue the work of the Max Steiner Music Society. The American Film Institute ranked Steiner's scores for "Gone with the Wind" (1939) and "King Kong" (1933) #2 and #13, respectively, on its list of the 25 greatest film scores. His scores for the following films were also nominated for the list:
https://en.wikipedia.org/wiki?curid=19693
Mercury (planet) Mercury is the smallest and innermost planet in the Solar System. Its orbit around the Sun takes 87.97 days, the shortest of all the planets in the Solar System. It is named after the Roman deity Mercury, the messenger of the gods. Like Venus, Mercury orbits the Sun within Earth's orbit as an inferior planet, and its apparent distance from the Sun as viewed from Earth never exceeds 28°. This proximity to the Sun means the planet can only be seen near the western horizon after sunset or eastern horizon before sunrise, usually in twilight. At this time, it may appear as a bright star-like object, but is often far more difficult to observe than Venus. The planet telescopically displays the complete range of phases, similar to Venus and the Moon, as it moves in its inner orbit relative to Earth, which recurs over its synodic period of approximately 116 days. Mercury rotates in a way that is unique in the Solar System. It is tidally locked with the Sun in a 3:2 spin–orbit resonance, meaning that relative to the fixed stars, it rotates on its axis exactly three times for every two revolutions it makes around the Sun. As seen from the Sun, in a frame of reference that rotates with the orbital motion, it appears to rotate only once every two Mercurian years. An observer on Mercury would therefore see only one day every two Mercurian years. Mercury's axis has the smallest tilt of any of the Solar System's planets (about degree). Its orbital eccentricity is the largest of all known planets in the Solar System; at perihelion, Mercury's distance from the Sun is only about two-thirds (or 66%) of its distance at aphelion. Mercury's surface appears heavily cratered and is similar in appearance to the Moon's, indicating that it has been geologically inactive for billions of years. 
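The 3:2 spin–orbit resonance described above fixes the length of Mercury's solar day: relative to the stars the planet spins three times for every two orbits, so one noon-to-noon day works out to exactly two Mercurian years. A minimal sketch of that arithmetic (the period values are commonly quoted figures, assumed here rather than taken from the text):

```python
# Sketch of Mercury's 3:2 spin-orbit arithmetic; periods in Earth days.
rotation = 58.646   # sidereal rotation period (assumed value)
orbit    = 87.969   # orbital period, one Mercurian year

# Relative to the fixed stars, Mercury spins 3 times per 2 orbits:
print(orbit / rotation)          # ≈ 1.5

# One solar day (noon to noon) combines spin and orbital motion:
solar_day = 1 / (1 / rotation - 1 / orbit)
print(solar_day)                 # ≈ 176 days
print(solar_day / orbit)         # ≈ 2.0, i.e. two Mercurian years
```

The exact 3:2 ratio is what makes the solar day come out to precisely two orbital periods, the 176-day figure cited later in the article.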
Having almost no atmosphere to retain heat, it has surface temperatures that vary diurnally more than on any other planet in the Solar System, ranging from at night to during the day across the equatorial regions. The polar regions are constantly below . The planet has no known natural satellites. Two spacecraft have visited Mercury: "" flew by in 1974 and 1975; and "MESSENGER", launched in 2004, orbited Mercury over 4,000 times in four years before exhausting its fuel and crashing into the planet's surface on April 30, 2015. The "BepiColombo" spacecraft is planned to arrive at Mercury in 2025. Mercury appears to have a solid silicate crust and mantle overlying a solid, iron sulfide outer core layer, a deeper liquid core layer, and a solid inner core. Mercury is one of four terrestrial planets in the Solar System, and is a rocky body like Earth. It is the smallest planet in the Solar System, with an equatorial radius of . Mercury is also smaller—albeit more massive—than the largest natural satellites in the Solar System, Ganymede and Titan. Mercury consists of approximately 70% metallic and 30% silicate material. Mercury's density is the second highest in the Solar System at 5.427 g/cm3, only slightly less than Earth's density of 5.515 g/cm3. If the effect of gravitational compression were to be factored out from both planets, the materials of which Mercury is made would be denser than those of Earth, with an uncompressed density of 5.3 g/cm3 versus Earth's 4.4 g/cm3. Mercury's density can be used to infer details of its inner structure. Although Earth's high density results appreciably from gravitational compression, particularly at the core, Mercury is much smaller and its inner regions are not as compressed. Therefore, for it to have such a high density, its core must be large and rich in iron. Geologists estimate that Mercury's core occupies about 55% of its volume; for Earth this proportion is 17%. 
Research published in 2007 suggests that Mercury has a molten core. Surrounding the core is a mantle consisting of silicates. Based on data from the mission and Earth-based observation, Mercury's crust is estimated to be thick. One distinctive feature of Mercury's surface is the presence of numerous narrow ridges, extending up to several hundred kilometers in length. It is thought that these were formed as Mercury's core and mantle cooled and contracted at a time when the crust had already solidified. Mercury's core has a higher iron content than that of any other major planet in the Solar System, and several theories have been proposed to explain this. The most widely accepted theory is that Mercury originally had a metal–silicate ratio similar to common chondrite meteorites, thought to be typical of the Solar System's rocky matter, and a mass approximately 2.25 times its current mass. Early in the Solar System's history, Mercury may have been struck by a planetesimal of approximately 1/6 that mass and several thousand kilometers across. The impact would have stripped away much of the original crust and mantle, leaving the core behind as a relatively major component. A similar process, known as the giant impact hypothesis, has been proposed to explain the formation of the Moon. Alternatively, Mercury may have formed from the solar nebula before the Sun's energy output had stabilized. It would initially have had twice its present mass, but as the protosun contracted, temperatures near Mercury could have been between 2,500 and 3,500 K and possibly even as high as 10,000 K. Much of Mercury's surface rock could have been vaporized at such temperatures, forming an atmosphere of "rock vapor" that could have been carried away by the solar wind. A third hypothesis proposes that the solar nebula caused drag on the particles from which Mercury was accreting, which meant that lighter particles were lost from the accreting material and not gathered by Mercury. 
Each hypothesis predicts a different surface composition, and two space missions have been tasked with making observations to test them. "MESSENGER", which ended in 2015, found higher-than-expected potassium and sulfur levels on the surface, suggesting that the giant impact hypothesis and vaporization of the crust and mantle did not occur, because potassium and sulfur would have been driven off by the extreme heat of these events. "BepiColombo", which will arrive at Mercury in 2025, will make observations to test these hypotheses. The findings so far would seem to favor the third hypothesis; however, further analysis of the data is needed. Mercury's surface is similar in appearance to that of the Moon, showing extensive mare-like plains and heavy cratering, indicating that it has been geologically inactive for billions of years. Because knowledge of Mercury's geology had been based only on the 1975 flyby and terrestrial observations, it is the least understood of the terrestrial planets. As data from the "MESSENGER" orbiter are processed, this knowledge will increase. For example, an unusual crater with radiating troughs has been discovered that scientists called "the spider"; it was later named Apollodorus. Albedo features are areas of markedly different reflectivity, as seen by telescopic observation. Mercury has dorsa (also called "wrinkle-ridges"), Moon-like highlands, montes (mountains), planitiae (plains), rupēs (escarpments), and valles (valleys). Names for features on Mercury come from a variety of sources; names coming from people are limited to the deceased. Craters are named for artists, musicians, painters, and authors who have made outstanding or fundamental contributions to their field. Ridges, or dorsa, are named for scientists who have contributed to the study of Mercury. Depressions or fossae are named for works of architecture. Montes are named for the word "hot" in a variety of languages. Plains or planitiae are named for Mercury in various languages.
Escarpments or rupēs are named for ships of scientific expeditions. Valleys or valles are named for abandoned cities, towns, or settlements of antiquity. Mercury was heavily bombarded by comets and asteroids during and shortly following its formation 4.6 billion years ago, as well as during a possibly separate subsequent episode called the Late Heavy Bombardment that ended 3.8 billion years ago. During this period of intense crater formation, Mercury received impacts over its entire surface, facilitated by the lack of any atmosphere to slow impactors down. During this time Mercury was volcanically active; basins such as the Caloris Basin were filled by magma, producing smooth plains similar to the maria found on the Moon. Data from the October 2008 flyby of "MESSENGER" gave researchers a greater appreciation for the jumbled nature of Mercury's surface. Mercury's surface is more heterogeneous than either Mars's or the Moon's, both of which contain significant stretches of similar geology, such as maria and plateaus. Craters on Mercury range in diameter from small bowl-shaped cavities to multi-ringed impact basins hundreds of kilometers across. They appear in all states of degradation, from relatively fresh rayed craters to highly degraded crater remnants. Mercurian craters differ subtly from lunar craters in that the area blanketed by their ejecta is much smaller, a consequence of Mercury's stronger surface gravity. According to IAU rules, each new crater must be named after an artist who was famous for more than fifty years, and dead for more than three years, before the date the crater is named. The largest known crater is , with a diameter of 1,550 km. The impact that created the Caloris Basin was so powerful that it caused lava eruptions and left a concentric ring over 2 km tall surrounding the impact crater. At the antipode of the Caloris Basin is a large region of unusual, hilly terrain known as the "Weird Terrain".
One hypothesis for its origin is that shock waves generated during the Caloris impact traveled around Mercury, converging at the basin's antipode (180 degrees away). The resulting high stresses fractured the surface. Alternatively, it has been suggested that this terrain formed as a result of the convergence of ejecta at this basin's antipode. Overall, about 15 impact basins have been identified on the imaged part of Mercury. A notable basin is the 400 km wide, multi-ring Tolstoj Basin, which has an ejecta blanket extending up to 500 km from its rim and a floor that has been filled by smooth plains materials. Beethoven Basin has a similar-sized ejecta blanket and a 625 km diameter rim. Like the Moon, the surface of Mercury has likely incurred the effects of space weathering processes, including solar wind and micrometeorite impacts. There are two geologically distinct plains regions on Mercury. Gently rolling, hilly plains in the regions between craters are Mercury's oldest visible surfaces, predating the heavily cratered terrain. These inter-crater plains appear to have obliterated many earlier craters, and show a general paucity of smaller craters below about 30 km in diameter. Smooth plains are widespread flat areas that fill depressions of various sizes and bear a strong resemblance to the lunar maria. Notably, they fill a wide ring surrounding the Caloris Basin. Unlike lunar maria, the smooth plains of Mercury have the same albedo as the older inter-crater plains. Despite a lack of unequivocally volcanic characteristics, the localization and rounded, lobate shape of these plains strongly support volcanic origins. All the smooth plains of Mercury formed significantly later than the Caloris basin, as evidenced by appreciably smaller crater densities than on the Caloris ejecta blanket. The floor of the Caloris Basin is filled by a geologically distinct flat plain, broken up by ridges and fractures in a roughly polygonal pattern.
It is not clear whether they are volcanic lavas induced by the impact, or a large sheet of impact melt. One unusual feature of Mercury's surface is the numerous compression folds, or rupes, that crisscross the plains. As Mercury's interior cooled, it contracted and its surface began to deform, creating wrinkle ridges and lobate scarps associated with thrust faults. The scarps can reach lengths of 1,000 km and heights of 3 km. These compressional features can be seen on top of other features, such as craters and smooth plains, indicating they are more recent. Mapping of the features has suggested a total shrinkage of Mercury's radius in the range of ~1 to 7 km. Small-scale thrust fault scarps have been found, tens of meters in height and with lengths in the range of a few kilometers, that appear to be less than 50 million years old, indicating that compression of the interior and consequent surface geological activity continue to the present. The Lunar Reconnaissance Orbiter discovered that similar small thrust faults exist on the Moon. Images obtained by "MESSENGER" have revealed evidence for pyroclastic flows on Mercury from low-profile shield volcanoes. "MESSENGER" data has helped identify 51 pyroclastic deposits on the surface, 90% of which are found within impact craters. A study of the degradation state of the impact craters that host pyroclastic deposits suggests that pyroclastic activity occurred on Mercury over a prolonged interval. A "rimless depression" inside the southwest rim of the Caloris Basin consists of at least nine overlapping volcanic vents, each individually up to 8 km in diameter. It is thus a "compound volcano". The vent floors are at least 1 km below their brinks, and they bear a closer resemblance to volcanic craters sculpted by explosive eruptions or modified by collapse into void spaces created by magma withdrawal back down into a conduit than to impact craters.
Scientists could not quantify the age of the volcanic complex system, but reported that it could be of the order of a billion years. The surface temperature of Mercury ranges from at the most extreme places: 0°N, 0°W, or 180°W. It never rises above 180 K at the poles, due to the absence of an atmosphere and a steep temperature gradient between the equator and the poles. The subsolar point reaches about 700 K during perihelion (0°W or 180°W), but only 550 K at aphelion (90°W or 270°W). On the dark side of the planet, temperatures average 110 K. The intensity of sunlight on Mercury's surface ranges between 4.59 and 10.61 times the solar constant (1,370 W·m⁻²). Although the daylight temperature at the surface of Mercury is generally extremely high, observations strongly suggest that ice (frozen water) exists on Mercury. The floors of deep craters at the poles are never exposed to direct sunlight, and temperatures there remain below 102 K, far lower than the global average. Water ice strongly reflects radar, and observations by the 70-meter Goldstone Solar System Radar and the VLA in the early 1990s revealed that there are patches of high radar reflection near the poles. Although ice was not the only possible cause of these reflective regions, astronomers think it was the most likely. The icy regions are estimated to contain about 10¹⁴–10¹⁵ kg of ice, and may be covered by a layer of regolith that inhibits sublimation. By comparison, the Antarctic ice sheet on Earth has a mass of about 4×10¹⁸ kg, and Mars's south polar cap contains about 10¹⁶ kg of water. The origin of the ice on Mercury is not yet known, but the two most likely sources are outgassing of water from the planet's interior or deposition by impacts of comets.
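The roughly 4.6× to 10.6× range of sunlight intensity quoted above follows from the inverse-square law applied at aphelion and perihelion. A rough check, using commonly quoted values for Mercury's orbital elements (the distances are assumptions, since this passage does not state them, and small differences from the article's exact figures reflect rounding):

```python
# Inverse-square scaling of sunlight with distance from the Sun.
# Orbital elements are assumed approximate values, not from the text.
a, e = 0.3871, 0.2056            # semi-major axis (AU), eccentricity
perihelion = a * (1 - e)         # ≈ 0.308 AU
aphelion   = a * (1 + e)         # ≈ 0.467 AU

def relative_intensity(r_au: float) -> float:
    """Sunlight intensity relative to the solar constant at 1 AU."""
    return (1.0 / r_au) ** 2

print(relative_intensity(aphelion))    # ≈ 4.6 times the solar constant
print(relative_intensity(perihelion))  # ≈ 10.6 times
```

The more than twofold swing in insolation between perihelion and aphelion is one reason Mercury's subsolar temperature drops from about 700 K to 550 K over a single orbit.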
Mercury is too small and hot for its gravity to retain any significant atmosphere over long periods of time; it does have a tenuous surface-bounded exosphere containing hydrogen, helium, oxygen, sodium, calcium, potassium and others, at a surface pressure of less than approximately 0.5 nPa (0.005 picobars). This exosphere is not stable: atoms are continuously lost and replenished from a variety of sources. Hydrogen atoms and helium atoms probably come from the solar wind, diffusing into Mercury's magnetosphere before later escaping back into space. Radioactive decay of elements within Mercury's crust is another source of helium, as well as of sodium and potassium. "MESSENGER" found high proportions of calcium, helium, hydroxide, magnesium, oxygen, potassium, silicon and sodium. Water vapor is present, released by a combination of processes such as comets striking its surface, sputtering creating water out of hydrogen from the solar wind and oxygen from rock, and sublimation from reservoirs of water ice in the permanently shadowed polar craters. The detection of high amounts of water-related ions like O⁺, OH⁻, and H₃O⁺ was a surprise. Because of the quantities of these ions that were detected in Mercury's space environment, scientists surmise that these molecules were blasted from the surface or exosphere by the solar wind. Sodium, potassium and calcium were discovered in the atmosphere during the 1980s and 1990s, and are thought to result primarily from the vaporization of surface rock struck by micrometeorite impacts, including presently from Comet Encke. In 2008, magnesium was discovered by "MESSENGER". Studies indicate that, at times, sodium emissions are localized at points that correspond to the planet's magnetic poles, which would indicate an interaction between the magnetosphere and the planet's surface. On November 29, 2012, NASA confirmed that images from "MESSENGER" had detected that craters at the north pole contained water ice.
"MESSENGER" principal investigator Sean Solomon is quoted in "The New York Times" estimating the volume of the ice to be large enough to "encase Washington, D.C., in a frozen block two and a half miles deep". Despite its small size and slow 59-day-long rotation, Mercury has a significant, and apparently global, magnetic field. According to measurements taken by Mariner 10, it is about 1.1% the strength of Earth's. The magnetic-field strength at Mercury's equator is about 300 nT. Like that of Earth, Mercury's magnetic field is dipolar. Unlike Earth's, Mercury's poles are nearly aligned with the planet's spin axis. Measurements from both the Mariner 10 and "MESSENGER" space probes have indicated that the strength and shape of the magnetic field are stable. It is likely that this magnetic field is generated by a dynamo effect, in a manner similar to the magnetic field of Earth. This dynamo effect would result from the circulation of the planet's iron-rich liquid core. Particularly strong tidal effects caused by the planet's high orbital eccentricity would serve to keep the core in the liquid state necessary for this dynamo effect. Mercury's magnetic field is strong enough to deflect the solar wind around the planet, creating a magnetosphere. The planet's magnetosphere, though small enough to fit within Earth, is strong enough to trap solar wind plasma. This contributes to the space weathering of the planet's surface. Observations taken by the Mariner 10 spacecraft detected this low-energy plasma in the magnetosphere of the planet's nightside. Bursts of energetic particles in the planet's magnetotail indicate a dynamic quality to the planet's magnetosphere. During its second flyby of the planet on October 6, 2008, "MESSENGER" discovered that Mercury's magnetic field can be extremely "leaky". The spacecraft encountered magnetic "tornadoes" – twisted bundles of magnetic fields connecting the planetary magnetic field to interplanetary space – that were up to 800 km wide, a third of the radius of the planet. 
These twisted magnetic flux tubes, technically known as flux transfer events, form open windows in the planet's magnetic shield through which the solar wind may enter and directly impact Mercury's surface via magnetic reconnection. This also occurs in Earth's magnetic field. The "MESSENGER" observations showed the reconnection rate is ten times higher at Mercury, but its proximity to the Sun only accounts for about a third of the reconnection rate observed by "MESSENGER". Mercury has the most eccentric orbit of all the planets; its eccentricity is 0.21, with its distance from the Sun ranging from about 46 to 70 million km. It takes 87.969 Earth days to complete an orbit. The diagram illustrates the effects of the eccentricity, showing Mercury's orbit overlaid with a circular orbit having the same semi-major axis. Mercury's higher velocity when it is near perihelion is clear from the greater distance it covers in each 5-day interval. In the diagram the varying distance of Mercury from the Sun is represented by the size of the planet, which is inversely proportional to that distance. This varying distance to the Sun leads to Mercury's surface being flexed by tidal bulges raised by the Sun that are about 17 times stronger than the Moon's on Earth. Combined with the 3:2 spin–orbit resonance of the planet's rotation around its axis, it also results in complex variations of the surface temperature. The resonance makes a single solar day on Mercury last exactly two Mercury years, or about 176 Earth days. Mercury's orbit is inclined by 7 degrees to the plane of Earth's orbit (the ecliptic), as shown in the diagram on the right. As a result, transits of Mercury across the face of the Sun can only occur when the planet is crossing the plane of the ecliptic at the time it lies between Earth and the Sun, which is in May or November. This occurs about every seven years on average. Mercury's axial tilt is almost zero, with the best measured value as low as 0.027 degrees. 
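The extremes of Mercury's eccentric orbit follow directly from its semi-major axis and eccentricity; a quick sketch, assuming the standard semi-major axis of 57.91 million km (not stated in the text):

```python
# Mercury's perihelion and aphelion follow from the semi-major axis a and
# eccentricity e: r_min = a(1 - e), r_max = a(1 + e). Both input values
# are assumed standard figures, not stated in the text above.
a_km = 57.91e6   # semi-major axis, km (assumed)
e = 0.2056       # eccentricity (the ~0.21 quoted above)

r_perihelion = a_km * (1 - e)  # ~46 million km
r_aphelion = a_km * (1 + e)    # ~70 million km
```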
This is significantly smaller than that of Jupiter, which has the second smallest axial tilt of all planets at 3.1 degrees. This means that to an observer at Mercury's poles, the center of the Sun never rises more than 2.1 arcminutes above the horizon. At certain points on Mercury's surface, an observer would be able to see the Sun peek up a little more than two-thirds of the way over the horizon, then reverse and set before rising again, all within the same Mercurian day. This is because approximately four Earth days before perihelion, Mercury's angular orbital velocity equals its angular rotational velocity so that the Sun's apparent motion ceases; closer to perihelion, Mercury's angular orbital velocity then exceeds the angular rotational velocity. Thus, to a hypothetical observer on Mercury, the Sun appears to move in a retrograde direction. Four Earth days after perihelion, the Sun's normal apparent motion resumes. A similar effect would have occurred if Mercury had been in synchronous rotation: the alternating gain and loss of rotation over revolution would have caused a libration of 23.65° in longitude. For the same reason, there are two points on Mercury's equator, 180 degrees apart in longitude, at either of which, around perihelion in alternate Mercurian years (once a Mercurian day), the Sun passes overhead, then reverses its apparent motion and passes overhead again, then reverses a second time and passes overhead a third time, taking a total of about 16 Earth-days for this entire process. In the other alternate Mercurian years, the same thing happens at the other of these two points. The amplitude of the retrograde motion is small, so the overall effect is that, for two or three weeks, the Sun is almost stationary overhead, and is at its most brilliant because Mercury is at perihelion, its closest to the Sun. This prolonged exposure to the Sun at its brightest makes these two points the hottest places on Mercury. 
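The retrograde solar motion described above can be checked numerically: near perihelion, Mercury's angular orbital velocity briefly exceeds its angular rotational velocity. A sketch, assuming standard values for the rotation period and eccentricity:

```python
import math

# Near perihelion, Mercury's angular orbital velocity exceeds its angular
# rotational velocity, so the Sun briefly moves retrograde in Mercury's
# sky. Period and eccentricity values are assumed standard figures.
T_ORBIT = 87.969   # orbital period, Earth days
T_ROT = 58.646     # sidereal rotation period, Earth days
ECC = 0.2056       # orbital eccentricity

n = 360.0 / T_ORBIT          # mean orbital motion, deg/day
omega_rot = 360.0 / T_ROT    # rotational angular velocity, deg/day
# Angular velocity at perihelion, from conservation of angular momentum
# evaluated at r = a(1 - e):
omega_peri = n * math.sqrt(1 - ECC**2) / (1 - ECC) ** 2

print(omega_peri > omega_rot)  # True: apparent solar motion reverses
```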
Maximum temperature occurs when the Sun is at an angle of about 25 degrees past noon due to diurnal temperature lag, at 0.4 Mercury days and 0.8 Mercury years past sunrise. Conversely, there are two other points on the equator, 90 degrees of longitude apart from the first ones, where the Sun passes overhead only when the planet is at aphelion in alternate years, when the apparent motion of the Sun in Mercury's sky is relatively rapid. These points, which are the ones on the equator where the apparent retrograde motion of the Sun happens when it is crossing the horizon as described in the preceding paragraph, receive much less solar heat than the first ones described above. Mercury attains inferior conjunction (nearest approach to Earth) every 116 Earth days on average, but this interval can range from 105 days to 129 days due to the planet's eccentric orbit. Mercury can come as near as 82.2 million km to Earth, and that minimum separation is slowly declining: the next approach to within 82.1 million km is in 2679, and to within 82.0 million km in 4487, but it will not be closer to Earth than 80 million km until 28,622. Its period of retrograde motion as seen from Earth can vary from 8 to 15 days on either side of inferior conjunction. This large range arises from the planet's high orbital eccentricity. On average, Mercury is the closest planet to the Earth, and it is the closest planet to each of the other planets in the Solar System. The longitude convention for Mercury puts the zero of longitude at one of the two hottest points on the surface, as described above. However, when this area was first visited, by Mariner 10, this zero meridian was in darkness, so it was impossible to select a feature on the surface to define the exact position of the meridian. Therefore, a small crater further west was chosen, called Hun Kal, which provides the exact reference point for measuring longitude. The center of Hun Kal defines the 20° west meridian. 
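The 116-day average interval between inferior conjunctions is the synodic period, which follows from the two orbital periods; a one-line check:

```python
# The average interval between inferior conjunctions is the synodic
# period: 1/P_syn = 1/P_mercury - 1/P_earth, with both periods in days.
P_MERCURY = 87.969   # Mercury's orbital period, Earth days
P_EARTH = 365.256    # Earth's sidereal year, days

P_synodic = 1.0 / (1.0 / P_MERCURY - 1.0 / P_EARTH)
print(round(P_synodic, 1))  # ~115.9 days, the ~116-day figure quoted
```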
A 1970 International Astronomical Union resolution suggests that longitudes be measured positively in the westerly direction on Mercury. The two hottest places on the equator are therefore at longitudes 0° W and 180° W, and the coolest points on the equator are at longitudes 90° W and 270° W. However, the "MESSENGER" project uses an east-positive convention. For many years it was thought that Mercury was synchronously tidally locked with the Sun, rotating once for each orbit and always keeping the same face directed towards the Sun, in the same way that the same side of the Moon always faces Earth. Radar observations in 1965 proved that the planet has a 3:2 spin-orbit resonance, rotating three times for every two revolutions around the Sun. The eccentricity of Mercury's orbit makes this resonance stable—at perihelion, when the solar tide is strongest, the Sun is nearly still in Mercury's sky. The rare 3:2 resonant tidal locking is stabilized by the variance of the tidal force along Mercury's eccentric orbit, acting on a permanent dipole component of Mercury's mass distribution. In a circular orbit there is no such variance, so the only resonance stabilized in such an orbit is at 1:1 (e.g., Earth–Moon), when the tidal force, stretching a body along the "center-body" line, exerts a torque that aligns the body's axis of least inertia (the "longest" axis, and the axis of the aforementioned dipole) to point always at the center. However, with noticeable eccentricity, like that of Mercury's orbit, the tidal force has a maximum at perihelion and therefore stabilizes resonances, like 3:2, enforcing that the planet points its axis of least inertia roughly at the Sun when passing through perihelion. The original reason astronomers thought it was synchronously locked was that, whenever Mercury was best placed for observation, it was always nearly at the same point in its 3:2 resonance, hence showing the same face. 
This is because, coincidentally, Mercury's rotation period is almost exactly half of its synodic period with respect to Earth. Due to Mercury's 3:2 spin-orbit resonance, a solar day (the length between two meridian transits of the Sun) lasts about 176 Earth days. A sidereal day (the period of rotation) lasts about 58.7 Earth days. Simulations indicate that the orbital eccentricity of Mercury varies chaotically from nearly zero (circular) to more than 0.45 over millions of years due to perturbations from the other planets. This was thought to explain Mercury's 3:2 spin-orbit resonance (rather than the more usual 1:1), because this state is more likely to arise during a period of high eccentricity. However, accurate modeling based on a realistic model of tidal response has demonstrated that Mercury was captured into the 3:2 spin-orbit state at a very early stage of its history, within 20 (more likely, 10) million years after its formation. Numerical simulations show that a future secular orbital resonant perihelion interaction with Jupiter may cause the eccentricity of Mercury's orbit to increase to the point where there is a 1% chance that the planet may collide with Venus within the next five billion years. In 1859, the French mathematician and astronomer Urbain Le Verrier reported that the slow precession of Mercury's orbit around the Sun could not be completely explained by Newtonian mechanics and perturbations by the known planets. He suggested, among possible explanations, that another planet (or perhaps instead a series of smaller 'corpuscules') might exist in an orbit even closer to the Sun than that of Mercury, to account for this perturbation. (Other explanations considered included a slight oblateness of the Sun.) The success of the search for Neptune based on its perturbations of the orbit of Uranus led astronomers to place faith in this possible explanation, and the hypothetical planet was named Vulcan, but no such planet was ever found. 
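The 176-day solar day quoted above follows from combining the sidereal rotation and orbital periods (both prograde):

```python
# For prograde rotation and revolution, the solar day follows from
# 1/P_solar = 1/P_rotation - 1/P_orbit; Mercury's 3:2 resonance makes
# it exactly two Mercury years long.
P_ROTATION = 58.646   # sidereal rotation period, Earth days
P_ORBIT = 87.969      # orbital period, Earth days

P_solar = 1.0 / (1.0 / P_ROTATION - 1.0 / P_ORBIT)
print(round(P_solar, 1))            # ~175.9 Earth days
print(round(P_solar / P_ORBIT, 3))  # ~2.0 Mercury years
```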
The perihelion precession of Mercury is 5,600 arcseconds (1.5556°) per century relative to Earth, or 574.10±0.65 arcseconds per century relative to the inertial ICRF. Newtonian mechanics, taking into account all the effects from the other planets, predicts a precession of 5,557 arcseconds (1.5436°) per century. In the early 20th century, Albert Einstein's general theory of relativity provided the explanation for the observed precession, by formalizing gravitation as being mediated by the curvature of spacetime. The effect is small: just 42.98 arcseconds per century for Mercury; it therefore requires a little over twelve million orbits for a full excess turn. Similar, but much smaller, effects exist for other Solar System bodies: 8.62 arcseconds per century for Venus, 3.84 for Earth, 1.35 for Mars, and 10.05 for 1566 Icarus. Einstein's formula for the perihelion shift per revolution is ε = 24π³a²/(T²c²(1 − e²)), where e is the orbital eccentricity, a the semi-major axis, and T the orbital period. Filling in the values gives a result of 0.1035 arcseconds per revolution or 0.4297 arcseconds per Earth year, i.e., 42.97 arcseconds per century. This is in close agreement with the accepted value of Mercury's perihelion advance of 42.98 arcseconds per century. There may be scientific support, based on studies reported in March 2020, for considering that parts of the planet Mercury may have been habitable, and perhaps that life forms, albeit likely primitive microorganisms, may have existed on the planet. Mercury's apparent magnitude is calculated to vary between −2.48 (brighter than Sirius) around superior conjunction and +7.25 (below the limit of naked-eye visibility) around inferior conjunction. The mean apparent magnitude is 0.23 while the standard deviation of 1.78 is the largest of any planet. The mean apparent magnitude at superior conjunction is −1.89 while that at inferior conjunction is +5.93. 
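The relativistic perihelion-shift formula can be evaluated directly; a sketch using standard values for Mercury's orbit (assumed, since the text does not list them):

```python
import math

# Evaluate Einstein's perihelion-shift formula
#   epsilon = 24 * pi^3 * a^2 / (T^2 * c^2 * (1 - e^2))
# (radians per revolution) with standard orbital values, assumed here.
A = 5.791e10           # semi-major axis, m
T = 87.969 * 86400.0   # orbital period, s
E = 0.2056             # eccentricity
C = 2.998e8            # speed of light, m/s

eps_rad = 24 * math.pi**3 * A**2 / (T**2 * C**2 * (1 - E**2))
eps_arcsec = math.degrees(eps_rad) * 3600    # arcsec per revolution
per_century = eps_arcsec * 36525.0 / 87.969  # orbits in a Julian century

print(round(eps_arcsec, 4))   # ~0.1035 arcsec per revolution
print(round(per_century, 2))  # ~42.98 arcsec per century
```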
Observation of Mercury is complicated by its proximity to the Sun, as it is lost in the Sun's glare for much of the time. Mercury can be observed for only a brief period during either morning or evening twilight. Mercury can, like several other planets and the brightest stars, be seen during a total solar eclipse. Like the Moon and Venus, Mercury exhibits phases as seen from Earth. It is "new" at inferior conjunction and "full" at superior conjunction. The planet is rendered invisible from Earth on both of these occasions because of its being obscured by the Sun, except its new phase during a transit. Mercury is technically brightest as seen from Earth when it is at a full phase. Although Mercury is farthest from Earth when it is full, the greater illuminated area that is visible and the opposition brightness surge more than compensate for the distance. The opposite is true for Venus, which appears brightest when it is a crescent, because it is much closer to Earth than when gibbous. Nonetheless, the brightest (full phase) appearance of Mercury is an essentially impossible time for practical observation, because of the extreme proximity of the Sun. Mercury is best observed at the first and last quarter, although they are phases of lesser brightness. The first and last quarter phases occur at greatest elongation east and west of the Sun, respectively. At both of these times Mercury's separation from the Sun ranges anywhere from 17.9° at perihelion to 27.8° at aphelion. At greatest "western" elongation, Mercury rises at its earliest before sunrise, and at greatest "eastern" elongation, it sets at its latest after sunset. Mercury is more easily seen from the tropics and subtropics than from higher latitudes. Viewed from low latitudes and at the right times of year, the ecliptic intersects the horizon at a steep angle. Mercury is 10° above the horizon when the planet appears directly above the Sun (i.e. 
its orbit appears vertical) and is at maximum elongation from the Sun (28°) and also when the Sun is 18° below the horizon, so the sky is just completely dark. This angle is the maximum altitude at which Mercury is visible in a completely dark sky. At middle latitudes, Mercury is more often and easily visible from the Southern Hemisphere than from the Northern. This is because Mercury's maximum western elongation occurs only during early autumn in the Southern Hemisphere, whereas its greatest eastern elongation happens only during late winter in the Southern Hemisphere. In both of these cases, the angle at which the planet's orbit intersects the horizon is maximized, allowing it to rise several hours before sunrise in the former instance and not set until several hours after sundown in the latter from southern mid-latitudes, such as Argentina and South Africa. An alternate method for viewing Mercury involves observing the planet during daylight hours when conditions are clear, ideally when it is at its greatest elongation. This allows the planet to be found easily, even when using telescopes with 8 cm (3.1 in) apertures. Care must be taken to ensure the instrument is not pointed directly towards the Sun because of the risk of eye damage. This method bypasses the limitation of twilight observing when the ecliptic is located at a low elevation (e.g. on autumn evenings). Ground-based telescope observations of Mercury reveal only an illuminated partial disk with limited detail. The first of two spacecraft to visit the planet was Mariner 10, which mapped about 45% of its surface from 1974 to 1975. The second is the "MESSENGER" spacecraft, which after three Mercury flybys between 2008 and 2009, attained orbit around Mercury on March 17, 2011, to study and map the rest of the planet. The Hubble Space Telescope cannot observe Mercury at all, due to safety procedures that prevent its pointing too close to the Sun. 
Because the shift of 0.15 revolutions in a year makes up a seven-year cycle (0.15 × 7 ≈ 1.0), in the seventh year Mercury follows almost exactly (earlier by 7 days) the sequence of phenomena it showed seven years before. The earliest known recorded observations of Mercury are from the Mul.Apin tablets. These observations were most likely made by an Assyrian astronomer around the 14th century BC. The cuneiform name used to designate Mercury on the Mul.Apin tablets is transcribed as Udu.Idim.Gu4.Ud ("the jumping planet"). Babylonian records of Mercury date back to the 1st millennium BC. The Babylonians called the planet Nabu after the messenger to the gods in their mythology. The ancients knew Mercury by different names depending on whether it was an evening star or a morning star. By about 350 BC, the ancient Greeks had realized the two stars were one. They knew the planet as Στίλβων "Stilbōn", meaning "twinkling", and Ἑρμής "Hermēs", for its fleeting motion, a name that is retained in modern Greek (Ερμής "Ermis"). The Romans named the planet after the swift-footed Roman messenger god, Mercury (Latin "Mercurius"), which they equated with the Greek Hermes, because it moves across the sky faster than any other planet. The astronomical symbol for Mercury is a stylized version of Hermes' caduceus. The Greco-Egyptian astronomer Ptolemy wrote about the possibility of planetary transits across the face of the Sun in his work "Planetary Hypotheses". He suggested that no transits had been observed either because planets such as Mercury were too small to see, or because the transits were too infrequent. In ancient China, Mercury was known as "the Hour Star" ("Chen-xing" 辰星). It was associated with the direction north and the phase of water in the Five Phases system of metaphysics. Modern Chinese, Korean, Japanese and Vietnamese cultures refer to the planet literally as the "water star" (水星), based on the Five elements. 
Hindu mythology used the name Budha for Mercury, and this god was thought to preside over Wednesday. The god Odin (or Woden) of Germanic paganism was associated with the planet Mercury and Wednesday. The Maya may have represented Mercury as an owl (or possibly four owls; two for the morning aspect and two for the evening) that served as a messenger to the underworld. In medieval Islamic astronomy, the Andalusian astronomer Abū Ishāq Ibrāhīm al-Zarqālī in the 11th century described the deferent of Mercury's geocentric orbit as being oval, like an egg or a pignon, although this insight did not influence his astronomical theory or his astronomical calculations. In the 12th century, Ibn Bajjah observed "two planets as black spots on the face of the Sun", which was later suggested as the transit of Mercury and/or Venus by the Maragha astronomer Qotb al-Din Shirazi in the 13th century. (Note that most such medieval reports of transits were later taken as observations of sunspots.) In India, the Kerala school astronomer Nilakantha Somayaji in the 15th century developed a partially heliocentric planetary model in which Mercury orbits the Sun, which in turn orbits Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. The first telescopic observations of Mercury were made by Galileo in the early 17th century. Although he observed phases when he looked at Venus, his telescope was not powerful enough to see the phases of Mercury. In 1631, Pierre Gassendi made the first telescopic observations of the transit of a planet across the Sun when he saw a transit of Mercury predicted by Johannes Kepler. In 1639, Giovanni Zupi used a telescope to discover that the planet had orbital phases similar to Venus and the Moon. The observation demonstrated conclusively that Mercury orbited around the Sun. A rare event in astronomy is the passage of one planet in front of another (occultation), as seen from Earth. 
Mercury and Venus occult each other every few centuries, and the event of May 28, 1737 is the only one historically observed, having been seen by John Bevis at the Royal Greenwich Observatory. The next occultation of Mercury by Venus will be on December 3, 2133. The difficulties inherent in observing Mercury mean that it has been far less studied than the other planets. In 1800, Johann Schröter made observations of surface features, claiming to have observed mountains. Friedrich Bessel used Schröter's drawings to erroneously estimate the rotation period as 24 hours and an axial tilt of 70°. In the 1880s, Giovanni Schiaparelli mapped the planet more accurately, and suggested that Mercury's rotational period was 88 days, the same as its orbital period due to tidal locking. This phenomenon is known as synchronous rotation. The effort to map the surface of Mercury was continued by Eugenios Antoniadi, who published a book in 1934 that included both maps and his own observations. Many of the planet's surface features, particularly the albedo features, take their names from Antoniadi's map. In June 1962, Soviet scientists at the Institute of Radio-engineering and Electronics of the USSR Academy of Sciences, led by Vladimir Kotelnikov, became the first to bounce a radar signal off Mercury and receive it, starting radar observations of the planet. Three years later, radar observations by Americans Gordon H. Pettengill and Rolf B. Dyce, using the 300-meter Arecibo Observatory radio telescope in Puerto Rico, showed conclusively that the planet's rotational period was about 59 days. The theory that Mercury's rotation was synchronous had become widely held, and it was a surprise to astronomers when these radio observations were announced. If Mercury were tidally locked, its dark face would be extremely cold, but measurements of radio emission revealed that it was much hotter than expected. 
Astronomers were reluctant to drop the synchronous rotation theory and proposed alternative mechanisms such as powerful heat-distributing winds to explain the observations. Italian astronomer Giuseppe Colombo noted that the rotation value was about two-thirds of Mercury's orbital period, and proposed that the planet's orbital and rotational periods were locked into a 3:2 rather than a 1:1 resonance. Data from Mariner 10 subsequently confirmed this view. This means that Schiaparelli's and Antoniadi's maps were not "wrong". Instead, the astronomers saw the same features during every "second" orbit and recorded them, but disregarded those seen in the meantime, when Mercury's other face was toward the Sun, because the orbital geometry meant that these observations were made under poor viewing conditions. Ground-based optical observations did not shed much further light on Mercury, but radio astronomers using interferometry at microwave wavelengths, a technique that enables removal of the solar radiation, were able to discern physical and chemical characteristics of the subsurface layers to a depth of several meters. Not until the first space probe flew past Mercury did many of its most fundamental morphological properties become known. Moreover, recent technological advances have led to improved ground-based observations. In 2000, high-resolution lucky imaging observations were conducted by the Mount Wilson Observatory 1.5 meter Hale telescope. They provided the first views that resolved surface features on the parts of Mercury that were not imaged in the Mariner 10 mission. Most of the planet has been mapped by the Arecibo radar telescope, with 5 km resolution, including polar deposits in shadowed craters of what may be water ice. Reaching Mercury from Earth poses significant technical challenges, because it orbits so much closer to the Sun than Earth does. A Mercury-bound spacecraft launched from Earth must travel over 91 million kilometres into the Sun's gravitational potential well. 
Mercury has an orbital speed of 47.4 km/s, whereas Earth's orbital speed is 29.8 km/s. Therefore, the spacecraft must make a large change in velocity (delta-v) to enter a Hohmann transfer orbit that passes near Mercury, as compared to the delta-v required for other planetary missions. The potential energy liberated by moving down the Sun's potential well becomes kinetic energy, requiring another large delta-v change to do anything other than rapidly pass by Mercury. To land safely or enter a stable orbit the spacecraft would rely entirely on rocket motors. Aerobraking is ruled out because Mercury has a negligible atmosphere. A trip to Mercury requires more rocket fuel than that required to escape the Solar System completely. As a result, only two space probes have visited it so far. A proposed alternative approach would use a solar sail to attain a Mercury-synchronous orbit around the Sun. The first spacecraft to visit Mercury was NASA's Mariner 10 (1974–1975). The spacecraft used the gravity of Venus to adjust its orbital velocity so that it could approach Mercury, making it both the first spacecraft to use this gravitational "slingshot" effect and the first NASA mission to visit multiple planets. Mariner 10 provided the first close-up images of Mercury's surface, which immediately showed its heavily cratered nature, and revealed many other types of geological features, such as the giant scarps that were later ascribed to the effect of the planet shrinking slightly as its iron core cools. Unfortunately, the same face of the planet was lit at each of Mariner 10's close approaches. This made close observation of both sides of the planet impossible, and resulted in the mapping of less than 45% of the planet's surface. The spacecraft made three close approaches to Mercury, the closest of which took it to within 327 km of the surface. 
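The delta-v penalty described above can be roughly quantified with a two-burn Hohmann transfer between idealized circular, coplanar orbits (a simplification; the figures below are illustrative, not mission values):

```python
import math

# Rough two-burn Hohmann-transfer budget from Earth's orbit to Mercury's,
# using the vis-viva equation. Circular, coplanar orbits are assumed, so
# this is an illustrative sketch, not a mission figure.
MU_SUN = 1.327e20     # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11    # Earth's orbital radius, m
R_MERCURY = 5.79e10   # Mercury's orbital radius (semi-major axis), m

def vis_viva(r, a):
    """Heliocentric speed at distance r on an orbit of semi-major axis a."""
    return math.sqrt(MU_SUN * (2.0 / r - 1.0 / a))

a_transfer = (R_EARTH + R_MERCURY) / 2.0
dv_depart = vis_viva(R_EARTH, R_EARTH) - vis_viva(R_EARTH, a_transfer)
dv_arrive = vis_viva(R_MERCURY, a_transfer) - vis_viva(R_MERCURY, R_MERCURY)

print(round((dv_depart + dv_arrive) / 1000.0, 1))  # ~17 km/s total
```

The equivalent heliocentric budget for a Mars transfer is only a few km/s, which is why real Mercury missions lean on repeated gravity assists instead of direct transfers.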
At the first close approach, instruments detected a magnetic field, to the great surprise of planetary geologists—Mercury's rotation was expected to be much too slow to generate a significant dynamo effect. The second close approach was primarily used for imaging, but at the third approach, extensive magnetic data were obtained. The data revealed that the planet's magnetic field is much like Earth's, which deflects the solar wind around the planet. For many years after the encounters, the origin of Mercury's magnetic field remained the subject of several competing theories. On March 24, 1975, just eight days after its final close approach, Mariner 10 ran out of fuel. Because its orbit could no longer be accurately controlled, mission controllers instructed the probe to shut down. Mariner 10 is thought to be still orbiting the Sun, passing close to Mercury every few months. A second NASA mission to Mercury, named "MESSENGER" (MErcury Surface, Space ENvironment, GEochemistry, and Ranging), was launched on August 3, 2004. It made a fly-by of Earth in August 2005, and of Venus in October 2006 and June 2007 to place it onto the correct trajectory to reach an orbit around Mercury. A first fly-by of Mercury occurred on January 14, 2008, a second on October 6, 2008, and a third on September 29, 2009. Most of the hemisphere not imaged by Mariner 10 was mapped during these fly-bys. The probe successfully entered an elliptical orbit around the planet on March 18, 2011. The first orbital image of Mercury was obtained on March 29, 2011. The probe finished a one-year mapping mission, and then entered a one-year extended mission into 2013. In addition to continued observations and mapping of Mercury, "MESSENGER" observed the 2012 solar maximum. The mission was designed to clear up six key issues: Mercury's high density, its geological history, the nature of its magnetic field, the structure of its core, whether it has ice at its poles, and where its tenuous atmosphere comes from. 
To this end, the probe carried imaging devices that gathered much-higher-resolution images of much more of Mercury than Mariner 10 had, assorted spectrometers to determine abundances of elements in the crust, and magnetometers and devices to measure velocities of charged particles. Measurements of changes in the probe's orbital velocity were expected to be used to infer details of the planet's interior structure. "MESSENGER"'s final maneuver was on April 24, 2015, and it crashed into Mercury's surface on April 30, 2015. The spacecraft's impact with Mercury occurred near 3:26 PM EDT on April 30, 2015, leaving a crater estimated to be 16 m in diameter. The European Space Agency and the Japanese Space Agency developed and launched a joint mission called "BepiColombo", which will orbit Mercury with two probes: one to map the planet and the other to study its magnetosphere. Launched on October 20, 2018, "BepiColombo" is expected to reach Mercury in 2025. It will release a magnetometer probe into an elliptical orbit, then chemical rockets will fire to deposit the mapper probe into a circular orbit. Both probes will operate for one terrestrial year. The mapper probe carries an array of spectrometers similar to those on "MESSENGER", and will study the planet at many different wavelengths including infrared, ultraviolet, X-ray and gamma ray.
https://en.wikipedia.org/wiki?curid=19694
Monty Python and the Holy Grail Monty Python and the Holy Grail is a 1975 British comedy film concerning the Arthurian legend, written and performed by the Monty Python comedy group of Graham Chapman, John Cleese, Terry Gilliam, Eric Idle, Terry Jones and Michael Palin, and directed by Gilliam and Jones. It was conceived during the hiatus between the third and fourth series of their BBC television series "Monty Python's Flying Circus". While the group's first film, "And Now for Something Completely Different", was a compilation of sketches from the first two television series, "Holy Grail" is a new story that parodies the legend of King Arthur's quest for the Holy Grail. Thirty years later, Idle used the film as the basis for the musical "Spamalot". "Monty Python and the Holy Grail" grossed more than any other British film exhibited in the US in 1975. In the US, it was selected as the second-best comedy of all time in the ABC special "". In the UK, readers of "Total Film" magazine in 2000 ranked it the fifth-greatest comedy film of all time; a similar poll of Channel 4 viewers in 2006 placed it sixth. In AD 932, King Arthur and his squire, Patsy, travel throughout Britain searching for men to join the Knights of the Round Table. Along the way, Arthur debates whether swallows could carry coconuts, recounts receiving Excalibur from the Lady of the Lake, defeats the Black Knight and observes an impromptu witch trial. He recruits Sir Bedevere the Wise, Sir Lancelot the Brave, Sir Galahad the Pure and Sir Robin the Not-Quite-So-Brave-as-Sir-Lancelot, along with their squires and Robin's minstrels. Arthur leads the knights to Camelot, but upon further consideration (thanks to a musical number) he decides not to go there because it is "a silly place". As they turn away, God (an image of W. G. Grace) appears and orders Arthur to find the Holy Grail. 
Searching for the Grail, Arthur and his knights arrive at a castle occupied by French soldiers who claim to have the Grail and taunt the Englishmen, before driving them back with a barrage of barnyard animals. Bedevere concocts a plan to sneak in using a Trojan Rabbit, but no one hides inside it, and the Englishmen are forced to flee when the rabbit is flung back at them. Arthur decides the knights should go their separate ways to search for the Grail. A modern-day historian filming a documentary on the Arthurian legends is abruptly killed by an unknown knight on horseback, triggering a police investigation. On the knights' travels, Arthur and Bedevere are given directions by an old man in Scene 24 and then attempt to satisfy the strange requests of the dreaded Knights Who Say "Ni!". Sir Robin avoids a fight with a Three-Headed Knight by running away while the heads are arguing. Sir Galahad is led by a grail-shaped beacon to Castle Anthrax, populated by 150 young women, but is unwillingly "rescued" by Lancelot. Next, Lancelot receives an arrow-shot note from Swamp Castle and believes it to be from a lady being forced to marry against her will. He storms the castle and slaughters several members of the wedding party, only to discover the note was from an effeminate prince being coerced into marriage by his ambitious father. Arthur and his knights regroup and are joined by three new knights as well as Brother Maynard and his monk brethren. They meet Tim the Enchanter, who directs them to a cave where the location of the Grail is said to be written. The entrance to the cave is guarded by the Rabbit of Caerbannog. Fatally underestimating its lethal prowess, the knights attack the Rabbit, which easily kills Sirs Bors, Gawain and Ector. Arthur uses the "Holy Hand Grenade of Antioch", provided by Brother Maynard, to destroy the creature. Inside the cave, they find an inscription from Joseph of Arimathea, directing them to Castle Aarrgh. 
A giant animated cave monster devours Brother Maynard, but Arthur and the knights escape after the animator suffers a fatal heart attack, ending the beast's existence. The protagonists approach the Bridge of Death, where the bridge-keeper challenges them to answer three questions to pass or they will be cast into the Gorge of Eternal Peril. Lancelot goes first, easily answers the elementary questions and crosses. Robin is done in by an unexpectedly difficult question, and Galahad misses the answer to an easy one; both are flung into the gorge. When the bridge-keeper asks "What is the air-speed velocity of an unladen swallow?" Arthur asks for clarification ("African or European?"). The bridge-keeper cannot answer, and is thrown into the gorge himself. Upon crossing the bridge, Arthur and Bedevere cannot find Lancelot, unaware he has been arrested by police investigating the historian's death. Arthur and Bedevere reach Castle Aarrgh, only to find it occupied by the same French soldiers who taunted them previously. After being repelled by showers of manure, they summon an army of knights and prepare to assault the castle. Just as the army charges, police arrive and arrest Arthur and Bedevere for the historian's death. The film ends with an officer breaking the camera. Fifteen months before the BBC visited the set in May 1974, the Monty Python troupe assembled the first version of the screenplay. Since half of the resulting material was set in the Middle Ages and half in the present day, the group opted to focus on the Middle Ages, centring the story on the legend of the Holy Grail. By the fourth or fifth version of the screenplay, the story was complete, and the cast joked that the fact that the Grail was never retrieved would be "a big let-down ... a great anti-climax". Graham Chapman said a challenge was incorporating scenes that did not fit the Holy Grail motif.
Neither Terry Gilliam nor Terry Jones had directed a film before; they described the production as a learning experience in which they would learn to make a film by making an entire full-length one. The cast humorously described the novice directing style as employing the level of mutual disrespect always found in Monty Python's work. The film's initial budget of approximately £200,000 was raised by convincing 10 separate investors to contribute £20,000 apiece. Three of those investors were the rock bands Pink Floyd, Led Zeppelin and Genesis, who were persuaded to help fund the film by Tony Stratton-Smith, head of Charisma Records (the record label that released Python's early comedy albums). According to Terry Gilliam, the Pythons turned to rock stars such as Pink Floyd, Led Zeppelin and Elton John for finance because the studios refused to fund the film and rock stars saw it as "a good tax write-off", UK income tax being "as high as 90%" at the time. "Monty Python and the Holy Grail" was mostly shot on location in Scotland, particularly around Doune Castle, Glen Coe, and the privately owned Castle Stalker. The many castles seen throughout the film were mainly either Doune Castle shot from different angles or hanging miniatures. There are several exceptions: the very first exterior shot of a castle at the beginning of the film is Kidwelly Castle in South Wales, and the single exterior shot of Swamp Castle during "Tale of Sir Lancelot" is Bodiam Castle in East Sussex; all subsequent exterior and interior shots of those scenes were filmed at Doune Castle. Production designer Julian Doyle recounted that his crew constructed walls in the forest near Doune. Terry Jones later recalled the crew had selected more castles around Scotland as locations, but during the two weeks prior to principal photography, the Scottish Department of the Environment declined permission to use the castles in its jurisdiction, for fear of damage.
At the start of "The Tale of Sir Robin", there is a slow camera zoom in on rocky scenery (described in the voice-over as "the dark forest of Ewing"). This is actually a still photograph of the gorge at Mount Buffalo National Park in Victoria, Australia. Doyle stated in 2000, in an interview with "Hotdog" magazine, that it was a still image filmed with candles underneath the frame (to give a heat haze), a low-cost method of achieving a convincing location effect. On the DVD audio commentary, Cleese described challenges in shooting and editing Castle Anthrax in "The Tale of Sir Galahad", with what he felt was the most comedic take going unused because an anachronistic coat was visible in it. Castle Anthrax was also shot in one part of Doune, where costume designer Hazel Pethig advised against nudity, dressing the girls in shifts. In the scene where the knights combat the Rabbit of Caerbannog, a real white rabbit was used, switched with puppets for its killings. It was covered with red liquid to simulate blood, though the rabbit's owner, who did not want the animal dirtied, was kept unaware; the liquid proved difficult to remove from the fur. He also stated that, had the crew been more experienced in filmmaking, he thought they would simply have purchased a rabbit. Otherwise, the rabbit itself was unharmed. The rabbit-bite effects were achieved through puppetry by Gilliam and SFX technician John Horton. As chronicled in "The Life of Python", "The First 20 Years of Monty Python", and "The Pythons' Autobiography", Chapman suffered from acrophobia, trembling and bouts of forgetfulness during filming due to his alcoholism, prompting him to refrain from drinking while the production continued in order to remain "on an even keel". Nearly three years later, in December 1977, Chapman achieved sobriety.
Originally the knight characters were going to ride real horses, but after it became clear that the film's small budget precluded them (except for a lone horse appearing in a couple of scenes), the Pythons decided their characters would mime horse-riding while their porters trotted behind them banging coconut shells together. The joke was derived from the old-fashioned sound effect used by radio shows to convey the sound of hooves clattering. This was later referenced in the German release of the film, which translated the title as "Die Ritter der Kokosnuß" ("The Knights of the Coconut"). The opening credits of the film feature pseudo-Swedish subtitles, which soon turn into an appeal to visit Sweden and see the country's moose. The subtitles are soon stopped, but moose references continue throughout the actual credits until the credits are stopped again and restarted in a different visual style and with references to llamas, animals often mentioned in "Flying Circus". The subtitles were written by Michael Palin as a way to "entertain the 'captive' audience" at the beginning of the film. In addition to several songs written by Python regular Neil Innes, several pieces of music were licensed from De Wolfe Music Library. "Monty Python and the Holy Grail" had its theatrical debut in the United Kingdom on 3 April 1975, followed by a United States release on 27 April 1975. It was re-released on 14 October 2015 in the United Kingdom. The film had its television premiere on 25 February 1977 on the "CBS Late Movie". Reportedly, the Pythons were displeased to discover that the network had made a number of edits to reduce the use of profanity and the showing of blood. The troupe pulled back the rights and thereafter had the film broadcast in the United States only on PBS and later on other channels such as Comedy Central and IFC, where it runs uncut.
In Region 1, The Criterion Collection released a LaserDisc version of the film featuring audio commentary from directors Jones and Gilliam. In 2001, Columbia Tristar published a two-disc, special-edition DVD. Disc one included two commentary tracks featuring Idle, Palin and Cleese in the first, and Jones and Gilliam in the second, and "Subtitles for People Who Don't Like the Film", consisting of lines taken from William Shakespeare's "Henry IV, Part 2". Disc two includes "Monty Python and the Holy Grail in Lego" (also known as "Lego Knights" or "It's Only a Model"), a "brickfilm" version of the "Camelot Song" as sung by Lego minifigures. It was created by Spite Your Face Productions on commission from the Lego Group and Python Pictures. The project was conceived by the original film's respective producer and co-director, John Goldstone and Terry Gilliam. Disc two also includes two scenes from the film's Japanese dub, literally translated back into English through subtitles. "The Quest for the Holy Grail Locations", hosted by Palin and Jones, shows places in Scotland used for the setting titled as "England 932 A.D." (as well as the two Pythons purchasing a copy of their own script as a guide). Also included is a who's who page, advertising galleries and sing-alongs. A limited-edition DVD release additionally included a copy of the screenplay, a limited-edition film cell/senitype, and limited-edition art cards; however, a few of the bonus features from the 'Extraordinarily Deluxe Edition' were omitted. A 35th-anniversary edition on Blu-ray was released in the US on 6 March 2012. Special features include "The Holy Book of Days," a second-screen experience that can be downloaded as an app on an iOS device and played with the Blu-ray to enhance its viewing, lost animation sequences with a new intro from animator Terry Gilliam, outtakes and extended scenes with Python member and the movie's co-director Terry Jones. Contemporary reviews were mixed. 
Vincent Canby of "The New York Times" wrote in a favourable review that the film had "some low spots," but had gags which were "nonstop, occasionally inspired and should not be divulged, though it's not giving away too much to say that I particularly liked a sequence in which the knights, to gain access to an enemy castle, come up with the idea of building a Trojan rabbit." Charles Champlin of the "Los Angeles Times" was also positive, writing that the film, "like "Mad" comics, is not certain to please every taste. But its youthful exuberance and its rousing zaniness are hard not to like. As a matter of fact, the sense of fun is dangerously contagious." Penelope Gilliatt of "The New Yorker" called the film "often recklessly funny and sometimes a matter of comic genius." Other reviews were less enthusiastic. "Variety" wrote that the storyline was "basically an excuse for set pieces, some amusing, others overdone." Gene Siskel of the "Chicago Tribune" gave the film two-and-a-half stars, writing that he felt "it contained about 10 very funny moments and 70 minutes of silence. Too many of the jokes took too long to set up, a trait shared by both "Blazing Saddles" and "Young Frankenstein". I guess I prefer Monty Python in chunks, in its original, television revue format." Gary Arnold of "The Washington Post" called the film "a fitfully amusing spoof of the Arthurian legends" but "rather poky" in tempo, citing the running gag of Swedish subtitles in the opening credits as an example of how the Pythons "don't know when to let go of any "shtik"". 
Geoff Brown of "The Monthly Film Bulletin" wrote in a mixed review that "the team's visual buffooneries and verbal rigamaroles (some good, some bad, but mostly indifferent) are piled on top of each other with no attention to judicious timing or structure, and a form which began as a jaunty assault on the well-made revue sketch and an ingenious misuse of television's fragmented style of presentation, threatens to become as unyielding and unfruitful as the conventions it originally attacked." The film's reputation grew over time. In 2000, readers of "Total Film" magazine voted "Holy Grail" the fifth-greatest comedy film of all time. The next Python film, "Life of Brian", was ranked first. A 2006 poll of Channel 4 viewers on the 50 Greatest Comedy Films saw "Holy Grail" placed in sixth place (with "Life of Brian" again topping the list). In 2011, an ABC prime-time special, "", counted down the best films chosen by fans based on results of a poll conducted by ABC and "People". "Holy Grail" was selected as the second best comedy after "Airplane!". In 2016, "Empire" magazine ranked "Holy Grail" 18th in their list of the 100 best British films ("Life of Brian" was ranked 2nd), with their entry stating, "Elvis ordered a print of this comedy classic and watched it five times. If it's good enough for the King, it's good enough for you." In a 2017 interview at Indiana University in Bloomington, John Cleese expressed disappointment with the film's conclusion. "'The ending annoys me the most'", he said after a screening of the film on the Indiana campus, adding that "'It ends the way it does because we couldn't think of any other way'". However, scripts for the film and notebooks that are among Michael Palin's private archive, which he donated to the British Library in 2017, do document at least one alternate ending that the troupe considered: "a battle between the knights of Camelot, the French, and the Killer Rabbit of Caerbannog". 
Due to the film's small production budget, that idea or a "much pricier option" was discarded by the Pythons in favour of the ending with "King Arthur getting arrested", which Palin deemed "'cheaper'" and "'funnier'". During Fox News's coverage of the 2020 George Floyd protests, and more specifically their reports on Seattle's Capitol Hill Autonomous Zone, news anchor Martha MacCallum mistook a satirical quote from "Monty Python and the Holy Grail" that had been posted on Reddit as being a genuine view held by a protester. Monty Python co-founder John Cleese expressed amusement at the error in a Twitter post. In 2005, the film was adapted as a Tony Award-winning Broadway musical, "Spamalot". Written primarily by Idle, the show has more of an overarching plot and leaves out certain portions of the movie due to difficulties in rendering certain effects on stage. Nonetheless, many of the jokes from the film are present in the show. In 2013, the Pythons lost a legal case to Mark Forstater, the film's producer, over royalties for the derivative work, "Spamalot". They owed a combined £800,000 in legal fees and back royalties to Forstater. To help cover the cost of these royalties and fees, the group arranged and performed in a stage show, "Monty Python Live (Mostly)", held at the O2 Arena in London in July 2014.
https://en.wikipedia.org/wiki?curid=19701
Mutation In biology, a mutation is an alteration in the nucleotide sequence of the genome of an organism, virus, or extrachromosomal DNA. Viral genomes can be of either DNA or RNA. Mutations result from errors during DNA replication, mitosis, and meiosis or other types of damage to DNA (such as pyrimidine dimers that may be caused by exposure to radiation or carcinogens), which then may undergo error-prone repair (especially microhomology-mediated end joining), cause an error during other forms of repair, or cause an error during replication (translesion synthesis). Mutations may also result from insertion or deletion of segments of DNA due to mobile genetic elements. Mutations may or may not produce discernible changes in the observable characteristics (phenotype) of an organism. Mutations play a part in both normal and abnormal biological processes including evolution, cancer, and the development of the immune system, including junctional diversity. Mutation is the ultimate source of all genetic variation, providing the raw material on which evolutionary forces such as natural selection can act. Mutation can result in many different types of change in sequences. Mutations in genes can either have no effect, alter the product of a gene, or prevent the gene from functioning properly or completely. Mutations can also occur in nongenic regions. A 2007 study on genetic variations between different species of "Drosophila" suggested that, if a mutation changes a protein produced by a gene, the result is likely to be harmful, with an estimated 70% of amino acid polymorphisms having damaging effects and the remainder being either neutral or marginally beneficial. Because of the damaging effects that mutations can have on genes, organisms have mechanisms such as DNA repair to prevent or correct mutations by reverting the mutated sequence back to its original state.
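As a rough illustration of the sequence-level changes just described, the sketch below applies a substitution, an insertion, and a deletion to a toy DNA string; the sequence and positions are arbitrary examples, not data from any cited study.

```python
def substitute(seq, pos, base):
    """Substitution: replace a single nucleotide."""
    return seq[:pos] + base + seq[pos + 1:]

def insert(seq, pos, fragment):
    """Insertion: splice extra nucleotides in at a position."""
    return seq[:pos] + fragment + seq[pos:]

def delete(seq, pos, length=1):
    """Deletion: remove one or more nucleotides starting at a position."""
    return seq[:pos] + seq[pos + length:]

gene = "ATGGCCTTA"
print(substitute(gene, 3, "T"))  # ATGTCCTTA
print(insert(gene, 3, "AC"))     # ATGACGCCTTA
print(delete(gene, 3, 2))        # ATGCTTA
```

Insertions or deletions whose length is not a multiple of three shift the reading frame of every downstream codon, which is one reason they tend to be more disruptive than single substitutions.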
Mutations can involve the duplication of large sections of DNA, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. Most genes belong to larger gene families of shared ancestry, detectable by their sequence homology. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions. Here, protein domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for cone cell or color vision and one for rod cell or night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases engineering redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA. Changes in chromosome number may involve even larger mutations, where segments of the DNA within chromosomes break and then rearrange. For example, in the Homininae, two chromosomes fused to produce human chromosome 2; this fusion did not occur in the lineage of the other apes, and they retain these separate chromosomes. In evolution, the most important role of such chromosomal rearrangements may be to accelerate the divergence of a population into new species by making populations less likely to interbreed, thereby preserving genetic differences between these populations. 
Sequences of DNA that can move about the genome, such as transposons, make up a major fraction of the genetic material of plants and animals, and may have been important in the evolution of genomes. For example, more than a million copies of the Alu sequence are present in the human genome, and these sequences have now been recruited to perform functions such as regulating gene expression. Another effect of these mobile DNA sequences is that when they move within a genome, they can mutate or delete existing genes and thereby produce genetic diversity. Nonlethal mutations accumulate within the gene pool and increase the amount of genetic variation. The abundance of some genetic changes within the gene pool can be reduced by natural selection, while other "more favorable" mutations may accumulate and result in adaptive changes. For example, a butterfly may produce offspring with new mutations. The majority of these mutations will have no effect; but one might change the color of one of the butterfly's offspring, making it harder (or easier) for predators to see. If this color change is advantageous, the chances of this butterfly's surviving and producing its own offspring are a little better, and over time the number of butterflies with this mutation may form a larger percentage of the population. Neutral mutations are defined as mutations whose effects do not influence the fitness of an individual. These can increase in frequency over time due to genetic drift. It is believed that the overwhelming majority of mutations have no significant effect on an organism's fitness. Also, DNA repair mechanisms are able to mend most changes before they become permanent mutations, and many organisms have mechanisms for eliminating otherwise-permanently mutated somatic cells. Beneficial mutations can improve reproductive success. 
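The fate of a neutral mutation under genetic drift, as described above, can be sketched with a minimal Wright–Fisher-style simulation; the population size, starting frequency, and seed below are illustrative assumptions, not figures from the article.

```python
import random

def wright_fisher(pop_size, freq, generations, seed=42):
    """Track a neutral allele's frequency under pure genetic drift:
    each generation, every individual independently inherits the allele
    with probability equal to its current frequency (no selection)."""
    rng = random.Random(seed)
    history = [freq]
    for _ in range(generations):
        count = sum(rng.random() < freq for _ in range(pop_size))
        freq = count / pop_size
        history.append(freq)
        if freq in (0.0, 1.0):   # allele lost or fixed
            break
    return history

trajectory = wright_fisher(pop_size=100, freq=0.05, generations=200)
print(f"start {trajectory[0]:.2f}, end {trajectory[-1]:.2f} "
      f"after {len(trajectory) - 1} generations")
```

Because no selection term appears, runs differ only by chance: rerunning with other seeds shows the allele sometimes drifting to loss (frequency 0) and occasionally to fixation (frequency 1).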
Four classes of mutations are (1) spontaneous mutations (molecular decay), (2) mutations due to error-prone replication bypass of naturally occurring DNA damage (also called error-prone translesion synthesis), (3) errors introduced during DNA repair, and (4) induced mutations caused by mutagens. Scientists may also deliberately introduce mutant sequences through DNA manipulation for the sake of scientific experimentation. One 2017 study claimed that 66% of cancer-causing mutations are random, 29% are due to the environment (the studied population spanned 69 countries), and 5% are inherited. Humans on average pass 60 new mutations to their children, but fathers pass more mutations with age: each additional year of paternal age adds about two new mutations to a child. "Spontaneous mutations" occur with non-zero probability even in a healthy, uncontaminated cell, and can be characterized by the specific change involved. There is increasing evidence that the majority of spontaneously arising mutations are due to error-prone replication (translesion synthesis) past DNA damage in the template strand. Naturally occurring oxidative DNA damage arises at least 10,000 times per cell per day in humans and 50,000 times or more per cell per day in rats. In mice, the majority of mutations are caused by translesion synthesis. Likewise, in yeast, Kunz et al. found that more than 60% of spontaneous single base pair substitutions and deletions were caused by translesion synthesis. Although naturally occurring double-strand breaks occur at a relatively low frequency in DNA, their repair often causes mutation. Non-homologous end joining (NHEJ) is a major pathway for repairing double-strand breaks. NHEJ involves removal of a few nucleotides to allow somewhat inaccurate alignment of the two ends for rejoining, followed by addition of nucleotides to fill in gaps. As a consequence, NHEJ often introduces mutations.
Induced mutations are alterations in a gene after it has come into contact with mutagens and environmental causes, and can be triggered at the molecular level by a variety of chemical and physical agents. Whereas in former times mutations were assumed to occur by chance, or to be induced by mutagens, molecular mechanisms of mutation have been discovered in bacteria and across the tree of life. As S. Rosenberg states, "These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation." Since they are self-induced mutagenic mechanisms that increase the adaptation rate of organisms, they have sometimes been termed adaptive mutagenesis mechanisms; they include the SOS response in bacteria, ectopic intrachromosomal recombination and other chromosomal events such as duplications. The sequence of a gene can be altered in a number of ways. Gene mutations have varying effects on health depending on where they occur and whether they alter the function of essential proteins. Mutations in the structure of genes can be classified into several types, ranging from large-scale mutations in chromosomal structure to small-scale mutations that affect a gene in only one or a few nucleotides (if only a single nucleotide is affected, they are called point mutations). The effect of a mutation on protein sequence depends in part on where in the genome it occurs, especially whether it is in a coding or non-coding region. Mutations in the non-coding regulatory sequences of a gene, such as promoters, enhancers, and silencers, can alter levels of gene expression, but are less likely to alter the protein sequence. Mutations within introns and in regions with no known biological function (e.g.
pseudogenes, retrotransposons) are generally neutral, having no effect on phenotype, though intron mutations can alter the protein product if they affect mRNA splicing. Mutations that occur in coding regions of the genome are more likely to alter the protein product and can be categorized by their effect on the amino acid sequence. In applied genetics, it is usual to speak of mutations as either harmful or beneficial. Attempts have been made to infer the distribution of fitness effects (DFE) using mutagenesis experiments and theoretical models applied to molecular sequence data. The DFE, as used to determine the relative abundance of different types of mutations (i.e., strongly deleterious, nearly neutral or advantageous), is relevant to many evolutionary questions, such as the maintenance of genetic variation, the rate of genomic decay, the maintenance of outcrossing sexual reproduction as opposed to inbreeding, and the evolution of sex and genetic recombination. The DFE can also be studied by tracking the skewness of the distribution of mutations with putatively severe effects as compared to the distribution of mutations with putatively mild or absent effects. In summary, the DFE plays an important role in predicting evolutionary dynamics. A variety of approaches have been used to study the DFE, including theoretical, experimental and analytical methods. One of the earliest theoretical studies of the distribution of fitness effects was done by Motoo Kimura, an influential theoretical population geneticist. His neutral theory of molecular evolution proposes that most novel mutations will be highly deleterious, with a small fraction being neutral. Hiroshi Akashi more recently proposed a bimodal model for the DFE, with modes centered around highly deleterious and neutral mutations. Both theories agree that the vast majority of novel mutations are neutral or deleterious and that advantageous mutations are rare, which has been supported by experimental results.
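The categorization of coding-region point mutations by their effect on the amino acid sequence (synonymous, missense, nonsense) can be sketched with a toy codon lookup; the table below is a deliberately tiny subset of the standard genetic code, covering only the codons used in the example.

```python
# Tiny subset of the standard genetic code (DNA sense-strand codons).
CODONS = {"AAA": "Lys", "AAG": "Lys", "GAA": "Glu", "TAA": "STOP"}

def classify_point_mutation(codon, pos, base):
    """Categorize a single-nucleotide substitution by its effect on
    the encoded amino acid."""
    mutant = codon[:pos] + base + codon[pos + 1:]
    before, after = CODONS[codon], CODONS[mutant]
    if after == before:
        return "synonymous"   # same amino acid encoded
    if after == "STOP":
        return "nonsense"     # premature stop codon
    return "missense"         # different amino acid

print(classify_point_mutation("AAA", 2, "G"))  # synonymous (AAA and AAG both encode Lys)
print(classify_point_mutation("AAA", 0, "G"))  # missense (Lys -> Glu)
print(classify_point_mutation("AAA", 0, "T"))  # nonsense (Lys -> stop)
```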
One example is a study done on the DFE of random mutations in vesicular stomatitis virus. Out of all mutations, 39.6% were lethal, 31.2% were non-lethal deleterious, and 27.1% were neutral. Another example comes from a high-throughput mutagenesis experiment with yeast. In this experiment it was shown that the overall DFE is bimodal, with a cluster of neutral mutations and a broad distribution of deleterious mutations. Though relatively few mutations are advantageous, those that are play an important role in evolutionary change. Like neutral mutations, weakly selected advantageous mutations can be lost due to random genetic drift, but strongly selected advantageous mutations are more likely to be fixed. Knowing the DFE of advantageous mutations may lead to an increased ability to predict evolutionary dynamics. Theoretical work on the DFE for advantageous mutations has been done by John H. Gillespie and H. Allen Orr. They proposed that the distribution of advantageous mutations should be exponential under a wide range of conditions, which, in general, has been supported by experimental studies, at least for strongly selected advantageous mutations. In general, it is accepted that the majority of mutations are neutral or deleterious, with advantageous mutations being rare; however, the proportion of types of mutations varies between species. This indicates two important points: first, the proportion of effectively neutral mutations is likely to vary between species, resulting from dependence on effective population size; second, the average effect of deleterious mutations varies dramatically between species. In addition, the DFE also differs between coding regions and noncoding regions, with the DFE of noncoding DNA containing more weakly selected mutations.
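The bimodal DFE and the exponential tail of advantageous effects described above can be mimicked with a toy sampler; all mixture weights and distribution parameters here are made-up illustration values, not estimates from the cited studies.

```python
import random

def sample_dfe(n, seed=0):
    """Draw illustrative selection coefficients from a bimodal DFE:
    a cluster of near-neutral mutations plus a broad deleterious mode,
    with rare advantageous mutations whose effects are exponentially
    distributed (the Gillespie/Orr prediction)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        u = rng.random()
        if u < 0.30:                     # near-neutral cluster
            draws.append(rng.gauss(0.0, 0.001))
        elif u < 0.99:                   # broad deleterious mode
            draws.append(-rng.expovariate(10))
        else:                            # rare advantageous exponential tail
            draws.append(rng.expovariate(100))
    return draws

coeffs = sample_dfe(10_000)
frac_deleterious = sum(s < -0.005 for s in coeffs) / len(coeffs)
print(f"fraction clearly deleterious: {frac_deleterious:.2f}")
```

With these weights, most draws fall well below zero, echoing the consensus that deleterious and neutral mutations dominate while advantageous ones are rare.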
In multicellular organisms with dedicated reproductive cells, mutations can be subdivided into germline mutations, which can be passed on to descendants through their reproductive cells, and somatic mutations (also called acquired mutations), which involve cells outside the dedicated reproductive group and which are not usually transmitted to descendants. Diploid organisms (e.g., humans) contain two copies of each gene—a paternal and a maternal allele. Based on the occurrence of mutation on each chromosome, we may classify mutations into three types. A wild type or homozygous non-mutated organism is one in which neither allele is mutated. A germline mutation in the reproductive cells of an individual gives rise to a "constitutional mutation" in the offspring, that is, a mutation that is present in every cell. A constitutional mutation can also occur very soon after fertilisation, or continue from a previous constitutional mutation in a parent. A germline mutation can be passed down through subsequent generations of organisms. The distinction between germline and somatic mutations is important in animals that have a dedicated germline to produce reproductive cells. However, it is of little value in understanding the effects of mutations in plants, which lack a dedicated germline. The distinction is also blurred in those animals that reproduce asexually through mechanisms such as budding, because the cells that give rise to the daughter organisms also give rise to that organism's germline. A new germline mutation not inherited from either parent is called a "de novo" mutation. A change in the genetic structure that is not inherited from a parent, and also not passed to offspring, is called a somatic mutation. Somatic mutations are not inherited by an organism's offspring because they do not affect the germline. However, they are passed down to all the progeny of a mutated cell within the same organism during mitosis.
A major section of an organism therefore might carry the same mutation. These types of mutations are usually prompted by environmental causes, such as ultraviolet radiation or exposure to certain harmful chemicals, and can cause diseases including cancer. With plants, some somatic mutations can be propagated without the need for seed production, for example, by grafting and stem cuttings. These types of mutations have led to new varieties of fruits, such as the "Delicious" apple and the "Washington" navel orange. Human and mouse somatic cells have a mutation rate more than ten times higher than the germline mutation rate for both species; mice have a higher rate of both somatic and germline mutations per cell division than humans. The disparity in mutation rate between the germline and somatic tissues likely reflects the greater importance of genome maintenance in the germline than in the soma. In order to categorize a mutation as such, the "normal" sequence must be obtained from the DNA of a "normal" or "healthy" organism (as opposed to a "mutant" or "sick" one), identified, and reported; ideally, it should be made publicly available for a straightforward nucleotide-by-nucleotide comparison, and agreed upon by the scientific community or by a group of expert geneticists and biologists, who have the responsibility of establishing the "standard" or so-called "consensus" sequence. This step requires a tremendous scientific effort. Once the consensus sequence is known, the mutations in a genome can be pinpointed, described, and classified. The committee of the Human Genome Variation Society (HGVS) has developed the standard human sequence variant nomenclature, which should be used by researchers and DNA diagnostic centers to generate unambiguous mutation descriptions. In principle, this nomenclature can also be used to describe mutations in other organisms. The nomenclature specifies the type of mutation and base or amino acid changes.
Mutation rates vary substantially across species, and the evolutionary forces that generally determine mutation rates are the subject of ongoing investigation. The genomes of RNA viruses are based on RNA rather than DNA. The RNA viral genome can be double-stranded (as in DNA) or single-stranded. In some of these viruses (such as the single-stranded human immunodeficiency virus), replication occurs quickly, and there are no mechanisms to check the genome for accuracy. This error-prone process often results in mutations. Changes in DNA caused by mutation in a coding region can cause errors in protein sequence that may result in partially or completely non-functional proteins. Each cell, in order to function correctly, depends on thousands of proteins functioning in the right places at the right times. When a mutation alters a protein that plays a critical role in the body, a medical condition can result. Some mutations alter a gene's DNA base sequence but do not change the function of the protein made by the gene. One study on the comparison of genes between different species of "Drosophila" suggests that if a mutation does change a protein, the mutation will most likely be harmful, with an estimated 70 percent of amino acid polymorphisms having damaging effects, and the remainder being either neutral or weakly beneficial. However, studies have shown that only 7% of point mutations in noncoding DNA of yeast are deleterious and 12% in coding DNA are deleterious. The rest of the mutations are either neutral or slightly beneficial. If a mutation is present in a germ cell, it can give rise to offspring that carries the mutation in all of its cells. This is the case in hereditary diseases. In particular, if there is a mutation in a DNA repair gene within a germ cell, humans carrying such germline mutations may have an increased risk of cancer. A list of 34 such germline mutations is given in the article DNA repair-deficiency disorder.
An example of one is albinism, a mutation that occurs in the OCA1 or OCA2 gene. Individuals with this disorder are more prone to many types of cancers and other disorders, and have impaired vision. DNA damage can cause an error when the DNA is replicated, and this error of replication can cause a gene mutation that, in turn, could cause a genetic disorder. DNA damage is repaired by the DNA repair system of the cell. Each cell has a number of pathways through which enzymes recognize and repair damage in DNA. Because DNA can be damaged in many ways, the process of DNA repair is an important way in which the body protects itself from disease. Once DNA damage has given rise to a mutation, the mutation cannot be repaired. On the other hand, a mutation may occur in a somatic cell of an organism. Such mutations will be present in all descendants of this cell within the same organism. The accumulation of certain mutations over generations of somatic cells is part of the cause of malignant transformation, from normal cell to cancer cell. Cells with heterozygous loss-of-function mutations (one good copy of the gene and one mutated copy) may function normally with the unmutated copy until the good copy has been spontaneously somatically mutated. This kind of mutation happens often in living organisms, but it is difficult to measure the rate. Measuring this rate is important in predicting the rate at which people may develop cancer. Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention. Prions are proteins and do not contain genetic material.
However, prion replication has been shown to be subject to mutation and natural selection just like other forms of replication. The human gene PRNP codes for the major prion protein, PrP, and is subject to mutations that can give rise to disease-causing prions. Although mutations that cause changes in protein sequences can be harmful to an organism, on occasion the effect may be positive in a given environment. In this case, the mutation may enable the mutant organism to withstand particular environmental stresses better than wild-type organisms, or reproduce more quickly. In these cases a mutation will tend to become more common in a population through natural selection. Examples include the following: HIV resistance: a specific 32-base-pair deletion in human CCR5 (CCR5-Δ32) confers HIV resistance to homozygotes and delays AIDS onset in heterozygotes. One possible explanation of the etiology of the relatively high frequency of CCR5-Δ32 in the European population is that it conferred resistance to the bubonic plague in mid-14th century Europe. People with this mutation were more likely to survive infection; thus its frequency in the population increased. This theory could explain why this mutation is not found in Southern Africa, which remained untouched by bubonic plague. A newer theory suggests that the selective pressure on the CCR5-Δ32 mutation was caused by smallpox instead of the bubonic plague. Malaria resistance: an example of a harmful mutation is sickle-cell disease, a blood disorder in which the body produces an abnormal type of the oxygen-carrying substance hemoglobin in the red blood cells. One-third of all indigenous inhabitants of Sub-Saharan Africa carry the allele, because, in areas where malaria is common, there is a survival value in carrying only a single sickle-cell allele (sickle cell trait).
Those with only one of the two alleles of the sickle-cell disease are more resistant to malaria, since infection by the malaria parasite "Plasmodium" is halted by the sickling of the cells that it infects. Antibiotic resistance: practically all bacteria develop antibiotic resistance when exposed to antibiotics. In fact, bacterial populations already harbor such mutations, which are then selected under antibiotic pressure. Obviously, such mutations are beneficial only for the bacteria, not for those infected. Lactase persistence: a mutation allowed humans to express the enzyme lactase after being naturally weaned from breast milk, allowing adults to digest lactose, which is likely one of the most beneficial mutations in recent human evolution. Mutationism is one of several alternatives to evolution by natural selection that have existed both before and after the publication of Charles Darwin's 1859 book, "On the Origin of Species". In the theory, mutation was the source of novelty, creating new forms and new species, potentially instantaneously, in a sudden jump. This was envisaged as driving evolution, which was limited by the supply of mutations. Before Darwin, biologists commonly believed in saltationism, the possibility of large evolutionary jumps, including immediate speciation. For example, in 1822 Étienne Geoffroy Saint-Hilaire argued that species could be formed by sudden transformations, or what would later be called macromutation. Darwin opposed saltation, insisting on gradualism in evolution as in geology. In 1864, Albert von Kölliker revived Geoffroy's theory.
In 1901 the geneticist Hugo de Vries gave the name "mutation" to seemingly new forms that suddenly arose in his experiments on the evening primrose "Oenothera lamarckiana", and in the first decade of the 20th century, mutationism, or as de Vries named it "mutationstheorie", became a rival to Darwinism supported for a while by geneticists including William Bateson, Thomas Hunt Morgan, and Reginald Punnett. Understanding of mutationism is clouded by the mid-20th century portrayal of the early mutationists by supporters of the modern synthesis as opponents of Darwinian evolution and rivals of the biometrics school who argued that selection operated on continuous variation. In this portrayal, mutationism was defeated by a synthesis of genetics and natural selection that supposedly started later, around 1918, with work by the mathematician Ronald Fisher. However, the alignment of Mendelian genetics and natural selection began as early as 1902 with a paper by Udny Yule, and built up with theoretical and experimental work in Europe and America. Despite the controversy, the early mutationists had by 1918 already accepted natural selection and explained continuous variation as the result of multiple genes acting on the same characteristic, such as height. Mutationism, along with other alternatives to Darwinism like Lamarckism and orthogenesis, was discarded by most biologists as they came to see that Mendelian genetics and natural selection could readily work together; mutation took its place as a source of the genetic variation essential for natural selection to work on. However, mutationism did not entirely vanish. In 1940, Richard Goldschmidt again argued for single-step speciation by macromutation, describing the organisms thus produced as "hopeful monsters", earning widespread ridicule. In 1987, Masatoshi Nei argued controversially that evolution was often mutation-limited. Modern biologists such as Douglas J. 
Futuyma conclude that essentially all claims of evolution driven by large mutations can be explained by Darwinian evolution.
https://en.wikipedia.org/wiki?curid=19702
Microgyrus A microgyrus is an area of the cerebral cortex that includes only four cortical layers instead of six. Microgyria are believed by some to be part of a genetic defect of prenatal development which is a cause of, or one of the causes of, dyslexia. Albert Galaburda of Harvard Medical School noticed that language centers in dyslexic brains showed microscopic flaws known as ectopias and microgyria (Galaburda "et al.", 2006, "Nature Neuroscience" 9(10): 1213-1217). Both affect the normal six-layer structure of the cortex. These flaws affect the connectivity and functionality of the cortex in critical areas related to sound and visual processing. These and similar structural abnormalities may be the basis of the inevitable and hard-to-overcome difficulty in reading.
https://en.wikipedia.org/wiki?curid=19705
Mercantilism Mercantilism is an economic policy that is designed to maximize exports and minimize imports for an economy. It promotes imperialism, tariffs and subsidies on traded goods to achieve that goal. These policies aim to reduce a possible current account deficit or reach a current account surplus. Mercantilism includes measures aimed at accumulating monetary reserves through a positive balance of trade, especially of finished goods. Historically, such policies frequently led to war and also motivated colonial expansion. Mercantilist theory varies in sophistication from one writer to another and has evolved over time. Mercantilism was dominant in modernized parts of Europe from the 16th to the 18th centuries, a period of proto-industrialization, before falling into decline, although some commentators argue that it is still practiced in the economies of industrializing countries, in the form of economic interventionism. It promotes government regulation of a nation's economy for the purpose of augmenting state power at the expense of rival national powers. High tariffs, especially on manufactured goods, were an almost universal feature of mercantilist policy. With the efforts of supranational organizations such as the World Trade Organization to reduce tariffs globally, non-tariff barriers to trade have assumed a greater importance in neomercantilism. Mercantilism became the dominant school of economic thought in Europe throughout the late Renaissance and the early-modern period (from the 15th to the 18th centuries). Evidence of mercantilistic practices appeared in early-modern Venice, Genoa, and Pisa regarding control of the Mediterranean trade in bullion. However, the empiricism of the Renaissance, which first began to quantify large-scale trade accurately, marked mercantilism's birth as a codified school of economic theories.
The Italian economist and mercantilist Antonio Serra is considered to have written one of the first treatises on political economy with his 1613 work, "A Short Treatise on the Wealth and Poverty of Nations". Mercantilism in its simplest form is bullionism, yet mercantilist writers emphasize the circulation of money and reject hoarding. Their emphasis on monetary metals accords with current ideas regarding the money supply, such as the stimulative effect of a growing money-supply. Fiat money and floating exchange rates have since rendered specie concerns irrelevant. In time, industrial policy supplanted the heavy emphasis on money, accompanied by a shift in focus from the capacity to carry on wars to promoting general prosperity. Mature neomercantilist theory recommends selective high tariffs for "infant" industries or the promotion of the mutual growth of countries through national industrial specialization. England began the first large-scale and integrative approach to mercantilism during the Elizabethan Era (1558–1603). An early statement on national balance of trade appeared in "Discourse of the Common Weal of this Realm of England", 1549: "We must always take heed that we buy no more from strangers than we sell them, for so should we impoverish ourselves and enrich them." The period featured various but often disjointed efforts by the court of Queen Elizabeth (reigned 1558-1603) to develop a naval and merchant fleet capable of challenging the Spanish stranglehold on trade and of expanding the growth of bullion at home. Queen Elizabeth promoted the Trade and Navigation Acts in Parliament and issued orders to her navy for the protection and promotion of English shipping. A systematic and coherent explanation of balance of trade emerged in Thomas Mun's argument "England's Treasure by Forraign Trade or the Balance of our Forraign Trade is The Rule of Our Treasure" - written in the 1620s and published in 1664. 
Elizabeth's efforts organized national resources sufficiently in the defense of England against the far larger and more powerful Spanish Empire, and in turn paved the foundation for establishing a global empire in the 19th century. Authors noted most for establishing the English mercantilist system include Gerard de Malynes (1585–1641) and Thomas Mun (1571–1641), who first articulated the Elizabethan system ("England's Treasure by Forraign Trade or the Balance of Forraign Trade is the Rule of Our Treasure"), which Josiah Child (1630/31–1699) then developed further. Numerous French authors helped cement French policy around mercantilism in the 17th century. Jean-Baptiste Colbert (Intendant général, 1661–1665; Contrôleur général des finances, 1661–1683) best articulated this French mercantilism. French economic policy liberalized greatly under Napoleon (in power from 1799 to 1814/1815). Many nations applied the theory, notably France, which was the most important state economically in Europe at the time. King Louis XIV (reigned 1643–1715) followed the guidance of Jean-Baptiste Colbert, his Controller-General of Finances from 1665 to 1683. It was determined that the state should rule in the economic realm as it did in the diplomatic, and that the interests of the state as identified by the king were superior to those of merchants and of everyone else. Mercantilist economic policies aimed to build up the state, especially in an age of incessant warfare, and theorists charged the state with looking for ways to strengthen the economy and to weaken foreign adversaries. In Europe, academic belief in mercantilism began to fade in the late-18th century after the British seized control of Mughal Bengal, a major trading nation, and the establishment of British India through the activities of the East India Company, in light of the arguments of Adam Smith (1723–1790) and of the classical economists.
The British Parliament's repeal of the Corn Laws under Robert Peel in 1846 symbolized the emergence of free trade as an alternative system. Most of the European economists who wrote between 1500 and 1750 are today generally considered mercantilists; this term was initially used solely by critics, such as Mirabeau and Smith, but was quickly adopted by historians. Originally the standard English term was "mercantile system". The word "mercantilism" was introduced into English from German in the early 19th century. The bulk of what is commonly called "mercantilist literature" appeared in the 1620s in Great Britain. Smith saw the English merchant Thomas Mun (1571–1641) as a major creator of the mercantile system, especially in his posthumously published "Treasure by Foreign Trade" (1664), which Smith considered the archetype or manifesto of the movement. Perhaps the last major mercantilist work was James Steuart's "Principles of Political Economy", published in 1767. Mercantilist literature also extended beyond England. Italy and France produced noted writers of mercantilist themes, including Italy's Giovanni Botero (1544–1617) and Antonio Serra (1580–?) and, in France, Jean Bodin and Colbert. Themes also existed in writers of the German historical school from List onwards, as well as in followers of the American system and British free-trade imperialism, thus stretching the system into the 19th century. However, many British writers, including Mun and Misselden, were merchants, while many of the writers from other countries were public officials. Beyond mercantilism as a way of understanding the wealth and power of nations, Mun and Misselden are noted for their viewpoints on a wide range of economic matters.
The Austrian lawyer and scholar Philipp Wilhelm von Hornick, one of the pioneers of Cameralism, detailed a nine-point program of what he deemed effective national economy in his "Austria Over All, If She Only Will" of 1684, which comprehensively sums up the tenets of mercantilism. Other than von Hornick, there were no mercantilist writers presenting an overarching scheme for the ideal economy, as Adam Smith would later do for classical economics. Rather, each mercantilist writer tended to focus on a single area of the economy. Only later did non-mercantilist scholars integrate these "diverse" ideas into what they called mercantilism. Some scholars thus reject the idea of mercantilism completely, arguing that it gives "a false unity to disparate events". Smith saw the mercantile system as an enormous conspiracy by manufacturers and merchants against consumers, a view that has led some authors, especially Robert B. Ekelund and Robert D. Tollison, to call mercantilism "a rent-seeking society". To a certain extent, mercantilist doctrine itself made a general theory of economics impossible. Mercantilists viewed the economic system as a zero-sum game, in which any gain by one party required a loss by another. Thus, any system of policies that benefited one group would by definition harm the other, and there was no possibility of economics being used to maximize the commonwealth, or common good. Mercantilists' writings were also generally created to rationalize particular practices rather than as investigations into the best policies. Mercantilist domestic policy was more fragmented than its trade policy. While Adam Smith portrayed mercantilism as supportive of strict controls over the economy, many mercantilists disagreed. The early modern era was one of letters patent and government-imposed monopolies; some mercantilists supported these, but others acknowledged the corruption and inefficiency of such systems.
Many mercantilists also realized that the inevitable results of quotas and price ceilings were black markets. One notion that mercantilists widely agreed upon was the need for economic oppression of the working population; laborers and farmers were to live at the "margins of subsistence". The goal was to maximize production, with no concern for consumption. Extra money, free time, and education for the lower classes were seen to inevitably lead to vice and laziness, and would result in harm to the economy. The mercantilists saw a large population as a form of wealth that made possible the development of bigger markets and armies. Opposite to mercantilism was the doctrine of physiocracy, which predicted that mankind would outgrow its resources. The idea of mercantilism was to protect the markets as well as maintain agriculture and those who were dependent upon it. Mercantilist ideas were the dominant economic ideology of all of Europe in the early modern period, and most states embraced it to a certain degree. Mercantilism was centred on England and France, and it was in these states that mercantilist policies were most often enacted. The particular policies varied from state to state. Mercantilism arose in France in the early 16th century soon after the monarchy had become the dominant force in French politics. In 1539, an important decree banned the import of woolen goods from Spain and some parts of Flanders. The next year, a number of restrictions were imposed on the export of bullion. Over the rest of the 16th century, further protectionist measures were introduced. The height of French mercantilism is closely associated with Jean-Baptiste Colbert, finance minister for 22 years in the 17th century, to the extent that French mercantilism is sometimes called Colbertism. Under Colbert, the French government became deeply involved in the economy in order to increase exports. Protectionist policies were enacted that limited imports and favored exports.
Industries were organized into guilds and monopolies, and production was regulated by the state through a series of more than one thousand directives outlining how different products should be produced. To encourage industry, foreign artisans and craftsmen were imported. Colbert also worked to decrease internal barriers to trade, reducing internal tariffs and building an extensive network of roads and canals. Colbert's policies were quite successful, and France's industrial output and the economy grew considerably during this period, as France became the dominant European power. He was less successful in turning France into a major trading power, and Britain and the Netherlands remained supreme in this field. France imposed its mercantilist philosophy on its colonies in North America, especially New France. It sought to derive the maximum material benefit from the colony, for the homeland, with a minimum of imperial investment in the colony itself. The ideology was embodied in New France through the establishment under Royal Charter of a number of corporate trading monopolies including La Compagnie des Marchands, which operated from 1613 to 1621, and the Compagnie de Montmorency, from that date until 1627. It was in turn replaced by La Compagnie des Cent-Associés, created in 1627 by King Louis XIII, and the Communauté des habitants in 1643. These were the first corporations to operate in what is now Canada. In England, mercantilism reached its peak during the Long Parliament government (1640–60). Mercantilist policies were also embraced throughout much of the Tudor and Stuart periods, with Robert Walpole being another major proponent. In Britain, government control over the domestic economy was far less extensive than on the Continent, limited by common law and the steadily increasing power of Parliament. Government-controlled monopolies were common, especially before the English Civil War, but were often controversial. 
With respect to its colonies, British mercantilism meant that the government and the merchants became partners with the goal of increasing political power and private wealth, to the exclusion of other empires. The government protected its merchants—and kept others out—through trade barriers, regulations, and subsidies to domestic industries in order to maximize exports from and minimize imports to the realm. The government had to fight smuggling, which became a favorite American technique in the 18th century to circumvent the restrictions on trading with the French, Spanish, or Dutch. The goal of mercantilism was to run trade surpluses so that gold and silver would pour into London. The government took its share through duties and taxes, with the remainder going to merchants in Britain. The government spent much of its revenue on a superb Royal Navy, which not only protected the British colonies but threatened the colonies of the other empires, and sometimes seized them. Thus the British Navy captured New Amsterdam (New York) in 1664. The colonies were captive markets for British industry, and the goal was to enrich the mother country. British mercantilist writers were themselves divided on whether domestic controls were necessary. British mercantilism thus mainly took the form of efforts to control trade. A wide array of regulations were put in place to encourage exports and discourage imports. Tariffs were placed on imports and bounties given for exports, and the export of some raw materials was banned completely. The Navigation Acts expelled foreign merchants from England's domestic trade. The nation aggressively sought colonies and once under British control, regulations were imposed that allowed the colony to only produce raw materials and to only trade with Britain. 
This led to friction with the inhabitants of these colonies, and mercantilist policies (such as forbidding trade with other empires and controls over smuggling) were a major irritant leading to the American Revolution. Mercantilism taught that trade was a zero-sum game, with one country's gain equivalent to a loss sustained by the trading partner. Overall, however, mercantilist policies had a positive impact on Britain, helping turn it into the world's dominant trader and the global hegemon. One domestic policy that had a lasting impact was the conversion of "wastelands" to agricultural use. Mercantilists believed that to maximize a nation's power, all land and resources had to be used to their highest and best use, and this era thus saw projects like the draining of The Fens. The other nations of Europe also embraced mercantilism to varying degrees. The Netherlands, which had become the financial centre of Europe by being its most efficient trader, had little interest in seeing trade restricted and adopted few mercantilist policies. Mercantilism became prominent in Central Europe and Scandinavia after the Thirty Years' War (1618–48), with Christina of Sweden, Jacob Kettler of Courland, and Christian IV of Denmark being notable proponents. The Habsburg Holy Roman Emperors had long been interested in mercantilist policies, but the vast and decentralized nature of their empire made implementing such notions difficult. Some constituent states of the empire did embrace mercantilism, most notably Prussia, which under Frederick the Great had perhaps the most rigidly controlled economy in Europe. Spain benefited from mercantilism early on as it brought a large amount of precious metals such as gold and silver into its treasury by way of the New World. In the long run, Spain's economy collapsed as it was unable to adjust to the inflation that came with the large influx of bullion.
Heavy intervention from the crown imposed crippling laws for the protection of Spanish goods and services. Mercantilist protectionist policy in Spain caused the long-run failure of the Castilian textile industry, as efficiency severely dropped off with each passing year because production was held at a fixed level. Spain's heavily protected industries led to famines, as much of its agricultural land was required to be used for sheep instead of grain. Much of its grain was imported from the Baltic region of Europe, which caused a shortage of food in the inner regions of Spain. Spain's limiting of the trade of its colonies was one of the causes that led to the separation of the Dutch from the Spanish Empire. The culmination of all of these policies led to Spain defaulting in 1557, 1575, and 1596. During the economic collapse of the 17th century, Spain had little coherent economic policy, but French mercantilist policies were imported by Philip V with some success. Russia under Peter I (Peter the Great) attempted to pursue mercantilism, but had little success because of Russia's lack of a large merchant class or an industrial base. Mercantilism was warfare carried on by economic means, backed up by the state apparatus, and was well suited to an era of military conflict. Since the level of world trade was viewed as fixed, it followed that the only way to increase a nation's trade was to take it from another. A number of wars, most notably the Anglo-Dutch Wars and the Franco-Dutch Wars, can be linked directly to mercantilist theories. Most wars had other causes, but they reinforced mercantilism by clearly defining the enemy, and justified damage to the enemy's economy. Mercantilism fueled the imperialism of this era, as many nations expended significant effort to conquer new colonies that would be sources of gold (as in Mexico) or sugar (as in the West Indies), as well as becoming exclusive markets.
European power spread around the globe, often under the aegis of companies with government-guaranteed monopolies in certain defined geographical regions, such as the Dutch East India Company or the British Hudson's Bay Company (operating in present-day Canada). With the establishment of overseas colonies by European powers early in the 17th century, mercantile theory gained a new and wider significance, in which its aim and ideal became both national and imperialistic. Mercantilism as a weapon has continued to be used by nations through the 21st century by way of modern tariffs, as it puts smaller economies in a position to conform to the larger economies' goals or risk economic ruin due to an imbalance in trade. Trade wars are often dependent on such tariffs and restrictions hurting the opposing economy. The term "mercantile system" was used by its foremost critic, Adam Smith, but Mirabeau (1715–1789) had used "mercantilism" earlier. Mercantilism functioned as the economic counterpart of the older version of political power: divine right of kings and absolute monarchy. Scholars debate over why mercantilism dominated economic ideology for 250 years. One group, represented by Jacob Viner, sees mercantilism as simply a straightforward, common-sense system whose logical fallacies remained opaque to people at the time, as they simply lacked the required analytical tools. The second school, supported by scholars such as Robert B. Ekelund, portrays mercantilism not as a mistake, but rather as the best possible system for those who developed it. This school argues that rent-seeking merchants and governments developed and enforced mercantilist policies. Merchants benefited greatly from the enforced monopolies, bans on foreign competition, and poverty of the workers. Governments benefited from the high tariffs and payments from the merchants.
Whereas later economic ideas were often developed by academics and philosophers, almost all mercantilist writers were merchants or government officials. Monetarism offers a third explanation for mercantilism. European trade exported bullion to pay for goods from Asia, thus reducing the money supply and putting downward pressure on prices and economic activity. The evidence for this hypothesis is the lack of inflation in the British economy until the Revolutionary and Napoleonic Wars, when paper money came into vogue. A fourth explanation lies in the increasing professionalisation and technification of the wars of the era, which turned the maintenance of adequate reserve funds (in the prospect of war) into a more and more expensive and eventually competitive business. Mercantilism developed at a time of transition for the European economy. Isolated feudal estates were being replaced by centralized nation-states as the focus of power. Technological changes in shipping and the growth of urban centers led to a rapid increase in international trade. Mercantilism focused on how this trade could best aid the states. Another important change was the introduction of double-entry bookkeeping and modern accounting. This accounting made extremely clear the inflow and outflow of trade, contributing to the close scrutiny given to the balance of trade. Of course, the impact of the discovery of America cannot be ignored. New markets and new mines propelled foreign trade to previously inconceivable volumes, resulting in "the great upward movement in prices" and an increase in "the volume of merchant activity itself". Prior to mercantilism, the most important economic work done in Europe was by the medieval scholastic theorists. The goal of these thinkers was to find an economic system compatible with Christian doctrines of piety and justice. They focused mainly on microeconomics and on local exchanges between individuals. 
Mercantilism was closely aligned with the other theories and ideas that began to replace the medieval worldview. This period saw the adoption of Machiavellian realpolitik and the primacy of the "raison d'état" in international relations. The mercantilist idea of all trade as a zero-sum game, in which each side was trying to best the other in a ruthless competition, was integrated into the works of Thomas Hobbes. This dark view of human nature also fit well with the Puritan view of the world, and some of the most stridently mercantilist legislation, such as the Navigation Ordinance of 1651, was enacted by the government of Oliver Cromwell. Jean-Baptiste Colbert's work in 17th-century France came to exemplify classical mercantilism. In the English-speaking world, its ideas were criticized by Adam Smith with the publication of "The Wealth of Nations" in 1776 and later by David Ricardo with his explanation of comparative advantage. Mercantilism was rejected by Britain and France by the mid-19th century. The British Empire embraced free trade and used its power as the financial center of the world to promote the same. The Guyanese historian Walter Rodney describes mercantilism as the period of the worldwide development of European commerce, which began in the 15th century with the voyages of Portuguese and Spanish explorers to Africa, Asia, and the New World. Adam Smith and David Hume were the founding fathers of anti-mercantilist thought. A number of scholars found important flaws with mercantilism long before Smith developed an ideology that could fully replace it. Critics like Hume, Dudley North and John Locke undermined much of mercantilism, and it steadily lost favor during the 18th century. In 1690, Locke argued that prices vary in proportion to the quantity of money. 
Locke's "Second Treatise" also points towards the heart of the anti-mercantilist critique: that the wealth of the world is not fixed, but is created by human labor (represented embryonically by Locke's labor theory of value). Mercantilists failed to understand the notions of absolute advantage and comparative advantage (although this idea was only fully fleshed out in 1817 by David Ricardo) and the benefits of trade. For instance, imagine that Portugal was a more efficient producer of wine than England, yet in England, cloth could be produced more efficiently than it could in Portugal. Thus if Portugal specialized in wine and England in cloth, "both" states would end up "better off" if they traded. This is an example of the reciprocal benefits of trade (whether due to comparative or absolute advantage). In modern economic theory, trade is "not" a zero-sum game of cutthroat competition, because both sides can benefit from it. Hume famously noted the impossibility of the mercantilists' goal of a constant positive balance of trade. As bullion flowed into one country, the supply would increase, and the value of bullion in that state would steadily decline relative to other goods. Conversely, in the state exporting bullion, its value would slowly rise. Eventually, it would no longer be cost-effective to export goods from the high-price country to the low-price country, and the balance of trade would reverse. Mercantilists fundamentally misunderstood this, long arguing that an increase in the money supply simply meant that everyone gets richer. The importance placed on bullion was also a central target, even if many mercantilists had themselves begun to de-emphasize the importance of gold and silver. Adam Smith noted that at the core of the mercantile system was the "popular folly of confusing wealth with money", that bullion was just the same as any other commodity, and that there was no reason to give it special treatment. 
More recently, scholars have discounted the accuracy of this critique. They believe Mun and Misselden were not making this mistake in the 1620s, and point to their followers Josiah Child and Charles Davenant, who in 1699 wrote, "Gold and Silver are indeed the Measures of Trade, but that the Spring and Original of it, in all nations is the Natural or Artificial Product of the Country; that is to say, what this Land or what this Labour and Industry Produces." The critique that mercantilism was a form of rent seeking has also been criticized, as scholars such as Jacob Viner in the 1930s pointed out that merchant mercantilists such as Mun understood that they would not gain by higher prices for English wares abroad. The first school to completely reject mercantilism was the physiocrats, who developed their theories in France. Their theories also had several important problems, and the replacement of mercantilism did not come until Adam Smith published "The Wealth of Nations" in 1776. This book outlines the basics of what is today known as classical economics. Smith spent a considerable portion of the book rebutting the arguments of the mercantilists, though often these are simplified or exaggerated versions of mercantilist thought. Scholars are also divided over the cause of mercantilism's end. Those who believe the theory was simply an error hold that its replacement was inevitable as soon as Smith's more accurate ideas were unveiled. Those who feel that mercantilism amounted to rent-seeking hold that it ended only when major power shifts occurred. In Britain, mercantilism faded as Parliament gained the monarch's power to grant monopolies. While the wealthy capitalists who controlled the House of Commons benefited from these monopolies, Parliament found it difficult to implement them because of the high cost of group decision making. 
Mercantilist regulations were steadily removed over the course of the 18th century in Britain, and during the 19th century, the British government fully embraced free trade and Smith's laissez-faire economics. On the continent, the process was somewhat different. In France, economic control remained in the hands of the royal family, and mercantilism continued until the French Revolution. In Germany, mercantilism remained an important ideology in the 19th and early 20th centuries, when the historical school of economics was paramount. Adam Smith rejected the mercantilist focus on production, arguing that consumption was paramount to production. He added that mercantilism was popular among merchants because it was what is now called rent seeking. John Maynard Keynes argued that encouraging production was just as important as encouraging consumption, and he favored the "new mercantilism". Keynes also noted that in the early modern period the focus on bullion supplies was reasonable. In an era before paper money, an increase in bullion was one of the few ways to increase the money supply. Keynes said mercantilist policies generally improved both domestic and foreign investment: domestic because the policies lowered the domestic rate of interest, and investment by foreigners by tending to create a favorable balance of trade. Keynes and other economists of the 20th century also realized that the balance of payments is an important concern. Keynes also supported government intervention in the economy as a necessity, as did mercantilism. The word "mercantilism" remains a pejorative term, often used to attack various forms of protectionism. The similarities between Keynesianism (and its successor ideas) and mercantilism have sometimes led critics to call them neo-mercantilism. 
Paul Samuelson, writing within a Keynesian framework, wrote of mercantilism, "With employment less than full and Net National Product suboptimal, all the debunked mercantilist arguments turn out to be valid." Some other systems that copy several mercantilist policies, such as Japan's economic system, are also sometimes called neo-mercantilist. In an essay appearing in the 14 May 2007 issue of "Newsweek", business columnist Robert J. Samuelson wrote that China was pursuing an essentially neo-mercantilist trade policy that threatened to undermine the post–World War II international economic structure. Murray Rothbard, representing the Austrian School of economics, described mercantilism as a system of statism that employed economic fallacy to build up imperial state power and to grant subsidies and monopoly privileges to groups favored by the state. In specific instances, protectionist mercantilist policies also had an important and positive impact on the state that enacted them. Adam Smith, for instance, praised the Navigation Acts, as they greatly expanded the British merchant fleet and played a central role in turning Britain into the world's naval and economic superpower from the 18th century onward. Some economists thus feel that protecting infant industries, while causing short-term harm, can be beneficial in the long term.
https://en.wikipedia.org/wiki?curid=19708
Meat Puppets Meat Puppets are an American rock band formed in January 1980 in Phoenix, Arizona. The group's original lineup was Curt Kirkwood (guitar/vocals), his brother Cris Kirkwood (bass guitar/vocals), and Derrick Bostrom (drums). The Kirkwood brothers met Bostrom while attending Brophy Prep High School in Phoenix. The three then moved to Tempe, Arizona (a Phoenix suburb and home to Arizona State University), where the Kirkwood brothers purchased two adjacent homes, one of which had a shed in the back where they regularly practiced. Meat Puppets started as a punk rock band, but like most of their labelmates on SST Records, they established their own unique style, blending punk with country and psychedelic rock, and featuring Curt's warbling vocals. Meat Puppets later gained significant exposure when the Kirkwood brothers served as guest musicians on Nirvana's MTV Unplugged performance in 1993. The band's 1994 album "Too High to Die" subsequently became their most successful release. The band broke up twice, in 1996 and 2002, but reunited again in 2006. Meat Puppets have influenced a number of rock bands, including Nirvana, Soundgarden, Dinosaur Jr, Sebadoh, Pavement, and Jawbreaker. In the late 1970s, drummer Derrick Bostrom played with guitarist Jack Knetzger in a band called Atomic Bomb Club, which began as a duo, but would come to include bassist Cris Kirkwood. The band played a few local shows and recorded some demos, but began to dissolve quickly thereafter. Derrick and Cris began rehearsing together with Cris' brother Curt Kirkwood by learning songs from Bostrom's collection of punk rock 45s. After briefly toying with the name "The Bastions of Immaturity", they settled on the name Meat Puppets in June 1980, after a song of the same name written by Curt, which appears on their first album. 
Their earliest EP "In A Car" consisted entirely of short hardcore punk songs with goofy lyrics, and attracted the attention of Joe Carducci as he was starting to work with legendary punk label SST Records. Carducci suggested they sign with the label, and Meat Puppets released their first album "Meat Puppets" in 1982, which, among several new originals and a pair of heavily skewed Doc Watson and Bob Nolan covers, featured the songs "The Gold Mine" and "Melons Rising", two tunes Derrick and Cris had originally written and performed in Atomic Bomb Club. Years later, when the Meat Puppets reissued all of their albums in 1999, the five songs on "In A Car" would be combined with their debut album. By the release of 1984's "Meat Puppets II", the bandmembers "were so sick of the hardcore thing," according to Bostrom. "We were really into pissing off the crowd." Here, the band experimented with acid rock and country and western sounds, while still retaining some punk influence on the tracks "Split Myself in Two" and "New Gods." This album contains some of the band's best known songs, such as "Lake of Fire" and "Plateau." While the album had been recorded in early 1983, its release was delayed for a year by SST. "Meat Puppets II" turned the band into one of the leading bands on SST Records, and along with the Violent Femmes, the Gun Club and others, helped establish the genre called "cow punk". "Meat Puppets II" was followed by 1985's "Up on the Sun". The album's psychedelic sound resembled the folk-rock of The Byrds, while the songs still retained hardcore influences in their lengths and tempos. Examples of this new style are the title track, "Enchanted Porkfist" and "Swimming Ground." "Up On The Sun" featured the Kirkwood brothers harmonizing their vocals for the first time. These two albums were mainstays of college and independent radio at that time. 
During the rest of the 1980s, Meat Puppets remained on SST and released a series of albums while touring relentlessly. Between tours they would regularly play small shows in bars around the Phoenix area such as The Mason Jar (now The Rebel Lounge) and The Sun Club in Tempe. After the release of the hard-rock styled "Out My Way" EP in 1986, however, the band was briefly sidelined by an accident when Curt's finger was broken after being slammed in their touring van's door. The accident delayed the band's next album, the even more psychedelic "Mirage", until the next year. The final result included synthesizers and electronic drums, and as such was considered their most polished sounding album to date. The tour for "Mirage" lasted less than six months, as the band found it difficult to recreate many of the album's songs in a concert atmosphere. Their next album, the ZZ Top-inspired "Huevos", came out less than six months afterward, in late summer of 1987. In stark contrast to its predecessor, "Huevos" was recorded in a swift, fiery fashion, with many first takes and minimal second guessing. These recordings were completed in only a matter of days, and along with a few drawings and one of Curt's paintings taken from the wall to serve as cover art (a dish of three boiled eggs, a green pepper, and a bottle of Tabasco sauce), were all sent to SST shortly before the band returned to the road en route to their next gig. Curt revealed in an interview that one of the reasons the album was called "Huevos" (Spanish for "eggs") was the multitude of first takes on the record, since, like eggs, they could only be used once. "Monsters" was released in 1989, featuring new elements to their sound with extended jams (such as "Touchdown King" and "Flight of the Fire Weasel") and heavy metal ("Attacked by Monsters"). 
This album was mostly motivated by the Meat Puppets' desire to attract the attention of a major label, as they were becoming frustrated with SST Records by this time. As numerous bands from the seminal SST label and other kindred punk-oriented indies had before them, Meat Puppets grappled with the decision to switch to a major label. Two years after their final studio recording for SST, 1989's "Monsters", the trio released its major-label debut, "Forbidden Places", on the indie-friendly London Records. The band chose London Records because it was the first label that ZZ Top, one of their favorite bands, was signed to. "Forbidden Places" combined many elements of the band's sounds over the years (cowpunk, psychedelia, riffy heavier rock), while some songs had a more laid-back early alternative sound. Songs include "Sam", "Whirlpool," and the title track. Despite being a fan favorite, "Forbidden Places" is now out of print, and as such it remains a highly sought-after collectible online. In 1992, following his departure from the Red Hot Chili Peppers, guitarist John Frusciante auditioned for the band. Cris Kirkwood stated "He showed up with his guitar out of its case and barefoot. We were on a major label then, we just got signed, and those guys had blown up to where they were at and John needed to get out. John gets to our pad and we started getting ready to play and I said, 'You want to use my tuner?' He said, 'No, I'll bend it in.' It was so far out. Then we jammed but it didn't come to anything. Maybe he wasn't in the right place and we were a tight little unit. It just didn't quite happen but it could have worked." In late 1993, Meat Puppets achieved mainstream popularity when Nirvana's Kurt Cobain, who became a fan after seeing them open for Black Flag in the ‘80s, invited Cris and Curt to join him on MTV Unplugged for acoustic performances of "Plateau", "Oh Me" and "Lake of Fire" (all originally from "Meat Puppets II"). 
The resulting album, "MTV Unplugged in New York," served as a swan song for Nirvana, as Cobain died less than five months after the concert. "Lake of Fire" became a cult favorite for its particularly wrenching vocal performance from Cobain. Subsequently, the Nirvana exposure and the strength of the single "Backwater" (their highest charting single) helped lift Meat Puppets to new commercial heights. The band's studio return was 1994's "Too High to Die", produced by Butthole Surfers guitarist Paul Leary. The album featured "Backwater", which reached #47 on the Billboard Hot 100, and a hidden-track update of "Lake of Fire." This album features a more straightforward alternative rock style, with occasional pop, country and neo-psychedelic moments. "Too High to Die" earned the band a gold record (500,000 sold), outselling their previous records combined. 1995's "No Joke!" was the final album recorded by the original Meat Puppets lineup. Stylistically it is very similar to "Too High to Die", although much heavier and with darker lyrics. Examples of this are the single "Scum" and "Eyeball"; however, the band's usual laid-back style is still heard on tracks like "Chemical Garden." Though the band's drug use included cocaine, heroin, LSD and many other drugs, Cris' use of heroin and crack cocaine became so bad he rarely left his house except to obtain more drugs. At least two people (including his wife and one of his best friends) died of overdoses at his house in Tempe, AZ during this time. The Kirkwood brothers had always had a legendary appetite for illegal substances, and during the tour to support "Too High to Die" with Stone Temple Pilots, the easy availability of drugs was too much for Cris. When it was over, he was severely addicted to cocaine and heroin. When their record label discovered Cris' addictions, support for "No Joke!" was dropped and it was met with poor sales figures. 
Derrick recorded a solo EP under the moniker "Today's Sounds" in 1996, and later, in 1999, took charge of re-issuing the Puppets' original seven records on Rykodisc as well as putting out their first live album, "Live in Montana." Curt formed a new band in Austin, Texas called the Royal Neanderthal Orchestra, but they changed their name to Meat Puppets for legal reasons and released a promotional EP entitled "You Love Me" in 1999, "Golden Lies" in 2000 and "Live" in 2002. The line-up was Curt (vocals/guitar), Kyle Ellison (vocals/guitar), Andrew Duplantis (vocals/bass) and Shandon Sahm (drums). Sahm's father was the legendary fiddler-singer-songwriter Doug Sahm of The Sir Douglas Quintet and Texas Tornados. The concluding track to "Classic Puppets", entitled "New Leaf", also dates from this incarnation of the band. Around 2002, Meat Puppets dissolved after Duplantis left the band. Curt went on to release albums with the groups Eyes Adrift and Volcano. In 2005, he released his first solo album, entitled "Snow". Bassist Cris was arrested in December 2003 for attacking a security guard at the main post office in downtown Phoenix, AZ with the guard's baton. The guard shot Kirkwood in the stomach at least twice during the melee, causing serious gunshot injuries requiring major surgery. Kirkwood was subsequently denied bail, the judge citing Kirkwood's previous drug arrests and probation violations. He eventually served time at the Arizona state prison in Florence, Arizona for felony assault. He was released in July 2005. Derrick Bostrom began a web site for the band about six months before the original trio stopped working together. The site went through many different permutations before it was essentially mothballed in 2003. In late 2005, Bostrom revamped it, this time as a "blog" for his recollections and as a place to share pieces of Meat Puppets history. On March 24, 2006, Curt Kirkwood polled fans at his MySpace page with a bulletin that asked: "Question for all ! 
Would the original line up of Meat Puppets interest anyone ? Feedback is good – do you want a reunion!?" The response from fans was overwhelmingly positive within a couple of hours, leading to speculation of a full-blown Meat Puppets reunion in the near future. However, a post made by Derrick Bostrom on the official Meat Puppets site dismissed the notion. In April 2006 "Billboard" reported that the Kirkwood brothers would reunite as Meat Puppets without original drummer Derrick Bostrom. Although Primus drummer Tim Alexander was announced as Bostrom's replacement, the position was later filled by Ted Marcus. The new lineup recorded a new full-length album, "Rise to Your Knees", in mid-to-late 2006. The album was released by Anodyne Records on July 17, 2007. On January 20, 2007, the Kirkwood brothers performed two songs during an Army of Anyone concert at La Zona Rosa in Austin, Texas. The first song was played with Curt Kirkwood and Cris Kirkwood along with Army of Anyone's Ray Luzier and Dean DeLeo. The second song was played with original members Curt and Cris Kirkwood and new Meat Puppets drummer Ted Marcus. This was in the middle of Army of Anyone's set, which they listed as "Meat Puppet Theatre" on the evening's set list. The band performed several new songs in March at the South by Southwest festival. On March 28, 2007, the band announced a West Coast tour through their MySpace page. This was the first tour with original bassist Cris in eleven years. The tour continued to the East Coast and Midwest later in 2007. In 2008 they performed their classic second album live in its entirety at the ATP New York festival. The band parted ways with Anodyne, signed to Megaforce and began recording new material in the winter of 2008. The resulting album, entitled "Sewn Together", was released on May 12, 2009. In the summer of 2009 the band continued to tour across America. 
They appeared in Rochester, Minnesota outside in front of over 5,000 fans, after playing Summerfest in Milwaukee, Wisconsin the night before. Meat Puppets performed at the 2009 Voodoo Music Experience in New Orleans over the Halloween weekend. As of November 2009, Shandon Sahm was back as the drummer in Meat Puppets, replacing Ted Marcus. The band was chosen by Animal Collective to perform the album "Up on the Sun" live in its entirety at the All Tomorrow's Parties festival that they curated in May 2011. The band's thirteenth studio album, entitled "Lollipop", was released on April 12, 2011. The Dandies supported Meat Puppets on all European dates in 2011. Meat Puppets have played several gigs in their hometown since 2009, such as the Marquee show in June 2011 with Dead Confederate. As of early 2011 Elmo Kirkwood, son of Curt Kirkwood and nephew of Cris Kirkwood, was touring regularly with the band playing rhythm guitar. Meat Puppets also contributed to an exclusive "Spin" magazine album of Nirvana covers, playing "Smells Like Teen Spirit". In June 2012, a book titled "Too High to Die: Meet the Meat Puppets" by author Greg Prato was released, which featured all-new interviews with band members past and present and friends of the band (including Peter Buck, Kim Thayil, Scott Asheton, Mike Watt, and Henry Rollins, among others), and covered the band's entire career. In October 2012, it was announced that the group had just completed recording new songs. "Rat Farm", the band's 14th album, was released in April 2013. In March 2013, Meat Puppets opened for Dave Grohl's Sound City Players at the SXSW Festival in Austin, Texas. In April 2014, Meat Puppets completed a tour with The Moistboyz, and in the summer of 2015, they toured with Soul Asylum. The Meat Puppets were picked to open an 11-show tour in support of The Dean Ween Group in October 2016, after Curt Kirkwood and drummer Chuck Treece contributed to "The Deaner Album". 
The same year, Cris produced and/or played with the following artists for Slope Records: The Exterminators, the Linecutters, and Sad Kid. On August 17, 2017, original drummer Derrick Bostrom posted an update on his website derrickbostrom.net. He performed with Cris, Curt and Elmo Kirkwood at a concert honoring the Meat Puppets. While Bostrom enjoyed himself, it appeared at the time to be a one-off performance. On July 8, 2018, it was confirmed that Bostrom had replaced Sahm as the drummer for the band, and that keyboardist Ron Stabinsky had joined as well. The band released their 15th studio album, "Dusty Notes", on March 8, 2019. On April 21, 2018, a fan-sponsored petition on MoveOn.org was initiated to induct the Meat Puppets into the Rock and Roll Hall of Fame.
https://en.wikipedia.org/wiki?curid=19709
List of mathematics competitions Mathematics competitions or mathematical olympiads are competitive events in which participants sit a mathematics test. These tests may require multiple-choice or numeric answers, or a detailed written solution or proof. Registration for these contests is generally based on the level of mathematics at which the student works rather than the student's age or enrolled grade. Normally, only competitions in which participants write full proofs are called mathematical olympiads.
https://en.wikipedia.org/wiki?curid=19710
Michael Polanyi Michael Polanyi (11 March 1891 – 22 February 1976) was a Hungarian-British polymath who made important theoretical contributions to physical chemistry, economics, and philosophy. He argued that positivism supplies a false account of knowing, which, if taken seriously, undermines humanity's highest achievements. His wide-ranging research in physical science included chemical kinetics, x-ray diffraction, and adsorption of gases. He pioneered the theory of fibre diffraction analysis in 1921, and the dislocation theory of plastic deformation of ductile metals and other materials in 1934. He emigrated to Germany, becoming a chemistry professor at the Kaiser Wilhelm Institute in Berlin in 1926, and then in 1933 to England, becoming first a chemistry professor and then a social sciences professor at the University of Manchester. Two of his pupils and his son, John Charles Polanyi, won Nobel Prizes in Chemistry. In 1944 Polanyi was elected to the Royal Society. His contributions to the social sciences, including an understanding of tacit knowledge and the concept of a polycentric spontaneous order in intellectual inquiry, were developed in the context of his opposition to central planning. Polanyi, born Mihály Pollacsek in Budapest, was the fifth child of Mihály and Cecília Pollacsek (born Cecília Wohl), secular Jews from Ungvár (then in Hungary but now in Ukraine) and Wilno, then Russian Empire, respectively. His father's family were entrepreneurs, while his mother's father – Osher Leyzerovich Vol (1833 – after 1906) – was the senior teacher of Jewish history at the Vilna rabbinic seminary, from which he had graduated as a rabbi. The family moved to Budapest and Magyarized their surname to Polányi. His father built much of the Hungarian railway system, but lost most of his fortune in 1899 when bad weather caused a railway building project to go over budget. He died in 1905. 
Cecília Polányi established a salon that was well known among Budapest's intellectuals, and which continued until her death in 1939. His older brother was Karl Polanyi, the political economist and anthropologist, and his niece was Eva Zeisel, a world-renowned ceramist. In 1909, after leaving his teacher-training secondary school (Mintagymnasium), Polanyi studied to be a physician, obtaining his medical diploma in 1914. He was an active member of the Galilei Society. With the support of Ignác Pfeifer, professor of chemistry at the József Technical University of Budapest, he obtained a scholarship to study chemistry at the Technische Hochschule in Karlsruhe, Germany. In the First World War, he served in the Austro-Hungarian army as a medical officer, and was sent to the Serbian front. While on sick-leave in 1916, he wrote a PhD thesis on adsorption. His research, which was encouraged by Albert Einstein, was supervised by Gusztáv Buchböck, and in 1919 the University of Budapest awarded him a doctorate. In October 1918, Mihály Károlyi established the Hungarian Democratic Republic, and Polanyi became Secretary to the Minister of Health. When the Communists seized power in March 1919, he returned to medicine. When the Hungarian Soviet Republic was overthrown, Polanyi emigrated to Karlsruhe in Germany, and was invited by Fritz Haber to join the Kaiser Wilhelm Institut für Faserstoffchemie (fiber chemistry) in Berlin. In 1923 he converted to Christianity, and in a Roman Catholic ceremony married Magda Elizabeth Kemeny. In 1926 he became the professorial head of department of the Institut für Physikalische Chemie und Elektrochemie (now the Fritz Haber Institute). In 1929, Magda gave birth to their son John, who was awarded a Nobel Prize in chemistry in 1986. Their other son, George Polanyi, who predeceased him, became a well-known economist. His experience of runaway inflation and high unemployment in Weimar Germany led Polanyi to become interested in economics. 
With the coming to power in 1933 of the Nazi party, he accepted a chair in physical chemistry at the University of Manchester. Two of his pupils, Eugene Wigner and Melvin Calvin, went on to win Nobel Prizes. Because of his increasing interest in the social sciences, Manchester University created a new chair in Social Science (1948–58) for him. In 1944 Polanyi was elected a member of the Royal Society, and on his retirement from the University of Manchester in 1958 he was elected a senior research fellow at Merton College, Oxford. In 1962 he was elected a foreign honorary member of the American Academy of Arts and Sciences. Polanyi's scientific interests were extremely diverse, including work in chemical kinetics, x-ray diffraction, and the adsorption of gases at solid surfaces. He is also well known for his potential adsorption theory, which was disputed for some time. In 1921, he laid the mathematical foundation of fibre diffraction analysis. In 1934, Polanyi, at about the same time as G. I. Taylor and Egon Orowan, realised that the plastic deformation of ductile materials could be explained in terms of the theory of dislocations developed by Vito Volterra in 1905. The insight was critical in developing the field of solid mechanics. In 1936, as a consequence of an invitation to give lectures for the Ministry of Heavy Industry in the USSR, Polanyi met Bukharin, who told him that in socialist societies all scientific research is directed to accord with the needs of the latest Five Year Plan. Polanyi noted what had happened to the study of genetics in the Soviet Union once the doctrines of Trofim Lysenko had gained the backing of the State. Demands in Britain, for example by the Marxist John Desmond Bernal, for centrally planned scientific research led Polanyi to defend the claim that science requires free debate. Together with John Baker, he founded the influential Society for Freedom in Science. 
In a series of articles, re-published in "The Contempt of Freedom" (1940) and "The Logic of Liberty" (1951), Polanyi claimed that co-operation amongst scientists is analogous to the way agents co-ordinate themselves within a free market. Just as consumers in a free market determine the value of products, science is a spontaneous order that arises as a consequence of open debate amongst specialists. Science (contrary to the claims of Bukharin) flourishes when scientists have the liberty to pursue truth as an end in itself: [S]cientists, freely making their own choice of problems and pursuing them in the light of their own personal judgment, are in fact co-operating as members of a closely knit organization. Such self-co-ordination of independent initiatives leads to a joint result which is unpremeditated by any of those who bring it about. Any attempt to organize the group ... under a single authority would eliminate their independent initiatives, and thus reduce their joint effectiveness to that of the single person directing them from the centre. It would, in effect, paralyse their co-operation. He derived the phrase spontaneous order from Gestalt psychology, and it was adopted by the classical liberal economist Friedrich Hayek, although the concept can be traced back to at least Adam Smith. Polanyi (unlike Hayek) argued that there are higher and lower forms of spontaneous order, and he asserted that defending scientific inquiry on utilitarian or sceptical grounds undermined the practice of science. He extends this into a general claim about free societies. Polanyi defends a free society not on the negative grounds that we ought to respect "private liberties", but on the positive grounds that "public liberties" facilitate our pursuit of objective ideals. According to Polanyi, a free society that strives to be value-neutral undermines its own justification. 
But it is not enough for the members of a free society to believe that ideals such as truth, justice, and beauty are objective; they also have to accept that such ideals transcend our ability to wholly capture them. The objectivity of values must be combined with acceptance that all knowing is fallible. In "Full Employment and Free Trade" (1948) Polanyi analyses the way money circulates around an economy, and in a monetarist analysis that, according to Paul Craig Roberts, was thirty years ahead of its time, he argues that a free market economy should not be left to be wholly self-adjusting. A central bank should attempt to moderate economic booms and busts via a correspondingly strict or loose monetary policy. In his book "Science, Faith and Society" (1946), Polanyi set out his opposition to a positivist account of science, noting that it ignores the role personal commitments play in the practice of science. Polanyi gave the Gifford Lectures in 1951–52 at Aberdeen, and a revised version of his lectures was later published as "Personal Knowledge" (1958). In this book Polanyi claims that all knowledge claims (including those that derive from rules) rely on personal judgements. He denies that a scientific method can yield truth mechanically. All knowing, no matter how formalised, relies upon commitments. Polanyi argued that the assumptions that underlie critical philosophy are not only false, they undermine the commitments that motivate our highest achievements. He advocates a fiduciary post-critical approach, in which we recognise that we believe more than we can prove, and know more than we can say. The literary critic Rita Felski has named Polanyi as an important precursor to the project of postcritique within literary studies. A knower does not stand apart from the universe, but participates personally within it. Our intellectual skills are driven by passionate commitments that motivate discovery and validation.
According to Polanyi, a great scientist not only identifies patterns, but also chooses significant questions likely to lead to a successful resolution. Innovators risk their reputation by committing to a hypothesis. Polanyi cites the example of Copernicus, who declared that the Earth revolves around the Sun. He claims that Copernicus arrived at the Earth's true relation to the Sun not as a consequence of following a method, but via "the greater intellectual satisfaction he derived from the celestial panorama as seen from the Sun instead of the Earth." His writings on the practice of science influenced Thomas Kuhn and Paul Feyerabend. Polanyi rejected the claim by British Empiricists that experience can be reduced to sense data, but he also rejected the notion that "indwelling" within (sometimes incompatible) interpretative frameworks traps us within them. Our tacit awareness connects us, albeit fallibly, with reality. It supplies us with the context within which our articulations have meaning. Contrary to the views of his colleague and friend Alan Turing, whose work at the Victoria University of Manchester prepared the way for the first modern computer, he denied that minds are reducible to collections of rules. His work influenced the critique by Hubert Dreyfus of "First Generation" artificial intelligence. It was while writing "Personal Knowledge" that he identified the "structure of tacit knowing". He viewed it as his most important discovery. He claimed that we experience the world by integrating our subsidiary awareness into a focal awareness. In his later work, for example his Terry Lectures, later published as "The Tacit Dimension" (1966), he distinguishes between the phenomenological, instrumental, semantic, and ontological aspects of tacit knowing, as discussed (but not necessarily identified as such) in his previous writing.
In "Life's irreducible structure" (1968), Polanyi argues that the information contained in the DNA molecule is not reducible to the laws of physics and chemistry. Although a DNA molecule cannot exist without physical properties, these properties are constrained by higher-level ordering principles. In "Transcendence and Self-transcendence" (1970), Polanyi criticises the mechanistic world view that modern science inherited from Galileo. Polanyi advocates emergence, i.e., the claim that there are several levels of reality and of causality. He relies on the assumption that boundary conditions supply degrees of freedom that, instead of being random, are determined by higher-level realities, whose properties are dependent on but distinct from the lower level from which they emerge. An example of a higher-level reality functioning as a downward causal force is consciousness – intentionality – generating meanings – intensionality. Mind is a higher-level expression of the capacity of living organisms for discrimination. Our pursuit of self-set ideals such as truth and justice transforms our understanding of the world. The reductionistic attempt to reduce higher-level realities to lower-level realities generates what Polanyi calls a moral inversion, in which the higher is rejected with moral passion. Polanyi identifies it as a pathology of the modern mind and traces its origins to a false conception of knowledge; although it is relatively harmless in the formal sciences, that pathology generates nihilism in the humanities. Polanyi considered Marxism an example of moral inversion. The State, on the grounds of an appeal to the logic of history, uses its coercive powers in ways that disregard any appeals to morality.
Tacit knowledge, as distinct from explicit knowledge, is an influential term developed by Polanyi in "The Tacit Dimension" to describe know-how: the ability to do something, such as ride a bicycle or play a musical instrument, without necessarily being able to articulate it or even be aware of all its dimensions.
https://en.wikipedia.org/wiki?curid=19711
Methanol Methanol, also known as methyl alcohol amongst other names, is a chemical with the formula CH3OH (a methyl group linked to a hydroxyl group, often abbreviated MeOH). It is a light, volatile, colourless, flammable liquid with a distinctive alcoholic odour similar to that of ethanol (drinking alcohol). A polar solvent, methanol acquired the name wood alcohol because it was once produced chiefly by the destructive distillation of wood. Today, methanol is mainly produced industrially by hydrogenation of carbon monoxide. Methanol is the simplest alcohol, consisting of a methyl group linked to a hydroxyl group. With more than 20 million tons produced annually, it is used as a precursor to other commodity chemicals, including formaldehyde, acetic acid, methyl tert-butyl ether, as well as a host of more specialized chemicals. Small amounts of methanol are present in normal, healthy human individuals. One study found a mean of 4.5 ppm in the exhaled breath of test subjects. The mean endogenous methanol in humans of 0.45 g/d may be metabolized from pectin found in fruit; one kilogram of apples produces up to 1.4 g of methanol. Methanol is produced by anaerobic bacteria and phytoplankton. Methanol is also found in abundant quantities in star-forming regions of space and is used in astronomy as a marker for such regions. It is detected through its spectral emission lines. In 2006, astronomers using the MERLIN array of radio telescopes at Jodrell Bank Observatory discovered a large cloud of methanol in space, 288 billion miles (463 billion km) across. In 2016, astronomers detected methanol in a planet-forming disc around the young star TW Hydrae using the ALMA radio telescope. Methanol has low acute toxicity in humans, but is dangerous because it is occasionally ingested in large volumes, often together with ethanol. As little as of pure methanol can cause permanent blindness by destruction of the optic nerve; larger doses are potentially fatal.
The median lethal dose is 1–2 mL/kg body weight of pure methanol. The reference dose for methanol is 0.5 mg/kg in a day. Toxic effects begin hours after ingestion, and antidotes can often prevent permanent damage. Because of its similarities in both appearance and odor to ethanol (the alcohol in beverages), it is difficult to differentiate between the two; such is also the case with denatured alcohol, adulterated liquors or very low quality alcoholic beverages. However, cases exist of methanol resistance, such as that of Mike Malloy, who was the victim of a failed murder attempt by methanol poisoning in the early 1930s. Methanol is toxic by two mechanisms. First, methanol can be fatal due to effects on the central nervous system, acting as a central nervous system depressant in the same manner as ethanol poisoning. Second, in a process of toxication, it is metabolized to formic acid (which is present as the formate ion) via formaldehyde in a process initiated by the enzyme alcohol dehydrogenase in the liver. Methanol is converted to formaldehyde via alcohol dehydrogenase (ADH) and formaldehyde is converted to formic acid (formate) via aldehyde dehydrogenase (ALDH). The conversion to formate via ALDH proceeds completely, with no detectable formaldehyde remaining. Formate is toxic because it inhibits mitochondrial cytochrome c oxidase, causing hypoxia at the cellular level, and metabolic acidosis, among a variety of other metabolic disturbances. Outbreaks of methanol poisoning have occurred primarily due to contamination of drinking alcohol. This is more common in the developing world. In 2013 more than 1700 cases nonetheless occurred in the United States. Those affected are often adult men. Outcomes may be good with early treatment. Methanol toxicity was described as early as 1856. Because of its toxic properties, methanol is frequently used as a denaturant additive for ethanol manufactured for industrial uses.
This addition of methanol exempts industrial ethanol (commonly known as "denatured alcohol" or "methylated spirit") from liquor excise taxation in the US and some other countries. Methanol is primarily converted to formaldehyde, which is widely used in many areas, especially polymers. The conversion entails oxidation: 2 CH3OH + O2 → 2 CH2O + 2 H2O. Acetic acid can also be produced from methanol. Methanol and isobutene are combined to give methyl "tert"-butyl ether (MTBE). MTBE is a major octane booster in gasoline. Condensation of methanol to produce hydrocarbons and even aromatic systems is the basis of several technologies related to gas to liquids. These include methanol to hydrocarbons (MTH), methanol to gasoline (MTG), methanol to olefins (MTO), and methanol to propylene (MTP). These conversions are catalyzed by zeolites as heterogeneous catalysts. The MTG process was once commercialized at Motunui in New Zealand. The European Fuel Quality Directive allows fuel producers to blend up to 3% methanol, with an equal amount of cosolvent, with gasoline sold in Europe. China uses more than 4.5 billion liters of methanol per year as a transportation fuel in low level blends for conventional vehicles, and high level blends in vehicles designed for methanol fuels. Methanol is the precursor to most simple methylamines, methyl halides, and methyl ethers. Methyl esters are produced from methanol, notably biodiesel via the transesterification of fats. Methanol is a promising energy carrier because, as a liquid, it is easier to store than hydrogen and natural gas. Its energy density, however, is low, reflecting the fact that it represents partially combusted methane. Its energy density is 15.6 MJ/L, whereas ethanol's is 24 and gasoline's is 33 MJ/L. Further advantages of methanol are its ready biodegradability and low toxicity. It does not persist in either aerobic (oxygen-present) or anaerobic (oxygen-absent) environments.
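The energy-density comparison above (15.6 MJ/L for methanol against 24 MJ/L for ethanol and 33 MJ/L for gasoline) can be turned into a simple worked calculation of relative tank volumes. The short Python sketch below uses only the figures quoted in this article; the function and variable names are illustrative, not part of any standard library.

```python
# Energy densities quoted above, in MJ/L.
ENERGY_DENSITY = {"methanol": 15.6, "ethanol": 24.0, "gasoline": 33.0}

def relative_tank_volume(fuel, reference="gasoline"):
    """Volume of `fuel` needed per unit volume of `reference` to carry equal energy."""
    return ENERGY_DENSITY[reference] / ENERGY_DENSITY[fuel]

for fuel in ("methanol", "ethanol"):
    ratio = relative_tank_volume(fuel)
    print(f"{fuel}: {ratio:.2f} L per litre of gasoline")
```

On these figures a methanol tank needs a little over twice the volume of a gasoline tank to carry the same energy, which is the basis for the "twice the volume" point made in the discussion of methanol as a motor fuel.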
The half-life for methanol in groundwater is just one to seven days, while many common gasoline components have half-lives in the hundreds of days (such as benzene at 10–730 days). Since methanol is miscible with water and biodegradable, it is unlikely to accumulate in groundwater, surface water, air or soil. Methanol is occasionally used to fuel internal combustion engines. It burns forming carbon dioxide and water: 2 CH3OH + 3 O2 → 2 CO2 + 4 H2O. One problem with high concentrations of methanol in fuel is that alcohols corrode some metals, particularly aluminium. Methanol fuel has been proposed for ground transportation. The chief advantage of a methanol economy is that it could be adapted to gasoline internal combustion engines with minimum modification to the engines and to the infrastructure that delivers and stores liquid fuel. Its energy density, however, is only about half that of gasoline, meaning that twice the volume of methanol would be required. Methanol is a traditional denaturant for ethanol, the product being known as "denatured alcohol" or "methylated spirit". This was commonly used during Prohibition to discourage consumption of bootlegged liquor, and ended up causing several deaths. Methanol is used as a solvent and as an antifreeze in pipelines and windshield washer fluid. Methanol was used as an automobile coolant antifreeze in the early 1900s. As of May 2018, methanol was banned in the EU for use in windscreen washing or defrosting due to its risk of human consumption. In some wastewater treatment plants, a small amount of methanol is added to wastewater to provide a carbon food source for the denitrifying bacteria, which convert nitrates to nitrogen gas and reduce the nitrification of sensitive aquifers. Methanol is used as a destaining agent in polyacrylamide gel electrophoresis. Direct-methanol fuel cells are unique in their low temperature, atmospheric pressure operation, which lets them be greatly miniaturized.
This, combined with the relatively easy and safe storage and handling of methanol, may open the possibility of fuel cell-powered consumer electronics, such as laptop computers and mobile phones. Methanol is also a widely used fuel in camping and boating stoves. Methanol burns well in an unpressurized burner, so alcohol stoves are often very simple, sometimes little more than a cup to hold fuel. This lack of complexity makes them a favorite of hikers who spend extended time in the wilderness. Similarly, the alcohol can be gelled to reduce risk of leaking or spilling, as with the brand "Sterno". Methanol is mixed with water and injected into high performance diesel and gasoline engines for an increase of power and a decrease in intake air temperature in a process known as water-methanol injection. Carbon monoxide and hydrogen react over a catalyst to produce methanol. Today, the most widely used catalyst is a mixture of copper and zinc oxides, supported on alumina, as first used by ICI in 1966. At 5–10 MPa (50–100 atm) and , the reaction is characterized by high selectivity (>99.8%): CO + 2 H2 → CH3OH. The production of synthesis gas from methane produces three moles of hydrogen for every mole of carbon monoxide, whereas the synthesis consumes only two moles of hydrogen gas per mole of carbon monoxide. One way of dealing with the excess hydrogen is to inject carbon dioxide into the methanol synthesis reactor, where it, too, reacts to form methanol according to the equation: CO2 + 3 H2 → CH3OH + H2O. In terms of mechanism, the process occurs via initial conversion of CO into CO2, which is then hydrogenated: CO2 + 3 H2 → CH3OH + H2O, where the H2O byproduct is recycled via the water-gas shift reaction: CO + H2O → CO2 + H2. This gives an overall reaction which is the same as listed above. The catalytic conversion of methane to methanol is effected by enzymes including methane monooxygenases. These enzymes are mixed-function oxygenases, i.e. oxygenation is coupled with production of water and NAD+.
CH4 + O2 + NADH + H+ → CH3OH + H2O + NAD+ Both Fe- and Cu-dependent enzymes have been characterized. Intense but largely fruitless efforts have been undertaken to emulate this reactivity. Methanol is more easily oxidized than is the feedstock methane, so the reactions tend not to be selective. Some strategies exist to circumvent this problem. Examples include Shilov systems and Fe- and Cu-containing zeolites. These systems do not necessarily mimic the mechanisms employed by metalloenzymes, but draw some inspiration from them. Active sites can vary substantially from those known in the enzymes. For example, a dinuclear active site is proposed in the sMMO enzyme, whereas a mononuclear iron (alpha-oxygen) is proposed in the Fe-zeolite. Methanol is highly flammable. Its vapours are heavier than air and can travel and ignite. Methanol fires should be extinguished with dry chemical, carbon dioxide, water spray or alcohol-resistant foam. Methanol is available commercially in various purity grades. Commercial methanol is generally classified according to ASTM purity grades A and AA. Methanol for chemical use normally corresponds to Grade AA. In addition to water, typical impurities include acetone and ethanol (which are very difficult to separate by distillation). UV-vis spectroscopy is a convenient method for detecting aromatic impurities. Water content can be determined by Karl Fischer titration. In their embalming process, the ancient Egyptians used a mixture of substances, including methanol, which they obtained from the pyrolysis of wood. Pure methanol, however, was first isolated in 1661 by Robert Boyle, when he produced it via the distillation of buxus (boxwood). It later became known as "pyroxylic spirit". In 1834, the French chemists Jean-Baptiste Dumas and Eugene Peligot determined its elemental composition.
They also introduced the word "methylène" to organic chemistry, forming it from Greek "methy" = "alcoholic liquid" + "hȳlē" = "forest, wood, timber, material". "Methylène" designated a "radical" that was about 14% hydrogen by weight and contained one carbon atom. This would be CH2, but at the time carbon was thought to have an atomic weight only six times that of hydrogen, so they gave the formula as CH. They then called wood alcohol (l'esprit de bois) "bihydrate de méthylène" (bihydrate because they thought the formula was C4H8O4 = (CH)4(H2O)2). The term "methyl" was derived in about 1840 by back-formation from "methylene", and was then applied to describe "methyl alcohol". This was shortened to "methanol" in 1892 by the International Conference on Chemical Nomenclature.
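The syngas mole balance described earlier in this article (steam reforming of methane yields three moles of hydrogen per mole of carbon monoxide, methanol synthesis consumes only two, and injected carbon dioxide absorbs the surplus) can be sketched as a short calculation. The Python below is an illustrative mole balance only, not a process model; the function names are invented for the example.

```python
# Illustrative mole balance for methanol synthesis from methane-derived syngas.
# Reforming:  CH4 + H2O -> CO + 3 H2     (3 mol H2 produced per mol CO)
# Synthesis:  CO + 2 H2 -> CH3OH         (2 mol H2 consumed per mol CO)
# CO2 route:  CO2 + 3 H2 -> CH3OH + H2O  (absorbs 3 mol surplus H2 per mol CO2)

def surplus_h2(mol_co):
    """Hydrogen left over after all CO from reforming is converted to methanol."""
    return 3 * mol_co - 2 * mol_co

def co2_absorbable(mol_co):
    """Moles of injected CO2 that the surplus hydrogen can hydrogenate."""
    return surplus_h2(mol_co) / 3

mol_co = 3
print(f"{mol_co} mol CO leaves {surplus_h2(mol_co)} mol surplus H2,")
print(f"enough to convert {co2_absorbable(mol_co):g} mol CO2 to methanol")
```

For every three moles of CO converted, the surplus hydrogen is exactly enough to hydrogenate one additional mole of CO2, which is why CO2 injection balances the reactor feed.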
https://en.wikipedia.org/wiki?curid=19712
Milk Milk is a white, nutrient-rich liquid food produced in the mammary glands of mammals. It is the primary source of nutrition for infant mammals (including humans who are breastfed) before they are able to digest other types of food. Early-lactation milk contains colostrum, which carries the mother's antibodies to its young and can reduce the risk of many diseases. It contains many other nutrients including protein and lactose. Interspecies consumption of milk is not uncommon, particularly among humans, many of whom consume the milk of other mammals. As an agricultural product, milk, also called "dairy milk", is extracted from farm animals during or soon after pregnancy. Dairy farms produced about 730 million tonnes of milk in 2011, from 260 million dairy cows. India is the world's largest producer of milk, and is the leading exporter of skimmed milk powder, yet it exports few other milk products. Ever-increasing domestic demand for dairy products and a large demand-supply gap could lead to India being a net importer of dairy products in the future. New Zealand, Germany and the Netherlands are the largest exporters of milk products. China and Russia were the world's largest importers of milk and milk products until 2016 when both countries became self-sufficient, contributing to a worldwide glut of milk. Throughout the world, more than six billion people consume milk and milk products. Between 750 and 900 million people live in dairy farming households. The term "milk" comes from Old English "meoluc" (West Saxon) and "milc" (Anglian), from Proto-Germanic *"meluks" "milk" (source also of Old Norse "mjolk", Old Frisian "melok", Old Saxon "miluk", Dutch "melk", Old High German "miluh", German "Milch", Gothic "miluks").
In food use, from 1961, the term "milk" has been defined under Codex Alimentarius standards as: "the normal mammary secretion of milking animals obtained from one or more milkings without either addition to it or extraction from it, intended for consumption as liquid milk or for further processing." The term "dairy" relates to animal milk and animal milk production. A substance secreted by pigeons to feed their young is called "crop milk" and bears some resemblance to mammalian milk, although it is not consumed as a milk substitute. The definition above precludes non-animal products which resemble dairy milk in color and texture, such as almond milk, coconut milk, rice milk, and soy milk. In English, the word "milk" has been used to refer to "milk-like plant juices" since 1200 AD. Traditionally a variety of non-dairy products have been described with the word "milk", including the traditional digestive remedies milk of magnesia and milk of bismuth. Latex, the complex inedible emulsion that exudes from the stems of certain plants, is generally described as "milky" and is often sold as "rubber milk" because of its white appearance. The word "latex" itself is derived from the Spanish word for milk. A 2018 survey by the International Food Information Council Foundation suggests consumers in the United States do not typically confuse plant-based milk analogues with animal milk and dairy products. In the US, (mostly plant-based) milk alternatives now command 13% of the "milk" market, leading the US dairy industry to attempt, multiple times, to sue producers of dairy milk alternatives, to have the name "milk" limited to animal milk, so far without success. The Food and Drug Administration generally supports restricting the term "milk", while the US Department of Agriculture supports the continued use of terms such as "soymilk".
In the European Union, words such as milk, butter, cheese, cream and yogurt are legally restricted to animal products, with exceptions such as "coconut milk", "almond milk", "peanut butter", and "ice cream". Production of milk substitutes from vats of brewer's yeast is under development by organizations including Impossible Foods, Muufri, and the biohacker group "Real Vegan Cheese". Some components would be chemically identical to those in animal-derived milk; others, such as lactose, to which many people are intolerant, may be substituted. Milk is consumed in two distinct overall ways: as a natural source of nutrition for all infant mammals, and as a food product obtained from other mammals for consumption by humans of all ages. In almost all mammals, milk is fed to infants through breastfeeding, either directly or by expressing the milk to be stored and consumed later. The early milk from mammals is called colostrum. Colostrum contains antibodies that provide protection to the newborn baby as well as nutrients and growth factors. The makeup of the colostrum and the period of secretion varies from species to species. For humans, the World Health Organization recommends exclusive breastfeeding for six months and breastfeeding in addition to other food for up to two years of age or more. In some cultures it is common to breastfeed children for three to five years, and the period may be longer. Fresh goats' milk is sometimes substituted for breast milk, which introduces the risk of the child developing electrolyte imbalances, metabolic acidosis, megaloblastic anemia, and a host of allergic reactions. In many cultures, especially in the West, humans continue to consume milk beyond infancy, using the milk of other mammals (especially cattle, goats and sheep) as a food product. Initially, the ability to digest milk was limited to children as adults did not produce lactase, an enzyme necessary for digesting the lactose in milk.
People therefore converted milk to curd, cheese and other products to reduce the levels of lactose. Thousands of years ago, a chance mutation spread in human populations in Europe that enabled the production of lactase in adulthood. This mutation allowed milk to be used as a new source of nutrition which could sustain populations when other food sources failed. Milk is processed into a variety of products such as cream, butter, yogurt, kefir, ice cream, and cheese. Modern industrial processes use milk to produce casein, whey protein, lactose, condensed milk, powdered milk, and many other food-additives and industrial products. Whole milk, butter and cream have high levels of saturated fat. The sugar lactose is found only in milk, forsythia flowers, and a few tropical shrubs. The enzyme needed to digest lactose, lactase, reaches its highest levels in the human small intestine after birth and then begins a slow decline unless milk is consumed regularly. Those groups who do continue to tolerate milk, however, often have exercised great creativity in using the milk of domesticated ungulates, not only of cattle, but also sheep, goats, yaks, water buffalo, horses, reindeer and camels. India is the largest producer and consumer of cattle and buffalo milk in the world. Humans first learned to consume the milk of other mammals regularly following the domestication of animals during the Neolithic Revolution or the development of agriculture. This development occurred independently in several global locations from as early as 9000–7000 BC in Mesopotamia to 3500–3000 BC in the Americas. People first domesticated the most important dairy animals – cattle, sheep and goats – in Southwest Asia, although domestic cattle had been independently derived from wild aurochs populations several times since.
Initially animals were kept for meat, and archaeologist Andrew Sherratt has suggested that dairying, along with the exploitation of domestic animals for hair and labor, began much later in a separate secondary products revolution in the fourth millennium BC. Sherratt's model is not supported by recent findings, based on the analysis of lipid residue in prehistoric pottery, that shows that dairying was practiced in the early phases of agriculture in Southwest Asia, by at least the seventh millennium BC. From Southwest Asia domestic dairy animals spread to Europe (beginning around 7000 BC, though they did not reach Britain and Scandinavia until after 4000 BC) and South Asia (7000–5500 BC). The first farmers in central Europe and Britain milked their animals. Pastoral and pastoral nomadic economies, which rely predominantly or exclusively on domestic animals and their products rather than crop farming, were developed as European farmers moved into the Pontic-Caspian steppe in the fourth millennium BC, and subsequently spread across much of the Eurasian steppe. Sheep and goats were introduced to Africa from Southwest Asia, but African cattle may have been independently domesticated around 7000–6000 BC. Camels, domesticated in central Arabia in the fourth millennium BC, have also been used as dairy animals in North Africa and the Arabian Peninsula. The earliest Egyptian records of burn treatments describe burn dressings using milk from mothers of male babies. In the rest of the world (i.e., East and Southeast Asia, the Americas and Australia) milk and dairy products were historically not a large part of the diet, either because they remained populated by hunter-gatherers who did not keep animals or the local agricultural economies did not include domesticated dairy species. Milk consumption became common in these regions comparatively recently, as a consequence of European colonialism and political domination over much of the world in the last 500 years.
In the Middle Ages, milk was called the "virtuous white liquor" because alcoholic beverages were safer to consume than water. The growth in urban population, coupled with the expansion of the railway network in the mid-19th century, brought about a revolution in milk production and supply. Individual railway firms began transporting milk from rural areas to London from the 1840s and 1850s. Possibly the first such instance was in 1846, when St Thomas's Hospital in Southwark contracted with milk suppliers outside London to ship milk by rail. The Great Western Railway was an early and enthusiastic adopter, and began to transport milk into London from Maidenhead in 1860, despite much criticism. By 1900, the company was transporting over 25 million gallons annually. The milk trade grew slowly through the 1860s, but went through a period of extensive, structural change in the 1870s and 1880s. Urban demand began to grow, as consumer purchasing power increased and milk became regarded as a required daily commodity. Over the last three decades of the 19th century, demand for milk in most parts of the country doubled or, in some cases, tripled. Legislation in 1875 made the adulteration of milk illegal. This was combined with a marketing campaign to change the image of milk. The proportion of rural imports by rail as a percentage of total milk consumption in London grew from under 5% in the 1860s to over 96% by the early 20th century. By that point, the supply system for milk was the most highly organized and integrated of any food product. The first glass bottle packaging for milk was used in the 1870s. The first company to do so may have been the New York Dairy Company in 1877. The Express Dairy Company in England began glass bottle production in 1880. In 1884, Hervey Thatcher, an American inventor from New York, invented a glass milk bottle, called "Thatcher's Common Sense Milk Jar," which was sealed with a waxed paper disk.
Later, in 1932, plastic-coated paper milk cartons were introduced commercially. In 1863, French chemist and biologist Louis Pasteur invented pasteurization, a method of killing harmful bacteria in beverages and food products. He developed this method while on summer vacation in Arbois, to remedy the frequent acidity of the local wines. He found out experimentally that it is sufficient to heat a young wine to only about for a brief time to kill the microbes, and that the wine could nevertheless be properly aged without sacrificing the final quality. In honor of Pasteur, the process became known as "pasteurization". Pasteurization was originally used as a way of preventing wine and beer from souring. Commercial pasteurizing equipment was produced in Germany in the 1880s, and producers adopted the process in Copenhagen and Stockholm by 1885. Continued improvements in the efficiency of milk production led to a worldwide glut of milk by 2016. Russia and China became self-sufficient and stopped importing milk. Canada has tried to restrict milk production by requiring new farmers, or those increasing capacity, to "buy in" at C$24,000 per cow. Importing milk is prohibited. The European Union theoretically stopped subsidizing dairy farming in 2015. Direct subsidies were replaced by "environmental incentives", which result in the government buying milk when the price falls to €200 per . The United States has a voluntary insurance program that pays farmers depending upon the price of milk and the cost of feed. The females of all mammal species can by definition produce milk, but cow's milk dominates commercial production. In 2011, FAO estimates 85% of all milk worldwide was produced from cows. Human milk is not produced or distributed industrially or commercially; however, human milk banks collect donated human breastmilk and redistribute it to infants who may benefit from human milk for various reasons (premature neonates, babies with allergies, metabolic diseases, etc.)
but who cannot breastfeed. In the Western world, cow's milk is produced on an industrial scale and is by far the most commonly consumed form of milk. Commercial dairy farming using automated milking equipment produces the vast majority of milk in developed countries. Dairy cattle such as the Holstein have been bred selectively for increased milk production. About 90% of the dairy cows in the United States and 85% in Great Britain are Holsteins. Other dairy cows in the United States include Ayrshire, Brown Swiss, Guernsey, Jersey and Milking Shorthorn (Dairy Shorthorn). Aside from cattle, many kinds of livestock provide milk used by humans for dairy products. These animals include water buffalo, goat, sheep, camel, donkey, horse, reindeer and yak. The first four respectively produced about 11%, 2%, 1.4% and 0.2% of all milk worldwide in 2011. In Russia and Sweden, small moose dairies also exist. According to the U.S. National Bison Association, American bison (also called American buffalo) are not milked commercially; however, various sources report cows resulting from cross-breeding bison and domestic cattle are good milk producers, and have been used both during the European settlement of North America and during the development of commercial Beefalo in the 1970s and 1980s. Swine are almost never milked, even though their milk is similar to cow's milk and perfectly suitable for human consumption. The main reasons for this are that milking a sow's numerous small teats is very cumbersome, and that sows cannot store their milk as cows can. A few pig farms do sell pig cheese as a novelty item; these cheeses are exceedingly expensive. In 2012, the largest producer of milk and milk products was India, followed by the United States of America, China, Pakistan and Brazil. All 28 European Union members together produced 153.8 million tonnes of milk in 2013, the largest by any politico-economic union.
Increasing affluence in developing countries, as well as increased promotion of milk and milk products, has led to a rise in milk consumption in developing countries in recent years. In turn, the opportunities presented by these growing markets have attracted investments by multinational dairy firms. Nevertheless, in many countries production remains on a small scale and presents significant opportunities for diversification of income sources by small farms. Local milk collection centers, where milk is collected and chilled prior to being transferred to urban dairies, are a good example of where farmers have been able to work on a cooperative basis, particularly in countries such as India. The FAO reports that Israeli dairy farms are the most productive in the world, with a yield of milk per cow per year. A survey covering 2001 to 2007, conducted by ICAR (the International Committee for Animal Recording) across 17 developed countries, found that the average herd size in these countries increased from 74 to 99 cows per herd between 2001 and 2007. A dairy farm had an average of 19 cows per herd in Norway, and 337 in New Zealand. Annual milk production in the same period increased from per cow in these developed countries. The lowest average production was in New Zealand at per cow. The milk yield per cow depended on production systems, nutrition of the cows, and only to a minor extent on differences in the genetic potential of the animals. What the cow ate had the greatest impact on the production obtained. New Zealand cows, with the lowest yield per year, grazed all year, in contrast to Israel, with the highest yield, where the cows ate in barns with an energy-rich mixed diet. The milk yield per cow in the United States, the world's largest cow's milk producer, was per year in 2010. In contrast, the milk yields per cow in India and China – the second and third largest producers – were respectively and per year.
It was reported in 2007 that with increased worldwide prosperity and the competition of bio-fuel production for feed stocks, both the demand for and the price of milk had substantially increased worldwide. Particularly notable was the rapid increase of consumption of milk in China and the rise of the price of milk in the United States above the government-subsidized price. In 2010 the Department of Agriculture predicted farmers would receive an average of $1.35 per U.S. gallon of cow's milk (35 cents per liter), down 30 cents per gallon from 2007 and below the break-even point for many cattle farmers. The consumption of cow's milk poses numerous threats to the natural environment. Compared to plant milks, cow's milk requires the most land and water, and its production results in the greatest amount of greenhouse gas (GHG) emissions, air pollution, and water pollution. A 2010 UN report, "Assessing the Environmental Impacts of Consumption and Production", argued that animal products, including dairy, "in general require more resources and cause higher emissions than plant-based alternatives". It proposed a move away from animal products to reduce environmental damage. The global water footprint of animal agriculture is 2,422 billion cubic meters of water (one-fourth of the total global water footprint), 19 percent of which is related to dairy cattle. A 2012 study found that 98 percent of milk's footprint can be traced back to the cows' food. A 2010 Food and Agriculture Organization report found that the global dairy sector contributes four percent of the total global anthropogenic GHG emissions. This figure includes emissions allotted to milk production, processing and transportation, and the emissions from fattening and slaughtering dairy cows. The same report found that 52 percent of the GHGs produced by dairy cattle are methane, and nitrous oxide makes up another 27 percent of dairy cattle's GHG emissions.
It is estimated that cows produce between 250 and 500 liters of methane a day. Methane has a heat-trapping potential nearly 100 times larger than that of carbon dioxide, and nitrous oxide has a global warming potential almost 300 times greater than that of carbon dioxide. Milk is an emulsion or colloid of butterfat globules within a water-based fluid that contains dissolved carbohydrates and protein aggregates with minerals. Because it is produced as a food source for the young, all of its contents provide benefits for growth. The principal requirements are energy (lipids, lactose, and protein), biosynthesis of non-essential amino acids supplied by proteins (essential amino acids and amino groups), essential fatty acids, vitamins and inorganic elements, and water. The pH of milk ranges from 6.4 to 6.8 and it changes over time. Milk from other bovines and non-bovine mammals varies in composition, but has a similar pH. Initially milk fat is secreted in the form of a fat globule surrounded by a membrane. Each fat globule is composed almost entirely of triacylglycerols and is surrounded by a membrane consisting of complex lipids such as phospholipids, along with proteins. These act as emulsifiers which keep the individual globules from coalescing and protect the contents of these globules from various enzymes in the fluid portion of the milk. Although 97–98% of lipids are triacylglycerols, small amounts of di- and monoacylglycerols, free cholesterol and cholesterol esters, free fatty acids, and phospholipids are also present. Unlike protein and carbohydrates, the fat composition of milk varies widely due to genetic, lactational, and nutritional differences between species. As with composition, fat globule size varies from less than 0.2 to about 15 micrometers in diameter between species. Diameter may also vary between animals within a species and at different times within a milking of a single animal.
In unhomogenized cow's milk, the fat globules have an average diameter of two to four micrometers; with homogenization, they average around 0.4 micrometers. The fat-soluble vitamins A, D, E, and K, along with essential fatty acids such as linoleic and linolenic acid, are found within the milk fat portion of the milk. Normal bovine milk contains 30–35 grams of protein per liter, of which about 80% is arranged in casein micelles. Total proteins in milk represent 3.2% of its composition (nutrition table). The largest structures in the fluid portion of the milk are "casein micelles": aggregates of several thousand protein molecules with superficial resemblance to a surfactant micelle, bonded with the help of nanometer-scale particles of calcium phosphate. Each casein micelle is roughly spherical and about a tenth of a micrometer across. There are four different types of casein proteins: αs1-, αs2-, β-, and κ-caseins. Most of the casein proteins are bound into the micelles. There are several competing theories regarding the precise structure of the micelles, but they share one important feature: the outermost layer consists of strands of one type of protein, κ-casein, reaching out from the body of the micelle into the surrounding fluid. These κ-casein molecules all have a negative electrical charge and therefore repel each other, keeping the micelles separated under normal conditions and in a stable colloidal suspension in the water-based surrounding fluid. Milk contains dozens of other types of proteins besides caseins, including enzymes. These other proteins are more water-soluble than caseins and do not form larger structures. Because these proteins remain suspended in the whey left when the caseins coagulate into curds, they are collectively known as "whey proteins". Lactoglobulin is the most common whey protein by a large margin. The ratio of caseins to whey proteins varies greatly between species; for example, it is 82:18 in cows and around 32:68 in humans.
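As a rough consistency check, the protein figures above (30–35 g/L total protein, about 80% of it in casein micelles, and a casein-to-whey ratio of roughly 82:18) can be reconciled with a few lines of arithmetic; this sketch uses only the numbers quoted in the text:

```python
def protein_split(total_g_per_l, casein_fraction):
    """Split total milk protein (g/L) into casein and whey portions."""
    casein = total_g_per_l * casein_fraction
    whey = total_g_per_l - casein
    return casein, whey

# Figures from the text: 30-35 g/L total protein, ~80% as casein.
for total in (30.0, 35.0):
    casein, whey = protein_split(total, 0.80)
    print(f"{total:g} g/L total -> {casein:.1f} g/L casein, {whey:.1f} g/L whey")
# An 80% casein share gives a casein:whey ratio of 80:20,
# close to the quoted 82:18 for bovine milk.
```

The slight mismatch (80:20 vs. 82:18) simply reflects that "about 80%" is a rounded figure.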
Minerals, or milk salts, are traditional names for a variety of cations and anions within bovine milk. Calcium, phosphate, magnesium, sodium, potassium, citrate, and chloride are all included as minerals, and they typically occur at concentrations of 5–40 mM. The milk salts strongly interact with casein, most notably calcium phosphate, which is present in excess, often in much greater excess than the solubility of solid calcium phosphate. In addition to calcium, milk is a good source of many vitamins: vitamins A, B6, B12, C, D, K, E, thiamine, niacin, biotin, riboflavin, folates, and pantothenic acid are all present in milk. For many years the most accepted theory of the structure of a micelle was that it was composed of spherical casein aggregates, called submicelles, that were held together by calcium phosphate linkages. However, there are two recent models of the casein micelle that dispute the existence of distinct submicellar structures within the micelle. The first, attributed to de Kruif and Holt, proposes that nanoclusters of calcium phosphate and the phosphopeptide fraction of beta-casein are the centerpiece of micellar structure. In this view, unstructured proteins organize around the calcium phosphate, giving rise to the micelle's structure, and no specific distinct structure is formed. In the second theory, proposed by Horne, the growth of calcium phosphate nanoclusters begins the process of micelle formation but is limited by binding phosphopeptide loop regions of the caseins. Once bound, protein-protein interactions are formed and polymerization occurs, in which κ-casein acts as an end cap, forming micelles with trapped calcium phosphate nanoclusters. Some sources indicate that the trapped calcium phosphate is in the form of Ca9(PO4)6, while others say it is similar to the structure of the mineral brushite, CaHPO4·2H2O. Milk contains several different carbohydrates, including lactose, glucose, galactose, and other oligosaccharides.
The lactose gives milk its sweet taste and contributes approximately 40% of whole cow's milk's calories. Lactose is a disaccharide composed of two simple sugars, glucose and galactose. Bovine milk averages 4.8% anhydrous lactose, which amounts to about 50% of the total solids of skimmed milk. Levels of lactose depend on the type of milk, as other carbohydrates can be present at higher concentrations than lactose in some milks. Other components found in raw cow's milk are living white blood cells, mammary gland cells, various bacteria, and a large number of active enzymes. Both the fat globules and the smaller casein micelles, which are just large enough to deflect light, contribute to the opaque white color of milk. The fat globules contain some yellow-orange carotene, enough in some breeds (such as Guernsey and Jersey cattle) to impart a golden or "creamy" hue to a glass of milk. The riboflavin in the whey portion of milk has a greenish color, which sometimes can be discerned in skimmed milk or whey products. Fat-free skimmed milk has only the casein micelles to scatter light, and they tend to scatter shorter-wavelength blue light more than they do red, giving skimmed milk a bluish tint. In most Western countries, centralized dairy facilities process milk and products obtained from milk, such as cream, butter, and cheese. In the U.S., these dairies usually are local companies, while in the Southern Hemisphere facilities may be run by large multi-national corporations such as Fonterra. Pasteurization is used to kill harmful pathogenic bacteria by heating the milk for a short time and then immediately cooling it. Types of pasteurized milk include full cream, reduced fat, skim milk, calcium enriched, flavored, and UHT. The standard high-temperature short-time (HTST) process of 72 °C for 15 seconds completely kills pathogenic bacteria in milk, rendering it safe to drink for up to three weeks if continually refrigerated.
Dairies print best-before dates on each container, after which stores remove any unsold milk from their shelves. A side effect of the heating of pasteurization is that some vitamin and mineral content is lost. Soluble calcium and phosphorus decrease by 5%, thiamin and vitamin B12 by 10%, and vitamin C by 20%. Because losses are small in comparison to the large amount of the two B-vitamins present, milk continues to provide significant amounts of thiamin and vitamin B12. The loss of vitamin C is not nutritionally significant, as milk is not an important dietary source of vitamin C. Microfiltration is a process that partially replaces pasteurization and produces milk with fewer microorganisms and longer shelf life without a change in the taste of the milk. In this process, cream is separated from the skimmed milk and is pasteurized in the usual way, but the skimmed milk is forced through ceramic microfilters that trap 99.9% of microorganisms in the milk (as compared to 99.999% killing of microorganisms in standard HTST pasteurization). The skimmed milk then is recombined with the pasteurized cream to reconstitute the original milk composition. Ultrafiltration uses finer filters than microfiltration, which allow lactose and water to pass through while retaining fats, calcium and protein. As with microfiltration, the fat may be removed before filtration and added back in afterwards. Ultrafiltered milk is used in cheesemaking, since it has reduced volume for a given protein content, and is sold directly to consumers as a higher-protein, lower-sugar, and creamier alternative to regular milk. Upon standing for 12 to 24 hours, fresh milk has a tendency to separate into a high-fat cream layer on top of a larger, low-fat milk layer. The cream often is sold as a separate product with its own uses. Today the separation of the cream from the milk usually is accomplished rapidly in centrifugal cream separators.
The fat globules rise to the top of a container of milk because fat is less dense than water. The smaller the globules, the more other molecular-level forces prevent this from happening. The cream rises in cow's milk much more quickly than a simple model would predict: rather than isolated globules, the fat in the milk tends to form into clusters containing about a million globules, held together by a number of minor whey proteins. These clusters rise faster than individual globules can. The fat globules in milk from goats, sheep, and water buffalo do not form clusters as readily and are smaller to begin with, resulting in a slower separation of cream from these milks. Milk often is homogenized, a treatment that prevents a cream layer from separating out of the milk. The milk is pumped at high pressures through very narrow tubes, breaking up the fat globules through turbulence and cavitation. A greater number of smaller particles possess more total surface area than a smaller number of larger ones, and the original fat globule membranes cannot completely cover them. Casein micelles are attracted to the newly exposed fat surfaces. Nearly one-third of the micelles in the milk end up participating in this new membrane structure. The casein weighs down the globules and interferes with the clustering that accelerated separation. The exposed fat globules are vulnerable to certain enzymes present in milk, which could break down the fats and produce rancid flavors. To prevent this, the enzymes are inactivated by pasteurizing the milk immediately before or during homogenization. Homogenized milk tastes blander but feels creamier in the mouth than unhomogenized. It is whiter and more resistant to developing off flavors. Creamline (or cream-top) milk is unhomogenized. It may or may not have been pasteurized. 
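The "simple model" mentioned above for the rise of isolated fat globules is Stokes' law for a buoyant sphere in a viscous fluid. The sketch below uses the globule diameters quoted in the text (about 0.4 µm after homogenization, 2–4 µm unhomogenized); the density and viscosity values are illustrative assumptions, not figures from this article:

```python
def stokes_rise_velocity(diameter_m, rho_fluid, rho_particle, viscosity):
    """Terminal rise velocity (m/s) of a small sphere under Stokes' law.
    Positive when the particle is less dense than the surrounding fluid."""
    g = 9.81  # gravitational acceleration, m/s^2
    r = diameter_m / 2
    return 2 * r**2 * g * (rho_fluid - rho_particle) / (9 * viscosity)

# Assumed illustrative values: skim-milk density ~1035 kg/m^3,
# milk-fat density ~915 kg/m^3, milk viscosity ~2e-3 Pa*s.
for d_um in (0.4, 3.0):  # homogenized vs. unhomogenized globule diameters
    v = stokes_rise_velocity(d_um * 1e-6, 1035.0, 915.0, 2e-3)
    print(f"{d_um} um globule rises ~{v * 3.6e6:.2f} mm/h")
# A ~3 um globule rises on the order of 1 mm/h under these assumptions.
```

Because the velocity scales with the square of the diameter, the ~0.4 µm globules of homogenized milk rise dozens of times more slowly than unhomogenized globules, and the million-globule clusters described above rise far faster than any single globule, which is why observed creaming outpaces this simple model.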
Milk that has undergone high-pressure homogenization, sometimes labeled as "ultra-homogenized", has a longer shelf life than milk that has undergone ordinary homogenization at lower pressures. Ultra-heat treatment (UHT) is a type of milk processing in which all bacteria are destroyed with high heat to extend the milk's shelf life for up to 6 months, as long as the package is not opened. The milk is first homogenized and then heated to 138 °C for 1–3 seconds. It is immediately cooled and packed into a sterile container. As a result of this treatment, all the pathogenic bacteria within the milk are destroyed, unlike when milk is merely pasteurized. The milk will keep for up to 6 months if unopened. UHT milk does not need to be refrigerated until the package is opened, which makes it easier to ship and store. But in this process there is a loss of vitamin B1 and vitamin C, and there is also a slight change in the taste of the milk. The composition of milk differs widely among species. Factors such as the type of protein; the proportions of protein, fat, and sugar; the levels of various vitamins and minerals; the size of the butterfat globules; and the strength of the curd are among those that may vary. For example, donkey and horse milk have the lowest fat content, while the milk of seals and whales may contain more than 50% fat. These compositions vary by breed, animal, and point in the lactation period. The protein range for these four breeds is 3.3% to 3.9%, while the lactose range is 4.7% to 4.9%. Milk fat percentages may be manipulated by dairy farmers' stock diet formulation strategies. Mastitis infection can cause fat levels to decline. Processed cow's milk was formulated to contain differing amounts of fat during the 1950s. One cup (250 mL) of 2%-fat cow's milk contains 285 mg of calcium, which represents 22% to 29% of the daily recommended intake (DRI) of calcium for an adult.
Depending on its age, milk contains 8 grams of protein, and a number of other nutrients (either naturally or through fortification) including: Overall, cow's milk can be useful for improving a diet that is otherwise of poor quality, but for a diet that is otherwise healthy it is redundant or possibly harmful. There is no good evidence that drinking milk helps prevent bone fractures, even though the American government recommends it for that purpose. A 2008 review found evidence suggesting that consumption of milk is effective at promoting muscle growth. Some studies have suggested that conjugated linoleic acid, which can be found in dairy products, is an effective supplement for reducing body fat. Calcium from dairy products has a greater bioavailability than calcium from certain vegetables, such as spinach, that contain high levels of calcium-chelating agents, but a similar or lesser bioavailability than calcium from low-oxalate vegetables such as kale, broccoli, or other vegetables in the genus "Brassica". A 2009 meta-analysis concluded that milk consumption is associated with acne. Lactose, the disaccharide sugar component of all milk, must be cleaved in the small intestine by the enzyme lactase in order for its constituents, galactose and glucose, to be absorbed. Lactose intolerance is a condition in which people have symptoms due to an insufficient amount of the enzyme lactase in the small intestine. Those affected vary in the amount of lactose they can tolerate before symptoms develop. These may include abdominal pain, bloating, diarrhea, gas, and nausea. Severity depends on the amount a person eats or drinks. Those affected are usually able to drink at least one cup of milk without developing significant symptoms, with greater amounts tolerated if drunk with a meal or throughout the day. Lactose intolerance does not cause damage to the gastrointestinal tract. There are four types: primary, secondary, developmental, and congenital.
Primary lactose intolerance occurs when the amount of lactase declines as people age. Secondary lactose intolerance is due to injury to the small intestine, such as from infection, celiac disease, inflammatory bowel disease, or other diseases. Developmental lactose intolerance may occur in premature babies and usually improves over a short period of time. Congenital lactose intolerance is an extremely rare genetic disorder in which little or no lactase is made from birth. When lactose intolerance is due to secondary lactase deficiency, treatment of the underlying disease allows lactase activity to return to normal levels. Lactose intolerance is different from a milk allergy. The number of people with lactose intolerance is unknown. The number of adults who cannot produce enough lactase in their small intestine varies markedly in different populations. Since lactase's only function is the digestion of lactose in milk, in most mammal species the activity of the enzyme is dramatically reduced after weaning. Within most human populations, however, some individuals have developed, by natural evolution, the ability to maintain high levels of lactase in their small intestine throughout their lives, as an adaptation to the consumption of nonhuman milk and dairy products beyond infancy. This ability, which allows them to digest lactose into adulthood, is called lactase persistence. The distribution of people with lactase persistence is not homogeneous in the world. For instance, people with lactase persistence make up more than 90% of the population in Northern Europe, and as little as 5% in parts of Asia and Africa. Milk and dairy products have the potential for causing serious infection in newborn infants. Unpasteurized milk and cheeses can promote the growth of "Listeria" bacteria. "Listeria monocytogenes" can also cause serious infection in infants and pregnant women, and can be transmitted to the infant in utero or after birth.
The infection has the potential of seriously harming or even causing the death of a preterm infant, an infant of low or very low birth weight, or an infant with a congenital defect of the immune system. The presence of this pathogen can sometimes be determined by the symptoms that appear as a gastrointestinal illness in the mother. The mother can also acquire infection from ingesting foods that contain animal products such as hot dogs, delicatessen meats, and cheese. Cow's milk allergy (CMA) is an immunologically mediated adverse reaction, rarely fatal, to one or more cow's milk proteins. About 2.2–3.5% of the global infant population is allergic to cow's milk. Milk must be offered at every meal if a United States school district wishes to get reimbursement from the federal government. A quarter of the largest school districts in the U.S. offer rice or soy milk, and almost 17% of all U.S. school districts offer lactose-free milk. Of the milk served in U.S. school cafeterias, 71% is flavored, causing some school districts to propose a ban because flavored milk has added sugars. (Though some flavored milk products use artificial sweeteners instead.) The Boulder, Colorado, school district banned flavored milk in 2009. To keep consumption up, the district installed a milk dispenser. The mammary gland is thought to have derived from apocrine skin glands. It has been suggested that the original function of lactation (milk production) was keeping eggs moist. Much of the argument is based on monotremes (egg-laying mammals). The original adaptive significance of milk secretions may have been nutrition or immunological protection. This secretion gradually became more copious and accrued nutritional complexity over evolutionary time. Tritylodontid cynodonts seem to have displayed lactation, based on their dental replacement patterns. Since November 1993, recombinant bovine somatotropin (rbST), also called rBGH, has been sold to dairy farmers with FDA approval.
Cows produce bovine growth hormone naturally, but some producers administer an additional recombinant version of BGH, produced through genetically engineered E. coli, to increase milk production. Bovine growth hormone also stimulates liver production of insulin-like growth factor 1 (IGF1). The U.S. Food and Drug Administration, the National Institutes of Health and the World Health Organization have reported that both of these compounds are safe for human consumption at the amounts present. Milk from cows given rBST may be sold in the United States, and the FDA stated that no significant difference has been shown between milk derived from rBST-treated and that from non-rBST-treated cows. Milk advertised as coming from cows not treated with rBST is required to state this finding on its label. Cows receiving rBGH supplements may more frequently contract an udder infection known as mastitis. Problems with mastitis have led Canada, Australia, New Zealand, and Japan to ban milk from rBST-treated cows. Mastitis, among other diseases, may be responsible for the naturally varying levels of white blood cells in milk. rBGH is also banned in the European Union, for reasons of animal welfare. Vegans and some other vegetarians do not consume milk for reasons mostly related to animal rights and environmental concerns. They may object to features of dairy farming including the necessity of keeping dairy cows pregnant, the killing of almost all the male offspring of dairy cows (either by disposal soon after birth, for veal production, or for beef), the routine separation of mother and calf soon after birth, other perceived inhumane treatment of dairy cattle, and the culling of cows after their productive lives. Other people who do not drink milk are convinced that milk is not necessary for good health or that it may cause adverse health effects. Several studies have indicated that milk consumption does not result in stronger bones.
Other studies have found that milk intake increases the risk of acquiring acne. Related is the criticism of governments' promotion of milk and cheese consumption, particularly in the U.S. First, the science remains unsettled as to the effects of calcium intake on human iron processing. In addition, the U.S. government, particularly through school food programs, encourages the daily consumption of milk and, increasingly since the 1970s, cheese. According to Daniel Imhoff and Christina Badracco (2019), since the 1970s the "subsidy programs have steadily supported an oversupply of milk. Although milk consumption has decreased over the same time span [...] Cheese is now the top source of saturated fat in the US diet, contributing almost 9 percent". All United States schools that are part of the federally funded National School Lunch Act are required by the federal government to provide milk for all students. The Office of Dietary Supplements recommends that healthy adults between ages 19 and 50 get about 1,000 mg of calcium per day. It is often argued that it is unnatural for humans to drink milk from cows (or other animals) because mammals normally do not drink milk beyond the weaning period, nor do they drink milk from another species. Milk production is also resource intensive: on a global weighted average, producing a given volume of milk requires about a thousand times that volume of water. Milk products are sold in a number of varieties based on types/degrees of: Milk preserved by the UHT process does not need to be refrigerated before opening and has a much longer shelf life (six months) than milk in ordinary packaging. It is typically sold unrefrigerated in the UK, U.S., Europe, Latin America, and Australia. Lactose-free milk can be produced by passing milk over lactase enzyme bound to an inert carrier. Once the lactose molecule is cleaved, it causes no ill effects.
Forms are available with reduced amounts of lactose (typically 30% of normal), and alternatively with nearly 0%. The only noticeable difference from regular milk is a slightly sweeter taste due to the generation of glucose by lactose cleavage. It does not, however, contain more glucose, and is nutritionally identical to regular milk. Finland, where approximately 17% of the Finnish-speaking population has hypolactasia, has had "HYLA" (an acronym for "hydrolysed lactose") products available for many years. In these low-lactose cow's milk products, ranging from ice cream to cheese, the lactose is enzymatically hydrolysed into glucose and galactose. The ultra-pasteurization process, combined with aseptic packaging, ensures a long shelf life. In 2001, Valio launched a lactose-free milk drink that is not sweet like HYLA milk but has the fresh taste of ordinary milk. Valio patented the chromatographic separation method to remove lactose. Valio also markets these products in Sweden, Estonia, Belgium, and the United States, where the company says ultrafiltration is used. In the UK, where an estimated 4.7% of the population are affected by lactose intolerance, Lactofree produces milk, cheese, and yogurt products that contain only 0.03% lactose. To aid digestion in those with lactose intolerance, milk with added bacterial cultures such as "Lactobacillus acidophilus" ("acidophilus milk") and bifidobacteria ("a/B milk") is available in some areas. Another milk with "Lactococcus lactis" bacteria cultures ("cultured buttermilk") often is used in cooking to replace the traditional use of naturally soured milk, which has become rare due to the ubiquity of pasteurization, which also kills the naturally occurring Lactococcus bacteria. Lactose-free and lactose-reduced milk can also be produced via ultrafiltration, which removes smaller molecules such as lactose and water while leaving calcium and proteins behind. Milk produced via these methods has a lower sugar content than regular milk.
In areas where the cattle (and often the people) live indoors, commercially sold milk commonly has vitamin D added to it to make up for lack of exposure to UVB radiation. Reduced-fat milks often have added vitamin A palmitate to compensate for the loss of the vitamin during fat removal; in the United States this results in reduced-fat milks having a higher vitamin A content than whole milk. Milk often has flavoring added to it for better taste or as a means of improving sales. Chocolate milk has been sold for many years and has been followed more recently by strawberry milk and others. Some nutritionists have criticized flavored milk for adding sugar, usually in the form of high-fructose corn syrup, to the diets of children who are already commonly obese in the U.S. Due to the short shelf life of normal milk, it used to be delivered to households daily in many countries; however, improved refrigeration at home, changing food shopping patterns because of supermarkets, and the higher cost of home delivery mean that daily deliveries by a milkman are no longer available in most countries. In Australia and New Zealand, prior to metrication, milk was generally distributed in 1 pint (568 mL) glass bottles. In Australia and Ireland there was a government-funded "free milk for school children" program, and milk was distributed at morning recess in 1/3 pint bottles. With the conversion to metric measures, the milk industry was concerned that the replacement of the pint bottles with 500 mL bottles would result in a 13.6% drop in milk consumption; hence, all pint bottles were recalled and replaced by 600 mL bottles. With time, due to the steadily increasing cost of collecting, transporting, storing and cleaning glass bottles, they were replaced by cardboard cartons. A number of designs were used, including a tetrahedron which could be close-packed without waste space and could not be knocked over accidentally (slogan: "No more crying over spilt milk").
However, the industry eventually settled on a design similar to that used in the United States. Milk is now available in a variety of sizes in paperboard milk cartons (250 mL, 375 mL, 600 mL, 1 liter and 1.5 liters) and plastic bottles (1, 2 and 3 liters). A significant addition to the marketplace has been "long-life" milk (UHT), generally available in 1 and 2 liter rectangular cardboard cartons. In urban and suburban areas where there is sufficient demand, home delivery is still available, though in suburban areas this is often 3 times per week rather than daily. Another significant and popular addition to the marketplace has been flavored milks; for example, Farmers Union Iced Coffee outsells Coca-Cola in South Australia. In rural India, milk is home delivered, daily, by local milkmen carrying bulk quantities in a metal container, usually on a bicycle. In other parts of metropolitan India, milk is usually bought or delivered in plastic bags or cartons via shops or supermarkets. The current milk chain flow in India is from the milk producer to a milk collection agent. The milk is then transported to a milk chilling center and bulk transported to the processing plant, then to the sales agent and finally to the consumer. A 2011 survey by the Food Safety and Standards Authority of India found that nearly 70% of samples did not conform to the standards set for milk. The study found that, due to a lack of hygiene and sanitation in milk handling and packaging, detergents (used during cleaning operations) were not washed out properly and found their way into the milk. About 8% of samples in the survey were found to have detergents, which are hazardous to health. In Pakistan, milk is supplied in jugs. Milk has been a staple food, especially among the pastoral tribes in this country. Since the late 1990s, milk-buying patterns have changed drastically in the UK.
The classic milkman, who travels his local milk round (route) using a milk float (often battery-powered) during the early hours and delivers milk in 1 pint glass bottles with aluminium foil tops directly to households, has almost disappeared. Two of the main reasons for the decline of UK home deliveries by milkmen are household refrigerators (which lessen the need for daily milk deliveries) and private car usage (which has increased supermarket shopping). Another factor is that it is cheaper to purchase milk from a supermarket than from home delivery. In 1996, more than 2.5 billion liters of milk were still being delivered by milkmen, but by 2006 only 637 million liters (13% of milk consumed) were delivered by some 9,500 milkmen. By 2010, the estimated number of milkmen had dropped to 6,000. Assuming that delivery per milkman is the same as it was in 2006, milkman deliveries now account for only 6–7% of all milk consumed by UK households (6.7 billion liters in 2008/2009). Almost 95% of all milk in the UK is thus sold in shops today, most of it in plastic bottles of various sizes, but some also in milk cartons. Milk is hardly ever sold in glass bottles in UK shops. In the United States, glass milk bottles have been replaced mostly with milk cartons and plastic jugs. Gallons of milk are almost always sold in jugs, while half gallons and quarts may be found in both paper cartons and plastic jugs, and smaller sizes are almost always in cartons. The "half pint" milk carton is the traditional unit as a component of school lunches, though some companies have replaced that unit size with a plastic bottle, which is also available at retail in 6- and 12-pack sizes. Glass milk bottles are now rare. Most people purchase milk in bags, plastic bottles, or plastic-coated paper cartons. 
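The 6–7% estimate follows directly from the figures given in the text; a quick check using only those numbers:

```python
litres_2006 = 637e6      # litres delivered by milkmen in 2006
milkmen_2006 = 9_500     # estimated milkmen in 2006
milkmen_2010 = 6_000     # estimated milkmen by 2010
household_total = 6.7e9  # litres consumed by UK households, 2008/09

per_milkman = litres_2006 / milkmen_2006   # ~67,000 litres per milkman per year
litres_2010 = per_milkman * milkmen_2010   # ~402 million litres delivered
share = litres_2010 / household_total      # ~0.060, i.e. the quoted 6-7%

print(f"estimated delivered share: {share:.1%}")
```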
Ultraviolet (UV) light from fluorescent lighting can alter the flavor of milk, so many companies that once distributed milk in transparent or highly translucent containers are now using thicker materials that block the UV light. Milk comes in a variety of containers, with local variants. Practically everywhere, condensed milk and evaporated milk are distributed in metal cans, 250 and 125 mL paper containers and 100 and 200 mL squeeze tubes, and powdered milk (skim and whole) is distributed in boxes or bags. When raw milk is left standing for a while, it turns "sour". This is the result of fermentation, where lactic acid bacteria ferment the lactose in the milk into lactic acid. Prolonged fermentation may render the milk unpleasant to consume. This fermentation process is exploited by the introduction of bacterial cultures (e.g. "Lactobacillus" sp., "Streptococcus" sp., "Leuconostoc" sp., etc.) to produce a variety of fermented milk products. The reduced pH from lactic acid accumulation denatures proteins and causes the milk to undergo a variety of transformations in appearance and texture, ranging from aggregated curds to a smooth consistency. Some of these products include sour cream, yogurt, cheese, buttermilk, viili, kefir, and kumis. (See "Dairy product" for more information.) Pasteurization of cow's milk initially destroys any potential pathogens and increases the shelf life, but the milk eventually spoils, making it unsuitable for consumption. Spoilage gives the milk an unpleasant odor, and it is deemed non-consumable due to unpleasant taste and an increased risk of food poisoning. In raw milk, the presence of lactic acid-producing bacteria, under suitable conditions, ferments the lactose present to lactic acid. The increasing acidity in turn prevents the growth of other organisms, or slows their growth significantly. During pasteurization, however, these lactic acid bacteria are mostly destroyed. 
To prevent spoilage, milk can be kept refrigerated in bulk tanks. Most milk is pasteurized by heating briefly and then refrigerated to allow transport from factory farms to local markets. The spoilage of milk can be forestalled by using ultra-high-temperature (UHT) treatment. Milk so treated can be stored unrefrigerated for several months until opened but has a characteristic "cooked" taste. Condensed milk, made by removing most of the water, can be stored in cans for many years, unrefrigerated, as can evaporated milk. The most durable form of milk is powdered milk, which is produced from milk by removing almost all water. The moisture content is usually less than 5% in both drum- and spray-dried powdered milk. Freezing of milk can cause fat globule aggregation upon thawing, resulting in milky layers and butterfat lumps. These can be dispersed again by warming and stirring the milk. Freezing can also change the taste by destroying milk-fat globule membranes, releasing oxidized flavors. Milk is used to make yogurt, cheese, ice milk, pudding, hot chocolate and French toast, among many other products. Milk is often added to dry breakfast cereal, porridge and granola. Milk is mixed with ice cream and flavoured syrups in a blender to make milkshakes. Milk is often served in coffee and tea. Frothy steamed milk is used to prepare espresso-based drinks such as the caffè latte. The importance of milk in human culture is attested to by the numerous expressions embedded in our languages, for example, "the milk of human kindness", the expression "there's no use crying over spilt milk" (meaning "don't be unhappy about what cannot be undone"), "don't milk the ram" (meaning "to do or attempt something futile") and "Why buy a cow when you can get milk for free?" (meaning "why pay for something that you can get for free otherwise"). 
In Greek mythology, the Milky Way was formed after the trickster god Hermes placed the infant Heracles at the breast of Hera, the queen of the gods, to suckle while she was asleep. When Hera awoke, she tore Heracles away from her breast and splattered her breast milk across the heavens. In another version of the story, Athena, the patron goddess of heroes, tricked Hera into suckling Heracles voluntarily, but he bit her nipple so hard that she flung him away, spraying milk everywhere. In many African and Asian countries, butter is traditionally made from fermented milk rather than cream. It can take several hours of churning to produce workable butter grains from fermented milk. Holy books also mention milk. The Bible contains references to the "Land of Milk and Honey." The Qur'an asks readers to wonder at milk: "And surely in the livestock there is a lesson for you, We give you to drink of that which is in their bellies from the midst of digested food and blood, pure milk palatable for the drinkers" (16-The Honeybee, 66). The Ramadan fast is traditionally broken with a glass of milk and dates. Abhisheka is conducted by Hindu and Jain priests by pouring libations on the idol of a deity being worshipped, amidst the chanting of mantras. Offerings such as milk, yogurt, ghee and honey may be poured, among others, depending on the type of abhishekam being performed. A milksop is an "effeminate spiritless man," an expression attested from the late 14th century. Milk toast is a dish consisting of milk and toast. Its soft blandness served as inspiration for the name of the timid and ineffectual comic strip character Caspar Milquetoast, drawn by H. T. Webster from 1924 to 1952. Thus, the term "milquetoast" entered the language as the label for a timid, shrinking, apologetic person. Milk toast also appeared in Disney's "Follow Me Boys" as an undesirable breakfast for the aging main character Lem Siddons. 
To "milk" someone, in the vernacular of many English-speaking countries, is to take advantage of the person, by analogy to the way a farmer "milks" a cow and takes its milk. The word "milk" has had many slang meanings over time. In the 19th century, milk was used to describe a cheap and very poisonous alcoholic drink made from methylated spirits (methanol) mixed with water. The word was also used to mean defraud, to be idle, to intercept telegrams addressed to someone else, and a weakling or "milksop." In the mid-1930s, the word was used in Australia to refer to siphoning gas from a car. Besides serving as a beverage or source of food, milk has been used by farmers and gardeners as an organic fungicide and fertilizer, although its effectiveness is debated. Diluted milk solutions have been demonstrated to provide an effective method of preventing powdery mildew on grape vines, while being unlikely to harm the plant. Milk paint is a nontoxic water-based paint. It can be made from milk and lime, generally with pigments added for color. In other recipes, borax is mixed with milk's casein protein in order to activate the casein and act as a preservative. Milk has been used for centuries as a hair and skin treatment. Hairstylist Richard Marin states that some women rinse their hair with milk to give it a shiny appearance. Cosmetic chemist Ginger King states that milk can "help exfoliate and remove debris [from skin] and make hair softer." Hairstylist Danny Jelaca states that milk's keratin proteins may "add weight to the hair". Some commercial hair products contain milk. A milk bath is a bath taken in milk rather than just water. Often additives such as oatmeal, honey, and scents such as rose, daisies and essential oils are mixed in. Milk baths use lactic acid, an alpha hydroxy acid, to dissolve the proteins which hold together dead skin cells.
https://en.wikipedia.org/wiki?curid=19714
Magnetism Magnetism is a class of physical phenomena that are mediated by magnetic fields. Electric currents and the magnetic moments of elementary particles give rise to a magnetic field, which acts on other currents and magnetic moments. Magnetism is one aspect of the combined phenomenon of electromagnetism. The most familiar effects occur in ferromagnetic materials, which are strongly attracted by magnetic fields and can be magnetized to become permanent magnets, producing magnetic fields themselves. Demagnetizing a magnet is also possible. Only a few substances are ferromagnetic; the most common ones are iron, cobalt and nickel and their alloys. The prefix "ferro-" refers to iron, because permanent magnetism was first observed in lodestone, a form of natural iron ore called magnetite, Fe3O4. All substances exhibit some type of magnetism. Ferromagnetism is responsible for most of the effects of magnetism encountered in everyday life, but there are actually several types of magnetism. Paramagnetic substances, such as aluminum and oxygen, are weakly attracted to an applied magnetic field; diamagnetic substances, such as copper and carbon, are weakly repelled; while antiferromagnetic materials, such as chromium and spin glasses, have a more complex relationship with a magnetic field. The force of a magnet on paramagnetic, diamagnetic, and antiferromagnetic materials is usually too weak to be felt and can be detected only by laboratory instruments, so in everyday life, these substances are often described as non-magnetic. The magnetic state (or magnetic phase) of a material depends on temperature, pressure, and the applied magnetic field. A material may exhibit more than one form of magnetism as these variables change. The strength of a magnetic field almost always decreases with distance, though the exact mathematical relationship between strength and distance varies. 
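For the weak forms of magnetism, the distinction comes down to the sign of the magnetic susceptibility χ: positive for paramagnets (weak attraction), negative for diamagnets (weak repulsion). The sketch below uses approximate, order-of-magnitude susceptibility values chosen for illustration (the specific numbers are assumptions, not from the text; ferromagnets have no single linear χ and are excluded):

```python
# Rough dimensionless SI volume susceptibilities, order of magnitude only.
SUSCEPTIBILITY = {
    "aluminium": +2.2e-5,  # paramagnetic: weakly attracted to the field
    "copper":    -9.6e-6,  # diamagnetic: weakly repelled
    "water":     -9.0e-6,  # diamagnetic
}

def classify(chi):
    """Classify a linear-response material by the sign of its susceptibility."""
    return "paramagnetic" if chi > 0 else "diamagnetic"

for name, chi in SUSCEPTIBILITY.items():
    print(f"{name}: {classify(chi)} (chi = {chi:+.1e})")
```

The tiny magnitudes (around 10^-5) are why these responses are imperceptible in everyday life, as the text notes.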
Different configurations of magnetic moments and electric currents can result in complicated magnetic fields. Only magnetic dipoles have been observed, although some theories predict the existence of magnetic monopoles. Magnetism was first discovered in the ancient world, when people noticed that lodestones, naturally magnetized pieces of the mineral magnetite, could attract iron. The word "magnet" comes from the Greek term μαγνῆτις λίθος "magnētis lithos", "the Magnesian stone, lodestone." In ancient Greece, Aristotle attributed what could be called the first scientific discussion of magnetism to the philosopher Thales of Miletus, who lived from about 625 BC to about 545 BC. The ancient Indian medical text "Sushruta Samhita" describes using magnetite to remove arrows embedded in a person's body. In ancient China, the earliest literary reference to magnetism lies in a 4th-century BC book named after its author, "The Sage of Ghost Valley". The 2nd-century BC annals, "Lüshi Chunqiu", also notes: "The lodestone makes iron approach, or it attracts it." The earliest mention of the attraction of a needle is in a 1st-century work "Lunheng" ("Balanced Inquiries"): "A lodestone attracts a needle." The 11th-century Chinese scientist Shen Kuo was the first person to write—in the "Dream Pool Essays"—of the magnetic needle compass and that it improved the accuracy of navigation by employing the astronomical concept of true north. By the 12th century, the Chinese were known to use the lodestone compass for navigation. They sculpted a directional spoon from lodestone in such a way that the handle of the spoon always pointed south. Alexander Neckam, by 1187, was the first in Europe to describe the compass and its use for navigation. In 1269, Peter Peregrinus de Maricourt wrote the "Epistola de magnete", the first extant treatise describing the properties of magnets. 
In 1282, the properties of magnets and the dry compasses were discussed by Al-Ashraf, a Yemeni physicist, astronomer, and geographer. Leonardo Garzoni's only extant work, the "Due trattati sopra la natura, e le qualità della calamita", is the first known example of a modern treatment of magnetic phenomena. Written around 1580 and never published, the treatise nevertheless circulated widely. In particular, Garzoni is referred to as an expert in magnetism by Niccolò Cabeo, whose Philosophia Magnetica (1629) is essentially a re-adjustment of Garzoni's work. Garzoni's treatise was known also to Giovanni Battista Della Porta and William Gilbert. In 1600, William Gilbert published his "De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure" ("On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth"). In this work he describes many of his experiments with his model earth, called the terrella. From his experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses pointed north (previously, some believed that it was the pole star (Polaris) or a large magnetic island on the north pole that attracted the compass). An understanding of the relationship between electricity and magnetism began in 1819 with work by Hans Christian Ørsted, a professor at the University of Copenhagen, who discovered, by the accidental twitching of a compass needle near a wire, that an electric current could create a magnetic field. This landmark experiment is known as Ørsted's Experiment. 
Several other experiments followed: André-Marie Ampère, who in 1820 discovered that the magnetic field circulating in a closed path was related to the current flowing through a surface enclosed by the path; Carl Friedrich Gauss; Jean-Baptiste Biot and Félix Savart, both of whom in 1820 came up with the Biot–Savart law giving an equation for the magnetic field from a current-carrying wire; Michael Faraday, who in 1831 found that a time-varying magnetic flux through a loop of wire induced a voltage; and others finding further links between magnetism and electricity. James Clerk Maxwell synthesized and expanded these insights into Maxwell's equations, unifying electricity, magnetism, and optics into the field of electromagnetism. In 1905, Albert Einstein used these laws in motivating his theory of special relativity, requiring that the laws held true in all inertial reference frames. Electromagnetism has continued to develop into the 21st century, being incorporated into the more fundamental theories of gauge theory, quantum electrodynamics, electroweak theory, and finally the standard model. Magnetism, at its root, arises from two sources: electric currents, and the intrinsic magnetic moments of elementary particles. The magnetic properties of materials are mainly due to the magnetic moments of their atoms' orbiting electrons. The magnetic moments of the nuclei of atoms are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. Nuclear magnetic moments are nevertheless very important in other contexts, particularly in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). Ordinarily, the enormous number of electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. 
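As an illustration of the Biot–Savart law mentioned above: integrating it around an infinitely long straight wire gives the standard textbook result B = μ0·I / (2πr). A minimal numeric sketch:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def field_straight_wire(current_a, distance_m):
    """Field magnitude a distance r from a long straight wire carrying
    current I -- the result of integrating the Biot-Savart law."""
    return MU_0 * current_a / (2 * math.pi * distance_m)

# A 10 A current measured 5 cm away gives about 4e-5 T,
# comparable in size to the Earth's magnetic field.
print(field_straight_wire(10, 0.05))
```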
This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments as a result of the Pauli exclusion principle (see "electron configuration"), and combining into filled subshells with zero net orbital motion. In both cases, the electrons preferentially adopt arrangements in which the magnetic moment of each electron is canceled by the opposite moment of another electron. Moreover, even when the electron configuration "is" such that there are unpaired electrons and/or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions so that the material will not be magnetic. Sometimes, either spontaneously or owing to an applied external magnetic field, each of the electron magnetic moments will be, on average, lined up. A suitable material can then produce a strong net magnetic field. The magnetic behavior of a material depends on its structure, particularly its electron configuration, for the reasons mentioned above, and also on the temperature. At high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment. Diamagnetism appears in all materials and is the tendency of a material to oppose an applied magnetic field, and therefore, to be repelled by a magnetic field. However, in a material with paramagnetic properties (that is, with a tendency to enhance an external magnetic field), the paramagnetic behavior dominates. Thus, despite its universal occurrence, diamagnetic behavior is observed only in a purely diamagnetic material. In a diamagnetic material, there are no unpaired electrons, so the intrinsic electron magnetic moments cannot produce any bulk effect. 
In these cases, the magnetization arises from the electrons' orbital motions. This can be understood classically only as a heuristic, however: the Bohr-van Leeuwen theorem shows that diamagnetism is impossible according to classical physics, and that a proper understanding requires a quantum-mechanical description. All materials undergo this orbital response. However, in paramagnetic and ferromagnetic substances, the diamagnetic effect is overwhelmed by the much stronger effects caused by the unpaired electrons. In a paramagnetic material there are "unpaired electrons"; i.e., atomic or molecular orbitals with exactly one electron in them. While paired electrons are required by the Pauli exclusion principle to have their intrinsic ('spin') magnetic moments pointing in opposite directions, causing their magnetic fields to cancel out, an unpaired electron is free to align its magnetic moment in any direction. When an external magnetic field is applied, these magnetic moments will tend to align themselves in the same direction as the applied field, thus reinforcing it. A ferromagnet, like a paramagnetic substance, has unpaired electrons. However, in addition to the electrons' intrinsic magnetic moment's tendency to be parallel to an applied field, there is also in these materials a tendency for these magnetic moments to orient parallel to each other to maintain a lowered-energy state. Thus, even in the absence of an applied field, the magnetic moments of the electrons in the material spontaneously line up parallel to one another. Every ferromagnetic substance has its own individual temperature, called the Curie temperature, or Curie point, above which it loses its ferromagnetic properties. This is because the thermal tendency to disorder overwhelms the energy-lowering due to ferromagnetic order. Ferromagnetism only occurs in a few substances; common ones are iron, nickel, cobalt, their alloys, and some alloys of rare-earth metals. 
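Above the Curie point, the competition between thermal disorder and ferromagnetic ordering just described is summarized by the Curie–Weiss law for the susceptibility (a standard result supplied here for context, not stated in the text above):

```latex
\chi(T) \;=\; \frac{C}{T - T_{\mathrm{C}}}, \qquad T > T_{\mathrm{C}},
```

where C is a material-specific Curie constant and T_C is the Curie temperature. The divergence of χ as T approaches T_C from above marks the onset of spontaneous magnetization; below T_C the material orders ferromagnetically even with no applied field.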
The magnetic moments of atoms in a ferromagnetic material cause them to behave something like tiny permanent magnets. They stick together and align themselves into small regions of more or less uniform alignment called magnetic domains or Weiss domains. Magnetic domains can be observed with a magnetic force microscope to reveal magnetic domain boundaries that resemble white lines in the sketch. There are many scientific experiments that can physically show magnetic fields. When a domain contains too many molecules, it becomes unstable and divides into two domains aligned in opposite directions, so that they stick together more stably, as shown at the right. When exposed to a magnetic field, the domain boundaries move, so that the domains aligned with the magnetic field grow and dominate the structure (dotted yellow area), as shown at the left. When the magnetizing field is removed, the domains may not return to an unmagnetized state. This results in the ferromagnetic material being magnetized, forming a permanent magnet. When magnetized strongly enough that the prevailing domain overruns all others to result in only one single domain, the material is magnetically saturated. When a magnetized ferromagnetic material is heated to the Curie point temperature, the molecules are agitated to the point that the magnetic domains lose their organization, and the magnetic properties they cause cease. When the material is cooled, this domain alignment structure spontaneously returns, in a manner roughly analogous to how a liquid can freeze into a crystalline solid. In an antiferromagnet, unlike a ferromagnet, there is a tendency for the intrinsic magnetic moments of neighboring valence electrons to point in "opposite" directions. When all atoms are arranged in a substance so that each neighbor is anti-parallel, the substance is antiferromagnetic. Antiferromagnets have a zero net magnetic moment, meaning that no field is produced by them. 
Antiferromagnets are less common than the other types of magnetic behavior and are mostly observed at low temperatures. At varying temperatures, antiferromagnets can be seen to exhibit diamagnetic and ferromagnetic properties. In some materials, neighboring electrons prefer to point in opposite directions, but there is no geometrical arrangement in which "each" pair of neighbors is anti-aligned. This is called a spin glass and is an example of geometrical frustration. Like ferromagnets, ferrimagnets retain their magnetization in the absence of a field. However, like antiferromagnets, neighboring pairs of electron spins tend to point in opposite directions. These two properties are not contradictory, because in the optimal geometrical arrangement, there is more magnetic moment from the sublattice of electrons that point in one direction than from the sublattice that points in the opposite direction. Most ferrites are ferrimagnetic. The first discovered magnetic substance, magnetite, is a ferrite and was originally believed to be a ferromagnet; Louis Néel disproved this, however, after discovering ferrimagnetism. When a ferromagnet or ferrimagnet is sufficiently small, it acts like a single magnetic spin that is subject to Brownian motion. Its response to a magnetic field is qualitatively similar to the response of a paramagnet, but much larger. An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. The magnetic field disappears when the current is turned off. Electromagnets usually consist of a large number of closely spaced turns of wire that create the magnetic field. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet. 
The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet that needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field. Electromagnets are widely used as components of other electrical devices, such as motors, generators, relays, solenoids, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel. Electromagnetism was discovered in 1820. As a consequence of Einstein's theory of special relativity, electricity and magnetism are fundamentally interlinked. Both magnetism lacking electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity. In particular, a phenomenon that appears purely electric or purely magnetic to one observer may be a mix of both to another, or more generally the relative contributions of electricity and magnetism are dependent on the frame of reference. Thus, special relativity "mixes" electricity and magnetism into a single, inseparable phenomenon called electromagnetism, analogous to how relativity "mixes" space and time into spacetime. All observations on electromagnetism apply to what might be considered to be primarily magnetism, e.g. perturbations in the magnetic field are necessarily accompanied by a nonzero electric field, and propagate at the speed of light. In a vacuum, B = μ0H, where μ0 is the vacuum permeability. In a material, B = μ0(H + M). The quantity μ0M is called "magnetic polarization". 
If the field H is small, the response of the magnetization M in a diamagnet or paramagnet is approximately linear: M = χH, the constant of proportionality χ being called the magnetic susceptibility. If so, B = μ0(1 + χ)H. In a hard magnet such as a ferromagnet, M is not proportional to the field and is generally nonzero even when H is zero (see Remanence). The phenomenon of magnetism is "mediated" by the magnetic field. An electric current or magnetic dipole creates a magnetic field, and that field, in turn, imparts magnetic forces on other particles that are in the fields. Maxwell's equations, which simplify to the Biot–Savart law in the case of steady currents, describe the origin and behavior of the fields that govern these forces. Therefore, magnetism is seen whenever electrically charged particles are in motion—for example, from movement of electrons in an electric current, or in certain cases from the orbital motion of electrons around an atom's nucleus. Magnetic fields also arise from "intrinsic" magnetic dipoles arising from quantum-mechanical spin. The same situations that create magnetic fields—charge moving in a current or in an atom, and intrinsic magnetic dipoles—are also the situations in which a magnetic field has an effect, creating a force. Following is the formula for a moving charge; for the forces on an intrinsic dipole, see magnetic dipole. When a charged particle moves through a magnetic field B, it feels a Lorentz force F given by the cross product F = qv × B, where q is the electric charge of the particle, v is its velocity vector, and B is the magnetic field. Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. It follows that the magnetic force does no work on the particle; it may change the direction of the particle's movement, but it cannot cause it to speed up or slow down. The magnitude of the force is F = qvB sin θ, where θ is the angle between v and B. 
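The Lorentz force F = qv × B can be made concrete with a short sketch (plain Python, no external libraries; the particle and field values are illustrative):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, v, B):
    """Magnetic part of the Lorentz force, F = q v x B."""
    return tuple(q * c for c in cross(v, B))

# A proton-like charge (q ~ 1.6e-19 C) moving at 1e5 m/s along x
# through a 1 T field along z: the force points along -y,
# perpendicular to both v and B, as the cross product requires.
F = lorentz_force(1.6e-19, (1e5, 0, 0), (0, 0, 1.0))
print(F)
```

Because F is always perpendicular to v, the dot product F·v is zero, which is the statement that the magnetic force does no work.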
One tool for determining the direction of the velocity vector of a moving charge, the magnetic field, and the force exerted is labeling the index finger "V", the middle finger "B", and the thumb "F" with your right hand. When making a gun-like configuration, with the middle finger crossing under the index finger, the fingers represent the velocity vector, magnetic field vector, and force vector, respectively. See also right-hand rule. A very common source of magnetic field found in nature is a dipole, with a "South pole" and a "North pole", terms dating back to the use of magnets as compasses, interacting with the Earth's magnetic field to indicate North and South on the globe. Since opposite poles of magnets attract, the north pole of a magnet is attracted to the south pole of another magnet. The Earth's North Magnetic Pole (currently in the Arctic Ocean, north of Canada) is physically a south pole, as it attracts the north pole of a compass. A magnetic field contains energy, and physical systems move toward configurations with lower energy. When diamagnetic material is placed in a magnetic field, a "magnetic dipole" tends to align itself in opposed polarity to that field, thereby lowering the net field strength. When ferromagnetic material is placed within a magnetic field, the magnetic dipoles align with the applied field, thus expanding the magnetic domains aligned with it. Since a bar magnet gets its ferromagnetism from electrons distributed evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces is a smaller bar magnet. Even though a magnet is said to have a north pole and a south pole, these two poles cannot be separated from each other. A monopole—if such a thing exists—would be a new and fundamentally different kind of magnetic object. It would act as an isolated north pole, not attached to a south pole, or vice versa. Monopoles would carry "magnetic charge" analogous to electric charge. 
Despite systematic searches since 1931, they have never been observed, and could very well not exist. Nevertheless, some theoretical physics models predict the existence of these magnetic monopoles. Paul Dirac observed in 1931 that, because electricity and magnetism show a certain symmetry, just as quantum theory predicts that individual positive or negative electric charges can be observed without the opposing charge, isolated south or north magnetic poles should be observable. Using quantum theory, Dirac showed that if magnetic monopoles exist, then one could explain the quantization of electric charge—that is, why the observed elementary particles carry charges that are multiples of the charge of the electron. Certain grand unified theories predict the existence of monopoles which, unlike elementary particles, are solitons (localized energy packets). The initial results of using these models to estimate the number of monopoles created in the Big Bang contradicted cosmological observations—the monopoles would have been so plentiful and massive that they would have long since halted the expansion of the universe. However, the idea of inflation (for which this problem served as a partial motivation) was successful in solving this problem, creating models in which monopoles existed but were rare enough to be consistent with current observations. Some organisms can detect magnetic fields, a phenomenon known as magnetoception. Some materials in living things are ferromagnetic, though it is unclear if the magnetic properties serve a special function or are merely a byproduct of containing iron. For instance, chitons, a type of marine mollusk, produce magnetite to harden their teeth, and even humans produce magnetite in bodily tissue. Magnetobiology studies the effects of magnetic fields on living organisms; fields naturally produced by an organism are known as biomagnetism. 
Many biological organisms are mostly made of water, and because water is diamagnetic, extremely strong magnetic fields can repel these living things. While heuristic explanations based on classical physics can be formulated, diamagnetism, paramagnetism and ferromagnetism can be fully explained only using quantum theory. A successful model was developed already in 1927 by Walter Heitler and Fritz London, who derived, quantum-mechanically, how hydrogen molecules are formed from hydrogen atoms, i.e. from the atomic hydrogen orbitals u_A(r) and u_B(r) centered at the nuclei "A" and "B"; see below. That this leads to magnetism is not at all obvious, but will be explained in the following. According to the Heitler–London theory, so-called two-body molecular σ-orbitals are formed; the resulting orbital is ψ(r1, r2) = (1/√2)[u_A(r1) u_B(r2) ± u_B(r1) u_A(r2)]. Here the last product means that a first electron, r1, is in an atomic hydrogen orbital centered at the second nucleus, whereas the second electron runs around the first nucleus. This "exchange" phenomenon is an expression of the quantum-mechanical property that particles with identical properties cannot be distinguished. It is specific not only for the formation of chemical bonds, but also for magnetism. That is, in this connection the term exchange interaction arises, a term which is essential for the origin of magnetism, and which is stronger, roughly by factors of 100 and even 1000, than the energies arising from the electrodynamic dipole-dipole interaction. As for the "spin function" χ(s1, s2), which is responsible for the magnetism, we have the already-mentioned Pauli principle, namely that a symmetric orbital (i.e. with the + sign as above) must be multiplied with an antisymmetric spin function (i.e. with a − sign), and "vice versa". 
Thus the antisymmetric spin function is χ(s1, s2) = [α(s1)β(s2) − β(s1)α(s2)]/√2. I.e., not only must uA and uB be substituted by α and β, respectively (the first entity means "spin up", the second one "spin down"), but also the sign + by the − sign, and finally the continuous variables ri by the discrete values si (= ±½); thereby we have α(+½) = β(−½) = 1 and α(−½) = β(+½) = 0. The "singlet state", i.e. the − sign, means: the spins are "antiparallel", i.e. for the solid we have antiferromagnetism, and for diatomic molecules one has diamagnetism. The tendency to form a (homoeopolar) chemical bond (this means: the formation of a "symmetric" molecular orbital, i.e. with the + sign) results through the Pauli principle automatically in an "antisymmetric" spin state (i.e. with the − sign). In contrast, the Coulomb repulsion of the electrons, i.e. the tendency that they try to avoid each other by this repulsion, would lead to an "antisymmetric" orbital function (i.e. with the − sign) of these two particles, and complementary to a "symmetric" spin function (i.e. with the + sign, one of the so-called "triplet functions"). Thus, now the spins would be "parallel" (ferromagnetism in a solid, paramagnetism in diatomic gases). The last-mentioned tendency dominates in the metals iron, cobalt and nickel, and in some rare earths, which are "ferromagnetic". Most of the other metals, where the first-mentioned tendency dominates, are "nonmagnetic" (e.g. sodium, aluminium, and magnesium) or "antiferromagnetic" (e.g. manganese). Diatomic gases are also almost exclusively diamagnetic, and not paramagnetic. However, the oxygen molecule, because of the involvement of π-orbitals, is an exception important for the life-sciences. The Heitler–London considerations can be generalized to the Heisenberg model of magnetism (Heisenberg 1928). The explanation of the phenomena is thus essentially based on all subtleties of quantum mechanics, whereas electrodynamics covers mainly the phenomenology.
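In symbols, the orbital/spin pairing described above can be summarized as follows (a sketch in standard Heitler–London notation, with uA, uB the atomic orbitals, α, β the spin-up and spin-down states, and overlap corrections to the normalization omitted):

```latex
\psi_{\pm}(\mathbf{r}_1,\mathbf{r}_2)
  = \frac{1}{\sqrt{2}}\left[u_A(\mathbf{r}_1)\,u_B(\mathbf{r}_2)
      \pm u_B(\mathbf{r}_1)\,u_A(\mathbf{r}_2)\right],
\qquad
\chi_{\mp}(s_1,s_2)
  = \frac{1}{\sqrt{2}}\left[\alpha(s_1)\,\beta(s_2)
      \mp \beta(s_1)\,\alpha(s_2)\right],
```

where the Pauli principle pairs the symmetric orbital ψ+ with the antisymmetric singlet spin function (antiparallel spins: diamagnetism, or antiferromagnetism in a solid), and the antisymmetric orbital ψ− with a symmetric triplet spin function (parallel spins: paramagnetism, or ferromagnetism in a solid).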
https://en.wikipedia.org/wiki?curid=19716
Filter (mathematics) In mathematics, a filter is a special subset of a partially ordered set. Filters appear in order and lattice theory, but can also be found in topology, from where they originate. The dual notion of a filter is an order ideal. Filters were introduced by Henri Cartan in 1937 and subsequently used by Bourbaki in their book "Topologie Générale" as an alternative to the similar notion of a net developed in 1922 by E. H. Moore and H. L. Smith. Intuitively, a filter in a partially ordered set ("poset"), "P", is a subset of "P" that includes as members those elements that are large enough to satisfy some given criterion. For example, if "x" is an element of the poset, then the set of elements that are above "x" is a filter, called the principal filter at "x". (If "x" and "y" are incomparable elements of the poset, then neither of the principal filters at "x" and "y" is contained in the other one, and conversely.) Similarly, a filter on a set contains those subsets that are sufficiently large to contain some given "thing". For example, if the set is the real line and "x" is one of its points, then the family of sets that include "x" in their interior is a filter, called the filter of neighbourhoods of "x". The "thing" in this case is slightly larger than "x", but it still doesn't contain any other specific point of the line. The above interpretations explain conditions 1 and 3 in the section General definition: Clearly the empty set is not "large enough", and clearly the collection of "large enough" things should be "upward-closed". However, they do not really, without elaboration, explain condition 2 of the general definition. For, why should two "large enough" things contain a "common" "large enough" thing? Alternatively, a filter can be viewed as a "locating scheme": When trying to locate something (a point or a subset) in the space "X", call a filter the collection of subsets of "X" that might contain "what is looked for". 
Then this "filter" should possess the following natural structure: An ultrafilter can be viewed as a "perfect locating scheme" where "each" subset "E" of the space "X" can be used in deciding whether "what is looked for" might lie in "E". From this interpretation, compactness (see the mathematical characterization below) can be viewed as the property that "no location scheme can end up with nothing", or, to put it another way, "always something will be found". The mathematical notion of filter provides a precise language to treat these situations in a rigorous and general way, which is useful in analysis, general topology and logic. A subset "F" of a partially ordered set ("P", ≤) is a filter if the following conditions hold: (1) "F" is non-empty; (2) for every "x", "y" in "F", there is some element "z" in "F" such that "z" ≤ "x" and "z" ≤ "y"; and (3) for every "x" in "F" and "y" in "P", "x" ≤ "y" implies that "y" is in "F". A filter is proper if it is not equal to the whole set "P". This condition is sometimes added to the definition of a filter. While the above definition is the most general way to define a filter for arbitrary posets, it was originally defined for lattices only. In this case, the above definition can be characterized by the following equivalent statement: A subset "F" of a lattice ("P", ≤) is a filter, if and only if it is a non-empty upper set that is closed under finite infima (or meets), i.e., for all "x", "y" in "F", it is also the case that "x" ∧ "y" is in "F". A subset "S" of "F" is a filter basis if the upper set generated by "S" is all of "F". The smallest filter that contains a given element "p" is a principal filter and "p" is a principal element in this situation. The principal filter for "p" is just given by the set {"x" in "P" : "p" ≤ "x"} and is denoted by prefixing "p" with an upward arrow: ↑"p". The dual notion of a filter, i.e. the concept obtained by reversing all ≤ and exchanging ∧ with ∨, is ideal. Because of this duality, the discussion of filters usually boils down to the discussion of ideals. 
Hence, most additional information on this topic (including the definition of maximal filters and prime filters) is to be found in the article on ideals. There is a separate article on ultrafilters. A special case of a filter is a filter defined on a set. Given a set "S", a partial ordering ⊆ can be defined on the powerset P("S") by subset inclusion, turning (P("S"), ⊆) into a lattice. Define a filter "F" on "S" as a non-empty subset of P("S") with the following properties: (1) if "A" and "B" are in "F", then so is their intersection "A" ∩ "B"; and (2) if "A" is in "F" and "A" ⊆ "B" ⊆ "S", then "B" is in "F". With this definition, a filter on a set is indeed a filter. The second property entails that "S" is in "F" (since "F" is a non-empty subset of P("S")). If the empty set is not in "F", we say "F" is a proper filter. Property 1 implies that a proper filter on a set has the finite intersection property. The only nonproper filter on "S" is P("S"). A filter base (or filter basis) is a subset "B" of P("S") with the properties that "B" is non-empty and the intersection of any two members of "B" includes (as a subset) a member of "B" ("B" is closed under finite intersection). If the empty set is not a member of "B", we say "B" is a proper filter base. Given a filter base "B", the filter generated or spanned by "B" is defined as the minimum filter containing "B". It is the family of all the subsets of "S" that include a member of "B". Every filter is also a filter base, so the process of passing from filter base to filter may be viewed as a sort of completion. If "B" and "C" are two filter bases on "S", one says "C" is finer than "B" (or that "C" is a refinement of "B") if for each "B"0 ∈ "B", there is a "C"0 ∈ "C" such that "C"0 ⊆ "B"0. If also "B" is finer than "C", one says that they are equivalent filter bases. For every subset "T" of P("S") there is a smallest (possibly nonproper) filter "F" containing "T", called the filter generated or spanned by "T". It is constructed by taking all finite intersections of "T", which then form a filter base for "F". 
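On a finite set these definitions can be checked mechanically. The following Python sketch (function names are illustrative, not from any standard library) generates the filter spanned by a filter base and verifies the filter axioms:

```python
from itertools import combinations

def powerset(s):
    """All subsets of a finite set s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def generated_filter(base, S):
    """Filter on the finite set S generated by a filter base:
    all subsets of S that include some member of the base."""
    return {A for A in powerset(S) if any(B <= A for B in base)}

def is_filter(F, S):
    """Check the set-filter axioms on a finite set S: non-empty,
    closed under binary intersection, and upward closed."""
    P = set(powerset(S))
    return (len(F) > 0
            and all(A & B in F for A in F for B in F)          # intersections
            and all(B in F for A in F for B in P if A <= B))   # upward closed

S = {1, 2, 3, 4}
base = [frozenset({2, 3}), frozenset({2, 3, 4})]   # a filter base on S
F = generated_filter(base, S)
print(sorted(map(sorted, F)))  # [[1, 2, 3], [1, 2, 3, 4], [2, 3], [2, 3, 4]]
print(is_filter(F, S))         # True
```

Here the generated filter is exactly the principal filter at {2, 3}: every member of the base already includes {2, 3}, so the "completion" step adds nothing beyond its supersets.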
This filter is proper if and only if every finite intersection of elements of "T" is non-empty, and in that case we say that "T" is a filter subbase. For every filter "F" on a set "S", the set function "m" defined by "m"("A") = 1 if "A" ∈ "F", "m"("A") = 0 if "S" ∖ "A" ∈ "F", and undefined otherwise is finitely additive—a "measure" if that term is construed rather loosely. Therefore, the statement that {"x" ∈ "S" : φ("x")} belongs to "F" can be considered somewhat analogous to the statement that φ holds "almost everywhere". That interpretation of membership in a filter is used (for motivation, although it is not needed for actual "proofs") in the theory of ultraproducts in model theory, a branch of mathematical logic. In topology and analysis, filters are used to define convergence in a manner similar to the role of sequences in a metric space. In topology and related areas of mathematics, a filter is a generalization of a net. Both nets and filters provide very general contexts to unify the various notions of limit to arbitrary topological spaces. A sequence is usually indexed by the natural numbers, which are a totally ordered set. Thus, limits in first-countable spaces can be described by sequences. However, if the space is not first-countable, nets or filters must be used. Nets generalize the notion of a sequence by requiring the index set simply to be a directed set. Filters can be thought of as sets built from multiple nets. Therefore, both the limit of a filter and the limit of a net are conceptually the same as the limit of a sequence. Let "X" be a topological space and "x" a point of "X". Indeed: (i) implies (ii): if "F" is a filter base satisfying the properties of (i), then the filter associated to "F" satisfies the properties of (ii). (ii) implies (iii): if "U" is any open neighborhood of "x" then by the definition of convergence "U" contains an element of "F"; since also "Y" is an element of "F", "U" and "Y" have non-empty intersection. (iii) implies (i): Define "F" = {"U" ∩ "Y" : "U" an open neighborhood of "x"}. 
Then "F" is a filter base satisfying the properties of (i). Let "X" be a topological space and "x" a point of "X". Let "X" be a topological space. Let "X", "Y" be topological spaces. Let "B" be a filter base on "X" and "f": "X" → "Y" be a function. The image of "B" under "f" is defined as the set {"f"("B"0) : "B"0 ∈ "B"}. The image is denoted "f"["B"] and forms a filter base on "Y". Let ("X", "d") be a metric space.
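The image construction can likewise be illustrated on finite sets. In this Python sketch (names are illustrative), the image of a filter base under a function is computed and checked to be a filter base again:

```python
def image_base(f, base):
    """Image of a filter base under f: { f[B0] : B0 in base }."""
    return {frozenset(f(x) for x in B0) for B0 in base}

def is_filter_base(base):
    """Non-empty, and the intersection of any two members
    includes (as a subset) a member of the base."""
    return (len(base) > 0 and
            all(any(C <= A & B for C in base) for A in base for B in base))

f = {1: "a", 2: "b", 3: "b"}.get                    # a function X -> Y
base = [frozenset({2, 3}), frozenset({1, 2, 3})]    # filter base on X = {1,2,3}
img = image_base(f, base)
print(sorted(map(sorted, img)))  # [['a', 'b'], ['b']]
print(is_filter_base(img))       # True
```

Note that even though f is not injective, the images of the base members still pairwise intersect in a member of the image family, which is all the filter-base condition requires.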
https://en.wikipedia.org/wiki?curid=19719
Metallurgy Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are called alloys. Metallurgy encompasses both the science and the technology of metals. That is, the way in which science is applied to the production of metals, and the engineering of metal components used in products for both consumers and manufacturers. Metallurgy is distinct from the craft of metalworking. Metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement. A specialist practitioner of metallurgy is known as a metallurgist. The science of metallurgy is subdivided into two broad categories: chemical metallurgy and physical metallurgy. Chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. Subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion). In contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. Topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. Historically, metallurgy has predominantly focused on the production of metals. Metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. Metal alloys are often a blend of at least two different metallic elements. However, non-metallic elements are often added to alloys in order to achieve properties suitable for an application. The study of metal production is subdivided into ferrous metallurgy (also known as "black metallurgy") and non-ferrous metallurgy (also known as "colored metallurgy"). 
Ferrous metallurgy involves processes and alloys based on iron while non-ferrous metallurgy involves processes and alloys based on other metals. The production of ferrous metals accounts for 95 percent of world metal production. Modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. Some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals (including welding, brazing, and soldering). Emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials (semiconductors), and surface engineering. "Metallurgy" derives from the Ancient Greek "metallourgós", "worker in metal", from "métallon", "mine, metal" + "érgon", "work". The word was originally an alchemist's term for the extraction of metals from minerals, the ending "-urgy" signifying a process, especially manufacturing: it was discussed in this sense in the 1797 "Encyclopædia Britannica". In the late 19th century it was extended to the more general scientific study of metals, alloys, and related processes. In English, the pronunciation with stress on the second syllable is the more common one in the UK and Commonwealth. The pronunciation with stress on the first syllable is the more common one in the US, and is the first-listed variant in various American dictionaries (e.g., "Merriam-Webster Collegiate", "American Heritage"). The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves dating to the late Paleolithic period, c. 40,000 BC. Silver, copper, tin and meteoric iron can also be found in native form, allowing a limited amount of metalworking in early cultures. Egyptian weapons made from meteoric iron in about 3000 BC were highly prized as "daggers from heaven". 
Certain metals, notably tin, lead, and at a higher temperature, copper, can be recovered from their ores by simply heating the rocks in a fire or blast furnace, a process known as smelting. The first evidence of this extractive metallurgy, dating from the 5th and 6th millennia BC, has been found at archaeological sites in Majdanpek, Jarmovac near Priboj and Pločnik, in present-day Serbia. To date, the earliest evidence of copper smelting is found at the Belovode site near Plocnik. This site produced a copper axe from 5500 BC, belonging to the Vinča culture. The earliest use of lead is documented from the late neolithic settlement of Yarim Tepe in Iraq: "The earliest lead (Pb) finds in the ancient Near East are a 6th millennium BC bangle from Yarim Tepe in northern Iraq and a slightly later conical lead piece from Halaf period Arpachiyah, near Mosul. As native lead is extremely rare, such artifacts raise the possibility that lead smelting may have begun even before copper smelting." Copper smelting is also documented at this site at about the same time period (soon after 6000 BC), although the use of lead seems to precede copper smelting. Early metallurgy is also documented at the nearby site of Tell Maghzaliyah, which seems to be dated even earlier, and completely lacks pottery. The Balkans were the site of major Neolithic cultures, including Butmir, Vinča, Varna, Karanovo, and Hamangia. The Varna Necropolis, Bulgaria, is a burial site in the western industrial zone of Varna (approximately 4 km from the city centre), internationally considered one of the key archaeological sites in world prehistory. The oldest gold treasure in the world, dating from 4,600 BC to 4,200 BC, was discovered at the site. The gold piece dating from 4,500 BC, recently found in Durankulak, near Varna, is another important example. Other signs of early metals are found from the third millennium BC in places like Palmela (Portugal), Los Millares (Spain), and Stonehenge (United Kingdom). 
However, the ultimate beginnings cannot be clearly ascertained, and new discoveries are ongoing. In the Near East, about 3500 BC, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. This represented a major technological shift known as the Bronze Age. The extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. The process appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines. Historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. This includes the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia (Turkey), Ancient Nok, Carthage, the Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, ancient and medieval Japan, amongst others. Many applications, practices, and devices associated with or involved in metallurgy were established in ancient China, such as the innovation of the blast furnace, cast iron, hydraulic-powered trip hammers, and double-acting piston bellows. A 16th-century book by Georg Agricola called "De re metallica" describes the highly developed and complex processes of mining metal ores, metal extraction and metallurgy of the time. Agricola has been described as the "father of metallurgy". Extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. In order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. Extractive metallurgists are interested in three primary streams: feed, concentrate (valuable metal oxide/sulphide) and tailings (waste). 
After mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough that each particle is either mostly valuable or mostly waste. Concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. Mining may not be necessary if the ore body and physical environment are conducive to leaching. Leaching dissolves minerals in an ore body and results in an enriched solution. The solution is collected and processed to extract valuable metals. Ore bodies often contain more than one valuable metal. Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents. Common engineering metals include aluminium, chromium, copper, iron, magnesium, nickel, titanium, zinc, and silicon. These metals are most often used as alloys with the noted exception of silicon. Much effort has been placed on understanding the iron-carbon alloy system, which includes steels and cast irons. Plain carbon steels (those that contain essentially only carbon as an alloying element) are used in low-cost, high-strength applications where neither weight nor corrosion is a major concern. Cast irons, including ductile iron, are also part of the iron-carbon system. Stainless steel, particularly austenitic stainless steels, galvanized steel, titanium alloys, or occasionally copper alloys are used where resistance to corrosion is important. Aluminium alloys and magnesium alloys are commonly used when a lightweight strong part is required such as in automotive and aerospace applications. Copper-nickel alloys (such as Monel) are used in highly corrosive environments and for non-magnetic applications. 
Iron-manganese-chromium alloys (Hadfield-type steels) are also used in non-magnetic applications such as directional drilling. Nickel-based superalloys like Inconel are used in high-temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. For extremely high temperatures, single-crystal alloys are used to minimize creep. In modern electronics, high-purity single-crystal silicon is essential for metal-oxide-semiconductor (MOS) transistors and integrated circuits. In production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. This involves the production of alloys, the shaping, the heat treatment and the surface treatment of the product. Determining the hardness of the metal using the Rockwell, Vickers, and Brinell hardness scales is a commonly used practice that helps better understand the metal's elasticity and plasticity for different applications and production processes. The task of the metallurgist is to achieve balance between material properties such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance, and performance in temperature extremes. To achieve this goal, the operating environment must be carefully considered. In a saltwater environment, most ferrous metals and some non-ferrous alloys corrode quickly. Metals exposed to cold or cryogenic conditions may undergo a ductile-to-brittle transition and lose their toughness, becoming more brittle and prone to cracking. Metals under continual cyclic loading can suffer from metal fatigue. Metals under constant stress at elevated temperatures can creep. Metals are shaped by processes such as: Cold-working processes, in which the product's shape is altered by rolling, fabrication or other processes while the product is cold, can increase the strength of the product by a process called work hardening. 
Work hardening creates microscopic defects in the metal, which resist further changes of shape. Various forms of casting exist in industry and academia. These include sand casting, investment casting (also called the lost wax process), die casting, and continuous casting. Each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion. Metals can be heat-treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. Common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering. The annealing process softens the metal by heating it and then allowing it to cool very slowly, which gets rid of stresses in the metal and makes the grain structure large and soft-edged so that when the metal is hit or stressed it dents or perhaps bends, rather than breaking; it is also easier to sand, grind, or cut annealed metal. Quenching is the process of cooling a high-carbon steel very quickly after heating, thus "freezing" the steel's molecules in the very hard martensite form, which makes the metal harder. There is a balance between hardness and toughness in any steel; the harder the steel, the less tough or impact-resistant it is, and the more impact-resistant it is, the less hard it is. Tempering relieves stresses in the metal that were caused by the hardening process; tempering makes the metal less hard while making it better able to sustain impacts without breaking. Often, mechanical and thermal treatments are combined in what are known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high-alloy special steels, superalloys and titanium alloys. Electroplating is a chemical surface-treatment technique. It involves bonding a thin layer of another metal such as gold, silver, chromium or zinc to the surface of the product. 
This is done by selecting an electrolyte solution containing the coating material, which is the material that is going to coat the workpiece (gold, silver, zinc). There need to be two electrodes of different materials: one the same material as the coating material and one that is receiving the coating material. The two electrodes are electrically charged, and the coating material is deposited onto the workpiece. Electroplating is used to reduce corrosion as well as to improve the product's aesthetic appearance. It is also used to make inexpensive metals look like the more expensive ones (gold, silver). Shot peening is a cold working process used to finish metal parts. In the process of shot peening, small round shot is blasted against the surface of the part to be finished. This process is used to prolong the product life of the part, prevent stress corrosion failures, and also prevent fatigue. The shot leaves small dimples on the surface like a peen hammer does, which cause compression stress under the dimple. As the shot media strikes the material over and over, it forms many overlapping dimples throughout the piece being treated. The compression stress in the surface of the material strengthens the part and makes it more resistant to fatigue failure, stress failures, corrosion failure, and cracking. Thermal spraying techniques are another popular finishing option, and often have better high-temperature properties than electroplated coatings. Thermal spraying, also known as a spray welding process, is an industrial coating process that consists of a heat source (flame or other) and a coating material that can be in a powder or wire form which is melted then sprayed on the surface of the material being treated at a high velocity. The spray treating process is known by many different names such as HVOF (High Velocity Oxygen Fuel), plasma spray, flame spray, arc spray, and metalizing. 
Metallurgists study the microscopic and macroscopic structure of metals using metallography, a technique invented by Henry Clifton Sorby. In metallography, an alloy of interest is ground flat and polished to a mirror finish. The sample can then be etched to reveal the microstructure and macrostructure of the metal. The sample is then examined in an optical or electron microscope, and the image contrast provides details on the composition, mechanical properties, and processing history. Crystallography, often using diffraction of x-rays or electrons, is another valuable tool available to the modern metallurgist. Crystallography allows identification of unknown materials and reveals the crystal structure of the sample. Quantitative crystallography can be used to calculate the amount of phases present as well as the degree of strain to which a sample has been subjected.
https://en.wikipedia.org/wiki?curid=19722
MUMPS MUMPS ("Massachusetts General Hospital Utility Multi-Programming System"), or M, is a general-purpose computer programming language originally designed in 1966 for the healthcare industry. Its differentiating feature is its "built-in" database, enabling high-level access to disk storage using simple symbolic program variables and subscripted arrays, similar to the variables used by most languages to access main memory. It continues to be used today by many large hospitals and banks to provide high-throughput transaction data processing. MUMPS was developed by Neil Pappalardo, Robert Greenes, and Curt Marble in Dr. Octo Barnett's animal lab at the Massachusetts General Hospital (MGH) in Boston during 1966 and 1967. It was later rewritten by technical leaders Dennis "Dan" Brevik and Paul Stylos of DEC in 1970 and 1971. Evelyn Dow was the original marketing representative. The original MUMPS system was, like Unix a few years later, built on a spare DEC PDP-7. Octo Barnett and Neil Pappalardo were also involved with MGH's planning for a Hospital Information System, obtained a backward-compatible PDP-9, and began using MUMPS in the admissions cycle and laboratory test reporting. MUMPS was then an interpreted language, yet even then it incorporated a hierarchical database file system to standardize interaction with the data and abstract disk operations so that they were performed only by the MUMPS language itself. Some aspects of MUMPS can be traced from Rand Corporation's JOSS through BBN's TELCOMP and STRINGCOMP. The MUMPS team deliberately chose to include portability between machines as a design goal. Another feature, not widely supported by machines, operating systems, or computer hardware of the era, was multitasking, which was also built into the language itself, as evidenced by the "-MPS" in the name: "MUMPS" stands for Multi-Programming System. 
Although timesharing on mainframe computers was increasingly common in systems such as Multics, most minicomputers (the smallest machines available at the time) did not run parallel programs, and threading was not available at all. Even on mainframes, multi-programming most commonly meant a variant of batch processing in which each program was run to completion. It was a few years until Unix was developed. The lack of memory management hardware also meant that all multi-processing was fraught with the possibility that a memory pointer could corrupt another process. MUMPS programs have no standard way to refer to memory directly at all, in contrast to the C language; because multitasking was enforced by the language, not by any program written in the language, it was impossible to have the risk that existed for other systems. Dan Brevik's DEC MUMPS-15 system was adapted to a DEC PDP-15, where it lived for some time. It was first installed at Health Data Management Systems of Denver in May 1971. The portability proved to be useful: MUMPS was awarded a government research grant, and so MUMPS was released into the public domain, which was a requirement for grants. MUMPS was soon ported to a number of other systems including the popular DEC PDP-8, the Data General Nova, and the DEC PDP-11, as well as the Artronix PC12 minicomputer. Word about MUMPS spread mostly through the medical community; it came into widespread use, with sites often modifying it locally for their own needs. By the early 1970s, there were many and varied implementations of MUMPS on a range of hardware platforms. Another noteworthy platform was Paul Stylos' DEC MUMPS-11 on the PDP-11, and MEDITECH's MIIS. In the fall of 1972, many MUMPS users attended a conference in Boston to standardize the then-fractured language, creating the MUMPS Users Group and the MUMPS Development Committee (MDC) to do so. 
These efforts proved successful; a standard was complete by 1974, and was approved, on September 15, 1977, as ANSI standard, X11.1-1977. At about the same time DEC launched DSM-11 (Digital Standard MUMPS) for the PDP-11. This quickly dominated the market, and became the reference implementation of the time. Also, InterSystems sold ISM-11 for the PDP-11 (which was identical to DSM-11). During the early 1980s several vendors brought MUMPS-based platforms that met the ANSI standard to market. The most significant were: Other companies developed important MUMPS implementations: This period also saw considerable MDC activity. The second revision of the ANSI standard for MUMPS (X11.1-1984) was approved on November 15, 1984. The US Department of Veterans Affairs (formerly the Veterans Administration) was one of the earliest major adopters of the MUMPS language. Their development work (and subsequent contributions to the free MUMPS application codebase) was an influence on many medical users worldwide. In 1995, the Veterans Affairs' patient Admission/Tracking/Discharge system, Decentralized Hospital Computer Program (DHCP) was the recipient of the Computerworld Smithsonian Award for best use of Information Technology in Medicine. In July 2006, the Department of Veterans Affairs (VA) / Veterans Health Administration (VHA) was the recipient of the Innovations in American Government Award presented by the Ash Institute of the John F. Kennedy School of Government at Harvard University for its extension of DHCP into the Veterans Health Information Systems and Technology Architecture (VistA). Nearly the entire VA hospital system in the United States, the Indian Health Service, and major parts of the Department of Defense CHCS hospital system use MUMPS databases for clinical data tracking. In 2015 the Department of Defense awarded a 10 year contract to Leidos, Cerner, and Accenture to replace CHCS. 
In 2017 the Veterans Health Administration (VHA) announced that it would replace VistA with Cerner by 2024 or 2025. In 2019 the VHA announced that it would also replace its Epic Systems online appointment-scheduling system with one from Cerner by 2024. Other healthcare IT companies using MUMPS include Epic, MEDITECH, GE Healthcare (formerly IDX Systems and Centricity), AmeriPath (part of Quest Diagnostics), Care Centric, Allscripts, Coventry Healthcare, EMIS, and Sunquest Information Systems (formerly Misys Healthcare). Many reference laboratories, such as DASA, Quest Diagnostics, and Dynacare, use MUMPS software written by or based on Antrim Corporation code. Antrim was purchased by Misys Healthcare (now Sunquest Information Systems) in 2001. MUMPS is also widely used in financial applications. It gained an early following in the financial sector and is in use at many banks and credit unions. It is used by Ameritrade, the largest online trading service in the US with over 12 billion transactions per day, as well as by the Bank of England and Barclays Bank, among others. Since 2005, the use of MUMPS has been either in the form of GT.M or InterSystems Caché; the latter is being aggressively marketed by InterSystems and has had success in penetrating new markets, such as telecommunications, in addition to existing ones. The European Space Agency announced on May 13, 2010 that it would use the InterSystems Caché database to support the Gaia mission, which aims to map the Milky Way with unprecedented precision. MUMPS is a language intended for and designed to build database applications. Secondary language features were included to help programmers make applications using minimal computing resources. The original implementations were interpreted, though modern implementations may be fully or partially compiled. Individual "programs" run in memory "partitions". 
Early MUMPS memory partitions were limited to 2048 bytes, so aggressive abbreviation greatly aided multi-programming on severely resource-limited hardware, because more than one MUMPS job could fit into the very small memories of the time. The ability to provide multi-user systems was another language design feature; the word "Multi-Programming" in the acronym points to this. Even the earliest machines running MUMPS supported multiple jobs running at the same time. With the change from minicomputers to microcomputers a few years later, even a "single-user PC" with a single 8-bit CPU and 16K or 64K of memory could support multiple users, who could connect to it from (non-graphical) video display terminals. Since memory was originally tight, the language design for MUMPS valued very terse code. Thus, every MUMPS command or function name could be abbreviated to from one to three letters, e.g., Quit (exit program) as Q, the $Piece function as $P, the Read command as R, and the $Translate function as $TR. Spaces and end-of-line markers are significant in MUMPS because line scope promoted the same terse language design. Thus, a single line of program code could express, with few characters, an idea for which other programming languages could require five to ten times as many characters. Abbreviation was a common feature of languages designed in this period (e.g., FOCAL-69, early BASICs such as Tiny BASIC, etc.). An unfortunate side effect of this, coupled with the early need to write minimalist code, was that MUMPS programmers routinely did not comment code and used extensive abbreviations. This meant that even an expert MUMPS programmer could not just skim through a page of code to see its function, but had to analyze it line by line. Database interaction is transparently built into the language. The MUMPS language provides a hierarchical database made up of persistent sparse arrays, which is implicitly "opened" for every MUMPS application. 
All variable names prefixed with the caret character ("^") use permanent (instead of RAM) storage, maintain their values after the application exits, and are visible to (and modifiable by) other running applications. Variables using this shared and permanent storage are called "globals" in MUMPS, because these variables are "globally available" to all jobs on the system. The more recent and more common use of the name "global variables" in other languages describes a more limited scoping of names, coming from the fact that unscoped variables are "globally" available to any programs running in the same process, but not shared among multiple processes. The MUMPS storage mode (i.e., globals stored as persistent sparse arrays) gives the MUMPS database the characteristics of a document-oriented database. All variable names which are not prefixed with the caret character ("^") are temporary and private. Like global variables, they have a hierarchical storage model, but are only "locally available" to a single job, so they are called "locals". Both globals and locals can have child nodes (called "subscripts" in MUMPS terminology). Subscripts are not limited to numerals: any ASCII character or group of characters can be a subscript identifier. While this is not uncommon for modern languages such as Perl or JavaScript, it was a highly unusual feature in the late 1970s. This capability was not universally implemented in MUMPS systems before the 1984 ANSI standard, as only canonically numeric subscripts were required by the standard to be allowed. Thus, the variable named 'Car' can have the subscripts "Door", "Steering Wheel", and "Engine", each of which can contain a value and have subscripts of its own. The variable ^Car("Door") could have a nested variable subscript of "Color", for example. Thus, you could say

SET ^Car("Door","Color")="BLUE"

to modify a nested child node of ^Car. 
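The hierarchical node structure just described, where every node can hold both a value and child subscripts, can be sketched in Python terms. This is an illustrative emulation with nested dictionaries, not MUMPS itself (the class and method names are hypothetical, and real globals are persistent on disk and shared between jobs):

```python
# Illustrative emulation of a MUMPS hierarchical variable. Each node may
# carry a value AND child subscripts, so a node is a small object rather
# than a bare dictionary entry.

class Node:
    def __init__(self):
        self.value = None      # data stored at this node, if any
        self.children = {}     # subscript -> child Node

    def get(self, *subscripts):
        node = self
        for s in subscripts:
            node = node.children[s]
        return node.value

    def set(self, *subscripts_and_value):
        *subscripts, value = subscripts_and_value
        node = self
        for s in subscripts:
            node = node.children.setdefault(s, Node())
        node.value = value

# Emulates: SET ^Car("Door","Color")="BLUE"
car = Node()
car.set("Door", "Color", "BLUE")
```

In real MUMPS the caret prefix is what makes ^Car persistent and visible to every job on the system; this sketch only models the tree shape within a single process.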
In MUMPS terms, "Color" is the second subscript of the variable ^Car (both the names of the child nodes and the child nodes themselves are likewise called subscripts). Hierarchical variables are similar to objects with properties in many object-oriented languages. Additionally, the MUMPS language design requires that all subscripts of variables be automatically kept in sorted order. Numeric subscripts (including floating-point numbers) are stored from lowest to highest, and all non-numeric subscripts are stored in alphabetical order following the numbers. In MUMPS terminology, this is "canonical order". By using only non-negative integer subscripts, the MUMPS programmer can emulate the array data type of other languages. Although MUMPS does not natively offer a full set of DBMS features such as mandatory schemas, several DBMS systems have been built on top of it that provide application developers with flat-file, relational, and network database features. Additionally, there are built-in operators which treat a delimited string (e.g., comma-separated values) as an array. Early MUMPS programmers would often store a structure of related information as a delimited string, parsing it after it was read in; this saved disk-access time and offered considerable speed advantages on some hardware. MUMPS has no data types. Numbers can be treated as strings of digits, and strings can be treated as numbers by numeric operators ("coerced", in MUMPS terminology). Coercion can have some odd side effects, however. For example, when a string is coerced, the parser turns as much of the string (starting from the left) into a number as it can, then discards the rest. Thus the statement codice_1 is evaluated as codice_2 in MUMPS. Other features of the language are intended to help MUMPS applications interact with each other in a multi-user environment. Database locks, process identifiers, and atomicity of database update transactions are all required of standard MUMPS implementations. 
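The canonical collation rule described above (numeric subscripts first, lowest to highest, then the remaining subscripts in string order) can be sketched as a Python sort key. This is an illustrative emulation, not MUMPS itself; for brevity it treats anything `float()` can parse as numeric, whereas real MUMPS only treats canonically formatted numbers that way:

```python
# Illustrative sort key for MUMPS "canonical order": numeric subscripts
# sort numerically before everything else; the rest sort as strings.

def canonical_key(subscript):
    s = str(subscript)
    try:
        return (0, float(s), "")   # numeric subscripts: group 0, by value
    except ValueError:
        return (1, 0.0, s)         # everything else: group 1, by string

subscripts = ["Engine", "10", "2", "-1.5", "Door"]
# Numbers come first in numeric order, then strings alphabetically.
assert sorted(subscripts, key=canonical_key) == ["-1.5", "2", "10", "Door", "Engine"]
```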
In contrast to languages in the C or Wirth traditions, some space characters between MUMPS statements are significant. A single space separates a command from its argument, and a space or newline separates each argument from the next MUMPS token. Commands which take no arguments (e.g., codice_3) require two following spaces: the concept is that one space separates the command from the (nonexistent) argument, and the next separates the "argument" from the next command. Newlines are also significant; an codice_4, codice_3 or codice_6 command processes (or skips) everything else to the end of the line. To make those statements control multiple lines, you must use the codice_7 command to create a code block. A simple Hello world program in MUMPS might be:

hello()
  write "Hello, World!",!
  quit

and would be run from the MUMPS command line with the command codice_8. Since MUMPS allows commands to be strung together on the same line, and since commands can be abbreviated to a single letter, this routine could be made more compact:

hello() w "Hello, World!",! q

The 'codice_9' after the text generates a newline. ANSI X11.1-1995 gives a complete, formal description of the language; an annotated version of this standard is available online. Data types: there is one universal datatype, which is implicitly coerced to string, integer, or floating-point datatypes as context requires. Booleans (called "truthvalues" in MUMPS): in IF commands and other syntax that has expressions evaluated as conditions, any string value is evaluated as a numeric value, and if that is nonzero it is interpreted as true. codice_10 yields 1 if a is less than b, 0 otherwise. Declarations: none. All variables are dynamically created the first time a value is assigned. Lines: these are important syntactic entities, unlike their status in languages patterned on C or Pascal. Multiple statements per line are allowed and are common. The scope of any IF, ELSE, and FOR command is "the remainder of the current line." 
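The coercion and truthvalue rules described above can be sketched in Python terms: a string is scanned from the left for as long a numeric prefix as possible, the rest is discarded, and a condition is true exactly when that number is nonzero. This is an illustrative emulation (function names are hypothetical, and details such as exponent handling vary between real implementations):

```python
import re

# Illustrative emulation of MUMPS numeric coercion: the parser turns as
# much of the string as it can (starting from the left) into a number,
# then discards the rest; a string with no numeric prefix coerces to 0.
def coerce(value):
    m = re.match(r"[+-]?(\d+\.?\d*|\.\d+)", str(value))
    return float(m.group()) if m else 0.0

# A MUMPS "truthvalue": evaluate the string numerically; nonzero is true.
def truthvalue(value):
    return coerce(value) != 0
```

For example, under this sketch `coerce("3 primary colors")` yields 3.0, while a string with no leading number coerces to 0 and so tests false.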
Case sensitivity: commands and intrinsic functions are case-insensitive. In contrast, variable names and labels are case-sensitive. There is no special meaning for upper vs. lower case and few widely followed conventions. The percent sign (%) is legal as the first character of variables and labels. Postconditionals: execution of almost any command can be controlled by following it with a colon and a truthvalue expression. codice_11 sets A to "FOO" if N is less than 10; codice_12 performs PRINTERR if N is greater than 100. This construct provides a conditional whose scope is less than a full line. Abbreviation: you can abbreviate nearly all commands and native functions to one, two, or three characters. Reserved words: none. Since MUMPS interprets source code by context, there is no need for reserved words. You may use the names of language commands as variables, so the following is perfectly legal MUMPS code:

GREPTHIS()
 THEN IF IF,SET&KILL SET SET=SET+KILL QUIT

MUMPS can be made even more obfuscated by using the contracted operator syntax, as shown in this terse example derived from the one above:

GREPTHIS()
 T I I,S&K S S=S+K Q

Arrays: these are created dynamically, stored as B-trees, are sparse (i.e., they use almost no space for missing nodes), can use any number of subscripts, and the subscripts can be strings or numeric (including floating point). Arrays are always automatically stored in sorted order, so there is never any occasion to sort, pack, reorder, or otherwise reorganize the database. Built-in functions such as $DATA, $ORDER, $NEXT (deprecated) and $QUERY provide efficient examination and traversal of the fundamental array structure, on disk or in memory. For example:

for i=10000:1:12345 set sqtable(i)=i*i
set address("Smith","Daniel")="dpbsmith@world.std.com"

Local arrays: variable names not beginning with a caret (i.e., "^") are stored in memory by process, are private to the creating process, and expire when the creating process terminates. 
The available storage depends on the implementation. For implementations using partitions, it is limited to the partition size (a small partition might be 32K); for other implementations, it may be several megabytes. Global arrays: codice_13. These are stored on disk, are available to all processes, and persist after the creating process terminates. Very large globals (for example, hundreds of gigabytes) are practical and efficient in most implementations. This is MUMPS' main "database" mechanism; it is used instead of calling on the operating system to create, write, and read files. Indirection: in many contexts, codice_14 can be used, and effectively substitutes the contents of VBL into another MUMPS statement. codice_15 sets the variable ABC to 123. codice_16 performs the subroutine named REPORT. This substitution allows for lazy evaluation and late binding, as well as effectively the operational equivalent of "pointers" in other languages. Piece function: this breaks variables into segmented pieces guided by a user-specified separator string (sometimes called a "delimiter"); those who know awk will find this familiar. codice_17 means the "third caret-separated piece of STRINGVAR". The piece function can also appear as an assignment (SET command) target. codice_18 yields "std". After SET X="dpbsmith@world.std.com", codice_19 causes X to become "office@world.std.com" (note that $P is equivalent to $PIECE and could be written as such). Order function: this function treats its input as a structure and finds the next index that exists which has the same structure except for the last subscript. It returns the sorted value that is ordered after the one given as input. (This treats the array reference as content-addressable data rather than as the address of a value.) Given

Set stuff(6)="xyz",stuff(10)=26,stuff(15)=""

codice_20 yields 6, codice_21 yields 10, codice_22 yields 10, codice_23 yields 15, and codice_24 yields "". 
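The piece and order behaviours just described can be emulated in Python terms. This is an illustrative sketch, not MUMPS itself: the function names are hypothetical, real $PIECE also accepts piece ranges, and for brevity this order() assumes uniform key types rather than full canonical collation:

```python
# Illustrative Python emulation of two MUMPS intrinsics.

def piece(string, delimiter, n):
    """$PIECE: the n-th (1-indexed) delimited field, or "" if absent."""
    fields = string.split(delimiter)
    return fields[n - 1] if 1 <= n <= len(fields) else ""

def set_piece(string, delimiter, n, value):
    """SET $PIECE(...) as a target: replace the n-th field."""
    fields = string.split(delimiter)
    while len(fields) < n:          # pad with empty fields if needed
        fields.append("")
    fields[n - 1] = value
    return delimiter.join(fields)

def order(array, after):
    """$ORDER: the key sorted immediately after `after`; "" starts/ends."""
    keys = sorted(array)            # assumes uniform key types for brevity
    if after == "":
        return keys[0] if keys else ""
    for k in keys:
        if k > after:
            return k
    return ""

# piece("world.std.com", ".", 2) mirrors $PIECE("world.std.com",".",2)
# With stuff(6)="xyz", stuff(10)=26, stuff(15)="":
#   order(stuff, 6) -> 10 and order(stuff, 15) -> "", as in the text.
```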
Set i="" For  Set i=$O(stuff(i)) Quit:i=""  Write !,i,?10,stuff(i)

Here, the argument-less "For" repeats until stopped by a terminating "Quit". This line prints a table of i and stuff(i), where i is successively 6, 10, and 15. For iterating the database, the Order function returns the next key to use:

GTM>S n=""
GTM>S n=$order(^nodex(n))
GTM>zwr n
n=" building"
GTM>S n=$order(^nodex(n))
GTM>zwr n
n=" name:gd"
GTM>S n=$order(^nodex(n))
GTM>zwr n
n="%kml:guid"

Multi-user/multi-tasking/multi-processor: MUMPS supports multiple simultaneous users and processes even when the underlying operating system does not (e.g., MS-DOS). Additionally, there is the ability to specify an environment for a variable, such as by specifying a machine name in a variable (as in codice_25), which allows access to data on remote machines. Some aspects of MUMPS syntax differ strongly from those of more modern languages, which can cause confusion. Whitespace is not allowed within expressions, as it ends a statement: codice_26 is an error, and must be written codice_27. All operators have the same precedence and are left-associative (codice_28 evaluates to 50). The operators for "less than or equal to" and "greater than or equal to" are codice_29 and codice_30 (that is, the boolean negation operator codice_31 plus a strict comparison operator). Periods (codice_32) are used to indent the lines in a DO block, not whitespace. The ELSE command does not need a corresponding IF, as it operates by inspecting the value of the built-in system variable codice_33. MUMPS scoping rules are more permissive than those of other modern languages. Declared local variables are scoped using the stack. 
A routine can normally see all declared locals of the routines below it on the call stack, and routines cannot prevent the routines they call from modifying their declared locals, unless the caller manually creates a new stack level (codice_34) and aliases each of the variables it wishes to protect (codice_35) before calling any child routines. By contrast, undeclared variables (variables created by using them, rather than by declaration) are in scope for all routines running in the same process, and remain in scope until the program exits. Because MUMPS database references differ from internal variable references only in the caret prefix, it is dangerously easy to unintentionally edit the database, or even to delete a database "table". Whether the language should be called MUMPS or M has been disputed, and a range of positions on the question have been supported by knowledgeable people at various times. Some of the contention arose in response to strong M advocacy on the part of one commercial interest, InterSystems, whose chief executive disliked the name MUMPS and felt that it represented a serious marketing obstacle. Thus, favoring M to some extent became identified as alignment with InterSystems. The dispute also reflected rivalry between organizations (the M Technology Association, the MUMPS Development Committee, and the ANSI and ISO standards committees) as to who determines the "official" name of the language. Some writers have attempted to defuse the issue by referring to the language as "M[UMPS]", square brackets being the customary notation for optional syntax elements. A leading authority, Professor Kevin O'Kane, author of an open-source MUMPS implementation, uses only "MUMPS". The most recent standard (ISO/IEC 11756:1999, re-affirmed on 25 June 2010) still mentions both M and MUMPS as officially accepted names. Massachusetts General Hospital registered "MUMPS" as a trademark with the USPTO on November 28, 1971, and renewed it on November 16, 1992, but let it expire on August 30, 2003. 
MUMPS invites comparison with the Pick operating system, with which it shares a number of similarities.
https://en.wikipedia.org/wiki?curid=19723
Mercury (programming language) Mercury is a functional logic programming language made for real-world uses. The first version was developed at the University of Melbourne's Computer Science department by Fergus Henderson, Thomas Conway, and Zoltan Somogyi, under Somogyi's supervision, and released on April 8, 1995. Mercury is a purely declarative logic programming language, related to both Prolog and Haskell. It features a strong, static, polymorphic type system, as well as a strong mode and determinism system. The official implementation, the Melbourne Mercury Compiler, is available for most Unix and Unix-like platforms, including Linux and macOS, and for Windows. Mercury is based on the logic programming language Prolog; it has the same syntax and shares basic concepts such as the selective linear definite clause resolution (SLD) algorithm, and can be viewed as a pure subset of Prolog with strong types and modes. As such, it is often compared to its predecessor in features and run-time efficiency. The language is designed using software-engineering principles. Unlike the original implementations of Prolog, it has a separate compilation phase, rather than being directly interpreted. This allows a much wider range of errors to be detected before running a program. It features a strict static type and mode system and a module system. By using information obtained at compile time (such as type and mode), programs written in Mercury typically perform significantly faster than equivalent programs written in Prolog. Its authors claim that Mercury is the fastest logic language in the world, by a wide margin. Mercury is a purely declarative language, unlike Prolog, since it lacks "extra-logical" Prolog statements such as codice_1 (cut) and imperative input/output (I/O). 
This enables advanced static program analysis and program optimization, including compile-time garbage collection, but it can make certain programming constructs (such as a switch over a number of options, with a default) harder to express. (While Mercury does allow impure functionality, this serves mainly as a way to call foreign-language code; all impure code must be marked explicitly.) Operations which would typically be impure (such as input/output) are expressed using pure constructs in Mercury using linear types, by threading a dummy "world" value through all relevant code. Notable programs written in Mercury include the Mercury compiler and the Prince XML formatter. The software company Mission Critical IT has also been using Mercury since 2000 to develop enterprise applications and its ontology-driven software-development platform, ODASE. Mercury has several back-ends, which enable compiling Mercury code into several target languages. It also features a foreign-language interface, allowing code in other languages (depending on the chosen back-end) to be linked with Mercury code; further languages can then be interfaced to by calling them from those languages. However, this means that foreign-language code may need to be written several times for the different back-ends, otherwise portability between back-ends will be lost. The most commonly used back-end is the original low-level C back-end. The standard introductory examples are a Hello World program and calculating the 10th Fibonacci number in the most obvious way. In such code, codice_2 is a "state variable": syntactic sugar for a pair of variables which are assigned concrete names at compilation, so the source is desugared into a chain of explicitly threaded intermediate variables. Releases are named according to the year and month of release. The current stable release is 14.01.1 (September 2014). Prior releases were numbered 0.12, 0.13, etc., and the time between stable releases can be as long as three years. 
There is often also a snapshot "release of the day" (ROTD) consisting of the latest features and bug fixes added to the last stable release.
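The "threading a dummy world value" approach mentioned above can be sketched by analogy in Python. This is illustrative only, not Mercury's actual machinery; all names here are hypothetical:

```python
# Illustrative analogy for pure I/O via a threaded "world" value: every
# effectful step takes the current world and returns a new one, so each
# function remains a pure mapping from inputs to outputs.

def write_string(text, world):
    # "Performs" output by returning a new world with the text recorded.
    return world + (text,)

def main(world0):
    world1 = write_string("Hello, ", world0)
    world2 = write_string("world!", world1)
    return world2
```

Mercury's state-variable notation hides exactly this kind of world0/world1/world2 chaining behind a single name, and its unique (linear) modes guarantee that each world value is consumed exactly once, so the "old" world can never be reused.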
https://en.wikipedia.org/wiki?curid=19726
Michael Faraday Michael Faraday (22 September 1791 – 25 August 1867) was an English scientist who contributed to the study of electromagnetism and electrochemistry. His main discoveries include the principles underlying electromagnetic induction, diamagnetism and electrolysis. Although Faraday received little formal education, he was one of the most influential scientists in history. It was by his research on the magnetic field around a conductor carrying a direct current that Faraday established the basis for the concept of the electromagnetic field in physics. Faraday also established that magnetism could affect rays of light and that there was an underlying relationship between the two phenomena. He similarly discovered the principles of electromagnetic induction and diamagnetism, and the laws of electrolysis. His inventions of electromagnetic rotary devices formed the foundation of electric motor technology, and it was largely due to his efforts that electricity became practical for use in technology. As a chemist, Faraday discovered benzene, investigated the clathrate hydrate of chlorine, invented an early form of the Bunsen burner and the system of oxidation numbers, and popularised terminology such as "anode", "cathode", "electrode" and "ion". Faraday ultimately became the first and foremost Fullerian Professor of Chemistry at the Royal Institution, a lifetime position. Faraday was an excellent experimentalist who conveyed his ideas in clear and simple language; his mathematical abilities, however, did not extend as far as trigonometry and were limited to the simplest algebra. James Clerk Maxwell took the work of Faraday and others and summarized it in a set of equations which is accepted as the basis of all modern theories of electromagnetic phenomena. 
On Faraday's uses of lines of force, Maxwell wrote that they show Faraday "to have been in reality a mathematician of a very high order – one from whom the mathematicians of the future may derive valuable and fertile methods." The SI unit of capacitance is named in his honour: the farad. Albert Einstein kept a picture of Faraday on his study wall, alongside pictures of Isaac Newton and James Clerk Maxwell. Physicist Ernest Rutherford stated, "When we consider the magnitude and extent of his discoveries and their influence on the progress of science and of industry, there is no honour too great to pay to the memory of Faraday, one of the greatest scientific discoverers of all time." Michael Faraday was born on 22 September 1791 in Newington Butts, which is now part of the London Borough of Southwark but was then a suburban part of Surrey. His family was not well off. His father, James, was a member of the Glassite sect of Christianity. James Faraday moved his wife and two children to London during the winter of 1790 from Outhgill in Westmorland, where he had been an apprentice to the village blacksmith. Michael was born in the autumn of that year. The young Michael Faraday, who was the third of four children, having only the most basic school education, had to educate himself. At the age of 14 he became an apprentice to George Riebau, a local bookbinder and bookseller in Blandford Street. During his seven-year apprenticeship Faraday read many books, including Isaac Watts's "The Improvement of the Mind", and he enthusiastically implemented the principles and suggestions contained therein. He also developed an interest in science, especially in electricity. Faraday was particularly inspired by the book "Conversations on Chemistry" by Jane Marcet. 
In 1812, at the age of 20 and at the end of his apprenticeship, Faraday attended lectures by the eminent English chemist Humphry Davy of the Royal Institution and the Royal Society, and John Tatum, founder of the City Philosophical Society. Many of the tickets for these lectures were given to Faraday by William Dance, who was one of the founders of the Royal Philharmonic Society. Faraday subsequently sent Davy a 300-page book based on notes that he had taken during these lectures. Davy's reply was immediate, kind, and favourable. In 1813, when Davy damaged his eyesight in an accident with nitrogen trichloride, he decided to employ Faraday as an assistant. Coincidentally one of the Royal Institution's assistants, John Payne, was sacked and Sir Humphry Davy had been asked to find a replacement; thus he appointed Faraday as Chemical Assistant at the Royal Institution on 1 March 1813. Very soon Davy entrusted Faraday with the preparation of nitrogen trichloride samples, and they both were injured in an explosion of this very sensitive substance. Faraday married Sarah Barnard (1800–1879) on 12 June 1821. They met through their families at the Sandemanian church, and he confessed his faith to the Sandemanian congregation the month after they were married. They had no children. Faraday was a devout Christian; his Sandemanian denomination was an offshoot of the Church of Scotland. Well after his marriage, he served as deacon and for two terms as an elder in the meeting house of his youth. His church was located at Paul's Alley in the Barbican. This meeting house relocated in 1862 to Barnsbury Grove, Islington; this North London location was where Faraday served the final two years of his second term as elder prior to his resignation from that post. Biographers have noted that "a strong sense of the unity of God and nature pervaded Faraday's life and work." In June 1832, the University of Oxford granted Faraday an honorary Doctor of Civil Law degree. 
During his lifetime, he was offered a knighthood in recognition for his services to science, which he turned down on religious grounds, believing that it was against the word of the Bible to accumulate riches and pursue worldly reward, and stating that he preferred to remain "plain Mr Faraday to the end". Elected a member of the Royal Society in 1824, he twice refused to become President. He became the first Fullerian Professor of Chemistry at the Royal Institution in 1833. In 1832, Faraday was elected a Foreign Honorary Member of the American Academy of Arts and Sciences. He was elected a foreign member of the Royal Swedish Academy of Sciences in 1838, and was one of eight foreign members elected to the French Academy of Sciences in 1844. In 1849 he was elected as an associated member of the Royal Institute of the Netherlands, which two years later became the Royal Netherlands Academy of Arts and Sciences, and he was subsequently made a foreign member. Faraday suffered a nervous breakdown in 1839 but eventually returned to his investigations into electromagnetism. In 1848, as a result of representations by the Prince Consort, Faraday was awarded a grace and favour house in Hampton Court in Middlesex, free of all expenses and upkeep. This was the Master Mason's House, later called Faraday House, and now No. 37 Hampton Court Road. In 1858 Faraday retired to live there. Faraday provided a number of services to the British government, but when asked to advise on the production of chemical weapons for use in the Crimean War (1853–1856), he refused to participate, citing ethical reasons. Faraday died at his house at Hampton Court on 25 August 1867, aged 75. He had some years before turned down an offer of burial in Westminster Abbey upon his death, but he has a memorial plaque there, near Isaac Newton's tomb. Faraday was interred in the dissenters' (non-Anglican) section of Highgate Cemetery. 
Faraday's earliest chemical work was as an assistant to Humphry Davy. Faraday was specifically involved in the study of chlorine; he discovered two new compounds of chlorine and carbon. He also conducted the first rough experiments on the diffusion of gases, a phenomenon that was first pointed out by John Dalton. The physical importance of this phenomenon was more fully revealed by Thomas Graham and Joseph Loschmidt. Faraday succeeded in liquefying several gases, investigated the alloys of steel, and produced several new kinds of glass intended for optical purposes. A specimen of one of these heavy glasses subsequently became historically important; when the glass was placed in a magnetic field Faraday determined the rotation of the plane of polarisation of light. This specimen was also the first substance found to be repelled by the poles of a magnet. Faraday invented an early form of what was to become the Bunsen burner, which is in practical use in science laboratories around the world as a convenient source of heat. Faraday worked extensively in the field of chemistry, discovering chemical substances such as benzene (which he called bicarburet of hydrogen) and liquefying gases such as chlorine. The liquefying of gases helped to establish that gases are the vapours of liquids possessing a very low boiling point and gave a more solid basis to the concept of molecular aggregation. In 1820 Faraday reported the first synthesis of compounds made from carbon and chlorine, C2Cl6 and C2Cl4, and published his results the following year. Faraday also determined the composition of the chlorine clathrate hydrate, which had been discovered by Humphry Davy in 1810. Faraday is also responsible for discovering the laws of electrolysis, and for popularizing terminology such as anode, cathode, electrode, and ion, terms proposed in large part by William Whewell. Faraday was the first to report what later came to be called metallic nanoparticles. 
In 1847 he discovered that the optical properties of gold colloids differed from those of the corresponding bulk metal. This was probably the first reported observation of the effects of quantum size, and might be considered to be the birth of nanoscience. Faraday is best known for his work regarding electricity and magnetism. His first recorded experiment was the construction of a voltaic pile with seven ha'penny coins, stacked together with seven disks of sheet zinc, and six pieces of paper moistened with salt water. With this pile he decomposed sulfate of magnesia (first letter to Abbott, 12 July 1812). In 1821, soon after the Danish physicist and chemist Hans Christian Ørsted discovered the phenomenon of electromagnetism, Davy and British scientist William Hyde Wollaston tried, but failed, to design an electric motor. Faraday, having discussed the problem with the two men, went on to build two devices to produce what he called "electromagnetic rotation". One of these, now known as the homopolar motor, caused a continuous circular motion that was engendered by the circular magnetic force around a wire that extended into a pool of mercury wherein was placed a magnet; the wire would then rotate around the magnet if supplied with current from a chemical battery. These experiments and inventions formed the foundation of modern electromagnetic technology. In his excitement, Faraday published results without acknowledging his work with either Wollaston or Davy. The resulting controversy within the Royal Society strained his mentor relationship with Davy and may well have contributed to Faraday's assignment to other activities, which consequently prevented his involvement in electromagnetic research for several years. From his initial discovery in 1821, Faraday continued his laboratory work, exploring electromagnetic properties of materials and developing requisite experience. 
In 1824, Faraday briefly set up a circuit to study whether a magnetic field could regulate the flow of a current in an adjacent wire, but he found no such relationship. This experiment followed similar work conducted with light and magnets three years earlier that yielded identical results. During the next seven years, Faraday spent much of his time perfecting his recipe for optical quality (heavy) glass, borosilicate of lead, which he used in his future studies connecting light with magnetism. In his spare time, Faraday continued publishing his experimental work on optics and electromagnetism; he conducted correspondence with scientists whom he had met on his journeys across Europe with Davy, and who were also working on electromagnetism. Two years after the death of Davy, in 1831, he began his great series of experiments in which he discovered electromagnetic induction, recording in his laboratory diary on 28 October 1831 that he was "making many experiments with the great magnet of the Royal Society". Faraday's breakthrough came when he wrapped two insulated coils of wire around an iron ring, and found that upon passing a current through one coil, a momentary current was induced in the other coil. This phenomenon is now known as mutual induction. The iron ring-coil apparatus is still on display at the Royal Institution. In subsequent experiments, he found that if he moved a magnet through a loop of wire, an electric current flowed in that wire. The current also flowed if the loop was moved over a stationary magnet. His demonstrations established that a changing magnetic field produces an electric field; this relation was modelled mathematically by James Clerk Maxwell as Faraday's law, which subsequently became one of the four Maxwell equations, and which in turn evolved into the generalization known today as field theory.
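The relation Maxwell distilled from these induction experiments can be stated compactly; a sketch of the standard textbook form (the notation is the conventional modern one, not Faraday's own):

```latex
% Faraday's law of induction: the electromotive force around a closed loop
% equals the negative rate of change of the magnetic flux through it.
\mathcal{E} = -\frac{\mathrm{d}\Phi_B}{\mathrm{d}t},
\qquad \Phi_B = \int_S \mathbf{B} \cdot \mathrm{d}\mathbf{A}

% In differential form, one of the four Maxwell equations:
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
```

Both of Faraday's observations follow from the flux form: moving the magnet and moving the loop each change \(\Phi_B\), so each induces a current.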
Faraday would later use the principles he had discovered to construct the electric dynamo, the ancestor of modern power generators and the electric motor. In 1832, he completed a series of experiments aimed at investigating the fundamental nature of electricity; Faraday used "static", batteries, and "animal electricity" to produce the phenomena of electrostatic attraction, electrolysis, magnetism, etc. He concluded that, contrary to the scientific opinion of the time, the divisions between the various "kinds" of electricity were illusory. Faraday instead proposed that only a single "electricity" exists, and the changing values of quantity and intensity (current and voltage) would produce different groups of phenomena. Near the end of his career, Faraday proposed that electromagnetic forces extended into the empty space around the conductor. This idea was rejected by his fellow scientists, and Faraday did not live to see the eventual acceptance of his proposition by the scientific community. Faraday's concept of lines of flux emanating from charged bodies and magnets provided a way to visualize electric and magnetic fields; that conceptual model was crucial for the successful development of the electromechanical devices that dominated engineering and industry for the remainder of the 19th century. In 1845, Faraday discovered that many materials exhibit a weak repulsion from a magnetic field: a phenomenon he termed diamagnetism. Faraday also discovered that the plane of polarization of linearly polarized light can be rotated by the application of an external magnetic field aligned with the direction in which the light is moving. This is now termed the Faraday effect. In September 1845 he wrote in his notebook, "I have at last succeeded in 'illuminating a magnetic curve' or 'line of force' and in 'magnetising a ray of light'".
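In the modern formulation, the rotation Faraday observed is linear in both the field strength and the path length through the medium; a sketch of the standard relation (the symbols are the conventional ones, not from the source):

```latex
% Faraday effect: rotation angle of the plane of polarization.
%   V = Verdet constant of the medium (wavelength-dependent),
%   B = magnetic flux density along the propagation direction,
%   d = path length of the light through the material.
\beta = \mathcal{V}\, B\, d
```

The heavy lead borosilicate glass mentioned above mattered precisely because its large Verdet constant made the rotation big enough for Faraday to detect.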
Later in his life, in 1862, Faraday used a spectroscope to search for a different alteration of light, the change of spectral lines by an applied magnetic field. The equipment available to him was, however, insufficient for a definite determination of spectral change. Pieter Zeeman later used an improved apparatus to study the same phenomenon, publishing his results in 1897 and receiving the 1902 Nobel Prize in Physics for his success. In both his 1897 paper and his Nobel acceptance speech, Zeeman made reference to Faraday's work. In his work on static electricity, Faraday's ice pail experiment demonstrated that the charge resided only on the exterior of a charged conductor, and exterior charge had no influence on anything enclosed within a conductor. This is because the exterior charges redistribute such that the interior fields emanating from them cancel one another. This shielding effect is used in what is now known as a Faraday cage. Faraday had a long association with the Royal Institution of Great Britain. He was appointed Assistant Superintendent of the House of the Royal Institution in 1821. He was elected a member of the Royal Society in 1824. In 1825, he became Director of the Laboratory of the Royal Institution. In 1833, Faraday became the first Fullerian Professor of Chemistry at the Royal Institution of Great Britain, a position to which he was appointed for life without the obligation to deliver lectures. His sponsor and mentor was John 'Mad Jack' Fuller, who created the position at the Royal Institution for Faraday. Beyond his scientific research into areas such as chemistry, electricity, and magnetism at the Royal Institution, Faraday undertook numerous, and often time-consuming, service projects for private enterprise and the British government.
This work included investigating explosions in coal mines, serving as an expert witness in court, and, along with two engineers from Chance Brothers around 1853, preparing the high-quality optical glass required by Chance for its lighthouses. In 1846, together with Charles Lyell, he produced a lengthy and detailed report on a serious explosion in the colliery at Haswell, County Durham, which killed 95 miners. Their report was a meticulous forensic investigation and indicated that coal dust contributed to the severity of the explosion. The report should have warned coal owners of the hazard of coal dust explosions, but the risk was ignored for over 60 years until the Senghenydd Colliery Disaster of 1913. As a respected scientist in a nation with strong maritime interests, Faraday spent extensive amounts of time on projects such as the construction and operation of lighthouses and protecting the bottoms of ships from corrosion. His workshop still stands at Trinity Buoy Wharf above the Chain and Buoy Store, next to London's only lighthouse, where he carried out the first experiments in electric lighting for lighthouses. Faraday was also active in what would now be called environmental science or engineering. He investigated industrial pollution at Swansea and was consulted on air pollution at the Royal Mint. In July 1855, Faraday wrote a letter to "The Times" on the subject of the foul condition of the River Thames, which resulted in an often-reprinted cartoon in "Punch". (See also The Great Stink). Faraday assisted with the planning and judging of exhibits for the Great Exhibition of 1851 in London. He also advised the National Gallery on the cleaning and protection of its art collection, and served on the National Gallery Site Commission in 1857.
Education was another of Faraday's areas of service; he lectured on the topic in 1854 at the Royal Institution, and in 1862 he appeared before a Public Schools Commission to give his views on education in Great Britain. Faraday also weighed in negatively on the public's fascination with table-turning, mesmerism, and seances, and in so doing chastised both the public and the nation's educational system. Before his famous Christmas lectures, Faraday delivered chemistry lectures for the City Philosophical Society from 1816 to 1818 in order to refine the quality of his lectures. Between 1827 and 1860 at the Royal Institution in London, Faraday gave a series of nineteen Christmas lectures for young people, a series which continues today. The objective of Faraday's Christmas lectures was to present science to the general public in the hopes of inspiring them and generating revenue for the Royal Institution. They were notable events on the social calendar among London's gentry. Over the course of several letters to his close friend Benjamin Abbott, Faraday outlined his recommendations on the art of lecturing: Faraday wrote "a flame should be lighted at the commencement and kept alive with unremitting splendour to the end". His lectures were joyful and playful: he delighted in filling soap bubbles with various gases (in order to determine whether or not they were magnetic) in front of his audiences and marveled at the rich colors of polarized light, but the lectures were also deeply philosophical. In his lectures he urged his audiences to consider the mechanics of his experiments: "you know very well that ice floats upon water ... Why does the ice float? Think of that, and philosophise".
His subjects were chemistry and electricity, and included: "The Rudiments of Chemistry" (1841), "First Principles of Electricity" (1843), "The Chemical History of a Candle" (1848), "Attractive Forces" (1851), "Voltaic Electricity" (1853), "The Chemistry of Combustion" (1854), "The Distinctive Properties of the Common Metals" (1855), "Static Electricity" (1857), "The Metallic Properties" (1858), and "The Various Forces of Matter and their Relations to Each Other" (1859). A statue of Faraday stands in Savoy Place, London, outside the Institution of Engineering and Technology. The Michael Faraday Memorial, designed by brutalist architect Rodney Gordon and completed in 1961, is at the Elephant & Castle gyratory system, near Faraday's birthplace at Newington Butts, London. Faraday School is located on Trinity Buoy Wharf where his workshop still stands above the Chain and Buoy Store, next to London's only lighthouse. Faraday Gardens is a small park in Walworth, London, not far from his birthplace at Newington Butts. It lies within the local council ward of Faraday in the London Borough of Southwark. Michael Faraday Primary school is situated on the Aylesbury Estate in Walworth. A building at London South Bank University, which houses the institute's electrical engineering departments, is named the Faraday Wing, due to its proximity to Faraday's birthplace in Newington Butts. A hall at Loughborough University was named after Faraday in 1960. Near the entrance to its dining hall is a bronze casting, which depicts the symbol of an electrical transformer, and inside there hangs a portrait, both in Faraday's honour. An eight-story building at the University of Edinburgh's science & engineering campus is named for Faraday, as is a recently built hall of accommodation at Brunel University, the main engineering building at Swansea University, and the instructional and experimental physics building at Northern Illinois University. The former UK Faraday Station in Antarctica was named after him.
Streets named for Faraday can be found in many British cities (e.g., London, Fife, Swindon, Basingstoke, Nottingham, Whitby, Kirkby, Crawley, Newbury, Swansea, Aylesbury and Stevenage) as well as in France (Paris), Germany (Berlin-Dahlem, Hermsdorf), Canada (Quebec City, Quebec; Deep River, Ontario; Ottawa, Ontario), and the United States (Reston, Virginia). A Royal Society of Arts blue plaque, unveiled in 1876, commemorates Faraday at 48 Blandford Street in London's Marylebone district. From 1991 until 2001, Faraday's picture featured on the reverse of Series E £20 banknotes issued by the Bank of England. He was portrayed conducting a lecture at the Royal Institution with the magneto-electric spark apparatus. In 2002, Faraday was ranked number 22 in the BBC's list of the 100 Greatest Britons following a UK-wide vote. The Faraday Institute for Science and Religion derives its name from the scientist, who saw his faith as integral to his scientific research. The logo of the Institute is also based on Faraday's discoveries. It was created in 2006 by a $2,000,000 grant from the John Templeton Foundation to carry out academic research, to foster understanding of the interaction between science and religion, and to engage public understanding in both these subject areas. Faraday's life and contributions to electromagnetics were the principal topic of the tenth episode, titled "The Electric Boy", of the 2014 American science documentary series, "", which was broadcast on Fox and the National Geographic Channel. Aldous Huxley, the literary giant who was also the grandson of T. H. Huxley, the grandnephew of Matthew Arnold, the brother of Julian Huxley, and the half-brother of Andrew Huxley, was well-versed in science. He wrote about Faraday in an essay entitled "A Night in Pietramala": “He is always the natural philosopher.
To discover truth is his sole aim and interest…even if I could be Shakespeare, I think I should still choose to be Faraday.” In honor and remembrance of his great scientific contributions, several institutions have created prizes and awards in his name. Faraday's books, with the exception of "Chemical Manipulation", were collections of scientific papers or transcriptions of lectures. Since his death, Faraday's diary has been published, as have several large volumes of his letters and Faraday's journal from his travels with Davy in 1813–1815.
https://en.wikipedia.org/wiki?curid=19727
Marriage Marriage, also called matrimony or wedlock, is a culturally recognised union between people, called spouses, that establishes rights and obligations between them, as well as between them and their children, and between them and their in-laws. The definition of marriage varies around the world, not only between cultures and between religions, but also throughout the history of any given culture and religion. Over time, it has expanded and also constricted in terms of who and what is encompassed. Typically, it is an institution in which interpersonal relationships, usually sexual, are acknowledged or sanctioned. In some cultures, marriage is recommended or considered to be compulsory before pursuing any sexual activity. When defined broadly, marriage is considered a cultural universal. A marriage ceremony is called a wedding. Individuals may marry for several reasons, including legal, social, libidinal, emotional, financial, spiritual, and religious purposes. Whom they marry may be influenced by gender, socially determined rules of incest, prescriptive marriage rules, parental choice and individual desire. In some areas of the world, arranged marriage, child marriage, polygamy, and sometimes forced marriage, may be practiced as a cultural tradition. Conversely, such practices may be outlawed and penalized in parts of the world out of concerns regarding the infringement of women's rights or children's rights (both female and male) or as a result of international law. Around the world, primarily in developed democracies, there has been a general trend towards ensuring equal rights for women within marriage and legally recognizing the marriages of interfaith, interracial, and same-sex couples. These trends coincide with the broader human rights movement. Marriage can be recognized by a state, an organization, a religious authority, a tribal group, a local community, or peers. It is often viewed as a contract. 
When a marriage is performed and carried out by a government institution in accordance with the marriage laws of the jurisdiction, without religious content, it is a civil marriage. Civil marriage recognizes and creates the rights and obligations intrinsic to matrimony in the eyes of the state. When a marriage is performed with religious content under the auspices of a religious institution, it is a religious marriage. Religious marriage recognizes and creates the rights and obligations intrinsic to matrimony in the eyes of that religion. Religious marriage is known variously as sacramental marriage in Catholicism, nikah in Islam, nissuin in Judaism, and various other names in other faith traditions, each with their own constraints as to what constitutes, and who can enter into, a valid religious marriage. Some countries do not recognize locally performed religious marriage on its own, and require a separate civil marriage for official purposes. Conversely, civil marriage does not exist in some countries governed by a religious legal system, such as Saudi Arabia, where marriages contracted abroad might not be recognized if they were contracted contrary to Saudi interpretations of Islamic religious law. In countries governed by a mixed secular-religious legal system, such as Lebanon and Israel, locally performed civil marriage does not exist within the country, which prevents interfaith and various other marriages that contradict religious laws from being entered into in the country; however, civil marriages performed abroad may be recognized by the state even if they conflict with religious laws. For example, in the case of recognition of marriage in Israel, this includes recognition of not only interfaith civil marriages performed abroad, but also overseas same-sex civil marriages. The act of marriage usually creates normative or legal obligations between the individuals involved, and any offspring they may produce or adopt. 
In terms of legal recognition, most sovereign states and other jurisdictions limit marriage to opposite-sex couples, and a diminishing number of these permit polygyny, child marriages, and forced marriages. In modern times, a growing number of countries, primarily developed democracies, have lifted bans on, and have established legal recognition for, the marriages of interfaith, interracial, and same-sex couples. In some areas, child marriages and polygamy may occur in spite of national laws against the practice. Since the late twentieth century, major social changes in Western countries have led to changes in the demographics of marriage, with the age of first marriage increasing, fewer people marrying, and more couples choosing to cohabit rather than marry. For example, the number of marriages in Europe decreased by 30% from 1975 to 2005. Historically, in most cultures, married women had very few rights of their own, being considered, along with the family's children, the property of the husband; as such, they could not own or inherit property, or represent themselves legally (see, for example, coverture). In Europe, the United States, and other places in the developed world, beginning in the late 19th century, marriage has undergone gradual legal changes, aimed at improving the rights of the wife. These changes included giving wives legal identities of their own, abolishing the right of husbands to physically discipline their wives, giving wives property rights, liberalizing divorce laws, providing wives with reproductive rights of their own, and requiring a wife's consent when sexual relations occur. These changes have occurred primarily in Western countries.
In the 21st century, there continue to be controversies regarding the legal status of married women, legal acceptance of or leniency towards violence within marriage (especially sexual violence), traditional marriage customs such as dowry and bride price, forced marriage, marriageable age, and criminalization of consensual behaviors such as premarital and extramarital sex. The word "marriage" derives from Middle English "mariage", which first appears in 1250–1300 CE. This in turn is derived from Old French, "marier" (to marry), and ultimately Latin, "marītāre", meaning to provide with a husband or wife and "marītāri" meaning to get married. The adjective "marīt-us -a, -um" meaning matrimonial or nuptial could also be used in the masculine form as a noun for "husband" and in the feminine form for "wife". The related word "matrimony" derives from the Old French word "matremoine", which appears around 1300 CE and ultimately derives from Latin "mātrimōnium", which combines the two concepts: "mater" meaning "mother" and the suffix -"monium" signifying "action, state, or condition". Anthropologists have proposed several competing definitions of marriage in an attempt to encompass the wide variety of marital practices observed across cultures. Even within Western culture, "definitions of marriage have careened from one extreme to another and everywhere in between" (as Evan Gerstmann has put it). In "The History of Human Marriage" (1891), Edvard Westermarck defined marriage as "a more or less durable connection between male and female lasting beyond the mere act of propagation till after the birth of the offspring." In "The Future of Marriage in Western Civilization" (1936), he rejected his earlier definition, instead provisionally defining marriage as "a relation of one or more men to one or more women that is recognized by custom or law". 
The anthropological handbook "Notes and Queries" (1951) defined marriage as "a union between a man and a woman such that children born to the woman are the recognized legitimate offspring of both partners." In recognition of a practice by the Nuer people of Sudan allowing women to act as a husband in certain circumstances (the ghost marriage), Kathleen Gough suggested modifying this to "a woman and one or more other persons." In an analysis of marriage among the Nayar, a polyandrous society in India, Gough found that the group lacked a husband role in the conventional sense; that unitary role in the west was divided between a non-resident "social father" of the woman's children, and her lovers who were the actual procreators. None of these men had legal rights to the woman's child. This forced Gough to disregard sexual access as a key element of marriage and to define it in terms of legitimacy of offspring alone: marriage is "a relationship established between a woman and one or more other persons, which provides that a child born to the woman under circumstances not prohibited by the rules of the relationship, is accorded full birth-status rights common to normal members of his society or social stratum." Economic anthropologist Duran Bell has criticized the legitimacy-based definition on the basis that some societies do not require marriage for legitimacy. He argued that a legitimacy-based definition of marriage is circular in societies where illegitimacy has no other legal or social implications for a child other than the mother being unmarried. Edmund Leach criticized Gough's definition for being too restrictive in terms of recognized legitimate offspring and suggested that marriage be viewed in terms of the different types of rights it serves to establish. In a 1955 article in "Man", Leach argued that no one definition of marriage applied to all cultures.
He offered a list of ten rights associated with marriage, including sexual monopoly and rights with respect to children, with specific rights differing across cultures. In a 1997 article in "Current Anthropology", Duran Bell describes marriage as "a relationship between one or more men (male or female) in severalty to one or more women that provides those men with a demand-right of sexual access within a domestic group and identifies women who bear the obligation of yielding to the demands of those specific men." In referring to "men in severalty", Bell is referring to corporate kin groups such as lineages which, in having paid brideprice, retain a right in a woman's offspring even if her husband (a lineage member) dies (levirate marriage). In referring to "men (male or female)", Bell is referring to women within the lineage who may stand in as the "social fathers" of the wife's children born of other lovers. (See Nuer "ghost marriage".) Monogamy is a form of marriage in which an individual has only one spouse during their lifetime or at any one time (serial monogamy). Anthropologist Jack Goody's comparative study of marriage around the world utilizing the Ethnographic Atlas found a strong correlation between intensive plough agriculture, dowry and monogamy. This pattern was found in a broad swath of Eurasian societies from Japan to Ireland. The majority of Sub-Saharan African societies that practice extensive hoe agriculture, in contrast, show a correlation between "bride price" and polygamy. A further study drawing on the Ethnographic Atlas showed a statistical correlation between increasing size of the society, the belief in "high gods" to support human morality, and monogamy. In the countries which do not permit polygamy, a person who, while still lawfully married to another, marries in one of those countries commits the crime of bigamy.
In all cases, the second marriage is considered legally null and void. Besides the second and subsequent marriages being void, the bigamist is also liable to other penalties, which also vary between jurisdictions. Governments that support monogamy may allow easy divorce. In a number of Western countries divorce rates approach 50%. Those who remarry do so on average three times. Divorce and remarriage can thus result in "serial monogamy", i.e. having multiple marriages but only one legal spouse at a time. This can be interpreted as a form of plural mating, as are those societies dominated by female-headed families in the Caribbean, Mauritius and Brazil where there is frequent rotation of unmarried partners. In all, these account for 16 to 24% of the "monogamous" category. Serial monogamy creates a new kind of relative, the "ex-". The "ex-wife", for example, remains an active part of her "ex-husband's" or "ex-wife's" life, as they may be tied together by transfers of resources (alimony, child support), or shared child custody. Bob Simpson notes that in the British case, serial monogamy creates an "extended family" – a number of households tied together in this way, including mobile children (possible exes may include an ex-wife, an ex-brother-in-law, etc., but not an "ex-child"). These "unclear families" do not fit the mould of the monogamous nuclear family. As a series of connected households, they come to resemble the polygynous model of separate households maintained by mothers with children, tied by a male to whom they are married or divorced. Polygamy is a marriage which includes more than two spouses. When a man is married to more than one wife at a time, the relationship is called polygyny, and there is no marriage bond between the wives; and when a woman is married to more than one husband at a time, it is called polyandry, and there is no marriage bond between the husbands. If a marriage includes multiple husbands or wives, it can be called group marriage. 
A molecular genetic study of global human genetic diversity argued that sexual polygyny was typical of human reproductive patterns until the shift to sedentary farming communities approximately 10,000 to 5,000 years ago in Europe and Asia, and more recently in Africa and the Americas. As noted above, anthropologist Jack Goody's comparative study of marriage around the world utilizing the Ethnographic Atlas found that the majority of Sub-Saharan African societies that practice extensive hoe agriculture show a correlation between "bride price" and polygamy. A survey of other cross-cultural samples has confirmed that the absence of the plough was only a predictor of polygamy, although other factors such as high male mortality in warfare (in non-state societies) and pathogen stress (in state societies) had some impact. Marriages are classified according to the number of legal spouses an individual has. The suffix "-gamy" refers specifically to the number of spouses, as in bi-gamy (two spouses, generally illegal in most nations), and poly-gamy (more than one spouse). Societies show variable acceptance of polygamy as a cultural ideal and practice. According to the Ethnographic Atlas, of 1,231 societies noted, 186 were monogamous; 453 had occasional polygyny; 588 had more frequent polygyny; and 4 had polyandry. However, as Miriam Zeitzen writes, social tolerance for polygamy is different from the practice of polygamy, since it requires wealth to establish multiple households for multiple wives. The actual practice of polygamy in a tolerant society may actually be low, with the majority of aspirant polygamists practicing monogamous marriage. Tracking the occurrence of polygamy is further complicated in jurisdictions where it has been banned, but continues to be practiced ("de facto polygamy").
Zeitzen also notes that Western perceptions of African society and marriage patterns are biased by "contradictory concerns of nostalgia for traditional African culture versus critique of polygamy as oppressive to women or detrimental to development." Polygamy has been condemned as being a form of human rights abuse, with concerns arising over domestic abuse, forced marriage, and neglect. The vast majority of the world's countries, including virtually all of the world's developed nations, do not permit polygamy. There have been calls for the abolition of polygamy in developing countries. Polygyny usually grants wives equal status, although the husband may have personal preferences. One type of de facto polygyny is concubinage, where only one woman gets a wife's rights and status, while other women remain legal house mistresses. Although a society may be classified as polygynous, not all marriages in it necessarily are; monogamous marriages may in fact predominate. It is to this flexibility that Anthropologist Robin Fox attributes its success as a social support system: "This has often meant – given the imbalance in the sex ratios, the higher male infant mortality, the shorter life span of males, the loss of males in wartime, etc. – that often women were left without financial support from husbands. To correct this condition, females had to be killed at birth, remain single, become prostitutes, or be siphoned off into celibate religious orders. Polygynous systems have the advantage that they can promise, as did the Mormons, a home and family for every woman." Nonetheless, polygyny is a gender issue which offers men asymmetrical benefits. In some cases, there is a large age discrepancy (as much as a generation) between a man and his youngest wife, compounding the power differential between the two. 
Tensions not only exist "between" genders, but also "within" genders; senior and junior men compete for wives, and senior and junior wives in the same household may experience radically different life conditions, and internal hierarchy. Several studies have suggested that a wife's relationships with other women, including co-wives and her husband's female kin, are more critical than her relationship with her husband for her productive, reproductive and personal achievement. In some societies, the co-wives are relatives, usually sisters, a practice called "sororal polygyny"; the pre-existing relationship between the co-wives is thought to decrease potential tensions within the marriage. Fox argues that "the major difference between polygyny and monogamy could be stated thus: while plural mating occurs in both systems, under polygyny several unions may be recognized as being legal marriages while under monogamy only one of the unions is so recognized. Often, however, it is difficult to draw a hard and fast line between the two." As polygamy in Africa is increasingly subject to legal limitations, a variant form of "de facto" (as opposed to legal or "de jure") polygyny is being practised in urban centres. Although it does not involve multiple (now illegal) formal marriages, the domestic and personal arrangements follow old polygynous patterns. The de facto form of polygyny is found in other parts of the world as well (including some Mormon sects and Muslim families in the United States). In some societies such as the Lovedu of South Africa, or the Nuer of the Sudan, aristocratic women may become female 'husbands.' In the Lovedu case, this female husband may take a number of polygamous wives. This is not a lesbian relationship, but a means of legitimately expanding a royal lineage by attaching these wives' children to it. The relationships are considered polygynous, not polyandrous, because the female husband is in fact assuming masculine gendered political roles.
Religious groups have differing views on the legitimacy of polygyny. It is allowed in Islam and Confucianism. Judaism and Christianity mention practices involving polygyny in the past; however, these practices were rejected in later passages, and both religions explicitly prohibit polygyny today. Polyandry is notably rarer than polygyny, though less rare than the figure commonly cited in the "Ethnographic Atlas" (1980), which listed only those polyandrous societies found in the Himalayan Mountains. More recent studies have found 53 societies practicing polyandry outside the 28 found in the Himalayas. It is most common in egalitarian societies marked by high male mortality or male absenteeism. It is associated with "partible paternity", the cultural belief that a child can have more than one father. The explanation for polyandry in the Himalayan Mountains is related to the scarcity of land; the marriage of all brothers in a family to the same wife ("fraternal polyandry") allows family land to remain intact and undivided. If every brother married separately and had children, family land would be split into unsustainably small plots. In Europe, this was prevented through the social practice of impartible inheritance (the dis-inheriting of most siblings, some of whom went on to become celibate monks and priests). Group marriage (also known as "multi-lateral marriage") is a form of polyamory in which more than two persons form a family unit, with all the members of the group marriage considered to be married to all the other members, and all members sharing parental responsibility for any children arising from the marriage.
No country legally condones group marriages, neither under the law nor as a common law marriage, but historically the practice has occurred in some cultures of Polynesia, Asia, Papua New Guinea and the Americas – as well as in some intentional communities and alternative subcultures such as the Oneida Perfectionists in upstate New York. Of the 250 societies reported by the American anthropologist George Murdock in 1949, only the Kaingang of Brazil had any group marriages at all. A child marriage is a marriage where one or both spouses are under the age of 18. It is related to child betrothal and teenage pregnancy. Child marriage was common throughout history, persisting into the 1900s in the United States: in 1880, in the state of Delaware, the age of consent for marriage was seven years old. As of 2017, over half of the 50 United States had no explicit minimum age to marry, and several states set the age as low as 14. Today child marriage is condemned by international human rights organizations. Child marriages are often arranged between the families of the future bride and groom, sometimes as soon as the girl is born. However, in the late 1800s in England and the United States, feminist activists began calling for raised age of consent laws, a campaign that succeeded in the 1920s, when the age was raised to 16–18. Child marriages can also occur in the context of bride kidnapping. In 1552, John Somerford and Jane Somerford Brereton were married at the ages of three and two, respectively. Twelve years later, in 1564, John filed for divorce. While child marriage is observed for both boys and girls, the overwhelming majority of child spouses are girls. In many cases, only one marriage-partner is a child, usually the female, due to the importance placed upon female virginity.
Causes of child marriage include poverty, bride price, dowry, laws that allow child marriages, religious and social pressures, regional customs, fear of remaining unmarried, and perceived inability of women to work for money. Today, child marriages are widespread in parts of the world, being most common in South Asia and sub-Saharan Africa, with more than half of the girls in some countries in those regions being married before 18. The incidence of child marriage has been falling in most parts of the world. In developed countries child marriage is outlawed or restricted. Girls who marry before 18 are at greater risk of becoming victims of domestic violence than those who marry later, especially when they are married to a much older man. Several kinds of same-sex marriages have been documented in Indigenous and lineage-based cultures. In the Americas, We'wha (Zuni) was a "lhamana" (a male individual who, at least some of the time, dressed and lived in the roles usually filled by women in that culture); a respected artist, We'wha served as an emissary of the Zuni to Washington, where he met President Grover Cleveland. We'wha had at least one husband who was generally recognized as such. While it is a relatively new practice to grant same-sex couples the same form of legal marital recognition as commonly granted to mixed-sex couples, there is some history of recorded same-sex unions around the world. Ancient Greek same-sex relationships were like modern companionate marriages, unlike their different-sex marriages in which the spouses had few emotional ties, and the husband had freedom to engage in outside sexual liaisons. The Codex Theodosianus ("C. Th." 9.7.3), issued in 438 CE, imposed severe penalties or death on same-sex relationships, but the exact intent of the law and its relation to social practice is unclear, as only a few examples of same-sex relationships in that culture exist. Same-sex unions were celebrated in some regions of China, such as Fujian.
Possibly the earliest documented same-sex wedding in Latin Christendom occurred in Rome, Italy, at the San Giovanni a Porta Latina basilica in 1581. Several cultures have practiced temporary and conditional marriages. Examples include the Celtic practice of handfasting and fixed-term marriages in the Muslim community. Pre-Islamic Arabs practiced a form of temporary marriage that carries on today in the practice of Nikah mut‘ah, a fixed-term marriage contract. The Islamic prophet Muhammad sanctioned a temporary marriage – sigheh in Iran and muta'a in Iraq – which can provide a legitimizing cover for sex workers. The same forms of temporary marriage have been used in Egypt, Lebanon and Iran to make the donation of human ova legal for in vitro fertilisation; a woman cannot, however, use this kind of marriage to obtain a sperm donation. Muslim controversies related to Nikah Mut'ah have resulted in the practice being confined mostly to Shi'ite communities. The matrilineal Mosuo of China practice what they call "walking marriage". In some jurisdictions cohabitation, in certain circumstances, may constitute a common-law marriage, an unregistered partnership, or otherwise provide the unmarried partners with various rights and responsibilities; and in some countries the laws recognize cohabitation in lieu of institutional marriage for taxation and social security benefits. This is the case, for example, in Australia. Cohabitation may be an option pursued as a form of resistance to traditional institutionalized marriage. However, in this context, some nations reserve the right to define the relationship as marital, or otherwise to regulate the relation, even if the relation has not been registered with the state or a religious institution. Conversely, institutionalized marriages may not involve cohabitation. In some cases couples living together do not wish to be recognized as married.
This may occur because pension or alimony rights are adversely affected; because of taxation considerations; because of immigration issues; or for other reasons. A "walking marriage" refers to a type of temporary marriage formed by the Mosuo of China, in which male partners live elsewhere and make nightly visits. Such marriages have also become increasingly common in Beijing. Guo Jianmei, director of the center for women's studies at Beijing University, told a Newsday correspondent, "Walking marriages reflect sweeping changes in Chinese society." A similar arrangement in Saudi Arabia, called misyar marriage, also involves the husband and wife living separately but meeting regularly. There is wide cross-cultural variation in the social rules governing the selection of a partner for marriage. There is variation in the degree to which partner selection is an individual decision by the partners or a collective decision by the partners' kin groups, and there is variation in the rules regulating which partners are valid choices. The United Nations World Fertility Report of 2003 reports that 89% of all people get married before age forty-nine, though the percentage of women and men who marry before that age drops to nearly 50% in some nations and reaches nearly 100% in others. In cultures with less strict rules governing the groups from which a partner can be chosen, the selection of a marriage partner may involve the couple going through a selection process of courtship, or the marriage may be arranged by the couple's parents or an outside party, a matchmaker. Some people want to marry a person with higher or lower status than themselves; others want to marry people of similar status. In many societies women marry men who are of higher social status; in other marriages each party has sought a partner of similar status; and in still other marriages the man is older than the woman.
Societies have often placed restrictions on marriage to relatives, though the degree of prohibited relationship varies widely. Marriages between parents and children, or between full siblings, have, with few exceptions, been considered incest and forbidden. However, marriages between more distant relatives have been much more common, with one estimate being that 80% of all marriages in history have been between second cousins or closer. This proportion has fallen dramatically, but still more than 10% of all marriages are believed to be between people who are second cousins or more closely related. In the United States, such marriages are now highly stigmatized, and laws ban most or all first-cousin marriage in 30 states. Specifics vary: in South Korea, historically it was illegal to marry someone with the same last name and same ancestral line. An avunculate marriage is a marriage that occurs between an uncle and his niece or between an aunt and her nephew. Such marriages are illegal in most countries due to incest restrictions. However, a small number of countries have legalized them, including Argentina, Australia, Austria, Malaysia, and Russia. In various societies the choice of partner is often limited to suitable persons from specific social groups. In some societies the rule is that a partner is selected from an individual's own social group – endogamy; this is often the case in class- and caste-based societies. But in other societies a partner must be chosen from a different group than one's own – exogamy; this may be the case in societies practicing totemic religion, where society is divided into several exogamous totemic clans, such as most Aboriginal Australian societies.
In other societies a person is expected to marry their cross-cousin: a woman must marry her father's sister's son and a man must marry his mother's brother's daughter – this is often the case if a society has a rule of tracing kinship exclusively through either patrilineal or matrilineal descent groups, as among the Akan people of West Africa. Another kind of marriage selection is the levirate marriage, in which widows are obligated to marry their husband's brother, mostly found in societies where kinship is based on endogamous clan groups. Religion has commonly weighed in on the matter of which relatives, if any, are allowed to marry. Relations may be by consanguinity or affinity, meaning by blood or by marriage. On the marriage of cousins, Catholic policy has evolved from initial acceptance, through a long period of general prohibition, to the contemporary requirement for a dispensation. Islam has always allowed it, while Hindu texts vary widely. In a wide array of lineage-based societies with a classificatory kinship system, potential spouses are sought from a specific class of relative as determined by a prescriptive marriage rule. This rule may be expressed by anthropologists using a "descriptive" kinship term, such as a "man's mother's brother's daughter" (also known as a "cross-cousin"). Such descriptive rules mask the participant's perspective: a man should marry a woman from his mother's lineage. Within the society's kinship terminology, such relatives are usually indicated by a specific term which sets them apart as potentially marriageable. Pierre Bourdieu notes, however, that very few marriages ever follow the rule, and that when they do so, it is for "practical kinship" reasons such as the preservation of family property, rather than the "official kinship" ideology.
Insofar as regular marriages following prescriptive rules occur, lineages are linked together in fixed relationships; these ties between lineages may form political alliances in kinship-dominated societies. French structural anthropologist Claude Lévi-Strauss developed alliance theory to account for the "elementary" kinship structures created by the limited number of prescriptive marriage rules possible. A pragmatic (or 'arranged') marriage is facilitated by formal procedures of family or group politics. A responsible authority sets up or encourages the marriage; they may, indeed, engage a professional matchmaker to find a suitable spouse for an unmarried person. The authority figure could be parents, family, a religious official, or a group consensus. In some cases, the authority figure may choose a match for purposes other than marital harmony. A forced marriage is a marriage in which one or both of the parties is married against their will. Forced marriages continue to be practiced in parts of the world, especially in South Asia and Africa. The line between forced marriage and consensual marriage may become blurred, because the social norms of these cultures dictate that one should never oppose the desire of one's parents or relatives in regard to the choice of a spouse; in such cultures it is not necessary for violence, threats, or intimidation to occur – the person simply "consents" to the marriage even if they do not want it, out of implied social pressure and duty. The customs of bride price and dowry that exist in parts of the world can lead to buying and selling people into marriage. In some societies, ranging from Central Asia to the Caucasus to Africa, the custom of bride kidnapping still exists, in which a woman is captured by a man and his friends. Sometimes this covers an elopement, but sometimes it depends on sexual violence.
In previous times, "raptio" was a larger-scale version of this, with groups of women captured by groups of men, sometimes in war; the most famous example is the Rape of the Sabine Women, which provided the first citizens of Rome with their wives. Other marriage partners are more or less imposed on an individual. For example, widow inheritance provides a widow with another man from her late husband's brothers. In rural areas of India, child marriage is practiced, with parents often arranging the wedding, sometimes even before the child is born. This practice was made illegal under the Child Marriage Restraint Act of 1929. The financial aspects of marriage vary between cultures and have changed over time. In some cultures, dowries and bridewealth continue to be required today. In both cases, the financial arrangements are usually made between the groom (or his family) and the bride's family; the bride is often not involved in the negotiations and often has no choice in whether to participate in the marriage. In early modern Britain, the social status of the couple was supposed to be equal. After the marriage, all the property (called "fortune") and expected inheritances of the wife belonged to the husband. A dowry is "a process whereby parental property is distributed to a daughter at her marriage (i.e. "inter vivos") rather than at the holder's death ("mortis causa")… A dowry establishes some variety of conjugal fund, the nature of which may vary widely. This fund ensures her support (or endowment) in widowhood and eventually goes to provide for her sons and daughters." In some cultures, especially in countries such as Turkey, India, Bangladesh, Pakistan, Sri Lanka, Morocco and Nepal, dowries continue to be expected. In India, thousands of dowry-related deaths take place yearly; to counter this problem, several jurisdictions have enacted laws restricting or banning dowry (see Dowry law in India). In Nepal, dowry was made illegal in 2009.
Some authors believe that the giving and receiving of dowry reflect status and even an effort to climb the social hierarchy. Direct dowry contrasts with bridewealth, which is paid by the groom or his family to the bride's parents, and with indirect dowry (or dower), which is property given to the bride herself by the groom at the time of marriage and which remains under her ownership and control. In the Jewish tradition, the rabbis in ancient times insisted on the marriage couple entering into a prenuptial agreement, called a "ketubah". Besides other things, the "ketubah" provided for an amount to be paid by the husband in the event of a divorce, or by his estate in the event of his death. This amount was a replacement of the biblical dower or bride price, which was payable at the time of the marriage by the groom to the father of the bride. This innovation was put in place because the biblical bride price created a major social problem: many young prospective husbands could not raise the bride price at the time when they would normally be expected to marry. So, to enable these young men to marry, the rabbis, in effect, delayed the time that the amount would be payable, when they would be more likely to have the sum. Both the dower and the "ketubah" amounts served the same purpose: the protection of the wife should her support cease, whether by death or divorce. The only difference between the two systems was the timing of the payment. The "ketubah" is the predecessor to the wife's present-day entitlement to maintenance in the event of the breakup of marriage, and family maintenance in the event of the husband not providing adequately for the wife in his will. Another function performed by the "ketubah" amount was to provide a disincentive for a husband contemplating divorcing his wife: he would need to have the amount to be able to pay to the wife.
Morning gifts, which might also be arranged by the bride's father rather than the bride, are given to the bride herself; the name derives from the Germanic tribal custom of giving the gift the morning after the wedding night. The wife might have control of this morning gift during the lifetime of her husband, but is entitled to it when widowed. If the amount of her inheritance is settled by law rather than agreement, it may be called dower. Depending on legal systems and the exact arrangement, she may not be entitled to dispose of it after her death, and may lose the property if she remarries. Morning gifts were preserved for centuries in morganatic marriage, a union where the wife's inferior social status was held to prohibit her children from inheriting a noble's titles or estates. In this case, the morning gift would support the wife and children. Another legal provision for widowhood was jointure, in which property, often land, would be held in joint tenancy, so that it would automatically go to the widow on her husband's death. Islamic tradition has similar practices. A "mahr", either immediate or deferred, is the woman's portion of the groom's wealth (divorce) or estate (death). These amounts are usually set on the basis of the groom's own and family wealth and incomes, but in some regions they are set very high so as to provide a disincentive against the groom exercising the divorce, or against the husband's family "inheriting" a large portion of the estate, especially if there are no male offspring from the marriage. In some countries, including Iran, the mahr or alimony can amount to more than a man can ever hope to earn, sometimes up to US$1,000,000 (4,000 official Iranian gold coins). If the husband cannot pay the mahr, either in case of a divorce or on demand, according to the current laws in Iran, he will have to pay it in installments. Failure to pay the mahr might even lead to imprisonment.
Bridewealth is a common practice in parts of Southeast Asia (Thailand, Cambodia), parts of Central Asia, and in much of sub-Saharan Africa. It is also known as brideprice, although this term has fallen into disfavor as it implies the purchase of the bride. Bridewealth is the amount of money or property or wealth paid by the groom or his family to the parents of a woman upon the marriage of their daughter to the groom. In anthropological literature, bride price has often been explained as payment made to compensate the bride's family for the loss of her labor and fertility. In some cases, bridewealth is a means by which the groom's family's ties to the children of the union are recognized. In some countries a married person or couple benefits from various taxation advantages not available to a single person. For example, spouses may be allowed to average their combined incomes. This is advantageous to a married couple with disparate incomes. To compensate for this, countries may provide a higher tax bracket for the averaged income of a married couple. While income averaging might still benefit a married couple with a stay-at-home spouse, such averaging would cause a married couple with roughly equal personal incomes to pay more total tax than they would as two single persons. In the United States, this is called the marriage penalty. When the rates applied by the tax code are based not on income averaging but on the "sum" of the individuals' incomes, higher rates will usually apply to each individual in a two-earner household in a progressive tax system. This is most often the case with high-income taxpayers and is another situation called a marriage penalty. Conversely, when progressive tax is levied on the individual with no consideration for the partnership, dual-income couples fare much better than single-income couples with similar household incomes.
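The interaction of progressive brackets with averaged versus summed incomes can be sketched numerically. The bracket thresholds and rates below are hypothetical, illustrative figures, not any real country's tax schedule:

```python
# Hypothetical progressive schedule (illustrative numbers only):
# 10% up to 10,000; 20% from 10,000 to 40,000; 40% above 40,000.
BRACKETS = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.40)]

def tax(income):
    """Tax owed on `income` under the progressive schedule above."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# Roughly equal earners: taxing the *sum* pushes income into higher brackets,
# producing a marriage penalty relative to filing as two singles.
equal_singles = tax(50_000) + tax(50_000)      # filed separately
equal_summed = tax(100_000)                    # combined income, same schedule

# Disparate earners: *averaging* the combined income lowers the total bill,
# which is why averaging benefits couples with one low or stay-at-home earner.
disparate_singles = tax(90_000) + tax(10_000)  # filed separately
disparate_avg = 2 * tax((90_000 + 10_000) / 2) # averaged, taxed twice

print(equal_singles, equal_summed)       # summing costs the equal couple more
print(disparate_singles, disparate_avg)  # averaging saves the disparate couple money
```

Under this schedule, the equal couple pays 22,000 as singles but 31,000 when taxed on the sum, while the disparate couple's bill falls from 28,000 to 22,000 under averaging. A country that "compensates" by applying a higher bracket schedule to averaged income, as the text describes, can in turn make equal-income couples pay more than they would as two singles.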
The effect can be increased when the welfare system treats the same income as a shared income, thereby denying welfare access to the non-earning spouse. Such systems apply in Australia and Canada, for example. In many Western cultures, marriage usually leads to the formation of a new household comprising the married couple, with the married couple living together in the same home, often sharing the same bed, but in some other cultures this is not the tradition. Among the Minangkabau of West Sumatra, residency after marriage is matrilocal, with the husband moving into the household of his wife's mother. Residency after marriage can also be patrilocal or avunculocal. In these cases, married couples may not form an independent household, but remain part of an extended family household. Early theories explaining the determinants of postmarital residence connected it with the sexual division of labor. However, to date, cross-cultural tests of this hypothesis using worldwide samples have failed to find any significant relationship between these two variables. Korotayev's tests show that the female contribution to subsistence does correlate significantly with matrilocal residence in general, but this correlation is masked by a general polygyny factor. In different-sex marriages, an increase in the female contribution to subsistence tends to lead to matrilocal residence, but it also tends simultaneously to lead to general non-sororal polygyny, which effectively destroys matrilocality. If this polygyny factor is controlled (e.g., through a multiple regression model), division of labor turns out to be a significant predictor of postmarital residence. Thus, Murdock's hypotheses regarding the relationships between the sexual division of labor and postmarital residence were basically correct, though the actual relationships between those two groups of variables are more complicated than he expected.
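The masking pattern described above can be illustrated with synthetic data (not the actual cross-cultural sample): a predictor whose zero-order correlation with an outcome is near zero can emerge as a strong effect once the suppressing factor is entered into a multiple regression. The variable names below are illustrative stand-ins for the constructs in Korotayev's argument:

```python
import numpy as np

# Synthetic data: female_subsistence raises matrilocality directly, but also
# raises polygyny, which destroys matrilocality -- so the two paths cancel.
rng = np.random.default_rng(0)
n = 500
female_subsistence = rng.normal(size=n)
polygyny = female_subsistence + rng.normal(size=n)       # confound driven by the predictor
matrilocality = female_subsistence - polygyny + 0.1 * rng.normal(size=n)

# Zero-order correlation: the opposing paths hide the relationship.
r = np.corrcoef(female_subsistence, matrilocality)[0, 1]

# Multiple regression controlling for polygyny recovers both effects.
X = np.column_stack([np.ones(n), female_subsistence, polygyny])
beta, *_ = np.linalg.lstsq(X, matrilocality, rcond=None)

print(round(r, 2))        # near zero: masked correlation
print(round(beta[1], 2))  # positive: subsistence -> matrilocality, polygyny held constant
print(round(beta[2], 2))  # negative: polygyny -> matrilocality
```

This is the statistical logic of a suppressor variable: controlling the polygyny factor exposes the positive effect of the female contribution to subsistence that the raw correlation conceals.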
There has been a trend toward neolocal residence in Western societies. Marriage laws refer to the legal requirements which determine the validity of a marriage, which vary considerably between countries. Article 16 of the Universal Declaration of Human Rights declares that "Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family. They are entitled to equal rights as to marriage, during marriage and at its dissolution. Marriage shall be entered into only with the free and full consent of the intending spouses." A marriage bestows rights and obligations on the married parties, and sometimes on relatives as well, being the sole mechanism for the creation of affinal ties (in-laws). These rights and obligations vary considerably between societies, and between groups within society; depending on the jurisdiction, they might include arranged marriages, family obligations, the legal establishment of a nuclear family unit, the legal protection of children, and public declaration of commitment. In many countries today, each marriage partner has the choice of keeping his or her property separate or combining properties. In the latter case, called community property, when the marriage ends by divorce each owns half. In lieu of a will or trust, property owned by the deceased generally is inherited by the surviving spouse. In some legal systems, the partners in a marriage are "jointly liable" for the debts of the marriage. This has a basis in a traditional legal notion called the "Doctrine of Necessities", whereby, in a heterosexual marriage, a husband was responsible for providing necessary things for his wife. Where this is the case, one partner may be sued to collect a debt for which they did not expressly contract. Critics of this practice note that debt collection agencies can abuse this by claiming an unreasonably wide range of debts to be expenses of the marriage.
The cost of defense and the burden of proof are then placed on the non-contracting party to prove that the expense is not a debt of the family. The respective maintenance obligations, both during and eventually after a marriage, are regulated in most jurisdictions; alimony is one such method. Marriage is an institution that is historically filled with restrictions. From age, to race, to social status, to consanguinity, to gender, restrictions are placed on marriage by society for reasons of benefiting the children, passing on healthy genes, maintaining cultural values, or because of prejudice and fear. Almost all cultures that recognize marriage also recognize adultery as a violation of the terms of marriage. Most jurisdictions set a minimum age for marriage; that is, a person must attain a certain age to be legally allowed to marry. This age may depend on circumstances: for instance, exceptions from the general rule may be permitted if the parents of a young person express their consent and/or if a court decides that said marriage is in the best interest of the young person (often this applies in cases where a girl is pregnant). Although most age restrictions are in place in order to prevent children from being forced into marriages, especially to much older partners – marriages which can have negative education and health related consequences, and lead to child sexual abuse and other forms of violence – such child marriages remain common in parts of the world. According to the UN, child marriages are most common in rural sub-Saharan Africa and South Asia. The ten countries with the highest rates of child marriage are: Niger (75%), Chad, Central African Republic, Bangladesh, Guinea, Mozambique, Mali, Burkina Faso, South Sudan, and Malawi. To prevent incest, and for eugenic reasons, marriage laws have set restrictions on marriage between relatives. Direct blood relatives are usually prohibited from marrying, while the rules for collateral relatives vary.
Laws banning "race-mixing" were enforced in certain North American jurisdictions from 1691 until 1967, in Nazi Germany (the Nuremberg Laws) from 1935 until 1945, and in South Africa during most of the apartheid era (1949–1985). All these laws primarily banned marriage between persons of different racially or ethnically defined groups, which was termed "amalgamation" or "miscegenation" in the U.S. The laws in Nazi Germany and many of the U.S. states, as well as South Africa, also banned sexual relations between such individuals. In the United States, laws in some but not all of the states prohibited the marriage of whites and blacks, and in many states also the intermarriage of whites with Native Americans or Asians. In the U.S., such laws were known as anti-miscegenation laws. From 1913 until 1948, 30 out of the then 48 states enforced such laws. Although an "Anti-Miscegenation Amendment" to the United States Constitution was proposed in 1871, in 1912–1913, and in 1928, no nationwide law against racially mixed marriages was ever enacted. In 1967, the Supreme Court of the United States unanimously ruled in "Loving v. Virginia" that anti-miscegenation laws are unconstitutional. With this ruling, these laws were no longer in effect in the remaining 16 states that still had them. The Nazi ban on interracial marriage and interracial sex was enacted in September 1935 as part of the Nuremberg Laws, the "Gesetz zum Schutze des deutschen Blutes und der deutschen Ehre" (The Law for the Protection of German Blood and German Honour). The Nuremberg Laws classified Jews as a race and forbade marriage and extramarital sexual relations between people of "German or related blood" and, at first, people of Jewish descent; the ban was later extended to "Gypsies, Negroes or their bastard offspring". Such relations were marked as "Rassenschande" (lit. "race-disgrace") and could be punished by imprisonment (usually followed by deportation to a concentration camp) and even by death.
South Africa under apartheid also banned interracial marriage. The Prohibition of Mixed Marriages Act, 1949 prohibited marriage between persons of different races, and the Immorality Act of 1950 made sexual relations with a person of a different race a crime. Same-sex marriage is legally performed and recognized (nationwide or in some jurisdictions) in Argentina, Australia, Austria, Belgium, Brazil, Canada, Colombia, Costa Rica, Denmark, Ecuador, Finland, France, Germany, Iceland, Ireland, Luxembourg, Malta, Mexico, the Netherlands, New Zealand, Norway, Portugal, South Africa, Spain, Sweden, Taiwan, the United Kingdom, the United States, and Uruguay. Israel recognizes same-sex marriages entered into abroad as full marriages. Furthermore, the Inter-American Court of Human Rights has issued a ruling that is expected to facilitate recognition in several countries in the Americas. The introduction of same-sex marriage has varied by jurisdiction, being variously accomplished through legislative change to marriage law, a court ruling based on constitutional guarantees of equality, or by direct popular vote (via ballot initiative or referendum). The recognition of same-sex marriage is considered to be a human right and a civil right as well as a political, social, and religious issue. The most prominent supporters of same-sex marriage are human rights and civil rights organizations as well as the medical and scientific communities, while the most prominent opponents are religious groups. Various faith communities around the world support same-sex marriage, while many religious groups oppose it. Polls consistently show continually rising support for the recognition of same-sex marriage in all developed democracies and in some developing democracies. The establishment of recognition in law for the marriages of same-sex couples is one of the most prominent objectives of the LGBT rights movement. Polygyny is widely practiced mostly in Muslim-majority and African countries.
In the Middle East, Israel, Turkey and Tunisia are notable exceptions. In most other jurisdictions, polygamy is illegal; in the United States, for example, polygamy is illegal in all 50 states. In the late 19th century, citizens of the self-governing territory of what is present-day Utah were forced by the United States federal government to abandon the practice of polygamy through the vigorous enforcement of several Acts of Congress, and eventually complied. The Church of Jesus Christ of Latter-day Saints formally abolished the practice in 1890, in a document labeled 'The Manifesto' (see Latter Day Saint polygamy in the late-19th century). Among American Muslims, a small minority of around 50,000 to 100,000 people are estimated to live in families with a husband maintaining an illegal polygamous relationship. Several countries, such as India and Sri Lanka, permit only their Muslim citizens to practice polygamy. Some Indians have converted to Islam in order to bypass such legal restrictions. Predominantly Christian nations usually do not allow polygamous unions, with a handful of exceptions including the Republic of the Congo, Uganda, and Zambia. Myanmar (frequently referred to as Burma) is also the only predominantly Buddhist nation to allow civil polygynous marriages, though the practice is rarely tolerated by the Burmese population. In various jurisdictions, a civil marriage may take place as part of the religious marriage ceremony, although they are theoretically distinct. Some jurisdictions allow civil marriages in circumstances which are notably not allowed by particular religions, such as same-sex marriages or civil unions. The opposite case may happen as well: partners may lack full juridical capacity, and churches may impose less strict limits than the civil jurisdictions. This particularly applies to minimum age or physical infirmities. 
It is possible for two people to be recognised as married by a religious or other institution, but not by the state, and hence without the legal rights and obligations of marriage; or to have a civil marriage deemed invalid and sinful by a religion. Similarly, a couple may remain married in religious eyes after a civil divorce. A marriage is usually formalized at a wedding or marriage ceremony. The ceremony may be officiated either by a religious official, by a government official or by a state-approved celebrant. In various European and some Latin American countries, any religious ceremony must be held separately from the required civil ceremony. Some countries – such as Belgium, Bulgaria, France, the Netherlands, Romania and Turkey – require that a civil ceremony take place before any religious one. In some countries – notably the United States, Canada, the United Kingdom, the Republic of Ireland, Norway and Spain – both ceremonies can be held together, with the officiant at the religious and civil ceremony also serving as agent of the state to perform the civil ceremony. To avoid any implication that the state is "recognizing" a religious marriage (which is prohibited in some countries), the "civil" ceremony is said to be taking place at the same time as the religious ceremony. Often this involves simply signing a register during the religious ceremony. If the civil element of the religious ceremony is omitted, the marriage ceremony is not recognized as a marriage by the government under the law. Some countries, such as Australia, permit marriages to be held in private and at any location; others, including England and Wales, require that the civil ceremony be conducted in a place open to the public and specially sanctioned by law for the purpose. In England, the place of marriage formerly had to be a church or register office, but this was extended to any public venue with the necessary licence. 
An exception can be made in the case of marriage by special emergency license (UK: licence), which is normally granted only when one of the parties is terminally ill. Rules about where and when persons can marry vary from place to place. Some regulations require one of the parties to reside within the jurisdiction of the register office (formerly parish). Each religious authority has rules for the manner in which marriages are to be conducted by their officials and members. Where religious marriages are recognised by the state, the officiant must also comply with the law of the jurisdiction. In a small number of jurisdictions, marriage relationships may be created by the operation of the law alone. Unlike the typical ceremonial marriage with legal contract, wedding ceremony, and other details, a common-law marriage may be called "marriage by habit and repute (cohabitation)". A de facto common-law marriage without a license or ceremony is legally binding in some jurisdictions but has no legal consequence in others. A "civil union", also referred to as a "civil partnership", is a legally recognized form of partnership similar to marriage. Beginning with Denmark in 1989, civil unions under one name or another have been established by law in several countries in order to provide same-sex couples rights, benefits, and responsibilities similar (in some countries, identical) to opposite-sex civil marriage. In some jurisdictions, such as Brazil, New Zealand, Uruguay, Ecuador, France and the U.S. states of Hawaii and Illinois, civil unions are also open to opposite-sex couples. Sometimes people marry to take advantage of a certain situation, in what is sometimes called a marriage of convenience or a sham marriage. In 2003, over 180,000 immigrants were admitted to the U.S. as spouses of U.S. citizens; more were admitted as fiancés of U.S. citizens for the purpose of being married within 90 days. 
These marriages had a diverse range of motives, including obtaining permanent residency, securing an inheritance that has a marriage clause, or enrolling in health insurance, among many others. While all marriages have a complex combination of conveniences motivating the parties to marry, a marriage of convenience is one that is devoid of normal reasons to marry. In certain countries, such as Singapore, sham marriages are punishable criminal offences. People have proposed arguments against marriage for reasons that include political, philosophical and religious criticisms; concerns about the divorce rate; individual liberty and gender equality; questioning the necessity of having a personal relationship sanctioned by government or religious authorities; or the promotion of celibacy for religious or philosophical reasons. Feminist theory approaches opposite-sex marriage as an institution traditionally rooted in patriarchy that promotes male superiority and power over women. This power dynamic conceptualizes men as "the provider operating in the public sphere" and women as "the caregivers operating within the private sphere". "Theoretically, women ... [were] defined as the property of their husbands ... The adultery of a woman was always treated with more severity than that of a man." "[F]eminist demands for a wife's control over her own property were not met [in parts of Britain] until ... [laws were passed in the late 19th century]." Traditional heterosexual marriage imposed an obligation on the wife to be sexually available for her husband and an obligation on the husband to provide material/financial support for the wife. 
Numerous philosophers, feminists and other academic figures have commented on this throughout history, condemning the hypocrisy of legal and religious authorities in regard to sexual issues; pointing to the lack of choice of a woman in regard to controlling her own sexuality; and drawing parallels between marriage, an institution promoted as sacred, and prostitution, widely condemned and vilified (though often tolerated as a "necessary evil"). Mary Wollstonecraft, in the 18th century, described marriage as "legal prostitution". Emma Goldman wrote in 1910: "To the moralist prostitution does not consist so much in the fact that the woman sells her body, but rather that she sells it out of wedlock". Bertrand Russell, in his book "Marriage and Morals", wrote: "Marriage is for woman the commonest mode of livelihood, and the total amount of undesired sex endured by women is probably greater in marriage than in prostitution." Angela Carter, in "Nights at the Circus", wrote: "What is marriage but prostitution to one man instead of many?" Some critics object to what they see as propaganda in relation to marriage – from the government, religious organizations, and the media – which aggressively promotes marriage as a solution for all social problems; such propaganda includes, for instance, marriage promotion in schools, where children, especially girls, are bombarded with positive information about marriage, being presented only with the information prepared by authorities. The performance of dominant gender roles by men and submissive gender roles by women influences the power dynamic of a heterosexual marriage. In some American households, women internalize gender role stereotypes and often assimilate into the roles of "wife", "mother", and "caretaker" in conformity with societal norms and their male partner. 
Author bell hooks states "within the family structure, individuals learn to accept sexist oppression as 'natural' and are primed to support other forms of oppression, including heterosexist domination." "[T]he cultural, economic, political and legal supremacy of the husband" was "[t]raditional ... under English law". This patriarchal dynamic is contrasted with a conception of egalitarian or Peer Marriage, in which power and labour are divided equally, and not according to gender roles. In the US, studies have shown that, despite egalitarian ideals being common, less than half of respondents viewed their opposite-sex relationships as equal in power, with unequal relationships being more commonly dominated by the male partner. Studies also show that married couples find the highest level of satisfaction in egalitarian relationships and the lowest levels of satisfaction in wife-dominated relationships. In recent years, egalitarian or Peer Marriages have been receiving increasing focus and attention politically, economically and culturally in a number of countries, including the United States. Different societies demonstrate variable tolerance of extramarital sex. The Standard Cross-Cultural Sample describes the occurrence of extramarital sex by gender in over 50 pre-industrial cultures. The occurrence of extramarital sex by men is described as "universal" in 6 cultures, "moderate" in 29 cultures, "occasional" in 6 cultures, and "uncommon" in 10 cultures. The occurrence of extramarital sex by women is described as "universal" in 6 cultures, "moderate" in 23 cultures, "occasional" in 9 cultures, and "uncommon" in 15 cultures. Three studies using nationally representative samples in the United States found that between 10–15% of women and 20–25% of men engage in extramarital sex. Many of the world's major religions look with disfavor on sexual relations outside marriage. There are non-secular states that sanction criminal penalties for sexual intercourse before marriage. 
Sexual relations by a married person with someone other than their spouse are known as adultery. Adultery is considered in many jurisdictions to be a crime and grounds for divorce. In some countries, such as Saudi Arabia, Pakistan, Afghanistan, Iran, Kuwait, the Maldives, Morocco, Oman, Mauritania, the United Arab Emirates, Sudan and Yemen, any form of sexual activity outside marriage is illegal. In some parts of the world, women and girls accused of having sexual relations outside marriage are at risk of becoming victims of honor killings committed by their families. In 2011 several people were sentenced to death by stoning after being accused of adultery in Iran, Somalia, Afghanistan, Sudan, Mali and Pakistan. Practices such as honor killings and stoning continue to be supported by mainstream politicians and other officials in some countries. In Pakistan, after the 2008 Balochistan honour killings in which five women were killed by tribesmen of the Umrani Tribe of Balochistan, Pakistani Federal Minister for Postal Services Israr Ullah Zehri defended the practice; he said: "These are centuries-old traditions, and I will continue to defend them. Only those who indulge in immoral acts should be afraid." An issue that is a serious concern regarding marriage and which has been the object of international scrutiny is that of sexual violence within marriage. Throughout much of history, in most cultures, sex in marriage was considered a 'right' that could be taken by force (often by a man from a woman) if 'denied'. As the concept of human rights started to develop in the 20th century, and with the arrival of second-wave feminism, such views have become less widely held. The legal and social concept of marital rape has developed in most industrialized countries in the mid- to late 20th century; in many other parts of the world it is not recognized as a form of abuse, socially or legally. 
Several countries in Eastern Europe and Scandinavia made marital rape illegal before 1970, and other countries in Western Europe and the English-speaking Western world outlawed it in the 1980s and 1990s. In England and Wales, marital rape was made illegal in 1991. Although marital rape is being increasingly criminalized in developing countries too, cultural, religious, and traditional ideologies about "conjugal rights" remain very strong in many parts of the world; and even in many countries that have adequate laws against rape in marriage, these laws are rarely enforced. Apart from the issue of rape committed against one's spouse, marriage is, in many parts of the world, closely connected with other forms of sexual violence: in some places, like Morocco, unmarried girls and women who are raped are often forced by their families to marry their rapist. Because being the victim of rape and losing virginity carry extreme social stigma, and the victims are deemed to have their "reputation" tarnished, a marriage with the rapist is arranged. This is claimed to be to the advantage of both the victim – who does not remain unmarried and does not lose social status – and the rapist, who avoids punishment. In 2012, after a 16-year-old Moroccan girl committed suicide after having been forced by her family to marry her rapist and enduring further abuse from him after they married, activists protested against this practice, which is common in Morocco. In some societies, the very high social and religious importance of marital fidelity, especially female fidelity, has resulted in the criminalization of adultery, often with harsh penalties such as stoning or flogging, as well as leniency towards punishment of violence related to infidelity (such as honor killings). In the 21st century, criminal laws against adultery have become controversial, with international organizations calling for their abolition. 
Opponents of adultery laws argue that these laws are a major contributor to discrimination and violence against women, as they are enforced selectively mostly against women; that they prevent women from reporting sexual violence; and that they maintain social norms which justify violent crimes committed against women by husbands, families and communities. A Joint Statement by the United Nations Working Group on discrimination against women in law and in practice states that "Adultery as a criminal offence violates women's human rights". Some human rights organizations argue that the criminalization of adultery also violates internationally recognized protections for private life, as it represents an arbitrary interference with an individual's privacy, which is not permitted under international law. The laws surrounding heterosexual marriage in many countries have come under international scrutiny because they contradict international standards of human rights; institutionalize violence against women, child marriage and forced marriage; require the permission of a husband for his wife to work in a paid job, sign legal documents, file criminal charges against someone, sue in civil court etc.; sanction the use by husbands of violence to "discipline" their wives; and discriminate against women in divorce. Such things were legal even in many Western countries until recently: for instance, in France, married women obtained the right to work without their husband's permission in 1965, and in West Germany women obtained this right in 1977 (by comparison women in East Germany had many more rights). In Spain, during Franco's era, a married woman needed her husband's consent, referred to as the "permiso marital", for almost all economic activities, including employment, ownership of property, and even traveling away from home; the "permiso marital" was abolished in 1975. 
Absolute submission of a wife to her husband is accepted as natural in many parts of the world; for instance, surveys by UNICEF have shown that the percentage of women aged 15–49 who think that a husband is justified in hitting or beating his wife under certain circumstances is as high as 90% in Afghanistan and Jordan, 87% in Mali, 86% in Guinea and Timor-Leste, 81% in Laos, and 80% in the Central African Republic. Detailed results from Afghanistan show that 78% of women agree with a beating if the wife "goes out without telling him [the husband]" and 76% agree "if she argues with him". Throughout history, and still today in many countries, laws have provided for extenuating circumstances, partial or complete defenses, for men who killed their wives due to adultery, with such acts often being seen as crimes of passion and being covered by legal defenses such as provocation or defense of family honor. While international law and conventions recognize the need for consent for entering a marriage – namely that people cannot be forced to get married against their will – the right to obtain a divorce is not recognized; therefore holding a person in a marriage against their will (if such person has consented to entering into it) is not considered a violation of human rights, with the issue of divorce being left to the discretion of individual states. The European Court of Human Rights has repeatedly ruled that under the European Convention on Human Rights there is neither a right to apply for divorce nor a right to obtain a divorce if applied for; in 2017, in "Babiarz v. Poland", the Court ruled that Poland was entitled to deny a divorce because the grounds for divorce were not met, even though the marriage in question was acknowledged both by Polish courts and by the ECHR as being a legal fiction, involving a long-term separation in which the husband lived with another woman with whom he had an 11-year-old child. In the EU, the last country to allow divorce was Malta, in 2011. 
Around the world, the only countries to forbid divorce are the Philippines and Vatican City, although in practice, in many countries which use a fault-based divorce system, obtaining a divorce is very difficult. The ability to divorce, in law and practice, has been and continues to be a controversial issue in many countries, and public discourse involves different ideologies such as feminism, social conservatism, and religious interpretations. In recent years, the customs of dowry and bride price have received international criticism for inciting conflicts between families and clans; contributing to violence against women; promoting materialism; increasing property crimes (where men steal goods such as cattle in order to be able to pay the bride price); and making it difficult for poor people to marry. African women's rights campaigners advocate the abolition of bride price, which they argue is based on the idea that women are a form of property which can be bought. Bride price has also been criticized for contributing to child trafficking, as impoverished parents sell their young daughters to rich older men. A senior Papua New Guinea police officer has called for the abolition of bride price, arguing that it is one of the main reasons for the mistreatment of women in that country. The opposite practice of dowry has been linked to a high level of violence (see Dowry death) and to crimes such as extortion. Historically, and still in many countries, children born outside marriage suffered severe social stigma and discrimination. In England and Wales, such children were known as bastards and whoresons. There are significant differences between world regions in regard to the social and legal position of non-marital births, ranging from being fully accepted and uncontroversial to being severely stigmatized and discriminated against. The 1975 European Convention on the Legal Status of Children Born out of Wedlock protects the rights of children born to unmarried parents. 
The convention states, among other things, that: "The father and mother of a child born out of wedlock shall have the same obligation to maintain the child as if it were born in wedlock" and that "A child born out of wedlock shall have the same right of succession in the estate of its father and its mother and of a member of its father's or mother's family, as if it had been born in wedlock." While in most Western countries legal inequalities between children born inside and outside marriage have largely been abolished, this is not the case in some parts of the world. The legal status of an unmarried father differs greatly from country to country. Without voluntary formal recognition of the child by the father, in most cases due process of law is needed in order to establish paternity. In some countries, however, unmarried cohabitation of a couple for a specific period of time does create a presumption of paternity similar to that of formal marriage. This is the case in Australia. The circumstances under which a paternity action can be initiated, the rights and responsibilities of a father once paternity has been established (whether he can obtain parental responsibility and whether he can be forced to support the child), and the legal position of a father who voluntarily acknowledges the child all vary widely by jurisdiction. A special situation arises when a married woman has a child by a man other than her husband. Some countries, such as Israel, refuse to accept a legal challenge of paternity in such a circumstance, in order to avoid the stigmatization of the child (see Mamzer, a concept under Jewish law). In 2010, the European Court of Human Rights ruled in favor of a German man who had fathered twins with a married woman, granting him right of contact with the twins, despite the fact that the mother and her husband had forbidden him to see the children. The steps that an unmarried father must take in order to obtain rights to his child vary by country. 
In some countries (such as the UK – since 2003 in England and Wales, 2006 in Scotland, and 2002 in Northern Ireland) it is sufficient for the father to be listed on the birth certificate for him to have parental rights; in other countries, such as Ireland, simply being listed on the birth certificate does not offer any rights, and additional legal steps must be taken (if the mother agrees, the parents can both sign a "statutory declaration", but if the mother does not agree, the father has to apply to court). Children born outside marriage have become more common, and in some countries they are the majority. Recent data from Latin America showed figures for non-marital childbearing to be 74% for Colombia, 69% for Peru, 68% for Chile, 66% for Brazil, 58% for Argentina, and 55% for Mexico. In 2012, in the European Union, 40% of births were outside marriage, and in the United States, in 2013, the figure was similar, at 41%. In the United Kingdom 48% of births were to unmarried women in 2012; in Ireland the figure was 35%. During the first half of the 20th century, unmarried women in some Western countries were coerced by authorities to give their children up for adoption. This was especially the case in Australia, through the forced adoptions in Australia, with most of these adoptions taking place between the 1950s and the 1970s. In 2013, Julia Gillard, then Prime Minister of Australia, offered a national apology to those affected by the forced adoptions. Some married couples choose not to have children. Others are unable to have children because of infertility or other factors preventing conception or the bearing of children. In some cultures, marriage imposes an "obligation" on women to bear children. In northern Ghana, for example, payment of bridewealth signifies a woman's requirement to bear children, and women using birth control face substantial threats of physical abuse and reprisals. Religions develop in specific geographic and social milieux. 
Religious attitudes and practices relating to marriage vary, but have many similarities. The Bahá'í Faith encourages marriage and views it as a mutually strengthening bond, but it is not obligatory. A Bahá'í marriage requires the couple to choose each other, and then obtain the consent of all living parents. Modern Christianity bases its views on marriage upon the teachings of Jesus and Paul the Apostle. Many of the largest Christian denominations regard marriage as a sacrament, sacred institution, or covenant. However, this was not the case in the Roman Catholic Church before the 1184 Council of Verona officially recognized it as such. Before then, no specific ritual was prescribed for celebrating a marriage: "Marriage vows did not have to be exchanged in a church, nor was a priest's presence required. A couple could exchange consent anywhere, anytime." The Church merely recognized the union formally, and it was finalized by the couple together partaking of Holy Communion. Decrees on marriage of the Roman Catholic Council of Trent (twenty-fourth session of 1563) made the validity of marriage dependent on the wedding occurring in the presence of a priest and two witnesses. The absence of a requirement of parental consent ended a debate that had proceeded from the 12th century. In the case of a civil divorce, the innocent spouse had, and has, no right to marry again until the death of the other spouse terminates the still-valid marriage, even if the other spouse was guilty of adultery.
https://en.wikipedia.org/wiki?curid=19728
Midgard In Germanic cosmology, Midgard (an anglicised form of Old Norse "Miðgarðr"; Old English "middangeard", Old Saxon "middilgard", Old High German "mittilagart", and Gothic "Midjungards"; "middle yard") is the name for the Earth (equivalent in meaning to the Greek term "oikoumene", "inhabited") inhabited by and known to humans in early Germanic cosmology. The Old Norse form plays a notable role in Norse cosmology. This name occurs in Old Norse literature as "Miðgarðr". In the Old Saxon "Heliand" it appears as "middilgard" and in the Old High German poem "Muspilli" it appears as "mittilagart". The Gothic form is attested in the Gospel of Luke as a translation of the Greek word "oikoumene". The word is present in Old English epic and poetry as "middangeard"; it was later transformed to "middellærd" or "middelerde" ("Middle-earth") in Middle English literature. All these forms are from a Common Germanic "*midja-gardaz" ("*meddila-", "*medjan-"), a compound of "*midja-" "middle" and "*gardaz" "yard, enclosure". In early Germanic cosmology, the term stands alongside "world" (Old English "weorold", Old Saxon "werold", Old High German "weralt", Old Frisian "warld" and Old Norse "verǫld"), from a Common Germanic compound "*wira-alđiz", the "age of men". Midgard is a realm in Norse mythology. It is one of the Nine Worlds and the only one that is completely visible to mankind (the others may intersect with this visible realm but are mostly invisible). Pictured as placed somewhere in the middle of Yggdrasil, Midgard lies between Niflheim—the land of ice—to the north and Muspelheim—the land of fire—to the south. Midgard is surrounded by an impassable world of water, or ocean. The ocean is inhabited by the great sea serpent Jörmungandr (Miðgarðsormr), who is so huge that he encircles the world entirely, grasping his own tail. The concept is similar to that of the Ouroboros. Midgard was also connected to Asgard, the home of the gods, by the Bifröst, the rainbow bridge, guarded by Heimdallr. 
In Norse mythology, "Miðgarðr" became applied to the wall around the world that the gods constructed from the eyebrows of the giant Ymir as a defense against the Jotuns, who lived in Jotunheim, east of "Manheimr", the "home of men", a word used to refer to the entire world. The gods slew the giant Ymir, the first created being, and put his body into the central void of the universe, creating the world out of his body: his flesh constituting the land, his blood the oceans, his bones the mountains, his teeth the cliffs, his hairs the trees, and his brains the clouds. Ymir's skull was held up by four dwarfs, Nordri, Sudri, Austri, and Vestri, who represent the four points of the compass, and became the dome of heaven. The sun, moon, and stars were said to be scattered sparks in the skull. According to the Eddas, Midgard will be destroyed at Ragnarök, the battle at the end of the world. Jörmungandr will arise from the ocean, poisoning the land and sea with his venom and causing the sea to rear up and lash against the land. The final battle will take place on the plain of Vígríðr, following which Midgard and almost all life on it will be destroyed, with the earth sinking into the sea, only to rise again, fertile and green, when the cycle repeats and creation begins again. Although most surviving instances of the word Midgard refer to spiritual matters, it was also used in more mundane situations, as in the Viking Age runestone poem from the inscription Sö 56 from Fyrby. The Danish and Swedish form "Midgård" or "Midgaard", the Norwegian "Midgard" or "Midgård", as well as the Icelandic and Faroese form "Miðgarður", all derive from the Old Norse term. The name "middangeard" occurs six times in the Old English epic poem "Beowulf", and is the same word as Midgard in Old Norse. The term is equivalent in meaning to the Greek term Oikoumene, as referring to the known and inhabited world. The concept of Midgard occurs many times in Middle English. 
The association with "earth" (OE "eorðe") in Middle English "middellærd", "middelerde" is by popular etymology; the continuation of "geard" "enclosure" is "yard". An early example of this transformation appears in the Ormulum. The usage of "Middle-earth" as a name for a setting was popularized by the Old English scholar J. R. R. Tolkien in "The Lord of the Rings" and other fantasy works; he was originally inspired by the references to "middangeard" and "Éarendel" in the Old English poem "Crist". "Mittilagart" is mentioned in the 9th-century Old High German "Muspilli" (v. 54), meaning "the world" as opposed to the sea and the heavens.
https://en.wikipedia.org/wiki?curid=19731
Mage: The Ascension Mage: The Ascension is a role-playing game set in the World of Darkness and published by White Wolf Game Studio. The characters portrayed in the game are referred to as mages, and are capable of feats of magic. The idea of magic in "Mage" is broadly inclusive of diverse ideas about mystical practices as well as other belief systems, such as science and religion, so that most mages do not resemble typical fantasy wizards. In 2005, White Wolf released a new version of the game, marketed as "Mage: The Awakening", for the new World of Darkness series. The new game features some of the same game mechanics but uses a substantially different premise and setting. Following the release of "Vampire: The Masquerade", White Wolf put out a new roleplaying game every year, each set in "Vampire"'s World of Darkness and using its Storyteller rule system. The next four games were "Werewolf: The Apocalypse" (1992), "Mage: The Ascension" (1993), "Wraith: The Oblivion" (1994) and "Changeling: The Dreaming" (1995). "Mage" was the first World of Darkness game that Mark Rein•Hagen was not explicitly involved with, although it featured the Order of Hermes from his "Ars Magica" as just a single tradition among many. The basic premise of "Mage: The Ascension" is that everyone has the capacity, at some level, to shape reality. This capacity, personified as a mysterious alter ego called the Avatar, is dormant in most people, who are known as sleepers, whereas mages (and/or their Avatars) are said to be Awakened. Because they are Awakened, mages can consciously effect changes to reality via willpower, beliefs, and specific magical techniques. The beliefs and techniques of mages vary enormously, and the ability to alter reality can only exist in the context of a coherent system of belief and technique, called a paradigm. A paradigm organizes a mage's understanding of reality, how the universe works, and what things mean. It also provides the mage with an understanding of how to change reality, through specific magical techniques. 
For example, an alchemical paradigm might describe the act of wood burning as the wood "releasing its essence of elemental Fire," while modern science would describe fire as "combustion resulting from a complex chemical reaction." Paradigms tend to be idiosyncratic to the individual Mage, but the vast majority belong to broad categories of paradigm, e.g., Shamanism, Medieval Sorcery, religious miracle working, and superscience. In the Mage setting, everyday reality is governed by commonsense rules derived from the collective beliefs of sleepers. This is called the consensus. Most Magi's paradigms differ substantially from the consensus. When a mage performs an act of magic that does not seriously violate this commonsense version of reality, in game terms this is called coincidental magic. Magic that deviates wildly from consensus is called vulgar or dynamic magic. When it is performed ineptly, or is vulgar, and especially if it is vulgar and witnessed by sleepers, magic can cause Paradox, a phenomenon in which reality tries to resolve contradictions between the consensus and the Mage's efforts. Paradox is difficult to predict and almost always bad for the mage. The most common consequences of paradox include physical damage directly to the Mage's body, and paradox flaws, magical effects which can for example turn the mage's hair green, make him mute, make him incapable of leaving a certain location, and so on. In more extreme cases paradox can cause Quiet (madness that may leak into reality), Paradox Spirits (nebulous, often powerful beings which purposefully set about resolving the contradiction, usually by directly punishing the mage), or even the removal of the Mage to a paradox realm, a pocket dimension from which it may be difficult to escape. In Mage, there is an underlying framework to reality called the Tapestry. The Tapestry is naturally divided into various sections, including the physical realm and various levels of the spirit world, or Umbra. 
At the most basic level, the Tapestry is composed of Quintessence, the essence of magic and what is real. Quintessence can have distinctive characteristics, called resonance, which are broken down into three categories: dynamic, static, and entropic. In order to understand the metaphysics of the Mage setting, it is important to remember that many of the terms used to describe magic and Magi (e.g. Avatar, Quintessence, the Umbra, Paradox, Resonance, etc.), as well as the appearance, meaning, and understanding of a character's "Spheres" (the areas of magic in which that character is proficient), vary depending on the Paradigm of the Mage in question, even though they are often, in the texts of the game, described from particular paradigmatic points of view. In-character, only a Mage's Paradigm can explain what each of these things is, what it means, and why it is the way it is. In the game, Mages have always existed, though there are legends of the Pure Ones who were shards of the original, divine One. Early mages cultivated their magical beliefs alone or in small groups, generally conforming to and influencing the belief systems of their societies. Obscure myths suggest that the precursors of the modern organizations of mages originally gathered in ancient Egypt. This period of historical uncertainty also saw the rise of the Nephandi in the Near East. This set the stage for what the game's history calls the Mythic Ages. Until the late Middle Ages, mages' fortunes waxed and waned along with their native societies. Eventually, though, mages belonging to the Order of Hermes and the Messianic Voices attained great influence over European society. However, absorbed by their pursuit of occult power and esoteric knowledge, they often neglected and even abused humanity. Frequently, they were at odds with mainstream religions, envied by noble authorities and cursed by common folk. 
Mages who believed in proto-scientific theories banded together under the banner of the Order of Reason, declaring their aim was to create a safe world with Man as its ruler. They won the support of Sleepers by developing the useful arts of manufacturing, economics, wayfaring, and medicine. They also championed many of the values that we now associate with the Renaissance. Masses of Sleepers embraced the gifts of early Technology and the Science that accompanied them. As the masses' beliefs shifted, the Consensus changed and wizards began to lose their position as their power and influence waned. This was intentional. The Order of Reason perceived a safe world as one devoid of heretical beliefs, ungodly practices and supernatural creatures preying upon humanity. As the defenders of the common folk, they intended to replace the dominant magical groups with a society of philosopher-scientists as shepherds, protecting and guiding humanity. In response, non-scientific mages banded together to form the Council of Nine Mystic Traditions, where mages of all the major magical paths gathered. They fought on battlefields and in universities trying to undermine as many discoveries as they could, but to no avail – technology made the march of Science unstoppable. The Traditions' power bases were crippled, their believers mainly converted, their beliefs ridiculed all around the world. Their final counteroffensives against the Order of Reason were foiled by internal dissent and treachery in their midst. However, from the turn of the 17th century on, the goals of the Order of Reason began to change. As their scientific paradigm unfolded, they decided that the mystical beliefs of the common people were not only backward, but dangerous, and that they should be replaced by cold, measurable and predictable physical laws and respect for human genius. They replaced long-held theologies, pantheons, and mystical traditions with ideas like rational thought and the scientific method. 
As more and more sleepers began to use the Order's discoveries in their everyday lives, Reason and rationality came to govern their beliefs, and the old ways came to be regarded as misguided superstition. However, the Order of Reason became less and less focused on improving the daily lives of sleepers and more concerned with eliminating any resistance to their choke-hold on the minds of humanity. Since a reorganization performed under Queen Victoria in the late 1800s, they have called themselves the Technocracy. The Technocracy espouses an authoritarian rule over Sleepers' beliefs, while suppressing the Council of Nine's attempts to reintroduce magic. The Traditions replenished their numbers (which had been diminished by the withdrawal of two Traditions, the secretive Ahl-i-Batin, and the Solificati, alchemists plagued by scandal) with former Technocrats from the Sons of Ether and Virtual Adepts factions, vying with the Technocracy for the beliefs of sleepers, and perpetually wary of the Nephandi (who consciously embrace evil and service to a demonic or alien master) and the Marauders (who resist Paradox with a magical form of madness). While the Technocracy's propaganda campaigns were effective in turning the Consensus against mystic and heterodox science, the Traditions maintained various resources, including magical nodes, hidden schools and fortresses called Chantries, and various realms outside of the Consensus in the Umbra. Finally, from 1997 to 2000, a series of metaplot events destroyed the Council of Nine's Umbral steadings, killing many of their most powerful members. This also cut the Technocracy off from their leadership. Both sides called a truce in their struggle to assess their new situation, especially since these events implied that Armageddon was soon at hand. Chief among these signs was the creation of a barrier between the physical world and the spirit world. This barrier was called the Avatar Storm because it affected the Avatar of the Mage. 
This Avatar Storm was the result of a battle in India during the so-called "Week of Nightmares". These changes were introduced in supplements for the second edition of the game and became core material in the third edition. Aside from common changes introduced by the World of Darkness metaplot, mages dealt with renewed conflict when the hidden Rogue Council and the Technocracy's Panopticon encouraged the Traditions and Technocracy to struggle once again. The Rogue Council only made itself known through coded missives, while Panopticon was apparently created by the leaders of the Technocracy to counter it. This struggle eventually led to the point on the timeline occupied by the book called "Ascension". While the entire metaplot has always been meant to be altered as each play group sees fit, "Ascension" provided multiple possible endings, with none of them being definitive (though one was meant to resolve the metaplot). Thus, there is no definitive canonical ending. Since the game is meant to be adapted to a group's tastes, the importance of this and the preceding storyline is largely a matter of personal preference. The metaplot of the game involves a four-way struggle between the technological and authoritarian Technocracy, the insane Marauders, the cosmically evil Nephandi and the nine mystical Traditions (who tread the middle path), to which the player characters are assumed to belong. (In every edition of the game, this struggle has been characterized both as a covert, violent war directly between factions and as an effort to sway the imaginations and beliefs of sleepers.) The Traditions (formally called the Nine Mystic Traditions) are a fictional alliance of secret societies in the "Mage: the Ascension" role-playing game. 
The Traditions exist to unify users of magic under a common banner to protect reality (particularly those parts of reality that are magical) against the growing disbelief of the modern world, the spreading dominance of the Technocracy, and the predations of unstable mages such as Marauders and Nephandi. Each of the Traditions is a largely independent organization unified by a broadly accepted paradigm for practicing magic. The Traditions themselves vary substantially from one another. Some have almost no structure or rules, while others have rigid rules of protocol, etiquette, and rank. Though they are unified in their desire to keep magic alive, the magic practiced by the different Traditions is often wildly different and entirely incompatible from one Tradition to the next. Understanding the Traditions as a whole requires understanding each Tradition separately, and then assembling them into a somewhat cohesive whole. The nine traditions are: the Akashic Brotherhood, Celestial Chorus, Cult of Ecstasy, Dreamspeakers, Euthanatos, Order of Hermes, Sons of Ether, Verbena and Virtual Adepts. The Technocracy is likewise divided into groups; unlike the Traditions, however, they share a single paradigm, and instead divide themselves based upon methodologies and areas of expertise. The Marauders are chaos mages who embody Dynamism and are completely insane. To other mages, they appear immune to Paradox effects, often using vulgar magic to accomplish their insane tasks. Marauders represent the other narrative extreme: the repellent and frightening corruption of unrestrained power, of dynamism unchecked. Their Avatars have been warped by their mental instability, and they exist in a state of permanent Quiet. While the nature of a Marauder's power may make them seem invincible, they are still severely hampered by their madness. 
They cannot become Archmages, as they lack sufficient insight and are incapable of appreciating truths which do not suit their madness. In the second edition of "Mage: The Ascension", Marauders were much more cogent and likely to operate in groups, with the Umbral Underground using the Umbra to infiltrate any location and wreak havoc with the aid of bygones. They were also associated heavily with other perceived agents of Dynamism, particularly the Garou (who equate Dynamism with the Wyld) and sometimes Changelings. For example, the Marauders chapter in "The Book of Madness" is narrated by a Corax (were-raven) named Johnny Gore, who relates his experiences running with the Butcher Street Regulars. In the revised edition, Marauders were made darker and less coherent, in keeping with the more serious treatment of madness used for Malkavians in "Vampire: The Masquerade Revised Edition". The Avatar Storm was a very convenient explanation for the Underground's loss of power and influence, though they also became more vulnerable to Paradox. In this edition, the Regulars are a cell of the Underground, and like the other cells have highly compatible Quiets. With the Technocracy representing Stasis and the Marauders acting on behalf of Dynamism, the third part of this trifecta is Entropy, as borne by the Nephandi. While other mages may be callous or cruel, the Nephandi are morally inverted and spiritually mutilated. A Traditionalist or Technocrat may simply fall prey to human failings or excessive zeal in their ethos, and a Marauder may well commit true atrocities in the depths of her incurable madness; a Nephandus, by contrast, retains a clear moral compass and deliberately pursues actions to worsen the world and bring about its final end. To this end, the Technocracy and Traditions have been known to set aside the ongoing war for reality to temporarily join forces to oppose the Nephandi, and even the Marauders are known to attack the Nephandi on sight. 
Some of their members, called "barabbi", hail from the Technocracy and Traditions, but all Nephandi have experienced the Rebirth, wherein they embrace the antithesis of everything they know to be right, and are physically and spiritually torn apart and reassembled. This metamorphosis has a sort of terrible permanence to it: while each Mage's avatar will be reborn again and again, theirs is permanently twisted as a result of their rebirth. Known as Widderslainte, these mages awaken as Nephandi. While some of the background stories detail a particular mage and her teacher trying, and succeeding, at keeping her from falling again, this is very rare. Other mystical traditions that are not part of the nine exist, and are known as Crafts. Some examples are the mages of the Ahl-i-Batin (also known as "The Subtle Ones"), masters of the Correspondence Sphere and former holders of the seat now held by the Virtual Adepts; the djinn-binding magicians known as the Taftani; and the eclectic, nonconformist group of willworkers known as the Hollow Ones. However, they are far from the only ones. The core rules of the game are similar to those in other World of Darkness games; see Storyteller System for an explanation. Like other storytelling games, Mage emphasizes personal creativity and the idea that ultimately the game's powers and traits should be used to tell a satisfying story. One of Mage's highlights is its system for describing magic, based on spheres: a relatively open-ended 'toolkit' approach to using game mechanics to define the bounds of a given character's magical ability. Different Mages will have differing aptitudes for spheres, and player characters' magical expertise is described by the allocation of points in the spheres. There are nine known spheres. The first, Correspondence, deals with spatial relations, giving the Mage power over space and distances. 
Correspondence magic allows powers such as teleportation and seeing into distant areas; at higher levels the Mage may also co-locate herself or even stack different spaces within each other. Correspondence can be combined with almost any other sphere to create effects that span distances. The Entropy sphere gives the Mage power over order, chaos, fate and fortune. A mage can sense where elements of chance influence the world and manipulate them to some degree. At simple levels, machines can be made to fail, plans to go off without a hitch, and games of chance heavily influenced. Advanced mages can craft self-propagating memes or curse entire family lines with blights. The only requirement of the Entropy sphere is that all interventions work within the general flow of natural entropy. Forces concerns energies and natural forces and their negative opposites (i.e. light and shadow can both be manipulated independently with this Sphere). Essentially, anything in the material world that can be seen or felt but is not material can be controlled: electricity, gravity, magnetism, friction, heat, motion, fire, etc. At low levels the mage can control forces on a small scale, changing their direction or converting one energy into another. At high levels, storms and explosions can be conjured. This Sphere tends to do the most damage and is the flashiest and most vulgar. Along with Life and Matter, Forces is one of the three 'Pattern Spheres' which together are able to mold all aspects of the physical world. Life deals with understanding and influencing biological systems. Generally speaking, any material object with mostly living cells falls under the influence of this sphere. Simply put, this allows the mage to heal herself or metamorphose simple life-forms at lower levels, working up to healing others and controlling more complex life at higher levels. 
Usually, seeking to improve a complex life-form beyond natural limits causes the condition of pattern bleeding: the affected life-form begins to wither and die over time. Along with Matter and Forces, Life is one of the three Pattern Spheres. The Mind sphere deals with control over one's own mind, the reading and influencing of other minds, and a variety of subtler applications such as astral projection and psychometry. At high levels, Mages can create complete new minds or completely rework existing ones. Matter deals with all inanimate material. Thus, being alive protects a thing from direct manipulation by the Matter sphere. Stone, dead wood, water, gold, and the corpses of once-living things are only the beginning. With this Sphere, matter can be reshaped mentally, transmuted into another substance, or given altered properties. Along with Life and Forces, Matter is one of the three Pattern Spheres. The Prime sphere deals directly with Quintessence, the raw material of the Tapestry, which is the metaphysical structure of reality. This sphere allows Quintessence to be channeled and/or funneled in any way at higher levels, and it is necessary if the mage ever wants to conjure something out of nothing (as opposed to transforming one pattern into another). Uses of Prime include general magic senses, counter-magic, and making magical effects permanent. The Spirit sphere is an eclectic mixture of abilities relating to the spirit world, or Umbra. It includes everything from stepping into the Near Umbra to traveling through outer space, contacting and controlling spirits, communing with one's own or others' avatars, turning a Mage back into a sleeper, returning ghosts to life, creating magical fetish items, and so forth. Unlike other Spheres, the difficulty of Spirit magic is often a factor of the Gauntlet, making these spells more difficult for the most part. The Sphere is referred to as Dimensional Science by the Technocratic Union. 
The Time sphere deals with dilating, slowing, stopping or traveling through time. Due to game mechanics, it is simpler to travel forward in time than backwards. Time can be used to install delays into spells, view the past or future, and even pull people and objects out of linear progression. Time magic offers one means to speed up a character to get multiple actions in a combat round, a much-coveted power in turn-based role-playing. One of the plot hooks that the second edition books put forth was the persistent rumor of a "tenth sphere". Though there were hints, it was deliberately left vague. The final book in the line, "Ascension", implies that the tenth sphere is the sphere of Ascension (inasmuch as spheres are practically relevant at that point in the story). As the book presents alternative resolutions for the Mage line, Chapter Two also presents an alternative interpretation that the tenth sphere is "Judgement" or "Telos" and that Anthelios (the red star in the World of Darkness metaplot) is its planet (each sphere has an associated planet and Umbral realm). The various sphere sigils are, in whole or in part, symbols taken from alchemical texts. The third revision of the rules, "Mage: The Ascension Revised", made significant changes to the rules and setting, mainly to update Mage with respect to its own ongoing storyline, particularly in regard to events that occurred during the run of the game's second edition. (Like other World of Darkness games, Mage uses a continuing storyline across all of its books.) Adam Tinworth of "Arcane" gave "Mage: The Ascension" second edition a score of 8/10, calling it good for those who like involving and challenging games; he noted that it could be difficult for new players to grasp the entire background and how magic works, and to develop their own style of magic, but found the gameplay system itself to be easy to understand for newcomers. 
"Mage: The Ascension" was ranked 16th in the 1996 reader poll of "Arcane" magazine to determine the 50 most popular role-playing games of all time. The magazine's editor Paul Pettengale commented: "Mage is perfect for those of a philosophical bent. It's a hard game to get right, requiring a great deal of thought from players and referees alike, but its underlying theme – the nature of reality – makes it one of the most interesting and mature roleplaying games available."
https://en.wikipedia.org/wiki?curid=19732
Malcolm Fraser John Malcolm Fraser (; 21 May 1930 – 20 March 2015) was an Australian politician who served as the 22nd Prime Minister of Australia, in office from 1975 to 1983 as leader of the Liberal Party. Fraser was raised on his father's sheep stations, and after studying at Magdalen College, Oxford, returned to Australia to take over the family property in the Western District of Victoria. After an initial defeat in 1954, he was elected to the House of Representatives at the 1955 federal election, standing in the Division of Wannon. He was 25 at the time, making him one of the youngest people ever elected to parliament. When Harold Holt became prime minister in 1966, Fraser was appointed Minister for the Army. After Holt's disappearance and replacement by John Gorton, Fraser became Minister for Education and Science (1968–1969) and then Minister for Defence (1969–1971). In 1971, Fraser resigned from cabinet and denounced Gorton as "unfit to hold the great office of prime minister"; this precipitated the replacement of Gorton with William McMahon. He subsequently returned to his old education and science portfolio. After the Coalition was defeated at the 1972 election, Fraser unsuccessfully stood for the Liberal leadership, losing to Billy Snedden. When the party lost the 1974 election, he began to move against Snedden, eventually mounting a successful challenge in March 1975. As Leader of the Opposition, Fraser used the Coalition's control of the Senate to block supply to the Whitlam Government, precipitating a constitutional crisis. This culminated with Gough Whitlam being dismissed as prime minister by Governor-General John Kerr, a unique occurrence in Australian history. The correctness of Fraser's actions in the crisis and the exact nature of his involvement in Kerr's decision have since been a topic of debate. Fraser remains the only Australian prime minister to ascend to the position upon the dismissal of his predecessor. 
After Whitlam's dismissal, Fraser was sworn in as prime minister on an initial caretaker basis. The Coalition won a landslide victory at the 1975 election, and was re-elected in 1977 and 1980. Fraser took a keen interest in foreign affairs as prime minister, and was more active in the international sphere than many of his predecessors. He was a strong supporter of multiculturalism, and during his term in office Australia admitted significant numbers of non-white immigrants (including Vietnamese boat people) for the first time. His government also established the Special Broadcasting Service (SBS). Particularly in his final years in office, Fraser came into conflict with the economic rationalist faction of his party. His government made few major changes to economic policy. Fraser and the Coalition lost power at the 1983 election, and he left politics a short time later. To date, he is the last Prime Minister from a country seat. In retirement, he held advisory positions with the UN and the Commonwealth of Nations, and was president of the aid agency CARE from 1990 to 1995. He resigned his membership of the Liberal Party in 2009, having been a critic of its policy direction for a number of years. Evaluations of Fraser's prime ministership have been mixed. He is generally credited with restoring stability to the country after a series of short-term leaders, but some have seen his government as a lost opportunity for economic reform. Only three Australian prime ministers have served longer terms in office – Robert Menzies, John Howard and Bob Hawke. John Malcolm Fraser was born in Toorak, Melbourne, Victoria, on 21 May 1930. He was the second of two children born to Una Arnold (née Woolf) and John Neville Fraser; his older sister Lorraine had been born in 1928. Both he and his father were known exclusively by their middle names. His paternal grandfather, Sir Simon Fraser, was born in Nova Scotia, Canada, and arrived in Australia in 1853. 
He made his fortune as a railway contractor, and later acquired significant pastoral holdings, becoming a member of the "squattocracy". Fraser's maternal grandfather, Louis Woolf, was born in Dunedin, New Zealand, and arrived in Australia as a child. He was of Jewish origin, a fact which his grandson did not learn until he was an adult. A chartered accountant by trade, he married Amy Booth, who was related to the wealthy Hordern family of Sydney and was a first cousin of Sir Samuel Hordern. Fraser had a political background on both sides of his family. His father served on the Wakool Shire Council, including as president for two years, and was an admirer of Billy Hughes and a friend of Richard Casey. Simon Fraser served in both houses of the colonial Parliament of Victoria, and represented Victoria at several of the constitutional conventions of the 1890s. He eventually became one of the inaugural members of the new federal Senate, serving from 1901 to 1913 as a member of the early conservative parties. Louis Woolf also ran for the Senate in 1901, standing as a Free Trader in Western Australia. He polled only 400 votes across the whole state, and was never again a candidate for public office. Fraser spent most of his early life at "Balpool-Nyang", a sheep station on the Edward River near Moulamein, New South Wales. His father had a law degree from Magdalen College, Oxford, but never practised law and preferred the life of a grazier. Fraser contracted a severe case of pneumonia when he was eight years old, which nearly proved fatal. He was home-schooled until the age of ten, when he was sent to board at Tudor House School in the Southern Highlands. He attended Tudor House from 1940 to 1943, and then completed his secondary education at Melbourne Grammar School from 1944 to 1948, where he was a member of Rusden House. While at Melbourne Grammar, he lived in a flat that his parents owned on Collins Street. 
In 1943, Fraser's father sold "Balpool-Nyang" – which had been prone to drought – and bought "Nareen", in the Western District of Victoria. Fraser was devastated by the sale of his childhood home, and regarded the day he found out about it as the worst of his life. In 1949, he moved to England to study at Magdalen College, Oxford, which his father had also attended. He read Philosophy, Politics and Economics (PPE), graduating in 1952 with third-class honours. Although Fraser did not excel academically, he regarded his time at Oxford as his intellectual awakening, where he learned "how to think". His college tutor was Harry Weldon, who was a strong influence. His circle of friends at Oxford included Raymond Bonham Carter, Nicolas Browne-Wilkinson, and John Turner. In his second year, he had a relationship with Anne Reid, who as Anne Fairbairn later became a prominent poet. After graduating, Fraser considered taking a law degree or joining the British Army, but eventually decided to return to Australia and take over the running of the family property. Fraser returned to Australia in mid-1952. He began attending meetings of the Young Liberals in Hamilton, and became acquainted with many of the local party officials. In November 1953, aged 23, Fraser unexpectedly won Liberal preselection for the Division of Wannon, which covered most of Victoria's Western District. The previous Liberal member, Dan Mackinnon, had been defeated in 1951 and moved to a different electorate. He was expected to be succeeded by Magnus Cormack, who had recently lost his place in the Senate. Fraser had put his name forward as a way of building a profile for future candidacies, but mounted a strong campaign and in the end won a narrow victory. In January 1954, he made the first of a series of weekly radio broadcasts on 3HA Hamilton and 3YB Warrnambool, titled "One Australia". 
His program – consisting of a pre-recorded 15-minute monologue – covered a wide range of topics, and was often reprinted in newspapers. It continued more or less uninterrupted until his retirement from politics in 1983, and helped him build a substantial personal following in his electorate. At the 1954 election, Fraser lost to the sitting Labor member Don McLeod by just 17 votes (out of over 37,000 cast). However, he reprised his candidacy at the early 1955 election after a redistribution made Wannon notionally Liberal. McLeod concluded the reconfigured Wannon was unwinnable and retired. These factors, combined with the 1955 Labor Party split, allowed Fraser to win a landslide victory. Fraser took his seat in parliament at the age of 25 – the youngest sitting MP by four years, and the first who had been too young to serve in World War II. He was re-elected at the 1958 election despite being restricted in his campaigning by a bout of hepatitis. Fraser was soon being touted as a future member of cabinet, but despite good relations with Robert Menzies never served in any of his ministries. This was probably due to a combination of his youth and the fact that the ministry already contained a disproportionately high number of Victorians. Fraser spoke on a wide range of topics during his early years in parliament, but took a particular interest in foreign affairs. In 1964, he and Gough Whitlam were both awarded Leader Grants by the United States Department of State, allowing them to spend two months in Washington, D.C., getting to know American political and military leaders. The Vietnam War was the main topic of conversation, and on his return trip to Australia he spent two days in Saigon. Early in 1965, he also made a private seven-day visit to Jakarta, and with assistance from Ambassador Mick Shann secured meetings with various high-ranking officials. 
After more than a decade on the backbench, Fraser was appointed to the Cabinet by the prime minister, Harold Holt, in 1966. As Minister for the Army he presided over the controversial Vietnam War conscription program. Under the new prime minister, John Gorton, he became Minister for Education and Science and in 1969 was promoted to Minister for Defence, a particularly challenging post at the time, given the height of Australia's involvement in the Vietnam War and the protests against it. In March 1971 Fraser abruptly resigned from the Cabinet in protest at what he called Gorton's "interference in (his) ministerial responsibilities". This precipitated a series of events which eventually led to the downfall of Gorton and his replacement as prime minister by William McMahon. Gorton never forgave Fraser for the role he played in his downfall; to the day Gorton died in 2002, he could not bear to be in the same room with Fraser. McMahon immediately reappointed Fraser to the Cabinet, returning him to his old position of Minister for Education and Science. When the Liberals were defeated at the 1972 election by the Labor Party under Gough Whitlam, McMahon resigned and Fraser became Shadow Minister for Labour under Billy Snedden. After the Coalition lost the 1972 election, Fraser was one of five candidates for the Liberal leadership that had been vacated by McMahon. He outpolled John Gorton and James Killen, but was eliminated on the third ballot. Billy Snedden eventually defeated Nigel Bowen by a single vote on the fifth ballot. In the new shadow cabinet – which featured only Liberals – Fraser was given responsibility for primary industry. This was widely seen as a snub, as the new portfolio kept him mostly out of the public eye and was likely to be given to a member of the Country Party when the Coalition returned to government. In an August 1973 reshuffle, Snedden instead made him the Liberals' spokesman for industrial relations. 
He had hoped to be given responsibility for foreign affairs (in place of the retiring Nigel Bowen), but that role was given to Andrew Peacock. Fraser oversaw the development of the party's new industrial relations policy, which was released in April 1974. It was seen as more flexible and even-handed than the policy that the Coalition had pursued in government, and was well received by the media. According to Fraser's biographer Philip Ayres, by "putting a new policy in place, he managed to modify his public image and emerge as an excellent communicator across a traditionally hostile divide". After the Liberals lost the 1974 election, Fraser unsuccessfully challenged Snedden for the leadership in November. Although Snedden survived the challenge, his position in opinion polls continued to decline and he was unable to get the better of Whitlam in Parliament. Fraser again challenged Snedden on 21 March 1975, this time succeeding and becoming Leader of the Liberal Party and Leader of the Opposition. Following a series of ministerial scandals engulfing the Whitlam Government later that year, Fraser began to instruct Coalition senators to delay the government's budget bills, with the objective of forcing an early election that he believed he would win. After several months of political deadlock, during which time the government secretly explored methods of obtaining supply funding outside the Parliament, the Governor-General, Sir John Kerr, controversially dismissed Whitlam as prime minister on 11 November 1975. Fraser was immediately sworn in as caretaker prime minister on the condition that he end the political deadlock and call an immediate double dissolution election. On 19 November 1975, shortly after the election had been called, a letter bomb was sent to Fraser, but it was intercepted and defused before it reached him. Similar devices were sent to the governor-general and the Premier of Queensland, Joh Bjelke-Petersen. 
At the 1975 election, Fraser led the Liberal-Country Party Coalition to a landslide victory. The Coalition won 91 seats of a possible 127 in the election to gain a 55-seat majority, which remains to date the largest in Australian history. Fraser subsequently led the Coalition to a second victory in 1977, with only a very small decrease in their vote. The Liberals actually won a majority in their own right in both of these elections, something that Menzies and Holt had never achieved. Although Fraser thus had no need for the support of the (National) Country Party to govern, he retained the formal Coalition between the two parties. Fraser quickly dismantled some of the programs of the Whitlam Government, such as the Ministry of the Media, and made major changes to the universal health insurance system Medibank. He initially maintained Whitlam's levels of tax and spending, but real per-person tax and spending soon began to increase. He did manage to rein in inflation, which had soared under Whitlam. His so-called "Razor Gang" implemented stringent budget cuts across many areas of the Commonwealth Public Sector, including the Australian Broadcasting Corporation (ABC). Fraser practised Keynesian economics as Prime Minister, in part demonstrated by the budget deficits he ran throughout his term. He was the Liberal Party's last Keynesian Prime Minister. Though he had long been identified with the Liberal Party's right wing, he did not carry out the radically conservative program that his political enemies had predicted, and that some of his followers wanted. Fraser's relatively moderate policies particularly disappointed the Treasurer, John Howard, as well as other ministers who were strong adherents of economic liberalism, and therefore detractors of Keynesian economics. 
The government's economic record was marred by rising double-digit unemployment and double-digit inflation, creating "stagflation", caused in part by the ongoing effects of the 1973 oil crisis. Fraser was particularly active in foreign policy as prime minister. He supported the Commonwealth in campaigning to abolish apartheid in South Africa and refused permission for the aircraft carrying the Springbok rugby team to refuel on Australian territory en route to their controversial 1981 tour of New Zealand. However, an earlier tour by the South African ski boat angling team was allowed to pass through Australia on the way to New Zealand in 1977 and the transit records were suppressed by Cabinet order. Fraser also strongly opposed white minority rule in Rhodesia. During the 1979 Commonwealth Conference, Fraser, together with his Nigerian counterpart, convinced the newly elected British prime minister, Margaret Thatcher, to withhold recognition of the internal settlement Zimbabwe Rhodesia government; Thatcher had earlier promised to recognise it. Subsequently, the Lancaster House Agreement was signed and Robert Mugabe was elected leader of an independent Zimbabwe at the inaugural 1980 election. Duncan Campbell, a former deputy secretary of the Department of Foreign Affairs and Trade, has stated that Fraser was "the principal architect" of the ending of white minority rule. The President of Tanzania, Julius Nyerere, said that he considered Fraser's role "crucial in many parts" and the President of Zambia, Kenneth Kaunda, called his contribution "vital". Under Fraser, Australia recognised Indonesia's annexation of East Timor, although many East Timorese refugees were granted asylum in Australia. Fraser was also a strong supporter of the United States and supported the boycott of the 1980 Summer Olympics in Moscow. 
However, although he persuaded some sporting bodies not to compete, Fraser did not try to prevent the Australian Olympic Committee sending a team to the Moscow Games. Fraser also surprised his critics over immigration policy; according to 1977 Cabinet documents, the Fraser Government adopted a formal policy for "a humanitarian commitment to admit refugees for resettlement". Fraser's aim was to expand immigration from Asian countries and allow more refugees to enter Australia. He was a firm supporter of multiculturalism and established a government-funded multilingual radio and television network, the Special Broadcasting Service (SBS), building on its first radio stations, which had been established under the Whitlam Government. Despite Fraser's support for SBS, his government imposed stringent budget cuts on the national broadcaster, the ABC, which came under repeated attack from the Coalition for alleged "left-wing bias" and "unfair" coverage on its TV programs, including "This Day Tonight" and "Four Corners", and on the ABC's new youth-oriented radio station Double Jay. One casualty of the cuts was a plan to establish a national youth radio network, of which Double Jay was to be the first station; the network was delayed for many years and did not come to fruition until the 1990s. Fraser also legislated to give Indigenous Australians control of their traditional lands in the Northern Territory, but resisted imposing land rights laws on conservative state governments. At the 1980 election, Fraser saw his majority more than halved, from 48 seats to 21. The Coalition also lost control of the Senate. Despite this, Fraser remained ahead of Labor leader Bill Hayden in opinion polls. However, the economy was hit by the early 1980s recession, and a protracted scandal over tax-avoidance schemes run by some high-profile Liberals also began to hurt the Government. 
In April 1981, the Minister for Industrial Relations, Andrew Peacock, resigned from the Cabinet, accusing Fraser of "constant interference in his portfolio". Fraser, however, had accused former prime minister John Gorton of the same thing a decade earlier. Peacock subsequently challenged Fraser for the leadership; although Fraser defeated Peacock, these events left him politically weakened. By early 1982, the popular former ACTU President, Bob Hawke, who had entered Parliament in 1980, was polling well ahead of both Fraser and the Labor Leader, Bill Hayden, on the question of who voters would rather see as prime minister. Fraser was well aware of the infighting this caused between Hayden and Hawke, and had planned to call a snap election in autumn 1982 to prevent the Labor Party from changing leaders. These plans were derailed when Fraser suffered a severe back injury. Shortly after recovering from his injury, the Liberal Party narrowly won a by-election in the marginal seat of Flinders in December 1982. The failure of the Labor Party to win the seat convinced Fraser that he would be able to win an election against Hayden. As leadership tensions began to grow in the Labor Party throughout January, Fraser resolved to call a double dissolution election at the earliest opportunity, hoping to capitalise on Labor's disunity. He knew that if the writs were issued soon enough, Labor would essentially be frozen into going into the subsequent election with Hayden as leader. On 3 February 1983, Fraser arranged to visit the Governor-General of Australia, Ninian Stephen, intending to ask for a surprise election. However, Fraser made his run too late. Without any knowledge of Fraser's plans, Hayden resigned as Labor leader just two hours before Fraser travelled to Government House. This meant that the considerably more popular Hawke was able to replace him at almost exactly the same time that the writs were issued for the election. 
Although Fraser reacted to the move by saying he looked forward to "knock[ing] two Labor Leaders off in one go" at the forthcoming election, Labor immediately surged in the opinion polls. At the election on 5 March the Coalition was heavily defeated, suffering a 24-seat swing, the worst defeat of a non-Labor government since Federation. Fraser immediately announced his resignation as Liberal leader and formally resigned as prime minister on 11 March 1983; he retired from Parliament two months later. To date, he is the last non-interim prime minister from a rural seat. In retirement Fraser served as Chairman of the UN Panel of Eminent Persons on the Role of Transnational Corporations in South Africa 1985, as Co-Chairman of the Commonwealth Group of Eminent Persons on South Africa in 1985–86 (appointed by Prime Minister Hawke), and as Chairman of the UN Secretary-General's Expert Group on African Commodity Issues in 1989–90. He was a distinguished international fellow at the American Enterprise Institute from 1984 to 1986. Fraser helped to establish the foreign aid organisation CARE Australia, became the agency's international president in 1991, and worked with a number of other charitable organisations. In 2006, he was appointed Professorial Fellow at the Asia Pacific Centre for Military Law, and in October 2007 he presented his inaugural professorial lecture, "Finding Security in Terrorism's Shadow: The importance of the rule of law". On 14 October 1986, Fraser, then the Chairman of the Commonwealth Eminent Persons Group, was found in the foyer of the Admiral Benbow Inn, a seedy Memphis hotel, wearing only a pair of underpants and confused as to where his trousers were. The hotel was an establishment popular with prostitutes and drug dealers. 
Though it was rumoured at the time that the former Prime Minister had been with a prostitute, his wife stated that Fraser had no recollection of the events and that she believes it more likely that he was the victim of a practical joke by his fellow delegates. In 1993, Fraser made a bid for the Liberal Party presidency but withdrew at the last minute in the face of opposition, which arose because he had been critical of then Liberal leader John Hewson for losing the election earlier that year. After 1996, Fraser was critical of the Howard Coalition government over foreign policy issues, particularly John Howard's alignment with the foreign policy of the Bush administration, which Fraser saw as damaging Australian relationships in Asia. He opposed Howard's policy on asylum-seekers, campaigned in support of an Australian Republic and attacked what he perceived as a lack of integrity in Australian politics, together with former Labor prime minister Gough Whitlam, finding much common ground with his predecessor and his successor Bob Hawke, another republican. The 2001 election continued his estrangement from the Liberal Party. Many Liberals criticised the Fraser years as "a decade of lost opportunity" on deregulation of the Australian economy and other issues. In early 2004, a Young Liberal convention in Hobart called for Fraser's life membership of the Liberal Party to be ended. In 2006, Fraser criticised Howard government policies in areas such as refugees, terrorism and civil liberties, warning that "if Australia continues to follow United States policies, it runs the risk of being embroiled in the conflict in Iraq for decades, and a fear of Islam in the Australian community will take years to eradicate". Fraser claimed that the way the Howard government handled the David Hicks, Cornelia Rau and Vivian Solon cases was questionable. 
On 20 July 2007, Fraser sent an open letter to members of the large activist group GetUp!, encouraging members to support GetUp's campaign for a change in policy on Iraq including a clearly defined exit strategy. Fraser stated: "One of the things we should say to the Americans, quite simply, is that if the United States is not prepared to involve itself in high-level diplomacy concerning Iraq and other Middle East questions, our forces will be withdrawn before Christmas." After the defeat of the Howard government at the 2007 federal election, Fraser claimed Howard approached him in a corridor, following a cabinet meeting in May 1977 regarding Vietnamese refugees, and said: "We don't want too many of these people. We're doing this just for show, aren't we?" The claims were made by Fraser in an interview to mark the release of the 1977 cabinet papers. Howard, through a spokesman, denied having made the comment. In October 2007 Fraser gave a speech to Melbourne Law School on terrorism and "the importance of the rule of law", which Liberal MP Sophie Mirabella condemned in January 2008, alleging errors and "either intellectual sloppiness or deliberate dishonesty"; she claimed that he tacitly supported Islamic fundamentalism, that he should have no influence on foreign policy, and that his stance on the war on terror had left him open to caricature as a "frothing-at-the-mouth leftie". Shortly after Tony Abbott won the 2009 Liberal Party leadership spill, Fraser ended his Liberal Party membership, stating the party was "no longer a liberal party but a conservative party". In December 2011, Fraser was highly critical of the Australian government's decision (also supported by the Liberal Party Opposition) to permit the export of uranium to India, relaxing the Fraser government's policy of banning sales of uranium to countries that are not signatories of the Nuclear Non-Proliferation Treaty. In 2012, Fraser criticised the basing of US military forces in Australia. 
In late 2012, Fraser wrote a foreword for the journal "Jurisprudence" in which he openly criticised the current state of human rights in Australia and the Western World. "It is a sobering thought that in recent times, freedoms hard won through centuries of struggle, in the United Kingdom and elsewhere have been whittled away. In Australia alone we have laws that allow the secret detention of the innocent. We have had a vast expansion of the power of intelligence agencies. In many cases the onus of proof has been reversed and the justice that once prevailed has been gravely diminished." In July 2013, Fraser endorsed Australian Greens Senator Sarah Hanson-Young for re-election in a television advertisement, stating she had been a "reasonable and fair-minded voice". Fraser's books include "Malcolm Fraser: The Political Memoirs" (with Margaret Simons – The Miegunyah Press, 2010) and "Dangerous Allies" (Melbourne University Press, 2014), which warns of "strategic dependence" on the United States. In the book and in talks promoting it, he criticised the concept of American exceptionalism and US foreign policy. Fraser died in the early morning of 20 March 2015 shortly before his 85th birthday after a brief illness. An obituary noted that there had been "greater appreciation of the constructive and positive nature of his post-prime ministerial contribution" as his retirement years progressed. Fraser was given a state funeral at Scots' Church in Melbourne on 27 March 2015. His ashes are interred within the 'Prime Ministers Garden' of Melbourne General Cemetery. On 9 December 1956, Fraser married Tamara "Tamie" Beggs, who was almost six years his junior. They had met at a New Year's Eve party, and bonded over similar personal backgrounds and political views. The couple had four children together: Mark (b. 1958), Angela (b. 1959), Hugh (b. 1963), and Phoebe (b. 1966). 
Tamie frequently assisted her husband in campaigning, and her gregariousness was seen as complementing his more shy and reserved nature. She advised him on most of the important decisions in his career, and in retirement he observed that "if she had been prime minister in 1983, we would have won". Fraser attended Anglican schools, although his parents were Presbyterian. In university he was inclined towards atheism, once writing that "the idea that God exists is a nonsense". However, his beliefs became less definite over time and tended towards agnosticism. During his political career, he occasionally self-described as Christian, such as in a 1975 interview with "The Catholic Weekly". Margaret Simons, the co-author of Fraser's memoirs, thought that he was "not religious, and yet thinks religion is a necessary thing". In a 2010 interview with her, he said: "I would probably like to be less logical and, you know, really able to believe there is a god, whether it is Allah, or the Christian god, or some other – but I think I studied too much philosophy ... you can never know". In 2004, Fraser designated the University of Melbourne the official custodian of his personal papers and library to create the Malcolm Fraser Collection at the university. Upon his death, Fraser's 1983 nemesis and often bitter opponent Hawke fondly described him as a "very significant figure in the history of Australian politics" who, in his post-Prime Ministerial years, "became an outstanding figure in the advancement of human rights issues in all respects", praised him for being "extraordinarily generous and welcoming to refugees from Indochina" and concluded that Fraser had "moved so far to the left he was almost out of sight". Andrew Peacock, who had challenged Fraser for the Liberal leadership and later succeeded him, said that he had "a deep respect and pleasurable memories of the first five years of the Fraser Government... 
I disagreed with him later on but during that period in the 1970s he was a very effective Prime Minister", and lamented that "despite all my arguments with him later on I am filled with admiration for his efforts on China". In June 2018, he was honoured with the naming of the Australian Electoral Division of Fraser in the inner north-western suburbs of Melbourne.
https://en.wikipedia.org/wiki?curid=19734
Macquarie University Macquarie University is a public research university based in Sydney, Australia, in the suburb of Macquarie Park. Founded in 1964 by the New South Wales Government, it was the third university to be established in the metropolitan area of Sydney. Established as a verdant university, Macquarie has five faculties, as well as the Macquarie University Hospital and the Macquarie Graduate School of Management, which are located on the university's main campus in suburban Sydney. The university is the first in Australia to fully align its degree system with the Bologna Accord. The idea of founding a third university in Sydney was flagged in the early 1960s when the New South Wales Government formed a committee of enquiry into higher education to deal with a perceived emergency in university enrolments in New South Wales. During this enquiry, the Senate of the University of Sydney put in a submission which highlighted 'the immediate need to establish a third university in the metropolitan area'. After much debate a future campus location was selected in what was then a semi-rural part of North Ryde, and it was decided that the future university be named after Lachlan Macquarie, an important early governor of the colony of New South Wales. Macquarie University was formally established in 1964 with the passage of the Macquarie University Act 1964 by the New South Wales parliament. The initial concept of the campus was to create a new high technology corridor, similar to the area surrounding Stanford University in Palo Alto, California, the goal being to provide for interaction between industry and the new university. The academic core was designed in the Brutalist style and developed by the renowned town planner Walter Abraham, who also oversaw the next 20 years of planning and development for the university. 
A committee appointed to advise the state government on the establishment of the new university at North Ryde nominated Abraham as the architect-planner. The fledgling Macquarie University Council decided that planning for the campus would be done within the university, rather than by consultants, and this led to the establishment of the architect-planners office. The first Vice-Chancellor of Macquarie University, Alexander George Mitchell, was selected by the University Council, which met for the first time on 17 June 1964. Members of the first university council included: Colonel Sir Edward Ford OBE, David Paver Mellor, Rae Else-Mitchell QC and Sir Walter Scott. The university first opened to students on 6 March 1967 with more students than anticipated. The Australian Universities Commission had allowed for 510 effective full-time students (EFTS) but Macquarie had 956 enrolments and 622 EFTS. Between 1968 and 1969, enrolment at Macquarie increased dramatically with an extra 1200 EFTS, and 100 new academic staff were employed. 1969 also saw the establishment of the Macquarie Graduate School of Management (MGSM). Macquarie grew during the seventies and eighties with rapid expansion in courses offered, student numbers and development of the site. In 1972, the university established the Macquarie Law School, the third law school in Sydney. In their book "Liberality of Opportunity", Bruce Mansfield and Mark Hutchinson describe the founding of Macquarie University as 'an act of faith and a great experiment'. An additional topic considered in this book is the science reform movement of the late 1970s that resulted in the introduction of a named science degree, thus facilitating the subsequent inclusion of other named degrees in addition to the traditional BA. An alternative view on this topic is given by theoretical physicist John Ward. In 1973 the student union (MUSC) worked with the Builders Labourers Federation (BLF) to organise one of the first "pink bans". 
Similar in tactic to the green ban, the pink ban was recommended when one of the residential colleges at Macquarie University, Robert Menzies College, ordered a student to lead a celibate life and undertake therapy and confession to cure himself of his homosexuality. The BLF decided to stop all construction work at the college until the university and the college Master made statements committing to a non-discriminatory university environment. MUSC was successful in engaging with the BLF again in 1974 when a woman at Macquarie University had her NSW Department of Education scholarship cancelled on the basis that she was a lesbian and therefore unfit to be a teacher. After over a decade of service, the first Vice-Chancellor Mitchell was succeeded by Edwin Webb in December 1975. Webb was required to steer the university through one of its most difficult periods, as the value of universities was debated and the government introduced significant funding cuts. Webb left the university in 1986 and was succeeded by Di Yerbury, the first female Vice-Chancellor in Australia. Yerbury would go on to hold the position of Vice-Chancellor for nearly 20 years. In 1990 the university absorbed the Institute of Early Childhood Studies of the Sydney College of Advanced Education, under the terms of the Higher Education (Amalgamation) Act 1989. Steven Schwartz replaced Di Yerbury at the beginning of 2006. Yerbury's departure was attended with much controversy, including a "bitter dispute" with Schwartz, disputed ownership of university artworks worth $13 million and Yerbury's salary package. In August 2006, Schwartz expressed concern about the actions of Yerbury in a letter to university auditors. Yerbury strongly denied any wrongdoing and claimed the artworks were hers. 
During 2007, Macquarie University restructured its student organisation after an audit raised questions about the management of hundreds of thousands of dollars in funds by student organisations. At the centre of the investigation was Victor Ma, president of the Macquarie University Students' Council, who was previously involved in a high-profile case of student election fixing at the University of Sydney. The university Council resolved to immediately remove Ma from his position. Vice-Chancellor Schwartz cited an urgent need to reform Macquarie's main student bodies. However, Ma strongly denied any wrongdoing and labelled the controversy a case of 'character assassination'. The Federal Court ordered on 23 May 2007 that Macquarie University Union Ltd be wound up. Following the dissolution of Macquarie University Union Ltd, the outgoing student organisation was replaced with a new wholly owned subsidiary company of the university, known as U@MQ Ltd. The new student organisation originally lacked a true student representative union; however, following a complete review and authorisation from the university Council, a new student union known as the Macquarie University Students Association (MUSRA) was established in 2009. Within the first few hundred days of Schwartz's appointment as Vice-Chancellor, the 'Macquarie@50' strategic plan was launched, which positioned the university to enhance research, teaching, infrastructure and academic rankings by the university's 50th anniversary in 2014. Included in the university's plans for the future was the establishment of a sustainability office in order to more effectively manage environmental and social development at Macquarie. As part of this campaign, in 2009 Macquarie became the first Fair Trade accredited university in Australia. 
The beginning of 2009 also saw the introduction of a new logo for the university which retained the Sirius Star, present on both the old logo and the university crest, but now 'embedded in a stylised lotus flower'. In accordance with the university by-law, the crest continues to be used for formal purposes and is displayed on university testamurs. The by-law also prescribes the university's motto, taken from Chaucer: 'And gladly teche'. In 2013, the university became the first in Australia to fully align its degree system with the Bologna Accord. Macquarie's coat of arms was assumed through a 1967 amendment of the Macquarie University Act 1964 (confirmed by Letters Patent of the College of Arms, 16 August 1969). The escutcheon displays the Macquarie Lighthouse tower, the first major public building in the colony, as well as the Sirius star, after which the flagship of the First Fleet was named. The university's founders originally wanted to base the university's arms on Lachlan Macquarie's family crest, but they decided to go for a more radical approach that represented Lachlan Macquarie as a builder and administrator. The motto chosen for the university was "And Gladly Teche". This is taken from the General Prologue of "The Canterbury Tales" (Geoffrey Chaucer, c. 1400) and symbolises the university's commitment to both learning and teaching. The coat of arms and the motto are used in a very limited number of formal communications. Macquarie has had a number of logos in its history. In 2014, the university launched a new logo as part of its Shared Identity Project. The logo reintroduced the Macquarie Lighthouse, a popular symbol within the University community, and maintained the Sirius Star. Macquarie University's main campus is located north-west of the Sydney CBD and is set on 126 hectares of rolling lawns and natural bushland. 
Macquarie's location within the high-technology corridor of Sydney's north-west, in close proximity to Macquarie Park and its surrounding industries, has been crucial in its development as a relatively research-intensive university. Prior to the development of the campus, most of the site was cultivated with peach orchards, market gardens and poultry farms. The university's first architect-planner was Walter Abraham, one of the first six administrators appointed to Macquarie University. As the site adapted from its former rural use to a busy collegiate environment, he implemented carefully designed planting programs across the campus. Abraham established a grid design of lots running north–south, with the aim of creating a compact academic core. The basic grid measure was seen as one minute's walk, and the design reflected the aim of having a maximum walk of 10 minutes between any two parts of the university. The main east–west walkway that runs from the Macquarie University Research Park through to the arts faculty buildings was named Wally's Walk in recognition of Walter Abraham's contribution to the development of the university. Apart from its centres of learning, the campus features the Macquarie University Research Park, museums, art galleries, a sculpture park, an observatory, a sport and aquatic centre and also the private Macquarie University Hospital. The campus has its own postcode, 2109. Macquarie became the first university in Australia to own and operate a private medical facility in 2010 when it opened a $300 million hospital on its campus. The hospital is the first and only private not-for-profit teaching hospital on an Australian university campus. The Macquarie University Hospital is located to the north of the main campus area towards the university sports grounds. It comprises 183 beds, 12 operating theatres, and 2 cardiac and vascular angiography suites. 
The hospital is co-located with the university's Australian School of Advanced Medicine. The university hosts a number of high technology companies on its campus. Primarily designed to encourage interaction between the university and industry, commercialisation of its campus has also given the institution an additional revenue stream. Tenants are selected based on their potential to collaborate with the university's researchers or their ability to provide opportunities for its students and graduates. Cochlear Limited has its headquarters in close proximity to the Australian Hearing Hub on the southern edge of campus. Other companies that have office space at the campus include Dow Corning, Goodman Fielder, Nortel Networks, OPSM and Siemens. The Macquarie University Observatory was originally constructed in 1978 as a research facility but, since 1997, has been accessible to the public through its Public Observing Program. The library houses over 1.8 million items and uses the Library of Congress Classification System. The library features several collections including a Rare Book Collection, a Palaeontology Collection and the Brunner Collection of Egyptological materials. Macquarie University operated two libraries during the transition to its new library building. The old library in building C7A closed at the end of July 2011 (and has since been repurposed as a student support and study space), and the new library in building C3C became fully operational on 1 August 2011. The new library was the first university library in Australia to possess an Automated Storage and Retrieval System (ASRS). The ASRS consists of an environmentally controlled vault with metal bins storing the items; robotic cranes retrieve an item on request and deliver it to the service desk for collection. The Macquarie University Incubator is a space to research and develop ideas that can be commercialised. It was established in 2017 as a part of the Macquarie Park Innovation District (MPID) project. 
Macquarie University received a $1 million grant from the New South Wales government to build the incubator. The university has also committed about $7 million to the incubator, with financial support from big business and the New South Wales government. It was officially opened by Prince Andrew, Duke of York, on 25 September 2017. Macquarie University has two residential colleges on its campus, Dunmore Lang College and Robert Menzies College, both founded in 1972. The colleges offer academic support and a wide range of social and sporting activities in a communal environment. Separate from the colleges is the Macquarie University Village. The village has over 900 rooms in mostly town-house-style buildings to the north of the campus. The village encourages its students to interact in its communal spaces and holds a number of social events throughout the year. The museums and collections of Macquarie University are extensive, comprising nine museums and galleries. Each collection focuses on various historical, scientific or artistic interests. The most visible collection on campus is the sculpture park, which is exhibited across the entire campus. With close to 100 sculptures on display, it is the largest park of its kind in the Southern Hemisphere. All museums and galleries are open to the public and offer educational programs for students at primary, secondary and tertiary levels. Located on the western side of the campus is the Macquarie University Sport and Aquatic Centre. Previously a sports hall facility, the complex was renovated and reopened in 2007 with the addition of a new gym and aquatic centre. It houses a 50-metre FINA-compliant outdoor pool and a 25-metre indoor pool. The complex also contains a gymnasium and squash, badminton, basketball, volleyball and netball courts. Macquarie also has seven hectares of high-quality playing fields for football, cricket and tennis. 
Situated to the north of the campus, the playing fields are used by the university as well as a number of elite sporting teams such as Sydney FC and the Westfield Matildas. Macquarie University is served by Macquarie University Station on the Sydney Metro Northwest line. The station opened in 2009 as part of the Epping to Chatswood Rail Link on the Sydney Trains network. Services initially ran as a shuttle between Epping and Chatswood before being incorporated into the T9 Northern Line. In 2018, Macquarie University station closed for six months for conversion to the Sydney Metro Northwest line. Platform screen doors were installed as part of the upgrade. Macquarie is the only university in Australia with a railway station on campus. The station is served by driverless Alstom Metropolis trains every ten minutes during off-peak periods and every four minutes during peak hours. There is also a major bus interchange within the campus that provides close to 800 bus services daily. The M2 Motorway runs parallel to the northern boundary of the campus and is accessible to traffic from the university. The university currently comprises 35 departments within five faculties, along with a number of affiliated research centres, schools and institutes. Macquarie University's Australian Hearing Hub is partnered with Cochlear, whose headquarters are on campus. The Australian Hearing Hub also includes the head office of Australian Hearing. The Australian Research Institute for Environment and Sustainability, a research centre that promotes change for environmental sustainability, is affiliated with the university and located on its campus. Access Macquarie Limited was established in 1989 as the commercial arm of the university. It facilitates and supports the commercial needs of industry, business and government organisations seeking to utilise the academic expertise of the broader university community. The university is governed by a 17-member Council. 
The University Council is the governing authority of the university under the "Macquarie University Act 1989". The Council takes primary responsibility for the control and management of the affairs of the university, and is empowered to make by-laws and rules relating to how the university is managed. Members of the Council include the Vice-Chancellor, academic and non-academic staff, the Vice-President of the Academic Senate and a student representative. The Council is chaired by the Chancellor of the university. The Academic Senate is the primary academic body of the university. It has certain powers delegated to it by the Council, such as approving examination results and the completion of requirements for the award of degrees. It also makes recommendations to the Council concerning all changes to degree rules and all proposals for new awards. While the Academic Senate is an independent body, it is required to make recommendations to the university Council on matters outside its delegated authority. Macquarie's current Vice-Chancellor, Bruce Dowton, took over from Steven Schwartz in September 2012. Prior to his appointment, Dowton served as a senior medical executive, having held a range of positions in university, healthcare and consulting organisations. He also served as a pediatrician at the Massachusetts General Hospital for Children and as Clinical Professor of Pediatrics at Harvard Medical School. There have been five Vice-Chancellors in the university's history. The Macquarie University International College offers Foundation Studies (pre-university) and university-level diplomas. Upon successful completion of a MUIC diploma, students enter the appropriate bachelor's degree as a second-year student. The Centre for Macquarie English is the English-language centre that offers a range of specialised, direct-entry English programmes approved by Macquarie University. 
The university positions itself as being research-intensive. In 2012, 85% of Macquarie's broad fields of research were rated 'at or above world standard' in the Excellence in Research for Australia 2012 national report. The university is within the top three universities in Australia for the number of peer-reviewed publications produced per academic staff member. Researchers at Macquarie University, David Skellern and Neil Weste, together with the Commonwealth Scientific and Industrial Research Organisation, helped develop Wi-Fi. David Skellern has been a major donor to the university through the Skellern Family Trust. Macquarie physicists Frank Duarte and Jim Piper pioneered the laser designs adopted by researchers worldwide, in various major national programs, for atomic vapor laser isotope separation. Macquarie University's linguistics department developed the Macquarie Dictionary, which is regarded as the standard reference on Australian English. Macquarie University has research partnerships with the University of Hamburg in Germany and Fudan University in China; they offer dual and joint degree programs and engage in joint research. In world rankings, Macquarie University (MQ) is placed 237th by QS, 251+ by Times Higher Education (THE), 151+ by the Academic Ranking of World Universities (ARWU), and equal 267th by U.S. News, making it the eighth-ranked Australian university overall across these ranking systems. Within Australia, Macquarie was placed eighth on the 2012 ERA scale and holds a 4½-star AEN rating. Macquarie also has student satisfaction survey ratings of 77.4% for business, 90.3% for health, 91.4% for arts, and 93.8% for science. Macquarie is ranked in the top 40 universities in the Asia-Pacific region and within Australia's top 12 universities according to the Academic Ranking of World Universities, the U.S. News & World Report Rankings and the QS World University Rankings. 
Macquarie was the highest ranked university in Australia under the age of 50 and was ranked 18th in the world (prior to its golden jubilee in 2014), according to the QS World University Rankings. Internationally, Macquarie was ranked 239th in the world (9th in Australia) in the Academic Ranking of World Universities of 2014. Macquarie University was ranked among the top 50 universities in the world for linguistics (43rd), psychology (48th) and earth and marine sciences (48th), and was ranked in the top 5 nationally for philosophy and earth and marine sciences, according to the 2014 QS World University Rankings. Macquarie ranked 67th in the world for Arts and Humanities (equal 5th in Australia), according to the 2015 Times Higher Education rankings by subject and 54th in the world for arts and humanities, according to the 2017 USNWR rankings by subject. Arts and Humanities is Macquarie's best discipline area in rankings. Macquarie was one of four non-Group of Eight universities ranked in the top 100 universities in the world in particular discipline areas. The Macquarie Graduate School of Management is one of the oldest business schools in Australia. In 2014, "The Economist" ranked MGSM 5th in the Asia-Pacific, 3rd in Australia, 1st in Sydney/New South Wales and 49th in the world. It was the highest ranked business school in Australia and was ranked 68th in the world in the 2015 "Financial Times" MBA ranking. Macquarie is the fourth largest university in Sydney (38,753 students in 2013). The university has the largest student exchange programme in Australia. In 2012, 9,802 students from Asia were enrolled at Macquarie University (Sydney campuses and offshore programs in China, Hong Kong, Korea and Singapore). Campus Life manages the university's non-academic services: food and retail, sport and recreation, student groups, child care, and entertainment. 
From late 2017 onward, its Campus Hub facility has been closed for reconstruction; a 'pop-up'-style replacement, the Campus Common, has been opened for the duration. The Global Leadership Program (GLP) is a university-funded extracurricular program that is open to all students and can be undertaken alongside any degree at Macquarie University. The GLP aims to instil leadership and innovation skills, cross-cultural understanding and a sense of global citizenship in its graduates. Upon successful completion of the GLP, students receive a formal notation on their academic transcript and a certificate. Macquarie's GLP was the first of its kind in the Australian university sector when it launched in 2005, and it is the country's flagship tertiary global leadership program, with more than 4,000 active participants across more than 200 academic disciplines. The GLP is a voluntary, extracurricular learning and engagement program that students design according to their own interests and complete at their own pace. Students are required to complete a workshop series, attend tailored keynote speaker and networking events, and complete an experiential credit component. Options range from short-term study abroad, volunteering (domestic and/or international) and internships (domestic and/or international) to learning a new language or attending internationally themed seminars and study tours. The GLP won the Institute of International Education's 2017 Heiskell Award for Innovation in International Education - Internationalising the Campus; Macquarie University is the first Southern Hemisphere university to receive the award in its 17-year history. The GLP was also awarded the 2018 NSW International Student Community Engagement Award (joint winner) in the Education Provider category. 
This award recognises the innovative way in which the GLP facilitates connection and engagement with the community for Macquarie University international GLP students, and also recognises the contribution that the GLP makes to the international student experience in New South Wales. In 2019, the GLP won the Global PIEoneer Award for International Education in the category of 'Progressive Education Delivery' at Guildhall, London. The PIEoneer Awards are the only global awards that celebrate innovation and achievement across the whole of the international education industry. Macquarie University has its own community radio station on campus, 2SER FM. The station is jointly owned by Macquarie University and the University of Technology, Sydney. Macquarie University students have celebrated Conception Day each year since 1969 to – according to legend – commemorate the date of conception of Lachlan Macquarie, as his birthday fell at the wrong time of year for a celebration. Conception Day is traditionally held on the last day of classes before the September mid-semester break. Alumni include Rhodes and John Monash Scholars and several Fulbright Scholars. Notable alumni include: Australian politician and former Lord Mayor of Brisbane, Jim Soorley; Australian politician, Tanya Plibersek; Australian basketball player, Lauren Jackson; Australian swimmer, Ian Thorpe; Australian water polo player, Holly Lincoln-Smith; three founding members of the Australian children's musical group The Wiggles (Murray Cook, Anthony Field, Greg Page); former Director-General of the National Library of Australia, Anne-Marie Schwirtlich AM; and New Zealand conservationist, Pete Bethune. Notable alumni in science include Australian scientist Barry Brook, American physicist Frank Duarte, and Australian physicist Cathy Foley. 
Alumni notable in the business world include Australian hedge fund manager Greg Coffey, Australian businesswoman Catherine Livingstone, Freelancer.com founder Matt Barrie, businessman Napoleon Perdis and Australian venture capitalist Larry R. Marshall. Notable faculty members include: Indian neurosurgeon, B. K. Misra; Australian writer and four-time Miles Franklin Award winner, Thea Astley; Hungarian-Australian mathematician, Esther Szekeres; Australian mathematician, Neil Trudinger; Australian composer, Phillip Wilcher; Australian environmentalist and activist, Tim Flannery; British physicist and author, Paul Davies; British-Australian physicist, John Clive Ward; Israeli-Australian mathematician, José Enrique Moyal; Australian linguist, Geoffrey Hull; Australian geologist and Fellow of the Australian Academy of Science, John Veevers; Australian climatologist, Ann Henderson-Sellers; and Australian sociologist, Raewyn Connell. Four Macquarie University academics were included in The World's Most Influential Minds 2014 report by Thomson Reuters, which identified the most highly cited researchers of the last 11 years.
https://en.wikipedia.org/wiki?curid=19735
Muspelheim In Norse cosmology, Muspelheim, also called Muspell, is a realm of fire. The etymology of "Muspelheim" is uncertain, but it may come from "Mund-spilli", "world-destroyers" or "wreck of the world". According to Norse myth there are said to be nine worlds, of which the "Poetic Edda" poem "Völuspá" tells. Muspelheim is described as a hot and glowing land of fire, home to the fire giants and guarded by Surtr with his flaming sword. It features in both the creation and destruction stories of Norse myth. According to the "Prose Edda", a great time before the earth was made, Niflheim existed. Inside Niflheim was a well called Hvergelmer, from which flowed numerous streams known as the Elivog. Their names were Svol, Gunnthro, Form, Finbul, Thul, Slid and Hrid, Sylg and Ylg, Vid, Leipt and Gjoll. After a time these streams had traveled so far from their source at Niflheim that the venom that flowed within them hardened and turned to ice. When this ice eventually settled, rain rose up from it and froze into rime. This ice then began to layer itself over the primordial void, Ginungagap. This made the northern portion of Ginungagap thick with ice, and storms began to form within. In the southern region of Ginungagap, however, glowing sparks were flying out of Muspelheim. When the heat and sparks from Muspelheim met the ice, it began to melt. These sparks would go on to create the Sun, Moon, and stars, and the drops would form the primeval being Ymir: "by the might of him who sent the heat, the drops quickened into life and took the likeness of a man, who got the name Ymer. But the Frost giants call him Aurgelmer". The "Prose Edda" section "Gylfaginning" foretells that the sons of Muspell will break the Bifröst bridge as part of the events of Ragnarök.
https://en.wikipedia.org/wiki?curid=19736
Mariah Carey Mariah Carey (born March 27, 1969 or 1970) is an American singer, songwriter, record producer, actress, entrepreneur, and philanthropist. Renowned for her five-octave vocal range, melismatic singing style, and signature use of the whistle register, Carey is referred to as the "Songbird Supreme" by "Guinness World Records". She rose to fame in 1990 after signing to Columbia Records and releasing her eponymous debut album, which topped the US "Billboard" 200 for eleven consecutive weeks. In 1993, Carey married Sony Music head Tommy Mottola, who had signed her to Columbia. She achieved worldwide success with the follow-up albums "Music Box" (1993), "Merry Christmas" (1994), and "Daydream" (1995). These albums spawned some of Carey's most successful singles, including "Hero", "Without You", "All I Want for Christmas Is You", "Fantasy" and "Always Be My Baby", as well as "One Sweet Day", which topped the US "Billboard" Hot 100 for 16 weeks and became "Billboard"s Song of the Decade for the 1990s. After separating from Mottola, Carey adopted a new image and incorporated more elements of hip hop into her music with the release of "Butterfly" (1997). "Billboard" named her the country's most successful artist of the 1990s, while the World Music Awards honored her as the world's best-selling music artist of the 1990s and the best-selling female artist of the millennium. After eleven consecutive years of charting a US number-one single, Carey parted ways with Columbia in 2000 and signed a $100 million recording contract with Virgin Records. However, following her highly publicized physical and emotional breakdown, as well as the critical and commercial failure of her film "Glitter" (2001) and its accompanying soundtrack, her contract was bought out for $50 million by Virgin and she signed with Island Records the following year. 
After a relatively unsuccessful period, she returned to the top of the music charts with "The Emancipation of Mimi" (2005), the world's second-best-selling album of 2005. Its second single, "We Belong Together", topped the "Billboard" Hot 100 for 14 weeks and became "Billboard"s Song of the Decade for the 2000s. In 2009, she was cast in the critically acclaimed film "Precious", for which she won the Breakthrough Actress Performance Award at the Palm Springs International Film Festival. Throughout her career, Carey has sold over 200 million records worldwide, making her one of the best-selling music artists of all time. With a total of 19 songs topping the "Billboard" Hot 100, Carey holds the record for the most number-one singles by a solo artist, a female songwriter, and a female producer. According to the Recording Industry Association of America (RIAA), she is the second-highest-certified female artist in the United States, with 66.5 million certified album units. In 2012, she was ranked second on VH1's list of the 100 Greatest Women in Music. In 2019, "Billboard" named her the all-time top female artist in the United States, based on both album and song chart performances. Aside from her commercial accomplishments, Carey has won five Grammy Awards, nineteen World Music Awards, ten American Music Awards, and fifteen Billboard Music Awards. An inductee of the Songwriters Hall of Fame, she is noted for inspiring other artists in pop and contemporary R&B music. Mariah Carey was born in Huntington, New York, on March 27, 1969 or 1970.
https://en.wikipedia.org/wiki?curid=19499
Mervyn Peake Mervyn Laurence Peake (9 July 1911 – 17 November 1968) was an English writer, artist, poet, and illustrator. He is best known for what are usually referred to as the "Gormenghast" books. The three works were part of what Peake conceived as a lengthy cycle, the completion of which was prevented by his death. They are sometimes compared to the work of his older contemporary J. R. R. Tolkien, but Peake's surreal fiction was influenced by his early love for Charles Dickens and Robert Louis Stevenson rather than Tolkien's studies of mythology and philology. Peake also wrote poetry and literary nonsense in verse form, short stories for adults and children ("Letters from a Lost Uncle", 1948), stage and radio plays, and "Mr Pye" (1953), a relatively tightly-structured novel in which God implicitly mocks the evangelical pretensions and cosy world-view of the eponymous hero. Peake first made his reputation as a painter and illustrator during the 1930s and 1940s, when he lived in London, and he was commissioned to produce portraits of well-known people. For a short time at the end of World War II he was commissioned by various newspapers to depict war scenes. A collection of his drawings is still in the possession of his family. Although he gained little popular success in his lifetime, his work was highly respected by his peers, and his friends included Dylan Thomas and Graham Greene. His works are now included in the collections of the National Portrait Gallery, the Imperial War Museum and The National Archives. In 2008, "The Times" named Peake among their list of "The 50 greatest British writers since 1945". Mervyn Peake was born of British parents in Kuling (Lushan) in Jiangxi Province of central China in 1911, only three months before the revolution and the founding of the Republic of China. 
His father, Ernest Cromwell Peake, was a medical missionary doctor with the London Missionary Society of the Congregationalist tradition, and his mother, Amanda Elizabeth Powell, had come to China as a missionary assistant. The Peakes were given leave to visit England just before World War I in 1914 and returned to China in 1916. Mervyn Peake attended Tientsin Grammar School until the family left for England in December 1922 via the Trans-Siberian Railway. About this time he wrote a novella, "The White Chief of the Umzimbooboo Kaffirs". Peake never returned to China but it has been noted that Chinese influences can be detected in his works, not least in the castle of Gormenghast itself, which in some respects echoes the ancient walled city of Beijing, as well as the enclosed compound where he grew up in Tianjin. It is also likely that his early exposure to the contrasts between the lives of the Europeans and of the Chinese, and between the poor and the wealthy in China, also exerted an influence on the Gormenghast books. His education continued at Eltham College, Mottingham (1923–29), where his talents were encouraged by his English teacher, Eric Drake. Peake completed his formal education at Croydon School of Art in the autumn of 1929 and then from December 1929 to 1933 at the Royal Academy Schools, where he first painted in oils. By this time he had written his first long poem, "A Touch o' the Ash". In 1931 he had a painting accepted for display by the Royal Academy and exhibited his work with the so-called "Soho Group". His early career in the 1930s was as a painter in London, although he lived on the Channel Island of Sark for a time. He first moved to Sark in 1932 where his former teacher Eric Drake was setting up an artists' colony. In 1934 Peake exhibited with the Sark artists both in the Sark Gallery built by Drake and at the Cooling Galleries in London, and in 1935 he exhibited at the Royal Academy and at the Leger Galleries in London. 
In 1936 he returned to London and was commissioned to design the sets and costumes for "The Insect Play" and his work was acclaimed in "The Sunday Times". He also began teaching life drawing at Westminster School of Art where he met the painter Maeve Gilmore, whom he married in 1937. They had three children, Sebastian (1940–2012), Fabian (b. 1942), and Clare (b. 1949). He had a very successful exhibition of paintings at the Calmann Gallery in London in 1938 and his first book, the self-illustrated children's pirate romance "Captain Slaughterboard Drops Anchor" (based on a story he had written around 1936) was first published in 1939 by "Country Life". In December 1939 he was commissioned by Chatto & Windus to illustrate a children's book, "Ride a Cock Horse and Other Nursery Rhymes", published for the Christmas market in 1940. At the outbreak of World War II he applied to become a war artist for he was keen to put his skills at the service of his country. He imagined "An Exhibition by the Artist, Adolf Hitler", in which horrific images of war with ironic titles were offered as "artworks" by the Nazi leader. Although the drawings were bought by the British Ministry of Information, Peake's application was turned down and he was conscripted into the Army, where he served first with the Royal Artillery, then with the Royal Engineers. He began writing "Titus Groan" at this time. In April 1942, after his requests for commissions as a war artist – or even leave to depict war damage in London – had been consistently refused, he suffered a nervous breakdown and was sent to Southport Hospital. That autumn he was taken on as a graphic artist by the Ministry of Information for a period of six months to work on propaganda illustrations. The next spring he was invalided out of the Army. 
In 1943 he was commissioned by the War Artists' Advisory Committee, WAAC, to paint glassblowers at the Chance Brothers factory in Smethwick where cathode ray tubes for early radar sets were being produced. Peake was next given a full-time, three-month WAAC contract to depict various factory subjects and was also asked to submit a large painting showing RAF pilots being debriefed. Some of these paintings are on permanent display in Manchester Art Gallery whilst other examples are in the Imperial War Museum collection. The five years between 1943 and 1948 were some of the most productive of his career. He finished "Titus Groan" and "Gormenghast" and completed some of his most acclaimed illustrations for books by other authors, including Lewis Carroll's "The Hunting of the Snark" (for which he was reportedly paid only £5) and "Alice in Wonderland", Samuel Taylor Coleridge's "The Rime of the Ancient Mariner", the Brothers Grimm's "Household Tales", "All This and Bevin Too" by Quentin Crisp and Robert Louis Stevenson's "Strange Case of Dr Jekyll and Mr Hyde", as well as producing many original poems, drawings, and paintings. Peake designed the logo for Pan Books. The publishers offered him either a flat fee of £10 or a royalty of one farthing per book. On the advice of Graham Greene, who told him that paperback books were a passing fad that would not last, Peake opted for the £10. A book of nonsense poems, "Rhymes Without Reason", was published in 1944 and was described by John Betjeman as "outstanding". Shortly after the war ended in 1945, Edgar Ainsworth, the art editor of "Picture Post", commissioned Peake to visit France and Germany for the magazine. With writer Tom Pocock he was among the first British civilians to witness the horrors of the Nazi concentration camp at Belsen, where the remaining prisoners, too sick to be moved, were dying before his very eyes. 
He made several drawings, but not surprisingly he found the experience profoundly harrowing, and expressed in deeply felt poems the ambiguity of turning their suffering into art. In 1946 the family moved to Sark, where Peake continued to write and illustrate, and Maeve painted. "Gormenghast" was published in 1950, and the family moved back to England, settling in Smarden, Kent. Peake taught part-time at the Central School of Art, began his comic novel "Mr Pye", and renewed his interest in theatre. His father died that year and left his house in Hillside Gardens in Wallington, Surrey to Mervyn. "Mr Pye" was published in 1953, and he later adapted it as a radio play. The BBC broadcast other plays of his in 1954 and 1956. In 1956 Mervyn and Maeve visited Spain, financed by a friend who hoped that Peake's health, which was already declining, would be improved by the holiday. That year his novella "Boy in Darkness" was published beside stories by William Golding and John Wyndham in a volume called "Sometime, Never". On 18 December the BBC broadcast his radio play "The Eye of the Beholder" (later revised as "The Voice of One") in which an avant-garde artist is commissioned to paint a church mural. Peake placed much hope in his play "The Wit to Woo", which was finally staged in London's West End in 1957, but it was a critical and commercial failure. This affected him greatly – his health degenerated rapidly and he was again admitted to hospital with a nervous breakdown. He was showing unmistakable early symptoms of dementia, for which he was given electroconvulsive therapy, to little avail. Over the next few years he gradually lost the ability to draw steadily and quickly, although he still managed to produce some drawings with the help of his wife. Among his last completed works were the illustrations for Balzac's "Droll Stories" (1961) and for his own poem "The Rhyme of the Flying Bomb" (1962), which he had written some 15 years earlier. 
"Titus Alone" was published in 1959 and was revised in 1970 by Langdon Jones, editor of "New Worlds", to remove apparent inconsistencies introduced by the publisher's careless editing. A 1995 edition of all three completed Gormenghast novels includes a very short fragment of the beginning of what would have been the fourth Gormenghast novel, "Titus Awakes", as well as a listing of events and themes he wanted to address in that and later Gormenghast novels. Throughout the 1960s, Peake's health declined into physical and mental incapacitation, and he died on 17 November 1968 at a care home run by his brother-in-law, at Burcot, near Oxford. He was buried in the churchyard of St Mary's in the village of Burpham, Sussex. A 2003 study published in JAMA Neurology assessed that Peake's death was the result of dementia with Lewy bodies (DLB). His work, especially the "Gormenghast" series, became much better known and more widely appreciated after his death, and has since been translated into more than two dozen languages. Six volumes of Peake's verse were published during his lifetime: "Shapes & Sounds" (1941), "Rhymes without Reason" (1944), "The Glassblowers" (1950), "The Rhyme of the Flying Bomb" (1962), "Poems & Drawings" (1965), and "A Reverie of Bone" (1967). After his death came "Selected Poems" (1972), followed by "Peake's Progress" in 1979; the Penguin edition of 1982 contained many corrections, including a whole stanza inadvertently omitted from the hardback edition. "The Collected Poems of Mervyn Peake" was published by Carcanet Press in June 2008. Other collections include "The Drawings of Mervyn Peake" (1974), "Writings and Drawings" (1974), and "Mervyn Peake: the man and his art" (2006). An extremely expensive limited edition of the collected works, issued to celebrate Peake's centenary year, was published by Queen Anne Press, but the editing and reproduction of drawings did not match the price asked. 
In 2010 an archive consisting of 28 containers of material, which included correspondence between Peake and Laurie Lee, Walter de la Mare and C. S. Lewis, plus 39 Gormenghast notebooks and original drawings for both "Alice Through the Looking Glass" and "Alice's Adventures in Wonderland", was acquired by the British Library. Access to the Mervyn Peake Archive is available through the British Library website. Peake's three children presented on BBC Radio Four in 2018 a half-hour memoir of their father's life, emphasizing the importance of the island of Sark. The first blue plaque on Sark was unveiled in Peake's honour at the Gallery Stores in the Avenue on 30 August 2019. In 1983, the Australian Broadcasting Corporation broadcast eight hour-long episodes for radio dramatising the complete Gormenghast Trilogy. This was the first to include the third book "Titus Alone". In 1984, BBC Radio 4 broadcast two 90-minute plays based on "Titus Groan" and "Gormenghast", adapted by Brian Sibley and starring Sting as Steerpike and Freddie Jones as the Artist (narrator). A slightly abridged compilation of the two, running to 160 minutes, and entitled "Titus Groan of Gormenghast", was broadcast on Christmas Day, 1992. BBC 7 repeated the original versions on 21 and 28 September 2003. In 1986, "Mr Pye" was adapted as a four-part Channel 4 miniseries starring Derek Jacobi. In 2000, the BBC and WGBH Boston co-produced a lavish miniseries, titled "Gormenghast", based on the first two books of the series. It starred Jonathan Rhys-Meyers as Steerpike, Neve McIntosh as Fuchsia, June Brown as Nannie Slagg, Ian Richardson as Lord Groan, Christopher Lee as Flay, Richard Griffiths as Swelter, Warren Mitchell as Barquentine, Celia Imrie as Countess Gertrude, Lynsey Baxter and Zoë Wanamaker as the twins, Cora and Clarice, and John Sessions as Dr Prunesquallor. 
The supporting cast included Olga Sosnovska, Stephen Fry and Eric Sykes and the series is also notable as the last screen performance by comedy legend Spike Milligan (as the Headmaster). A 30-minute TV short film "A Boy in Darkness" (also made in 2000 and adapted from Peake's novella) was the first production from the BBC Drama Lab. It was set in a 'virtual' computer-generated world created by young computer game designers, and starred Jack Ryder (from "EastEnders") as Titus, with Terry Jones ("Monty Python's Flying Circus") narrating. Irmin Schmidt, founder of seminal German Krautrock group Can, wrote an opera called "Gormenghast", based on the novels; it was first performed in Wuppertal, Germany, in November 1998. A number of early songs by New Zealand rock group Split Enz were inspired by Peake's work. The song "The Drowning Man", by British band The Cure, is inspired by events in "Gormenghast", and the song "Lady Fuchsia" by another British band, Strawbs, is also based on events in the novels. Peake's play "The Cave", which dates from the mid-1950s, was given a first public reading at the Blue Elephant Theatre in Camberwell (London) in 2009, and had its world premiere in the same theatre, directed by Aaron Paterson, on 19 October 2010. In 2011 Brian Sibley adapted the story again, this time as six one-hour episodes broadcast on BBC Radio 4 as the Classic Serial starting on 10 July 2011. The serial was titled "The History of Titus Groan" and adapted all three novels written by Mervyn Peake and the recently discovered concluding volume, "Titus Awakes", completed by his widow, Maeve Gilmore. It starred Luke Treadaway as Titus, David Warner as the Artist and Carl Prekopp as Steerpike. It also starred Paul Rhys, Miranda Richardson, James Fleet, Tamsin Greig, Fenella Woolgar, Adrian Scarborough and Mark Benton among others. 
Sting owned the film rights to the "Gormenghast" novels for a brief period in the 1980s, during which he discussed the possibility of adapting the novels into a series of concept albums, but he abandoned the idea after declaring the Radio 4 audio drama ideal. As of 2015, author Neil Gaiman was in talks to adapt the novels for the big screen. "Boy in Darkness and other stories" (2007) presents the correct text of the novella alongside five other pieces.
https://en.wikipedia.org/wiki?curid=19500
Martial arts Martial arts are codified systems and traditions of combat practiced for a number of reasons such as self-defense; military and law enforcement applications; competition; physical, mental and spiritual development; and entertainment or the preservation of a nation's intangible cultural heritage. Although the term "martial art" has become associated with the fighting arts of East Asia, it originally referred to the combat systems of Europe as early as the 1550s. The term is derived from Latin and means "arts of Mars", the Roman god of war. Some authors have argued that fighting arts or fighting systems would be more appropriate on the basis that many martial arts were never "martial" in the sense of being used or created by professional warriors. Martial arts may be categorized using a variety of criteria. Unarmed martial arts can be broadly grouped into those focusing on strikes, those focusing on grappling, and those that cover both fields, often described as hybrid martial arts. The traditional martial arts that cover armed combat often encompass a wide spectrum of melee weapons, including bladed weapons and polearms. Such traditions include eskrima, silat, kalaripayat, kobudo, and historical European martial arts, especially those of the German Renaissance. Many Chinese martial arts also feature weapons as part of their curriculum. Sometimes, training with one specific weapon may be considered a style in its own right, especially in the case of Japanese martial arts, with disciplines such as kenjutsu and kendo (sword), bojutsu (staff), and kyūdō (archery). Similarly, modern martial arts and sports include modern fencing, stick-fighting systems like canne de combat, and modern competitive archery. Many martial arts, especially those from Asia, also teach side disciplines which pertain to medicinal practices. 
This is particularly prevalent in traditional Asian martial arts which may teach bone-setting, herbalism, and other aspects of traditional medicine. Martial arts can also be linked with religion and spirituality. Numerous systems are reputed to have been founded, disseminated, or practiced by monks or nuns. Throughout the Asian arts, meditation may be incorporated as a part of training. In the arts influenced by Hindu-Buddhist philosophy, the practice itself may be used as an aid to attaining enlightenment. Japanese styles, when concerning non-physical qualities of the combat, are often strongly influenced by Mahayana Buddhist philosophy. Concepts like "empty mind" and "beginner's mind" are recurrent. Aikido practitioners, for instance, can hold a strong philosophical belief in the flow of energy and the fostering of peace, as idealised by the art's founder, Morihei Ueshiba. Traditional Korean martial arts place emphasis on the practitioner's spiritual and philosophical development. A common theme in most Korean styles, such as Taekkyon and taekwondo, is the value of "inner peace" in a practitioner, which is stressed to be achievable only through individual meditation and training. The Koreans believe that the use of physical force is only justifiable for self-defense. Systema draws upon breathing and relaxation techniques, as well as elements of Russian Orthodox thought, to foster self-awareness and calmness, and to benefit the practitioner on different levels: the physical, the psychological and the spiritual. Some martial arts in various cultures can be performed in dance-like settings for various reasons, such as for evoking ferocity in preparation for battle or showing off skill in a more stylized manner, with capoeira being the most prominent example. Many such martial arts incorporate music, especially strong percussive rhythms. (See also war dance.) Human warfare dates back to the Epipalaeolithic to early Neolithic era. 
The oldest works of art depicting scenes of battle are cave paintings from eastern Spain (Spanish Levante) dated between 10,000 and 6,000 BC that show organized groups fighting with bows and arrows. Similar evidence of warfare has been found in Epipalaeolithic to early Neolithic era mass burials, excavated in Germany and at Jebel Sahaba in Northern Sudan. Wrestling is the oldest combat sport, with origins in hand-to-hand combat. Belt wrestling was depicted in works of art from Mesopotamia and Ancient Egypt circa 3000 BC, and later in the Sumerian "Epic of Gilgamesh". The earliest known depiction of boxing comes from a Sumerian relief in Mesopotamia (modern Iraq) from the 3rd millennium BC. Chinese martial arts originated during the legendary, possibly apocryphal, Xia Dynasty more than 4000 years ago. It is said the Yellow Emperor Huangdi (legendary date of ascension 2698 BC) introduced the earliest fighting systems to China. The Yellow Emperor is described as a famous general who, before becoming China's leader, wrote lengthy treatises on medicine, astrology and martial arts. One of his main opponents was Chi You, who was credited as the creator of jiao di, a forerunner of the modern art of Chinese wrestling. The foundation of modern Asian martial arts is likely a blend of early Chinese and Indian martial arts. During the Warring States period of Chinese history (480-221 BC) extensive development in martial philosophy and strategy emerged, as described by Sun Tzu in "The Art of War" (c. 350 BC). Legendary accounts link the origin of Shaolinquan to the spread of Buddhism to China from ancient India during the early 5th century AD, through the figure of Bodhidharma. Written evidence of martial arts in Southern India dates back to the Sangam literature of about the 2nd century BC to the 2nd century AD. The combat techniques of the Sangam period were the earliest precursors to Kalaripayattu. In Europe, the earliest sources of martial arts traditions date to Ancient Greece. 
Boxing ("pygme", "pyx"), wrestling ("pale") and pankration were represented in the Ancient Olympic Games. The Romans produced gladiatorial combat as a public spectacle. A number of historical combat manuals have survived from the European Middle Ages. This includes such styles as sword and shield, two-handed swordfighting and other types of melee weapons besides unarmed combat. Amongst these are transcriptions of Johannes Liechtenauer's mnemonic poem on the longsword dating back to the late fourteenth century. Likewise, Asian martial arts became well-documented during the medieval period, Japanese martial arts beginning with the establishment of the samurai nobility in the 12th century, Chinese martial arts with Ming era treatises such as Ji Xiao Xin Shu, Indian martial arts in medieval texts such as the Agni Purana and the Malla Purana, and Korean martial arts from the Joseon era and texts such as Muyejebo (1598). European swordsmanship always had a sportive component, but the duel was always a possibility until World War I. Modern sport fencing began developing during the 19th century as the French and Italian military academies began codifying instruction. The Olympic games led to standard international rules, with the Féderation Internationale d'Escrime founded in 1913. Modern boxing originates with Jack Broughton's rules in the 18th century, and reaches its present form with the Marquess of Queensberry Rules of 1867. Certain traditional combat sports and fighting styles exist all over the world, rooted in local culture and folklore. The most common of these are styles of folk wrestling, some of which have been practiced since antiquity and are found in the most remote areas. Other examples include forms of stick fighting and boxing. While these arts are based on historical traditions of folklore, they are not "historical" in the sense that they reconstruct or preserve a historical system from a specific era. 
They are rather contemporary regional sports that coexist with the modern forms of martial arts sports as they have developed since the 19th century, often including cross-fertilization between sports and folk styles; thus, the traditional Thai art of muay boran developed into the modern national sport of muay Thai, which in turn came to be practiced worldwide and contributed significantly to modern hybrid styles like kickboxing and mixed martial arts. Singlestick, an English martial art, can often be seen utilized in morris dancing. Many European dances share elements of martial arts, with examples including the Ukrainian Hopak, the Polish Zbójnicki (use of the ciupaga), the Czech dance odzemek, and the Norwegian Halling. The mid to late 19th century marks the beginning of the history of martial arts as modern sports developed out of earlier traditional fighting systems. In Europe, this concerns the developments of boxing and fencing as sports. In Japan, the same period marks the formation of the modern forms of judo, jujutsu, karate, and kendo (among others) based on revivals of old schools of Edo period martial arts which had been suppressed during the Meiji Restoration. Modern muay Thai rules date to the 1920s. In China, the modern history of martial arts begins in the Nanjing decade (1930s) following the foundation of the Central Guoshu Institute in 1928 under the Kuomintang government. Western interest in Asian martial arts arose towards the end of the 19th century, due to the increase in trade between the United States and China and Japan. Relatively few Westerners actually practiced the arts, considering them to be mere performance. Edward William Barton-Wright, a railway engineer who had studied jujutsu while working in Japan between 1894 and 1897, was the first man known to have taught Asian martial arts in Europe. He also founded an eclectic style named Bartitsu which combined jujutsu, judo, wrestling, boxing, savate and stick fighting. 
Fencing and Greco-Roman wrestling were included in the 1896 Summer Olympics. The FILA Wrestling World Championships and boxing at the Summer Olympics were introduced in 1904. The tradition of awarding championship belts in wrestling and boxing can be traced to the Lonsdale Belt, introduced in 1909. The International Boxing Association was established in 1920. World Fencing Championships have been held since 1921. As Western influence grew in Asia, a greater number of military personnel spent time in China, Japan and South Korea during World War II and the Korean War and were exposed to local fighting styles. Jujutsu, judo and karate first became popular among the mainstream in the 1950s and 1960s. Due in part to Asian and Hollywood martial arts movies, most modern American martial arts are either Asian-derived or Asian-influenced. The term kickboxing (キックボクシング) was created by the Japanese boxing promoter Osamu Noguchi for a variant of muay Thai and karate that he created in the 1950s. American kickboxing was developed in the 1970s as a combination of boxing and karate. Taekwondo was developed in the context of the Korean War in the 1950s. The later 1960s and 1970s witnessed an increased media interest in Chinese martial arts, influenced by martial artist Bruce Lee. Bruce Lee is credited as one of the first instructors to openly teach Chinese martial arts to Westerners. World Judo Championships have been held since 1956; judo was introduced at the Summer Olympics in 1964. Karate World Championships were introduced in 1970. Following the "kung fu wave" in Hong Kong action cinema in the 1970s, a number of mainstream films produced during the 1980s contributed significantly to the perception of martial arts in western popular culture. These include "The Karate Kid" (1984) and "Bloodsport" (1988). This era produced some Hollywood action stars with martial arts backgrounds, such as Jean-Claude Van Damme and Chuck Norris. 
Also during the 20th century, a number of martial arts were adapted for self-defense purposes for military hand-to-hand combat. World War II combatives, KAPAP (1930s) and Krav Maga (1950s) in Israel, Systema in Soviet-era Russia, and Sanshou in the People's Republic of China are examples of such systems. The US military de-emphasized hand-to-hand combat training during the Cold War period, but revived it with the introduction of LINE in 1989. During the 1990s Brazilian jiu-jitsu became popular and proved to be effective in mixed martial arts competitions such as the UFC and PRIDE. In 1993 the first Pancrase event was held. The K-1 rules of kickboxing were introduced, based on 1980s Seidokaikan karate. Jackie Chan and Jet Li are prominent martial artists who have become major movie figures. Their popularity and media presence has been at the forefront of promoting Chinese martial arts since the late 20th and early 21st centuries. With the continual discovery of more medieval and Renaissance fighting manuals, the practice of Historical European Martial Arts and other Western martial arts has been growing in popularity across the United States and Europe. On November 29, 2011, UNESCO inscribed Taekkyon onto its Intangible Cultural Heritage of Humanity List. Many martial arts which originated in Southern India were banned by the government of the British Raj; a few that barely survived include Kalaripayattu and Silambam. These and other martial arts survived because practitioners told the British government they were forms of dance. Varma kalai, a martial art concentrating on vital points, had almost died out but is gradually being revived. Testing or evaluation is important to martial artists of many disciplines who wish to determine their progression or own level of skill in specific contexts. Students often undergo periodic testing and grading by their own teacher in order to advance to a higher level of recognized achievement, such as a different belt color or title. 
The type of testing used varies from system to system but may include forms or sparring. Various forms and sparring are commonly used in martial art exhibitions and tournaments. Some competitions pit practitioners of different disciplines against each other using a common set of rules; these are referred to as mixed martial arts competitions. Rules for sparring vary between art and organization but can generally be divided into "light-contact", "medium-contact", and "full-contact" variants, reflecting the amount of force that should be used on an opponent. These types of sparring restrict the amount of force that may be used to hit an opponent; in the case of light sparring this is usually limited to 'touch' contact, e.g. a punch should be 'pulled' as soon as or before contact is made. In medium-contact (sometimes referred to as semi-contact) the punch would not be 'pulled', but would not land with full force. As the amount of force used is restricted, the aim of these types of sparring is not to knock out an opponent; a point system is used in competitions. A referee acts to monitor for fouls and to control the match, while judges mark down scores, as in boxing. Particular targets may be prohibited, certain techniques may be forbidden (such as headbutting or groin hits), and fighters may be required to wear protective equipment on their head, hands, chest, groin, shins or feet. Some grappling arts, such as aikido, use a similar method of compliant training that is equivalent to light or medium contact. In some styles (such as fencing and some styles of taekwondo sparring), competitors score points based on the landing of a single technique or strike as judged by the referee, whereupon the referee will briefly stop the match, award a point, then restart the match. Alternatively, sparring may continue with the point noted by the judges. Some critics of point sparring feel that this method of training teaches habits that result in lower combat effectiveness. 
Lighter-contact sparring may be used exclusively for children or in other situations when heavy contact would be inappropriate (such as with beginners), while medium-contact sparring is often used as training for full contact. Full-contact sparring or competition, where strikes or techniques are not pulled but used with full force as the name implies, has a number of tactical differences from light and medium-contact sparring. It is considered by some to be requisite in learning realistic unarmed combat. In full-contact sparring, the aim of a competitive match is to knock out the opponent or to force the opponent to submit. Where scoring takes place it may be a subsidiary measure, only used if no clear winner has been established by other means; in some competitions, such as UFC 1, there was no scoring, though most now use some form of judging as a backup. Due to these factors, full-contact matches tend to be more aggressive in character, but rule sets may still mandate the use of protective equipment, or limit the techniques allowed. Nearly all mixed martial arts organizations, such as UFC, Pancrase and Shooto, use a form of full-contact rules, as do professional boxing organizations and K-1. Kyokushin karate requires advanced practitioners to engage in bare-knuckled, full-contact sparring allowing kicks, knees and punches (although punches to the head are disallowed) while wearing only a karate "gi" and groin protector. Brazilian jiu-jitsu and judo matches do not allow striking, but are full-contact in the sense that full force is applied in the permitted grappling and submission techniques. Competitions held by World Taekwondo require the use of headgear and a padded vest, but are full contact in the sense that full force is applied to strikes to the head and body, and a win by knockout is possible. 
Martial arts have crossed over into sports when forms of sparring become competitive, becoming a sport in its own right that is dissociated from its combative origin, as with western fencing. The Summer Olympic Games include judo, taekwondo, western archery, boxing, javelin, wrestling and fencing as events, while Chinese wushu recently failed in its bid to be included, but is still actively performed in tournaments across the world. Practitioners in some arts such as kickboxing and Brazilian jiu-jitsu often train for sport matches, whereas those in other arts such as aikido generally spurn such competitions. Some schools believe that competition breeds better and more efficient practitioners, and gives a sense of good sportsmanship. Others believe that the rules under which competition takes place have diminished the combat effectiveness of martial arts or encourage a kind of practice which focuses on winning trophies rather than on goals such as cultivating a particular moral character. The question of "which is the best martial art" has led to inter-style competitions fought with very few rules, allowing a variety of fighting styles to enter with few limitations. This was the origin of the first Ultimate Fighting Championship tournament (later renamed UFC 1: The Beginning) in the U.S., inspired by the Brazilian vale tudo tradition; such minimal-rule competitions, most notably those from Japan such as Shooto and Pancrase, have evolved into the combat sport of mixed martial arts (MMA). Some martial artists compete in non-sparring competitions such as breaking or choreographed routines of techniques such as poomse, kata and aka, or modern variations of the martial arts which include dance-influenced competitions such as tricking. 
Martial traditions have been influenced by governments to become more sport-like for political purposes; the central impetus for the attempt by the People's Republic of China to transform Chinese martial arts into the committee-regulated sport of wushu was suppressing what it saw as the potentially subversive aspects of martial training, especially under the traditional system of family lineages. Martial arts training aims to provide several benefits to trainees, such as their physical, mental, emotional and spiritual health. Through systematic practice in the martial arts a person's physical fitness may be boosted (strength, stamina, speed, flexibility, movement coordination, etc.) as the whole body is exercised and the entire muscular system is activated. Beyond contributing to physical fitness, martial arts training also has benefits for mental health, contributing to self-esteem, self-control, and emotional and spiritual well-being. For this reason, a number of martial arts schools have focused purely on therapeutic aspects, de-emphasizing the historical aspect of self-defense or combat completely. According to Bruce Lee, martial arts also have the nature of an art, since there is emotional communication and complete emotional expression. Some traditional martial concepts have seen new use within modern military training. Perhaps the most recent example of this is point shooting, which relies on muscle memory to more effectively utilize a firearm in a variety of awkward situations, much the way an iaidoka would master movements with their sword. During the World War II era, William E. Fairbairn and Eric A. Sykes were recruited by the Special Operations Executive (SOE) to teach their martial art of defendu (itself drawing on Western boxing and jujutsu) and pistol shooting to UK, US, and Canadian special forces. The book "Kill or Get Killed", written by Colonel Rex Applegate, was based on the defendu taught by Sykes and Fairbairn. 
Both Fairbairn's "Get Tough" and Appelgate's "Kill or Get Killed" became classic works on hand-to-hand combat. Traditional hand-to-hand, knife, and spear techniques continue to see use in the composite systems developed for today's wars. Examples of this include European Unifight, the US Army's Combatives system developed by Matt Larsen, the Israeli army's KAPAP and Krav Maga, and the US Marine Corps's "Marine Corps Martial Arts Program" (MCMAP). Unarmed dagger defenses identical to those found in the manual of Fiore dei Liberi and the Codex Wallerstein were integrated into the U.S. Army's training manuals in 1942 and continue to influence today's systems along with other traditional systems such as eskrima and silat. The rifle-mounted bayonet which has its origin in the spear, has seen use by the United States Army, the United States Marine Corps, and the British Army as recently as the Iraq War. Many martial arts are also seen and used in Law Enforcement hand to hand training. For example, the Tokyo Riot Police's use of aikido. Martial arts since the 1970s has become a significant industry, a subset of the wider sport industry (including cinema and sports television). Hundreds of millions of people worldwide practice some form of martial art. Web Japan (sponsored by the Japanese Ministry of Foreign Affairs) claims there are 50 million karate practitioners worldwide. The South Korean government in 2009 published an estimate that taekwondo is practiced by 70 million people in 190 countries. The wholesale value of martial arts related sporting equipment shipped in the United States was estimated at 314 million USD in 2007; participation in the same year was estimated at 6.9 million (ages 6 or older, 2% of US population). R. A. Court, CEO of Martial Arts Channel, stated the total revenue of the US martial arts industry at USD 40 billion and the number of US practitioners at 30 million in 2003. 
Martial arts equipment can include that which is used for conditioning, protection and weapons. Specialized conditioning equipment can include breaking boards, dummy partners such as the wooden dummy, and targets such as punching bags and the makiwara. Protective equipment for sparring and competition includes boxing gloves and headgear. Asian martial arts experienced a surge of popularity in the west during the 1970s, and the rising demand resulted in numerous low quality or fraudulent schools. Fueled by fictional depictions in martial arts movies, this led to the ninja craze of the 1980s in the United States. There were also numerous fraudulent ads for martial arts training programs, inserted into comic books circa the 1960s and 1970s, which were read primarily by adolescent boys. When the martial arts came to the United States in the seventies, lower ranks (kyu) began to be given colorful belts to show progress. This proved to be commercially viable and colored-belt systems were adopted in many martial arts degree mills (also known as "McDojos" and "belt factories") as a means to generate additional cash. This was covered in the "" (June 2010).
https://en.wikipedia.org/wiki?curid=19501
Finitary relation In mathematics, a finitary relation over sets "X"1, …, "X""n" is a subset of the Cartesian product "X"1 × … × "X""n"; that is, it is a set of "n"-tuples ("x"1, …, "x""n") consisting of elements "x""i" in "X""i". Typically, the relation describes a possible connection between the elements of an "n"-tuple. For example, the relation ""x" is divisible by "y" and "z"" consists of the set of 3-tuples ("x", "y", "z") of numbers such that, when substituted for "x", "y" and "z" respectively, they make the sentence true. The non-negative integer "n" giving the number of "places" in the relation is called the "arity", "adicity" or "degree" of the relation. A relation with "n" "places" is variously called an "n"-ary relation, an "n"-adic relation or a relation of degree "n". Relations with a finite number of places are called "finitary relations" (or simply "relations" if the context is clear). It is also possible to generalize the concept to "infinitary relations" with infinite sequences. An "n"-ary relation over sets "X"1, …, "X""n" is an element of the power set of "X"1 × … × "X""n". There are only two 0-ary relations: the one that always holds, and the one that never holds. This is because there is only one 0-tuple, the empty tuple (). They are sometimes useful for constructing the base case of an induction argument. Unary relations can be viewed as a collection of members (such as the collection of Nobel laureates) having some property (such as that of having been awarded the Nobel prize). Binary relations are the most commonly studied form of finitary relations. When "X"1 = "X"2 the relation is called a homogeneous relation; otherwise it is a heterogeneous relation. Consider the ternary relation "R" = ""x" thinks that "y" likes "z"" over a set of people. "R" can be represented by a table in which each row is a triple of "R", that is, each row makes a statement of the form ""x" thinks that "y" likes "z"". For instance, one row might state that "Alice thinks that Bob likes Denise". All rows are distinct. 
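The definition of an "n"-ary relation as a set of "n"-tuples can be sketched directly in Python. The divisibility example above, restricted to a small finite domain (the range 1–12 is an arbitrary illustrative choice), becomes a set of 3-tuples:

```python
# A ternary relation modeled as a set of 3-tuples drawn from X1 x X2 x X3.
# Example relation: "x is divisible by y and z", over the domain 1..12.
domain = range(1, 13)

R = {(x, y, z)
     for x in domain for y in domain for z in domain
     if x % y == 0 and x % z == 0}

# Membership in the relation is simply tuple membership in the set.
print((12, 3, 4) in R)  # -> True  (12 is divisible by both 3 and 4)
print((6, 4, 2) in R)   # -> False (6 is not divisible by 4)
```

The same set-comprehension pattern works for any arity: a unary relation is a set of 1-tuples (or simply a subset of the domain), and a binary relation is a set of pairs.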
The ordering of rows is insignificant but the ordering of columns is significant. The above table is also a simple example of a relational database, a field with theory rooted in relational algebra and applications in data management. Computer scientists, logicians, and mathematicians, however, tend to have different conceptions of what a general relation is, and what it consists of. For example, databases are designed to deal with empirical data, which is by definition finite, whereas in mathematics, relations with infinite arity (i.e., infinitary relations) are also considered. The first definition of relations encountered in mathematics is that a relation "R" over the sets "X"1, …, "X""n" is simply a subset of their Cartesian product. The second definition of relations makes use of an idiom that is common in mathematics, stipulating that "such and such is an "n"-tuple" in order to ensure that such and such a mathematical object is determined by the specification of mathematical objects with "n" elements. In the case of a relation "R" over "n" sets, there are "n" + 1 things to specify, namely, the "n" sets plus a subset of their Cartesian product. In the idiom, this is expressed by saying that "R" is an ("n" + 1)-tuple. As a rule, whatever definition best fits the application at hand will be chosen for that purpose, and if it ever becomes necessary to distinguish between the two definitions, then an entity satisfying the second definition may be called an "embedded" or "included" relation. The statements ""x"1, …, "x""n" are "R"-related", whether in "R" (under the first definition) or in "G" (under the second definition), are denoted by the prefix notation "Rx"1…"x""n" or the postfix notation "x"1…"x""n""R". In the case where "R" is a binary relation, those statements are also denoted by the infix notation "x"1"Rx"2. The following considerations apply under either definition: Let a Boolean domain B be a two-element set, say, B = {0, 1}, whose elements can be interpreted as logical values, typically 0 = false and 1 = true. 
The characteristic function of "R", denoted by χ"R", is the Boolean-valued function χ"R": "X"1 × … × "X""n" → B, defined by χ"R"(("x"1, …, "x""n")) = 1 if ("x"1, …, "x""n") is in "R", and 0 otherwise. In applied mathematics, computer science and statistics, it is common to refer to a Boolean-valued function as an "n"-ary "predicate". From the more abstract viewpoint of formal logic and model theory, the relation "R" constitutes a "logical model" or a "relational structure", that serves as one of many possible interpretations of some "n"-ary predicate symbol. Because relations arise in many scientific disciplines, as well as in many branches of mathematics and logic, there is considerable variation in terminology. Aside from the set-theoretic extension of a relational concept or term, the term "relation" can also be used to refer to the corresponding logical entity, either the logical comprehension, which is the totality of intensions or abstract properties shared by all elements in the relation, or else the symbols denoting these elements and intensions. Further, some writers of the latter persuasion introduce terms with more concrete connotations (such as "relational structure" for the set-theoretic extension of a given relational concept). The logician Augustus De Morgan, in work published around 1860, was the first to articulate the notion of relation in anything like its present sense. He also stated the first formal results in the theory of relations (on De Morgan and relations, see Merrill 1990). Charles Peirce, Gottlob Frege, Georg Cantor, Richard Dedekind and others advanced the theory of relations. Many of their ideas, especially on relations called orders, were summarized in "The Principles of Mathematics" (1903) where Bertrand Russell made free use of these results. In 1970 Edgar F. Codd proposed a relational model for databases, thus anticipating the development of database management systems.
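The characteristic function χ"R" is just an indicator mapping tuples to the Boolean domain B = {0, 1}, i.e. an "n"-ary predicate. A minimal Python sketch follows; the binary "likes" relation and the names in it are hypothetical illustrative data, not taken from the article:

```python
# Characteristic function (n-ary predicate) of a relation R:
# chi_R maps a tuple to 1 if it belongs to R, and to 0 otherwise.
# The relation "likes" and its members are made-up illustrative data.
likes = {("Alice", "Bob"), ("Bob", "Carol")}

def chi(R, t):
    """chi_R(t): 1 if the tuple t is in the relation R, else 0."""
    return 1 if t in R else 0

print(chi(likes, ("Alice", "Bob")))    # -> 1
print(chi(likes, ("Carol", "Alice")))  # -> 0
```

In practice the predicate view and the set-of-tuples view are interchangeable: the set is the extension of the predicate, and the predicate is the characteristic function of the set.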
https://en.wikipedia.org/wiki?curid=19509
Mokele-mbembe In Congo River Basin mythology, Mokele-mbembe (Lingala, "one who stops the flow of rivers") is a water-dwelling entity, sometimes described as a living creature, sometimes as a spirit. During the early 20th century, descriptions of the entity increasingly reflected public fascination with dinosaurs, including aspects of particular dinosaur species now known among scientists to be incorrect, and the entity became increasingly described alongside a number of purported living dinosaurs in Africa. Over time, the entity became a point of focus among adherents of the pseudoscience of cryptozoology and of young Earth creationism, resulting in numerous expeditions led by cryptozoologists and funded by young Earth creationists and groups aiming to find evidence that invalidates scientific consensus regarding evolution. Paleontologist Donald Prothero remarks that "the quest for Mokele Mbembe ... is part of the effort by creationists to overthrow the theory of evolution and teaching of science by any means possible". Additionally, Prothero observes that "the only people looking for Mokele-mbembe are creationist ministers, not wildlife biologists." 1909 saw the first mention of an apatosaurus-like creature in "Beasts and Men", the autobiography of famed big-game hunter Carl Hagenbeck. He claimed to have heard from two independent sources about a creature living in Rhodesia which was described to them by natives as "half elephant, half dragon." Naturalist Joseph Menges had also told Hagenbeck about similar stories. Hagenbeck speculated that "it can only be some kind of dinosaur, seemingly akin to the "Brontosaurus"." Another of Hagenbeck's sources, Hans Schomburgk, asserted that while at Lake Bangweulu, he noted a lack of hippopotami; his native guides informed him of a large hippo-killing creature that lived in the lake; however, as noted below, Schomburgk thought that native testimony was sometimes unreliable. 
Reports of entities described as dinosaur-like in Africa caused a minor sensation in the mass media, and newspapers in Europe and North America carried many articles on the subject in 1910–1911; some took the reports at face value, others were more skeptical. According to German adventurer Lt. Paul Gratz's account from 1911: The crocodile is found only in very isolated specimens in Lake Bangweulu, except in the mouths of the large rivers at the north. In the swamp lives the "nsanga", much feared by the natives, a degenerate saurian which one might well confuse with the crocodile were it not that its skin has no scales and its toes are armed with claws. I did not succeed in shooting a "nsanga", but on the island of Mbawala I came by some strips of its skin. Another report comes from German Captain von Stein, as described by Willy Ley in "Exotic Zoology" (1959). Von Stein was ordered to conduct a survey of German colonies in what is now Cameroon in 1913. He heard stories of an enormous reptile called "Mokéle-mbêmbe" alleged to live in the jungles, and included a description in his official report. According to Ley, "von Stein worded his report with utmost caution," knowing it might be seen as unbelievable. Nonetheless, von Stein thought the tales were credible: trusted native guides had related them to him, and independent sources agreed on many of the same details. Though von Stein's report was never formally published, Ley quoted von Stein as writing: The animal is said to be of a brownish-gray color with a smooth skin, its size is approximately that of an elephant; at least that of a hippopotamus. It is said to have a long and very flexible neck and only one tooth but a very long one; "some say it is a horn". A few spoke about a long, muscular tail like that of an alligator. Canoes coming near it are said to be doomed; the animal is said to attack the vessels at once and to kill the crews but without eating the bodies. 
The creature is said to live in the caves that have been washed out by the river in the clay of its shores at sharp bends. It is said to climb the shores even at daytime in search of food; its diet is said to be entirely vegetable. This feature disagrees with a possible explanation as a myth. The preferred plant was shown to me, it is a kind of liana with large white blossoms, with a milky sap and applelike fruits. At the Ssombo River I was shown a path said to have been made by this animal in order to get at its food. The path was fresh and there were plants of the described type nearby. But since there were too many tracks of elephants, hippos, and other large mammals it was impossible to make out a particular spoor with any amount of certainty. Alfred Aloysius Smith, who had worked for a British trading company in what is now Gabon in the late 1800s, briefly mentions in his 1927 memoir the "jago-nini" and "amali":Aye, and behind the Cameroon there's things living we know nothing about. I could 'a' made books about many things. The "Jago-Nini" they say is still in the swamps and rivers. Giant diver it means. Comes out of the water and devours people. Old men'll tell you what their grandfathers saw but they still believe its there. Same as the Amali I've always taken it to be. I've seen the Amali's footprint. About the size of a good frying pan in circumference and three claws instead of five.He also speculates that "some great creature like the Amali" could be responsible for finding broken and splintered ivory in (now known to be mythical) elephants' graveyards, as well as claiming to have given a chiseled out cave painting of the "amali" to Ulysses S. Grant. In 2001, BBC broadcast in the TV series "Congo" a collective interview with a group of BiAka pygmies, who identified the "mokele mbembe" as a rhinoceros while looking at an illustrated manual of wildlife. 
Neither species of African rhinoceros is common in the Congo Basin, and the Mokèlé-mbèmbé may be a mixture of mythology and folk memory from a time when rhinoceroses were found in the area. In August and September 2018, Adam Christoffer Knuth, Lensgreve of Knuthenborg, along with a film crew from DR and a DNA scientist, traveled to Lake Tele in Congo in search of the Mokele-mbembe. They did not find the dinosaur; they did, however, find a previously unknown species of green algae. In 2016, a young travel documentary crew from South Africa made an independent documentary about searching for Mokèlé-mbèmbé, which they later sold to Discovery Africa. The team, consisting of Jordan Deall, Luke Macdonald, and Donovan Orr, spent roughly four weeks in the Likuoala swamp region visiting various Aka (pygmy) villages, collecting stories of the creature's existence. While they point out the difficulty of differentiating between Mokele-Mbembe's spiritual and physical existence, they interview numerous people who believe in its presence, while others suggest the last of the species died at least a decade ago.
https://en.wikipedia.org/wiki?curid=19510
Intuitionism In the philosophy of mathematics, intuitionism, or neointuitionism (opposed to preintuitionism), is an approach where mathematics is considered to be purely the result of the constructive mental activity of humans rather than the discovery of fundamental principles claimed to exist in an objective reality. That is, logic and mathematics are not considered analytic activities wherein deep properties of objective reality are revealed and applied, but are instead considered the application of internally consistent methods used to realize more complex mental constructs, regardless of their possible independent existence in an objective reality. The fundamental distinguishing characteristic of intuitionism is its interpretation of what it means for a mathematical statement to be true. In Brouwer's original intuitionism, the truth of a mathematical statement is a subjective claim: a mathematical statement corresponds to a mental construction, and a mathematician can assert the truth of a statement only by verifying the validity of that construction by intuition. The vagueness of the intuitionistic notion of truth often leads to misinterpretations about its meaning. Kleene formally defined intuitionistic truth from a realist position, yet Brouwer would likely reject this formalization as meaningless, given his rejection of the realist/Platonist position. Intuitionistic truth therefore remains somewhat ill-defined. However, because the intuitionistic notion of truth is more restrictive than that of classical mathematics, the intuitionist must reject some assumptions of classical logic to ensure that everything they prove is in fact intuitionistically true. This gives rise to intuitionistic logic. To an intuitionist, the claim that an object with certain properties exists is a claim that an object with those properties can be constructed. 
Any mathematical object is considered to be a product of a construction of a mind, and therefore, the existence of an object is equivalent to the possibility of its construction. This contrasts with the classical approach, which states that the existence of an entity can be proved by refuting its non-existence. For the intuitionist, this is not valid; the refutation of the non-existence does not mean that it is possible to find a construction for the putative object, as is required in order to assert its existence. As such, intuitionism is a variety of mathematical constructivism; but it is not the only kind. The interpretation of negation is different in intuitionist logic than in classical logic. In classical logic, the negation of a statement asserts that the statement is "false"; to an intuitionist, it means the statement is "refutable" (i.e., that there is a counterexample). There is thus an asymmetry between a positive and negative statement in intuitionism. If a statement "P" is provable, then it is certainly impossible to prove that there is no proof of "P". But even if it can be shown that no disproof of "P" is possible, we cannot conclude from this absence that there "is" a proof of "P". Thus "P" is a stronger statement than "not-not-P". Similarly, to assert that "A" or "B" holds, to an intuitionist, is to claim that either "A" or "B" can be "proved". In particular, the law of excluded middle, ""A" or not "A"", is not accepted as a valid principle. For example, if "A" is some mathematical statement that an intuitionist has not yet proved or disproved, then that intuitionist will not assert the truth of ""A" or not "A"". However, the intuitionist will accept that ""A" and not "A"" cannot be true. Thus the connectives "and" and "or" of intuitionistic logic do not satisfy de Morgan's laws as they do in classical logic. 
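The asymmetry between "P" and "not-not-P" described above can be checked in a proof assistant. In Lean 4, the intuitionistically valid direction is provable without any classical axioms, while the converse is not (a small sketch; the theorem name is my own):

```lean
-- Intuitionistically valid: from a proof of P we can refute any
-- refutation of P, since ¬¬P unfolds to (P → False) → False.
theorem dn_intro (P : Prop) : P → ¬¬P :=
  fun hp hnp => hnp hp

-- The converse, ¬¬P → P (double-negation elimination), is not provable
-- constructively; in Lean it requires Classical.byContradiction,
-- which rests on the law of excluded middle.
```

The proof of `dn_intro` is itself a construction (a function turning evidence into evidence), which is precisely the intuitionistic reading of implication.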
Intuitionistic logic substitutes constructability for abstract truth and is associated with a transition from the proof of model theory to abstract truth in modern mathematics. The logical calculus preserves justification, rather than truth, across transformations yielding derived propositions. It has been taken as giving philosophical support to several schools of philosophy, most notably the Anti-realism of Michael Dummett. Thus, contrary to the first impression its name might convey, and as realized in specific approaches and disciplines (e.g. Fuzzy Sets and Systems), intuitionist mathematics is more rigorous than conventionally founded mathematics, where, ironically, the foundational elements which Intuitionism attempts to construct/refute/refound are taken as intuitively given. Among the different formulations of intuitionism, there are several different positions on the meaning and reality of infinity. The term potential infinity refers to a mathematical procedure in which there is an unending series of steps. After each step has been completed, there is always another step to be performed. For example, consider the process of counting: 1, 2, 3, and so on; whatever number has been reached, one more can always be counted. The term actual infinity refers to a completed mathematical object which contains an infinite number of elements. An example is the set of natural numbers, N = {1, 2, 3, …}. In Cantor's formulation of set theory, there are many different infinite sets, some of which are larger than others. For example, the set of all real numbers is larger than N, because any procedure that you attempt to use to put the natural numbers into one-to-one correspondence with the real numbers will always fail: there will always be an infinite number of real numbers "left over". Any infinite set that can be placed in one-to-one correspondence with the natural numbers is said to be "countable" or "denumerable". Infinite sets larger than this are said to be "uncountable". 
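The procedure that defeats any attempted one-to-one correspondence between the natural numbers and the reals is Cantor's diagonal argument, which can be sketched on finite prefixes (an illustrative toy of my own, not from the source; rows stand for the binary expansions of listed reals):

```python
def diagonal(rows):
    """Given the first n rows of a claimed enumeration of infinite 0/1
    sequences (row i supplied with at least n digits), return the first
    n digits of a sequence that differs from row i at position i."""
    return [1 - rows[i][i] for i in range(len(rows))]

# Any finite window of a claimed enumeration is already defeated:
rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonal(rows)  # differs from every listed row on the diagonal
```

Notably, the diagonal sequence is produced by an explicit construction rather than a mere refutation of non-existence, which is one reason arguments of this shape are acceptable to constructivists.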
Cantor's set theory led to the axiomatic system of Zermelo–Fraenkel set theory (ZFC), now the most common foundation of modern mathematics. Intuitionism was created, in part, as a reaction to Cantor's set theory. Modern constructive set theory includes the axiom of infinity from ZFC (or a revised version of this axiom) and the set of natural numbers. Most modern constructive mathematicians accept the reality of countably infinite sets (however, see Alexander Esenin-Volpin for a counter-example). Brouwer rejected the concept of actual infinity, but admitted the idea of potential infinity. Intuitionism's history can be traced to two controversies in nineteenth century mathematics. The first of these was the invention of transfinite arithmetic by Georg Cantor and its subsequent rejection by a number of prominent mathematicians including most famously his teacher Leopold Kronecker—a confirmed finitist. The second of these was Gottlob Frege's effort to reduce all of mathematics to a logical formulation via set theory and its derailing by a youthful Bertrand Russell, the discoverer of Russell's paradox. Frege had planned a definitive three-volume work, but just as the second volume was going to press, Russell sent Frege a letter outlining his paradox, which demonstrated that one of Frege's rules of self-reference was self-contradictory. In an appendix to the second volume, Frege acknowledged that one of the axioms of his system did in fact lead to Russell's paradox. Frege, the story goes, plunged into depression and did not publish the third volume of his work as he had planned. For more see Davis (2000) Chapters 3 and 4: Frege: "From Breakthrough to Despair" and Cantor: "Detour through Infinity." See van Heijenoort for the original works and van Heijenoort's commentary. 
These controversies are strongly linked as the logical methods used by Cantor in proving his results in transfinite arithmetic are essentially the same as those used by Russell in constructing his paradox. Hence how one chooses to resolve Russell's paradox has direct implications on the status accorded to Cantor's transfinite arithmetic. In the early twentieth century L. E. J. Brouwer represented the "intuitionist" position and David Hilbert the formalist position—see van Heijenoort. Kurt Gödel offered opinions referred to as "Platonist" (see various sources re Gödel). Alan Turing considers: "non-constructive systems of logic with which not all the steps in a proof are mechanical, some being intuitive". (Turing 1939, reprinted in Davis 2004, p. 210) Later, Stephen Cole Kleene brought forth a more rational consideration of intuitionism in his Introduction to Meta-mathematics (1952).
https://en.wikipedia.org/wiki?curid=19513
Mishnah The Mishnah or Mishna ("study by repetition", from the verb "shanah", "to study and review", also "secondary") is the first major written collection of the Jewish oral traditions known as the Oral Torah. It is also the first major work of rabbinic literature. The Mishnah was redacted by Judah ha-Nasi at the beginning of the third century CE in a time when, according to the Talmud, the persecution of the Jews and the passage of time raised the possibility that the details of the oral traditions of the Pharisees from the Second Temple period (536 BCE – 70 CE) would be forgotten. Most of the Mishnah is written in Mishnaic Hebrew, while some parts are Aramaic. The Mishnah consists of six orders ("sedarim", singular "seder"), each containing 7–12 tractates ("masechtot", singular "masechet"; lit. "web"), 63 in total, and further subdivided into chapters and paragraphs. The word "Mishnah" can also indicate a single paragraph of the work, i.e. the smallest unit of structure in the Mishnah. For this reason the whole work is sometimes referred to in the plural form, "Mishnayot". The term "Mishnah" originally referred to a method of teaching by presenting topics in a systematic order, as contrasted with "Midrash", which followed the order of the Bible. As a written compilation, the order of the Mishnah is by subject matter and includes a much broader selection of halakhic subjects, and discusses individual subjects more thoroughly, than the "Midrash". The Mishnah consists of six orders ("sedarim", singular "seder"), each containing 7–12 tractates ("masechtot", singular "masechet"; lit. "web"), 63 in total. Each tractate is divided into chapters ("peraqim", singular "pereq") and then paragraphs ("mishnayot", singular "mishnah"). In this last context, the word "mishnah" means a single paragraph of the work, i.e. the smallest unit of structure, leading to the use of the plural, "Mishnayot", for the whole work. 
Because of the division into six orders, the Mishnah is sometimes called Shas (an acronym for "Shisha Sedarim" – the "six orders"), although that term is more often used for the Talmud as a whole. The six orders are: In each order (with the exception of Zeraim), tractates are arranged from biggest (in number of chapters) to smallest. A popular mnemonic consists of the acronym "Z'MaN NaKaT." The Babylonian Talmud (Hagiga 14a) states that there were either six hundred or seven hundred orders of the Mishnah. Hillel the Elder organized them into six orders to make it easier to remember. The historical accuracy of this tradition is disputed. There is also a tradition that Ezra the scribe dictated from memory not only the 24 books of the Tanakh but 60 esoteric books. It is not known whether this is a reference to the Mishnah, but there is a case for saying that the Mishnah does consist of 60 tractates. (The current total is 63, but Makkot was originally part of Sanhedrin, and Bava Kamma, Bava Metzia and Bava Batra may be regarded as subdivisions of a single tractate Nezikin.) Reuvein Margolies (1889–1971) posited that there were originally seven orders of Mishnah, citing a Gaonic tradition on the existence of a seventh order containing the laws of "Sta"m" (scribal practice) and Berachot (blessings). A number of important laws are not elaborated upon in the Mishnah. These include the laws of tzitzit, tefillin (phylacteries), mezuzot, the holiday of Hanukkah, and the laws of conversion to Judaism. These were later discussed in the minor tractates. Nissim ben Jacob's "Hakdamah Le'mafteach Hatalmud" argued that it was unnecessary for Judah the Prince to discuss them as many of these laws were so well known. Margolies suggests that as the Mishnah was redacted after the Bar Kokhba revolt, Judah could not have included discussion of Hanukkah, which commemorates the Jewish revolt against the Seleucid Empire (the Romans would not have tolerated this overt nationalism). 
Similarly, there were then several decrees in place aimed at suppressing outward signs of national identity, including decrees against wearing tefillin and tzitzit; as conversion to Judaism was against Roman law, Judah would not have discussed this. David Zvi Hoffmann suggests that there existed ancient texts analogous to the present-day "Shulchan Aruch" that discussed the basic laws of day to day living and it was therefore not necessary to focus on these laws in the Mishnah. Rabbinic commentaries on the Mishnah from the next four centuries, done in the Land of Israel and in Babylonia, were eventually redacted and compiled as well. In themselves they are known as "Gemara". The books which set out the Mishnah in its original structure, together with the associated "Gemara", are known as Talmuds. Two Talmuds were compiled, the Babylonian Talmud (to which the term "Talmud" normally refers) and the Jerusalem Talmud. Unlike the Hebrew Mishnah, the "Gemara" is written primarily in Aramaic. The Mishnah teaches the oral traditions by example, presenting actual cases being brought to judgment, usually along with (i) the "debate" on the matter, and (ii) the judgment that was given by a notable rabbi based on halakha, mitzvot, and spirit of the teaching ("Torah") that guided his decision. In this way, the Mishnah brings to everyday reality the practice of the "mitzvot" as presented in the Torah, and aims to cover all aspects of human living, serve as an example for future judgments, and, most important, demonstrate pragmatic exercise of the Biblical laws, which was much needed since the time when the Second Temple was destroyed (70 CE). The Mishnah is thus not the development of new laws, but rather the collection of existing traditions. The term "Mishnah" is related to the verb "shanah", to teach or repeat, and to the adjectives ""sheni"" and ""mishneh"", meaning "second". 
It is thus named for being both the one written authority (codex) secondary (only) to the Tanakh as a basis for the passing of judgment, a source and a tool for creating laws, and the first of many books to complement the Tanakh in certain aspects. Before the publication of the Mishnah, Jewish scholarship and judgement were predominantly oral, as according to the Talmud, it was not permitted to write them down. The earliest recorded oral law may have been of the midrashic form, in which halakhic discussion is structured as exegetical commentary on the Torah. Rabbis expounded on and debated the Tanakh, the Hebrew Bible, without the benefit of written works (other than the Biblical books themselves), though some may have made private notes, for example of court decisions. The oral traditions were far from monolithic, and varied among various schools, the most famous of which were the House of Shammai and the House of Hillel. After the First Jewish–Roman War in 70 CE, with the end of the Second Temple Jewish center in Jerusalem, Jewish social and legal norms were in upheaval. The Rabbis were faced with the new reality of Judaism without a Temple (to serve as the center of teaching and study) and Judea without autonomy. It is during this period that Rabbinic discourse began to be recorded in writing. The possibility was felt that the details of the oral traditions of the Pharisees from the Second Temple period (530s BCE – 70 CE) would be forgotten, so the justification was found to have these oral laws transcribed. Over time, different traditions of the Oral Law came into being, raising problems of interpretation. According to the "Mevo Hatalmud", many rulings were given in a specific context, but would be taken out of it; or a ruling was revisited but the second ruling would not become popularly known. To correct this, Judah the Prince took up the redaction of the Mishnah. 
If a point was of no conflict, he kept its language; where there was conflict, he reordered the opinions and ruled; and he clarified where context was not given. The idea was not to use his own discretion, but rather to examine the tradition as far back as he could, and only supplement as required. According to Rabbinic Judaism, the Oral Torah () was given to Moses with the Torah at Mount Sinai or Mount Horeb as an exposition to the latter. The accumulated traditions of the Oral Law, expounded by scholars in each generation from Moses onward, are considered the necessary basis for the interpretation, and often for the reading, of the Written Law. Jews sometimes refer to this as the Masorah (Hebrew: ), roughly translated as tradition, though that word is often used in a narrower sense to mean traditions concerning the editing and reading of the Biblical text (see Masoretic Text). The resulting Jewish law and custom is called halakha. While most discussions in the Mishnah concern the correct way to carry out laws recorded in the Torah, it usually presents its conclusions without explicitly linking them to any scriptural passage, though scriptural quotations do occur. For this reason it is arranged in order of topics rather than in the form of a Biblical commentary. (In a very few cases, there is no scriptural source at all and the law is described as "Halakha leMoshe miSinai", "law to Moses from Sinai".) The "Midrash halakha", by contrast, while presenting similar laws, does so in the form of a Biblical commentary and explicitly links its conclusions to details in the Biblical text. These Midrashim often predate the Mishnah. The Mishnah also quotes the Torah for principles not associated with law, but just as practical advice, even at times for humor or as guidance for understanding historical debates. Some Jews did not accept the codification of the oral law at all. 
Karaite Judaism, for example, recognised only the Tanakh as authoritative in "Halakha" (Jewish religious law) and theology. It vehemently rejected the codification of the Oral Torah in the Mishnah and Talmud and subsequent works of mainstream Rabbinic Judaism which maintained that the Talmud was an authoritative interpretation of the Torah. Karaites maintained that all of the divine commandments handed down to Moses by God were recorded in the written Torah without additional Oral Law or explanation. As a result, Karaite Jews did not accept as binding the written collections of the oral tradition in the Midrash or Talmud. The Karaites comprised a significant portion of the world Jewish population in the 10th and 11th centuries CE, and remain extant, although they currently number in the thousands. The rabbis who contributed to the Mishnah are known as the "Tannaim", of whom approximately 120 are known. The period during which the Mishnah was assembled spanned about 130 years, or five generations, in the first and second centuries CE. Judah the Prince is credited with the final redaction and publication of the Mishnah, although there have been a few additions since his time: those passages that cite him or his grandson, Judah II, and the end of tractate Sotah, which refers to the period after Judah the Prince's death. One must also note that in addition to redacting the Mishnah, Judah the Prince and his court also ruled on which opinions should be followed, though the rulings do not always appear in the text. Most of the Mishnah is related without attribution ("). This usually indicates that many sages taught so, or that Judah the Prince ruled so. The halakhic ruling usually follows that view. Sometimes, however, it appears to be the opinion of a single sage, and the view of the sages collectively (, "hachamim") is given separately. 
As Judah the Prince went through the tractates, the Mishnah was set forth, but throughout his life some parts were updated as new information came to light. Because of the proliferation of earlier versions, it was deemed too hard to retract anything already released, and therefore a second version of certain laws was released. The Talmud refers to these differing versions as "Mishnah Rishonah" ("First Mishnah") and "Mishnah Acharonah" ("Last Mishnah"). David Zvi Hoffmann suggests that "Mishnah Rishonah" actually refers to texts from earlier Sages upon which Rabbi based his Mishnah. The Talmud records a tradition that unattributed statements of the law represent the views of Rabbi Meir (Sanhedrin 86a), which supports the theory (recorded by Sherira Gaon in his famous "Iggeret") that he was the author of an earlier collection. For this reason, the few passages that actually say "this is the view of Rabbi Meir" represent cases where the author intended to present Rabbi Meir's view as a "minority opinion" not representing the accepted law. There are also references to the "Mishnah of Rabbi Akiva", suggesting a still earlier collection; on the other hand, these references may simply mean his teachings in general. Another possibility is that Rabbi Akiva and Rabbi Meir established the divisions and order of subjects in the Mishnah, making them the authors of a school curriculum rather than of a book. Authorities are divided on whether Rabbi Judah the Prince recorded the Mishnah in writing or established it as an oral text for memorisation. The most important early account of its composition, the "Iggeret Rav Sherira Gaon" (Epistle of Rabbi Sherira Gaon) is ambiguous on the point, although the Spanish recension leans to the theory that the Mishnah was written. However, the Talmud records that, in every study session, there was a person called the "tanna" appointed to recite the Mishnah passage under discussion. 
This may indicate that, even if the Mishnah was reduced to writing, it was not available on general distribution. Very roughly, there are two traditions of Mishnah text. One is found in manuscripts and printed editions of the Mishnah on its own, or as part of the Jerusalem Talmud. The other is found in manuscripts and editions of the Babylonian Talmud; though there is sometimes a difference between the text of a whole paragraph printed at the beginning of a discussion (which may be edited to conform with the text of the Mishnah-only editions) and the line-by-line citations in the course of the discussion. Robert Brody, in his "Mishna and Tosefta Studies" (Jerusalem 2014), warns against over-simplifying the picture by assuming that the Mishnah-only tradition is always the more authentic, or that it represents a "Palestinian" as against a "Babylonian" tradition. Manuscripts from the Cairo Geniza, or citations in other works, may support either type of reading or other readings altogether. The first printed edition of the Mishnah was published in Naples. There have been many subsequent editions, including the late 19th century Vilna edition, which is the basis of the editions now used by the religious public. Vocalized editions were published in Italy, culminating in the edition of David ben Solomon Altaras, publ. Venice 1737. The Altaras edition was republished in Mantua in 1777, in Pisa in 1797 and 1810 and in Livorno in many editions from 1823 until 1936: reprints of the vocalized Livorno editions were published in Israel in 1913, 1962, 1968 and 1976. 
These editions show some textual variants by bracketing doubtful words and passages, though they do not attempt detailed textual criticism. The Livorno editions are the basis of the Sephardic tradition for recitation. As well as being printed on its own, the Mishnah is included in all editions of the Babylonian and Jerusalem Talmuds. Each paragraph is printed on its own, and followed by the relevant Gemara discussion. However, that discussion itself often cites the Mishnah line by line. While the text printed in paragraph form has generally been standardized to follow the Vilna edition, the text cited line by line in the Gemara often preserves important variants, which sometimes reflect the readings of older manuscripts. The nearest approach to a critical edition is that of Hanoch Albeck. There is also an edition by Yosef Qafiḥ of the Mishnah together with the commentary of Maimonides, which compares the base text used by Maimonides with the Napoli and Vilna editions and other sources. The Mishnah was and still is traditionally studied through recitation (out loud). Jewish communities around the world preserved local melodies for chanting the Mishnah, and distinctive ways of pronouncing its words. Many medieval manuscripts of the Mishnah are vowelized, and some of these, especially some fragments found in the Genizah, are partially annotated with Tiberian cantillation marks. Today, many communities have a special tune for the Mishnaic passage "Bammeh madliqin" in the Friday night service; there may also be tunes for Mishnaic passages in other parts of the liturgy, such as the passages in the daily prayers relating to sacrifices and incense and the paragraphs recited at the end of the Musaf service on Shabbat. Otherwise, there is often a customary intonation used in the study of Mishnah or Talmud, somewhat similar to an Arabic mawwal, but this is not reduced to a precise system like that for the Biblical books. 
(In some traditions this intonation is the same as or similar to that used for the Passover Haggadah.) Recordings have been made for Israeli national archives, and Frank Alvarez-Pereyre has published a book-length study of the Syrian tradition of Mishnah reading on the basis of these recordings. Most vowelized editions of the Mishnah today reflect standard Ashkenazic vowelization, and often contain mistakes. The Albeck edition of the Mishnah was vowelized by Hanokh Yalon, who made careful eclectic use of both medieval manuscripts and current oral traditions of pronunciation from Jewish communities all over the world. The Albeck edition includes an introduction by Yalon detailing his eclectic method. Two institutes at the Hebrew University in Jerusalem have collected major oral archives which hold (among other things) extensive recordings of Jews chanting the Mishnah using a variety of melodies and many different kinds of pronunciation. These institutes are the Jewish Oral Traditions Research Center and the National Voice Archives (the "Phonoteca" at the Jewish National and University Library). Both the Mishnah and Talmud contain few serious biographical studies of the people discussed therein, and the same tractate will conflate the points of view of many different people. Yet, sketchy biographies of the Mishnaic sages can often be constructed with historical detail from Talmudic and Midrashic sources. According to the Encyclopaedia Judaica (Second Edition), it is accepted that Judah the Prince added, deleted, and rewrote his source material during the process of redacting the Mishnah. Modern authors who have provided examples of these changes include J.N. Epstein and S. Friedman. Following Judah the Prince's redaction there remained a number of different versions of the Mishnah in circulation, with the Mishnah used in the Babylonian rabbinic community differing markedly from that used in the Palestinian one. 
Indeed, within these rabbinic communities themselves there are indications of different versions being used for study. These differences are shown in divergent citations of individual Mishnah passages in the Talmud Yerushalmi and the Talmud Bavli, and in variances of medieval manuscripts and early editions of the Mishnah. The best-known examples of these differences are found in J. N. Epstein's "Introduction to the Text of the Mishnah" (1948). Epstein has also concluded that the period of the Amoraim was one of further deliberate changes to the text of the Mishnah, which he views as attempts to return the text to what was regarded as its original form. These lessened over time, as the text of the Mishnah became more and more regarded as authoritative. Many modern historical scholars have focused on the timing and the formation of the Mishnah. A vital question is whether it is composed of sources which date from its editor's lifetime, and to what extent it is composed of earlier or later sources. Are Mishnaic disputes distinguishable along theological or communal lines, and in what ways do different sections derive from different schools of thought within early Judaism? Can these early sources be identified, and if so, how? In response to these questions, modern scholars have adopted a number of different approaches. A notable literary work on the composition of the Mishnah is Milton Steinberg's novel "As a Driven Leaf".
https://en.wikipedia.org/wiki?curid=19518
Monotheism Monotheism is the belief in one god. A narrower definition of monotheism is the belief in the existence of only one god that created the world, is omnipotent, omnipresent and omniscient, and intervenes in the world. A distinction may be made between exclusive monotheism, and both inclusive monotheism and pluriform (panentheistic) monotheism which, while recognising various distinct gods, postulate some underlying unity. Monotheism is distinguished from henotheism, a religious system in which the believer worships one god without denying that others may worship different gods with equal validity, and monolatrism, the recognition of the existence of many gods but with the consistent worship of only one deity. The term "monolatry" was perhaps first used by Julius Wellhausen. The broader definition of monotheism characterizes the traditions of Bábism, the Baháʼí Faith, Balinese Hinduism, Cao Dai (Caodaiism), Cheondoism (Cheondogyo), Christianity, Deism, Druze faith, Eckankar, Hindu sects such as Shaivism and Vaishnavism, Islam, Judaism, Mandaeism, Rastafari, Seicho no Ie, Sikhism, Tengrism (Tangrism), Tenrikyo (Tenriism), Yazidism, and Zoroastrianism, and elements of pre-monotheistic thought are found in early religions such as Atenism and ancient Chinese religion. The word "monotheism" comes from the Greek "monos", meaning "single", and "theos", meaning "god". The English term was first used by Henry More (1614–1687). Quasi-monotheistic claims of the existence of a universal deity date to the Late Bronze Age, with Akhenaten's "Great Hymn to the Aten". A possible inclination towards monotheism emerged during the Vedic period in Iron-Age South Asia. The Rigveda exhibits notions of monism of the Brahman, particularly in the comparatively late tenth book, which is dated to the early Iron Age, e.g. in the Nasadiya sukta. 
Since the sixth century BCE, Zoroastrians have believed in the supremacy of one God above all: Ahura Mazda as the "Maker of All" and the first being before all others. Nonetheless, Zoroastrianism was not strictly monotheistic because it venerated other "yazatas" alongside Ahura Mazda. Ancient Hindu theology, meanwhile, was monist, but was not strictly monotheistic in worship because it still maintained the existence of many gods, who were envisioned as aspects of one supreme God, Brahman. Numerous ancient Greek philosophers, including Xenophanes of Colophon and Antisthenes, believed in a similar polytheistic monism that came close to monotheism, but fell short. Judaism was the first religion to conceive the notion of a personal monotheistic God within a monist context. The concept of ethical monotheism, which holds that morality stems from God alone and that its laws are unchanging, first occurred in Judaism, but is now a core tenet of most modern monotheistic religions, including Zoroastrianism, Christianity, Islam, Sikhism, and the Baháʼí Faith. According to Jewish, Christian and Islamic tradition, monotheism was the original religion of humanity; this original religion is sometimes referred to as "the Adamic religion", or, in the terms of Andrew Lang, the "Urreligion". Scholars of religion largely abandoned that view in the 19th century in favour of an evolutionary progression from animism via polytheism to monotheism, but by 1974 this theory was less widely held, and a modified view similar to Lang's became more prominent. Austrian anthropologist Wilhelm Schmidt had postulated an "Urmonotheismus" ("original" or "primitive monotheism") in the 1910s. It was objected that Judaism, Christianity, and Islam had grown up in opposition to polytheism, as had Greek philosophical monotheism. 
More recently, Karen Armstrong and other authors have returned to the idea of an evolutionary progression beginning with animism, which developed into polytheism, which developed into henotheism, which developed into monolatry, which developed into true monotheism. While all adherents of the Abrahamic religions consider themselves to be monotheists, some in Judaism do not consider Christianity to be a pure form of monotheism (due to the Christian doctrine of the Trinity), classifying it as Shituf. Islam likewise does not recognize modern-day Christianity as monotheistic, primarily due to the Christian doctrine of the Trinity, which Islam argues was not a part of the original monotheistic Christianity as preached by Jesus. Christians, on the other hand, argue that the doctrine of the Trinity is a valid expression of monotheism, citing that the Trinity does not consist of three separate deities, but rather of three persons who exist consubstantially (as one substance) within a single Godhead. Judaism is traditionally considered one of the oldest monotheistic religions in the world, although it is also believed that the earliest Israelites (pre-7th century BCE) were polytheistic, and that their religion evolved first into henotheism and later into monolatry, rather than monotheism. The God of later Judaism was strictly one: an absolute, indivisible, and incomparable being who is the ultimate cause of all existence. The Babylonian Talmud references other, "foreign gods" as non-existent entities to whom humans mistakenly ascribe reality and power. One of the best-known statements of Rabbinical Judaism on monotheism is the second of Maimonides' 13 Principles of Faith. Some in Judaism and Islam reject the Christian idea of monotheism. Judaism uses the term "shituf" to refer to the worship of God in a manner which Judaism deems to be neither purely monotheistic (though still permissible for non-Jews) nor polytheistic (which would be prohibited). 
During the 8th century BCE, the worship of Yahweh in Israel was in competition with many other cults, described by the Yahwist faction collectively as Baals. The oldest books of the Hebrew Bible reflect this competition, as in the books of Hosea and Nahum, whose authors lament the "apostasy" of the people of Israel, threatening them with the wrath of God if they do not give up their polytheistic cults. Ancient Israelite religion was originally polytheistic; the Israelites worshipped many deities, including El, Baal, Asherah, and Astarte. Yahweh was originally the national god of the Kingdom of Israel and the Kingdom of Judah. As time progressed, the henotheistic cult of Yahweh grew increasingly militant in its opposition to the worship of other gods. Later, the reforms of King Josiah imposed a form of strict Israelite monolatrism. After the fall of Judah and the beginning of the Babylonian captivity, a small circle of priests and scribes gathered around the exiled royal court, where they first developed the concept of Yahweh as the sole God of the world. "Shema Yisrael" ("Hear, [O] Israel") are the first two words of a section of the Torah and the title of a prayer that serves as a centerpiece of the morning and evening Jewish prayer services. The first verse, found in Deuteronomy 6:4, encapsulates the monotheistic essence of Judaism: "Hear, O Israel: the Lord our God, the Lord is one", sometimes alternatively translated as "The Lord is our God, the Lord alone". Observant Jews consider the Shema to be the most important part of the prayer service in Judaism, and its twice-daily recitation as a "mitzvah" (religious commandment). It is traditional for Jews to say the Shema as their last words, and for parents to teach their children to say it before they go to sleep at night. Among early Christians there was considerable debate over the nature of the Godhead, with some denying the incarnation but not the deity of Jesus (Docetism) and others later calling for an Arian conception of God. 
Despite at least one earlier local synod rejecting the claim of Arius, this Christological issue was to be one of the items addressed at the First Council of Nicaea. The First Council of Nicaea, held in Nicaea (in present-day Turkey) and convoked by the Roman Emperor Constantine I in 325, was the first ecumenical council of bishops of the Roman Empire, and most significantly resulted in the first uniform Christian doctrine, called the Nicene Creed. With the creation of the creed, a precedent was established for subsequent general ecumenical councils of bishops (synods) to create statements of belief and canons of doctrinal orthodoxy, the intent being to define a common creed for the Church and address heretical ideas. One purpose of the council was to resolve disagreements in Alexandria over the nature of Jesus in relationship to the Father; in particular, whether Jesus was of the same substance as God the Father or merely of similar substance. All but two bishops took the first position, and Arius' argument failed. Christian orthodox traditions (Eastern Orthodox, Oriental Orthodox, Roman Catholic, and most Protestants) follow this decision, which was reaffirmed in 381 at the First Council of Constantinople and reached its full development through the work of the Cappadocian Fathers. They consider God to be a triune entity, called the Trinity, comprising three "persons": God the Father, God the Son, and God the Holy Spirit. These three are described as being "of the same substance" (homoousios). Christians overwhelmingly assert that monotheism is central to the Christian faith, as the Nicene Creed (and others), which gives the orthodox Christian definition of the Trinity, begins: "I believe in one God". From before the time of the Nicene Creed (325 CE), various Christian figures advocated the triune mystery-nature of God as a normative profession of faith. According to Roger E. 
Olson and Christopher Hall, through prayer, meditation, study and practice, the Christian community concluded "that God must exist as both a unity and trinity", codifying this in ecumenical council at the end of the 4th century. Most modern Christians believe the Godhead is triune, meaning that the three persons of the Trinity are in one union in which each person is also wholly God. They also hold to the doctrine of a man-god Christ Jesus as God incarnate. These Christians also do not believe that one of the three divine figures is God alone while the other two are not, but rather that all three are mysteriously God and one. Other Christian religions, including Unitarian Universalism, Jehovah's Witnesses, and Mormonism, do not share those views on the Trinity. Some Christian faiths, such as Mormonism, argue that the Godhead is in fact three separate individuals: God the Father, His Son Jesus Christ, and the Holy Ghost, each having a distinct purpose in the grand existence of humankind. Furthermore, Mormons believe that before the Council of Nicaea, the predominant belief among many early Christians was that the Godhead was three separate individuals. In support of this view, they cite early Christian examples of belief in subordinationism. Unitarianism is a theological movement, named for its understanding of God as one person, in direct contrast to Trinitarianism. In Islam, God (Allāh) is all-powerful and all-knowing, the creator, sustainer, ordainer and judge of the universe. God in Islam is strictly singular ("tawhid"), unique ("wahid"), and inherently One ("ahad"), all-merciful and omnipotent. Allāh exists without place, and the Qurʼan states that "No vision can grasp Him, but His grasp is over all vision. God is above all comprehension, yet is acquainted with all things" (Qurʼan 6:103). Allāh is the only God and the same God worshiped in Christianity and Judaism. 
Islam emerged in the 7th century CE in the context of both Christianity and Judaism, with some thematic elements similar to Gnosticism. Islamic belief states that Muhammad did not bring a new religion from God, but rather the same religion as that practiced by Abraham, Moses, David, Jesus and all the other prophets of God. The assertion of Islam is that the message of God had been corrupted, distorted or lost over time, and that the Quran was sent to Muhammad in order to correct the lost message of the Torah, New Testament and prior scriptures from God. The Qurʼan asserts the existence of a single and absolute truth that transcends the world; a unique and indivisible being who is independent of the creation. The Qurʼan rejects binary modes of thinking such as the idea of a duality of God by arguing that both good and evil generate from God's creative act. God is a universal god rather than a local, tribal or parochial one; an absolute who integrates all affirmative values and brooks no evil. Ash'ari theology, which dominated Sunni Islam from the tenth to the nineteenth century, insists on ultimate divine transcendence and holds that divine unity is not accessible to human reason. Ash'arism teaches that human knowledge regarding it is limited to what has been revealed through the prophets, and that, on such paradoxes as God's creation of evil, revelation had to be accepted "bila kayfa" (without [asking] how). "Tawhid" constitutes the foremost article of the Muslim profession of faith, "There is no god but God, Muhammad is the messenger of God." To attribute divinity to a created entity is the only unpardonable sin mentioned in the Qurʼan. The entirety of the Islamic teaching rests on the principle of "tawhid". Medieval Islamic philosopher Al-Ghazali offered a proof of monotheism from omnipotence, asserting there can only be one omnipotent being. 
For if there were two omnipotent beings, the first would either have power over the second (meaning the second is not omnipotent) or not (meaning the first is not omnipotent); either way, there could only be one omnipotent being. As they traditionally profess a concept of monotheism with a singular person as God, Judaism and Islam reject the Christian idea of monotheism. Judaism uses the term Shituf to refer to non-monotheistic ways of worshiping God. Though Muslims venerate Jesus (Isa in Arabic) as a prophet, they do not accept the doctrine that he was a begotten son of God. Mandaeism or Mandaeanism is a monotheistic Gnostic religion. Its adherents, the Mandaeans, revere Adam, Abel, Seth, Enos, Noah, Shem, Aram, and especially John the Baptist. The Mandaean God is named "Hayyi Rabbi", meaning The Great Life or The Great Living God. The Mandaeans are Semites and speak a dialect of Eastern Aramaic known as Mandaic. The name 'Mandaean' is said to come from the Aramaic "manda" meaning "knowledge", as does Greek "gnosis". Within the Middle East, but outside of their community, the Mandaeans are more commonly known as Sabians, a term derived from an Aramaic root related to baptism. In Islam, the "Sabians" are described several times in the Quran as People of the Book, alongside Jews and Christians. God in the Baháʼí Faith is taught to be a personal god, too great for humans to fully comprehend. Humanity's primitive understanding of God is achieved through his revelations via his divine intermediary Manifestations. In the Baháʼí faith, such Christian doctrines as the Trinity are seen as compromising the Baháʼí view that God is single and has no equal. The very existence of the Baháʼí Faith is a challenge to the Islamic doctrine of the finality of Muhammad's revelation. God in the Baháʼí Faith communicates to humanity through divine intermediaries, known as Manifestations of God. 
These Manifestations establish religion in the world. It is through these divine intermediaries that humans can approach God, and through them God brings divine revelation and law. The Oneness of God is one of the core teachings of the Baháʼí Faith. The obligatory prayers in the Baháʼí Faith involve explicit monotheistic testimony. God is the imperishable, uncreated being who is the source of all existence. He is described as "a personal God, unknowable, inaccessible, the source of all Revelation, eternal, omniscient, omnipresent and almighty". Although transcendent and inaccessible directly, his image is reflected in his creation. The purpose of creation is for the created to have the capacity to know and love its creator. God communicates his will and purpose to humanity through intermediaries, known as Manifestations of God, who are the prophets and messengers that have founded religions from prehistoric times up to the present day. Rastafari, sometimes termed Rastafarianism, is classified as both a new religious movement and social movement. It developed in Jamaica during the 1930s. It lacks any centralised authority and there is much heterogeneity among practitioners, who are known as Rastafari, Rastafarians, or Rastas. Rastafari refer to their beliefs, which are based on a specific interpretation of the Bible, as "Rastalogy". Central is a monotheistic belief in a single God—referred to as Jah—who partially resides within each individual. The former emperor of Ethiopia, Haile Selassie, is given central importance. Many Rastas regard him as an incarnation of Jah on Earth and as the Second Coming of Christ. Others regard him as a human prophet who fully recognised the inner divinity within every individual. Amenhotep IV initially introduced Atenism in Year 5 of his reign (1348/1346 BCE) during the 18th dynasty of the New Kingdom. 
He raised Aten, once a relatively obscure Egyptian Solar deity representing the disk of the sun, to the status of Supreme God in the Egyptian pantheon. To emphasise the change, Aten's name was written in the cartouche form normally reserved for Pharaohs, an innovation of Atenism. This religious reformation appears to coincide with the proclamation of a Sed festival, a sort of royal jubilee intended to reinforce the Pharaoh's divine powers of kingship. Traditionally held in the thirtieth year of the Pharaoh's reign, this possibly was a festival in honour of Amenhotep III, who some Egyptologists think had a coregency with his son Amenhotep IV of two to twelve years. Year 5 is believed to mark the beginning of Amenhotep IV's construction of a new capital, Akhetaten ("Horizon of the Aten"), at the site known today as Amarna. Evidence of this appears on three of the boundary stelae used to mark the boundaries of this new capital. At this time, Amenhotep IV officially changed his name to Akhenaten ("Agreeable to Aten") as evidence of his new worship. The date given for the event has been estimated to fall around January 2 of that year. In Year 7 of his reign (1346/1344 BCE), the capital was moved from Thebes to Akhetaten (near modern Amarna), though construction of the city seems to have continued for two more years. In shifting his court from the traditional ceremonial centres Akhenaten was signalling a dramatic transformation in the focus of religious and political power. The move separated the Pharaoh and his court from the influence of the priesthood and from the traditional centres of worship, but his decree had deeper religious significance too—taken in conjunction with his name change, it is possible that the move to Amarna was also meant as a signal of Akhenaten's symbolic death and rebirth. It may also have coincided with the death of his father and the end of the coregency. 
In addition to constructing a new capital in honor of Aten, Akhenaten also oversaw the construction of some of the most massive temple complexes in ancient Egypt, including one at Karnak and one at Thebes, close to the old temple of Amun. In Year 9 (1344/1342 BCE), Akhenaten proclaimed a more radical version of his new religion, declaring Aten not merely the supreme god of the Egyptian pantheon, but the only God of Egypt, with himself as the sole intermediary between the Aten and the Egyptian people. Key features of Atenism included a ban on idols and other images of the Aten, with the exception of a rayed solar disc, in which the rays (commonly depicted ending in hands) appear to represent the unseen spirit of Aten. Akhenaten made it clear, however, that the image of the Aten represented only the god, and that the god transcended creation and so could not be fully understood or represented. Aten was addressed by Akhenaten in prayers, such as the "Great Hymn to the Aten": "O Sole God beside whom there is none". The details of Atenist theology are still unclear. The exclusion of all but one god and the prohibition of idols was a radical departure from Egyptian tradition, but scholars see Akhenaten as a practitioner of monolatry rather than monotheism, as he did not actively deny the existence of other gods; he simply refrained from worshiping any but Aten. Akhenaten associated Aten with Ra and put forward the eminence of Aten as the renewal of the kingship of Ra. Under Akhenaten's successors, Egypt reverted to its traditional religion, and Akhenaten himself came to be reviled as a heretic. The orthodox faith system held by most dynasties of China since at least the Shang Dynasty (1766 BCE) until the modern period centered on the worship of "Shangdi" (literally "Above Sovereign", generally translated as "God") or Heaven as an omnipotent force. This faith system pre-dated the development of Confucianism and Taoism and the introduction of Buddhism and Christianity. 
It has features of monotheism in that Heaven is seen as an omnipotent entity, a noncorporeal force with a personality transcending the world. From the writings of Confucius in the "Analects", it is known that Confucius believed that Heaven cannot be deceived, that Heaven guides people's lives and maintains a personal relationship with them, and that Heaven gives tasks for people to fulfill in order to teach them of virtues and morality. However, this faith system was not truly monotheistic, since other lesser gods and spirits, which varied with locality, were also worshiped along with "Shangdi". Still, later variants such as Mohism (470 BCE–c.391 BCE) approached true monotheism, teaching that the function of lesser gods and ancestral spirits is merely to carry out the will of "Shangdi", akin to the angels in Abrahamic religions; such a system in effect recognizes only one god. Mozi elaborates this view in his "Will of Heaven" (天志). Worship of "Shangdi" and Heaven in ancient China included the erection of shrines, the last and greatest being the Temple of Heaven in Beijing, and the offering of prayers. The ruler of China in every Chinese dynasty would perform annual sacrificial rituals to "Shangdi", usually by slaughtering a completely healthy bull as sacrifice. Although its popularity gradually diminished after the advent of Taoism and Buddhism, among other religions, its concepts remained in use throughout the pre-modern period and have been incorporated in later religions in China, including terminology used by early Christians in China. Despite the rise of non-theistic and pantheistic spirituality contributed by Taoism and Buddhism, Shangdi was still praised up until the end of the Qing Dynasty, as the last ruler of the Qing declared himself Son of Heaven. 
The Igbo people practice a form of monotheism called Odinani. Odinani has monotheistic and panentheistic attributes, having a single God as the source of all things. Although a pantheon of spirits exists, these are lesser spirits prevalent in Odinani expressly serving as elements of Chineke (or Chukwu), the supreme being or high god. Waaq is the name of a singular God in the traditional religion of many Cushitic people in the Horn of Africa, denoting an early monotheistic religion. However, this religion was mostly replaced by the Abrahamic religions. Some Oromo (approximately 3%) still follow this traditional monotheistic religion, called Waaqeffannaa in Oromo. The supreme god of the Proto-Indo-European religion was the god *"Dyḗus Pḥatḗr". A number of words derived from the name of this supreme deity are used in various Indo-European languages to denote a monotheistic God. Nonetheless, Proto-Indo-European religion itself was not monotheistic. In western Eurasia, the ancient traditions of the Slavic religion contained elements of monotheism. In the sixth century AD, the Byzantine chronicler Procopius recorded that the Slavs "acknowledge that one god, creator of lightning, is the only lord of all: to him do they sacrifice an ox and all sacrificial animals." The deity to whom Procopius is referring is the storm god Perún, whose name is derived from *"Perkwunos", the Proto-Indo-European god of lightning. The ancient Slavs syncretized him with the Germanic god Thor and the Biblical prophet Elijah. As an old religion, Hinduism inherits religious concepts spanning monotheism, polytheism, panentheism, pantheism, monism, and atheism, among others; its concept of God is complex and depends upon each individual and the tradition and philosophy followed. Hindu views are broad and range from monism, through pantheism and panentheism (alternatively called monistic theism by some scholars), to monotheism and even atheism. 
Hinduism cannot be said to be purely polytheistic. Hindu religious leaders have repeatedly stressed that while God's forms are many and the ways to communicate with him are many, God is one. The "puja" of the "murti" is a way to communicate with the abstract one god ("Brahman") which creates, sustains and dissolves creation. Traditions of Gaudiya Vaishnavas, the Nimbarka Sampradaya, and followers of Swaminarayan and Vallabha consider Krishna to be the source of all avatars and the source of Vishnu himself, or to be the same as Narayana; as such, he is regarded as "Svayam Bhagavan". This belief is drawn primarily "from the famous statement of the Bhagavatam" (1.3.28). A viewpoint differing from this theological concept is the concept of Krishna as an "avatar" of Narayana or Vishnu. It should be noted, however, that although it is usual to speak of Vishnu as the source of the avataras, "Vishnu" is only one of the names of the God of Vaishnavism, who is also known as Narayana, Vasudeva and Krishna; behind each of those names stands a divine figure with attributed supremacy in Vaishnavism. The Rig Veda discusses monotheistic thought, as do the Atharva Veda and Yajur Veda: "Devas are always looking to the supreme abode of Vishnu" ("tad viṣṇoḥ paramaṁ padaṁ sadā paśyanti sṻrayaḥ" Rig Veda 1.22.20) "The One Truth, sages know by many names" (Rig Veda 1.164.46) "When at first the unborn sprung into being, He won His own dominion beyond which nothing higher has been in existence" (Atharva Veda 10.7.31) "There is none to compare with Him. There is no parallel to Him, whose glory, verily, is great." 
(Yajur Veda 32.3) The auspicious qualities of God are countless, with six qualities ("bhaga") traditionally counted as the most important. In the Shaivite tradition, the "Shri Rudram" (Sanskrit श्रि रुद्रम्), to which the Chamakam (चमकम्) is added by scriptural tradition, is a Hindu "stotra" dedicated to Rudra (an epithet of Shiva), taken from the Yajurveda (TS 4.5, 4.7). Shri Rudram is also known as "Sri Rudraprasna" and "Rudradhyaya". The text is important in Vedanta, where Shiva is equated to the Universal supreme God. The hymn is an early example of enumerating the names of a deity, a tradition developed extensively in the sahasranama literature of Hinduism. The Nyaya school of Hinduism has made several arguments regarding a monotheistic view. The Naiyayikas have argued that such a god can only be one. In the "Nyaya Kusumanjali", this is discussed against the proposition of the "Mimamsa" school that there were many demigods ("devas") and sages ("rishis") in the beginning, who wrote the Vedas and created the world. Nyaya replies: [If they assume such] omniscient beings, those endowed with the various superhuman faculties of assuming infinitesimal size, and so on, and capable of creating everything, then we reply that the "law of parsimony" bids us assume only one such, namely Him, the adorable Lord. There can be no confidence in a non-eternal and non-omniscient being, and hence it follows that according to the system which rejects God, the tradition of the Veda is simultaneously overthrown; there is no other way open. In other words, Nyaya says that the polytheist would have to give elaborate proofs for the existence and origin of his several celestial spirits, none of which would be logical, and that it is more logical to assume one eternal, omniscient god. Sikhi is a monotheistic and revealed religion. God in Sikhi is called by many names, like Ram, Allah, and "Vāhigurū", 
but these all refer to the same God, who is shapeless ("niraṅkār"), timeless ("akaal"), and unseen ("alakh"). God is present ("sarav viāpak") in all of creation. God must be seen with "the inward eye", or the "heart". Sikhi teaches that devotees must meditate to progress towards enlightenment, since rigorous meditation permits communication between God and human beings. Sikhism is a monotheistic faith that arose in northern India during the 16th and 17th centuries. Sikhs believe in one, timeless, omnipresent, supreme creator. The opening verse of the Guru Granth Sahib, known as the Mul Mantra, signifies this: The word "ੴ" ("Ik ōaṅkār") has two components. The first is ੧, the digit "1" in Gurmukhi, signifying the singularity of the creator. Together the word means "One Universal Creator God". It is often said that the 1430 pages of the Guru Granth Sahib are all expansions on the Mul Mantra. Although the Sikhs have many names for God, some derived from Islam and Hinduism, they all refer to the same Supreme Being. The Sikh holy scriptures refer to the One God who pervades the whole of space and is the creator of all beings in the universe. The following quotation from the Guru Granth Sahib highlights this point: However, there is a strong case for arguing that the Guru Granth Sahib teaches monism due to its non-dualistic tendencies: Sikhs believe that God has been given many names, but they all refer to the One God, VāhiGurū. Sikhs believe that members of other religions such as Islam, Hinduism and Christianity all worship the same God, and the names Allah, Rahim, Karim, Hari, Raam and Paarbrahm are frequently mentioned in the Sikh holy scriptures. Although there is no set reference to God in Sikhism, the most commonly used Sikh references to God are Akal Purakh ("the Timeless Being") and Waheguru, the Primal Being. Zoroastrianism combines cosmogonic dualism and eschatological monotheism, which makes it unique among the religions of the world. 
Zoroastrianism proclaims an evolution through time from dualism to monotheism. Zoroastrianism is a monotheistic religion, although it is often regarded as dualistic, duotheistic or bitheistic for its belief in the hypostases of the ultimately good Ahura Mazda "(creative spirit)" and the ultimately evil Angra Mainyu "(destructive spirit)". Zoroastrianism was once one of the largest religions on Earth, as the official religion of the Persian Empire. Some scholars credit the Zoroastrians ("Parsis" or "Zartoshtis") with being among the first monotheists and with having influenced other world religions. Estimates put the number of adherents at as many as 3.5 million, living in many regions, including South Asia. The surviving fragments of the poems of the classical Greek philosopher Xenophanes of Colophon suggest that he held views very similar to those of modern monotheists. His poems harshly criticize the traditional notion of anthropomorphic gods, commenting that "...if cattle and horses and lions had hands or could paint with their hands and create works such as men do... [they] also would depict the gods' shapes and make their bodies of such a sort as the form they themselves have." Instead, Xenophanes declares that there is "...one god, greatest among gods and humans, like mortals neither in form nor in thought." Xenophanes's theology appears to have been monist, but not truly monotheistic in the strictest sense. Although some later philosophers, such as Antisthenes, believed in doctrines similar to those expounded by Xenophanes, his ideas do not appear to have become widely popular. Although Plato himself was a polytheist, in his writings he often presents Socrates as speaking of "the god" in the singular form. He does, however, often speak of the gods in the plural form as well. 
The Euthyphro dilemma, for example, is formulated as "Is that which is holy loved by the gods because it is holy, or is it holy because it is loved by the gods?" The development of pure (philosophical) monotheism is a product of Late Antiquity. During the 2nd to 3rd centuries, early Christianity was just one of several competing religious movements advocating monotheism. "The One" (Τὸ Ἕν) is a concept prominent in the writings of the Neoplatonists, especially those of the philosopher Plotinus. In the writings of Plotinus, "The One" is described as an inconceivable, transcendent, all-embodying, permanent, eternal, causative entity that permeates throughout all of existence. A number of oracles of Apollo from Didyma and Clarus, the so-called "theological oracles", dated to the 2nd and 3rd century CE, proclaim that there is only one highest god, of whom the gods of polytheistic religions are mere manifestations or servants. 4th-century CE Cyprus had, besides Christianity, an apparently monotheistic cult of Dionysus. The Hypsistarians were a religious group who believed in a most high god, according to Greek documents. Later revisions of this Hellenic religion were adjusted towards monotheism as it gained consideration among a wider populace. The worship of Zeus as the head god signaled a trend in the direction of monotheism, with less honour paid to the fragmented powers of the lesser gods. Various new religious movements, such as Rastafari, Cao Đài, Tenrikyo, Seicho no Ie and Cheondoism, are monotheistic. Tengrism or Tangrism (sometimes stylized as Tengriism), occasionally referred to as Tengrianism, is a modern term for a Central Asian religion characterized by features of shamanism, animism, totemism, and both polytheism and monotheism.
https://en.wikipedia.org/wiki?curid=19522
Muay Thai Muay Thai, literally 'Thai boxing', is a combat sport of Thailand that uses stand-up striking along with various clinching techniques. This discipline is known as the "art of eight limbs", as it is characterized by the combined use of fists, elbows, knees, and shins. Muay Thai became widespread internationally in the late 20th to 21st century, when Westernized practitioners from Thailand began competing in kickboxing and mixed-rules matches, as well as matches under muay Thai rules, around the world. The professional league is governed by The Professional Boxing Association of Thailand (P.A.T), sanctioned by The Sports Authority of Thailand (SAT), and by the World Professional Muaythai Federation (WMF) overseas. Muay Thai is similar to related styles in other parts of the Indian cultural sphere, namely Lethwei in Myanmar, Pradal Serey in Cambodia, Muay Lao in Laos and Tomoi in Malaysia. The history of muay Thai can be traced to the middle of the 18th century, during battles between the Burmese of the Konbaung Dynasty and the Ayutthaya Kingdom in the Burmese–Siamese War (1765–1767). Muay boran, and therefore muay Thai, was originally called by more generic names such as "toi muay" or simply "muay". As well as being a practical fighting technique for use in actual warfare, muay became a sport in which the opponents fought in front of spectators who went to watch for entertainment. These muay contests gradually became an integral part of local festivals and celebrations, especially those held at temples. Eventually, the previously bare-fisted fighters started wearing lengths of hemp rope around their hands and forearms. This type of match was called muay "khat chueak" (มวยคาดเชือก). Kickboxing was also a component of military training and gained prominence during the reign of King Naresuan the Great in 1560 CE. 
Muay Thai is referred to as the "art of eight limbs" or the "science of eight limbs", because it makes use of punches, kicks, elbow and knee strikes, thus using eight "points of contact", as opposed to "two points" (fists) in boxing and "four points" (hands and feet) used in other more regulated combat sports, such as kickboxing and "savate". A practitioner of muay Thai is known as a "nak muay". Western practitioners are sometimes called "nak muay farang", meaning 'foreign boxer'. The ascension of King Chulalongkorn (Rama V) to the throne in 1868 ushered in a golden age not only for muay but for the whole country of Thailand. Muay progressed greatly during the reign of Rama V as a direct result of the king's personal interest in the sport. The country was at peace and muay functioned as a means of physical exercise, self-defense, attacking, recreation, and personal advancement. 1909–1910: King Chulalongkorn formalized "muay boran" ('ancient boxing') by awarding (in 1910) three "muen" to victors at the funeral fights for his son (in 1909), one for each regional style: Lopburi, Korat, and Chaiya. 1913: British boxing was introduced into the curriculum of the Suan Kulap College. This saw the first descriptive use of the term "muay Thai". 1919: British boxing and muay taught as one sport in the curriculum of the Suan Kulap College. Judo was also offered. 1921: First permanent ring in Siam at Suan Kulap College. Used for both muay and British boxing. 1923: Suan Sanuk Stadium. First international-style three-rope ring with red and blue padded corners, near Lumpinee Park. Muay and British boxing. King Rama VII (r. 1925–1935) pushed for codified rules for muay, and they were put into place. Thailand's first boxing ring was built in 1921 at Suan Kulap. Referees were introduced and rounds were now timed by clock. Fighters at the Lumpinee Boxing Stadium began wearing modern gloves, as well as hard groin protectors, during training and in boxing matches against foreigners. 
Traditional rope-binding ("khat chueak") made the hands a hardened, dangerous striking tool. The use of knots in the rope over the knuckles made the strikes more abrasive and damaging for the opponent while protecting the hands of the fighter. This rope-binding was still used in fights between Thais, but after a death in the ring it was decided that fighters should wear gloves and cotton coverlets over the feet and ankles. It was also around this time that the term "muay Thai" became commonly used, while the older form of the style came to be known as "muay boran", which is now performed primarily as an exhibition art form. Muay Thai was at the height of its popularity in the 1980s and 1990s. Top fighters commanded purses of up to 200,000 baht, and the stadia where gambling was legal drew big gates and big advertising revenues. More recently, a payout to a superstar fighter was about 100,000 baht per fight, but could range as high as 540,000 baht for a bout. In 1993, the International Federation of Muaythai Amateur (IFMA) was inaugurated. It became the governing body of amateur muay Thai, consisting of 128 member countries worldwide, and is recognized by the Olympic Council of Asia. In 1995, the World Muaythai Council, the oldest and largest professional sanctioning organization of muay Thai, was established by the Thai government and sanctioned by the Sports Authority of Thailand. In 1995, the World Muay Thai Federation was founded via the merger of two existing organizations and established in Bangkok, becoming the federation governing international muay Thai. As of August 2012, it had over 70 member countries. Its president is elected at the World Muay Thai Congress. In 2006, muay Thai was included in SportAccord with IFMA. One of the requirements of SportAccord was that no sport can have the name of a country in its name. 
As a result, an amendment was made in the IFMA constitution to change the name of the sport from "muay Thai" to "Muaythai"—written as one word in accordance with Olympic requirements. In 2014 muay Thai was included in the International World Games Association (IWGA) and was represented in the official programme of The World Games 2017 in Wrocław, Poland. In January 2015, muay Thai was granted the patronage of the International University Sports Federation (FISU), and from 16 to 23 March 2015 the first University World Muaythai Cup was held in Bangkok. Like most full-contact fighting sports, muay Thai has a heavy focus on body conditioning. Training regimens include many staples of combat sport conditioning such as running, shadowboxing, rope jumping, body-weight resistance exercises, medicine ball exercises, abdominal exercises, and in some cases weight training. Thai boxers rely heavily on kicks delivered with the shin bone. As such, practitioners of muay Thai will repeatedly hit a dense heavy bag with their shins, conditioning them and hardening the bone through a process called cortical remodeling. Striking a sand-filled bag will have the same effect. Training specific to a Thai fighter includes training with coaches on Thai pads, focus mitts, the heavy bag, and sparring. Daily training includes many rounds (3–5 minute periods broken up by a short rest, often 1–2 minutes) of these various methods of practice. Thai pad training is a cornerstone of muay Thai conditioning that involves practicing punches, kicks, knees, and elbow strikes with a trainer wearing thick pads covering the forearms and hands. These special pads (often referred to as Thai pads) are used to absorb the impact of the fighter's strikes and allow the fighter to react to the attacks of the pad holder in a live situation. The trainer will often also wear a belly pad around the abdominal area so that the fighter can attack with straight kicks or knees to the body at any time during the round. 
Focus mitts are specific to training a fighter's hand speed, punch combinations, timing, punching power, defense, and counter-punching, and may also be used to practice elbow strikes. Heavy bag training is a conditioning and power exercise that reinforces the techniques practiced on the pads. Sparring is a means to test technique, skills, range, strategy, and timing against a partner. Sparring is often a light-to-medium contact exercise because competitive fighters on a full schedule are not advised to risk injury by sparring hard. Specific tactics and strategies can be trained with sparring, including in-close fighting, clinching and kneeing only, cutting off the ring, or using reach and distance to keep an aggressive fighter away. Due to the rigorous training regimen (some Thai boxers fight almost every other week), professional boxers in Thailand have relatively short careers in the ring. Many retire from competition to begin instructing the next generation of Thai fighters. Most professional Thai boxers come from lower economic backgrounds, and the purse (after other parties get their cut) is sought as a means of support for the fighters and their families. Very few Thais from higher economic strata join the professional muay Thai ranks; they usually either do not practice the sport or practice it only as amateur boxers. The "mongkhon", or "mongkol" ('headband'), and "pra jiad" ('armbands') are often worn into the ring before the match begins. They originated when Siam was in a constant state of war. Young men would tear off pieces of a loved one's clothing (often a mother's "sarong") and wear it in battle for good luck as well as to ward off harmful spirits. In modern times the "mongkol" (lit. 'holy spirit', 'luck', 'protection') is worn as a tribute to the fighter's gym. The "mongkol" is traditionally presented by a trainer to the fighter when he judges that the fighter is ready to represent the gym in the ring. 
Often, after the fighter has finished the "wai kru", the trainer will take the "mongkol" off his head and place it in his corner of the ring for luck. The "mongkol" was also traditionally used for protection. Whether the fighter is a Buddhist or not, it is common for him to bring the "mongkol" to a Buddhist monk who blesses it for good luck prior to stepping into the ring. In 2016, 9,998 children under the age of 15 were registered with the Board of Boxing under the Sports Authority of Thailand, according to the Child Safety Promotion and Injury Prevention Research Centre (CSIP). On average, 420 young boxers registered with the board annually between 2007 and 2015. Some estimates put the number of child boxers nationwide at between 200,000 and 300,000, some as young as four years old. The Advanced Diagnostic Imaging Centre (AIMC) at Ramathibodi Hospital studied 300 child boxers aged under 15 with two to more than five years of experience, as well as 200 children who do not box. The findings show that child boxers not only sustain brain injuries but also have lower IQs, about 10 points below average levels. Moreover, IQ levels correlate with the length of their training. Beyond brain damage, young fighters sometimes die in the ring. 
In 2016, Adisak Plitapolkarnpim, director of the CSIP, was quoted as saying that muay Thai practitioners "younger than 15 years old are being urged to avoid 'head contact' to reduce the risk of brain injuries, while children aged under nine should be banned from the combat fight", and that the Boxing Act's minimum age to compete professionally was largely being flouted. "Boxers aged between 13 and 15" should still be permitted to compete, he said, but "with light contact to the head and face". He added: "Spectators and a change in the boxing rules can play a vital role in preventing child boxers from suffering brain injuries, abnormality in brain structure, Parkinson's disease and early-onset Alzheimer's later in life...Children aged between nine and 15 can take part in [Thai] boxing, but direct head contact must not be allowed". Referring to the 2014 "Findings on the Worst Forms of Child Labour" published by the US Department of Labor's Bureau of International Labor Affairs, he said, "We know Muay Thai paid fighters have been exploited in the past like child labourers and the matter still remains a serious concern". At the 13th World Conference on Injury Prevention and Safety Promotion in 2018, it was claimed that up to three percent of the upcoming generation will grow up with learning disabilities unless an amendment banning children under 12 from participating in boxing matches is ratified. International pediatricians have called on lawmakers in Thailand to help. Muay Thai utilizes eight different points of contact (fists, elbows, knees, and shins); as a result, injuries are quite common at all levels of the sport. An injury is considered reportable if it requires the athlete to rest for more than one day. 
Many injuries in muay Thai go unreported because fighters may not notice an injury at first, may refuse to admit they need treatment, may have a heightened pain threshold, may fear that their instructor will perceive the injury negatively, or may be unsure what counts as an injury. As in most sports, injury rates tend to be higher in beginners than in amateurs and professionals. Soft tissue injuries are the most common form of injury in muay Thai, contributing between 80 and 90% of all injuries. These injuries are caused by repeated trauma to soft parts of the body. During matches there is little to no padding, which leaves soft tissue vulnerable to strikes. The second most common injuries among beginner and amateur muay Thai fighters are sprains and strains. It appears that these injuries can be easily avoided or reduced; many participants of a study admitted to an inadequate warm-up before the event of the injury. The third most common injuries are fractures. Fractures are more commonly seen in amateur and professional fighters, because they are allowed full contact while beginners are not. The most common sites for fractures are the nose, carpal bones, metacarpals, digits, and ribs. The distribution of injuries differs significantly among the three groups (beginner, amateur, and professional). This is to be expected: as a person progresses through the levels, the forces involved are much higher, less padding and protective equipment is used, and athletes are likely to train harder, resulting in more serious injuries among experienced fighters. According to a "Bangkok Post" columnist, "...Thai professional boxing is all about gambling and big money. Gambling on muay Thai boxing is estimated to be worth about 40 billion baht a year...all the talk about the promotion of Thai martial arts is just baloney." Rob Cox, the manager of a boxing camp just east of Bangkok, claims that, "Without the gamblers, the sport would pretty much be dead. 
They're killing it off, but they're also keeping it alive." The practice of fixing fights is not unknown. Boxers can earn from 60,000 to 150,000 baht for purposefully losing a fight. A fighter, later arrested, who threw a fight at Rajadamnern Stadium in December 2019 is only the most recent example. An infamous alleged case of match-fixing was the bout on 12 October 2014 in Pattaya between top Thai boxer Buakaw Banchamek and his challenger, Enriko Kehl, at the K-1 World Max Final event. Formal muay Thai techniques are divided into two groups: "mae mai", or 'major techniques', and "luk mai", or 'minor techniques'. Muay Thai is often a fighting art of attrition, where opponents exchange blows with one another. This is certainly the case with traditional stylists in Thailand, but it is a less popular form of fighting on the contemporary world fighting circuit, where the Thai style of exchanging blow for blow is no longer favorable. Almost all techniques in muay Thai use the entire body's movement, rotating the hip with each kick, punch, elbow and block. The punch techniques in muay Thai were originally quite limited: crosses and a long (or lazy) circular strike made with a straight (but not locked) arm and landing with the heel of the palm. Cross-fertilization with Western boxing and Western martial arts means the full range of Western boxing punches is now used: lead jab, straight/cross, hook, uppercut, shovel and corkscrew punches and overhands, as well as hammer fists and back fists. As a tactic, body punching is used less in muay Thai than in most other striking combat sports, to avoid exposing the attacker's head to counter-strikes from knees or elbows. To utilize the range of targeting points, in keeping with the center-line theory, the fighter can use either the Western or Thai stance, which allows for either long-range or short-range attacks to be undertaken effectively without compromising guard. 
The elbow can be used in several ways as a striking weapon: horizontal, diagonal-upwards, diagonal-downwards, uppercut, downward, backward-spinning and flying. From the side, it can be used as either a finishing move or as a way to cut the opponent's eyebrow so that blood might block his vision. The diagonal elbows are faster than the other forms but are less powerful. The elbow strike is considered the most dangerous form of attack in the sport. There is a distinct difference between a single elbow and a follow-up elbow. The single elbow is a move independent of any other, whereas a follow-up elbow is the second strike from the same arm: a hook or straight punch first, with an elbow follow-up. Such elbows, and most other elbow strikes, are used when the distance between fighters becomes too small and there is too little space to throw a hook at the opponent's head. Elbows can be used to great effect as blocks or defenses against, for example, spring knees, side body knees, body kicks or punches. When well connected, an elbow strike can cause serious damage to the opponent, including cuts or even a knockout. The two most common kicks in muay Thai are known as the "thip" (literally 'foot jab') and the "te chiang" (kicking upwards in the shape of a triangle, cutting under the arm and ribs), or roundhouse kick. The Thai roundhouse kick uses a rotational movement of the entire body and has been widely adopted by practitioners of other combat sports. It is done from a circular stance with the back leg slightly back (roughly shoulder-width apart), in comparison to instinctive upper-body fighting (boxing), where the legs must create a wider base. The roundhouse kick draws its power almost entirely from the rotational movement of the hips; counter-rotation of the shoulders and arms is also often used to add torque to the lower body and increase the power of the kick. 
If a roundhouse kick is attempted by the opponent, the Thai boxer will normally check the kick; that is, he will block the kick with the outside of his lower leg. Thai boxers are trained to always connect with the shin. The foot contains many fine bones and is much weaker, so a fighter may end up hurting himself if he tries to strike with his foot or instep. Shins are trained by repeatedly striking firm objects, such as pads or heavy bags. The foot-thrust, literally 'foot jab', is one of the techniques in muay Thai. It is mainly used as a defensive technique to control distance or block attacks. Foot-thrusts should be thrown quickly but with enough force to knock an opponent off balance. In Western boxing, the two fighters are separated when they clinch; in muay Thai, however, they are not. It is often in the clinch where knee and elbow techniques are used. To strike and bind the opponent for both offensive and defensive purposes, small amounts of stand-up grappling are used in the clinch. The front clinch should be performed with the palm of one hand on the back of the other. There are three reasons why the fingers must not be intertwined. 1) In the ring, fighters are wearing boxing gloves and cannot intertwine their fingers. 2) The Thai front clinch involves pressing the head of the opponent downwards, which is easier if the hands are locked behind the back of the head instead of behind the neck. Furthermore, the arms should put as much pressure on the neck as possible. 3) A fighter may injure one or more fingers if they are intertwined, and it becomes more difficult to release the grip in order to quickly elbow the opponent's head. A correct clinch also involves the fighter's forearms pressing against the opponent's collarbone while the hands are around the opponent's head rather than the opponent's neck. 
The general way to get out of a clinch is to push the opponent's head backward or elbow them, as the clinch requires both participants to be very close to one another. Additionally, the non-dominant clincher can try to "swim" their arm underneath and inside the opponent's clinch, establishing the previously non-dominant clincher as the dominant clincher. Muay Thai has several other variants of the clinch, or "chap kho", including: Defenses in muay Thai are categorized into six groups: Defensively, the concept of the "wall of defense" is used, in which shoulders, arms and legs are used to hinder the attacker from successfully executing techniques. Blocking is a critical element in muay Thai and compounds the level of conditioning a successful practitioner must possess. Low and mid-body roundhouse kicks are normally blocked with the upper portion of a raised shin (this block is known as a 'check'). High body strikes are blocked ideally with the forearms and shoulder together, or, if enough time is allowed for a parry, the glove (elusively), elbow, or shin will be used. Midsection roundhouse kicks can also be caught/trapped, allowing for a sweep or counter-attack to the remaining leg of the opponent. Punches are blocked with an ordinary boxing guard and techniques similar, if not identical, to basic boxing technique. A common means of blocking a punch is using the hand on the same side as the oncoming punch. For example, if an orthodox fighter throws a jab (being the left hand), the defender will make a slight tap to redirect the punch's angle with the right hand. The deflection is always as small and precise as possible to avoid unnecessary energy expenditure and to return the hand to the guard as quickly as possible. Hooks are often blocked with a motion sometimes described as "combing the hair", that is, raising the elbow forward and effectively shielding the head with the forearm, flexed biceps and shoulder. 
More advanced muay Thai blocks are usually in the form of counter-strikes, using the opponent's weight (as they strike) to amplify the damage that the countering opponent can deliver. This requires impeccable timing and thus can generally only be learned through many repetitions. According to a folklore story told by Thai people since around 1767, at the time of the fall of the ancient Siamese capital of Ayutthaya, the invading Burmese troops rounded up thousands of Siamese citizens. They then organized a seven-day, seven-night religious festival in honor of Buddha's relics. The festivities included many forms of entertainment, such as costume plays, comedies and sword-fighting matches. According to the story, at one point King Mangra wanted to see how a Thai fighter would compare with practitioners of Burmese lethwei. Nai Khanomtom was selected to fight against the King's chosen champion, and the boxing ring was set up in front of the throne. Nai Khanomtom performed a traditional "wai kru" pre-fight dance to pay his respects to his teachers and ancestors, dancing around his opponent. The perplexed Burmese thought it was black magic. When the fight began, Nai Khanomtom charged out, using punches, kicks, elbows, and knees to pummel his opponent until he collapsed. The referee supposedly said that "the Burmese champion was too distracted by the dance" and declared the knockout invalid. The King supposedly asked whether Nai Khanomtom would fight nine other Burmese champions to prove himself. He agreed and fought one after the other with no rest periods. His last opponent was a great kickboxing teacher from Rakhine State, whom Nai Khanomtom defeated with kicks. King Mangra was so impressed that he allegedly remarked that "Every part of the Siamese is blessed with venom. Even with his bare hands, he can fell nine or ten opponents. But his Lord was incompetent and lost the country to the enemy. 
If he had been any good, there was no way the City of Ayutthaya would ever have fallen." King Mangra granted Nai Khanomtom freedom along with his choice of either riches or two beautiful Burmese wives. Nai Khanomtom chose the wives as he said that money was easier to find. He then departed with his wives for Siam. Variations of this story had him also win the release of his fellow Thai prisoners. This tale is celebrated every 17 March as Boxer's Day or National Muay Boran Day in his honor. Today, some have wrongly attributed the legend of Nai Khanomtom to King Naresuan, who spent his youth as a royal hostage in Burma while Ayutthaya was a Burmese vassal. However, Nai Khanomtom and King Naresuan lived almost two centuries apart.
https://en.wikipedia.org/wiki?curid=19525
Mao Zedong Mao Zedong (December 26, 1893 – September 9, 1976), also known as Chairman Mao, was a Chinese communist revolutionary who became the founding father of the People's Republic of China (PRC), which he ruled as the chairman of the Communist Party of China from its establishment in 1949 until his death in 1976. Ideologically a Marxist–Leninist, his theories, military strategies, and political policies are collectively known as Maoism. Mao was the son of a prosperous peasant in Shaoshan, Hunan. He had a Chinese nationalist and an anti-imperialist outlook early in his life, and was particularly influenced by the events of the Xinhai Revolution of 1911 and May Fourth Movement of 1919. He later adopted Marxism–Leninism while working at Peking University, and became a founding member of the Communist Party of China (CPC), leading the Autumn Harvest Uprising in 1927. During the Chinese Civil War between the Kuomintang (KMT) and the CPC, Mao helped to found the Chinese Workers' and Peasants' Red Army, led the Jiangxi Soviet's radical land policies, and ultimately became head of the CPC during the Long March. Although the CPC temporarily allied with the KMT under the United Front during the Second Sino-Japanese War (1937–1945), China's civil war resumed after Japan's surrender, and in 1949 Mao's forces defeated the Nationalist government, which withdrew to Taiwan. On October 1, 1949, Mao proclaimed the foundation of the PRC, a single-party state controlled by the CPC. In the following years he solidified his control through campaigns against landlords, suppression of "counter-revolutionaries", the "Three-anti and Five-anti Campaigns", and through a psychological victory in the Korean War, which altogether caused the deaths of several million Chinese. 
From 1953 to 1958, Mao played an important role in enforcing a planned economy in China, constructing the first Constitution of the PRC, launching the industrialisation program, and initiating the "Two Bombs, One Satellite" project. On the other hand, in 1955–1957, Mao launched the Sufan movement and the Anti-Rightist Campaign, with at least 550,000 people persecuted in the latter, most of whom were intellectuals and dissidents. In 1958, he launched the Great Leap Forward, which aimed to rapidly transform China's economy from agrarian to industrial and which led to the deadliest famine in history and the deaths of 20–46 million people between 1958 and 1962. In 1963, Mao launched the Socialist Education Movement, and in 1966 he initiated the Cultural Revolution, a program to remove "counter-revolutionary" elements in Chinese society which lasted 10 years and was marked by violent class struggle, widespread destruction of cultural artifacts, and an unprecedented elevation of Mao's cult of personality. Tens of millions of people were persecuted during the Revolution, and the estimated number of deaths ranges from hundreds of thousands to millions, including Liu Shaoqi, the 2nd Chairman of the PRC. After years of ill health, Mao suffered a series of heart attacks in 1976 and died at the age of 82. During Mao's era, China's population grew from around 550 million to over 900 million, as the government did not strictly enforce its family planning policy, leaving Mao's successors such as Deng Xiaoping to adopt stricter policies to cope with the overpopulation crisis. A controversial figure, Mao is regarded as one of the most important and influential individuals in modern world history. He is also known as a political intellect, theorist, military strategist, poet, and visionary. During Mao's era, China was involved in the Korean War, the Sino-Soviet split, the Vietnam War, and the rise of the Khmer Rouge; in particular, in 1972, Mao welcomed U.S. 
President Richard Nixon in Beijing, signalling the start of a policy of opening China to the world. Supporters credit him with driving imperialism out of China, modernising the nation and building it into a world power, promoting the status of women, improving education and health care, and increasing the life expectancy of average Chinese. Conversely, his regime has been called autocratic and totalitarian, and condemned for bringing about mass repression and destroying religious and cultural artifacts and sites. It was additionally responsible for vast numbers of deaths, with estimates ranging from 30 to 80 million victims through starvation, persecution, prison labour and mass executions. During Mao's lifetime, the English-language media universally rendered his name as Mao Tse-tung, using the Wade–Giles system of transliteration for Standard Chinese, though with the circumflex accent in the syllable "Tsê" dropped. Due to its recognizability, the spelling was used widely, even by the Foreign Ministry of the PRC, after pinyin ("Hanyu Pinyin") became the PRC's official romanization system for Mandarin Chinese in 1958. For example, the well-known booklet of Mao's political statements, "The Little Red Book", was officially entitled "Quotations from Chairman Mao Tse-tung" in English translations. While the pinyin-derived spelling "Mao Zedong" is increasingly common, the Wade–Giles-derived spelling "Mao Tse-tung" continues to be used in modern publications to some extent. Mao Zedong was born on December 26, 1893 in Shaoshan village, Hunan. His father, Mao Yichang, was a formerly impoverished peasant who had become one of the wealthiest farmers in Shaoshan. Growing up in rural Hunan, Mao described his father as a stern disciplinarian, who would beat him and his three siblings, the boys Zemin and Zetan, as well as an adopted girl, Zejian. Mao's mother, Wen Qimei, was a devout Buddhist who tried to temper her husband's strict attitude. 
Mao too became a Buddhist, but abandoned this faith in his mid-teenage years. At age 8, Mao was sent to Shaoshan Primary School. Learning the value systems of Confucianism, he later admitted that he did not enjoy the classical Chinese texts preaching Confucian morals, instead favouring popular novels like "Romance of the Three Kingdoms" and "Water Margin". At age 13, Mao finished primary education, and his father arranged a marriage for him with the 17-year-old Luo Yixiu, thereby uniting their land-owning families. Mao refused to recognise her as his wife, becoming a fierce critic of arranged marriage and temporarily moving away. Luo was locally disgraced and died in 1910. While working on his father's farm, Mao read voraciously and developed a "political consciousness" from Zheng Guanying's booklet, which lamented the deterioration of Chinese power and argued for the adoption of representative democracy. Interested in history, Mao was inspired by the military prowess and nationalistic fervour of George Washington and Napoleon Bonaparte. His political views were shaped by Gelaohui-led protests which erupted following a famine in Changsha, the capital of Hunan; Mao supported the protesters' demands, but the armed forces suppressed the dissenters and executed their leaders. The famine spread to Shaoshan, where starving peasants seized his father's grain. He disapproved of their actions as morally wrong, but claimed sympathy for their situation. At age 16, Mao moved to a higher primary school in nearby Dongshan, where he was bullied for his peasant background. In 1911, Mao began middle school in Changsha. Revolutionary sentiment was strong in the city, where there was widespread animosity towards Emperor Puyi's absolute monarchy and many were advocating republicanism. The republicans' figurehead was Sun Yat-sen, an American-educated Christian who led the Tongmenghui society. 
In Changsha, Mao was influenced by Sun's newspaper, "The People's Independence" ("Minli bao"), and called for Sun to become president in a school essay. As a symbol of rebellion against the Manchu monarch, Mao and a friend cut off their queue pigtails, a sign of subservience to the emperor. Inspired by Sun's republicanism, armies rose up across southern China, sparking the Xinhai Revolution. Changsha's governor fled, leaving the city in republican control. Supporting the revolution, Mao joined the rebel army as a private soldier, but was not involved in fighting. The northern provinces remained loyal to the emperor, and hoping to avoid a civil war, Sun—proclaimed "provisional president" by his supporters—compromised with the monarchist general Yuan Shikai. The monarchy was abolished, creating the Republic of China, but the monarchist Yuan became president. The revolution over, Mao resigned from the army in 1912, after six months as a soldier. Around this time, Mao discovered socialism from a newspaper article; proceeding to read pamphlets by Jiang Kanghu, the student founder of the Chinese Socialist Party, Mao remained interested yet unconvinced by the idea. Over the next few years, Mao Zedong enrolled in and dropped out of a police academy, a soap-production school, a law school, an economics school, and the government-run Changsha Middle School. Studying independently, he spent much time in Changsha's library, reading core works of classical liberalism such as Adam Smith's "The Wealth of Nations" and Montesquieu's "The Spirit of the Laws", as well as the works of western scientists and philosophers such as Darwin, Mill, Rousseau, and Spencer. Viewing himself as an intellectual, years later he admitted that at this time he thought himself better than working people. 
He was inspired by Friedrich Paulsen, whose liberal emphasis on individualism led Mao to believe that strong individuals were not bound by moral codes but should strive for the greater good, and that the ends justified the means, a conclusion akin to consequentialism. His father saw no use in his son's intellectual pursuits, so he cut off his allowance and forced him to move into a hostel for the destitute. Mao desired to become a teacher and enrolled at the Fourth Normal School of Changsha, which soon merged with the First Normal School of Changsha, widely seen as the best in Hunan. Befriending Mao, professor Yang Changji urged him to read a radical newspaper, "New Youth" ("Xin qingnian"), the creation of his friend Chen Duxiu, a dean at Peking University. Although a Chinese nationalist, Chen argued that China must look to the west to cleanse itself of superstition and autocracy. Mao published his first article in "New Youth" in April 1917, instructing readers to increase their physical strength to serve the revolution. He joined the Society for the Study of Wang Fuzhi ("Chuan-shan Hsüeh-she"), a revolutionary group founded by Changsha literati who wished to emulate the philosopher Wang Fuzhi. In his first school year, Mao befriended an older student, Xiao Zisheng; together they went on a walking tour of Hunan, begging and writing literary couplets to obtain food. A popular student, in 1915 Mao was elected secretary of the Students Society. He organized the Association for Student Self-Government and led protests against school rules. In spring 1917, he was elected to command the students' volunteer army, set up to defend the school from marauding soldiers. Increasingly interested in the techniques of war, he took a keen interest in World War I, and also began to develop a sense of solidarity with workers. 
Mao undertook feats of physical endurance with Xiao Zisheng and Cai Hesen, and with other young revolutionaries they formed the Renovation of the People Study Society in April 1918 to debate Chen Duxiu's ideas. Desiring personal and societal transformation, the Society gained 70–80 members, many of whom would later join the Communist Party. Mao graduated in June 1919, ranked third in the year. Mao moved to Beijing, where his mentor Yang Changji had taken a job at Peking University. Yang thought Mao exceptionally "intelligent and handsome", securing him a job as assistant to the university librarian Li Dazhao, who would become an early Chinese Communist. Li authored a series of "New Youth" articles on the October Revolution in Russia, during which the Communist Bolshevik Party under the leadership of Vladimir Lenin had seized power. Lenin was an advocate of the socio-political theory of Marxism, first developed by the German sociologists Karl Marx and Friedrich Engels, and Li's articles added Marxism to the doctrines of the Chinese revolutionary movement. Becoming "more and more radical", Mao was initially influenced by Peter Kropotkin's anarchism, which was the most prominent radical doctrine of the day. Chinese anarchists, such as Cai Yuanpei, Chancellor of Peking University, called for complete social revolution in social relations, family structure, and women's equality, rather than the simple change in the form of government called for by earlier revolutionaries. He joined Li's Study Group and "developed rapidly toward Marxism" during the winter of 1919. Paid a low wage, Mao lived in a cramped room with seven other Hunanese students, but believed that Beijing's beauty offered "vivid and living compensation". At the university, Mao was snubbed by other students due to his rural Hunanese accent and lowly position. He joined the university's Philosophy and Journalism Societies and attended lectures and seminars by the likes of Chen Duxiu, Hu Shih, and Qian Xuantong. 
Mao's time in Beijing ended in the spring of 1919, when he travelled to Shanghai with friends who were preparing to leave for France. He did not return to Shaoshan, where his mother was terminally ill. She died in October 1919 and her husband died in January 1920. On May 4, 1919, students in Beijing gathered at the Gate of Heavenly Peace to protest the Chinese government's weak resistance to Japanese expansion in China. Patriots were outraged at the influence given to Japan in the Twenty-One Demands in 1915, the complicity of Duan Qirui's Beiyang Government, and the betrayal of China in the Treaty of Versailles, wherein Japan was allowed to receive territories in Shandong which had been surrendered by Germany. These demonstrations ignited the nationwide May Fourth Movement and fueled the New Culture Movement which blamed China's diplomatic defeats on social and cultural backwardness. In Changsha, Mao had begun teaching history at the Xiuye Primary School and organizing protests against the pro-Duan Governor of Hunan Province, Zhang Jingyao, popularly known as "Zhang the Venomous" due to his corrupt and violent rule. In late May, Mao co-founded the Hunanese Student Association with He Shuheng and Deng Zhongxia, organizing a student strike for June, and in July 1919 began production of a weekly radical magazine, "Xiang River Review" ("Xiangjiang pinglun"). Using vernacular language that would be understandable to the majority of China's populace, he advocated the need for a "Great Union of the Popular Masses" and for strengthened trade unions able to wage non-violent revolution. His ideas were not Marxist, but heavily influenced by Kropotkin's concept of mutual aid. Zhang banned the Student Association, but Mao continued publishing after assuming editorship of the liberal magazine "New Hunan" ("Xin Hunan") and offered articles in the popular local newspaper "Justice" ("Ta Kung Po"). 
Several of these advocated feminist views, calling for the liberation of women in Chinese society; Mao was influenced by his own forced arranged marriage. In December 1919, Mao helped organise a general strike in Hunan, securing some concessions, but he and other student leaders felt threatened by Zhang, and Mao returned to Beijing, visiting the terminally ill Yang Changji. Mao found that his articles had achieved a level of fame among the revolutionary movement, and set about soliciting support in overthrowing Zhang. Coming across newly translated Marxist literature by Thomas Kirkup, Karl Kautsky, and Marx and Engels—notably "The Communist Manifesto"—he came under their increasing influence, but was still eclectic in his views. Mao visited Tianjin, Jinan, and Qufu, before moving to Shanghai, where he worked as a laundryman and met Chen Duxiu, noting that Chen's adoption of Marxism "deeply impressed me at what was probably a critical period in my life". In Shanghai, Mao met an old teacher of his, Yi Peiji, a revolutionary and member of the Kuomintang (KMT), or Chinese Nationalist Party, which was gaining increasing support and influence. Yi introduced Mao to General Tan Yankai, a senior KMT member who held the loyalty of troops stationed along the Hunanese border with Guangdong. Tan was plotting to overthrow Zhang, and Mao aided him by organizing the Changsha students. In June 1920, Tan led his troops into Changsha, and Zhang fled. In the subsequent reorganization of the provincial administration, Mao was appointed headmaster of the junior section of the First Normal School. Now receiving a large income, he married Yang Kaihui in the winter of 1920. The Communist Party of China was founded by Chen Duxiu and Li Dazhao in the French concession of Shanghai in 1921 as a study society and informal network. 
Mao set up a Changsha branch, also establishing a branch of the Socialist Youth Corps and a Cultural Book Society which opened a bookstore to propagate revolutionary literature throughout Hunan. He was involved in the movement for Hunan autonomy, in the hope that a Hunanese constitution would increase civil liberties and make his revolutionary activity easier. When the movement was successful in establishing provincial autonomy under a new warlord, Mao forgot his involvement. By 1921, small Marxist groups existed in Shanghai, Beijing, Changsha, Wuhan, Guangzhou, and Jinan; it was decided to hold a central meeting, which began in Shanghai on July 23, 1921. The first session of the National Congress of the Communist Party of China was attended by 13 delegates, Mao included. After the authorities sent a police spy to the congress, the delegates moved to a boat on South Lake near Jiaxing, in Zhejiang, to escape detection. Although Soviet and Comintern delegates attended, the first congress ignored Lenin's advice to accept a temporary alliance between the Communists and the "bourgeois democrats" who also advocated national revolution; instead they stuck to the orthodox Marxist belief that only the urban proletariat could lead a socialist revolution. Mao was now party secretary for Hunan stationed in Changsha, and to build the party there he followed a variety of tactics. In August 1921, he founded the Self-Study University, through which readers could gain access to revolutionary literature, housed in the premises of the Society for the Study of Wang Fuzhi, a Qing dynasty Hunanese philosopher who had resisted the Manchus. He joined the YMCA Mass Education Movement to fight illiteracy, though he edited the textbooks to include radical sentiments. He continued organizing workers to strike against the administration of Hunan Governor Zhao Hengti. Yet labor issues remained central. 
The successful and famous Anyuan coal mine strikes (contrary to later Party historians' accounts) depended on both "proletarian" and "bourgeois" strategies. Liu Shaoqi, Li Lisan, and Mao not only mobilised the miners, but formed schools and cooperatives and engaged local intellectuals, gentry, military officers, merchants, Red Gang dragon heads and even church clergy. Mao claimed that he missed the July 1922 Second Congress of the Communist Party in Shanghai because he lost the address. Adopting Lenin's advice, the delegates agreed to an alliance with the "bourgeois democrats" of the KMT for the good of the "national revolution". Communist Party members joined the KMT, hoping to push its politics leftward. Mao enthusiastically agreed with this decision, arguing for an alliance across China's socio-economic classes. Mao was a vocal anti-imperialist, and in his writings he lambasted the governments of Japan, the UK, and the US, describing the latter as "the most murderous of hangmen". At the Third Congress of the Communist Party in Shanghai in June 1923, the delegates reaffirmed their commitment to working with the KMT. Supporting this position, Mao was elected to the Party Committee, taking up residence in Shanghai. At the First KMT Congress, held in Guangzhou in early 1924, Mao was elected an alternate member of the KMT Central Executive Committee, and put forward four resolutions to decentralise power to urban and rural bureaus. His enthusiastic support for the KMT earned him the suspicion of Li Lisan, his Hunan comrade. In late 1924, Mao returned to Shaoshan, perhaps to recuperate from an illness. He found that the peasantry were increasingly restless and some had seized land from wealthy landowners to found communes. This convinced him of the revolutionary potential of the peasantry, an idea advocated by the KMT leftists but not the Communists. He returned to Guangzhou to run the 6th term of the KMT's Peasant Movement Training Institute from May to September 1926. 
The Peasant Movement Training Institute under Mao trained cadres and prepared them for militant activity, taking them through military training exercises and getting them to study basic left-wing texts. In the winter of 1925, Mao fled to Guangzhou after his revolutionary activities attracted the attention of Zhao's regional authorities. When party leader Sun Yat-sen died in March 1925, he was succeeded by Chiang Kai-shek, who moved to marginalise the left-KMT and the Communists. Mao nevertheless supported Chiang's National Revolutionary Army, which embarked in 1926 on the Northern Expedition against the warlords. In the wake of this expedition, peasants rose up, appropriating the land of the wealthy landowners, who were in many cases killed. Such uprisings angered senior KMT figures, who were themselves landowners, emphasizing the growing class and ideological divide within the revolutionary movement. In March 1927, Mao appeared at the Third Plenum of the KMT Central Executive Committee in Wuhan, which sought to strip General Chiang of his power by appointing Wang Jingwei leader. There, Mao played an active role in the discussions regarding the peasant issue, defending a set of "Regulations for the Repression of Local Bullies and Bad Gentry", which advocated the death penalty or life imprisonment for anyone found guilty of counter-revolutionary activity, arguing that in a revolutionary situation, "peaceful methods cannot suffice". In April 1927, Mao was appointed to the KMT's five-member Central Land Committee, urging peasants to refuse to pay rent. Mao led another group to put together a "Draft Resolution on the Land Question", which called for the confiscation of land belonging to "local bullies and bad gentry, corrupt officials, militarists and all counter-revolutionary elements in the villages". 
Proceeding to carry out a "Land Survey", he stated that anyone owning over 30 "mou" (four and a half acres), constituting 13% of the population, was uniformly counter-revolutionary. He accepted that there was great variation in revolutionary enthusiasm across the country, and that a flexible policy of land redistribution was necessary. When he presented his conclusions at the Enlarged Land Committee meeting, many expressed reservations, some believing that the proposals went too far, and others not far enough. Ultimately, his suggestions were only partially implemented. Fresh from the success of the Northern Expedition against the warlords, Chiang turned on the Communists, who by now numbered in the tens of thousands across China. Chiang ignored the orders of the Wuhan-based left KMT government and marched on Shanghai, a city controlled by Communist militias. As the Communists awaited Chiang's arrival, he loosed the White Terror, massacring 5,000 with the aid of the Green Gang. In Beijing, 19 leading Communists were killed by Zhang Zuolin. That May, tens of thousands of Communists and those suspected of being communists were killed, and the CPC lost a large proportion of its members. The CPC continued supporting the Wuhan KMT government, a position Mao initially supported, but by the time of the CPC's Fifth Congress he had changed his mind, deciding to stake all hope on the peasant militia. The question was rendered moot when the Wuhan government expelled all Communists from the KMT on July 15. The CPC founded the Workers' and Peasants' Red Army of China, better known as the "Red Army", to battle Chiang. A battalion led by General Zhu De was ordered to take the city of Nanchang on August 1, 1927, in what became known as the Nanchang Uprising. They were initially successful, but were forced into retreat after five days, marching south to Shantou, and from there they were driven into the wilderness of Fujian. 
Mao was appointed commander-in-chief of the Red Army and led four regiments against Changsha in the Autumn Harvest Uprising, in the hope of sparking peasant uprisings across Hunan. On the eve of the attack, Mao composed a poem—the earliest of his to survive—titled "Changsha". His plan was to attack the KMT-held city from three directions on September 9, but the Fourth Regiment deserted to the KMT cause, attacking the Third Regiment. Mao's army made it to Changsha, but could not take it; by September 15, he accepted defeat and with 1000 survivors marched east to the Jinggang Mountains of Jiangxi. Jung Chang and Jon Halliday claim that the uprising was in fact sabotaged by Mao to allow him to prevent a group of KMT soldiers from defecting to any other CPC leader. Chang and Halliday also claim that Mao talked the other leaders (including Russian diplomats at the Soviet consulate in Changsha who, Chang and Halliday claim, had been controlling much of the CPC activity) into striking only at Changsha, then abandoning it. Chang and Halliday report a view sent to Moscow by the secretary of the Soviet Consulate in Changsha that the retreat was "the most despicable treachery and cowardice." The CPC Central Committee, hiding in Shanghai, expelled Mao from their ranks and from the Hunan Provincial Committee, as punishment for his "military opportunism", for his focus on rural activity, and for being too lenient with "bad gentry". They nevertheless adopted three policies he had long championed: the immediate formation of Workers' councils, the confiscation of all land without exemption, and the rejection of the KMT. Mao's response was to ignore them. He established a base in Jinggangshan City, an area of the Jinggang Mountains, where he united five villages as a self-governing state, and supported the confiscation of land from rich landlords, who were "re-educated" and sometimes executed. 
He ensured that no massacres took place in the region, and pursued a more lenient approach than that advocated by the Central Committee. Proclaiming that "even the lame, the deaf and the blind could all come in useful for the revolutionary struggle", he boosted the army's numbers, incorporating two groups of bandits into his forces. He laid down rules for his soldiers: prompt obedience to orders, all confiscations were to be turned over to the government, and nothing was to be confiscated from poorer peasants. In doing so, he molded his men into a disciplined, efficient fighting force. In spring 1928, the Central Committee ordered Mao's troops to southern Hunan, hoping to spark peasant uprisings. Mao was skeptical, but complied. They reached Hunan, where they were attacked by the KMT and fled after heavy losses. Meanwhile, KMT troops had invaded Jinggangshan, leaving them without a base. Wandering the countryside, Mao's forces came across a CPC regiment led by General Zhu De and Lin Biao; they united, and attempted to retake Jinggangshan. They were initially successful, but the KMT counter-attacked, and pushed the CPC back; over the next few weeks, they fought an entrenched guerrilla war in the mountains. The Central Committee again ordered Mao to march to south Hunan, but he refused, and remained at his base. Contrastingly, Zhu complied, and led his armies away. Mao's troops fended the KMT off for 25 days while he left the camp at night to find reinforcements. He reunited with Zhu's decimated army, and together they returned to Jinggangshan and retook the base. There they were joined by a defecting KMT regiment and Peng Dehuai's Fifth Red Army. In the mountainous area they were unable to grow enough crops to feed everyone, leading to food shortages throughout the winter. 
In January 1929, Mao and Zhu evacuated the base with 2,000 men and a further 800 provided by Peng, and took their armies south, to the area around Tonggu and Xinfeng in Jiangxi. The evacuation led to a drop in morale, and many troops became disobedient and began thieving; this worried Li Lisan and the Central Committee, who saw Mao's army as "lumpenproletariat" unable to share in proletarian class consciousness. In keeping with orthodox Marxist thought, Li believed that only the urban proletariat could lead a successful revolution, and saw little need for Mao's peasant guerrillas; he ordered Mao to disband his army into units to be sent out to spread the revolutionary message. Mao replied that while he concurred with Li's theoretical position, he would not disband his army nor abandon his base. Both Li and Mao saw the Chinese revolution as the key to world revolution, believing that a CPC victory would spark the overthrow of global imperialism and capitalism. In this, they disagreed with the official line of the Soviet government and Comintern. Officials in Moscow desired greater control over the CPC and removed Li from power by calling him to Russia for an inquest into his errors. They replaced him with Soviet-educated Chinese Communists, known as the "28 Bolsheviks", two of whom, Bo Gu and Zhang Wentian, took control of the Central Committee. Mao disagreed with the new leadership, believing they grasped little of the Chinese situation, and he soon emerged as their key rival. In February 1930, Mao created the Southwest Jiangxi Provincial Soviet Government in the region under his control. In November, he suffered emotional trauma after his wife and sister were captured and beheaded by KMT general He Jian. Mao then married He Zizhen, an 18-year-old revolutionary who bore him five children over the following nine years. Facing internal problems, members of the Jiangxi Soviet accused him of being too moderate, and hence anti-revolutionary. 
In December, they tried to overthrow Mao, resulting in the Futian incident, during which Mao's loyalists tortured many and executed between 2,000 and 3,000 dissenters. The CPC Central Committee moved to Jiangxi, which it saw as a secure area. In November it proclaimed Jiangxi to be the Soviet Republic of China, an independent Communist-governed state. Although he was proclaimed Chairman of the Council of People's Commissars, Mao's power was diminished, as his control of the Red Army was allocated to Zhou Enlai. Meanwhile, Mao recovered from tuberculosis. The KMT armies adopted a policy of encirclement and annihilation of the Red armies. Outnumbered, Mao responded with guerrilla tactics influenced by the works of ancient military strategists like Sun Tzu, but Zhou and the new leadership followed a policy of open confrontation and conventional warfare. In doing so, the Red Army successfully defeated the first and second encirclements. Angered at his armies' failure, Chiang Kai-shek personally arrived to lead the operation. He too faced setbacks and retreated to deal with the further Japanese incursions into China. As a result of the KMT's change of focus to the defence of China against Japanese expansionism, the Red Army was able to expand its area of control, eventually encompassing a population of 3 million. Mao proceeded with his land reform program. In November 1931 he announced the start of a "land verification project" which was expanded in June 1933. He also orchestrated education programs and implemented measures to increase female political participation. Chiang viewed the Communists as a greater threat than the Japanese and returned to Jiangxi, where he initiated the fifth encirclement campaign, which involved the construction of a concrete and barbed wire "wall of fire" around the state, accompanied by aerial bombardment, against which Zhou's tactics proved ineffective. Trapped inside, morale among the Red Army dropped as food and medicine became scarce. 
The leadership decided to evacuate. On October 14, 1934, the Red Army broke through the KMT line on the Jiangxi Soviet's south-west corner at Xinfeng with its soldiers and party cadres and embarked on the "Long March". In order to make the escape, many of the wounded and the ill, as well as women and children, were left behind, defended by a group of guerrilla fighters whom the KMT massacred. Those who escaped headed to southern Hunan, first crossing the Xiang River after heavy fighting, and then the Wu River in Guizhou, where they took Zunyi in January 1935. Temporarily resting in the city, they held a conference; here, Mao was elected to a position of leadership, becoming Chairman of the Politburo and "de facto" leader of both Party and Red Army, in part because his candidacy was supported by Soviet Premier Joseph Stalin. Insisting that they operate as a guerrilla force, he laid out a destination: the Shenshi Soviet in Shaanxi, Northern China, from where the Communists could focus on fighting the Japanese. Mao believed that in focusing on the anti-imperialist struggle, the Communists would earn the trust of the Chinese people, who in turn would renounce the KMT. From Zunyi, Mao led his troops to Loushan Pass, where they faced armed opposition but successfully crossed the river. Chiang flew into the area to lead his armies against Mao, but the Communists outmanoeuvred him and crossed the Jinsha River. Faced with the more difficult task of crossing the Tatu River, they managed it by fighting a battle over the Luding Bridge in May, taking Luding. Marching through the mountain ranges around Ma'anshan, in Moukung, Western Szechuan, they encountered the CPC Fourth Front Army of Zhang Guotao, and together proceeded to Maoerhkai and then Gansu. Zhang and Mao disagreed over what to do; the latter wished to proceed to Shaanxi, while Zhang wanted to retreat east to Tibet or Sikkim, far from the KMT threat. 
It was agreed that they would go their separate ways, with Zhu De joining Zhang. Mao's forces proceeded north, through hundreds of kilometres of Grasslands, an area of quagmire where they were attacked by Manchu tribesmen and where many soldiers succumbed to famine and disease. Finally reaching Shaanxi, they fought off both the KMT and an Islamic cavalry militia before crossing the Min Mountains and Mount Liupan and reaching the Shenshi Soviet; only 7,000–8,000 had survived. The Long March cemented Mao's status as the dominant figure in the party. In November 1935, he was named chairman of the Military Commission. From this point onward, Mao was the Communist Party's undisputed leader, even though he would not become party chairman until 1943. Jung Chang and Jon Halliday offered an alternative account of many events during this period in their biography of Mao. For example, they claim that there was no battle at Luding and the CPC crossed the bridge unopposed, that the Long March was not a strategy of the CPC but was devised by Chiang Kai-shek, and that Mao and other top CPC leaders did not walk the Long March but were carried on litters. However, although well received in the popular press, Chang and Halliday's work has been highly criticized by professional historians. Mao's troops arrived at the Yan'an Soviet during October 1935 and settled in Pao An until spring 1936. While there, they developed links with local communities, redistributed and farmed the land, offered medical treatment, and began literacy programs. Mao's forces were now boosted by the arrival of He Long's men from Hunan and the armies of Zhu De and Zhang Guotao, returned from Tibet. In February 1936, they established the North West Anti-Japanese Red Army University in Yan'an, through which they trained increasing numbers of new recruits. In January 1937, they began the "anti-Japanese expedition", which sent groups of guerrilla fighters into Japanese-controlled territory to undertake sporadic attacks.
In May 1937, a Communist Conference was held in Yan'an to discuss the situation. Western reporters also arrived in the "Border Region" (as the Soviet had been renamed); most notable were Edgar Snow, who used his experiences as a basis for "Red Star Over China", and Agnes Smedley, whose accounts brought international attention to Mao's cause. On the Long March, Mao's wife He Zizen had suffered a shrapnel wound to the head. She traveled to Moscow for medical treatment; Mao proceeded to divorce her and marry an actress, Jiang Qing. Mao moved into a cave-house and spent much of his time reading, tending his garden and theorizing. He came to believe that the Red Army alone was unable to defeat the Japanese, and that a Communist-led "government of national defence" should be formed with the KMT and other "bourgeois nationalist" elements to achieve this goal. Although he despised Chiang Kai-shek as a "traitor to the nation", on May 5 he telegrammed the Military Council of the Nanking National Government proposing a military alliance, a course of action advocated by Stalin. Although Chiang intended to ignore Mao's message and continue the civil war, he was arrested by one of his own generals, Zhang Xueliang, in Xi'an, leading to the Xi'an Incident; Zhang forced Chiang to discuss the issue with the Communists, resulting in the formation of a United Front, with concessions on both sides, on December 25, 1937. The Japanese had taken both Shanghai and Nanking (Nanjing)—resulting in the Nanking Massacre, an atrocity Mao never spoke of for the rest of his life—and were pushing the Kuomintang government inland to Chungking. Japanese brutality led to increasing numbers of Chinese joining the fight, and the Red Army grew considerably. In August 1938, the Red Army formed the New Fourth Army and the Eighth Route Army, which were nominally under the command of Chiang's National Revolutionary Army.
In August 1940, the Red Army initiated the Hundred Regiments Campaign, in which troops attacked the Japanese simultaneously in five provinces. It was a military success that resulted in many Japanese deaths, the disruption of railways and the loss of a coal mine. From his base in Yan'an, Mao authored several texts for his troops, including "Philosophy of Revolution", which offered an introduction to the Marxist theory of knowledge; "Protracted Warfare", which dealt with guerrilla and mobile military tactics; and "New Democracy", which laid out ideas for China's future. In 1944, the Americans sent a special diplomatic envoy, called the Dixie Mission, to the Communist Party of China. According to Edwin Moise, in "Modern China: A History" (2nd edition): Most of the Americans were favourably impressed. The CPC seemed less corrupt, more unified, and more vigorous in its resistance to Japan than the KMT. United States fliers shot down over North China ... confirmed to their superiors that the CPC was both strong and popular over a broad area. In the end, the contacts which the USA developed with the CPC led to very little. After the end of World War II, the U.S. continued its military assistance to Chiang Kai-shek and his KMT government forces against the People's Liberation Army (PLA) led by Mao Zedong during the civil war. Likewise, the Soviet Union gave quasi-covert support to Mao through its occupation of north-east China, which allowed the PLA to move in en masse and take large supplies of arms left by the Japanese Kwantung Army. To enhance the Red Army's military operations, Mao, as Chairman of the Communist Party of China, named his close associate General Zhu De to be its Commander-in-Chief. In 1948, under direct orders from Mao, the People's Liberation Army starved out the Kuomintang forces occupying the city of Changchun. Large numbers of civilians are believed to have perished during the siege, which lasted from June until October.
PLA lieutenant colonel Zhang Zhenglu, who documented the siege in his book "White Snow, Red Blood", compared it to Hiroshima: "The casualties were about the same. Hiroshima took nine seconds; Changchun took five months." On January 21, 1949, Kuomintang forces suffered great losses in decisive battles against Mao's forces. In the early morning of December 10, 1949, PLA troops laid siege to Chongqing and Chengdu on mainland China, and Chiang Kai-shek fled from the mainland to Formosa (Taiwan). Mao proclaimed the establishment of the People's Republic of China from the Gate of Heavenly Peace (Tian'anmen) on October 1, 1949, and later that week declared "The Chinese people have stood up". Mao went to Moscow for long talks in the winter of 1949–50. Mao initiated the talks, which focused on the political and economic revolution in China, foreign policy, railways, naval bases, and Soviet economic and technical aid. The resulting treaty reflected Stalin's dominance and his willingness to help Mao. Mao pushed the Party to organize campaigns to reform society and extend control. These campaigns were given urgency in October 1950, when Mao made the decision to send the People's Volunteer Army, a special unit of the People's Liberation Army, into the Korean War to fight as well as to reinforce the armed forces of North Korea, the Korean People's Army, which had been in full retreat. The United States placed a trade embargo on the People's Republic as a result of its involvement in the Korean War, lasting until Richard Nixon's improvement of relations. At least 180,000 Chinese troops died during the war. Mao directed operations to the minutest detail. As Chairman of the Central Military Commission (CMC), he was also the Supreme Commander-in-Chief of the PLA and the People's Republic and Chairman of the Party.
Chinese troops in Korea were under the overall command of then newly installed Premier Zhou Enlai, with General Peng Dehuai as field commander and political commissar. During the land reform campaigns, large numbers of landlords and rich peasants were beaten to death at mass meetings organised by the Communist Party as land was taken from them and given to poorer peasants, which significantly reduced economic inequality. The Campaign to Suppress Counter-revolutionaries targeted and publicly executed former Kuomintang officials, businessmen accused of "disturbing" the market, former employees of Western companies and intellectuals whose loyalty was suspect. In 1976, the U.S. State Department estimated that as many as a million people were killed in the land reform, with further killings in the counter-revolutionary campaign. Mao himself claimed that hundreds of thousands of people were killed in attacks on "counter-revolutionaries" during the years 1950–1952. However, because there was a policy to select "at least one landlord, and usually several, in virtually every village for public execution", estimates of the number of deaths range between 2 million and 5 million. In addition, at least 1.5 million people, perhaps as many as 4 to 6 million, were sent to "reform through labour" camps where many perished. Mao played a personal role in organizing the mass repressions and established a system of execution quotas, which were often exceeded. He defended these killings as necessary for the securing of power. The Mao government is generally credited with eradicating both consumption and production of opium during the 1950s using unrestrained repression and social reform. Ten million addicts were forced into compulsory treatment, dealers were executed, and opium-producing regions were planted with new crops. Remaining opium production shifted south of the Chinese border into the Golden Triangle region.
Starting in 1951, Mao initiated two successive movements in an effort to rid urban areas of corruption by targeting wealthy capitalists and political opponents, known as the three-anti/five-anti campaigns. Whereas the three-anti campaign was a focused purge of government, industrial and party officials, the five-anti campaign set its sights slightly broader, targeting capitalist elements in general. Workers denounced their bosses, spouses turned on each other, and children informed on their parents; the victims were often humiliated at struggle sessions, where a targeted person would be verbally and physically abused until they confessed to crimes. Mao insisted that minor offenders be criticised and reformed or sent to labour camps, "while the worst among them should be shot". These campaigns took several hundred thousand additional lives, the vast majority via suicide. In Shanghai, suicide by jumping from tall buildings became so commonplace that residents avoided walking on the pavement near skyscrapers for fear that suicides might land on them. Some biographers have pointed out that driving those perceived as enemies to suicide was a common tactic during the Mao era. For example, in his biography of Mao, Philip Short notes that in the Yan'an Rectification Movement, Mao gave explicit instructions that "no cadre is to be killed", but in practice allowed security chief Kang Sheng to drive opponents to suicide, and that "this pattern was repeated throughout his leadership of the People's Republic". Following the consolidation of power, Mao launched the First Five-Year Plan (1953–1958), which aimed to end Chinese dependence upon agriculture in order to become a world power. With the Soviet Union's assistance, new industrial plants were built and agricultural production eventually fell to a point where industry was beginning to produce enough capital that China no longer needed the USSR's support.
The declared success of the First Five-Year Plan encouraged Mao to instigate the Second Five-Year Plan in 1958. Mao also launched a phase of rapid collectivization. The CPC introduced price controls as well as a simplification of Chinese characters aimed at increasing literacy. Large-scale industrialization projects were also undertaken. Programs pursued during this time include the Hundred Flowers Campaign, in which Mao indicated his supposed willingness to consider different opinions about how China should be governed. Given the freedom to express themselves, liberal and intellectual Chinese began opposing the Communist Party and questioning its leadership. This was initially tolerated and encouraged. After a few months, however, Mao's government reversed its policy and persecuted those who had criticised the party, as well as those who were merely alleged to have been critical, in what is called the Anti-Rightist Movement. Authors such as Jung Chang have alleged that the Hundred Flowers Campaign was merely a ruse to root out "dangerous" thinking. Li Zhisui, Mao's physician, suggested that Mao had initially seen the policy as a way of weakening opposition to him within the party and that he was surprised by the extent of criticism and the fact that it came to be directed at his own leadership. It was only then that he used it as a method of identifying and subsequently persecuting those critical of his government. The Hundred Flowers movement led to the condemnation, silencing, and death of many citizens, also linked to Mao's Anti-Rightist Movement, resulting in deaths possibly in the millions. In January 1958, Mao launched the second Five-Year Plan, known as the Great Leap Forward, a plan intended to turn China from an agrarian nation into an industrialized one, and as an alternative model for economic growth to the Soviet model focusing on heavy industry that was advocated by others in the party.
Under this economic program, the relatively small agricultural collectives that had been formed to date were rapidly merged into far larger people's communes, and many of the peasants were ordered to work on massive infrastructure projects and on the production of iron and steel. Some private food production was banned, and livestock and farm implements were brought under collective ownership. Under the Great Leap Forward, Mao and other party leaders ordered the implementation of a variety of unproven and unscientific new agricultural techniques by the new communes. The combined effect of the diversion of labour to steel production and infrastructure projects and cyclical natural disasters led to an approximately 15% drop in grain production in 1959, followed by a further 10% decline in 1960 and no recovery in 1961. In an effort to win favour with their superiors and avoid being purged, each layer in the party hierarchy exaggerated the amount of grain produced under them. Based upon the fabricated success, party cadres were ordered to requisition a disproportionately high amount of that fictitious harvest for state use, primarily for use in the cities and urban areas but also for export. The result, compounded in some areas by drought and in others by floods, was that rural peasants were left with little food for themselves and many millions starved to death in the Great Chinese Famine. China's population suffered from the Great Famine during the late 1950s and early 1960s, a result of shortfalls in the production and distribution of food. Urban residents were given food stamps each month, but the people of rural areas were expected to grow their own crops and give some of the crops back to the government. Deaths in the rural parts of China far outnumbered those in the cities.
Also, the government of China continued to export food to other countries during the Great Famine; this food could have been used to feed the starving citizens. These factors contributed to a catastrophic death toll, with some estimates reaching about 52 million citizens. The famine was a direct cause of the death of some 30 million Chinese peasants between 1959 and 1962. Further, many children who became emaciated and malnourished during years of hardship and struggle for survival died shortly after the Great Leap Forward came to an end in 1962. The extent of Mao's knowledge of the severity of the situation has been disputed. Mao's physician believed that he may have been unaware of the extent of the famine, partly due to a reluctance to criticise his policies and the willingness of his staff to exaggerate or outright fake reports regarding food production. Upon learning of the extent of the starvation, Mao vowed to stop eating meat, an action followed by his staff. Hong Kong-based historian Frank Dikötter challenged the notion that Mao did not know about the famine throughout the country until it was too late: The idea that the state mistakenly took too much grain from the countryside because it assumed that the harvest was much larger than it was is largely a myth—at most partially true for the autumn of 1958 only. In most cases the party knew very well that it was starving its own people to death. At a secret meeting in the Jinjiang Hotel in Shanghai on March 25, 1959, Mao specifically ordered the party to procure up to one third of all the grain, much more than had ever been the case. At the meeting he announced that "To distribute resources evenly will only ruin the Great Leap Forward. When there is not enough to eat, people starve to death. It is better to let half of the people die so that the other half can eat their fill." Professor Emeritus Thomas P.
Bernstein of Columbia University offered his view on Mao's statement on starvation in the March 25, 1959, meeting: Some scholars believe that this shows Mao's readiness to accept mass death on an immense scale. My own view is that this is an instance of Mao's use of hyperbole, another being his casual acceptance of death of half the population during a nuclear war. In other contexts, Mao did not in fact accept mass death. Zhou's Chronology shows that in October 1958, Mao expressed real concern that 40,000 people in Yunnan had starved to death (p. 173). Shortly after the March 25 meeting, he worried about 25.2 million people who were at risk of starvation. But from late summer on, Mao essentially forgot about this issue, until, as noted, the "Xinyang Incident" came to light in October 1960. In the article "Mao Zedong and the Famine of 1959–1960: A Study in Wilfulness", published in 2006 in "The China Quarterly", Professor Thomas P. Bernstein also discussed Mao's change of attitudes during different phases of the Great Leap Forward: In late autumn 1958, Mao Zedong strongly condemned widespread practices of the Great Leap Forward (GLF) such as subjecting peasants to exhausting labour without adequate food and rest, which had resulted in epidemics, starvation and deaths. At that time Mao explicitly recognized that anti-rightist pressures on officialdom were a major cause of "production at the expense of livelihood." While he was not willing to acknowledge that only abandonment of the GLF could solve these problems, he did strongly demand that they be addressed. After the July 1959 clash at Lushan with Peng Dehuai, Mao revived the GLF in the context of a new, extremely harsh anti-rightist campaign, which he relentlessly promoted into the spring of 1960 together with the radical policies that he previously condemned. Not until spring 1960 did Mao again express concern about abnormal deaths and other abuses, but he failed to apply the pressure needed to stop them. 
Given what he had already learned about the costs to the peasants of GLF extremism, the Chairman should have known that the revival of GLF radicalism would exact a similar or even bigger price. Instead, he wilfully ignored the lessons of the first radical phase for the sake of achieving extreme ideological and developmental goals. In his account of the famine, Jasper Becker notes that Mao was dismissive of reports he received of food shortages in the countryside and refused to change course, believing that peasants were lying and that rightists and kulaks were hoarding grain. He refused to open state granaries, and instead launched a series of "anti-grain concealment" drives that resulted in numerous purges and suicides. Other violent campaigns followed in which party leaders went from village to village in search of hidden food reserves, and not only grain, as Mao issued quotas for pigs, chickens, ducks and eggs. Many peasants accused of hiding food were tortured and beaten to death. Whatever the cause of the disaster, Mao lost esteem among many of the top party cadres. He was eventually forced to abandon the policy in 1962, and he lost political power to moderate leaders such as Liu Shaoqi and Deng Xiaoping. Mao, however, supported by national propaganda, claimed that he was only partly to blame for the famine. As a result, he was able to remain Chairman of the Communist Party, with the Presidency transferred to Liu Shaoqi. The Great Leap Forward was a tragedy for the vast majority of the Chinese. Although the steel quotas were officially reached, almost all of the supposed steel made in the countryside was iron, as it had been made from assorted scrap metal in home-made furnaces with no reliable source of fuel such as coal. This meant that proper smelting conditions could not be achieved. According to Zhang Rongmei, a geometry teacher in rural Shanghai during the Great Leap Forward: We took all the furniture, pots, and pans we had in our house, and all our neighbours did likewise.
We put everything in a big fire and melted down all the metal. The worst of the famine was steered towards enemies of the state. As Jasper Becker explains: The most vulnerable section of China's population, around five per cent, were those whom Mao called 'enemies of the people'. Anyone who had in previous campaigns of repression been labeled a 'black element' was given the lowest priority in the allocation of food. Landlords, rich peasants, former members of the nationalist regime, religious leaders, rightists, counter-revolutionaries and the families of such individuals died in the greatest numbers. At a large Communist Party conference in Beijing in January 1962, called the "Seven Thousand Cadres Conference", State Chairman Liu Shaoqi denounced the Great Leap Forward as responsible for widespread famine. The overwhelming majority of delegates expressed agreement, but Defense Minister Lin Biao staunchly defended Mao. A brief period of liberalization followed while Mao and Lin plotted a comeback. Liu Shaoqi and Deng Xiaoping rescued the economy by disbanding the people's communes, introducing elements of private control of peasant smallholdings and importing grain from Canada and Australia to mitigate the worst effects of famine. At the Lushan Conference in July/August 1959, several ministers expressed concern that the Great Leap Forward had not proved as successful as planned. The most direct of these was Minister of Defence and Korean War veteran General Peng Dehuai. Following Peng's criticism of the Great Leap Forward, Mao orchestrated a purge of Peng and his supporters, stifling criticism of the Great Leap policies. Senior officials who reported the truth of the famine to Mao were branded as "right opportunists." A campaign against right-wing opportunism was launched and resulted in party members and ordinary peasants being sent to prison labor camps where many would subsequently die in the famine. 
Years later the CPC would conclude that as many as six million people were wrongly punished in the campaign. The number of deaths by starvation during the Great Leap Forward is deeply controversial. Until the mid-1980s, when official census figures were finally published by the Chinese Government, little was known about the scale of the disaster in the Chinese countryside, as the handful of Western observers allowed access during this time had been restricted to model villages where they were deceived into believing that the Great Leap Forward had been a great success. There was also an assumption that the flow of individual reports of starvation that had been reaching the West, primarily through Hong Kong and Taiwan, must have been localised or exaggerated, as China was continuing to claim record harvests and was a net exporter of grain through the period. Because Mao wanted to repay the Soviets early for debts totalling 1.973 billion yuan, exports increased by 50% between 1960 and 1962, and fellow Communist regimes in North Korea, North Vietnam and Albania were provided grain free of charge. Censuses were carried out in China in 1953, 1964 and 1982. The first attempt to analyse this data to estimate the number of famine deaths was carried out by American demographer Dr. Judith Banister and published in 1984. Given the lengthy gaps between the censuses and doubts over the reliability of the data, an accurate figure is difficult to ascertain. Nevertheless, Banister concluded that the official data implied that around 15 million excess deaths occurred in China during 1958–61, and that based on her modelling of Chinese demographics during the period, and taking account of assumed under-reporting during the famine years, the figure was around 30 million. The official statistic is 20 million deaths, as given by Hu Yaobang.
Yang Jisheng, a former Xinhua News Agency reporter who had privileged access and connections available to no other scholars, estimates a death toll of 36 million. Frank Dikötter estimates that there were at least 45 million premature deaths attributable to the Great Leap Forward from 1958 to 1962. Various other sources have put the figure at between 20 and 46 million. On the international front, the period was dominated by the further isolation of China. The Sino-Soviet split resulted in Nikita Khrushchev's withdrawal of all Soviet technical experts and aid from the country. The split concerned the leadership of world communism. The USSR had a network of Communist parties it supported; China now created its own rival network to battle it out for local control of the left in numerous countries. Lorenz M. Lüthi argues: The Sino-Soviet split was one of the key events of the Cold War, equal in importance to the construction of the Berlin Wall, the Cuban Missile Crisis, the Second Vietnam War, and Sino-American rapprochement. The split helped to determine the framework of the Second Cold War in general, and influenced the course of the Second Vietnam War in particular. The split resulted from Nikita Khrushchev's more moderate Soviet leadership after the death of Stalin in March 1953. Only Albania openly sided with China, thereby forming an alliance between the two countries which would last until after Mao's death in 1976. Warned that the Soviets had nuclear weapons, Mao minimized the threat. Becker says that "Mao believed that the bomb was a 'paper tiger', declaring to Khrushchev that it would not matter if China lost 300 million people in a nuclear war: the other half of the population would survive to ensure victory". Stalin had established himself as the successor of "correct" Marxist thought well before Mao controlled the Communist Party of China, and therefore Mao never challenged the suitability of any Stalinist doctrine (at least while Stalin was alive). 
Upon the death of Stalin, Mao believed (perhaps because of seniority) that the leadership of Marxist doctrine would fall to him. The resulting tension between Khrushchev (at the head of a politically and militarily superior government) and Mao (believing he had a superior understanding of Marxist ideology) eroded the previous patron-client relationship between the Communist Party of the Soviet Union and the CPC. In China, the formerly favoured Soviets were now denounced as "revisionists" and listed alongside "American imperialism" as movements to oppose. Partly surrounded by hostile American military bases in South Korea, Japan, and Taiwan, China was now confronted with a new threat from the Soviet Union to the north and west. Both the internal crisis and the external threat called for extraordinary statesmanship from Mao, but as China entered the new decade the statesmen of China were in hostile confrontation with each other. During the early 1960s, Mao became concerned with the nature of post-1959 China. He saw that the revolution and Great Leap Forward had replaced the old ruling elite with a new one. He was concerned that those in power were becoming estranged from the people they were to serve. Mao believed that a revolution of culture would unseat and unsettle the "ruling class" and keep China in a state of "perpetual revolution" that, theoretically, would serve the interests of the majority, rather than a tiny and privileged elite. State Chairman Liu Shaoqi and General Secretary Deng Xiaoping favoured the idea that Mao be removed from actual power as China's head of state and government but maintain his ceremonial and symbolic role as Chairman of the Communist Party of China, with the party upholding all of his positive contributions to the revolution. They attempted to marginalise Mao by taking control of economic policy and asserting themselves politically as well.
Many claim that Mao responded to Liu and Deng's movements by launching the Great Proletarian Cultural Revolution in 1966. Some scholars, such as Mobo Gao, claim the case for this is overstated. Others, such as Frank Dikötter, hold that Mao launched the Cultural Revolution to wreak revenge on those who had dared to challenge him over the Great Leap Forward. Believing that certain liberal bourgeois elements of society continued to threaten the socialist framework, groups of young people known as the Red Guards struggled against authorities at all levels of society and even set up their own tribunals. Chaos reigned in much of the nation, and millions were persecuted. During the Cultural Revolution, nearly all of the schools and universities in China were closed, and the young intellectuals living in cities were ordered to the countryside to be "re-educated" by the peasants, where they performed hard manual labour and other work. The Cultural Revolution led to the destruction of much of China's traditional cultural heritage and the imprisonment of a huge number of Chinese citizens, as well as the creation of general economic and social chaos in the country. Millions of lives were ruined during this period, as the Cultural Revolution pierced into every part of Chinese life, as depicted by such Chinese films as "To Live", "The Blue Kite" and "Farewell My Concubine". It is estimated that hundreds of thousands of people, perhaps millions, perished in the violence of the Cultural Revolution. This included prominent figures such as Liu Shaoqi. When Mao was informed of such losses, particularly that people had been driven to suicide, he is alleged to have commented: "People who try to commit suicide—don't attempt to save them! ... China is such a populous nation, it is not as if we cannot do without a few people." The authorities allowed the Red Guards to abuse and kill opponents of the regime.
Said national police chief Xie Fuzhi: "Don't say it is wrong of them to beat up bad persons: if in anger they beat someone to death, then so be it." As a result, in August and September 1966, a reported 1,772 people were murdered by the Red Guards in Beijing alone. It was during this period that Mao chose Lin Biao, who seemed to echo all of Mao's ideas, to become his successor. Lin was later officially named as Mao's successor. By 1971, however, a divide between the two men had become apparent. Official history in China states that Lin was planning a military coup or an assassination attempt on Mao. Lin Biao died on September 13, 1971, in a plane crash in the airspace of Mongolia, presumably as he fled China, probably anticipating his arrest. The CPC declared that Lin was planning to depose Mao and posthumously expelled Lin from the party. At this time, Mao lost trust in many of the top CPC figures. The highest-ranking Soviet Bloc intelligence defector, Lt. Gen. Ion Mihai Pacepa, described his conversation with Nicolae Ceaușescu, who told him about a plot to kill Mao Zedong with the help of Lin Biao organised by the KGB. Despite being considered a feminist figure by some and a supporter of women's rights, documents released by the US Department of State in 2008 show that Mao declared women to be a "nonsense" in 1973, in conversation with Kissinger, joking that "China is a very poor country. We don't have much. What we have in excess is women... Let them go to your place. They will create disasters. That way you can lessen our burdens." When Mao offered 10 million women, Kissinger replied by saying that Mao was "improving his offer". Mao and Kissinger then agreed that their comments on women be removed from public records, prompted by a Chinese official who feared that Mao's comments might incur public anger if released.
In 1969, Mao declared the Cultural Revolution to be over, although various historians in and outside of China mark the end of the Cultural Revolution—as a whole or in part—in 1976, following Mao's death and the arrest of the Gang of Four. In the last years of his life, Mao was faced with declining health due to either Parkinson's disease or, according to his physician, amyotrophic lateral sclerosis, as well as lung ailments due to smoking and heart trouble. Some also attributed Mao's decline in health to the betrayal of Lin Biao. Mao remained passive as various factions within the Communist Party mobilised for the power struggle anticipated after his death. The Cultural Revolution is now officially regarded as a "severe setback" for the PRC. It is widely regarded in scholarly circles as a greatly disruptive period for China. While one-tenth of Chinese people—an estimated 100 million—did suffer during the period, some scholars, such as Lee Feigon and Mobo Gao, claim there were many great advances, and in some sectors the Chinese economy continued to outperform the West. They hold that the Cultural Revolution period laid the foundation for the spectacular growth that continues in China. During the Cultural Revolution, China detonated its first H-bomb (in 1967), launched the Dong Fang Hong satellite (on April 24, 1970), commissioned its first nuclear submarines and made various advances in science and technology. Healthcare was free, and living standards in the countryside continued to improve. In comparison, the Great Leap probably did cause a much larger loss of life through its flawed economic policies, which encompassed even the peasants. Estimates of the death toll during the Cultural Revolution, including civilians and Red Guards, vary greatly. An estimate of around 400,000 deaths is a widely accepted minimum figure, according to Maurice Meisner. 
MacFarquhar and Schoenhals assert that in rural China alone some 36 million people were persecuted, of whom between 750,000 and 1.5 million were killed, with roughly the same number permanently injured. In "Mao: The Unknown Story", Jung Chang and Jon Halliday claim that as many as 3 million people died in the violence of the Cultural Revolution. Historian Daniel Leese notes that in the 1950s Mao's personality was hardening. During his leadership, Mao traveled outside China on only two occasions, both state visits to the Soviet Union. The first visit was to celebrate the 70th birthday of Soviet leader Joseph Stalin, which was also attended by East German Deputy Chairman of the Council of Ministers Walter Ulbricht and Mongolian communist General Secretary Yumjaagiin Tsedenbal. The second visit to Moscow was a two-week state visit whose highlights included Mao's attendance at the 40th anniversary (Ruby Jubilee) celebrations of the October Revolution (he attended the annual military parade of the Moscow Garrison on Red Square as well as a banquet in the Moscow Kremlin) and the International Meeting of Communist and Workers Parties, where he met with other communist leaders such as North Korea's Kim Il-sung and Albania's Enver Hoxha. When Mao stepped down as head of state on April 27, 1959, further diplomatic state visits and travels abroad were undertaken by president Liu Shaoqi rather than Mao personally. Mao's health declined markedly in his last years. Smoking may have played an important role in this decline, for Mao was a heavy chain-smoker.
https://en.wikipedia.org/wiki?curid=19527
Mechanical engineering Mechanical engineering is an engineering branch that combines engineering physics and mathematics principles with materials science to design, analyze, manufacture, and maintain mechanical systems. It is one of the oldest and broadest of the engineering branches. The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity. In addition to these core principles, mechanical engineers use tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), and product lifecycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, transport systems, aircraft, watercraft, robotics, medical devices, weapons, and others. It is the branch of engineering that involves the design, production, and operation of machinery. Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, its development can be traced back several thousand years around the world. In the 19th century, developments in physics led to the development of mechanical engineering science. The field has continually evolved to incorporate advancements; today mechanical engineers are pursuing developments in such areas as composites, mechatronics, and nanotechnology. It also overlaps with aerospace engineering, metallurgical engineering, civil engineering, electrical engineering, manufacturing engineering, chemical engineering, industrial engineering, and other engineering disciplines to varying amounts. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, bionanotechnology, and modelling of biological systems. The application of mechanical engineering can be seen in the archives of various ancient and medieval societies. 
The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) were known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC. The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. In ancient Greece, the works of Archimedes (287–212 BC) influenced mechanics in the Western tradition. In Roman Egypt, Heron of Alexandria (c. 10–70 AD) created the first steam-powered device (the aeolipile). In China, Zhang Heng (78–139 AD) improved a water clock and invented a seismometer, and Ma Jun (200–265 AD) invented a chariot with differential gears. The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks. He also invented the world's first known endless power-transmitting chain drive. During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions to the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous "Book of Knowledge of Ingenious Mechanical Devices" in 1206 and presented many mechanical designs. Al-Jazari is also the first known person to create devices such as the crankshaft and camshaft, which now form the basis of many mechanisms. 
During the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England. Sir Isaac Newton formulated Newton's laws of motion and developed calculus, the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by colleagues such as Edmond Halley. Gottfried Wilhelm Leibniz is also credited with independently developing calculus during this period. During the early 19th-century Industrial Revolution, machine tools were developed in England, Germany, and Scotland. This allowed mechanical engineering to develop as a separate field within engineering, bringing with it manufacturing machines and the engines to power them. The first British professional society of mechanical engineers, the Institution of Mechanical Engineers, was formed in 1847, thirty years after the civil engineers formed the first such professional society, the Institution of Civil Engineers. On the European continent, Johann von Zimmermann (1820–1901) founded the first factory for grinding machines in Chemnitz, Germany in 1848. In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). The first schools in the United States to offer an engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science. Degrees in mechanical engineering are offered at various universities worldwide. Mechanical engineering programs typically take four to five years of study and result in a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. 
or B.S.), Bachelor of Science Engineering (B.Sc.Eng.), Bachelor of Technology (B.Tech.), Bachelor of Mechanical Engineering (B.M.E.), or Bachelor of Applied Science (B.A.Sc.) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most of South America, where neither B.S. nor B.Tech. programs have been adopted, the formal name for the degree is "Mechanical Engineer", and the coursework is based on five or six years of training. In Italy the coursework is based on five years of education and training, but in order to qualify as an engineer one has to pass a state exam at the end of the course. In Greece, the coursework is based on a five-year curriculum and the requirement of a 'Diploma' thesis, upon completion of which a 'Diploma' is awarded rather than a B.Sc. In the United States, most undergraduate mechanical engineering programs are accredited by the Accreditation Board for Engineering and Technology (ABET) to ensure similar course requirements and standards among universities. The ABET website lists 302 accredited mechanical engineering programs as of 11 March 2014. Mechanical engineering programs in Canada are accredited by the Canadian Engineering Accreditation Board (CEAB), and most other countries offering engineering degrees have similar accreditation societies. In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering (Mechanical) or similar nomenclature, although there are an increasing number of specialisations. The degree takes four years of full-time study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. Similar systems are also present in South Africa and are overseen by the Engineering Council of South Africa (ECSA). 
In India, to become an engineer, one needs to have an engineering degree such as a B.Tech or B.E., have a diploma in engineering, or complete a course in an engineering trade such as fitter at an Industrial Training Institute (ITI) to receive an "ITI Trade Certificate" and also pass the All India Trade Test (AITT) in an engineering trade, conducted by the National Council of Vocational Training (NCVT), upon which one is awarded a "National Trade Certificate". A similar system is used in Nepal. Some mechanical engineers go on to pursue a postgraduate degree such as a Master of Engineering, Master of Technology, Master of Science, Master of Engineering Management (M.Eng.Mgt. or M.E.M.), a Doctor of Philosophy in engineering (Eng.D. or Ph.D.) or an engineer's degree. The master's and engineer's degrees may or may not include research. The Doctor of Philosophy includes a significant research component and is often viewed as the entry point to academia. The engineer's degree exists at a few institutions at an intermediate level between the master's degree and the doctorate. Standards set by each country's accreditation society are intended to provide uniformity in fundamental subject material, promote competence among graduating engineers, and maintain confidence in the engineering profession as a whole. Engineering programs in the U.S., for example, are required by ABET to show that their students can "work professionally in both thermal and mechanical systems areas." The specific courses required to graduate, however, may differ from program to program. Universities and institutes of technology will often combine multiple subjects into a single class or split a subject into multiple classes, depending on the faculty available and the university's major area(s) of research. 
The fundamental subjects of mechanical engineering usually include mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity. Mechanical engineers are also expected to understand and be able to apply basic concepts from chemistry, physics, tribology, chemical engineering, civil engineering, and electrical engineering. All mechanical engineering programs include multiple semesters of mathematical classes including calculus, and advanced mathematical concepts including differential equations, partial differential equations, linear algebra, abstract algebra, and differential geometry, among others. In addition to the core mechanical engineering curriculum, many mechanical engineering programs offer more specialized programs and classes, such as control systems, robotics, transport and logistics, cryogenics, fuel technology, automotive engineering, biomechanics, vibration, optics and others, if a separate department does not exist for these subjects. Most mechanical engineering programs also require varying amounts of research or community projects to gain practical problem-solving experience. In the United States it is common for mechanical engineering students to complete one or more internships while studying, though this is not typically mandated by the university. Cooperative education is another option. Research on future work skills calls for study components that feed students' creativity and innovation. Mechanical engineers research, design, develop, build, and test mechanical and thermal devices, including tools, engines, and machines. Mechanical engineers design and oversee the manufacturing of many products ranging from medical devices to new batteries. They also design power-producing machines such as electric generators, internal combustion engines, and steam and gas turbines as well as power-using machines, such as refrigeration and air-conditioning systems. 
Like other engineers, mechanical engineers use computers to help create and analyze designs, run simulations and test how a machine is likely to work. Engineers may seek licensure from a state, provincial, or national government. The purpose of this process is to ensure that engineers possess the necessary technical knowledge, real-world experience, and knowledge of the local legal system to practice engineering at a professional level. Once certified, the engineer is given the title of Professional Engineer (in the United States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in the United Kingdom, Ireland, India and Zimbabwe), "Chartered Professional Engineer" (in Australia and New Zealand) or "European Engineer" (much of the European Union). In the U.S., to become a licensed Professional Engineer (PE), an engineer must pass the comprehensive FE (Fundamentals of Engineering) exam, work a minimum of 4 years as an "Engineering Intern (EI)" or "Engineer-in-Training (EIT)", and pass the "Principles and Practice" or PE (Practicing Engineer or Professional Engineer) exams. The requirements and steps of this process are set forth by the National Council of Examiners for Engineering and Surveying (NCEES), a body composed of engineering and land surveying licensing boards representing all U.S. states and territories. In the UK, current graduates require a BEng plus an appropriate master's degree or an integrated MEng degree, a minimum of four years of postgraduate on-the-job competency development, and a peer-reviewed project report to become a Chartered Mechanical Engineer (CEng, MIMechE) through the Institution of Mechanical Engineers. CEng MIMechE can also be obtained via an examination route administered by the City and Guilds of London Institute. In most developed countries, certain engineering tasks, such as the design of bridges, electric power plants, and chemical plants, must be approved by a professional engineer or a chartered engineer. 
"Only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients." This requirement can be written into state and provincial legislation, such as in the Canadian provinces, for example Ontario's or Quebec's Engineer Act. In other countries, such as Australia and the UK, no such legislation exists; however, practically all certifying bodies maintain a code of ethics, independent of legislation, that they expect all members to abide by or risk expulsion. The total number of engineers employed in the U.S. in 2015 was roughly 1.6 million. Of these, 278,340 were mechanical engineers (17.28%), the largest discipline by size. In 2012, the median annual income of mechanical engineers in the U.S. workforce was $80,580. The median income was highest when working for the government ($92,030), and lowest in education ($57,090). In 2014, the total number of mechanical engineering jobs was projected to grow 5% over the next decade. As of 2009, the average starting salary was $58,800 with a bachelor's degree. The field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. Several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. Some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. Most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on-the-job training than undergraduate research. Several specialized subdisciplines are discussed in this section. 
Mechanics is, in the most general sense, the study of forces and their effect upon matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include statics, dynamics, mechanics of materials, and fluid mechanics. Mechanical engineers typically use mechanics in the design or analysis phases of engineering. If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle (see HVAC), or to design the intake system for the engine. Mechatronics is a combination of mechanics and electronics. It is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid systems. In this way, machines can be automated through the use of electric motors, servo-mechanisms, and other electrical systems in conjunction with special software. A common example of a mechatronics system is a CD-ROM drive. Mechanical systems open and close the drive, spin the CD and move the laser, while an optical system reads the data on the CD and converts it to bits. Integrated software controls the process and communicates the contents of the CD to the computer. Robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. 
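The statics reasoning above (checking where stresses in a loaded member are most intense) can be reduced to a minimal numeric sketch. The load, cross-section, and material values below are illustrative assumptions, not figures from this article.

```python
# Minimal statics sketch: axial stress, strain, and elongation of a
# prismatic bar in pure tension. All numeric values are assumed.

def axial_response(force_n, area_m2, youngs_pa, length_m):
    """Return (stress in Pa, strain, elongation in m) for an axially loaded bar."""
    stress = force_n / area_m2       # sigma = F / A
    strain = stress / youngs_pa      # Hooke's law: eps = sigma / E
    elongation = strain * length_m   # delta = eps * L
    return stress, strain, elongation

stress, strain, delta = axial_response(
    force_n=50_000.0,   # 50 kN load (assumed)
    area_m2=1e-3,       # 10 cm^2 cross-section (assumed)
    youngs_pa=200e9,    # typical Young's modulus for steel
    length_m=2.0,       # 2 m bar (assumed)
)
print(f"stress = {stress/1e6:.1f} MPa, elongation = {delta*1000:.3f} mm")
# prints "stress = 50.0 MPa, elongation = 0.500 mm"
```

The same three relations underlie the frame-design example in the text: find the member with the highest stress, then check it against the material's allowable stress.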
To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in industrial engineering. They allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and to ensure better quality. Many companies employ assembly lines of robots, especially in the automotive industry, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications, from recreation to domestic use. Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail, and to fixing objects and improving their performance. Structural failures occur in two general modes: static failure and fatigue failure. "Static structural failure" occurs when, upon being loaded (having a force applied), the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. "Fatigue failure" occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure. Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause. Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. 
Engineers often use online documents and books such as those published by ASM to aid them in determining the type of failure and possible causes. Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests. Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use and transformation through a system. Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical work that eventually turns the wheels. Thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. Mechanical engineers use thermo-science to design engines and power plants, heating, ventilation, and air-conditioning (HVAC) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others. Drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. mechanical engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. 
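The energy-conversion idea described above (fuel enthalpy to heat to mechanical work) can be illustrated with a small sketch comparing a heat engine's first-law efficiency against its Carnot limit. All numbers below are assumed for illustration, not taken from the article.

```python
# Illustrative thermodynamics sketch: first-law efficiency of a heat
# engine versus the Carnot upper bound. Values are assumed.

def thermal_efficiency(work_out_j, heat_in_j):
    """Fraction of input heat converted to useful work (first law)."""
    return work_out_j / heat_in_j

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum possible efficiency between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

eta = thermal_efficiency(work_out_j=400.0, heat_in_j=1000.0)  # assumed engine
eta_max = carnot_efficiency(t_hot_k=900.0, t_cold_k=300.0)    # assumed reservoirs
print(f"actual eta = {eta:.2f}, Carnot limit = {eta_max:.3f}")
# prints "actual eta = 0.40, Carnot limit = 0.667"
```

No real engine reaches the Carnot limit; the gap between the two numbers is what heat-exchanger and cycle design tries to narrow.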
Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings. However, with the advent of computer numerically controlled (CNC) manufacturing, parts can now be fabricated without the need for constant technician input. Manual manufacturing is generally limited to spray coatings, surface finishes, and other processes that cannot economically or practically be done by a machine. Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD). Many mechanical engineering companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances. Other CAE programs commonly used by mechanical engineers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. 
No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows. As mechanical engineering begins to merge with other disciplines, as seen in mechatronics, multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also utilize sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems. Mechanical engineers are constantly pushing the boundaries of what is physically possible in order to produce safer, cheaper, and more efficient machines and mechanical systems. Some technologies at the cutting edge of mechanical engineering are listed below (see also exploratory engineering). Micron-scale mechanical components such as springs, gears, fluidic and heat transfer devices are fabricated from a variety of substrate materials such as silicon, glass and polymers like SU8. Examples of MEMS components are the accelerometers that are used as car airbag sensors, modern cell phones, gyroscopes for precise positioning and microfluidic devices used in biomedical applications. Friction stir welding, a new type of welding, was invented in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) welding technique joins materials previously unweldable, including several aluminum alloys. It plays an important role in the future construction of airplanes, potentially replacing rivets. 
Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, the Orion Crew Vehicle, the Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket, armor plating for amphibious assault ships, and welding the wings and fuselage panels of the new Eclipse 500 aircraft from Eclipse Aviation, among a growing pool of uses. Composites, or composite materials, are combinations of materials that provide physical characteristics different from those of either material alone. Composite material research within mechanical engineering typically focuses on designing (and, subsequently, finding applications for) stronger or more rigid materials while attempting to reduce weight, susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites, for instance, have been used in such diverse applications as spacecraft and fishing rods. Mechatronics is the synergistic combination of mechanical engineering, electronic engineering, and software engineering. The discipline of mechatronics began as a way to combine mechanical principles with electrical engineering. Mechatronic concepts are used in the majority of electro-mechanical systems. Typical electro-mechanical sensors used in mechatronics are strain gauges, thermocouples, and pressure transducers. At the smallest scales, mechanical engineering becomes nanotechnology—one speculative goal of which is to create a molecular assembler to build molecules and materials via mechanosynthesis. For now that goal remains within exploratory engineering. Areas of current mechanical engineering research in nanotechnology include nanofilters, nanofilms, and nanostructures, among others. Finite element analysis is a computational tool used to estimate stress, strain, and deflection of solid bodies. It uses a mesh of user-defined size to compute physical quantities at each node. The more nodes there are, the higher the precision. 
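The mesh-and-node idea behind finite element analysis can be sketched with the simplest possible case: a 1D axial bar, fixed at one end and loaded at the other, discretised into two-node linear elements. This is an illustrative toy model under assumed dimensions, modulus, and load, not code from any FEA package.

```python
# Toy 1D finite-element sketch: an axial bar fixed at the left end with a
# point load F at the right end, split into n two-node linear elements.
# Each element contributes a 2x2 stiffness (EA/h)*[[1,-1],[-1,1]] that is
# assembled into the global stiffness matrix K, then K u = f is solved.

def bar_tip_displacement(n_elems, length, E, A, F):
    h = length / n_elems
    k = E * A / h
    n_nodes = n_elems + 1
    # Assemble the global stiffness matrix (dense, for clarity).
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for e in range(n_elems):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    f = [0.0] * n_nodes
    f[-1] = F  # point load at the free end
    # Apply the fixed boundary condition u[0] = 0 by dropping row/column 0.
    K = [row[1:] for row in K[1:]]
    f = f[1:]
    # Solve K u = f by Gaussian elimination (K is symmetric positive definite).
    m = len(f)
    for i in range(m):
        for j in range(i + 1, m):
            r = K[j][i] / K[i][i]
            for c in range(i, m):
                K[j][c] -= r * K[i][c]
            f[j] -= r * f[i]
    u = [0.0] * m
    for i in reversed(range(m)):
        s = f[i] - sum(K[i][c] * u[c] for c in range(i + 1, m))
        u[i] = s / K[i][i]
    return u[-1]  # displacement at the loaded tip

# For a uniform bar the exact tip displacement is F*L/(E*A); linear
# elements recover it for any mesh density (assumed values below).
tip = bar_tip_displacement(n_elems=4, length=2.0, E=200e9, A=1e-4, F=1e4)
print(tip)  # close to F*L/(E*A) = 1e4 * 2 / (200e9 * 1e-4) = 1e-3 m
```

In real FEA the same assemble-and-solve pattern runs over millions of nodes in two or three dimensions, which is why the text's point about node count and precision matters.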
This field is not new, as the basis of finite element analysis (FEA), or the finite element method (FEM), dates back to 1941, but the evolution of computers has made FEA/FEM a viable option for the analysis of structural problems. Many commercial codes such as NASTRAN, ANSYS, and ABAQUS are widely used in industry for research and the design of components. Some 3D modeling and CAD software packages have added FEA modules. More recently, cloud simulation platforms such as SimScale have become more common. Other techniques such as the finite difference method (FDM) and the finite volume method (FVM) are employed to solve problems involving heat and mass transfer, fluid flows, fluid surface interaction, etc. Biomechanics is the application of mechanical principles to biological systems, such as humans, animals, plants, organs, and cells. Biomechanics also aids in creating prosthetic limbs and artificial organs for humans. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply accurate approximations to the mechanics of many biological systems. In the past decade, reverse engineering of materials found in nature, such as bone matter, has gained funding in academia. The structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. The goal is to replace crude steel with bio-material for structural design. Over the past decade, the finite element method (FEM) has also entered the biomedical sector, highlighting further engineering aspects of biomechanics. FEM has since established itself as an alternative to in vivo surgical assessment and gained wide acceptance in academia. The main advantage of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. 
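As a sketch of the finite difference method mentioned above for heat-transfer problems, the following toy example time-steps the 1D heat equation dT/dt = alpha * d2T/dx2 explicitly. The rod geometry, diffusivity, and boundary temperatures are assumed for illustration.

```python
# Toy finite-difference (FTCS) sketch for 1D transient heat conduction:
# a rod with both ends held at 0 degrees, starting from a hot interior.

def step_heat_1d(T, alpha, dx, dt):
    """One explicit update; stable when alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    new = T[:]  # end nodes stay fixed (Dirichlet boundary conditions)
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    return new

T = [0.0] + [100.0] * 9 + [0.0]  # 11 nodes; ends clamped at 0 (assumed)
alpha, dx, dt = 1e-4, 0.01, 0.4  # r = 0.4, within the stability limit
for _ in range(200):
    T = step_heat_1d(T, alpha, dx, dt)
# The interior cools monotonically toward the cold boundaries.
print(max(T))
```

The finite volume method follows the same discretise-and-march pattern but conserves fluxes across cell faces, which is why it dominates in CFD codes.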
This has led FE modelling to the point of becoming ubiquitous in several fields of biomechanics, and several projects have even adopted an open-source philosophy (e.g. BioSpine). Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. Initial validation of such software is performed using a wind tunnel, with final validation coming in full-scale testing, e.g. flight tests. Acoustical engineering, another sub-discipline of mechanical engineering, is the application of acoustics, the study of sound and vibration. Acoustical engineers work to reduce noise pollution in mechanical devices and in buildings by soundproofing or removing sources of unwanted noise. The study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing the sound quality of an orchestra hall. Acoustical engineering also deals with the vibration of different mechanical systems. Manufacturing engineering, aerospace engineering and automotive engineering are sometimes grouped with mechanical engineering; a bachelor's degree in these areas will typically differ by only a few specialized classes.
https://en.wikipedia.org/wiki?curid=19528
Monkey Island (series) Monkey Island is a series of adventure games. The first four games in the series were produced and published by LucasArts, formerly known as Lucasfilm Games. The fifth installment of the franchise was developed by Telltale Games in collaboration with LucasArts. The games follow the misadventures of the hapless Guybrush Threepwood as he struggles to become the most notorious pirate in the Caribbean, defeat the plans of the evil undead pirate LeChuck and win the heart of governor Elaine Marley. Each game's plot usually involves the mysterious Monkey Island and its impenetrable secrets. The first game in the series was created as a collaborative effort among Ron Gilbert, Tim Schafer and Dave Grossman. Gilbert worked on the first two games before leaving LucasArts. Grossman and Schafer, who also worked on the first two games, would enjoy success on other games before they both left LucasArts. The rights to "Monkey Island" remained with LucasArts, and the third and fourth games were created without direct involvement from the original writing staff. Dave Grossman was the project leader of the fifth game in the series, and Ron Gilbert was involved with the first design of the game. The Monkey Island series is known for its humor and "player-friendly" qualities. The player cannot permanently place the game in an unwinnable state or cause Guybrush to die without great effort. This "player-friendly" approach was unusual at the time of the first game's release in 1990; prominent adventure-game rivals included Sierra On-Line and Infocom, both of which were known for games with sudden and frequent character deaths or "lock-outs". LucasArts itself used such closed plot paths for its drama games like "" (1989), but preferred the open format for other humor-oriented adventure games such as "Sam & Max Hit the Road" (1993) and "Day of the Tentacle" (1993). 
After "" in 1991, the series went on hiatus until 1997, when it resumed with "The Curse of Monkey Island". After the fourth entry, "Escape from Monkey Island", the franchise again went on hiatus, though numerous rumors persisted about a revival until the announcement of "Tales of Monkey Island" by Telltale Games in early 2009. Much of the music of the games is composed by Michael Land. The score largely consists of reggae, Caribbean and dub-inspired music. The series also tends to break the fourth wall, as several of the characters acknowledge that they are in a video game. Each of the games takes place in the Caribbean around the Golden Age of Piracy, sometime between the 17th and 18th centuries. The islands teem with pirates dressed in outfits that seem to come from films and comic books rather than history, and there are many deliberate anachronisms and references to modern-day popular culture. The main setting of the "Monkey Island" games is the "Tri-Island Area", a fictional archipelago in the Caribbean. Since the first game in the series, "The Secret of Monkey Island", three of the games have visited the eponymous island of Monkey Island, while all have introduced their own set of islands to explore. "Monkey Island 2: LeChuck's Revenge" features four new islands, but does not return to Monkey Island until the final cutscene. "The Curse of Monkey Island" introduces three, and "Escape from Monkey Island", which revisits some of the older islands, features three new islands as well. As such, the "Tri-Island Area" actually comprises a total of 13 visitable islands. "Tales of Monkey Island" takes place in a new area of the Caribbean called the "Gulf of Melange". The main islands of the Tri-Island Area are Mêlée Island, Booty Island, and Plunder Island, governed by Elaine Marley in place of her long-lost grandfather, Horatio Torquemada Marley. 
Elaine moves from island to island at her convenience, though she considers her governor's mansion on Mêlée Island, the capital island of the area, as home. Other islands in the region, considered part of the Tri-Island Area even though not directly governed by Elaine, include: Lucre Island, Jambalaya Island, Scabb Island, Phatt Island, Hook Island, Skull Island, Knuttin Atoll, Blood Island, Spittle Island and Pinchpenny Island. The Gulf of Melange has its own set of islands: Flotsam Island, the Jerkbait Islands (Spinner Cay, Spoon Island, Roe Island), Brillig Island, Boulder Beach, Isle of Ewe, and the Rock of Gelato. Monkey Island and Dinky Island are not officially part of any island area, but nonetheless are central to the series' overall back-story and canon. The games have a wide cast of characters, many of whom reappear throughout the series. Each entry in the series revolves around three main characters: the hero Guybrush Threepwood; his love interest Elaine Marley; and the villain, the ghost pirate LeChuck. Several other characters such as the Voodoo Lady, Stan the salesman, Murray the Demonic Talking Skull and Herman Toothrot make multiple appearances within the series as well. Ron Gilbert's two main inspirations for the story were Disneyland's Pirates of the Caribbean ride and Tim Powers' book "On Stranger Tides". The book was the inspiration for the story and characters, while the ride was the inspiration for the ambiance. "[The POTC Ride] keeps you moving through the adventure," Gilbert said in an interview, "but I've always wished I could get off and wander around, learn more about the characters, and find a way onto those pirate ships. So with The Secret of Monkey Island(TM) I wanted to create a game that had the same flavor, but where you could step off the boat and enter that whole storybook world." 
Several specific references to the ride are made throughout the series, including a puzzle based on the ride's famous Jail Cell/Dog With Keys scene (the dog in the scene is even named Walt). The banjo music in the opening menu of the third game is also very reminiscent of the banjo music at the beginning of the ride. Additional references are made to Disneyland and theme parks in general throughout the series, including Guybrush finding an E ticket. The series debuted in 1990 with "The Secret of Monkey Island" on the Amiga, MS-DOS, Atari ST and Macintosh platforms; the game was later ported to FM Towns and Mega-CD (1993). A remastered version with updated graphics and new voiceovers was released for PlayStation Network, PC Windows, Xbox Live Arcade and OS X. An iPhone version was also released on July 23, 2009. The game starts off with the main character Guybrush Threepwood stating "I want to be a pirate!" To do so, he must prove himself to three old pirate captains. During the perilous pirate trials, he meets the beautiful governor Elaine Marley, with whom he falls in love, unaware that the ghost pirate LeChuck also has his eyes on her. When Elaine is kidnapped, Guybrush procures a crew and ship to track LeChuck down, defeat him and rescue his love. The second game, "Monkey Island 2: LeChuck's Revenge" from 1991, was available for fewer platforms; it was only released for PC MS-DOS, Amiga, Macintosh, and later for FM Towns. A Special Edition version, in a similar style to The Secret of Monkey Island: Special Edition, was released in July 2010 for iPhone, iPad, iPod Touch, Mac, PC, PS3 and Xbox 360. As Guybrush, with a treasure chest in hand, and Elaine hang onto ropes in a void, he tells her the story of the game. He has decided to find the greatest of all treasures, that of Big Whoop. Unwittingly, he helps revive LeChuck, who is now in zombie form. 
Guybrush is eventually captured by his nemesis, but escapes with help from Wally and finds the treasure, only to find himself dangling from a rope, as depicted at the beginning of the game. As Guybrush concludes his story, his rope breaks and he finds himself facing LeChuck, whom he finally defeats using voodoo. The surrealistic ending is open to a number of interpretations. In the manual of The Curse of Monkey Island, it is stated that Guybrush falls victim to a hex implemented by LeChuck. "The Curse of Monkey Island", the third in the series, was exclusively available for PC Windows in 1997 after a 6-year hiatus. The Curse of Monkey Island was released after arguably the biggest technological change in the gaming industry: this new era saw the advent of digital audio, CD-ROM technology, and advancements in graphics. "Monkey Island I" and "II" were originally released on floppy disks with text dialog only. The visuals of the third installment were also an advance over the older games, using a cel animation style. "The Curse of Monkey Island" is the only game in the series to feature this style of animation; subsequent games used 3D polygon animation. Guybrush unwittingly turns Elaine into a gold statue with a cursed ring, and she is soon stolen by pirates. He tracks her down before searching for a ring that can lift the curse. LeChuck appears in a fiery demon form, and is on the heels of Guybrush until a stand-off in LeChuck's amusement park ride, Monkey Mountain. "Escape from Monkey Island", the fourth installment, was released in 2000 for PC Windows, and in 2001 for Macintosh and PlayStation 2. When Guybrush Threepwood and Elaine Marley return from their honeymoon, they find that Elaine has been declared officially dead, her mansion is under destruction order, and her position as governor is up for election. 
Guybrush investigates and unearths a conspiracy by LeChuck and evil real estate developer Ozzie Mandrill to use a voodoo talisman, "The Ultimate Insult," to make all pirates docile in order to turn the Caribbean into a center of tourism. "Tales of Monkey Island" is the fifth installment in the series, co-developed by Telltale Games and LucasArts, with a simultaneous release both on WiiWare and PC. Unlike other installments, "Tales" is an episodic adventure consisting of five different episodes. The first episode was released on July 7, with the last one released on December 8, 2009. During a heated battle with his nemesis, the evil pirate LeChuck, Guybrush unwittingly unleashes an insidious pox that rapidly spreads across the Caribbean, turning pirates into zombie-like monsters. The Voodoo Lady sends Guybrush in search of a legendary sea sponge to stem the epidemic, but this seemingly straightforward quest has surprises around every corner. "Tales of Monkey Island" was also released on PlayStation Network as a bundle for US$20.00. In November 2011, when Telltale Games CEO Dan Connors was asked a question about another season of "Monkey Island", he replied: "I wish we had the rights to do more "Monkey" but we don't. Right now what I gather is LA is focused on building AAA titles internally but honestly we don't talk much these days." There has also been some speculation on Telltale Games forums about a possible sequel to "Tales of Monkey Island", although this was dismissed by Gilbert, who stated, "Basically, when we were working on "Tales", I understood that ... I'm too old for that job now" in an interview with "Edge" in March 2010. The "Tales" team claims that, despite a considerably increasing fanbase since 2009–10, there are no plans to continue the series within the next five-year interval. With the purchase of LucasArts by The Walt Disney Company in 2012, the rights to the franchise are now property of Disney. 
Ron Gilbert was quoted in November 2012 as not being optimistic about the franchise's future, believing that Disney might abandon the franchise in favour of "Pirates of the Caribbean"; however, in December 2012, he was also quoted as wishing to contact Disney, hoping to "make the game he wants to make". In May 2016, Disney Interactive announced that they would cease production on gaming and transition to a licensing model. Gilbert then took to Twitter on 23 May 2016 to express a desire to buy back the franchise, saying "Please sell me my "Monkey Island" and "Maniac Mansion" IP. I'll pay actual money for them." In 2017, fans of the series launched an online petition in support of Ron Gilbert, asking Disney to sell the franchise to him; as of May 2020, the petition had gathered about 26,000 signatures. In 2000, Lucasfilm, together with Industrial Light & Magic, was working on an animated film to be called "Curse of Monkey Island", which would be based on the series. Steve Purcell created the concept paintings and Ted Elliott wrote the story. However, during development the film was cancelled. Concept art from the film was released via Purcell's official blog. In 2007, fan site World of Monkey Island was contacted by an anonymous source who revealed that Ted Elliott, whose involvement had until then remained unknown, had written the script for the film. Elliott would later go on to write the "Pirates of the Caribbean" film series, the scripts for the first two of which appear to have been adapted from the aborted project. The games in the series share several minigames, puzzles, in-jokes, and references. Each game contains a map puzzle, wherein Guybrush must use an unconventional map to find his way through a maze. The first game features a set of dance instructions that point the way through the dense forest of Mêlée Island to find the Island's fabled treasure. In the second game, Guybrush must use a song from a dream sequence to find his way through LeChuck's dungeon. 
The third game is the reverse of this, as the instructions the player receives are traditional directions and need to be used on a theatre light board. The fourth game has a set of directions based on time; the fifth has one based on animal sounds and the direction of the wind, and finally a map to find one of the items needed for the "Feast of the Senses". Each game features a sequence of some sort where players must gather the ingredients to create an item; later in the game, the player has to create the item again, but this time with improvised materials. In 'Secret', Guybrush must brew a voodoo concoction but, lacking ingredients, must improvise with the contents of his inventory, leading to amusing puns. In Monkey Island 2, at two points of the game, Guybrush has to create a voodoo doll: one of Largo LaGrande with legitimate ingredients, and one of LeChuck with improvised ingredients. The same goes for the hangover medicine in 'Curse' and the Ultimate Insult in 'Escape'. 'Tales' starts with Guybrush having to obtain fizzy root beer, then dropping it and having to put together some fizzy root grog instead. Later, 'Tales' requires Guybrush to put together a 'feast of the senses' to increase the size of La Esponja Grande, and later track down a reversed recipe for the 'diet of the senses'. Each game also contains a minigame based on learning and repetition of a sequence in order to become more proficient: insult sword fighting in the first and third games, a number-based "password" as well as a spitting contest in the second, banjo fighting in the third, insult arm wrestling and Monkey Kombat in the fourth, and Pirate Face-Off in the fifth. The first, second and fourth games also feature a puzzle which involves following another character through several locations, a trick also used in "Indiana Jones and the Fate of Atlantis". Some other minigames include naval cannon battles and platform diving. 
The Monkey Island series is full of spoofs, in-jokes, humorous references, and Easter eggs: so many, in fact, that entire websites are dedicated to their detection and listing. Running gags include lines such as "Look behind you, a three-headed monkey!", the introduction "My name is Guybrush Threepwood and I'm a mighty pirate", "How appropriate, you fight like a cow", "I'm selling these fine leather jackets" (a reference to ""), and "That's the second biggest [object] I've ever seen", a catchphrase from the TV series "Get Smart" (and in EMI "That's the second largest... No, that IS the "largest" conch shell I've ever seen!"), and the astounding fact that Guybrush can hold his breath for ten minutes. "The Secret of Monkey Island" poked fun at rival company Sierra's game-over screens. For example, when Guybrush falls off a cliff, a "game over" window appears, but then Guybrush bounces back to the top of the cliff, explaining that he landed in a "rubber tree". Also, when Guybrush stays underwater for more than ten minutes, he dies and a "game over" dialog box identical to that of Sierra's "King's Quest" series appears, giving the player an option to restore a saved game and jokingly stating: "Hope you saved the game!" The "stump joke" made fun of the use of multiple floppy disks for one program, but was not initially recognized by gamers as a joke. In "The Secret of Monkey Island", Guybrush comes across a passageway hidden beneath a stump, at which point a screen says to insert Disk No. 144. Later, in "The Curse of Monkey Island", Guybrush looks through a crack in the ceiling of an underground crypt to find himself peeking out of the same stump. Ron Gilbert has openly admitted that sections of "Monkey Island 2" borrowed extensively from the original Pirates of the Caribbean Disneyland ride, such as the famous "dog holding the keys to the jail-cell". He has also said that he thought the second movie ("") may have 'borrowed' from the "Monkey Island" series. 
The opening menu banjo music in "Curse" is also very reminiscent of the beginning of the Disneyland ride. Each game in the series features cameo appearances by Steve Purcell's characters Sam & Max, who were featured in their own LucasArts adventure game, "Sam & Max Hit the Road". These are replaced by the purple tentacle from yet another LucasArts adventure game, "Day of the Tentacle", in the special edition versions. There are many comic references to various Lucas projects, especially Star Wars. For instance, in "Monkey Island 2" the Voodoo Lady exclaims, "I just felt a sudden disturbance in the Force, as if a tiny, tiny voice just called out in fear," as an homage to Obi-Wan's speech in "Star Wars IV: A New Hope". In "Curse," when the player clicks on the fort that has been damaged by cannon fire from LeChuck's ship, Guybrush replies "That's funny, the damage doesn't look as bad from out here," which is C-3PO's line after he and R2-D2 escape from Princess Leia's ship in the escape pod. When trying to gain access to the Brimstone Beach Club on Plunder Island, Guybrush attempts to use a 'Jedi mind trick' on the Cabaña Boy at the entrance. In Part V of "Curse", LeChuck says to Guybrush during the opening dialogue "Search yer feelings, you know it to be true!", to which Guybrush replies "OH NO! IT CAN'T BE!", lines similar to those between Darth Vader and Luke Skywalker in "Star Wars V: The Empire Strikes Back". This scene is also reenacted at the end of "Monkey Island 2" almost verbatim. In LeChuck's Revenge, the Governor of Phatt Island, Governor Phatt, says in his sleep "Be careful with those snacks, Eugene," in reference to the Pink Floyd song "Careful With That Axe, Eugene." None of the games explicitly reveal the "Secret of Monkey Island" (although creator Ron Gilbert has stated that the secret was not revealed in any of the games, and that the true secret would be revealed if he got to work on the fifth entry in the series). 
LeChuck himself, when asked in the second and third games, refuses to answer the question; Guybrush can eventually prod LeChuck to confess that he does not know what the secret is. Gilbert stated that he never told anyone what the true secret of Monkey Island is. Gilbert stated in a 2004 interview that when the game was originally conceived it was considered "too big", so they split it into three parts. He added that he "knows what the third [part] is" and "how the story's supposed to end," indicating that he had a definite concept of the secret and a conclusive third game. The team behind "Escape from Monkey Island" attempted to resolve the issue by showing that the Giant Monkey Head was actually the control room of a Giant Monkey Robot. The cut-scene in which the revelation was made is called "The Real Secret of Monkey Island".
https://en.wikipedia.org/wiki?curid=19531
Cardiff Arms Park Cardiff Arms Park (), also known as The Arms Park, is situated in the centre of Cardiff, Wales. It is primarily known as a rugby union stadium, but it also has a bowling green. The Arms Park hosted the British Empire and Commonwealth Games in 1958, and hosted four games in the 1991 Rugby World Cup, including the third-place play-off. The Arms Park also hosted the inaugural Heineken Cup Final of 1995–96 and the following year's final in 1996–97. The history of the rugby ground begins with the first stands for spectators appearing in 1881–1882. Originally the Arms Park had a cricket ground to the north and a rugby union stadium to the south. By 1969, the cricket ground had been demolished to make way for the present-day rugby ground to the north and a second rugby stadium to the south, called the National Stadium. The National Stadium, which was used by the Wales national rugby union team, was officially opened on 7 April 1984; however, in 1997 it was demolished to make way for the Millennium Stadium in 1999, which hosted the 1999 Rugby World Cup and became the national stadium of Wales. The rugby ground has remained the home of the semi-professional Cardiff RFC; the professional Cardiff Blues regional rugby union team moved to the Cardiff City Stadium in 2009, but returned three years later. The site is owned by Cardiff Athletic Club and has been host to many sports apart from rugby union and cricket; they include athletics, association football, greyhound racing, tennis, British baseball and boxing. The site also has a bowling green to the north of the rugby ground, which is used by Cardiff Athletic Bowls Club, the bowls section of the Cardiff Athletic Club. The National Stadium also hosted many music concerts including Michael Jackson, Dire Straits, David Bowie, Bon Jovi, The Rolling Stones and U2. The Cardiff Arms Park site was originally called the Great Park, a swampy meadow behind the Cardiff Arms Hotel. 
The hotel was built by Sir Thomas Morgan, during the reign of Charles I. Cardiff Arms Park was named after this hotel. From 1803, the Cardiff Arms Hotel and the Park had become the property of the Bute family. The Arms Park soon became a popular place for sporting events, and by 1848, Cardiff Cricket Club was using the site for its cricket matches. However, by 1878, Cardiff Arms Hotel had been demolished. The 3rd Marquess of Bute stipulated that the ground could only be used for "recreational purposes". At that time Cardiff Arms Park had a cricket ground to the north and a rugby union ground to the south. 1881–2 saw the first stands for spectators; they held 300 spectators and cost £50. The architect was Archibald Leitch, famous for designing Ibrox Stadium and Old Trafford, among others. In 1890, new standing areas were constructed along the entire length of the ground, with additional stands erected in 1896. By 1912, the Cardiff Football Ground, as it was then known, had a new south stand and temporary stands on the north, east and west ends of the ground. The south stand was covered, while the north terrace was initially without a roof. The improvements were partly funded by the Welsh Rugby Union (WRU). The opening ceremony took place on 5 October 1912, with a match between Newport RFC and Cardiff RFC. The new ground was opened by Lord Ninian Crichton-Stuart. This new development increased the ground capacity to 43,000 and much improved facilities at the ground compared to the earlier stands. In 1922 John Crichton-Stuart, 4th Marquess of Bute, sold the entire site to the Cardiff Arms Park Company Limited for £30,000; it was then leased to the Cardiff Athletic Club (cricket and rugby sections) for 99 years at a cost of £200 per annum. In 1934 the cricket pavilion was demolished to make way for the new North Stand, which was built on the rugby union ground at a cost of around £20,000. 
However, in 1941 the new North Stand and part of the west terracing were badly damaged in the Blitz by the Luftwaffe during the Second World War. At a general meeting of the WRU in June 1953, a decision was made "That until such time as the facilities at Swansea were improved, all international matches be played at Cardiff". At the same time, plans were made for a new South Stand which was estimated to cost £60,000; the tender price, however, came out at £90,000, so a compromise was made and it was decided to build a new upper South Stand costing £64,000 instead, with the Cardiff Athletic Club contributing £15,000 and the remainder coming from the WRU. The new South Stand opened in 1956, in time for the 1958 British Empire and Commonwealth Games. This brought the overall capacity of the Arms Park up to 60,000 spectators, of which 12,800 were seated and the remainder standing. The Arms Park hosted the athletics events of the 1958 British Empire and Commonwealth Games, but this event caused damage to the drainage system, so much so that the other rugby unions (England, Scotland and Ireland) complained after the Games about the state of the pitch. On 4 December 1960, due to torrential rain, the River Taff burst its banks and the Arms Park pitch was left under water. The Development Committee was set up to resolve these issues on a permanent basis. They looked at various sites in Cardiff, but all proved to be unsatisfactory. They also could not agree on a solution with the Cardiff Athletic Club, so they purchased about of land at Island Farm in Bridgend, which was previously used as a prisoner-of-war camp. It is best known as the camp where the biggest escape attempt by German prisoners of war in Great Britain was made during the Second World War. Due to problems including transport issues, Glamorgan County Council never gave outline planning permission for the proposals, and by June 1964 the scheme was abandoned. 
At that stage, the cricket ground to the north was still being used by Glamorgan County Cricket Club, and the rugby union ground to the south was used by the national Wales team and Cardiff RFC. On 7 October 1966, the first floodlit game was held at Cardiff Arms Park, a game in which Cardiff RFC beat the Barbarians by 12 points to 8. The National Stadium, which was also known as the Welsh National Rugby Ground, was designed by Osborne V Webb & Partners and built by G A Williamson & Associates of Porthcawl and Andrew Scott & Company of Port Talbot. After agreement from the Cardiff Athletic Club, the freehold of the south ground was transferred solely to the WRU in July 1968. Work could then begin on the new National Stadium. Glamorgan County Cricket Club would move to Sophia Gardens, and the cricket ground to the north would be demolished and a new rugby union stadium built for Cardiff RFC, who would move out of the south ground, allowing the National Stadium to be built for the sole use of the national rugby union team. On 17 October 1970, the new North Stand and the Cardiff RFC ground were completed; the North Stand cost just over £1 million. The West Stand was opened in 1977 and the new East Terrace was completed by March 1980. By the time the final South Stand had been completed and the stadium officially opened on 7 April 1984, the South Stand had cost £4.5 million. At the start of the project, the total cost was estimated at £2.25 million, although by the time it was finished in 1984, the cost had risen to nearly four times that amount. Both stadia had approximately east-west alignment: the rugby ground to the north (Castle Street) end; the National Stadium to the south (Wood Street) end. The original capacity was 65,000 but this had to be reduced in later years to 53,000 for safety reasons. 11,000 of these were on the East Terrace, and the conversion to an all-seater stadium would have reduced the stadium capacity still further to 47,500. 
This capacity would have been much less than Twickenham and the other major rugby venues, and also less than the demand for tickets to major events. A world record crowd of 56,000 for a rugby union club match watched Llanelli RFC beat Neath RFC by 28 points to 13 points in the final of the Schweppes Cup (WRU Challenge Cup) on 7 May 1988. The first evening game to be played under floodlights was held on 4 September 1991 at 8.00 pm, between Wales and France. The last international match to be held at the National Stadium was between Wales and England on 15 March 1997, and the last ever match held at the National Stadium was on 26 April 1997 between Cardiff and Swansea; Cardiff won the SWALEC Cup (WRU Challenge Cup) by 33 points to 26 points. In 1997, just thirteen years after it had opened, the National Stadium was considered too small and lacking the facilities then required, so it was demolished and a new stadium, the Millennium Stadium, was built in its place (completed to a north-south alignment and opened in June 1999). This became the fourth redevelopment of the Cardiff Arms Park site. Although the Millennium Stadium occupies roughly two-thirds of the National Stadium's portion of the Cardiff Arms Park site, it no longer uses the Arms Park name. The official website confuses the issue as well: one part states that "The Millennium Stadium is located on Westgate Street in Cardiff; next to the Cardiff Arms Park", whereas another section specifically refers to the stadium as "The Millennium Stadium, on the Cardiff Arms Park". Since 2016 it has been known as the Principality Stadium. An agreement in principle was reached in December 2015 between the landlord of the stadium site (Cardiff Athletic Club) and its tenant (Cardiff Blues) to give the club a 150-year lease on the stadium site. 
This could see the redevelopment of the Arms Park, including a new 15,000-seater stadium at 90 degrees to the existing stadium, costing between £20 million and £30 million and surrounded by new offices and apartments. More detailed negotiations will begin, with a final approval expected early in 2016. If the final agreement goes ahead, Cardiff Athletic Club would receive an upfront payment of approximately £8 million. As part of the agreement, the bowls section would have to vacate its current site at the Arms Park and move to a new facility. At present Cardiff Blues pay Cardiff Athletic Club rent of around £115,000 per annum; however, this would nearly double to around £200,000. Only the rugby ground and the Cardiff Athletic Bowls Club now use the name Cardiff Arms Park. The rugby ground has two main stands, the North Stand and the South Stand, both of which have terracing below the seating. The other stands in the ground are the Westgate Street end Family Stand, which has rows of seating below executive boxes, plus the club shop, and the River Taff end (the Barry Nelmes Suite, named after Barry Nelmes, the former Cardiff RFC captain), which has 26 executive boxes. The rugby ground has two main entrances: the south entrance, and the Gwyn Nicholls Memorial Gates (Angel Hotel entrance), which were unveiled on 26 December 1949 in honour of the Welsh international rugby player Gwyn Nicholls. The Cardiff Athletic Clubhouse is situated in the corner of the ground between the South Stand and the Westgate Street end. The South Stand of the rugby ground formed a complete unit with the North Stand of the National Stadium. Now the same structure of the South Stand of the rugby ground is physically attached to the North Stand of the Millennium Stadium. This section is known colloquially as Glanmor's Gap, after Glanmor Griffiths, former chair and President of the WRU. 
This came about because the WRU were unable to secure enough funding to include the North Stand in the Millennium Stadium, and the National Lottery Commission would not provide any additional funds for the construction of a new ground for Cardiff RFC. The Millennium Stadium was therefore built with the old reinforced concrete structure of the National Stadium (North Stand) retained and the new steel Millennium Stadium structure built around it. There was doubt about the future of the Arms Park after 2010 following the move of the Cardiff Blues to the Cardiff City Stadium. Cardiff RFC Ltd, the company that runs Cardiff Blues and Cardiff RFC, still has a 15-year lease on the Arms Park, but talks are underway to release the rugby club from the terms of the lease, to enable the Millennium Stadium to be redeveloped with a new North Stand and adjoining convention centre. However, the lease still carries the original requirement, stipulated by the Bute family, that the land be used only for "recreational purposes". So although the Arms Park site is a prime piece of real estate in the centre of Cardiff, this restriction means that it may be difficult to sell the land to property developers. The estimated value of the whole Arms Park site could be at least £25 million, although with the "recreational use" requirement its actual value could be a lot less than that figure. A decision by Cardiff Athletic Club on the future of the Arms Park has yet to be made. In 2011, the Cardiff Blues regional rugby union team made a £6 million bid for the Arms Park, and the WRU later made an increased bid of £10 million for the site. Both bids were rejected by the trustees of the Cardiff Athletic Club. However, in 2012 Cardiff Blues announced that they would be making a permanent return to Cardiff Arms Park following declining attendances at the Cardiff City Stadium. 
In the 2013 off-season, the pitch at the Arms Park was replaced with an artificial FieldTurf surface, intended to prevent adverse weather from affecting matches. Cardiff Arms Park is best known as a rugby union stadium, but Cardiff Athletic Bowls Club (CABC) was established in 1923, and ever since then the club has used the Arms Park as its bowling green. The bowls club is a section of the Cardiff Athletic Club and shares many of the facilities of the Cardiff Arms Park athletics centre. The Les Spence Memorial Gates were erected in memory of the former Cardiff player, who captained the team in 1936-37. He was born in 1907, became chairman of the Cardiff club, and was president of the WRU between 1973 and 1974. He was awarded an MBE and died in 1988. The Club has produced two Welsh international bowlers: Mr. C Standfast in 1937 and Mr. B Hawkins, who represented Wales in the 1982 World Pairs and captained Wales in 1982 and 1984. The Riverside Football Club, founded in 1899, played some matches at the Arms Park until 1910, when they moved to Ninian Park; they later became Cardiff City Football Club. On 31 May 1989, Wales played its first international game against West Germany at the National Stadium in a World Cup qualifying match, which ended goalless. It was also the first international football match in Great Britain to be watched by an all-seated crowd. The adjoining Cardiff Rugby Club ground has also been used for association football. In July 1995, Ton Pentre played two Intertoto Cup games there, against Heerenveen (Netherlands) and Uniao Leiria (Portugal), as their own ground was not suitable. The Heerenveen game, the first soccer match ever played there, kicked off at 6pm on Saturday 1 July 1995 and resulted in the Dutch side winning 7-0. The Wales U-21 team also played a home game there in the late 1990s. In 1958, the British Empire and Commonwealth Games were held in Cardiff. 
The event was (to date) the biggest sporting event ever held in Wales; however, it would not have been possible without the financial support given by the WRU and the Cardiff Athletic Club. Both the opening and closing ceremonies took place at Cardiff Arms Park, along with all the track and field events, on what had been the greyhound track. It turned out to be South Africa's last appearance at the Games until 1994, as the country withdrew from the Commonwealth Games in 1961. Baseball was established early on in Cardiff, and one of the earliest games held at the Arms Park was on 18 May 1918: a charity match in aid of the Prisoner of War Fund between Welsh and American teams of the U.S. Beaufort & U.S. Jupiter. British baseball matches also took place regularly at the Arms Park, which hosted the annual England versus Wales international game every four years. The games are now usually held at Roath Park. The first boxing contest held at the Arms Park was on 24 January 1914, when Bombardier Billy Wells beat Gaston Pigot by a knockout in the first round of a 20-round contest. Further boxing contests were held on 14 June 1943, 12 August 1944, 4 October 1951 and 10 September 1952. Around 25,000 spectators watched international boxing on 1 October 1993 at the National Stadium, with a World Boxing Council (WBC) Heavyweight title bout between Lennox Lewis and Frank Bruno. It was the first time that two British-born boxers had fought for the world heavyweight title. Lewis beat Bruno by a technical knockout in the 7th round, in what was called the "Battle of Britain". On 30 September 1995, Steve Robinson, the World Boxing Organization (WBO) World Featherweight Champion, lost to Prince Naseem Hamed at the rugby ground in eight rounds. In 1819 Cardiff Cricket Club was formed, and by 1848 they had moved to their new home at the Arms Park. 
Glamorgan County Cricket Club, at the time not a first-class county, played their first match at the ground in June 1869 against Monmouthshire Cricket Club. The county club played their first County Championship match on the ground in 1921, competing there every season (except while first-class cricket was suspended during the Second World War) until their final match on the ground against Somerset in August 1966. Cardiff Cricket Club played their final game at the ground against Lydney Cricket Club on 17 September 1966. Both Cardiff Cricket Club and Glamorgan then moved to a new ground at Sophia Gardens, on the opposite bank of the River Taff to the Arms Park, following work on the creation of the national rugby stadium. The first first-class cricket match to be held on the ground was between West of England and East of England, on 20 June 1910. In all, more than 240 first-class matches were played on the ground, all but two involving Glamorgan as the home team. Only one List A cricket match was played on the ground, Glamorgan's Gillette Cup fixture against Somerset on 22 May 1963. Greyhound racing took place at the Arms Park for fifty years, from 1927 until 1977. In 1876, Cardiff RFC was formed, and soon afterwards it too used the park. On 12 April 1884, the first international match was played at the ground between Wales and Ireland, when 5,000 people watched Wales beat Ireland by two tries and a drop goal to nil. The Arms Park rugby ground became the permanent home of the Wales national rugby union team in 1964. The National Stadium was later also home to the WRU Challenge Cup, from 1972 until the match held at the Stadium, at a much reduced capacity, on 26 April 1997 between Cardiff RFC and Swansea RFC. Cardiff RFC won the match 33–26. 
The National Stadium is best known as the venue for what is considered to be "the greatest try ever scored", by Gareth Edwards for the Barbarians against New Zealand, in what is also called "the greatest match ever played", on 27 January 1973. The final result was a 23–11 win for the Barbarians, a score that translates to 27–13 under today's scoring system. The scorers were: Barbarians: Tries: Gareth Edwards, Fergus Slattery, John Bevan, J P R Williams; Conversions: Phil Bennett (2); Penalty: Phil Bennett. All Blacks: Tries: Grant Batty (2); Penalty: Joseph Karam. The National Stadium hosted four games in the 1991 Rugby World Cup, including the third-place play-off. The National Stadium was also host to the inaugural Heineken Cup final of 1995–96, when Toulouse beat Cardiff RFC 21–18 after extra time, in front of 21,800 spectators. The following final, in 1996–97, was also held at the National Stadium, this time between Brive and Leicester Tigers. Brive won the match 28–9, in front of a crowd of 41,664. In 2008, the rugby ground hosted all the games in Pool A of the 2008 IRB Junior World Championship and also the semi-final on 18 June 2008, in which England beat South Africa 26–18. Until February 2012, it had been assumed that the last professional rugby union game to take place at the Arms Park was on 17 May 2009, when Edinburgh beat the Cardiff Blues 36–14 in a Celtic League match during the 2008–09 season. However, on Tuesday, 7 February 2012, it was confirmed that Cardiff Blues would face Connacht at the Arms Park on Friday, 10 February 2012. The Pro12 League game resulted in a 22–15 win for the Cardiff Blues, with an attendance of 8,000. The following Tuesday, it was announced that the match against Ulster on Friday, 17 February, would also be at the Arms Park; it resulted in a 21–14 Blues win, with an attendance of 8,600. 
The agreement signed during 2009 had tied Cardiff Blues to a 20-year contract to play a maximum of 18 games per season at Cardiff City Stadium for a set fee rather than a per-match fee. But on 23 February, it was announced that the two Welsh 'derbies' against the Scarlets and the Ospreys would be played at Cardiff City Stadium rather than the Arms Park, because Cardiff Blues anticipated that the attendance figures would far exceed the Arms Park's maximum capacity of 9,000. On 8 May 2012, it was announced that Cardiff Blues would be returning to the Arms Park on a permanent basis after just three years at the Cardiff City Stadium. On 23 May 2014, the rugby ground hosted the final of the 2013–14 Amlin Challenge Cup, in which Northampton Saints beat Bath 30–16. South Wales Scorpions played a Rugby League Championship 1 match against London Skolars at Cardiff Arms Park on Sunday, 27 July 2014, and on Sunday, 10 May 2015, the Scorpions took on North Wales Crusaders at the same ground. The 2015 European Cup match between France and Wales was held at Cardiff Arms Park on Friday, 30 October 2015 at 18:30 GMT. The highest attendance for a rugby league game at the Arms Park was recorded on 8 June 1996, during the first Super League season, when 6,708 saw St. Helens defeat the Sheffield Eagles 43–32. The St Helens team at the time contained the Welsh players Anthony Sullivan, Karle Hammond and Keiron Cunningham. Tennis courts were laid out in the Arms Park for Cardiff Tennis Club until the club moved to Sophia Gardens in 1967. In 2003, the club amalgamated with Lisvane Tennis Club to form Lisvane (CAC) Tennis Club, which is still a section of Cardiff Athletic Club (CAC). Major music concerts were also held at the National Stadium from 1987 until 1996; acts included Tina Turner, U2, Michael Jackson, The Rolling Stones, Dire Straits, Bon Jovi and R.E.M. 
The last music concert was held on 14 July 1996. Jehovah's Witnesses held their annual conventions at the National Stadium. The National Stadium was known primarily as the venue for massed voices singing such hymns as "Cwm Rhondda", "Calon Lân", "Men of Harlech" and "Hen Wlad Fy Nhadau" ("Land of my Fathers" – the national anthem of Wales). The legendary atmosphere, including the singing of the crowd, was said to be worth at least a try or a goal to the home nation. This tradition of singing has now passed on to the Millennium Stadium. The Arms Park has its own choir, the Cardiff Arms Park Male Choir. It was formed in 1966 as the Cardiff Athletic Club Male Voice Choir, adopted its present name in 2000, and today performs internationally with a schedule of concerts and tours.
https://en.wikipedia.org/wiki?curid=19532
Mikhail Kalashnikov Mikhail Timofeyevich Kalashnikov (10 November 1919 – 23 December 2013) was a Russian Lieutenant-General, inventor, military engineer, writer and small arms designer. He is most famous for developing the AK-47 assault rifle and its improvements, the AKM and AK-74, as well as the PK machine gun and RPK light machine gun. Kalashnikov was, according to himself, a self-taught tinkerer who combined innate mechanical skills with the study of weaponry to design arms that achieved battlefield ubiquity. Even though Kalashnikov felt sorrow at the weapons' uncontrolled distribution, he took pride in his inventions and in their reputation for reliability, emphasizing that his rifle is "a weapon of defense" and "not a weapon for offense". Kalashnikov was born in Kurya, Altai Governorate, Russian SFSR, now Altai Krai, Russia, as the seventeenth of the 19 children of Aleksandra Frolovna Kalashnikova (née Kaverina) and Timofey Aleksandrovich Kalashnikov, who were peasants. In 1930, his father and most of his family had their property confiscated and were deported to the village of Nizhnyaya Mokhovaya, Tomsk Oblast. In his youth, Mikhail suffered from various illnesses and was on the verge of death at age six. He was attracted to all kinds of machinery, but also wrote poetry, dreaming of becoming a poet. He went on to write six books and continued to write poetry all his life. After the deportation to Tomsk Oblast, his family had to combine farming with hunting, and Mikhail frequently used his father's rifle in his teens. Kalashnikov continued hunting into his 90s. After completing seventh grade, Mikhail, with his stepfather's permission, left his family and returned to Kurya, hitchhiking for nearly 1,000 km. In Kurya he found work as a mechanic at a tractor station and developed a passion for weaponry. In 1938, he was conscripted into the Red Army. 
Because of his small size and engineering skills he was assigned as a tank mechanic, and later became a tank commander. While training, he made his first inventions, which concerned not only tanks but also small arms, and he was personally awarded a wrist watch by Georgy Zhukov. Kalashnikov served on the T-34s of the 24th Tank Regiment, 108th Tank Division, stationed in Stryi, before the regiment retreated after the Battle of Brody in June 1941. He was wounded in combat in the Battle of Bryansk in October 1941 and hospitalised until April 1942. During his last few months in hospital, he overheard fellow soldiers bemoaning their rifles, which were plagued with reliability issues such as jamming. Prompted by these complaints, and seeing the drawbacks of the standard infantry weapons of the time, he decided after his discharge to construct a new rifle for the Soviet military. During this time Kalashnikov began designing a submachine gun. Although his first submachine gun design was not accepted into service, his talent as a designer was noticed. From 1942 onwards Kalashnikov was assigned to the Central Scientific-developmental Firing Range for Rifle Firearms of the Chief Artillery Directorate of the Red Army. In 1944, he designed a gas-operated carbine for the new 7.62×39mm cartridge. This weapon, influenced by the M1 Garand rifle, lost out to the new Simonov carbine, which would eventually be adopted as the SKS; but it became the basis for his entry in an assault rifle competition in 1946. His winning entry, the "Mikhtim" (so named by taking the first letters of his name and patronymic, Mikhail Timofeyevich), became the prototype for the development of a family of prototype rifles. This process culminated in 1947, when he designed the AK-47 (standing for "Avtomat Kalashnikova model 1947"). 
In 1949, the AK-47 became the standard issue assault rifle of the Soviet Army and went on to become Kalashnikov's most famous invention. While developing his first assault rifles, Kalashnikov competed with two much more experienced weapon designers, Vasily Degtyaryov and Georgy Shpagin, who both accepted the superiority of the AK-47. Kalashnikov named Alexandr Zaitsev and Vladimir Deikin as his major collaborators during those years. From 1949, Mikhail Kalashnikov lived and worked in Izhevsk, Udmurtia. He held the degree of Doctor of Technical Sciences (1971) and was a member of 16 academies. Over the course of his career, he evolved the basic design into a weapons family. The AKM, first brought into service in 1959, was lighter and cheaper to manufacture owing to the use of a stamped steel receiver (in place of the AK-47's milled steel receiver) and contained detail improvements such as a re-shaped stock and muzzle compensator. From the AKM he developed a squad automatic weapon variant, known as the RPK. He also developed the general-purpose PK machine gun, which used the more powerful 7.62×54R cartridge of the Mosin–Nagant rifle. It is cartridge belt-fed, not magazine-fed, as it is intended to provide heavy sustained fire from a tripod mount or to be used as a light, bipod-mounted weapon. The common characteristics of all these weapons are simple design, ruggedness and ease of maintenance in all operating conditions. Approximately 100 million AK-47 assault rifles had been produced by 2009, about half of them counterfeit, manufactured at a rate of about a million per year. Izhmash, the official manufacturer of the AK-47 in Russia, did not patent the weapon until 1997, and in 2006 accounted for only 10% of the world's production. The rifle became famous for its reliability in the most extreme climatic conditions, functioning as well in the desert as in the tundra. 
It is in official use by the militaries of 55 nations, and has been so influential in military struggle that it has been used on national flags. Prominent examples include the flags of Mozambique and Hezbollah, as well as the East Timorese and Zimbabwean coats of arms. Kalashnikov himself claimed that he was always motivated by service to his country rather than money, and that he made no direct profit from weapon production. He did, however, own 30% of Marken Marketing International (MMI), a German company run by his grandson Igor. The company revamps trademarks and produces merchandise carrying the Kalashnikov name, such as vodka, umbrellas and knives. One of the items is a knife named for the AK-74. During a visit to the United States in the early 2000s, Kalashnikov was invited to tour a Virginia holding site for the forthcoming American Wartime Museum. A former tank commander, Kalashnikov was visibly moved at the sight of his old tank in action, painted with his name in Cyrillic. After a prolonged illness, Kalashnikov was hospitalized on 17 November 2013 in a medical facility in Izhevsk, the capital of Udmurtia, where he lived. He died on 23 December 2013, at age 94, from a gastric hemorrhage. In January 2014 a letter that Kalashnikov had written six months before his death to the leader of the Russian Orthodox Church, Patriarch Kirill, was published by the Russian daily newspaper "Izvestia". In the letter he stated that he was suffering "spiritual pain" over whether he was responsible for the deaths caused by the weapons he created. Translated from the published letter, he states: "I keep having the same unsolved question: if my rifle claimed people's lives, then can it be that I... a Christian and an Orthodox believer, was to blame for their deaths?" The patriarch wrote back, thanked Kalashnikov, and said that he "was an example of patriotism and a correct attitude toward the country". 
Kirill added, regarding the designer's responsibility for deaths caused by the rifle, that "the Church has a well-defined position: when the weapon is for defense of the Motherland, the Church supports its creators and the military which use it." He became one of the first people buried in the Federal Military Memorial Cemetery. Kalashnikov's father, Timofey Aleksandrovich Kalashnikov (1883–1930), was a peasant. He completed two grades of parochial school and could read and write. In 1901 he married Aleksandra Frolovna Kaverina (1884–1957), who remained illiterate throughout her life. They had 19 children, of whom only eight survived to adulthood; Kalashnikov was born 17th, and was close to death at age six. In 1930, the government labeled Timofey Aleksandrovich a kulak, confiscated his property, and deported him to Siberia along with most of the family. The eldest three siblings, daughters Agasha (b. 1905) and Anna and son Victor, were already married by 1930, and remained in Kurya. After her husband's death in 1930, Aleksandra Frolovna married Efrem Kosach, a widower who had three children of his own. Mikhail Kalashnikov married twice, the first time to Ekaterina Danilovna Astakhova of Altai Krai, and the second time to Ekaterina Viktorovna Moiseyeva (1921–1977). She was an engineer and did much technical drawing work for her husband. They had four children: daughters Nelli (b. 1942), Elena (b. 1948) and Natalya (1953–1983), and a son, Victor (1942–2018). Victor also became a prominent small arms designer. The title to the AK-47 trademark belonged to Mikhail Kalashnikov's family until 4 April 2016, when the Kalashnikov Concern won a lawsuit to invalidate the registration of the trademark. During his career, Kalashnikov designed about 150 models of small arms. "Incorporates information from the corresponding article in the Russian Wikipedia"
https://en.wikipedia.org/wiki?curid=19535
MUD A MUD (originally multi-user dungeon, with later variants multi-user dimension and multi-user domain) is a multiplayer real-time virtual world, usually text-based. MUDs combine elements of role-playing games, hack and slash, player versus player, interactive fiction, and online chat. Players can read or view descriptions of rooms, objects, other players, non-player characters, and actions performed in the virtual world. Players typically interact with each other and the world by typing commands that resemble a natural language. Traditional MUDs implement a role-playing video game set in a fantasy world populated by fictional races and monsters, with players choosing classes in order to gain specific skills or powers. The objective of this sort of game is to slay monsters, explore a fantasy world, complete quests, go on adventures, create a story by roleplaying, and advance the created character. Many MUDs were fashioned around the dice-rolling rules of the "Dungeons & Dragons" series of games. Such fantasy settings for MUDs are common, while many others have science fiction settings or are based on popular books, movies, animations, periods of history, worlds populated by anthropomorphic animals, and so on. Not all MUDs are games; some are designed for educational purposes, while others are purely chat environments, and the flexible nature of many MUD servers leads to their occasional use in areas ranging from computer science research to geoinformatics to medical informatics to analytical chemistry. MUDs have attracted the interest of academic scholars from many fields, including communications, sociology, law, and economics. At one time, there was interest from the United States military in using them for teleconferencing. Most MUDs are run as hobbies and are free to players; some may accept donations or allow players to purchase virtual items, while others charge a monthly subscription fee. 
MUDs can be accessed via standard telnet clients, or specialized MUD clients which are designed to improve the user experience. Numerous games are listed at various web portals, such as The Mud Connector. The history of modern massively multiplayer online role-playing games (MMORPGs) like "EverQuest" and "Ultima Online", and related virtual world genres such as the social virtual worlds exemplified by "Second Life", can be traced directly back to the MUD genre. Indeed, before the invention of the term MMORPG, games of this style were simply called graphical MUDs. A number of influential MMORPG designers began as MUD developers and/or players (such as Raph Koster, Brad McQuaid, Matt Firor, and Brian Green) or were involved with early MUDs (like Mark Jacobs and J. Todd Coleman). "Colossal Cave Adventure", created in 1975 by Will Crowther on a DEC PDP-10 computer, was the first widely used adventure game. The game was significantly expanded in 1976 by Don Woods. Also called "Adventure", it contained many D&D features and references, including a computer-controlled dungeon master. Numerous dungeon crawlers were created on the PLATO system at the University of Illinois and other American universities that used PLATO, beginning in 1975. Among them were "pedit5", "oubliette", "moria", "avathar", "krozair", "dungeon", "dnd", "crypt", and "drygulch". By 1978–79, these games were heavily in use on various PLATO systems, and exhibited a marked increase in sophistication in terms of 3D graphics, storytelling, user involvement, team play, and depth of objects and monsters in the dungeons. Inspired by "Adventure", a group of students at MIT in the summer of 1977 wrote a game for the PDP-10 minicomputer; called "Zork", it became quite popular on the ARPANET. "Zork" was ported, under the filename DUNGEN ("dungeon"), to FORTRAN by a programmer working at DEC in 1978. 
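As noted above, a plain telnet client is enough to reach a MUD; much of what a specialized MUD client adds is protocol housekeeping before display. A minimal sketch of one such chore, stripping telnet IAC negotiation sequences (RFC 854) out of a server's byte stream; the function name is invented here, not taken from any particular client:

```python
# Sketch (assumptions: function name is ours): remove telnet command
# sequences from a raw MUD server stream, keeping only the text payload.
IAC, SB, SE = 255, 250, 240          # Interpret As Command, subneg. start/end
NEGOTIATE = (251, 252, 253, 254)     # WILL, WONT, DO, DONT

def strip_telnet(data: bytes) -> bytes:
    """Return `data` with all IAC command sequences removed."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b != IAC:
            out.append(b)            # ordinary text byte
            i += 1
        elif i + 1 < len(data) and data[i + 1] == IAC:
            out.append(IAC)          # IAC IAC escapes a literal 255 byte
            i += 2
        elif i + 1 < len(data) and data[i + 1] == SB:
            i += 2                   # subnegotiation: skip until IAC SE
            while i + 1 < len(data) and not (data[i] == IAC and data[i + 1] == SE):
                i += 1
            i += 2
        elif i + 2 < len(data) and data[i + 1] in NEGOTIATE:
            i += 3                   # IAC WILL/WONT/DO/DONT <option>
        else:
            i += 2                   # any other two-byte IAC command
    return bytes(out)
```

For example, a server greeting prefixed with IAC WILL ECHO (`b"\xff\xfb\x01Hello"`) comes out as plain `b"Hello"`.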
In 1978 Roy Trubshaw, a student at the University of Essex in the UK, started working on a multi-user adventure game in the MACRO-10 assembly language for a DEC PDP-10. He named the game "MUD" ("Multi-User Dungeon"), in tribute to the "Dungeon" variant of "Zork", which Trubshaw had greatly enjoyed playing. Trubshaw converted MUD to BCPL (the predecessor of C), before handing over development to Richard Bartle, a fellow student at the University of Essex, in 1980. The game revolved around gaining points till one achieved the Wizard rank, giving the character immortality and special powers over mortals. "MUD", better known as "Essex MUD" and "MUD1" in later years, ran on the University of Essex network, and became more widely accessible when a guest account was set up that allowed users on JANET (a British academic X.25 computer network) to connect on weekends and between the hours of 2 AM and 8 AM on weekdays. It became the first Internet multiplayer online role-playing game in 1980, when the university connected its internal network to ARPANet. The original "MUD" game was closed down in late 1987, reportedly under pressure from CompuServe, to whom Richard Bartle had licensed the game. This left "MIST", a derivative of "MUD1" with similar gameplay, as the only remaining MUD running on the University of Essex network, becoming one of the first of its kind to attain broad popularity. "MIST" ran until the machine that hosted it, a PDP-10, was superseded in early 1991. 1985 saw the origin of a number of projects inspired by the original "MUD". These included "Gods" by Ben Laurie, a "MUD1" clone that included online creation in its endgame, and which became a commercial MUD in 1988; and "MirrorWorld", a tolkienesque MUD started by Pip Cordrey who gathered some people on a BBS he ran to create a "MUD1" clone that would run on a home computer. 
Neil Newell, an avid "MUD1" player, started programming his own MUD called "SHADES" during Christmas 1985, because "MUD1" was closed down during the holidays. Starting out as a hobby, "SHADES" became accessible in the UK as a commercial MUD via British Telecom's Prestel and Micronet networks. A scandal on "SHADES" led to the closure of Micronet, as described in Indra Sinha's net-memoir, "The Cybergypsies". At the same time, Compunet started a project named "Multi-User Galaxy Game" as a science fiction alternative to "MUD1", a copy of which they were running on their system at the time. When one of the two programmers left CompuNet, the remaining programmer, Alan Lenton, decided to rewrite the game from scratch and named it "Federation II" (at the time no "Federation I" existed). The MUD was officially launched in 1989. "Federation II" was later picked up by AOL, where it became known simply as "Federation: Adult Space Fantasy". "Federation" later left AOL to run on its own after AOL began offering unlimited service. In 1978, around the same time Roy Trubshaw wrote "MUD", Alan E. Klietz wrote a game called "Milieu" using Multi-Pascal on a CDC Cyber 6600 series mainframe which was operated by the Minnesota Educational Computing Consortium. Klietz ported "Milieu" to an IBM XT in 1983, naming the new port "Scepter of Goth". "Scepter" supported 10 to 16 simultaneous users, typically connecting in by modem. It was the first commercial MUD; franchises were sold to a number of locations. "Scepter" was first owned and run by GamBit (of Minneapolis, Minnesota), founded by Bob Alberti. GamBit's assets were later sold to Interplay Productions. In 1984, Mark Peterson wrote "The Realm of Angmar", beginning as a clone of "Scepter of Goth". In 1994, Peterson rewrote "The Realm of Angmar", adapting it to MS-DOS (the basis for many dial-in BBS systems), and renamed it "Swords of Chaos". 
For a few years this was a very popular form of MUD, hosted on a number of BBS systems, until widespread Internet access eliminated most BBSes. In 1984, Mark Jacobs created and deployed a commercial gaming site, "Gamers World". The site featured two games coded and designed by Jacobs, a MUD called "Aradath" (which was later renamed, upgraded and ported to GEnie as "Dragon's Gate") and a 4X science-fiction game called "Galaxy", which was also ported to GEnie. At its peak, the site had about 100 monthly subscribers to both "Aradath" and "Galaxy". GEnie was shut down in the late 1990s, although "Dragon's Gate" was later brought to AOL before it was finally released on its own. "Dragon's Gate" was closed on February 10, 2007. In the summer of 1980, University of Virginia classmates John Taylor and Kelton Flinn wrote "Dungeons of Kesmai", a six-player game inspired by "Dungeons & Dragons" which used roguelike ASCII graphics. They founded the Kesmai company in 1982, and in 1985 an enhanced version of "Dungeons of Kesmai", "Island of Kesmai", was launched on CompuServe. Later, its 2-D graphical descendant "Legends of Kesmai" was launched on AOL in 1996. The games were retired commercially in 2000. The popularity of MUDs of the University of Essex tradition escalated in the United States during the late 1980s, when affordable personal computers with 300 to 2400 bit/s modems enabled role-players to log into multi-line BBSs and online service providers such as CompuServe. During this time it was sometimes said that MUD stood for "Multi Undergraduate Destroyer", due to their popularity among college students and the amount of time devoted to them. "Avalon: The Legend Lives" was published by Yehuda Simmons in 1989. It was the first persistent game world of its kind, without the traditional hourly resets and points-based puzzle-solving progression systems. 
Avalon introduced equilibrium and balance (cooldowns), skill-based player versus player combat, and concepts such as player-run governments and player housing. The first popular MUD codebase was AberMUD, written in 1987 by Alan Cox and named after the University of Wales, Aberystwyth. Alan Cox had played the original University of Essex MUD, and the gameplay was heavily influenced by it. AberMUD was initially written in B for a Honeywell L66 mainframe under GCOS3/TSS. In late 1988 it was ported to C, which enabled it to spread rapidly to many Unix platforms upon its release in 1989. AberMUD's popularity resulted in several inspired works, the most notable of which were TinyMUD, LPMud, and DikuMUD. "Monster" was a multi-user adventure game created by Richard Skrenta for the VAX and written in VMS Pascal. It was publicly released in November 1988. "Monster" was disk-based, and modifications to the game were immediate. "Monster" pioneered the approach of allowing players to build the game world, setting new puzzles or creating dungeons for other players to explore. "Monster", which comprised about 60,000 lines of code, had many features that appeared designed to allow "Colossal Cave Adventure" to be recreated within it. Though there were never many network-accessible "Monster" servers, it inspired James Aspnes to create a stripped-down version of "Monster" which he called TinyMUD. TinyMUD, written in C and released in late 1989, spawned a number of descendants, including TinyMUCK and TinyMUSH. TinyMUCK version 2 contained a full programming language named MUF (Multi-User Forth), while MUSH greatly expanded the command interface. To distance itself from the combat-oriented traditional MUDs, it was said that the "D" in TinyMUD stood for Multi-User "Domain" or "Dimension"; this, along with the eventual popularity of acronyms other than MUD (such as MUCK, MUSH, MUSE, and so on) for this kind of server, led to the eventual adoption of the term MU* to refer to the TinyMUD family. 
UberMUD, UnterMUD, and MOO were inspired by TinyMUD but are not direct descendants. The first version of Hourglass was written by Yehuda Simmons and later Daniel James for "Avalon: The Legend Lives", which debuted in 1989 at the last of the London MUD mega-meets, aptly named "Adventure '89", and was initially hosted on the IOWA system. Initially written in ARM assembly language on the Acorn Archimedes 440, in 1994 it made the leap from the venerable Archimedes to Debian Linux on the PC and later Red Hat, where, other than shifting to Ubuntu, it has remained ever since. An early version of Hourglass was also ported to the PC, named Vortex, by Ben Maizels in 1992. Although written specifically for "Avalon: The Legend Lives", it went on to spawn a number of games, including "Avalon: The First Age", which ran from 1999 to 2014. The now-defunct 1996 "Age of Thrones" and, notably, "Achaea, Dreams of Divine Lands" also started life in Vortex, the latter before moving to its own Rapture engine. Hourglass continues to be developed as of 2016 and "Avalon: The Legend Lives" currently has 2,901,325 written words and 2,248,374 lines of game code (with 2,417,900 instructions). The original game came in at 1 KB in 1989, compared to 102 GB in January 2016. In 1989, LPMud was developed by Lars Pensjö (hence the LP in LPMud). Pensjö had been an avid player of TinyMUD and AberMUD and wanted to create a world with the flexibility of TinyMUD and the gameplay of AberMUD. In order to accomplish this he wrote what is nowadays known as a virtual machine, which he called the LPMud driver, that ran the C-like LPC programming language used to create the game world. Pensjö's interest in LPMud eventually waned and development was carried on by others such as Jörn "Amylaar" Rennecke, Felix "Dworkin" Croes, Tim "Beek" Hollebeek and Lars Düning. During the early 1990s, LPMud was one of the most popular MUD codebases. 
Descendants of the original LPMud include MudOS, DGD, SWLPC, FluffOS, and the Pike programming language, the latter the work of long-time LPMud developer Fredrik "Profezzorn" Hübinette. In 1990, the release of DikuMUD, which was inspired by AberMUD, led to a virtual explosion of hack and slash MUDs based upon its code. DikuMUD inspired numerous derivative codebases, including CircleMUD, Merc, ROM, SMAUG, and GodWars. The original Diku team comprised Sebastian Hammer, Tom Madsen, Katja Nyboe, Michael Seifert, and Hans Henrik Staerfeldt. DikuMUD had a key influence on the early evolution of the MMORPG genre, with "EverQuest" (created by avid DikuMUD player Brad McQuaid) displaying such Diku-like gameplay that Verant developers were made to issue a sworn statement that no actual DikuMUD code was incorporated. In 1987, David Whatley, having previously played "Scepter of Goth" and "Island of Kesmai", founded Simutronics with Tom and Susan Zelinski. In the same year they demonstrated a prototype of "GemStone" to GEnie. After a short-lived instance of "GemStone II", "GemStone III" was officially launched in February 1990. "GemStone III" became available on AOL in September 1995, followed by the release of "DragonRealms" in February 1996. By the end of 1997 "GemStone III" and "DragonRealms" had become the first and second most played games on AOL. The typical MUD will describe to the player the room or area they are standing in, listing the objects, players and non-player characters (NPCs) in the area, as well as all of the exits. To carry out a task the player would enter a text command such as take apple or attack dragon. Movement around the game environment is generally accomplished by entering the direction (or an abbreviation of it) in which the player wishes to move, for example typing north or just n would cause the player to exit the current area via the path to the north. 
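The room-description-and-command interface described above can be illustrated with a minimal sketch of a MUD-style parser. The room contents, verbs, and response strings here are invented for illustration and are not taken from any particular codebase.

```python
# Minimal sketch of a MUD-style command loop: a room description,
# simple verb parsing, and single-letter direction abbreviations.
# All room data and wording are illustrative assumptions.
ROOM = {
    "desc": "A small clearing. An apple lies on the ground.",
    "items": ["apple"],
    "exits": {"north": "forest path", "east": "river bank"},
}

def look(room):
    """Describe the room and list its exits, as a MUD would on entry."""
    exits = ", ".join(room["exits"])
    return f'{room["desc"]} Exits: {exits}.'

def handle(command, room):
    """Dispatch a player's text command against the current room."""
    words = command.lower().split()
    # Expand single-letter direction abbreviations (n -> north, etc.).
    abbrev = {"n": "north", "s": "south", "e": "east", "w": "west"}
    verb = abbrev.get(words[0], words[0])
    if verb in room["exits"]:
        return f'You walk {verb} to the {room["exits"][verb]}.'
    if verb == "take" and len(words) > 1 and words[1] in room["items"]:
        room["items"].remove(words[1])
        return f"You take the {words[1]}."
    if verb == "look":
        return look(room)
    return "I don't understand that."
```

For instance, `handle("take apple", ROOM)` returns "You take the apple.", and either `handle("north", ROOM)` or the abbreviated `handle("n", ROOM)` moves the player out via the northern exit, mirroring the typed commands described above.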
MUD clients often contain functions which make certain tasks within a MUD easier to carry out, for example command buttons that can be clicked to move in a particular direction or to pick up an item. There are also tools that add hotkey-activated macros to telnet and MUD clients, giving the player the ability to move around the MUD using, for example, the arrow keys on the keyboard. While there have been many variations in overall focus, gameplay and features in MUDs, some distinct sub-groups have formed that can be used to help categorize different game mechanics, game genres and non-game uses. Perhaps the most common approach to game design in MUDs is to loosely emulate the structure of a "Dungeons & Dragons" campaign focused more on fighting and advancement than role-playing. When these MUDs restrict player-killing in favor of player versus environment conflict and questing, they are labeled hack and slash MUDs. This may be considered particularly appropriate since, due to the room-based nature of traditional MUDs, ranged combat is typically difficult to implement, resulting in most MUDs equipping characters mainly with close-combat weapons. This style of game was also historically referred to within the MUD genre as "adventure games", but video gaming as a whole has developed a meaning of "adventure game" that is greatly at odds with this usage. Most MUDs restrict player versus player combat, often abbreviated as PK (Player Killing). This is accomplished through hard-coded restrictions and various forms of social intervention. MUDs without these restrictions are commonly known as PK MUDs. Taking this a step further are MUDs devoted "solely" to this sort of conflict, called pure PK MUDs, the first of which was "Genocide" in 1992. "Genocide"'s ideas were influential in the evolution of player versus player online gaming. 
Roleplaying MUDs, generally abbreviated as RP MUDs, encourage or enforce that players act out the role of their playing characters at all times. Some RP MUDs provide an immersive gaming environment, while others only provide a virtual world with no game elements. MUDs where roleplay is enforced and the game world is heavily computer-modeled are sometimes known as roleplay intensive MUDs, or RPIMUDs. In many cases, role-playing MUDs attempt to differentiate themselves from hack and slash types, by dropping the "MUD" name entirely, and instead using MUX (Multi-User Experience) or MUSH (Multi-User Shared Hallucination). Social MUDs de-emphasize game elements in favor of an environment designed primarily for socializing. They are differentiated from talkers by retaining elements beyond online chat, typically online creation as a community activity and some element of role-playing. Often such MUDs have broadly defined contingents of socializers and roleplayers. Server software in the TinyMUD family, or MU*, is traditionally used to implement social MUDs. A less-known MUD variant is the talker, a variety of online chat environment typically based on server software like ew-too or NUTS. Most of the early Internet talkers were LPMuds with the majority of the complex game machinery stripped away, leaving just the communication commands. The first Internet talker was "Cat Chat" in 1990. Taking advantage of the flexibility of MUD server software, some MUDs are designed for educational purposes rather than gaming or chat. "MicroMUSE" is considered by some to have been the first educational MUD, but it can be argued that its evolution into this role was not complete until 1994, which would make the first of many educational MOOs, "Diversity University" in 1993, also the first educational MUD. The MUD medium lends itself naturally to constructionist learning pedagogical approaches. 
The Mud Institute (TMI) was an LPMud opened in February 1992 as a gathering place for people interested in developing LPMud and teaching LPC after it became clear that Lars Pensjö had lost interest in the project. TMI focused on both the LPMud driver and library; the driver evolved into MudOS, while the TMI Mudlib was never officially released but was influential in the development of other libraries. A graphical MUD is a MUD that uses computer graphics to represent parts of the virtual world and its visitors. A prominent early graphical MUD was "Habitat", written by Randy Farmer and Chip Morningstar for Lucasfilm in 1985. Graphical MUDs require players to download a special client and the game's artwork. They range from simply enhancing the user interface to simulating 3D worlds with visual spatial relationships and customized avatar appearances. Games such as "Meridian 59", "EverQuest", "Ultima Online" and "Dark Age of Camelot" were routinely called graphical MUDs in their earlier years. "RuneScape" was originally intended to be a "text-based" MUD, but graphics were added very early in development. However, with the increase in computing power and Internet connectivity during the late 1990s, and the shift of online gaming to the mass market, the term "graphical MUD" fell out of favor, being replaced by MMORPG (massively multiplayer online role-playing game), a term coined by Richard Garriott in 1997. Within a MUD's technical infrastructure, a mudlib (a concatenation of "MUD library") defines the rules of the in-game world. Examples of mudlibs include Ain Soph Mudlib, CDlib, Discworld Mudlib, Lima Mudlib, LPUniversity Mudlib, MorgenGrauen Mudlib, Nightmare Mudlib, and TMI Mudlib. MUD history has been preserved primarily through community sites and blogs rather than through mainstream sources with journalistic repute. Since the late 1990s, a website called The Mud Connector has served as a central and curated repository for active MUDs. 
In 1995, "The Independent" reported that over 60,000 people regularly played about 600 MUDs, up from 170 MUDs three years prior. "The Independent" also noted distinct patterns of socialization within MUD communities. Seraphina Brennan of "Massively" wrote that the MUD community was "in decline" as of 2009. Sherry Turkle developed a theory that the constant use (and in many cases, overuse) of MUDs allows users to develop different personalities in their environments. She uses examples, dating back to the text-based MUDs of the mid-1990s, showing college students who simultaneously live different lives through characters in separate MUDs, up to three at a time, all while doing schoolwork. The students claimed that it was a way to "shut off" their own lives for a while and become part of another reality. Turkle claims that this could present a psychological problem of identity for today's youths. "A Story About A Tree" is a short essay written by Raph Koster regarding the death of a "LegendMUD" player named Karyn, raising the subject of inter-human relationships in virtual worlds. Observations of MUD-play show styles of play that can be roughly categorized. Achievers focus on concrete measurements of success such as experience points, levels, and wealth; Explorers investigate every nook and cranny of the game, and evaluate different game mechanical options; Socializers devote most of their energy to interacting with other players; and then there are Killers who focus on interacting negatively with other players, if permitted, killing the other characters or otherwise thwarting their play. Few players play only one way, or play one way all the time; most exhibit a diverse style. According to Richard Bartle, "People go there as part of a hero's journey—a means of self-discovery". Research has suggested that various factors combine in MUDs to provide users with a sense of "presence" rather than simply communication. 
As a noun, the word MUD is variously written MUD, Mud, and mud, depending on speaker and context. It is also used as a verb, with to mud meaning to play or interact with a MUD and mudding referring to the act of doing so. A mudder is, naturally, one who MUDs. Compound words and portmanteaux such as mudlist, mudsex, and mudflation are also regularly coined. Puns on the "wet dirt" meaning of "mud" are endemic, as with, for example, the names of the ROM (Rivers of MUD), MUCK, MUSH, and CoffeeMUD codebases and the MUD "Muddy Waters".
https://en.wikipedia.org/wiki?curid=19537
MUSH In multiplayer online games, a MUSH (a backronymed variation on MUD most often expanded as Multi-User Shared Hallucination, though Multi-User Shared Hack, Habitat, and Holodeck are also observed) is a text-based online social medium to which multiple users are connected at the same time. MUSHes are often used for online social intercourse and role-playing games, although the first forms of MUSH do not appear to have been coded specifically to implement gaming activity. MUSH software was originally derived from MUDs; today's two major MUSH variants are descended from TinyMUD, which was fundamentally a social game. MUSH has forked over the years and there are now different varieties with different features, although most have strong similarities, and one who is fluent in coding one variety can switch to coding for another with only a little effort. The source code for most widely used MUSH servers is open source and available from its current maintainers. A primary feature of MUSH codebases that tends to distinguish them from other multi-user environments is the ability, by default, of any player to extend the world by creating new rooms or objects and specifying their behavior in the MUSH's internal scripting language. Another is the default lack of much player or administrative hierarchy imposed by the server itself. Over the years, both of these traits have become less pronounced, as many server administrators choose to eliminate or heavily restrict player-controlled building, and several games have custom-coded systems to restore more of a hierarchical system. The programming language for MUSH, usually referred to as "MUSHcode" or "softcode" (to distinguish it from "hardcode", the language in which the MUSH server itself is written), was developed by Larry Foard. TinyMUSH started life as a set of enhancements to the original TinyMUD code. "MUSHcode" is similar in syntax to Lisp. Most customization is done in "softcode" rather than by directly modifying the hardcode. 
Traditionally, roleplay consists of a series of "poses". Each character makes a "pose", that is, writes a description of speech, actions, etc. which the character performs. Special commands allow players to print OOC (out of character) messages, distinguished by a prefixed string from IC (in character) action. This medium borrows traits from both improvisational stage acting and writing. Roleplaying is one of the primary activities of MUSHes, along with socializing. There is nothing in the code base that restricts a new MUSH from being a traditional hack-and-slash MUD-style game. However, the earliest uses of MUSH servers were for roleplaying and socializing, and these early trends have largely governed their descendants. A large number of roleplaying MUSHes have custom combat systems and other tools coded by their administrators in order to further encourage roleplay. However, as roleplay is the primary goal, many MUSHes have varying ideas of how these programs are used. All MUSH servers provide a flag that, when set on a player, bestows the ability to view and modify nearly everything in the game's database. Such players are usually called Wizards, and typically form the basis for the MUSH administration. Although MUSH servers do not impose strong administrative hierarchies, most MUSH games establish additional levels of management besides Wizards. Some do so on a purely organizational basis, naming some Wizards "Head Wizards" or "Junior Wizards" or assigning sphere of responsibility to Wizards, despite the technical equality of their abilities in the game world. Others provide finer-grained control over capabilities that can be assigned to players so that some players can be granted the ability to view, but not modify, the entire game world, or to perform limited modifications. Other levels of power can include added control over one's own character, or fewer limits on resources. 
PennMUSH, TinyMUSH, and TinyMUX include the "Royalty" flag, which gives a player the powers to do most things that don't involve modifying the database. RhostMUSH has a wide array of staff flags that differ in many ways from its sister servers. Maintainers and developers of MUSH servers have traditionally shared ideas with one another, so most MUSH servers include concepts or code developed originally in other servers. There is particular interest in ensuring that common MUSHcode features work similarly across servers. PennMUSH, TinyMUSH, TinyMUX and RhostMUSH are all open-source MUSH servers. Some enthusiasts may exclude one or more of the above on the basis of distribution method, name, or parentage, but all are free-form MUSH servers. Differences in the software tend to focus more on the administrative or softcode side (slightly different function syntax; or different functions altogether; more, or fewer, administrative controls). The set of commands that players use to interface with the game are essentially standard amongst servers bearing the appellation "MUSH".
https://en.wikipedia.org/wiki?curid=19542
Microevolution Microevolution is the change in allele frequencies that occurs over time within a population. This change is due to four different processes: mutation, selection (natural and artificial), gene flow and genetic drift. This change happens over a relatively short (in evolutionary terms) amount of time compared to the changes termed macroevolution. Population genetics is the branch of biology that provides the mathematical structure for the study of the process of microevolution. Ecological genetics concerns itself with observing microevolution in the wild. Typically, observable instances of evolution are examples of microevolution; for example, bacterial strains that have antibiotic resistance. Microevolution may lead to speciation, which provides the raw material for macroevolution. Macroevolution is guided by sorting of interspecific variation ("species selection"), as opposed to sorting of intraspecific variation in microevolution. Species selection may occur as (a) effect-macroevolution, where organism-level traits (aggregate traits) affect speciation and extinction rates, and (b) strict-sense species selection, where species-level traits (e.g. geographical range) affect speciation and extinction rates. Macroevolution does not produce evolutionary novelties, but it determines their proliferation within the clades in which they evolved, and it adds species-level traits as non-organismic factors of sorting to this process. Mutations are changes in the DNA sequence of a cell's genome and are caused by radiation, viruses, transposons and mutagenic chemicals, as well as errors that occur during meiosis or DNA replication. Errors are introduced particularly often in the process of DNA replication, in the polymerization of the second strand. These errors can also be induced by the organism itself, by cellular processes such as hypermutation. 
Mutations can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the proofreading ability of DNA polymerases. (Without proofreading error rates are a thousandfold higher; because many viruses rely on DNA and RNA polymerases that lack proofreading ability, they experience higher mutation rates.) Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well, and cells use DNA repair mechanisms to repair mismatches and breaks in DNA—nevertheless, the repair sometimes fails to return the DNA to its original sequence. In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment making some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions or deletions of entire regions, or the accidental exchanging of whole parts between different chromosomes (called translocation). Mutation can result in several different types of change in DNA sequences; these can either have no effect, alter the product of a gene, or prevent the gene from functioning. Studies in the fly "Drosophila melanogaster" suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. 
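The error rates quoted above can be put in rough perspective with a back-of-the-envelope calculation. The genome size used here is an illustrative assumption (roughly the size of the human genome) and is not a figure from the text.

```python
# Expected uncorrected copying errors per genome replication at the
# error rates quoted above. The genome size is an illustrative
# assumption (~human genome); the rates come from the text.
genome_size = 3e9                          # base pairs (assumed)
rate_proofread = 1 / 100e6                 # lower bound: 1 error per 100 million bases
rate_no_proofread = 1000 * rate_proofread  # "a thousandfold higher" without proofreading

errors_proofread = genome_size * rate_proofread        # ~30 errors per replication
errors_no_proofread = genome_size * rate_no_proofread  # ~30,000 errors per replication
```

Even at the proofread rate, a genome of this size accumulates a few dozen candidate errors per replication, which is why the DNA repair mechanisms discussed below matter.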
Due to the damaging effects that mutations can have on cells, organisms have evolved mechanisms such as DNA repair to remove mutations. Therefore, the optimal mutation rate for a species is a trade-off between costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage since these viruses will evolve constantly and rapidly, and thus evade the defensive responses of e.g. the human immune system. Mutations can involve large sections of DNA becoming duplicated, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. Most genes belong to larger families of genes of shared ancestry. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions. Here, domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for color vision and one for night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA. "Selection" is the process by which heritable traits that make it more likely for an organism to survive and successfully reproduce become more common in a population over successive generations. 
It is sometimes valuable to distinguish between naturally occurring selection (natural selection) and selection that is a manifestation of choices made by humans (artificial selection). This distinction is rather diffuse. Natural selection is nevertheless the dominant part of selection. The natural genetic variation within a population of organisms means that some individuals will survive more successfully than others in their current environment. Factors which affect reproductive success are also important, an issue which Charles Darwin developed in his ideas on sexual selection. Natural selection acts on the phenotype, or the observable characteristics of an organism, but the genetic (heritable) basis of any phenotype which gives a reproductive advantage will become more common in a population (see allele frequency). Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in speciation (the emergence of new species). Natural selection is one of the cornerstones of modern biology. The term was introduced by Darwin in his groundbreaking 1859 book "On the Origin of Species", in which natural selection was described by analogy to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favored for reproduction. The concept of natural selection was originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, nothing was known of modern genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical and molecular genetics is termed the "modern evolutionary synthesis". Natural selection remains the primary explanation for adaptive evolution. "Genetic drift" is the change in the relative frequency in which a gene variant (allele) occurs in a population due to random sampling. 
That is, the alleles in the offspring in the population are a random sample of those in the parents. Chance also plays a role in determining whether a given individual survives and reproduces. A population's allele frequency is the fraction or percentage of its gene copies that share a particular form. Genetic drift is an evolutionary process which leads to changes in allele frequencies over time. It may cause gene variants to disappear completely, and thereby reduce genetic variability. In contrast to natural selection, which makes gene variants more common or less common depending on their reproductive success, the changes due to genetic drift are not driven by environmental or adaptive pressures, and may be beneficial, neutral, or detrimental to reproductive success. The effect of genetic drift is larger in small populations, and smaller in large populations. Vigorous debates rage among scientists over the relative importance of genetic drift compared with natural selection. Ronald Fisher held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. In 1968 Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most of the changes in the genetic material are caused by genetic drift. The predictions of neutral theory, based on genetic drift, do not fit recent data on whole genomes well: these data suggest that the frequencies of neutral alleles change primarily due to selection at linked sites, rather than due to genetic drift by means of sampling error. Gene flow is the exchange of genes between populations, which are usually of the same species. Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. 
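Returning to genetic drift, its dependence on population size can be illustrated with a simple Wright-Fisher-style simulation, a standard population-genetics model sketched here under the assumptions of a single biallelic locus, no selection, and a diploid population (the population sizes and run lengths are illustrative):

```python
import random

def drift(pop_size, p0=0.5, generations=50, seed=0):
    """One run of Wright-Fisher drift at a single biallelic locus:
    each generation, the 2N gene copies of the next generation are
    a random sample drawn at the current allele frequency."""
    rng = random.Random(seed)
    copies = 2 * pop_size  # diploid population: 2N gene copies
    p = p0
    for _ in range(generations):
        # Random sampling of gene copies -- the entire mechanism of drift.
        p = sum(rng.random() < p for _ in range(copies)) / copies
    return p

# Compare how far frequencies wander from the start in small vs. large
# populations, averaged over repeated runs.
small = [drift(pop_size=10, seed=s) for s in range(30)]
large = [drift(pop_size=500, seed=s) for s in range(30)]
mean_dev_small = sum(abs(p - 0.5) for p in small) / len(small)
mean_dev_large = sum(abs(p - 0.5) for p in large) / len(large)
```

In typical runs the small-population frequencies scatter widely, with many runs fixing one allele entirely (a frequency of 0 or 1), while the large-population runs stay near the starting frequency of 0.5, matching the statement above that drift is stronger in small populations.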
Migration into or out of a population can change allele frequencies, as well as introducing genetic variation into a population. Immigration may add new genetic material to the established gene pool of a population. Conversely, emigration may remove genetic material. As barriers to reproduction between two diverging populations are required for the populations to become new species, gene flow may slow this process by spreading genetic differences between the populations. Gene flow is hindered by mountain ranges, oceans and deserts or even man-made structures such as the Great Wall of China, which has hindered the flow of plant genes. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile, due to the two different sets of chromosomes being unable to pair up during meiosis. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridization in creating new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example. Hybridization is, however, an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals. Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploid hybrids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations. 
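The effect of migration on allele frequencies described at the start of this passage can be sketched with the standard continent-island model from population genetics; the starting frequencies and migration rate below are illustrative assumptions, not figures from the text.

```python
# Continent-island model of gene flow: each generation, a fraction m of
# the island's gene copies are replaced by migrants carrying the source
# population's allele frequency. All parameter values are illustrative.
def island_frequency(p_island, p_source, m, generations):
    for _ in range(generations):
        p_island = (1 - m) * p_island + m * p_source
    return p_island

# Even modest one-way gene flow pulls the island toward the source
# population, eroding the between-population differences that
# speciation requires.
p = island_frequency(p_island=0.1, p_source=0.9, m=0.05, generations=100)
```

With these numbers, after 100 generations the island's allele frequency has converged to within about half a percent of the source population's 0.9, illustrating how gene flow homogenizes diverging populations.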
Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance: once one bacterium acquires resistance genes, it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast "Saccharomyces cerevisiae" and the adzuki bean beetle "Callosobruchus chinensis" may also have occurred. An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria. "Gene flow" is the transfer of alleles from one population to another. Migration into or out of a population may be responsible for a marked change in allele frequencies. Immigration may also result in the addition of new genetic variants to the established gene pool of a particular species or population. There are a number of factors that affect the rate of gene flow between different populations. One of the most significant factors is mobility, as greater mobility of an individual tends to give it greater migratory potential. Animals tend to be more mobile than plants, although pollen and seeds may be carried great distances by animals or wind. Maintained gene flow between two populations can also lead to a combination of the two gene pools, reducing the genetic variation between the two groups. It is for this reason that gene flow strongly acts against speciation, by recombining the gene pools of the groups and thus repairing the developing differences in genetic variation that would have led to full speciation and creation of daughter species. 
For example, if a species of grass grows on both sides of a highway, pollen is likely to be transported from one side to the other and vice versa. If this pollen is able to fertilise the plant where it ends up and produce viable offspring, then the alleles in the pollen have effectively been able to move from the population on one side of the highway to the other. The term "microevolution" was first used by botanist Robert Greenleaf Leavitt in the journal "Botanical Gazette" in 1909, addressing what he called the "mystery" of how formlessness gives rise to form. However, Leavitt was using the term to describe what we would now call developmental biology; it was not until Russian entomologist Yuri Filipchenko used the terms "macroevolution" and "microevolution" in 1927 in his German-language work "Variabilität und Variation" that it attained its modern usage. The term was later brought into the English-speaking world by Filipchenko's student Theodosius Dobzhansky in his book "Genetics and the Origin of Species" (1937). In young Earth creationism and baraminology a central tenet is that evolution can explain diversity in a limited number of created kinds which can interbreed (which they call "microevolution") while the formation of new "kinds" (which they call "macroevolution") is impossible. This acceptance of "microevolution" only within a "kind" is also typical of old Earth creationism. Scientific organizations such as the American Association for the Advancement of Science describe microevolution as small-scale change within species, and macroevolution as the formation of new species, but otherwise not being different from microevolution. In macroevolution, an accumulation of microevolutionary changes leads to speciation. The main difference between the two processes is that one occurs within a few generations, whilst the other takes place over thousands of years (i.e. a quantitative difference). 
Essentially they describe the same process; although evolution beyond the species level results in beginning and ending generations which could not interbreed, the intermediate generations could. Opponents of creationism argue that changes in the number of chromosomes can be accounted for by intermediate stages in which a single chromosome divides in generational stages, or multiple chromosomes fuse, and cite the chromosome difference between humans and the other great apes as an example. Creationists insist that since the actual divergence between the other great apes and humans was not observed, the evidence is circumstantial. In his authoritative textbook "Evolutionary Biology", biologist Douglas Futuyma describes the fundamental similarity between macroevolution and microevolution. Contrary to the claims of some antievolution proponents, evolution of life forms beyond the species level (i.e. speciation) has indeed been observed and documented by scientists on numerous occasions. In creation science, creationists accept speciation as occurring within a "created kind" or "baramin", but object to what they call "third level-macroevolution" of a new genus or higher rank in taxonomy. There is ambiguity in these ideas as to where to draw the line on "species", "created kinds", and which events and lineages fall within the rubric of microevolution or macroevolution.
https://en.wikipedia.org/wiki?curid=19544
MySQL MySQL ("My S-Q-L") is an open-source relational database management system (RDBMS). Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language. A relational database organizes data into one or more data tables in which data types may be related to each other; these relations help structure the data. SQL is a language programmers use to create, modify and extract data from the relational database, as well as control user access to the database. In addition to relational databases and SQL, an RDBMS like MySQL works with an operating system to implement a relational database in a computer's storage system, manages users, allows for network access and facilitates testing database integrity and creation of backups. MySQL is free and open-source software under the terms of the GNU General Public License, and is also available under a variety of proprietary licenses. MySQL was owned and sponsored by the Swedish company MySQL AB, which was bought by Sun Microsystems (now Oracle Corporation). In 2010, when Oracle acquired Sun, Widenius forked the open-source MySQL project to create MariaDB. MySQL has stand-alone clients that allow users to interact directly with a MySQL database using SQL, but more often MySQL is used with other programs to implement applications that need relational database capability. MySQL is a component of the LAMP web application software stack (and others), which is an acronym for "Linux, Apache, MySQL, Perl/PHP/Python". MySQL is used by many database-driven web applications, including Drupal, Joomla, phpBB, and WordPress. MySQL is also used by many popular websites, including Facebook, Flickr, MediaWiki, Twitter, and YouTube. MySQL is written in C and C++. Its SQL parser is written in yacc, but it uses a home-brewed lexical analyzer. 
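The create/modify/extract operations described above can be sketched with a few SQL statements. Since a running MySQL server cannot be assumed here, this sketch uses Python's built-in in-memory SQLite engine as a stand-in; the table and column names are invented for illustration, and the statements shown express the same relational concepts MySQL implements (parameter-placeholder syntax varies by driver).

```python
import sqlite3

# In-memory SQLite database used as a stand-in for a SQL server.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Create: define a table with typed, related columns.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# Modify: insert and update rows.
cur.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("alice", "alice@example.org"))
cur.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("bob", "bob@example.org"))
cur.execute("UPDATE users SET email = ? WHERE name = ?", ("bob@example.net", "bob"))

# Extract: query the data back out.
rows = cur.execute("SELECT name, email FROM users ORDER BY name").fetchall()
# rows == [('alice', 'alice@example.org'), ('bob', 'bob@example.net')]
con.close()
```

Against a real MySQL server the same statements would be issued through a client library (e.g. a DB-API driver) rather than `sqlite3`.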
MySQL works on many system platforms, including AIX, BSDi, FreeBSD, HP-UX, eComStation, i5/OS, IRIX, Linux, macOS, Microsoft Windows, NetBSD, Novell NetWare, OpenBSD, OpenSolaris, OS/2 Warp, QNX, Oracle Solaris, Symbian, SunOS, SCO OpenServer, SCO UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS also exists. The MySQL server software itself and the client libraries use dual-licensing distribution. They are offered under GPL version 2, or a proprietary license. Support can be obtained from the official manual. Free support is additionally available in various IRC channels and forums. Oracle offers paid support via its MySQL Enterprise products, which differ in the scope of services and in price. Additionally, a number of third party organisations exist to provide support and services. MySQL has received positive reviews, and reviewers noted that it "performs extremely well in the average case" and that the "developer interfaces are there, and the documentation (not to mention feedback in the real world via Web sites and the like) is very, very good". It has also been tested to be a "fast, stable and true multi-user, multi-threaded SQL database server". MySQL was created by a Swedish company, MySQL AB, founded by David Axmark, Allan Larsson and Michael "Monty" Widenius. Original development of MySQL by Widenius and Axmark began in 1994. The first version of MySQL appeared on 23 May 1995. It was initially created for personal usage from mSQL, based on the low-level ISAM storage format, which the creators considered too slow and inflexible. They created a new SQL interface, while keeping the same API as mSQL. By keeping the API consistent with the mSQL system, many developers were able to use MySQL instead of the (proprietarily licensed) mSQL antecedent. MySQL development then passed several additional milestones. Work on version 6 stopped after the Sun Microsystems acquisition; the MySQL Cluster product uses version 7. 
The decision was made to jump to version 8 as the next major version number. On 15 June 2001, NuSphere sued MySQL AB, TcX DataKonsult AB and its original authors Michael ("Monty") Widenius and David Axmark in U.S. District Court in Boston for "breach of contract, tortious interference with third party contracts and relationships and unfair competition". In 2002, MySQL AB sued Progress NuSphere for copyright and trademark infringement in United States district court. NuSphere had allegedly violated MySQL AB's copyright by linking MySQL's GPL'ed code with NuSphere Gemini table without being in compliance with the license. After a preliminary hearing before Judge Patti Saris on 27 February 2002, the parties entered settlement talks and eventually settled. After the hearing, FSF commented that "Judge Saris made clear that she sees the GNU GPL to be an enforceable and binding license." In October 2005, Oracle Corporation acquired Innobase OY, the Finnish company that developed the third-party InnoDB storage engine that allows MySQL to provide such functionality as transactions and foreign keys. After the acquisition, an Oracle press release mentioned that the contracts that make the company's software available to MySQL AB would be due for renewal (and presumably renegotiation) some time in 2006. During the MySQL Users Conference in April 2006, MySQL AB issued a press release that confirmed that MySQL AB and Innobase OY agreed to a "multi-year" extension of their licensing agreement. In February 2006, Oracle Corporation acquired Sleepycat Software, makers of the Berkeley DB, a database engine providing the basis for another MySQL storage engine. This had little effect, as Berkeley DB was not widely used, and was dropped (due to lack of use) in MySQL 5.1.12, a pre-GA release of MySQL 5.1 released in October 2006. In January 2008, Sun Microsystems bought MySQL AB for $1 billion. 
In April 2009, Oracle Corporation entered into an agreement to purchase Sun Microsystems, then owners of MySQL copyright and trademark. Sun's board of directors unanimously approved the deal. It was also approved by Sun's shareholders, and by the U.S. government on 20 August 2009. On 14 December 2009, Oracle pledged to continue to enhance MySQL as it had done for the previous four years. A movement against Oracle's acquisition of MySQL AB, to "Save MySQL" from Oracle, was started by one of the MySQL AB founders, Monty Widenius. The petition of 50,000+ developers and users called upon the European Commission to block approval of the acquisition. At the same time, some Free Software opinion leaders (including Pamela Jones of Groklaw, Jan Wildeboer and Carlo Piana, who also acted as co-counsel in the merger regulation procedure) advocated for the unconditional approval of the merger. As part of the negotiations with the European Commission, Oracle committed that the MySQL server would continue until at least 2015 to use the dual-licensing strategy long used by MySQL AB, with proprietary and GPL versions available. The EU's antitrust regulators had been "pressuring it to divest MySQL as a condition for approval of the merger". But, as revealed by WikiLeaks, the US Department of Justice, at the request of Oracle, pressured the EU to approve the merger unconditionally. The European Commission eventually unconditionally approved Oracle's acquisition of MySQL AB on 21 January 2010. In January 2010, before Oracle's acquisition of MySQL AB, Monty Widenius started a GPL-only fork, MariaDB. MariaDB is based on the same code base as MySQL server 5.5 and aims to maintain compatibility with Oracle-provided versions. MySQL is offered under two different editions: the open source MySQL Community Server and the proprietary Enterprise Server. 
MySQL Enterprise Server is differentiated by a series of proprietary extensions which install as server plugins, but otherwise shares the version numbering system and is built from the same code base. A number of major features became available in MySQL 5.6. The developers release minor updates of the MySQL Server approximately every two months. The sources can be obtained from MySQL's website or from MySQL's GitHub repository, both under the GPL license. When using some storage engines other than the default InnoDB, MySQL does not comply with the full SQL standard for some of the implemented functionality, including foreign key references. Check constraints are parsed but ignored by all storage engines before MySQL version 8.0.15. Up until MySQL 5.7, triggers are limited to one per action and timing, meaning that at most one trigger can be defined to be executed after an operation, and one before, on the same table. No triggers can be defined on views. Some of MySQL's built-in time functions will return incorrect results after 03:14:07 UTC on 19 January 2038 (the year 2038 problem, a consequence of storing timestamps as signed 32-bit integers). An attempt to solve the problem was later assigned to the internal development queue. MySQL can be built and installed manually from source code, but it is more commonly installed from a binary package unless special customizations are required. On most Linux distributions, the package management system can download and install MySQL with minimal effort, though further configuration is often required to adjust security and optimization settings. Though MySQL began as a low-end alternative to more powerful proprietary databases, it has gradually evolved to support higher-scale needs as well. It is still most commonly used in small to medium scale single-server deployments, either as a component in a LAMP-based web application or as a standalone database server. Much of MySQL's appeal originates in its relative simplicity and ease of use, which is enabled by an ecosystem of open source tools such as phpMyAdmin. 
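The 2038 cutoff mentioned above follows from counting seconds since the Unix epoch in a signed 32-bit integer, which overflows at 2^31 − 1 seconds. A short check of where that limit lands (an illustration in Python, not MySQL code):

```python
from datetime import datetime, timedelta, timezone

# A signed 32-bit integer counting seconds since the Unix epoch
# (1970-01-01 00:00:00 UTC) can hold at most 2**31 - 1 seconds.
INT32_MAX = 2**31 - 1  # 2147483647

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=INT32_MAX)

print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second past this instant, a 32-bit signed counter wraps to a negative value, which is why affected functions misbehave after exactly 03:14:07 UTC on 19 January 2038.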
In the medium range, MySQL can be scaled by deploying it on more powerful hardware, such as a multi-processor server with gigabytes of memory. There are, however, limits to how far performance can scale on a single server ('scaling up'), so on larger scales, multi-server MySQL ('scaling out') deployments are required to provide improved performance and reliability. A typical high-end configuration can include a powerful master database which handles data write operations and is replicated to multiple slaves that handle all read operations. The master server continually pushes binlog events to connected slaves, so in the event of failure a slave can be promoted to become the new master, minimizing downtime. Further improvements in performance can be achieved by caching the results from database queries in memory using memcached, or breaking down a database into smaller chunks called shards which can be spread across a number of distributed server clusters. Oracle MySQL offers a high-availability solution built on a mix of tools, including MySQL Router and MySQL Shell, which are based on the open-source Group Replication plugin. MariaDB offers a similar set of products. MySQL can also be run on cloud computing platforms such as Microsoft Azure, Amazon EC2, and Oracle Cloud Infrastructure, under several common deployment models. A graphical user interface (GUI) is a type of interface that allows users to interact with electronic devices or programs through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. GUIs are easier to learn than command-line interfaces (CLIs), which require commands to be typed on the keyboard. Third-party proprietary and free graphical administration applications (or "front ends") are available that integrate with MySQL and enable users to work with database structure and data visually. 
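Sharding as described above amounts to routing each row key to one of N shards with a stable hash, so every client independently agrees on where a given row lives. A minimal sketch of that routing step (the helper name is hypothetical, not a MySQL API):

```python
import hashlib

# Hash-based shard routing: a stable hash of the row key picks which
# shard (server cluster) stores that row, spreading data evenly
# across n_shards. sha256 is used so the mapping does not depend on
# Python's per-process hash randomization.
def shard_for(key: str, n_shards: int) -> int:
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

# The mapping is deterministic, so every client routes a given key
# to the same shard:
shard = shard_for("user:1042", 4)
assert shard == shard_for("user:1042", 4)
```

Real deployments layer more on top (e.g. consistent hashing or lookup tables so that adding a shard does not remap most keys), but the routing idea is the same.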
A number of well-known front ends exist. A command-line interface is a means of interacting with a computer program where the user issues commands to the program by typing in successive lines of text (command lines). MySQL ships with many command-line tools, the main interface among them being the client. MySQL Utilities is a set of utilities designed to perform common maintenance and administrative tasks. Originally included as part of the MySQL Workbench, the utilities are a stand-alone download available from Oracle. Percona Toolkit is a cross-platform toolkit for MySQL, developed in Perl. Percona Toolkit can be used to prove replication is working correctly, fix corrupted data, automate repetitive tasks, and speed up servers. Percona Toolkit is included with several Linux distributions such as CentOS and Debian, and packages are available for Fedora and Ubuntu as well. Percona Toolkit was originally developed as Maatkit, but as of late 2011, Maatkit is no longer developed. MySQL Shell is a tool for interactive use and administration of the MySQL database. It supports JavaScript, Python or SQL modes and it can be used for administration and access purposes. Many programming languages with language-specific APIs include libraries for accessing MySQL databases. These include MySQL Connector/Net for integration with Microsoft's Visual Studio (languages such as C# and VB are most commonly used) and the JDBC driver for Java. In addition, an ODBC interface called MySQL Connector/ODBC allows additional programming languages that support the ODBC interface to communicate with a MySQL database, such as ASP or ColdFusion. The HTSQL URL-based query method also ships with a MySQL adapter, allowing direct interaction between a MySQL database and any web client via structured URLs. Other drivers exist for languages like Python or Node.js. A variety of MySQL forks exist.
https://en.wikipedia.org/wiki?curid=19545
Modernism Modernism is both a philosophical movement and an art movement that arose from broad transformations in Western society during the late 19th and early 20th centuries. The movement reflected a desire for the creation of new forms of art, religion, philosophy, and social organization which reflected the newly emerging industrial world, including features such as urbanization, new technologies, and war. Artists attempted to depart from traditional forms of art, which they considered outdated or obsolete. The poet Ezra Pound's 1934 injunction to "Make it new!" was the touchstone of the movement's approach. Modernist innovations included abstract art, the stream-of-consciousness novel, montage cinema, atonal and twelve-tone music, and divisionist painting. Modernism explicitly rejected the ideology of realism and made use of the works of the past by the employment of reprise, incorporation, rewriting, recapitulation, revision and parody. Modernism also rejected the certainty of Enlightenment thinking, and many modernists also rejected religious belief. A notable characteristic of modernism is self-consciousness concerning artistic and social traditions, which often led to experimentation with form, along with the use of techniques that drew attention to the processes and materials used in creating works of art. While some scholars see modernism continuing into the 21st century, others see it evolving into late modernism or high modernism. Postmodernism is a departure from modernism and rejects its basic assumptions. Some commentators define modernism as a mode of thinking—one or more philosophically defined characteristics, like self-consciousness or self-reference, that run across all the novelties in the arts and the disciplines. 
More common, especially in the West, are those who see it as a socially progressive trend of thought that affirms the power of human beings to create, improve and reshape their environment with the aid of practical experimentation, scientific knowledge, or technology. From this perspective, modernism encouraged the re-examination of every aspect of existence, from commerce to philosophy, with the goal of finding that which was 'holding back' progress, and replacing it with new ways of reaching the same end. Others focus on modernism as an aesthetic introspection. This facilitates consideration of specific reactions to the use of technology in the First World War, and anti-technological and nihilistic aspects of the works of diverse thinkers and artists spanning the period from Friedrich Nietzsche (1844–1900) to Samuel Beckett (1906–1989). According to Roger Griffin, modernism can be defined in a maximalist vision as a broad cultural, social, or political initiative, sustained by the ethos of "the temporality of the new". Modernism sought to restore, Griffin writes, a "sense of sublime order and purpose to the contemporary world, thereby counteracting the (perceived) erosion of an overarching ‘nomos’, or ‘sacred canopy’, under the fragmenting and secularizing impact of modernity." Therefore, phenomena apparently unrelated to each other such as "Expressionism, Futurism, vitalism, Theosophy, psychoanalysis, nudism, eugenics, utopian town planning and architecture, modern dance, Bolshevism, organic nationalism – and even the cult of self-sacrifice that sustained the hecatomb of the First World War – disclose a common cause and psychological matrix in the fight against (perceived) decadence." All of them embody bids to access a "supra-personal experience of reality", in which individuals believed they could transcend their own mortality, and eventually that they had ceased to be victims of history to become instead its creators. 
According to one critic, modernism developed out of Romanticism's revolt against the effects of the Industrial Revolution and bourgeois values: "The ground motive of modernism, Graff asserts, was criticism of the nineteenth-century bourgeois social order and its world view [...] the modernists, carrying the torch of romanticism." While J. M. W. Turner (1775–1851), one of the greatest landscape painters of the 19th century, was a member of the Romantic movement, as "a pioneer in the study of light, colour, and atmosphere", he "anticipated the French Impressionists" and therefore modernism "in breaking down conventional formulas of representation; [though] unlike them, he believed that his works should always express significant historical, mythological, literary, or other narrative themes." The dominant trends of industrial Victorian England were opposed, from about 1850, by the English poets and painters that constituted the Pre-Raphaelite Brotherhood, because of their "opposition to technical skill without inspiration." They were influenced by the writings of the art critic John Ruskin (1819–1900), who had strong feelings about the role of art in helping to improve the lives of the urban working classes, in the rapidly expanding industrial cities of Britain. Art critic Clement Greenberg describes the Pre-Raphaelite Brotherhood as proto-Modernists: "There the proto-Modernists were, of all people, the pre-Raphaelites (and even before them, as proto-proto-Modernists, the German Nazarenes). The Pre-Raphaelites actually foreshadowed Manet (1832–1883), with whom Modernist painting most definitely begins. They acted on a dissatisfaction with painting as practiced in their time, holding that its realism wasn't truthful enough." Rationalism also had opponents in the philosophers Søren Kierkegaard (1813–55) and later Friedrich Nietzsche (1844–1900), both of whom had significant influence on existentialism and nihilism. However, the Industrial Revolution continued. 
Influential innovations included steam-powered industrialization, and especially the development of railways, starting in Britain in the 1830s, and the subsequent advancements in physics, engineering, and architecture associated with this. A major 19th-century engineering achievement was The Crystal Palace, the huge cast-iron and plate glass exhibition hall built for The Great Exhibition of 1851 in London. Glass and iron were used in a similar monumental style in the construction of major railway terminals in London, such as Paddington Station (1854) and King's Cross station (1852). These technological advances led to the building of later structures like the Brooklyn Bridge (1883) and the Eiffel Tower (1889). The latter broke all previous limitations on how tall man-made objects could be. These engineering marvels radically altered the 19th-century urban environment and the daily lives of people. The human experience of time itself was altered, with the development of the electric telegraph from 1837, and the adoption of standard time by British railway companies from 1845, and in the rest of the world over the next fifty years. Despite continuing technological advances, the idea that history and civilization were inherently progressive, and that progress was always good, came under increasing attack in the nineteenth century. Arguments arose that the values of the artist and those of society were not merely different, but that Society was antithetical to Progress, and could not move forward in its present form. Early in the century, the philosopher Schopenhauer (1788–1860) ("The World as Will and Representation", 1819) had called into question the previous optimism, and his ideas had an important influence on later thinkers, including Nietzsche. 
Two of the most significant thinkers of the mid nineteenth century were biologist Charles Darwin (1809–1882), author of "On the Origin of Species by Means of Natural Selection" (1859), and political scientist Karl Marx (1818–1883), author of "Das Kapital" (1867). Darwin's theory of evolution by natural selection undermined religious certainty and the idea of human uniqueness. In particular, the notion that human beings were driven by the same impulses as "lower animals" proved to be difficult to reconcile with the idea of an ennobling spirituality. Karl Marx argued that there were fundamental contradictions within the capitalist system, and that the workers were anything but free. Historians, and writers in different disciplines, have suggested various dates as starting points for modernism. Historian William Everdell, for example, has argued that modernism began in the 1870s, when metaphorical (or ontological) continuity began to yield to the discrete with mathematician Richard Dedekind's (1831–1916) Dedekind cut, and Ludwig Boltzmann's (1844–1906) statistical thermodynamics. Everdell also thinks modernism in painting began in 1885–1886 with Seurat's Divisionism, the "dots" used to paint "A Sunday Afternoon on the Island of La Grande Jatte". On the other hand, visual art critic Clement Greenberg called Immanuel Kant (1724–1804) "the first real Modernist", though he also wrote, "What can be safely called Modernism emerged in the middle of the last century—and rather locally, in France, with Baudelaire in literature and Manet in painting, and perhaps with Flaubert, too, in prose fiction. (It was a while later, and not so locally, that Modernism appeared in music and architecture)." The poet Baudelaire's "Les Fleurs du mal" ("The Flowers of Evil"), and Flaubert's novel "Madame Bovary" were both published in 1857. In the arts and letters, two important approaches developed separately in France, beginning in the 1860s. 
The first was Impressionism, a school of painting that initially focused on work done, not in studios, but outdoors ("en plein air"). Impressionist paintings demonstrated that human beings do not see objects, but instead see light itself. The school gathered adherents despite internal divisions among its leading practitioners, and became increasingly influential. Initially rejected from the most important commercial show of the time, the government-sponsored Paris Salon, the Impressionists organized yearly group exhibitions in commercial venues during the 1870s and 1880s, timing them to coincide with the official Salon. A significant event of 1863 was the Salon des Refusés, created by Emperor Napoleon III to display all of the paintings rejected by the Paris Salon. While most were in standard styles by inferior artists, the work of Manet attracted tremendous attention, and opened commercial doors to the movement. The second French school was Symbolism, which literary historians see beginning with Charles Baudelaire (1821–1867), and including the later poets, Arthur Rimbaud (1854–1891) "Une Saison en Enfer" ("A Season in Hell", 1873), Paul Verlaine (1844–1896), Stéphane Mallarmé (1842–1898), and Paul Valéry (1871–1945). The symbolists "stressed the priority of suggestion and evocation over direct description and explicit analogy," and were especially interested in "the musical properties of language." Cabaret, which gave birth to so many of the arts of modernism, including the immediate precursors of film, may be said to have begun in France in 1881 with the opening of the Black Cat in Montmartre, the beginning of the ironic monologue, and the founding of the Society of Incoherent Arts. Influential in the early days of modernism were the theories of Sigmund Freud (1856–1939). Freud's first major work was "Studies on Hysteria" (with Josef Breuer, 1895). 
Central to Freud's thinking is the idea "of the primacy of the unconscious mind in mental life," so that all subjective reality was based on the play of basic drives and instincts, through which the outside world was perceived. Freud's description of subjective states involved an unconscious mind full of primal impulses, and counterbalancing self-imposed restrictions derived from social values. Friedrich Nietzsche (1844–1900) was another major precursor of modernism, with a philosophy in which psychological drives, specifically the "will to power" ("Wille zur Macht"), were of central importance: "Nietzsche often identified life itself with 'will to power', that is, with an instinct for growth and durability." Henri Bergson (1859–1941), on the other hand, emphasized the difference between scientific, clock time and the direct, subjective, human experience of time. His work on time and consciousness "had a great influence on twentieth-century novelists," especially those modernists who used the stream of consciousness technique, such as Dorothy Richardson, James Joyce, and Virginia Woolf (1882–1941). Also important in Bergson's philosophy was the idea of "élan vital", the life force, which "brings about the creative evolution of everything." His philosophy also placed a high value on intuition, though without rejecting the importance of the intellect. Important literary precursors of modernism were Fyodor Dostoyevsky (1821–1881), who wrote the novels "Crime and Punishment" (1866) and "The Brothers Karamazov" (1880); Walt Whitman (1819–1892), who published the poetry collection "Leaves of Grass" (1855–1891); and August Strindberg (1849–1912), especially his later plays, including the trilogy "To Damascus" (1898–1901), "A Dream Play" (1902) and "The Ghost Sonata" (1907). Henry James has also been suggested as a significant precursor, in a work as early as "The Portrait of a Lady" (1881). 
Out of the collision of ideals derived from Romanticism, and an attempt to find a way for knowledge to explain that which was as yet unknown, came the first wave of works in the first decade of the 20th century, which, while their authors considered them extensions of existing trends in art, broke the implicit contract with the general public that artists were the interpreters and representatives of bourgeois culture and ideas. These "Modernist" landmarks include the atonal ending of Arnold Schoenberg's Second String Quartet in 1908, the expressionist paintings of Wassily Kandinsky starting in 1903, and culminating with his first abstract painting and the founding of the Blue Rider group in Munich in 1911, and the rise of fauvism and the inventions of cubism from the studios of Henri Matisse, Pablo Picasso, Georges Braque, and others, in the years between 1900 and 1910. An important aspect of modernism is how it relates to tradition through its adoption of techniques like reprise, incorporation, rewriting, recapitulation, revision and parody in new forms. T. S. Eliot made significant comments on the relation of the artist to tradition, including: "[W]e shall often find that not only the best, but the most individual parts of [a poet's] work, may be those in which the dead poets, his ancestors, assert their immortality most vigorously." However, the relationship of Modernism with tradition was complex, as literary scholar Peter Childs indicates: "There were paradoxical if not opposed trends towards revolutionary and reactionary positions, fear of the new and delight at the disappearance of the old, nihilism and fanatical enthusiasm, creativity and despair." An example of how Modernist art can be both revolutionary and yet be related to past tradition, is the music of the composer Arnold Schoenberg. 
On the one hand Schoenberg rejected traditional tonal harmony, the hierarchical system of organizing works of music that had guided music making for at least a century and a half. He believed he had discovered a wholly new way of organizing sound, based in the use of twelve-note rows. Yet while this was indeed wholly new, its origins can be traced back to the work of earlier composers, such as Franz Liszt, Richard Wagner, Gustav Mahler, Richard Strauss and Max Reger. Schoenberg also wrote tonal music throughout his career. In the world of art, in the first decade of the 20th century, young painters such as Pablo Picasso and Henri Matisse were causing a shock with their rejection of traditional perspective as the means of structuring paintings, though the impressionist Monet had already been innovative in his use of perspective. In 1907, as Picasso was painting "Les Demoiselles d'Avignon", Oskar Kokoschka was writing "Mörder, Hoffnung der Frauen" ("Murderer, Hope of Women"), the first Expressionist play (produced with scandal in 1909), and Arnold Schoenberg was composing his String Quartet No. 2 in F-sharp minor (1908), his first composition without a tonal centre. A primary influence that led to Cubism was the representation of three-dimensional form in the late works of Paul Cézanne, which were displayed in a retrospective at the 1907 Salon d'Automne. In Cubist artwork, objects are analyzed, broken up and reassembled in an abstracted form; instead of depicting objects from one viewpoint, the artist depicts the subject from a multitude of viewpoints to represent the subject in a greater context. Cubism was brought to the attention of the general public for the first time in 1911 at the Salon des Indépendants in Paris (held 21 April – 13 June). 
Jean Metzinger, Albert Gleizes, Henri Le Fauconnier, Robert Delaunay, Fernand Léger and Roger de La Fresnaye were shown together in Room 41, provoking a 'scandal' out of which Cubism emerged and spread throughout Paris and beyond. Also in 1911, Kandinsky painted "Bild mit Kreis" ("Picture with a Circle"), which he later called the first abstract painting. In 1912, Metzinger and Gleizes wrote the first (and only) major Cubist manifesto, "Du "Cubisme"", published in time for the Salon de la Section d'Or, the largest Cubist exhibition to date. In 1912 Metzinger painted and exhibited his enchanting "La Femme au Cheval (Woman with a Horse)" and "Danseuse au Café (Dancer in a Café)". Albert Gleizes painted and exhibited his "Les Baigneuses (The Bathers)" and his monumental "Le Dépiquage des Moissons (Harvest Threshing)". This work, along with "La Ville de Paris" ("City of Paris") by Robert Delaunay, was the largest and most ambitious Cubist painting undertaken during the pre-War Cubist period. In 1905, a group of four German artists, led by Ernst Ludwig Kirchner, formed Die Brücke (the Bridge) in the city of Dresden. This was arguably the founding organization for the German Expressionist movement, though they did not use the word itself. A few years later, in 1911, a like-minded group of young artists formed Der Blaue Reiter (The Blue Rider) in Munich. The name came from Wassily Kandinsky's "Der Blaue Reiter" painting of 1903. Among their members were Kandinsky, Franz Marc, Paul Klee, and August Macke. However, the term "Expressionism" did not firmly establish itself until 1913. Though initially mainly a German artistic movement, most predominant in painting, poetry and the theatre between 1910 and 1930, most precursors of the movement were not German. 
Furthermore, there have been expressionist writers of prose fiction, as well as non-German speaking expressionist writers, and, while the movement had declined in Germany with the rise of Adolf Hitler in the 1930s, there were subsequent expressionist works. Expressionism is notoriously difficult to define, in part because it "overlapped with other major 'isms' of the modernist period: with Futurism, Vorticism, Cubism, Surrealism and Dada." Richard Murphy also comments: "the search for an all-inclusive definition is problematic to the extent that the most challenging expressionists" such as the novelist Franz Kafka, poet Gottfried Benn, and novelist Alfred Döblin were simultaneously the most vociferous anti-expressionists. What can be said, however, is that it was a movement that developed in the early 20th century mainly in Germany in reaction to the dehumanizing effect of industrialization and the growth of cities, and that "one of the central means by which expressionism identifies itself as an avant-garde movement, and by which it marks its distance to traditions and the cultural institution as a whole is through its relationship to realism and the dominant conventions of representation." More explicitly: that the expressionists rejected the ideology of realism. There was a concentrated Expressionist movement in early 20th century German theatre, of which Georg Kaiser and Ernst Toller were the most famous playwrights. Other notable Expressionist dramatists included Reinhard Sorge, Walter Hasenclever, Hans Henny Jahnn, and Arnolt Bronnen. They looked back to Swedish playwright August Strindberg and German actor and dramatist Frank Wedekind as precursors of their dramaturgical experiments. Oskar Kokoschka's "Murderer, the Hope of Women" was the first fully Expressionist work for the theatre, which opened on 4 July 1909 in Vienna. 
The extreme simplification of characters to mythic types, choral effects, declamatory dialogue and heightened intensity would become characteristic of later Expressionist plays. The first full-length Expressionist play was "The Son" by Walter Hasenclever, which was published in 1914 and first performed in 1916. Futurism is yet another modernist movement. In 1909, the Parisian newspaper "Le Figaro" published F. T. Marinetti's first manifesto. Soon afterwards a group of painters (Giacomo Balla, Umberto Boccioni, Carlo Carrà, Luigi Russolo, and Gino Severini) co-signed the Futurist Manifesto. Modeled on Marx and Engels' famous "Communist Manifesto" (1848), such manifestoes put forward ideas that were meant to provoke and to gather followers. However, arguments in favor of geometric or purely abstract painting were, at this time, largely confined to "little magazines" which had only tiny circulations. Modernist primitivism and pessimism were controversial, and the mainstream in the first decade of the 20th century was still inclined towards a faith in progress and liberal optimism. Abstract artists, taking as their examples the impressionists, as well as Paul Cézanne (1839–1906) and Edvard Munch (1863–1944), began with the assumption that color and shape, not the depiction of the natural world, formed the essential characteristics of art. Western art had been, from the Renaissance up to the middle of the 19th century, underpinned by the logic of perspective and an attempt to reproduce an illusion of visible reality. The arts of cultures other than the European had become accessible and showed alternative ways of describing visual experience to the artist. By the end of the 19th century many artists felt a need to create a new kind of art which would encompass the fundamental changes taking place in technology, science and philosophy. 
The sources from which individual artists drew their theoretical arguments were diverse, and reflected the social and intellectual preoccupations in all areas of Western culture at that time. Wassily Kandinsky, Piet Mondrian, and Kazimir Malevich all believed in redefining art as the arrangement of pure color. The use of photography, which had rendered much of the representational function of visual art obsolete, strongly affected this aspect of modernism. Modernist architects and designers, such as Frank Lloyd Wright and Le Corbusier, believed that new technology rendered old styles of building obsolete. Le Corbusier thought that buildings should function as "machines for living in", analogous to cars, which he saw as machines for traveling in. Just as cars had replaced the horse, so modernist design should reject the old styles and structures inherited from Ancient Greece or from the Middle Ages. Following this machine aesthetic, modernist designers typically rejected decorative motifs in design, preferring to emphasize the materials used and pure geometrical forms. The skyscraper is the archetypal modernist building, and the Wainwright Building, a 10-story office building built 1890–91, in St. Louis, Missouri, United States, is among the first skyscrapers in the world. Ludwig Mies van der Rohe's Seagram Building in New York (1956–1958) is often regarded as the pinnacle of this modernist high-rise architecture. Many aspects of modernist design still persist within the mainstream of contemporary architecture, though previous dogmatism has given way to a more playful use of decoration, historical quotation, and spatial drama. 
In 1913—which was the year of philosopher Edmund Husserl's "Ideas", physicist Niels Bohr's quantized atom, Ezra Pound's founding of imagism, the Armory Show in New York, and in Saint Petersburg the "first futurist opera", Mikhail Matyushin's "Victory over the Sun"—another Russian composer, Igor Stravinsky, composed "The Rite of Spring", a ballet that depicts human sacrifice, and has a musical score full of dissonance and primitive rhythm. This caused an uproar at its first performance in Paris. Though modernism was still "progressive" at this time, it increasingly saw traditional forms and traditional social arrangements as hindering progress, and recast the artist as a revolutionary, engaged in overthrowing rather than enlightening society. Also in 1913 a less violent event occurred in France with the publication of the first volume of Marcel Proust's important novel sequence "À la recherche du temps perdu" (1913–1927) ("In Search of Lost Time"). This is often presented as an early example of a writer using the stream-of-consciousness technique, but Robert Humphrey comments that Proust "is concerned only with the reminiscent aspect of consciousness" and that he "was deliberately recapturing the past for the purpose of communicating; hence he did not write a stream-of-consciousness novel." Stream of consciousness was an important modernist literary innovation, and it has been suggested that Arthur Schnitzler (1862–1931) was the first to make full use of it in his short story "Leutnant Gustl" ("None but the Brave") (1900). Dorothy Richardson was the first English writer to use it, in the early volumes of her novel sequence "Pilgrimage" (1915–1967). Other modernist novelists associated with this narrative technique include James Joyce in "Ulysses" (1922) and Italo Svevo in "La coscienza di Zeno" (1923). 
However, with the coming of the Great War of 1914–1918 and the Russian Revolution of 1917, the world was drastically changed and doubt was cast on the beliefs and institutions of the past. The failure of the previous status quo seemed self-evident to a generation that had seen millions die fighting over scraps of earth: prior to 1914 it had been argued that no one would fight such a war, since the cost was too high. The birth of a machine age which had made major changes in the conditions of daily life in the 19th century now had radically changed the nature of warfare. The traumatic nature of recent experience altered basic assumptions, and realistic depiction of life in the arts seemed inadequate when faced with the fantastically surreal nature of trench warfare. The view that mankind was making steady moral progress now seemed ridiculous in the face of the senseless slaughter, described in works such as Erich Maria Remarque's novel "All Quiet on the Western Front" (1929). Therefore, modernism's view of reality, which had been a minority taste before the war, became more generally accepted in the 1920s. In literature and visual art some modernists sought to defy expectations mainly in order to make their art more vivid, or to force the audience to take the trouble to question their own preconceptions. This aspect of modernism has often seemed a reaction to consumer culture, which developed in Europe and North America in the late 19th century. Whereas most manufacturers try to make products that will be marketable by appealing to preferences and prejudices, high modernists rejected such consumerist attitudes in order to undermine conventional thinking. The art critic Clement Greenberg expounded this theory of modernism in his essay "Avant-Garde and Kitsch". Greenberg labeled the products of consumer culture "kitsch", because their design aimed simply to have maximum appeal, with any difficult features removed. 
For Greenberg, modernism thus formed a reaction against the development of such examples of modern consumer culture as commercial popular music, Hollywood, and advertising. Greenberg associated this with the revolutionary rejection of capitalism. Some modernists saw themselves as part of a revolutionary culture that included political revolution. In Russia after the 1917 Revolution there was indeed initially a burgeoning of avant-garde cultural activity, which included Russian Futurism. However others rejected conventional politics as well as artistic conventions, believing that a revolution of political consciousness had greater importance than a change in political structures. But many modernists saw themselves as apolitical. Others, such as T. S. Eliot, rejected mass popular culture from a conservative position. Some even argue that modernism in literature and art functioned to sustain an elite culture which excluded the majority of the population. Surrealism, which originated in the early 1920s, came to be regarded by the public as the most extreme form of modernism, or "the avant-garde of Modernism". The word "surrealist" was coined by Guillaume Apollinaire and first appeared in the preface to his play "Les Mamelles de Tirésias", which was written in 1903 and first performed in 1917. Major surrealists include Paul Éluard, Robert Desnos, Max Ernst, Hans Arp, Antonin Artaud, Raymond Queneau, Joan Miró, and Marcel Duchamp. By 1930, Modernism had won a place in the establishment, including the political and artistic establishment, although by this time Modernism itself had changed. Modernism continued to evolve during the 1930s. Between 1930 and 1932 composer Arnold Schoenberg worked on "Moses und Aron", one of the first operas to make use of the twelve-tone technique, Pablo Picasso painted in 1937 "Guernica", his cubist condemnation of fascism, while in 1939 James Joyce pushed the boundaries of the modern novel further with "Finnegans Wake". 
Also by 1930 Modernism began to influence mainstream culture, so that, for example, "The New Yorker" magazine began publishing work, influenced by Modernism, by young writers and humorists like Dorothy Parker, Robert Benchley, E. B. White, S. J. Perelman, and James Thurber, amongst others. Perelman is highly regarded for his humorous short stories that he published in magazines in the 1930s and 1940s, most often in "The New Yorker", which are considered to be the first examples of surrealist humor in America. Modern ideas in art also began to appear more frequently in commercials and logos, an early example of which, from 1916, is the famous London Underground logo designed by Edward Johnston. One of the most visible changes of this period was the adoption of new technologies into daily life of ordinary people in Western Europe and North America. Electricity, the telephone, the radio, the automobile—and the need to work with them, repair them and live with them—created social change. The kind of disruptive moment that only a few knew in the 1880s became a common occurrence. For example, the speed of communication reserved for the stock brokers of 1890 became part of family life, at least in middle class North America. Associated with urbanization and changing social mores also came smaller families and changed relationships between parents and their children. Another strong influence at this time was Marxism. After the generally primitivistic/irrationalist aspect of pre-World War I Modernism (which for many modernists precluded any attachment to merely political solutions) and the neoclassicism of the 1920s (as represented most famously by T. S. Eliot and Igor Stravinsky—which rejected popular solutions to modern problems), the rise of fascism, the Great Depression, and the march to war helped to radicalise a generation. Bertolt Brecht, W. H. 
Auden, André Breton, Louis Aragon and the philosophers Antonio Gramsci and Walter Benjamin are perhaps the most famous exemplars of this Modernist form of Marxism. There were, however, also modernists explicitly of 'the right', including Salvador Dalí, Wyndham Lewis, T. S. Eliot, Ezra Pound, the Dutch author Menno ter Braak and others. Significant Modernist literary works continued to be created in the 1920s and 1930s, including further novels by Marcel Proust, Virginia Woolf, Robert Musil, and Dorothy Richardson. The American Modernist dramatist Eugene O'Neill's career began in 1914, but his major works appeared in the 1920s, 1930s and early 1940s. Two other significant Modernist dramatists writing in the 1920s and 1930s were Bertolt Brecht and Federico García Lorca. D. H. Lawrence's "Lady Chatterley's Lover" was privately published in 1928, while another important landmark for the history of the modern novel came with the publication of William Faulkner's "The Sound and the Fury" in 1929. In the 1930s, in addition to further major works by Faulkner, Samuel Beckett published his first major work, the novel "Murphy" (1938). Then in 1939 James Joyce's "Finnegans Wake" appeared. This is written in a largely idiosyncratic language, consisting of a mixture of standard English lexical items and neologistic multilingual puns and portmanteau words, which attempts to recreate the experience of sleep and dreams. In poetry T. S. Eliot, E. E. Cummings, and Wallace Stevens were writing from the 1920s until the 1950s. While Modernist poetry in English is often viewed as an American phenomenon, with leading exponents including Ezra Pound, T. S. Eliot, Marianne Moore, William Carlos Williams, H.D., and Louis Zukofsky, there were important British Modernist poets, including David Jones, Hugh MacDiarmid, Basil Bunting, and W. H. Auden. European Modernist poets include Federico García Lorca, Anna Akhmatova, Constantine Cavafy, and Paul Valéry. 
The Modernist movement continued during this period in Soviet Russia. In 1930 composer Dmitri Shostakovich's (1906–1975) opera "The Nose" was premiered, in which he uses a montage of different styles, including folk music, popular song and atonality. Amongst his influences was Alban Berg's (1885–1935) opera "Wozzeck" (1925), which "had made a tremendous impression on Shostakovich when it was staged in Leningrad." However, from 1932 Socialist realism began to oust Modernism in the Soviet Union, and in 1936 Shostakovich was attacked and forced to withdraw his 4th Symphony. Alban Berg wrote another significant, though incomplete, Modernist opera, "Lulu", which premiered in 1937. Berg's Violin Concerto was first performed in 1935. Like Shostakovich, other composers faced difficulties in this period. In Germany Arnold Schoenberg (1874–1951) was forced to flee to the U.S. when Hitler came to power in 1933, because of his Modernist atonal style as well as his Jewish ancestry. His major works from this period are a Violin Concerto, Op. 36 (1934/36), and a Piano Concerto, Op. 42 (1942). Schoenberg also wrote tonal music in this period with the Suite for Strings in G major (1935) and the Chamber Symphony No. 2 in E minor, Op. 38 (begun in 1906, completed in 1939). During this time Hungarian Modernist Béla Bartók (1881–1945) produced a number of major works, including "Music for Strings, Percussion and Celesta" (1936) and the "Divertimento for String Orchestra" (1939), String Quartet No. 5 (1934), and No. 6 (his last, 1939). But he too left for the US in 1940, because of the rise of fascism in Hungary. Igor Stravinsky (1882–1971) continued writing in his neoclassical style during the 1930s and 1940s, writing works like the "Symphony of Psalms" (1930), Symphony in C (1940) and "Symphony in Three Movements" (1945). He also emigrated to the US because of World War II. 
Olivier Messiaen (1908–1992), however, served in the French army during the war and was imprisoned at Stalag VIII-A by the Germans, where he composed his famous "Quatuor pour la fin du temps" ("Quartet for the End of Time"). The quartet was first performed in January 1941 to an audience of prisoners and prison guards. In painting, during the 1920s and 1930s and the Great Depression, modernism was defined by Surrealism, late Cubism, Bauhaus, De Stijl, Dada, German Expressionism, and Modernist and masterful color painters like Henri Matisse and Pierre Bonnard as well as the abstractions of artists like Piet Mondrian and Wassily Kandinsky which characterized the European art scene. In Germany, Max Beckmann, Otto Dix, George Grosz and others politicized their paintings, foreshadowing the coming of World War II, while in America modernism is seen in the form of American Scene painting, and the Social Realism and Regionalism movements, which contained both political and social commentary, dominated the art world. Artists like Ben Shahn, Thomas Hart Benton, Grant Wood, George Tooker, John Steuart Curry, Reginald Marsh, and others became prominent. Modernism is defined in Latin America by painters Joaquín Torres García from Uruguay and Rufino Tamayo from Mexico, while the muralist movement with Diego Rivera, David Siqueiros, José Clemente Orozco, Pedro Nel Gómez and Santiago Martinez Delgado, and Symbolist paintings by Frida Kahlo, began a renaissance of the arts for the region, characterized by a freer use of color and an emphasis on political messages. Diego Rivera is perhaps best known to the public for his 1933 mural, "Man at the Crossroads", in the lobby of the RCA Building at Rockefeller Center. When his patron Nelson Rockefeller discovered that the mural included a portrait of Vladimir Lenin and other communist imagery, he fired Rivera, and the unfinished work was eventually destroyed by Rockefeller's staff. 
Frida Kahlo's works are often characterized by their stark portrayals of pain. Kahlo was deeply influenced by indigenous Mexican culture, which is apparent in her paintings' bright colors and dramatic symbolism. Christian and Jewish themes are often depicted in her work as well; she combined elements of the classic religious Mexican tradition, which were often bloody and violent. Frida Kahlo's Symbolist works relate strongly to Surrealism and to the Magic Realism movement in literature. Political activism was an important part of David Siqueiros' life, and frequently inspired him to set aside his artistic career. His art was deeply rooted in the Mexican Revolution. The period from the 1920s to the 1950s is known as the Mexican Renaissance, and Siqueiros was active in the attempt to create an art that was at once Mexican and universal. The young Jackson Pollock attended Siqueiros' experimental workshop in New York and helped build floats for a May Day parade. During the 1930s radical leftist politics characterized many of the artists connected to Surrealism, including Pablo Picasso. On 26 April 1937, during the Spanish Civil War, the Basque town of Gernika was bombed by Nazi Germany's Luftwaffe. The Germans were attacking to support the efforts of Francisco Franco to overthrow the Basque government and the Spanish Republican government. Pablo Picasso painted his mural-sized "Guernica" to commemorate the horrors of the bombing. During the Great Depression of the 1930s and through the years of World War II, American art was characterized by Social Realism and American Scene Painting, in the work of Grant Wood, Edward Hopper, Ben Shahn, Thomas Hart Benton, and several others. "Nighthawks" (1942) is a painting by Edward Hopper that portrays people sitting in a downtown diner late at night. It is not only Hopper's most famous painting, but one of the most recognizable in American art. The scene was inspired by a diner in Greenwich Village. 
Hopper began painting it immediately after the attack on Pearl Harbor. After this event there was a large feeling of gloominess over the country, a feeling that is portrayed in the painting. The urban street is empty outside the diner, and inside none of the three patrons is apparently looking or talking to the others but instead is lost in their own thoughts. This portrayal of modern urban life as empty or lonely is a common theme throughout Hopper's work. "American Gothic" is a painting by Grant Wood from 1930. Portraying a pitchfork-holding farmer and a younger woman in front of a house of Carpenter Gothic style, it is one of the most familiar images in 20th-century American art. Art critics who had favorable opinions of the painting, such as Gertrude Stein and Christopher Morley, assumed it was meant to be a satire of rural small-town life. It was thus seen as part of the trend towards increasingly critical depictions of rural America, along the lines of Sherwood Anderson's 1919 "Winesburg, Ohio", Sinclair Lewis's 1920 "Main Street", and Carl Van Vechten's "The Tattooed Countess" in literature. However, with the onset of the Great Depression, the painting came to be seen as a depiction of steadfast American pioneer spirit. The situation for artists in Europe during the 1930s deteriorated rapidly as the Nazis' power in Germany and across Eastern Europe increased. "Degenerate art" was a term adopted by the Nazi regime in Germany for virtually all modern art. Such art was banned on the grounds that it was un-German or Jewish Bolshevist in nature, and those identified as degenerate artists were subjected to sanctions. These included being dismissed from teaching positions, being forbidden to exhibit or to sell their art, and in some cases being forbidden to produce art entirely. Degenerate Art was also the title of an exhibition, mounted by the Nazis in Munich in 1937. 
The climate became so hostile for artists and art associated with modernism and abstraction that many left for the Americas. German artist Max Beckmann and scores of others fled Europe for New York. In New York City a new generation of young and exciting Modernist painters led by Arshile Gorky, Willem de Kooning, and others was just beginning to come of age. Arshile Gorky's portrait of someone who might be Willem de Kooning is an example of the evolution of abstract expressionism from the context of figure painting, cubism and surrealism. Along with his friends de Kooning and John D. Graham, Gorky created biomorphically shaped and abstracted figurative compositions that by the 1940s evolved into totally abstract paintings. Gorky's work seems to be a careful analysis of memory, emotion and shape, using line and color to express feeling and nature. While "The Oxford Encyclopedia of British Literature" states that modernism ended by c. 1939 with regard to British and American literature, "When (if) Modernism petered out and postmodernism began has been contested almost as hotly as when the transition from Victorianism to Modernism occurred." Clement Greenberg sees modernism ending in the 1930s, with the exception of the visual and performing arts, but with regard to music, Paul Griffiths notes that, while Modernism "seemed to be a spent force" by the late 1920s, after World War II, "a new generation of composers—Boulez, Barraqué, Babbitt, Nono, Stockhausen, Xenakis" revived modernism. In fact many literary modernists lived into the 1950s and 1960s, though generally they were no longer producing major works. The term "late modernism" is also sometimes applied to Modernist works published after 1930. Among modernists (or late modernists) still publishing after 1945 were Wallace Stevens, Gottfried Benn, T. S. Eliot, Anna Akhmatova, William Faulkner, Dorothy Richardson, John Cowper Powys, and Ezra Pound. 
Basil Bunting, born in 1901, published his most important Modernist poem "Briggflatts" in 1965. In addition, Hermann Broch's "The Death of Virgil" was published in 1945 and Thomas Mann's "Doctor Faustus" in 1947. Samuel Beckett, who died in 1989, has been described as a "later Modernist". Beckett is a writer with roots in the expressionist tradition of Modernism, who produced works from the 1930s until the 1980s, including "Molloy" (1951), "Waiting for Godot" (1953), "Happy Days" (1961), and "Rockaby" (1981). The terms "minimalist" and "post-Modernist" have also been applied to his later works. The poets Charles Olson (1910–1970) and J. H. Prynne (born 1936) are among the writers in the second half of the 20th century who have been described as late modernists. More recently the term "late modernism" has been redefined by at least one critic and used to refer to works written after 1945, rather than 1930. With this usage goes the idea that the ideology of modernism was significantly re-shaped by the events of World War II, especially the Holocaust and the dropping of the atom bomb. The postwar period left the capitals of Europe in upheaval with an urgency to economically and physically rebuild and to politically regroup. In Paris (the former center of European culture and the former capital of the art world) the climate for art was a disaster. Important collectors, dealers, and Modernist artists, writers, and poets had fled Europe for New York and America. The surrealists and modern artists from every cultural center of Europe had fled the onslaught of the Nazis for safe haven in the United States. Many of those who didn't flee perished. A few artists, notably Pablo Picasso, Henri Matisse, and Pierre Bonnard, remained in France and survived. 
The 1940s in New York City heralded the triumph of American abstract expressionism, a Modernist movement that combined lessons learned from Henri Matisse, Pablo Picasso, surrealism, Joan Miró, cubism, Fauvism, and early modernism via great teachers in America like Hans Hofmann and John D. Graham. American artists benefited from the presence of Piet Mondrian, Fernand Léger, Max Ernst and the André Breton group, Pierre Matisse's gallery, and Peggy Guggenheim's gallery "The Art of This Century", as well as other factors. Paris, moreover, recaptured much of its luster in the 1950s and 60s as the center of a machine art florescence, with both of the leading machine art sculptors Jean Tinguely and Nicolas Schöffer having moved there to launch their careers—a florescence that, in light of the technocentric character of modern life, may well have a particularly long-lasting influence. The term "Theatre of the Absurd" is applied to plays, written primarily by Europeans, that express the belief that human existence has no meaning or purpose and therefore all communication breaks down. Logical construction and argument give way to irrational and illogical speech and to its ultimate conclusion, silence. While there are significant precursors, including Alfred Jarry (1873–1907), the Theatre of the Absurd is generally seen as beginning in the 1950s with the plays of Samuel Beckett. Critic Martin Esslin coined the term in his 1960 essay "Theatre of the Absurd". He related these plays to a broad theme of the Absurd, similar to the way Albert Camus uses the term in his 1942 essay, "The Myth of Sisyphus". The Absurd in these plays takes the form of man's reaction to a world apparently without meaning, and/or man as a puppet controlled or menaced by invisible outside forces. 
Though the term is applied to a wide range of plays, some characteristics coincide in many of the plays: broad comedy, often similar to vaudeville, mixed with horrific or tragic images; characters caught in hopeless situations forced to do repetitive or meaningless actions; dialogue full of clichés, wordplay, and nonsense; plots that are cyclical or absurdly expansive; either a parody or dismissal of realism and the concept of the "well-made play". Playwrights commonly associated with the Theatre of the Absurd include Samuel Beckett (1906–1989), Eugène Ionesco (1909–1994), Jean Genet (1910–1986), Harold Pinter (1930–2008), Tom Stoppard (born 1937), Alexander Vvedensky (1904–1941), Daniil Kharms (1905–1942), Friedrich Dürrenmatt (1921–1990), Alejandro Jodorowsky (born 1929), Fernando Arrabal (born 1932), Václav Havel (1936–2011) and Edward Albee (1928–2016). During the late 1940s Jackson Pollock's radical approach to painting revolutionized the potential for all contemporary art that followed him. To some extent Pollock realized that the journey toward making a work of art was as important as the work of art itself. Like Pablo Picasso's innovative reinventions of painting and sculpture in the early 20th century via Cubism and constructed sculpture, Pollock redefined the way art is made. His move away from easel painting and conventionality was a liberating signal to the artists of his era and to all who came after. Artists realized that Jackson Pollock's process—placing unstretched raw canvas on the floor where it could be attacked from all four sides using artistic and industrial materials; dripping and throwing linear skeins of paint; drawing, staining, and brushing; using imagery and nonimagery—essentially blasted artmaking beyond any prior boundary. Abstract expressionism generally expanded and developed the definitions and possibilities available to artists for the creation of new works of art. 
The other abstract expressionists followed Pollock's breakthrough with new breakthroughs of their own. In a sense the innovations of Jackson Pollock, Willem de Kooning, Franz Kline, Mark Rothko, Philip Guston, Hans Hofmann, Clyfford Still, Barnett Newman, Ad Reinhardt, Robert Motherwell, Peter Voulkos and others opened the floodgates to the diversity and scope of all the art that followed them. Rereadings into abstract art by art historians such as Linda Nochlin, Griselda Pollock and Catherine de Zegher critically show, however, that pioneering women artists who produced major innovations in modern art had been ignored by official accounts of its history. Henry Moore (1898–1986) emerged after World War II as Britain's leading sculptor. He was best known for his semi-abstract monumental bronze sculptures which are located around the world as public works of art. His forms are usually abstractions of the human figure, typically depicting mother-and-child or reclining figures, usually suggestive of the female body, apart from a phase in the 1950s when he sculpted family groups. His forms are generally pierced or contain hollow spaces. In the 1950s, Moore began to receive increasingly significant commissions, including a reclining figure for the UNESCO building in Paris in 1958. With many more public works of art, the scale of Moore's sculptures grew significantly. The last three decades of Moore's life continued in a similar vein, with several major retrospectives taking place around the world, notably a prominent exhibition in the summer of 1972 in the grounds of the Forte di Belvedere overlooking Florence. By the end of the 1970s, there were some 40 exhibitions a year featuring his work. On the campus of the University of Chicago in December 1967, 25 years to the minute after the team of physicists led by Enrico Fermi achieved the first controlled, self-sustaining nuclear chain reaction, Moore's "Nuclear Energy" was unveiled. 
Also in Chicago, Moore commemorated science with a large bronze sundial, locally named "Man Enters the Cosmos" (1980), which was commissioned to recognise the space exploration program. The "London School" of figurative painters, including Francis Bacon (1909–1992), Lucian Freud (1922–2011), Frank Auerbach (born 1931), Leon Kossoff (born 1926), and Michael Andrews (1928–1995), have received widespread international recognition. Francis Bacon was an Irish-born British figurative painter known for his bold, graphic and emotionally raw imagery. His painterly but abstracted figures typically appear isolated in glass or steel geometrical cages set against flat, nondescript backgrounds. Bacon began painting during his early 20s but worked only sporadically until his mid-30s. His breakthrough came with the 1944 triptych "Three Studies for Figures at the Base of a Crucifixion" which sealed his reputation as a uniquely bleak chronicler of the human condition. His output can be crudely described as consisting of sequences or variations on a single motif; beginning with the 1940s male heads isolated in rooms, the early 1950s screaming popes, and mid to late 1950s animals and lone figures suspended in geometric structures. These were followed by his early 1960s modern variations of the crucifixion in the triptych format. From the mid-1960s to early 1970s, Bacon mainly produced strikingly compassionate portraits of friends. Following the suicide of his lover George Dyer in 1971, his art became more personal, inward-looking, and preoccupied with themes and motifs of death. During his lifetime, Bacon was equally reviled and acclaimed. Lucian Freud was a German-born British painter, known chiefly for his thickly impastoed portrait and figure paintings, who was widely considered the pre-eminent British artist of his time. His works are noted for their psychological penetration, and for their often discomforting examination of the relationship between artist and model. 
According to William Grimes of "The New York Times", "Lucian Freud and his contemporaries transformed figure painting in the 20th century. In paintings like "Girl with a White Dog" (1951–1952), Freud put the pictorial language of traditional European painting in the service of an anti-romantic, confrontational style of portraiture that stripped bare the sitter's social facade. Ordinary people—many of them his friends—stared wide-eyed from the canvas, vulnerable to the artist's ruthless inspection." In abstract painting during the 1950s and 1960s several new directions like hard-edge painting and other forms of geometric abstraction began to appear in artist studios and in radical avant-garde circles as a reaction against the subjectivism of abstract expressionism. Clement Greenberg became the voice of post-painterly abstraction when he curated an influential exhibition of new painting that toured important art museums throughout the United States in 1964. Color Field painting, hard-edge painting and lyrical abstraction emerged as radical new directions. By the late 1960s however, postminimalism, process art and Arte Povera also emerged as revolutionary concepts and movements that encompassed both painting and sculpture, via lyrical abstraction and the postminimalist movement, and in early conceptual art. Process art as inspired by Pollock enabled artists to experiment with and make use of a diverse encyclopedia of style, content, material, placement, sense of time, and plastic and real space. Nancy Graves, Ronald Davis, Howard Hodgkin, Larry Poons, Jannis Kounellis, Brice Marden, Colin McCahon, Bruce Nauman, Richard Tuttle, Alan Saret, Walter Darby Bannard, Lynda Benglis, Dan Christensen, Larry Zox, Ronnie Landfield, Eva Hesse, Keith Sonnier, Richard Serra, Sam Gilliam, Mario Merz and Peter Reginato were some of the younger artists who emerged during the era of late modernism that spawned the heyday of the art of the late 1960s. 
In 1962 the Sidney Janis Gallery mounted "The New Realists", the first major pop art group exhibition in an uptown art gallery in New York City. Janis mounted the exhibition in a 57th Street storefront near his gallery. The show sent shockwaves through the New York School and reverberated worldwide. Earlier in England in 1958 the term "Pop Art" was used by Lawrence Alloway to describe paintings that celebrated the consumerism of the post World War II era. This movement rejected abstract expressionism and its focus on the hermeneutic and psychological interior in favor of art that depicted and often celebrated material consumer culture, advertising, and the iconography of the mass production age. The early works of David Hockney and the works of Richard Hamilton and Eduardo Paolozzi (who created the groundbreaking "I was a Rich Man's Plaything", 1947) are considered seminal examples in the movement. Meanwhile, in the downtown scene in New York's East Village 10th Street galleries, artists were formulating an American version of pop art. Claes Oldenburg had his storefront, and the Green Gallery on 57th Street began to show the works of Tom Wesselmann and James Rosenquist. Later Leo Castelli exhibited the works of other American artists, including those of Andy Warhol and Roy Lichtenstein for most of their careers. There is a connection between the radical works of Marcel Duchamp and Man Ray, the rebellious Dadaists with a sense of humor, and pop artists like Claes Oldenburg, Andy Warhol, and Roy Lichtenstein, whose paintings reproduce the look of Ben-Day dots, a technique used in commercial reproduction. Minimalism describes movements in various forms of art and design, especially visual art and music, wherein artists intend to expose the essence or identity of a subject through eliminating all nonessential forms, features, or concepts. Minimalism is any design or style wherein the simplest and fewest elements are used to create the maximum effect. 
As a specific movement in the arts it is identified with developments in post–World War II Western art, most strongly with American visual arts in the 1960s and early 1970s. Prominent artists associated with this movement include Donald Judd, John McCracken, Agnes Martin, Dan Flavin, Robert Morris, Ronald Bladen, Anne Truitt, and Frank Stella. It derives from the reductive aspects of modernism and is often interpreted as a reaction against Abstract expressionism and a bridge to Postminimal art practices. By the early 1960s minimalism emerged as an abstract movement in art (with roots in the geometric abstraction of Kazimir Malevich, the Bauhaus and Piet Mondrian) that rejected the idea of relational and subjective painting, the complexity of abstract expressionist surfaces, and the emotional zeitgeist and polemics present in the arena of action painting. Minimalism argued that extreme simplicity could capture all of the sublime representation needed in art. Minimalism is variously construed either as a precursor to postmodernism, or as a postmodern movement itself. In the latter perspective, early minimalism yielded advanced Modernist works, but the movement partially abandoned this direction when some artists like Robert Morris changed direction in favor of the anti-form movement. Hal Foster, in his essay "The Crux of Minimalism", examines the extent to which Donald Judd and Robert Morris both acknowledge and exceed Greenbergian Modernism in their published definitions of minimalism. He argues that minimalism is not a "dead end" of Modernism, but a "paradigm shift toward postmodern practices that continue to be elaborated today." The terms have expanded to encompass a movement in music that features such repetition and iteration as those of the compositions of La Monte Young, Terry Riley, Steve Reich, Philip Glass, and John Adams. Minimalist compositions are sometimes known as systems music. 
The term "minimalist" often colloquially refers to anything that is spare or stripped to its essentials. It has also been used to describe the plays and novels of Samuel Beckett, the films of Robert Bresson, the stories of Raymond Carver, and the automobile designs of Colin Chapman. In the late 1960s Robert Pincus-Witten coined the term "postminimalism" to describe minimalist-derived art which had content and contextual overtones that minimalism rejected. The term was applied by Pincus-Whitten to the work of Eva Hesse, Keith Sonnier, Richard Serra and new work by former minimalists Robert Smithson, Robert Morris, Sol LeWitt, Barry Le Va, and others. Other minimalists including Donald Judd, Dan Flavin, Carl Andre, Agnes Martin, John McCracken and others continued to produce late Modernist paintings and sculpture for the remainders of their careers. Since then, many artists have embraced minimal or postminimal styles, and the label "Postmodern" has been attached to them. Related to abstract expressionism was the emergence of combining manufactured items with artist materials, moving away from previous conventions of painting and sculpture. The work of Robert Rauschenberg exemplifies this trend. His "combines" of the 1950s were forerunners of pop art and installation art, and used assemblages of large physical objects, including stuffed animals, birds and commercial photographs. Rauschenberg, Jasper Johns, Larry Rivers, John Chamberlain, Claes Oldenburg, George Segal, Jim Dine, and Edward Kienholz were among important pioneers of both abstraction and pop art. Creating new conventions of art-making, they made acceptable in serious contemporary art circles the radical inclusion in their works of unlikely materials. Another pioneer of collage was Joseph Cornell, whose more intimately scaled works were seen as radical because of both his personal iconography and his use of found objects. 
In the early 20th century Marcel Duchamp submitted for exhibition a urinal as a sculpture. He professed his intent that people look at the urinal as if it were a work of art because he said it was a work of art. He referred to his work as "readymades". "Fountain" was a urinal signed with the pseudonym "R. Mutt", the exhibition of which shocked the art world in 1917. This and Duchamp's other works are generally labelled as Dada. Duchamp can be seen as a precursor to conceptual art, other famous examples being John Cage's "4'33"", which is four minutes and thirty-three seconds of silence, and Rauschenberg's "Erased de Kooning Drawing". Many conceptual works take the position that art is the result of the viewer viewing an object or act as art, not of the intrinsic qualities of the work itself. In choosing "an ordinary article of life" and creating "a new thought for that object" Duchamp invited onlookers to view "Fountain" as a sculpture. Marcel Duchamp famously gave up "art" in favor of chess. Avant-garde composer David Tudor created a piece, "Reunion" (1968), written jointly with Lowell Cross, that features a chess game in which each move triggers a lighting effect or projection. Duchamp and Cage played the game at the work's premiere. Steven Best and Douglas Kellner identify Rauschenberg and Jasper Johns as part of the transitional phase, influenced by Duchamp, between Modernism and Postmodernism. Both used images of ordinary objects, or the objects themselves, in their work, while retaining the abstraction and painterly gestures of high Modernism. During the late 1950s and 1960s artists with a wide range of interests began to push the boundaries of contemporary art. Yves Klein in France, Carolee Schneemann, Yayoi Kusama, Charlotte Moorman and Yoko Ono in New York City, and Joseph Beuys, Wolf Vostell and Nam June Paik in Germany were pioneers of performance-based works of art. 
Groups like The Living Theatre with Julian Beck and Judith Malina collaborated with sculptors and painters creating environments, radically changing the relationship between audience and performer, especially in their piece "Paradise Now". The Judson Dance Theater, located at the Judson Memorial Church, New York; and the Judson dancers, notably Yvonne Rainer, Trisha Brown, Elaine Summers, Sally Gross, Simone Forti, Deborah Hay, Lucinda Childs, Steve Paxton and others; collaborated with artists Robert Morris, Robert Whitman, John Cage, Robert Rauschenberg, and engineers like Billy Klüver. Park Place Gallery was a center for musical performances by electronic composers Steve Reich, Philip Glass, and other notable performance artists including Joan Jonas. These performances were intended as works of a new art form combining sculpture, dance, and music or sound, often with audience participation. They were characterized by the reductive philosophies of minimalism and the spontaneous improvisation and expressivity of abstract expressionism. Images of Schneemann's performances of pieces meant to shock are occasionally used to illustrate these kinds of art, and she is often seen photographed while performing her piece "Interior Scroll". However, according to modernist philosophy surrounding performance art, it is at cross-purposes to publish images of her performing this piece, for performance artists reject publication entirely: the performance itself is the medium. Thus, other media cannot illustrate performance art; performance is momentary, evanescent, and personal, not for capturing; representations of performance art in other media, whether by image, video, narrative or otherwise, select certain points of view in space or time or otherwise involve the inherent limitations of each medium. The artists deny that recordings illustrate the medium of performance as art. 
During the same period, various avant-garde artists created Happenings, mysterious and often spontaneous and unscripted gatherings of artists and their friends and relatives in various specified locations, often incorporating exercises in absurdity, physicality, costuming, spontaneous nudity, and various random or seemingly disconnected acts. Notable creators of happenings included Allan Kaprow (who first used the term in 1958), Claes Oldenburg, Jim Dine, Red Grooms, and Robert Whitman. Another trend in art which has been associated with the term postmodern is the use of a number of different media together. Intermedia is a term coined by Dick Higgins and meant to convey new art forms along the lines of Fluxus, concrete poetry, found objects, performance art, and computer art. Higgins was the publisher of the Something Else Press, a concrete poet married to artist Alison Knowles and an admirer of Marcel Duchamp. Ihab Hassan includes "Intermedia, the fusion of forms, the confusion of realms," in his list of the characteristics of postmodern art. One of the most common forms of "multi-media art" is the use of video-tape and CRT monitors, termed video art. While the theory of combining multiple arts into one art is quite old, and has been revived periodically, the postmodern manifestation is often in combination with performance art, where the dramatic subtext is removed, and what is left is the specific statements of the artist in question or the conceptual statement of their action. Fluxus was named and loosely organized in 1962 by George Maciunas (1931–1978), a Lithuanian-born American artist. Fluxus traces its beginnings to John Cage's 1957 to 1959 Experimental Composition classes at the New School for Social Research in New York City. Many of his students were artists working in other media with little or no background in music. Cage's students included Fluxus founding members Jackson Mac Low, Al Hansen, George Brecht and Dick Higgins. 
Fluxus encouraged a do-it-yourself aesthetic and valued simplicity over complexity. Like Dada before it, Fluxus included a strong current of anti-commercialism and an anti-art sensibility, disparaging the conventional market-driven art world in favor of an artist-centered creative practice. Fluxus artists preferred to work with whatever materials were at hand, and either created their own work or collaborated in the creation process with their colleagues. Andreas Huyssen criticises attempts to claim Fluxus for Postmodernism as "either the master-code of postmodernism or the ultimately unrepresentable art movement—as it were, postmodernism's sublime." Instead he sees Fluxus as a major Neo-Dadaist phenomenon within the avant-garde tradition. It did not represent a major advance in the development of artistic strategies, though it did express a rebellion against "the administered culture of the 1950s, in which a moderate, domesticated modernism served as ideological prop to the Cold War." The continuation of abstract expressionism, color field painting, lyrical abstraction, geometric abstraction, minimalism, abstract illusionism, process art, pop art, postminimalism, and other late 20th-century Modernist movements in both painting and sculpture continued through the first decade of the 21st century and constitute radical new directions in those mediums. 
At the turn of the 21st century, well-established artists such as Sir Anthony Caro, Lucian Freud, Cy Twombly, Robert Rauschenberg, Jasper Johns, Agnes Martin, Al Held, Ellsworth Kelly, Helen Frankenthaler, Frank Stella, Kenneth Noland, Jules Olitski, Claes Oldenburg, Jim Dine, James Rosenquist, Alex Katz, Philip Pearlstein, and younger artists including Brice Marden, Chuck Close, Sam Gilliam, Isaac Witkin, Sean Scully, Mahirwan Mamtani, Joseph Nechvatal, Elizabeth Murray, Larry Poons, Richard Serra, Walter Darby Bannard, Larry Zox, Ronnie Landfield, Ronald Davis, Dan Christensen, Joel Shapiro, Tom Otterness, Joan Snyder, Ross Bleckner, Archie Rand, Susan Crile, and others continued to produce vital and influential paintings and sculpture. Peter Kalliney suggests that "Modernist concepts, especially aesthetic autonomy, were fundamental to the literature of decolonization in anglophone Africa." In his opinion, Rajat Neogy, Christopher Okigbo, and Wole Soyinka were among the writers who "repurposed modernist versions of aesthetic autonomy to declare their freedom from colonial bondage, from systems of racial discrimination, and even from the new postcolonial state". The terms "modernism" and "modernist", according to scholar William J. Tyler, "have only recently become part of the standard discourse in English on modern Japanese literature and doubts concerning their authenticity vis-a-vis Western European modernism remain". Tyler finds this odd, given "the decidedly modern prose" of such "well-known Japanese writers as Kawabata Yasunari, Nagai Kafu, and Jun'ichirō Tanizaki". However, "scholars in the visual and fine arts, architecture, and poetry readily embraced "modanizumu" as a key concept for describing and analyzing Japanese culture in the 1920s and 1930s". In 1924, various young Japanese writers, including Yasunari Kawabata and Riichi Yokomitsu started a literary journal "Bungei Jidai" ("The Artistic Age"). 
This journal was "part of an 'art for art's sake' movement, influenced by European Cubism, Expressionism, Dada, and other modernist styles". Japanese modernist architect Kenzō Tange (1913–2005) was one of the most significant architects of the 20th century, combining traditional Japanese styles with modernism, and designing major buildings on five continents. Tange was also an influential patron of the Metabolist movement. He said: "It was, I believe, around 1959 or at the beginning of the sixties that I began to think about what I was later to call structuralism", He was influenced from an early age by the Swiss modernist, Le Corbusier, Tange gained international recognition in 1949 when he won the competition for the design of Hiroshima Peace Memorial Park. In China the "New Sensationists" (新感觉派, Xīn Gǎnjué Pài) were a group of writers based in Shanghai who in the 1930s and 1940s were influenced, to varying degrees, by Western and Japanese modernism. They wrote fiction that was more concerned with the unconscious and with aesthetics than with politics or social problems. Among these writers were Mu Shiying, Liu Na'ou, and Shi Zhecun. In India, the Progressive Artists' Group was a group of modern artists, mainly based in Mumbai, India formed in 1947. Though it lacked any particular style, it synthesised Indian art with European and North America influences from the first half of the 20th Century, including Post-Impressionism, Cubism and Expressionism. By the early 1980s the Postmodern movement in art and architecture began to establish its position through various conceptual and intermedia formats. Postmodernism in music and literature began to take hold earlier. In music, postmodernism is described in one reference work as a "term introduced in the 1970s", while in British literature, "The Oxford Encyclopedia of British Literature" sees modernism "ceding its predominance to postmodernism" as early as 1939. 
However, dates are highly debatable, especially as according to Andreas Huyssen: "one critic's postmodernism is another critic's modernism." This includes those who are critical of the division between the two and see them as two aspects of the same movement, and believe that late Modernism continues. Modernism is an encompassing label for a wide variety of cultural movements. Postmodernism is essentially a centralized movement that named itself, based on sociopolitical theory, although the term is now used in a wider sense to refer to activities from the 20th century onwards which exhibit awareness of and reinterpret the modern. Postmodern theory asserts that the attempt to canonise Modernism "after the fact" is doomed to undisambiguable contradictions. In a narrower sense, what was Modernist was not necessarily also postmodern. Those elements of Modernism which accentuated the benefits of rationality and socio-technological progress were only Modernist. Modernism's stress on freedom of expression, experimentation, radicalism, and primitivism disregards conventional expectations. In many art forms this often meant startling and alienating audiences with bizarre and unpredictable effects, as in the strange and disturbing combinations of motifs in Surrealism or the use of extreme dissonance and atonality in Modernist music. In literature this often involved the rejection of intelligible plots or characterization in novels, or the creation of poetry that defied clear interpretation. From 1932, socialist realism began to oust Modernism in the Soviet Union; it had previously endorsed Futurism and Constructivism. The Nazi government of Germany deemed modernism narcissistic and nonsensical, as well as "Jewish" (see Antisemitism) and "Negro". The Nazis exhibited Modernist paintings alongside works by the mentally ill in an exhibition entitled "Degenerate Art". Accusations of "formalism" could lead to the end of a career, or worse. 
For this reason many modernists of the postwar generation felt that they were the most important bulwark against totalitarianism, the "canary in the coal mine", whose repression by a government or other group with supposed authority represented a warning that individual liberties were being threatened. Louis A. Sass compared madness, specifically schizophrenia, and modernism in a less fascist manner by noting their shared disjunctive narratives, surreal images, and incoherence. In fact, modernism flourished mainly in consumer/capitalist societies, despite the fact that its proponents often rejected consumerism itself. However, high modernism began to merge with consumer culture after World War II, especially during the 1960s. In Britain, a youth subculture emerged calling itself "Modernist" (usually shortened to Mod), following such representative music groups as the Who and the Kinks. The likes of Bob Dylan, Serge Gainsbourg and the Rolling Stones combined popular musical traditions with Modernist verse, adopting literary devices derived from James Joyce, Samuel Beckett, James Thurber, T. S. Eliot, Guillaume Apollinaire, Allen Ginsberg, and others. The Beatles developed along similar lines, creating various Modernist musical effects on several albums, while musicians such as Frank Zappa, Syd Barrett and Captain Beefheart proved even more experimental. Modernist devices also started to appear in popular cinema, and later on in music videos. Modernist design also began to enter the mainstream of popular culture, as simplified and stylized forms became popular, often associated with dreams of a space age high-tech future. This merging of consumer and high versions of Modernist culture led to a radical transformation of the meaning of "Modernism". First, it implied that a movement based on the rejection of tradition had become a tradition of its own. 
Second, it demonstrated that the distinction between elite Modernist and mass consumerist culture had lost its precision. Some writers declared that modernism had become so institutionalized that it was now "post avant-garde", indicating that it had lost its power as a revolutionary movement. Many have interpreted this transformation as the beginning of the phase that became known as postmodernism. For others, such as art critic Robert Hughes, postmodernism represents an extension of modernism. "Anti-modern" or "counter-modern" movements seek to emphasize holism, connection and spirituality as remedies or antidotes to modernism. Such movements see modernism as reductionist, and therefore subject to an inability to see systemic and emergent effects. Many modernists came to this viewpoint, for example Paul Hindemith in his late turn towards mysticism. Writers such as Paul H. Ray and Sherry Ruth Anderson, in "" (2000), Fredrick Turner in "A Culture of Hope" and Lester Brown in "Plan B", have articulated a critique of the basic idea of modernism itself—that individual creative expression should conform to the realities of technology. Instead, they argue, individual creativity should make everyday life more emotionally acceptable. Some traditionalist artists like Alexander Stoddart reject modernism generally as the product of "an epoch of false money allied with false culture". In some fields, the effects of modernism have remained stronger and more persistent than in others. Visual art has made the most complete break with its past. Most major capital cities have museums devoted to modern art as distinct from post-Renaissance art (c. 1400 to c. 1900). Examples include the Museum of Modern Art in New York, the Tate Modern in London, and the Centre Pompidou in Paris. These galleries make no distinction between modernist and Postmodernist phases, seeing both as developments within Modern Art.
https://en.wikipedia.org/wiki?curid=19547
Marshall McLuhan Herbert Marshall McLuhan (July 21, 1911 – December 31, 1980) was a Canadian philosopher, whose work is among the cornerstones of the study of media theory. Born in Edmonton, Alberta, McLuhan studied at the University of Manitoba and the University of Cambridge. He began his teaching career as a professor of English at several universities in the US and Canada before moving to the University of Toronto in 1946, where he remained for the rest of his life. McLuhan coined the expression "the medium is the message" and the term "global village", as well as predicting the World Wide Web almost 30 years before it was invented. He was a fixture in media discourse in the late 1960s, though his influence began to wane in the early 1970s. In the years following his death, he continued to be a controversial figure in academic circles. However, with the arrival of the Internet and the World Wide Web, interest would be renewed in his work and perspective. McLuhan was born on 21 July 1911 in Edmonton, Alberta, and was named "Marshall" after his maternal grandmother's surname. His brother, Maurice, was born two years later. His parents were both also born in Canada: his mother, Elsie Naomi (née Hall), was a Baptist school teacher who later became an actress; and his father, Herbert Ernest McLuhan, was a Methodist with a real-estate business in Edmonton. When the business failed at the outbreak of World War I, McLuhan's father enlisted in the Canadian Army. After a year of service, he contracted influenza and remained in Canada, away from the front lines. After Herbert's discharge from the army in 1915, the McLuhan family moved to Winnipeg, Manitoba, where Marshall grew up and went to school, attending Kelvin Technical School before enrolling in the University of Manitoba in 1928. After studying for one year as an engineering student, he changed majors and earned a Bachelor of Arts degree (1933), winning a University Gold Medal in Arts and Sciences. 
He went on to receive a Master of Arts degree (1934) in English from the University as well. He had long desired to pursue graduate studies in England and was accepted to the University of Cambridge, having failed to secure a Rhodes scholarship to Oxford. Though having already earned his B.A. and M.A. in Manitoba, Cambridge required him to enrol as an undergraduate "affiliated" student, with one year's credit towards a three-year bachelor's degree, before entering any doctoral studies. He entered Trinity Hall, Cambridge, in the autumn of 1934, where he studied under I. A. Richards and F. R. Leavis, and was influenced by New Criticism. Years afterward, upon reflection, he credited the faculty there with influencing the direction of his later work because of their emphasis on the "training of perception", as well as such concepts as Richards' notion of "feedforward". These studies formed an important precursor to his later ideas on technological forms. He received the required bachelor's degree from Cambridge in 1936 and entered their graduate program. At the University of Manitoba, McLuhan explored his conflicted relationship with religion and turned to literature to "gratify his soul's hunger for truth and beauty," later referring to this stage as agnosticism. While studying the trivium at Cambridge, he took the first steps toward his eventual conversion to Catholicism in 1937, founded on his reading of G. K. Chesterton. In 1935, he wrote to his mother: Had I not encountered Chesterton I would have remained agnostic for many years at least. Chesterton did not convince me of religious faith, but he prevented my despair from becoming a habit or hardening into misanthropy. He opened my eyes to European culture and encouraged me to know it more closely. 
He taught me the reasons for all that in me was simply blind anger and misery. At the end of March 1937, McLuhan completed what was a slow but total conversion process, when he was formally received into the Catholic Church. After consulting a minister, his father accepted the decision to convert. His mother, however, felt that his conversion would hurt his career and was inconsolable. McLuhan was devout throughout his life, but his religion remained a private matter. He had a lifelong interest in the number three (e.g., the trivium, the Trinity) and sometimes said that the Virgin Mary provided intellectual guidance for him. For the rest of his career, he taught in Catholic institutions of higher education. Unable to find a suitable job in Canada, he returned from England to take a job as a teaching assistant at the University of Wisconsin–Madison for the 1936–37 academic year. From 1937 to 1944, he taught English at Saint Louis University (with an interruption from 1939–40 when he returned to Cambridge). There he taught courses on Shakespeare, eventually tutoring and befriending Walter J. Ong, who would write his doctoral dissertation on a topic that McLuhan had called to his attention, as well as become a well-known authority on communication and technology. McLuhan met Corinne Lewis in St. Louis, a teacher and aspiring actress from Fort Worth, Texas, whom he married on 4 August 1939. They spent 1939–40 in Cambridge, where he completed his master's degree (awarded in January 1940) and began to work on his doctoral dissertation on Thomas Nashe and the verbal arts. While the McLuhans were in England, World War II had broken out in Europe. For this reason, he obtained permission to complete and submit his dissertation from the United States, without having to return to Cambridge for an oral defence. In 1940, the McLuhans returned to Saint Louis University, where they started a family as he continued teaching. 
He was awarded a Doctor of Philosophy degree in December 1943. He next taught at Assumption College in Windsor, Ontario, from 1944 to 1946, then moved to Toronto in 1946 to join the faculty of St. Michael's College, a Catholic college of the University of Toronto, where Hugh Kenner would be one of his students. Canadian economist and communications scholar Harold Innis was a university colleague who had a strong influence on his work. McLuhan wrote in 1964: "I am pleased to think of my own book "The Gutenberg Galaxy" as a footnote to the observations of Innis on the subject of the psychic and social consequences, first of writing then of printing." In the early 1950s, McLuhan began the Communication and Culture seminars at the University of Toronto, funded by the Ford Foundation. As his reputation grew, he received a growing number of offers from other universities. During this period, he published his first major work, "The Mechanical Bride" (1951), in which he examines the effect of advertising on society and culture. Throughout the 1950s, he and Edmund Carpenter also produced an important academic journal called "Explorations". McLuhan and Carpenter have been characterized as the Toronto School of communication theory, together with Harold Innis, Eric A. Havelock, and Northrop Frye. During this time, McLuhan supervised the doctoral thesis of modernist writer Sheila Watson on the subject of Wyndham Lewis. Hoping to keep him from moving to another institution, the University of Toronto created the Centre for Culture and Technology (CCT) in 1963. He remained at the University through 1979, spending much of this time as head of his CCT program. From 1967 to 1968, McLuhan was named the Albert Schweitzer Chair in Humanities at Fordham University in the Bronx. While at Fordham, he was diagnosed with a benign brain tumour, which was treated successfully.
He returned to Toronto where he taught at the University of Toronto for the rest of his life and lived in Wychwood Park, a bucolic enclave on a hill overlooking the downtown where Anatol Rapoport was his neighbour. In 1970, he was made a Companion of the Order of Canada. In 1975, the University of Dallas hosted him from April to May, appointing him to the McDermott Chair. Marshall and Corinne McLuhan had six children: Eric, twins Mary and Teresa, Stephanie, Elizabeth, and Michael. The associated costs of a large family eventually drove him to advertising work and accepting frequent consulting and speaking engagements for large corporations, including IBM and AT&T. Woody Allen's Oscar-winning "Annie Hall" (1977) featured McLuhan in a cameo as himself. In the film, a pompous academic is arguing with Allen in a cinema queue when McLuhan suddenly appears and silences him, saying, "You know nothing of my work." This was one of McLuhan's most frequent statements to and about those who disagreed with him. In September 1979, McLuhan suffered a stroke which affected his ability to speak. The University of Toronto's School of Graduate Studies tried to close his research centre shortly thereafter, but was deterred by substantial protests, most notably by Woody Allen. McLuhan never fully recovered from the stroke and died in his sleep on 31 December 1980. He is buried at Holy Cross Cemetery in Thornhill, Ontario, Canada. During his years at Saint Louis University (1937–1944), McLuhan worked concurrently on two projects: his doctoral dissertation and the manuscript that was eventually published in 1951 as a book, titled "The Mechanical Bride: Folklore of Industrial Man", which included only a representative selection of the materials that McLuhan had prepared for it. 
McLuhan's 1942 Cambridge University doctoral dissertation surveys the history of the verbal arts (grammar, logic, and rhetoric—collectively known as the trivium) from the time of Cicero down to the time of Thomas Nashe. In his later publications, McLuhan at times uses the Latin concept of the "trivium" to outline an orderly and systematic picture of certain periods in the history of Western culture. McLuhan suggests that the Late Middle Ages, for instance, were characterized by the heavy emphasis on the formal study of logic. The key development that led to the Renaissance was not the rediscovery of ancient texts, but a shift in emphasis from the formal study of logic to rhetoric and grammar. Modern life is characterized by the re-emergence of grammar as its most salient feature—a trend McLuhan felt was exemplified by the New Criticism of Richards and Leavis. McLuhan also began the academic journal "Explorations" with anthropologist Edmund "Ted" Carpenter. In a letter to Walter Ong, dated 31 May 1953, McLuhan reports that he had received a two-year grant of $43,000 from the Ford Foundation to carry out a communication project at the University of Toronto involving faculty from different disciplines, which led to the creation of the journal. At a Fordham lecture in 1999, Tom Wolfe suggested that a major under-acknowledged influence on McLuhan's work is the Jesuit philosopher Pierre Teilhard de Chardin, whose ideas anticipated those of McLuhan, especially the evolution of the human mind into the "noosphere." In fact, McLuhan warns against outright dismissing or whole-heartedly accepting de Chardin's observations early on in his second published book, "The Gutenberg Galaxy". In his private life, McLuhan wrote to friends saying: "I am not a fan of Pierre Teilhard de Chardin. The idea that anything is better because it comes later is surely borrowed from pre-electronic technologies."
Further, McLuhan noted to a Catholic collaborator: "The idea of a Cosmic thrust in one direction ... is surely one of the lamest semantic fallacies ever bred by the word 'evolution'.… That development should have any direction at all is inconceivable except to the highly literate community." McLuhan's first book, The Mechanical Bride: Folklore of Industrial Man (1951), is a pioneering study in the field now known as popular culture. In the book, McLuhan turns his attention to analysing and commenting on numerous examples of persuasion in contemporary popular culture. This followed naturally from his earlier work, as both dialectic and rhetoric in the classical trivium aim at persuasion. At this point, his focus shifted dramatically, turning inward to study the influence of communication media independent of their content. His famous aphorism "the medium is the message" (elaborated in his "Understanding Media: The Extensions of Man", 1964) calls attention to this intrinsic effect of communications media. His interest in the critical study of popular culture was influenced by the 1933 book "Culture and Environment" by F. R. Leavis and Denys Thompson, and the title "The Mechanical Bride" is derived from a piece by the Dadaist artist Marcel Duchamp. Like his later "Gutenberg Galaxy" (1962), "The Mechanical Bride" is composed of a number of short essays that may be read in any order—what he styled the "mosaic approach" to writing a book. Each essay begins with a newspaper or magazine article, or an advertisement, followed by McLuhan's analysis thereof. The analyses bear on aesthetic considerations as well as on the implications behind the imagery and text. McLuhan chose these ads and articles not only to draw attention to their symbolism and their implications for the corporate entities who created and disseminated them, but also to mull over what such advertising implies about the wider society at which it is aimed.
Written in 1961 and first published by University of Toronto Press, The Gutenberg Galaxy: The Making of Typographic Man (1962) is a pioneering study in the fields of oral culture, print culture, cultural studies, and media ecology. Throughout the book, McLuhan strives to reveal how communication technology (i.e., alphabetic writing, the printing press, and the electronic media) affects cognitive organization, which in turn has profound ramifications for social organization:

[I]f a new technology extends one or more of our senses outside us into the social world, then new ratios among all of our senses will occur in that particular culture. It is comparable to what happens when a new note is added to a melody. And when the sense ratios alter in any culture then what had appeared lucid before may suddenly become opaque, and what had been vague or opaque will become translucent.

McLuhan's episodic history takes the reader from pre-alphabetic, tribal humankind to the electronic age. According to McLuhan, the invention of movable type greatly accelerated, intensified, and ultimately enabled cultural and cognitive changes that had already been taking place since the invention and implementation of the alphabet, by which McLuhan means phonemic orthography. (McLuhan is careful to distinguish the phonetic alphabet from logographic or logogramic writing systems, such as Egyptian hieroglyphs or ideograms.) Print culture, ushered in by the advance in printing during the middle of the 15th century when the Gutenberg press was invented, brought about the cultural predominance of the visual over the aural/oral.
Quoting (with approval) an observation on the nature of the printed word from William Ivins' "Prints and Visual Communication", McLuhan remarks:

In this passage [Ivins] not only notes the ingraining of lineal, sequential habits, but, even more important, points out the visual homogenizing of experience of print culture, and the relegation of auditory and other sensuous complexity to the background.…The technology and social effects of typography incline us to abstain from noting interplay and, as it were, "formal" causality, both in our inner and external lives. Print exists by virtue of the static separation of functions and fosters a mentality that gradually resists any but a separative and compartmentalizing or specialist outlook.

The main concept of McLuhan's argument (later elaborated upon in "The Medium Is the Massage") is that new technologies (such as alphabets, printing presses, and even speech) exert a gravitational effect on cognition, which in turn affects social organization: print technology changes our perceptual habits ("visual homogenizing of experience"), which in turn affects social interactions ("fosters a mentality that gradually resists all but a…specialist outlook"). According to McLuhan, this advance of print technology contributed to and made possible most of the salient trends in the modern period in the Western world: individualism, democracy, Protestantism, capitalism, and nationalism. For McLuhan, these trends all reverberate with print technology's principle of "segmentation of actions and functions and principle of visual quantification." In the early 1960s, McLuhan wrote that the visual, individualistic print culture would soon be brought to an end by what he called "electronic interdependence": when electronic media replace visual culture with aural/oral culture. In this new age, humankind will move from individualism and fragmentation to a collective identity, with a "tribal base."
McLuhan's coinage for this new social organization is the "global village". The term is sometimes described as having negative connotations in "The Gutenberg Galaxy", but McLuhan was interested in exploring effects, not making value judgments: Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence.… Terror is the normal state of any oral society, for in it everything affects everything all the time.…In our long striving to recover for the Western world a unity of sensibility and of thought and feeling we have no more been prepared to accept the tribal consequences of such unity than we were ready for the fragmentation of the human psyche by print culture. Key to McLuhan's argument is the idea that technology has no "per se" moral bent—it is a tool that profoundly shapes an individual's and, by extension, a society's self-conception and realization: Is it not obvious that there are always enough moral problems without also taking a moral stand on technological grounds?…Print is the extreme phase of alphabet culture that detribalizes or decollectivizes man in the first instance. Print raises the visual features of alphabet to highest intensity of definition. Thus print carries the individuating power of the phonetic alphabet much further than manuscript culture could ever do. Print is the technology of individualism. If men decided to modify this visual technology by an electric technology, individualism would also be modified. To raise a moral complaint about this is like cussing a buzz-saw for lopping off fingers. "But", someone says, "we didn't know it would happen." Yet even witlessness is not a moral issue. 
It is a problem, but not a moral problem; and it would be nice to clear away some of the moral fogs that surround our technologies. It would be good for morality. The moral valence of technology's effects on cognition is, for McLuhan, a matter of perspective. For instance, McLuhan contrasts the considerable alarm and revulsion that the growing quantity of books aroused in the latter 17th century with the modern concern for the "end of the book." If there can be no universal moral sentence passed on technology, McLuhan believes that "there can only be disaster arising from unawareness of the causalities and effects inherent in our technologies". Though the World Wide Web was invented almost 30 years after "The Gutenberg Galaxy", and 10 years after his death, McLuhan prophesied the web technology seen today as early as 1962: The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual's encyclopedic function and flip into a private line to speedily tailored data of a saleable kind. Furthermore, McLuhan coined and certainly popularized the usage of the term "surfing" to refer to rapid, irregular, and multidirectional movement through a heterogeneous body of documents or knowledge, e.g., statements such as "Heidegger surf-boards along on the electronic wave as triumphantly as Descartes rode the mechanical wave." Paul Levinson's 1999 book "Digital McLuhan" explores the ways that McLuhan's work may be understood better through using the lens of the digital revolution. McLuhan frequently quoted Walter Ong's "Ramus, Method, and the Decay of Dialogue" (1958), which evidently had prompted McLuhan to write "The Gutenberg Galaxy". Ong wrote a highly favorable review of this new book in "America". 
However, Ong later tempered his praise by describing McLuhan's "The Gutenberg Galaxy" as "a racy survey, indifferent to some scholarly detail, but uniquely valuable in suggesting the sweep and depth of the cultural and psychological changes entailed in the passage from illiteracy to print and beyond." McLuhan himself said of the book, "I'm not concerned to get any kudos out of ["The Gutenberg Galaxy"]. It seems to me a book that somebody should have written a century ago. I wish somebody else had written it. It will be a useful prelude to the rewrite of "Understanding Media" [the 1960 NAEB report] that I'm doing now." McLuhan's "The Gutenberg Galaxy" won Canada's highest literary award, the Governor-General's Award for Non-Fiction, in 1962. The chairman of the selection committee was McLuhan's colleague at the University of Toronto and frequent intellectual sparring partner, Northrop Frye. McLuhan's most widely known work, Understanding Media: The Extensions of Man (1964), is a seminal study in media theory. Dismayed by the way in which people approach and use new media such as television, McLuhan famously argues that in the modern world "we live mythically and integrally…but continue to think in the old, fragmented space and time patterns of the pre-electric age." McLuhan proposed that media themselves, not the content they carry, should be the focus of study—popularly quoted as "the medium is the message." McLuhan's insight was that a medium affects the society in which it plays a role not by the content delivered over the medium, but by the characteristics of the medium itself. McLuhan pointed to the light bulb as a clear demonstration of this concept. A light bulb does not have content in the way that a newspaper has articles or a television has programs, yet it is a medium that has a social effect; that is, a light bulb enables people to create spaces during nighttime that would otherwise be enveloped by darkness.
He describes the light bulb as a medium without any content. McLuhan states that "a light bulb creates an environment by its mere presence." More controversially, he postulated that content had little effect on society—in other words, it did not matter if television broadcasts children's shows or violent programming, to illustrate one example—the effect of television on society would be identical. He noted that all media have characteristics that engage the viewer in different ways; for instance, a passage in a book could be reread at will, but a movie had to be screened again in its entirety to study any individual part of it. In the first part of "Understanding Media", McLuhan states that different media invite different degrees of participation on the part of a person who chooses to consume a medium. A "cool medium" incorporates increased involvement but decreased description, while a "hot medium" is the opposite, decreasing involvement and increasing description. In other words, a society that appears to be actively participating in the streaming of content but not considering the effects of the tool is not allowing an "extension of ourselves." A movie is thus said to be "high definition," demanding a viewer's attention, while a comic book is said to be "low definition," requiring much more conscious participation by the reader to extract value: "Any hot medium allows of less participation than a cool one, as a lecture makes for less participation than a seminar, and a book for less than a dialogue." Some media, such as movies, are "hot"—that is, they enhance one single sense, in this case vision, in such a manner that a person does not need to exert much effort in filling in the details of a movie image. Hot media usually, but not always, provide complete involvement without considerable stimulus. For example, print occupies visual space, uses visual senses, but can immerse its reader.
Hot media favour analytical precision, quantitative analysis and sequential ordering, as they are usually sequential, linear, and logical. They emphasize one sense (for example, of sight or sound) over the others. For this reason, along with film, hot media also include radio, the lecture, and photography. McLuhan contrasts "hot" media with "cool"—specifically, television, which he claims requires more effort on the part of the viewer to determine meaning; and comics, which, due to their minimal presentation of visual detail, require a high degree of effort to fill in details that the cartoonist may have intended to portray. Cool media are usually, but not always, those that provide little involvement with substantial stimulus. They require more active participation on the part of the user, including the perception of abstract patterning and simultaneous comprehension of all parts. Therefore, in addition to television, cool media include the seminar and cartoons. McLuhan describes the term "cool media" as emerging from jazz and popular music, where, in this context, it is used to mean "detached." This concept appears to force media into binary categories. However, McLuhan's hot and cool exist on a continuum: they are more correctly measured on a scale than as dichotomous terms. Some theorists have attacked McLuhan's definition and treatment of the word "medium" for being too simplistic. Umberto Eco, for instance, contends that McLuhan's medium conflates channels, codes, and messages under the overarching term of the medium, confusing the vehicle, internal code, and content of a given message in his framework. In "Media Manifestos", Régis Debray also takes issue with McLuhan's envisioning of the medium.
Like Eco, he is ill at ease with this reductionist approach. When "Wired" magazine interviewed him in 1995, Debray stated that he views McLuhan "more as a poet than a historian, a master of intellectual collage rather than a systematic analyst.… McLuhan overemphasizes the technology behind cultural change at the expense of the usage that the messages and codes make of that technology." Dwight Macdonald, in turn, reproached McLuhan for his focus on television and for his "aphoristic" style of prose, which he believes leaves "Understanding Media" filled with "contradictions, non-sequiturs, facts that are distorted and facts that are not facts, exaggerations, and chronic rhetorical vagueness." Additionally, Brian Winston's "Misunderstanding Media", published in 1986, chides McLuhan for what he sees as his technologically deterministic stances. Raymond Williams and James W. Carey further this point of contention, claiming:

The work of McLuhan was a particular culmination of an aesthetic theory which became, negatively, a social theory ... It is an apparently sophisticated technological determinism which has the significant effect of indicating a social and cultural determinism.… If the medium – whether print or television – is the cause, of all other causes, all that men ordinarily see as history is at once reduced to effects. (Williams 1990, 126/7)

David Carr states that there has been a long line of "academics who have made a career out of deconstructing McLuhan's effort to define the modern media ecosystem", whether it be due to what they see as McLuhan's ignorance toward sociohistorical context or the style of his argument. While some critics have taken issue with McLuhan's writing style and mode of argument, McLuhan himself urged readers to think of his work as "probes" or "mosaics" offering a toolkit approach to thinking about the media.
His eclectic writing style has also been praised for its postmodern sensibilities and suitability for virtual space. The Medium Is the Massage: An Inventory of Effects, published in 1967, was McLuhan's best seller, "eventually selling nearly a million copies worldwide." Initiated by Quentin Fiore, McLuhan adopted the term "massage" to denote the effect each medium has on the human sensorium, taking inventory of the "effects" of numerous media in terms of how they "massage" the sensorium. Fiore, at the time a prominent graphic designer and communications consultant, set about composing the visual illustration of these effects which were compiled by Jerome Agel. Near the beginning of the book, Fiore adopted a pattern in which an image demonstrating a media effect was presented with a textual synopsis on the facing page. The reader experiences a repeated shifting of analytic registers—from "reading" typographic print to "scanning" photographic facsimiles—reinforcing McLuhan's overarching argument in this book: namely, that each medium produces a different "massage" or "effect" on the human sensorium. In "The Medium Is the Massage", McLuhan also rehashed the argument—which first appeared in the Prologue to 1962's "The Gutenberg Galaxy"—that all media are "extensions" of our human senses, bodies and minds. Finally, McLuhan described key points of change in how man has viewed the world and how these views were changed by the adoption of new media. "The technique of invention was the discovery of the nineteenth [century]", brought on by the adoption of fixed points of view and perspective by typography, while "[t]he technique of the suspended judgment is the discovery of the twentieth century," brought on by the bardic abilities of radio, movies and television.

The past went that-a-way. When faced with a totally new situation we tend always to attach ourselves to the objects, to the flavor of the most recent past. We look at the present through a rear-view mirror.
We march backward into the future. Suburbia lives imaginatively in Bonanza-land.

An audio recording version of McLuhan's famous work was made by Columbia Records. The recording consists of a pastiche of statements made by McLuhan "interrupted" by other speakers, including people speaking in various phonations and falsettos, discordant sounds and 1960s incidental music in what could be considered a deliberate attempt to translate the disconnected images seen on TV into an audio format, resulting in the prevention of a connected stream of conscious thought. Various audio recording techniques and statements are used to illustrate the relationship between spoken, literary speech and the characteristics of electronic audio media. McLuhan biographer Philip Marchand called the recording "the 1967 equivalent of a McLuhan video."

"I wouldn't be seen dead with a living work of art."—'Old man' speaking
"Drop this jiggery-pokery and talk straight turkey."—'Middle aged man' speaking

In "War and Peace in the Global Village" (1968), McLuhan used James Joyce's "Finnegans Wake", an inspiration for this study of war throughout history, as an indicator as to how war may be conducted in the future. Joyce's "Wake" is claimed to be a gigantic cryptogram which reveals a cyclic pattern for the whole history of man through its Ten Thunders. Each "thunder" is a 100-character portmanteau of other words to create a statement he likens to an effect that each technology has on the society into which it is introduced. In order to glean the most understanding out of each, the reader must break the portmanteau into separate words (and many of these are themselves portmanteaus of words taken from multiple languages other than English) and speak them aloud for the spoken effect of each word. There is much dispute over what each portmanteau truly denotes.
McLuhan claims that the ten thunders in "Wake" represent different stages in the history of man. Collaborating with Canadian poet Wilfred Watson in "From Cliché to Archetype" (1970), McLuhan approaches the various implications of the verbal cliché and of the archetype. One major facet in McLuhan's overall framework introduced in this book that is seldom noticed is the provision of a new term that actually succeeds the global village: the "global theater". In McLuhan's terms, a cliché is a "normal" action, phrase, etc. which becomes so often used that we are "anesthetized" to its effects. McLuhan provides the example of Eugène Ionesco's play "The Bald Soprano", whose dialogue consists entirely of phrases Ionesco pulled from an Assimil language book: "Ionesco originally put all these idiomatic English clichés into literary French which presented the English in the most absurd aspect possible." McLuhan's archetype "is a quoted extension, medium, technology, or environment." "Environment" would also include the kinds of "awareness" and cognitive shifts brought upon people by it, not totally unlike the psychological context Carl Jung described. McLuhan also posits that there is a factor of interplay between the "cliché" and the "archetype", or a "doubleness":

Another theme of the Wake ["Finnegans Wake"] that helps in the understanding of the paradoxical shift from cliché to archetype is 'past time are pastimes.' The dominant technologies of one age become the games and pastimes of a later age. In the 20th century, the number of 'past times' that are simultaneously available is so vast as to create cultural anarchy. When all the cultures of the world are simultaneously present, the work of the artist in the elucidation of form takes on new scope and new urgency. Most men are pushed into the artist's role.
The artist cannot dispense with the principle of 'doubleness' or 'interplay' because this type of hendiadys dialogue is essential to the very structure of consciousness, awareness, and autonomy. McLuhan relates the cliché-to-archetype process to the Theater of the Absurd: Pascal, in the seventeenth century, tells us that the heart has many reasons of which the head knows nothing. The Theater of the Absurd is essentially a communicating to the head of some of the silent languages of the heart which in two or three hundred years it has tried to forget all about. In the seventeenth century world the languages of the heart were pushed down into the unconscious by the dominant print cliché. The "languages of the heart," or what McLuhan would otherwise define as oral culture, were thus made archetype by means of the printing press, and turned into cliché. The satellite medium, McLuhan states, encloses the Earth in a man-made environment, which "ends 'Nature' and turns the globe into a repertory theater to be programmed." All previous environments (book, newspaper, radio, etc.) and their artifacts are retrieved under these conditions ("past times are pastimes"). McLuhan thereby meshes this into the term "global theater". It serves as an update to his older concept of the global village, which, in its own definitions, can be said to be subsumed into the overall condition described by that of the global theater. In his posthumous book, The Global Village: Transformations in World Life and Media in the 21st Century (1989), McLuhan, collaborating with Bruce R. Powers, provides a strong conceptual framework for understanding the cultural implications of the technological advances associated with the rise of a worldwide electronic network. This is a major work of McLuhan's as it contains the most extensive elaboration of his concept of "acoustic space", and provides a critique of standard 20th-century communication models such as the Shannon–Weaver model. 
McLuhan distinguishes between the existing worldview of "visual space"—a linear, quantitative, classically geometric model—and that of acoustic space—a holistic, qualitative order with an intricate, paradoxical topology: "Acoustic Space has the basic character of a sphere whose focus or center is simultaneously everywhere and whose margin is nowhere." The transition from "visual" to "acoustic" "space" was not automatic with the advent of the global network, but would have to be a conscious project. The "universal environment of simultaneous electronic flow" inherently favors right-brain Acoustic Space, yet we are held back by habits of adhering to a fixed point of view. There are no boundaries to sound. We hear from all directions at once. Yet Acoustic and Visual Space are, in fact, inseparable. The resonant interval is the invisible borderline between Visual and Acoustic Space. This is like the television camera that the Apollo 8 astronauts focused on the Earth after they had orbited the moon. Reading, writing, and hierarchical ordering are associated with the left brain, as are the linear concept of time and phonetic literacy. The left brain is the locus of analysis, classification, and rationality. The right brain is the locus of the spatial, tactile, and musical. "Comprehensive awareness" results when the two sides of the brain are in true balance. Visual Space is associated with the simplified worldview of Euclidean geometry, the intuitive three dimensions useful for the architecture of buildings and the surveying of land. It is too rational and has no grasp of the acoustic. Acoustic Space is multisensory. McLuhan writes about robotism in the context of Japanese Zen Buddhism and how it can offer us new ways of thinking about technology. The Western way of thinking about technology is too much related to the left hemisphere of our brain, which has a rational and linear focus.
What he called robotism might better be called androidism in the wake of "Blade Runner" and the novels of Philip K. Dick. Robotism-androidism emerges from the further development of the right hemisphere of the brain, creativity and a new relationship to spacetime (most humans are still living in 17th-century classical Newtonian physics spacetime). Robots-androids will have much greater flexibility than humans have had until now, in both mind and body. Robots-androids will teach humanity this new flexibility. And this flexibility of androids (what McLuhan calls robotism) has a strong affinity with Japanese culture and life. McLuhan quotes from Ruth Benedict, "The Chrysanthemum and the Sword", an anthropological study of Japanese culture published in 1946: "Occidentals cannot easily credit the ability of the Japanese to swing from one behavior to another without psychic cost. Such extreme possibilities are not included in our experience. Yet in Japanese life the contradictions, as they seem to us, are as deeply based in their view of life as our uniformities are in ours." This is the ability to live in the present and instantly readjust. "All Western scientific models of communication are—like the Shannon–Weaver model—linear, sequential, and logical as a reflection of the late medieval emphasis on the Greek notion of efficient causality." McLuhan and Powers criticize the Shannon–Weaver model of communication as emblematic of left-hemisphere bias and linearity, descended from a print-era perversion of Aristotle's notion of efficient causality. A third term of "The Global Village" that McLuhan and Powers develop at length is the Tetrad. McLuhan had begun development on the Tetrad as early as 1974. The tetrad is an analogical, simultaneous, four-fold pattern of transformation. "At full maturity the tetrad reveals the metaphoric structure of the artifact as having two figures and two grounds in dynamic and analogical relationship to each other." 
Like the camera focused on the Earth by the Apollo 8 astronauts, the tetrad reveals figure (Moon) and ground (Earth) simultaneously. Right-hemisphere thinking is the capability of being in many places at the same time. Electricity is acoustic. It is simultaneously everywhere. The Tetrad, with its fourfold Möbius topological structure of enhancement, reversal, retrieval and obsolescence, is mobilized by McLuhan and Powers to illuminate the media or technological inventions of cash money, the compass, the computer, the database, the satellite, and the global media network. In "Laws of Media" (1988), published posthumously by his son Eric, McLuhan summarized his ideas about media in a concise tetrad of media effects. The tetrad is a means of examining the effects on society of any technology (i.e., any medium) by dividing its effects into four categories and displaying them simultaneously. McLuhan designed the tetrad as a pedagogical tool, phrasing his laws as questions with which to consider any medium. The laws of the tetrad exist simultaneously, not successively or chronologically, and allow the questioner to explore the "grammar and syntax" of the "language" of media. McLuhan departs from his mentor Harold Innis in suggesting that a medium "overheats," or reverses into an opposing form, when taken to its extreme. Visually, a tetrad can be depicted as four diamonds forming an X, with the name of a medium in the centre. The two diamonds on the left of a tetrad are the "Enhancement" and "Retrieval" qualities of the medium, both "Figure" qualities. The two diamonds on the right of a tetrad are the "Obsolescence" and "Reversal" qualities, both "Ground" qualities. McLuhan illustrated these four qualities with the example of radio. He adapted the Gestalt psychology idea of a "figure and a ground", which underpins the meaning of "the medium is the message." 
He used this concept to explain how a form of communications technology, the medium, or "figure", necessarily operates through its context, or "ground". McLuhan believed that in order to grasp fully the effect of a new technology, one must examine figure (medium) and ground (context) together, since neither is completely intelligible without the other. McLuhan argued that we must study media in their historical context, particularly in relation to the technologies that preceded them. The present environment, itself made up of the effects of previous technologies, gives rise to new technologies, which, in their turn, further affect society and individuals. All technologies have embedded within them their own assumptions about time and space. The message which the medium conveys can only be understood if the medium and the environment in which the medium is used—and which, simultaneously, it effectively creates—are analysed together. He believed that an examination of the figure-ground relationship can offer a critical commentary on culture and society. In McLuhan's (and Harley Parker's) work, electric media have an affinity with haptic and hearing perception, while mechanical media have an affinity with visual perception. This opposition between optic and haptic had been previously formulated by the art historians Alois Riegl, in his 1901 "Late Roman Art Industry", and then Erwin Panofsky (in his "Perspective as Symbolic Form"). However, McLuhan's comments are more aware of the contingent cultural context in which, for instance, linear perspective arose, while Panofsky's are more teleological. Walter Benjamin, too, in "The Work of Art in the Age of Mechanical Reproduction" (1935), observed that in modern Western culture, from about the 19th century onward, a shift began from the optic towards the haptic. This shift is one of the main recurring topics in McLuhan's work, and one he attributes to the advent of the electronic era. 
After the publication of "Understanding Media", McLuhan received an astonishing amount of publicity, making him perhaps the most publicized English teacher in the twentieth century and arguably the most controversial. This publicity began with the work of two California advertising executives, Howard Gossage and Gerald Feigen, who used personal funds to fund their practice of "genius scouting". Much enamoured with McLuhan's work, Feigen and Gossage arranged for McLuhan to meet with editors of several major New York magazines in May 1965 at the Lombardy Hotel in New York. Philip Marchand reports that, as a direct consequence of these meetings, McLuhan was offered the use of an office in the headquarters of both "Time" and "Newsweek", any time he needed it. In August 1965, Feigen and Gossage held what they called a "McLuhan festival" in the offices of Gossage's advertising agency in San Francisco. During this "festival", McLuhan met with advertising executives, members of the mayor's office, and editors from the "San Francisco Chronicle" and "Ramparts" magazine. More significant was the presence at the festival of Tom Wolfe, who wrote about McLuhan in a subsequent article, "What If He Is Right?", published in "New York" magazine and Wolfe's own "The Pump House Gang". According to Feigen and Gossage, their work had only a moderate effect on McLuhan's eventual celebrity: they claimed that their work only "probably speeded up the recognition of [McLuhan's] genius by about six months." In any case, McLuhan soon became a fixture of media discourse. "Newsweek" magazine did a cover story on him; articles appeared in "Life", "Harper's", "Fortune", "Esquire", and others. Cartoons about him appeared in "The New Yorker". In 1969, "Playboy" magazine published a lengthy interview with him. In a running gag on the popular sketch comedy "Rowan & Martin's Laugh-In", the "poet" Henry Gibson would randomly say, "Marshall McLuhan, what are you doin'?" 
McLuhan was credited with coining the phrase "Turn on, tune in, drop out" by its popularizer, Timothy Leary, in the 1960s. In a 1988 interview with Neil Strauss, Leary stated that the slogan was "given to him" by McLuhan during a lunch in New York City. Leary said McLuhan "was very much interested in ideas and marketing, and he started singing something like, 'Psychedelics hit the spot / Five hundred micrograms, that's a lot,' to the tune of a Pepsi commercial. Then he started going, 'Tune in, turn on, and drop out.'" During his lifetime and afterward, McLuhan heavily influenced cultural critics, thinkers, and media theorists such as Neil Postman, Jean Baudrillard, Timothy Leary, Terence McKenna, William Irwin Thompson, Paul Levinson, Douglas Rushkoff, Jaron Lanier, Hugh Kenner, and John David Ebert, as well as political leaders such as Pierre Elliott Trudeau and Jerry Brown. Andy Warhol was paraphrasing McLuhan with his now famous "15 minutes of fame" quote. When asked in the 1970s for a way to quell violence in Angola, he suggested a massive spread of TV sets. Douglas Coupland argued that McLuhan "was conservative, socially, but he never let politics enter his writing or his teaching". The character Brian O'Blivion in David Cronenberg's 1983 film "Videodrome" is a "media oracle" based on McLuhan. In 1991, McLuhan was named as the "patron saint" of "Wired" magazine and a quote of his appeared on the masthead for the first ten years of its publication. He is mentioned by name in a Peter Gabriel–penned lyric in the song "Broadway Melody of 1974". This song appears on the concept album "The Lamb Lies Down on Broadway", from progressive rock band Genesis. The lyric is: "Marshall McLuhan, casual viewin' head buried in the sand." McLuhan is also jokingly referred to during an episode of "The Sopranos" entitled "House Arrest". Despite his death in 1980, someone claiming to be McLuhan was posting on a "Wired" mailing list in 1996. 
The information this individual provided convinced one writer for "Wired" that "if the poster was not McLuhan himself, it was a bot programmed with an eerie command of McLuhan's life and inimitable perspective." A new centre known as the McLuhan Program in Culture and Technology, formed soon after his death in 1980, was the successor to McLuhan's Centre for Culture and Technology at the University of Toronto. Since 1994, it has been part of the University of Toronto Faculty of Information, and in 2008 the McLuhan Program in Culture and Technology was incorporated into the Coach House Institute. The first director was the literacy scholar and OISE Professor David R. Olson. From 1983 until 2008, the McLuhan Program was under the direction of Derrick de Kerckhove, who was McLuhan's student and translator. From 2008 through 2015, Professor Dominique Scheffel-Dunand of York University served as Director of the Program. In 2011, at the time of his centenary, the Coach House Institute established a Marshall McLuhan Centenary Fellowship program in his honor, and each year appoints up to four fellows for a maximum of two years. In May 2016, the Coach House Institute was renamed the McLuhan Centre for Culture and Technology; its Interim Director was Seamus Ross (2015–16). Sarah Sharma, an Associate Professor of Media Theory from the Institute of Communication, Culture, Information and Technology (ICCIT) and the Faculty of Information (St. George), began a five-year term as director of the Coach House in 2017. Professor Sharma's research and teaching focus on feminist approaches to technology, including issues related to temporality and media. Her thematic for the 2017–2018 Monday Night Seminars at the McLuhan Centre is "MsUnderstanding Media", which extends and introduces feminist approaches to technology to McLuhan's formulations of technology and culture. In Toronto, Marshall McLuhan Catholic Secondary School is named after him. 
This is a partial list of works cited in this article.
https://en.wikipedia.org/wiki?curid=19548
Multiple inheritance Multiple inheritance is a feature of some object-oriented computer programming languages in which an object or class can inherit characteristics and features from more than one parent object or parent class. It is distinct from single inheritance, where an object or class may only inherit from one particular object or class. Multiple inheritance has been a sensitive issue for many years, with opponents pointing to its increased complexity and ambiguity in situations such as the "diamond problem", where it may be ambiguous as to which parent class a particular feature is inherited from if more than one parent class implements said feature. This can be addressed in various ways, including using virtual inheritance. Alternate methods of object composition not based on inheritance, such as mixins and traits, have also been proposed to address the ambiguity. In object-oriented programming (OOP), "inheritance" describes a relationship between two classes in which one class (the "child" class) "subclasses" the "parent" class. The child inherits methods and attributes of the parent, allowing for shared functionality. For example, one might create a class "Mammal" with features such as eating, reproducing, etc.; then define a child class "Cat" that inherits those features without having to explicitly program them, while adding new features like "chasing mice". Multiple inheritance allows programmers to use more than one totally orthogonal hierarchy simultaneously, such as allowing "Cat" to inherit from "Cartoon character" and "Pet" and "Mammal" and access features from within all of those classes. Languages that support multiple inheritance include: C++, Common Lisp (via the Common Lisp Object System (CLOS)), EuLisp (via The EuLisp Object System TELOS), Curl, Dylan, Eiffel, Logtalk, Object REXX, Scala (via use of mixin classes), OCaml, Perl, POP-11, Python, R, Raku, and Tcl (built-in from 8.6, or via Incremental Tcl (Incr Tcl) in earlier versions). 
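The "Cat" example above can be sketched in Python, one of the listed languages with built-in multiple inheritance (the class and method names here are illustrative, not from any particular library):

```python
# Illustrative sketch: a class inheriting from several orthogonal
# parent classes at once, as described above.

class Mammal:
    def eat(self):
        return "eating"

class Pet:
    def play(self):
        return "playing"

class CartoonCharacter:
    def entertain(self):
        return "entertaining"

class Cat(Mammal, Pet, CartoonCharacter):
    # Cat gains eat(), play() and entertain() without redefining them,
    # and adds its own behaviour on top.
    def chase_mice(self):
        return "chasing mice"

felix = Cat()
print(felix.eat())         # inherited from Mammal
print(felix.play())        # inherited from Pet
print(felix.chase_mice())  # defined on Cat itself
```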
IBM System Object Model (SOM) runtime supports multiple inheritance, and any programming language targeting SOM can implement new SOM classes inherited from multiple bases. Some object-oriented languages, such as Java, C#, and Ruby, implement "single inheritance", although protocols, or "interfaces", provide some of the functionality of true multiple inheritance. PHP uses trait classes to inherit specific method implementations. Ruby uses modules to inherit multiple methods. The "diamond problem" (sometimes referred to as the "Deadly Diamond of Death") is an ambiguity that arises when two classes B and C inherit from A, and class D inherits from both B and C. If there is a method in A that B and C have overridden, and D does not override it, then which version of the method does D inherit: that of B, or that of C? For example, in the context of GUI software development, a class "Button" may inherit from both classes "Rectangle" (for appearance) and "Clickable" (for functionality/input handling), and classes "Rectangle" and "Clickable" both inherit from the "Object" class. Now if the "equals" method is called for a "Button" object and there is no such method in the "Button" class but there is an overridden "equals" method in "Rectangle" or "Clickable" (or both), which method should eventually be called? It is called the "diamond problem" because of the shape of the class inheritance diagram in this situation. In this case, class A is at the top, both B and C separately beneath it, and D joins the two together at the bottom to form a diamond shape. Languages have different ways of dealing with these problems of repeated inheritance. Languages that allow only single inheritance, where a class can only derive from one base class, do not have the diamond problem. The reason for this is that such languages have at most one implementation of any method at any level in the inheritance chain regardless of the repetition or placement of methods. 
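The diamond just described can be reproduced in Python, which resolves the ambiguity deterministically through C3 linearization (the method resolution order, or MRO) rather than rejecting the program; the GUI-style class names below are illustrative:

```python
# A button inherits appearance from a rectangle and input handling
# from a clickable element; both derive from a common base class.

class Base:
    def equals(self, other):
        return "Base.equals"

class Rectangle(Base):
    def equals(self, other):
        return "Rectangle.equals"

class Clickable(Base):
    def equals(self, other):
        return "Clickable.equals"

class Button(Rectangle, Clickable):
    pass  # deliberately does not override equals()

b = Button()
# Python resolves the diamond left to right along Button's MRO,
# so Rectangle's version wins over Clickable's.
print(b.equals(None))
print([c.__name__ for c in Button.__mro__])
```

Inspecting `Button.__mro__` shows the full linearization (Button, Rectangle, Clickable, Base, object), which is how Python makes the diamond unambiguous.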
Typically these languages allow classes to implement multiple protocols, called interfaces in Java. These protocols define methods but do not provide concrete implementations. This strategy has been used by ActionScript, C#, D, Java, Nemerle, Object Pascal, Objective-C, Smalltalk, Swift and PHP. All these languages allow classes to implement multiple protocols. Moreover, Ada, C#, Java, Object Pascal, Objective-C, Swift and PHP allow multiple inheritance of interfaces (called protocols in Objective-C and Swift). Interfaces are like abstract base classes that specify method signatures without implementing any behaviour. ("Pure" interfaces such as the ones in Java up to version 7 do not permit any implementation or instance data in the interface.) Nevertheless, even when several interfaces declare the same method signature, as soon as that method is implemented (defined) anywhere in the inheritance chain, it overrides any implementation of that method in the chain above it (in its superclasses). Hence, at any given level in the inheritance chain, there can be at most one implementation of any method. Thus, single-inheritance method implementation does not exhibit the diamond problem even with multiple inheritance of interfaces. With the introduction of default implementations for interfaces in Java 8 and C# 8, it is again possible to create a diamond problem, although the conflict will surface as a compile-time error rather than a runtime ambiguity.
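The interface strategy can be approximated in Python using abstract base classes as stand-ins for interfaces (Python has no interface keyword, so this is only an analogy, and the names are invented for the sketch):

```python
from abc import ABC, abstractmethod

# Two "interfaces" declaring the same method signature but no behaviour.
class Readable(ABC):
    @abstractmethod
    def describe(self):
        ...

class Printable(ABC):
    @abstractmethod
    def describe(self):
        ...

# Only the concrete class supplies an implementation, so even though
# both parents declare describe(), there is exactly one definition
# anywhere in the chain and therefore no diamond ambiguity.
class Document(Readable, Printable):
    def describe(self):
        return "a document"

print(Document().describe())
```

This mirrors the point in the text: multiple inheritance of signatures alone cannot create the diamond problem, because at most one implementation exists at any level of the chain.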
https://en.wikipedia.org/wiki?curid=19550
Media studies Researchers may also develop and employ theories and methods from disciplines including cultural studies, rhetoric (including digital rhetoric), philosophy, literary theory, psychology, political science, political economy, economics, sociology, anthropology, social theory, art history and criticism, film theory, and information theory. For a history of the field, see "History of media studies". The first Media Studies M.A. program in the U.S. was introduced by John Culkin at The New School in 1975, which has since graduated more than 2,000 students. Culkin was responsible for bringing Marshall McLuhan to Fordham in 1968 and subsequently founded the Center for Understanding Media, which became the New School program. Media is studied as a broad subject in most states in Australia, with the state of Victoria being a world leader in curriculum development. Media studies in Australia was first developed as an area of study in Victorian universities in the early 1960s, and in secondary schools in the mid 1960s. Today, almost all Australian universities teach media studies. According to the Government of Australia's "Excellence in Research for Australia" report, the leading universities in the country for media studies (which were ranked well above world standard by the report's scoring methodology) are Monash University, QUT, RMIT, University of Melbourne, University of Queensland and UTS. In secondary schools, an early film studies course first began being taught as part of the Victorian junior secondary curriculum during the mid 1960s. And, by the early 1970s, an expanded media studies course was being taught. The course became part of the senior secondary curriculum (later known as the Victorian Certificate of Education or "VCE") in the 1980s. It has since become, and continues to be, a strong component of the VCE. 
Notable figures in the development of the Victorian secondary school curriculum were the long-time Rusden College media teacher Peter Greenaway (not the British film director), Trevor Barr (who authored one of the first media textbooks, "Reflections of Reality") and later John Murray (who authored "The Box in the Corner", "In Focus", and "10 Lessons in Film Appreciation"). Today, the Australian states and territories that teach media studies at a secondary level are the Australian Capital Territory, Northern Territory, Queensland, South Australia, Victoria and Western Australia. Media studies does not appear to be taught in the state of New South Wales at a secondary level. In Victoria, the VCE media studies course is structured as: Unit 1 - Representation, Technologies of Representation, and New Media; Unit 2 - Media Production, Australian Media Organisations; Unit 3 - Narrative Texts, Production Planning; and Unit 4 - Media Process, Social Values, and Media Influence. Media studies also forms a major part of the primary and junior secondary curriculum, and includes areas such as photography, print media and television. Victoria also hosts the peak media teaching body known as ATOM, which publishes "Metro" and "Screen Education" magazines. In Canada, media studies and communication studies are incorporated in the same departments and cover a wide range of approaches (from critical theory to organizations to research-creation and political economy, for example). Over time, research developed to employ theories and methods from cultural studies, philosophy, political economy, gender, sexuality and race theory, management, rhetoric, film theory, sociology, and anthropology. Harold Innis and Marshall McLuhan are famous Canadian scholars for their contributions to the fields of media ecology and political economy in the 20th century. They were both important members of the Toronto School of Communication at the time. More recently, the School of Montreal and its founder James R. 
Taylor significantly contributed to the field of organizational communication by focusing on the ontological processes of organizations. Carleton University and the University of Western Ontario created journalism-specific programs or schools in 1945 and 1946 respectively. A journalism-specific program was also created at Ryerson in 1950. The first communication programs in Canada were started at Ryerson and Concordia Universities. The Radio and Television Arts program at Ryerson was started in the 1950s, while the Film, Media Studies/Media Arts, and Photography programs also originated from programs started in the 1950s. The Communication studies department at Concordia was created in the late 1960s. Ryerson's Radio and Television, Film, Media and Photography programs were renowned by the mid 1970s, and its programs were being copied by other colleges and universities nationally and internationally. Today, most universities offer undergraduate degrees in Media and Communication Studies, and many Canadian scholars actively contribute to the field, among them Brian Massumi (philosophy, cultural studies), Kim Sawchuk (cultural studies, feminist, ageing studies), Carrie Rentschler (feminist theory), and François Cooren (organizational communication). In his book "Understanding Media: The Extensions of Man", media theorist Marshall McLuhan suggested that "the medium is the message", and that all human artefacts and technologies are media. His book introduced the usage of terms such as "media" into our language along with other precepts, among them "global village" and "Age of Information". A medium is anything that mediates our interaction with the world or other humans. Given this perspective, media study is not restricted to just media of communications but all forms of technology. Media and their users form an ecosystem, and the study of this ecosystem is known as media ecology. 
McLuhan says that the "technique of fragmentation that is the essence of machine technology" shaped the restructuring of human work and association, and that "the essence of automation technology is the opposite". He uses the example of the electric light to make this connection and to explain "the medium is the message". The electric light is pure information, and it is a medium without a message unless it is used to spell out some verbal ad or a name. A characteristic of all media is that the "content" of any medium is always another medium. For example, the content of writing is speech, the written word is the content of print, and print is the content of the telegraph. The change that the medium or technology introduces into human affairs is the "message". If the electric light is used for Friday night football or to light up your desk, you could argue that the content of the electric light is these activities. The fact that it is the medium that shapes and controls the form of human association and action makes it the message. The electric light is overlooked as a communication medium because it doesn't have any content. It is not until the electric light is used to spell a brand name that it is recognized as a medium. Like radio and other mass media, the electric light eliminates time and space factors in human association, creating deeper involvement. McLuhan compared "content" to a juicy piece of meat carried by a burglar to distract the "watchdog of the mind". The effect of the medium is made strong because it is given another medium as "content". The content of a movie is a book, a play, or maybe even an opera. McLuhan talks about media being "hot" or "cold" and touches on the principle that distinguishes them from one another. A hot medium (e.g., radio or film) extends a single sense in "high definition". High definition means the state of being well filled with data. 
A cool medium (e.g., telephone or TV) is considered "low definition" because a small amount of data/information is given and the rest has to be filled in. Hot media are low in participation and cool media are high in participation. Hot media are low in participation because they supply most of the information and thereby exclude the audience. Cool media are high in participation because they give less information and require the audience to fill in the blanks, which makes them inclusive. He used lecturing as an example of a hot medium and seminars as an example of a cool medium. Whether a hot medium is used in a hot or a cool culture also makes a difference. There are two universities in China that specialize in media studies. One is the Communication University of China, formerly known as the Beijing Broadcasting Institute, which dates back to 1954. CUC has 15,307 full-time students, including 9,264 undergraduates, 3,512 candidates for doctor's and master's degrees, and 16,780 students in programs of continuing education. The other university known for media studies in China is Zhejiang University of Media and Communications (ZUMC), which has campuses in Hangzhou and Tongxiang. Almost 10,000 full-time students are currently studying in over 50 programs at the 13 Colleges and Schools of ZUMC. Both institutions have produced some of China's brightest broadcasting talents for television as well as leading journalists at magazines and newspapers. In the Czech Republic, there is no university specializing in journalism and media studies, but there are seven public universities with a department of media studies. The three biggest are based in Prague (Charles University), Brno (Masaryk University) and Olomouc (Palacký University). There are another nine private universities and colleges with media studies departments. One prominent French media critic is the sociologist Pierre Bourdieu, who wrote, among other books, "On Television" (New Press, 1999). Bourdieu's analysis is that television provides far less autonomy, or freedom, than we think. 
In his view, the market (which implies the hunt for higher advertising revenue) not only imposes uniformity and banality, but also a form of invisible censorship. When, for example, television producers "pre-interview" participants in news and public affairs programs, to ensure that they will speak in simple, attention-grabbing terms, and when the search for viewers leads to an emphasis on the sensational and the spectacular, people with complex or nuanced views are not allowed a hearing. In Germany two main branches of media theory or media studies can be identified. The first major branch of media theory has its roots in the humanities and cultural studies, such as film studies ("Filmwissenschaft"), theater studies ("Theaterwissenschaft") and German language and literature studies ("Germanistik") as well as comparative literature studies ("Komparatistik"). This branch has broadened out substantially since the 1990s. And it is on this initial basis that a culturally-based media studies (often emphasised more recently through the disciplinary title "Medienkulturwissenschaft") in Germany has primarily developed and established itself. This plurality of perspectives makes it difficult to single out one particular site where this branch of Medienwissenschaft originated. While the Frankfurt-based theatre scholar Hans-Thies Lehmann's term "postdramatic theatre" points directly to the increased blending of co-presence and mediatized material in the German theater (and elsewhere) since the 1970s, the field of theater studies from the 1990s onwards at the Freie Universität Berlin, led in particular by Erika Fischer-Lichte, showed particular interest in the ways in which theatricality influenced notions of performativity in aesthetic events. Within the field of film studies, again, both Frankfurt and Berlin were dominant in the development of new perspectives on moving image media. 
Heide Schlüpmann in Frankfurt and Gertrud Koch, first in Bochum then in Berlin, were key theorists contributing to an aesthetic theory of the cinema (Schlüpmann) as "dispositif" and the moving image as medium, particularly in the context of illusion (Koch). Many scholars who became known as media scholars in Germany were originally scholars of German, such as Friedrich Kittler, who taught at the Humboldt-Universität zu Berlin and completed both his dissertation and habilitation in the context of "Germanistik". One of the early publications in this new direction is a volume edited by Helmut Kreuzer, "Literature Studies - Media Studies" ("Literaturwissenschaft – Medienwissenschaft"), which summarizes the presentations given at the Düsseldorfer Germanistentag 1976. The second branch of media studies in Germany is comparable to communication studies. Pioneered by Elisabeth Noelle-Neumann in the 1940s, this branch studies mass media, its institutions and its effects on society and individuals. The German Institute for Media and Communication Policy, founded in 2005 by media scholar Lutz Hachmeister, is one of the few independent research institutions dedicated to issues surrounding media and communications policies. The term "Wissenschaft" cannot be translated straightforwardly as "studies", as it calls to mind both scientific methods and the humanities. Accordingly, German media theory combines philosophy, psychoanalysis, history, and scientific studies with media-specific research. "Medienwissenschaften" is currently one of the most popular courses of study at universities in Germany, with many applicants mistakenly assuming that studying it will automatically lead to a career in TV or other media. This has led to widespread disillusionment, with students blaming the universities for offering highly theoretical course content. The universities maintain that practical journalistic training is not the aim of the academic studies they offer. 
Media Studies is a fast-growing academic field in India, with several dedicated departments and research institutes. With a view to making the best use of communication facilities for information, publicity and development, the Government of India in 1962-63 sought the advice of the Ford Foundation/UNESCO team of internationally known mass communication specialists, who recommended the setting up of a national institute for training, teaching and research in mass communication. Anna University was the first university to start Master of Science in Electronic Media programmes. It offers a five-year integrated programme and a two-year programme in Electronic Media. The Department of Media Sciences was started in January 2002, branching off from the UGC's Educational Multimedia Research Centre (EMMRC). National Institute of Open Schooling, the world's largest open schooling system, offers Mass Communication as a subject of study at the senior secondary level. All the major universities in the country have mass media and journalism studies departments. The Centre for the Study of Developing Societies (CSDS), Delhi, has media studies as one of its major emphases. In the Netherlands, media studies are split into several academic courses such as (applied) communication sciences, communication and information sciences, communication and media, media and culture, or theater, film and television sciences. Whereas communication sciences focuses on the way people communicate, be it mediated or unmediated, media studies tends to narrow the communication down to just mediated communication. However, it would be a mistake to consider media studies a specialism of communication sciences, since media make up just a small portion of the overall course. Indeed, both studies tend to borrow elements from one another. 
Communication sciences (or a derivative thereof) can be studied at Erasmus University Rotterdam, Radboud University, Tilburg University, University of Amsterdam, University of Groningen, University of Twente, Roosevelt Academy, University of Utrecht, VU University Amsterdam and Wageningen University and Research Centre. Media studies (or something similar) can be studied at the University of Amsterdam, VU University Amsterdam, Erasmus University Rotterdam, University of Groningen and the University of Utrecht. Eight Dutch universities collaborate in the overarching Netherlands Research school for Media Studies (RMeS), which acts as a platform for graduate students in media research. Media studies in New Zealand is healthy, especially due to renewed activity in the country's film industry, and is taught at both secondary and tertiary education institutes. Media studies in NZ can be regarded as a singular success, with the subject well-established in the tertiary sector (such as Screen and Media Studies at the University of Waikato; Media Studies, Victoria University of Wellington; Film, Television and Media Studies, University of Auckland; Media Studies, Massey University; Communication Studies, University of Otago). Different Media Studies courses can offer students a range of specialisations, such as cultural studies, media theory and analysis, practical film-making, journalism and communications studies. But what makes the case of New Zealand particularly significant in respect of Media Studies is that for more than a decade it has been a nationally mandated and very popular subject in secondary (high) schools, taught across three years in a very structured and developmental fashion, with Scholarship in Media Studies available for academically gifted students. According to New Zealand Ministry of Education subject enrolment figures, 229 New Zealand schools offered Media Studies as a subject in 2016, representing more than 14,000 students.
In Pakistan, media studies programs are widely offered. The University of the Punjab in Lahore has the oldest department. Later, the University of Karachi, Peshawar University, BZU Multan and Islamia University Bahawalpur also started communication programs. Newly established universities now offer mass communication programs as well, among which the University of Gujrat has emerged as a leading department. Bahria University, established by the Pakistan Navy, also offers a BS in media studies. In Switzerland, media and communication studies are offered by several higher education institutions including the International University in Geneva, Zurich University of Applied Sciences, University of Lugano, University of Fribourg and others. In the United Kingdom, media studies developed in the 1960s from the academic study of English, and from literary criticism more broadly. The key date, according to Andrew Crisell, is 1959, when Joseph Trenaman left the BBC's Further Education Unit to become the first holder of the Granada Research Fellowship in Television at Leeds University. Soon after, in 1966, the Centre for Mass Communication Research was founded at Leicester University, and degree programmes in media studies began to sprout at polytechnics and other universities during the 1970s and 1980s. James Halloran at Leicester University is credited with much influence in the development of media studies and communication studies, as the head of the university's Centre for Mass Communication Research, and founder of the International Association for Media and Communication Research. Media Studies is now taught all over the UK. It is taught at Key Stages 1–3, Entry Level, GCSE and A level, and the Scottish Qualifications Authority offers formal qualifications at a number of different levels. It is offered through a wide range of exam boards including AQA and WJEC. Much research in the field of news media studies has been led by the Reuters Institute for the Study of Journalism.
Details of the research projects and results are published in the RISJ annual report. Mass communication, Communication studies or simply 'Communication' may be more popular names than “media studies” for academic departments in the United States. However, the focus of such programs sometimes excludes certain media—film, book publishing, video games, etc. The title “media studies” may be used alone, to designate film studies and rhetorical or critical theory, or it may appear in combinations like “media studies and communication” to join two fields or emphasize a different focus. It involves the study of many emerging, contemporary media and platforms, with social media having boomed in recent years. Broadcast and cable TV is no longer the primary form of entertainment, with various screens offering worldwide events and pastimes around the clock. In 1999, the MIT Comparative Media Studies program started under the leadership of Henry Jenkins, since growing into a graduate program, MIT's largest humanities major, and, following a 2012 merger with the Writing and Humanistic Studies program, a roster of twenty faculty, including Pulitzer Prize-winning author Junot Diaz, science fiction writer Joe Haldeman, games scholar T. L. Taylor, and media scholars William Uricchio (a CMS co-founder), Edward Schiappa, and Heather Hendershot. Now named Comparative Media Studies/Writing, the department places an emphasis on what Jenkins and colleagues had termed "applied humanities": it hosts several research groups for civic media, digital humanities, games, computational media, documentary, and mobile design, and these groups are used to provide graduate students with research assistantships to cover the cost of tuition and living expenses. The incorporation of Writing and Humanistic Studies also placed MIT's Science Writing program, Writing Across the Curriculum, and Writing and Communications Center under the same roof. 
Formerly an interdisciplinary major at the University of Virginia, the Department of Media Studies was officially established in 2001 and has quickly grown to wide recognition. This is partly thanks to the recruitment of Professor Siva Vaidhyanathan, a cultural historian and media scholar, as well as the Inaugural Verklin Media Policy and Ethics Conference, endowed by the CEO of Canoe Ventures and UVA alumnus David Verklin. In 2010, a group of undergraduate students in the Media Studies Department established the Movable Type Academic Journal, the first ever undergraduate academic journal of its kind. The department is expanding rapidly and doubled in size in 2011. Brooklyn College, part of the City University of New York, has been offering graduate studies in television and media since 1961. Currently, the Department of Television and Radio administers an MS in Media Studies, and hosts the Center for the Study of World Television. The University of Southern California has three distinct centers for media studies: the Center for Visual Anthropology (founded in 1984), the Institute for Media Literacy at the School of Cinematic Arts (founded in 1998) and the Annenberg School for Communication and Journalism (founded in 1971). University of California, Irvine had in Mark Poster one of the first and foremost theorists of media culture in the US, and can boast a strong Department of Film & Media Studies. University of California, Berkeley has three institutional structures within which media studies can take place: the department of Film and Media (formerly Film Studies Program), including such famous theorists as Mary Ann Doane and Linda Williams, the Center for New Media, and a long-established interdisciplinary program formerly titled Mass Communications, which recently changed its name to Media Studies, dropping any connotations which accompany the term “Mass” in the former title.
Until recently, Radford University in Virginia used the title "media studies" for a department that taught practitioner-oriented major concentrations in journalism, advertising, broadcast production and Web design. In 2008, those programs were combined with a previous department of communication (speech and public relations) to create a School of Communication. (A media studies major at Radford still means someone concentrating on journalism, broadcasting, advertising or Web production.) The University of Denver has a renowned program for digital media studies. It is an interdisciplinary program combining Communications, Computer Science, and the arts.
https://en.wikipedia.org/wiki?curid=19552
Microprocessor A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit (IC), or a few ICs, of MOSFET construction. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results (also in binary form) as output. Microprocessors contain both combinational logic and sequential digital logic. Microprocessors operate on numbers and symbols represented in the binary number system. The integration of a whole CPU onto a single or a few integrated circuits using Very-Large-Scale Integration (VLSI) greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated metal-oxide-semiconductor (MOS) fabrication processes, resulting in a low unit price. Single-chip processors increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits, typically of TTL type. Microprocessors combined this into one or a few large-scale ICs. The first microprocessor was the Intel 4004. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more complex and powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition, subtraction, and operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation (zero value, negative number, overflow, or others). The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths, registers, and other elements of the processor. As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger as more transistors on a chip allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features were added to the processor architecture; more on-chip registers sped up programs, and complex instructions could be used to make more compact programs. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating point calculations. Occasionally, physical limitations of integrated circuits made such practices as a bit slice approach necessary.
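The status-register behavior described above can be sketched in C. This is a minimal illustration, not the design of any particular processor: the `Flags` struct and `alu_add` function are hypothetical names, and the flag definitions follow the conventions common to typical 8-bit ALUs.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical status register, modeled on a typical 8-bit ALU. */
typedef struct {
    bool zero;      /* result was 0                         */
    bool negative;  /* sign bit (bit 7) of the result set   */
    bool carry;     /* unsigned carry out of bit 7          */
    bool overflow;  /* signed overflow                      */
} Flags;

/* 8-bit add that updates the flags, as a status register would. */
uint8_t alu_add(uint8_t a, uint8_t b, Flags *f) {
    uint16_t wide = (uint16_t)a + b;   /* widen to keep the carry-out */
    uint8_t r = (uint8_t)wide;
    f->zero     = (r == 0);
    f->negative = (r & 0x80) != 0;
    f->carry    = (wide & 0x100) != 0;
    /* signed overflow: operands share a sign that the result lacks */
    f->overflow = (~(a ^ b) & (a ^ r) & 0x80) != 0;
    return r;
}
```

For example, adding 0x7F and 0x01 yields 0x80 with the negative and overflow flags set (a signed overflow from +127), while adding 0xFF and 0x01 yields 0x00 with the zero and carry flags set.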
Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each. The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose entity, and several specialized processing devices have followed it. Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity. Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system on a chip or microcontroller applications that require extremely low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog to digital converters, or both.
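The bit-slice approach described above can be sketched in software. In this hypothetical example (the function names are illustrative, not from any real bit-slice part such as the AMD Am2901), a 4-bit "slice" adds one nibble plus a carry-in, and eight such slices, rippling their carries upward, together add two 32-bit words:

```c
#include <stdint.h>

/* One hypothetical 4-bit ALU slice: adds two nibbles plus a carry-in,
   producing a 4-bit result and a carry-out for the next slice. */
uint8_t slice_add4(uint8_t a, uint8_t b, uint8_t cin, uint8_t *cout) {
    uint8_t sum = (a & 0xF) + (b & 0xF) + (cin & 1);
    *cout = sum >> 4;   /* carry ripples to the next-higher slice */
    return sum & 0xF;
}

/* A 32-bit add built from eight 4-bit slices, least significant first. */
uint32_t add32_from_slices(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    uint8_t carry = 0;
    for (int i = 0; i < 8; i++) {
        uint8_t nib = slice_add4((a >> (4 * i)) & 0xF,
                                 (b >> (4 * i)) & 0xF,
                                 carry, &carry);
        result |= (uint32_t)nib << (4 * i);
    }
    return result;
}
```

The "extra logic" the text mentions is visible here as the carry plumbing between slices; in hardware this was dedicated circuitry (often with carry-lookahead rather than a simple ripple) rather than a loop.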
Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute software with multiple instructions. Thousands of items that were traditionally not computer-related include microprocessors. These include large and small household appliances, cars (and their accessory equipment units), car keys, tools and test instruments, toys, light switches/dimmers and electrical circuit breakers, smoke alarms, battery packs, and hi-fi audio/visual components (from DVD players to phonograph turntables). Such products as cellular telephones, DVD video systems and HDTV broadcast systems fundamentally require consumer devices with powerful, low-cost microprocessors. Increasingly stringent pollution control standards effectively require automobile manufacturers to use microprocessor engine management systems to allow optimal control of emissions over the widely varying operating conditions of an automobile. Non-programmable controls would require complex, bulky, or costly implementation to achieve the results possible with a microprocessor. A microprocessor control program (embedded software) can be easily tailored to different needs of a product line, allowing upgrades in performance with minimal redesign of the product. Different features can be implemented in different models of a product line at negligible production cost. Microprocessor control of a system can provide control strategies that would be impractical to implement using electromechanical controls or purpose-built electronic controls. For example, an engine control system in an automobile can adjust ignition timing based on engine speed, load on the engine, ambient temperature, and any observed tendency for knocking—allowing an automobile to operate on a range of fuel grades. The advent of low-cost computers on integrated circuits has transformed modern society.
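A toy sketch can make the ignition-timing example concrete. Everything here is an illustrative assumption: the struct, the function name, and all the constants are invented for demonstration, and a real engine management system would use calibrated lookup tables rather than these simple rules.

```c
#include <stdint.h>

/* Hypothetical sensor inputs; field names and units are illustrative. */
typedef struct {
    uint16_t rpm;        /* engine speed                       */
    uint8_t  load_pct;   /* engine load, 0-100                 */
    int16_t  ambient_c;  /* ambient temperature in Celsius     */
    uint8_t  knock;      /* 1 if the knock sensor has fired    */
} EngineState;

/* Toy ignition-advance strategy (degrees before top dead center):
   more advance at higher speed, less under load, retard when hot
   or when knock is detected. All thresholds are made up. */
int ignition_advance_deg(const EngineState *s) {
    int adv = 10 + s->rpm / 500;       /* base advance rises with speed */
    adv -= s->load_pct / 20;           /* heavy load: retard             */
    if (s->ambient_c > 35) adv -= 2;   /* hot intake air: retard slightly */
    if (s->knock)          adv -= 4;   /* knock: retard further           */
    return adv < 0 ? 0 : adv;
}
```

The point of the example is the one made in the text: conditions like these are trivial to express (and to retune per model) in embedded software, but would demand distinct cams, relays, or analog circuits in an electromechanical design.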
General-purpose microprocessors in personal computers are used for computation, text editing, multimedia display, and communication over the Internet. Many more microprocessors are part of embedded systems, providing digital control over myriad objects from appliances to automobiles to cellular phones and industrial process control. Microprocessors perform binary operations based on Boolean logic, named after George Boole. The ability to operate computer systems using Boolean logic was first proven in a 1938 master's thesis by Claude Shannon, who later went on to become a professor. Shannon is considered "The Father of Information Theory". The microprocessor has origins in the development of the MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor), which was first demonstrated by Mohamed M. Atalla and Dawon Kahng of Bell Labs in 1960. Following the development of MOS integrated circuit chips in the early 1960s, MOS chips reached higher transistor density and lower manufacturing costs than bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on several MOS LSI chips. Designers in the late 1960s were striving to integrate the central processing unit (CPU) functions of a computer onto a handful of MOS LSI chips, called microprocessor unit (MPU) chipsets. The first true microprocessor was the Intel 4004, released as a single MOS LSI chip in 1971. The single-chip microprocessor was made possible with the development of MOS silicon-gate technology (SGT).
The earliest MOS transistors had aluminium metal gates, which Italian engineer Federico Faggin replaced with silicon self-aligned gates to develop the first silicon-gate MOS chip at Fairchild Semiconductor in 1968. Faggin later joined Intel and used his silicon-gate MOS technology to develop the 4004, along with Marcian Hoff, Stanley Mazor and Masatoshi Shima, in 1971. The 4004 was designed for Busicom, which had earlier proposed a multi-chip design in 1969, before Faggin's team at Intel changed it into a new single-chip design. Intel introduced the first commercial microprocessor, the 4-bit Intel 4004, in 1971. It was soon followed by the 8-bit microprocessor Intel 8008 in 1972. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on. The first use of the term "microprocessor" is attributed to Viatron Computer Systems, describing the custom integrated circuit used in their System 21 small computer system announced in 1968. Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law; this originally suggested that the number of components that can be fitted onto a chip doubles every year. Moore later revised the period to two years, a rate that matches the progress observed with present technology. Three projects delivered a microprocessor at about the same time: Garrett AiResearch's Central Air Data Computer (CADC) (1970), Texas Instruments' TMS 1802NC (September 1971) and Intel's 4004 (November 1971, based on an earlier 1969 Busicom design). Arguably, the Four-Phase Systems AL1 microprocessor was also delivered in 1969.
In 1968, Garrett AiResearch (who employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". The Navy refused to allow publication of the design until 1997. Since its release in 1998, the documentation on the CADC and the MP944 chipset has become well known. Ray Holt's autobiographical story of this design and development is presented in his book "The Accidental Engineer". Ray Holt graduated from California Polytechnic University in 1968, and began his computer design career with the CADC. From its inception, it was shrouded in secrecy until 1998, when at Holt's request the US Navy allowed the documents into the public domain. Holt has claimed that no one has compared this microprocessor with those that came later. According to Parab et al. (2007), this convergence of DSP and microcontroller architectures is known as a digital signal controller. The Four-Phase Systems AL1 was an 8-bit bit slice chip containing eight registers and an ALU. It was designed by Lee Boysel in 1969. At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s, but it was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, a demonstration system was constructed where a single AL1 formed part of a courtroom demonstration computer system, together with RAM, ROM, and an input-output device.
In 1971, Pico Electronics and General Instrument (GI) introduced their first collaboration in ICs, a complete single chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to be one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at x500 scale on mylar film, a significant task at the time given the complexity of the chip. Pico was a spinout by five GI design engineers whose vision was to create single chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott. The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967. Calculators were becoming the largest single market for semiconductors, so Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the CP1600, IOB1680 and PIC1650. In 1987, the GI Microelectronics business was spun out into the Microchip PIC microcontroller business. The Intel 4004 is generally regarded as the first true microprocessor built on a single chip. The first known advertisement for the 4004 is dated November 15, 1971, and appeared in "Electronic News". The microprocessor was designed by a team consisting of Italian engineer Federico Faggin, American engineers Marcian Hoff and Stanley Mazor, and Japanese engineer Masatoshi Shima. The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips.
Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip, but as he lacked the technical know-how the idea remained just a wish for the time being. While the architecture and specifications of the MCS-4 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima, during 1969, Mazor and Hoff moved on to other projects. In April 1970, Intel hired Italian engineer Federico Faggin as project leader, a move that ultimately made the single-chip CPU final design a reality (Shima meanwhile designed the Busicom calculator firmware and assisted Faggin during the first six months of the implementation). Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor and designed the world's first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project into what would become the first commercial general purpose microprocessor. Since SGT was his very own invention, Faggin also used it to create his new methodology for random logic design that made it possible to implement a single-chip CPU with the proper speed, power dissipation and cost. The manager of Intel's MOS Design Department was Leslie L. 
Vadász at the time of the MCS-4 development, but Vadász's attention was completely focused on the mainstream business of semiconductor memories, so he left the leadership and the management of the MCS-4 project to Faggin, who was ultimately responsible for leading the 4004 project to its realization. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971. Along with Intel (who developed the 8008), Texas Instruments developed in 1970–1971 a one-chip CPU replacement for the Datapoint 2200 terminal, the TMX 1795 (later TMC 1795). Like the 8008, it was rejected by customer Datapoint. According to Gary Boone, the TMX 1795 never reached production. Since it was built to the same specification, its instruction set was very similar to the Intel 8008. The TMS1802NC was announced September 17, 1971, and implemented a four-function calculator. The TMS1802NC, despite its designation, was not part of the TMS 1000 series; it was later redesignated as part of the TMS 0100 series, which was used in the TI Datamath calculator. Although marketed as a calculator-on-a-chip, the TMS1802NC was fully programmable, including on the chip a CPU with an 11-bit instruction word, 3520 bits (320 instructions) of ROM and 182 bits of RAM. Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and Intel, describing a "microcontroller". The patent was later invalidated, but not before substantial royalties were paid out. The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor. The 8008 was not, however, an extension of the 4004 design, but instead the culmination of a separate design project at Intel, arising from a contract with Computer Terminals Corporation, of San Antonio TX, for a chip for a terminal they were designing, the Datapoint 2200—fundamental aspects of the design came not from Intel but from CTC.
In 1968, CTC's Vic Poor and Harry Pyle developed the original design for the instruction set and operation of the processor. In 1969, CTC contracted two companies, Intel and Texas Instruments, to make a single-chip implementation, known as the CTC 1201. In late 1970 or early 1971, TI dropped out, being unable to make a reliable part. In 1970, with Intel yet to deliver the part, CTC opted to use their own implementation in the Datapoint 2200, using traditional TTL logic instead (thus the first machine to run "8008 code" was not in fact a microprocessor at all and was delivered a year earlier). Intel's version of the 1201 microprocessor arrived in late 1971, but was too late, slow, and required a number of additional support chips. CTC had no interest in using it. CTC had originally contracted Intel for the chip, and would have owed them for their design work. To avoid paying for a chip they did not want (and could not use), CTC released Intel from their contract and allowed them free use of the design. Intel marketed it as the 8008 in April 1972, as the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the magazine "Radio-Electronics" in 1974. This processor had an 8-bit data bus and a 14-bit address bus. The 8008 was the precursor to the successful Intel 8080 (1974), which offered improved performance over the 8008 and required fewer support chips. Federico Faggin conceived and designed it using high voltage N channel MOS. The Zilog Z80 (1976) was also a Faggin design, using low voltage N channel with depletion load and derivative Intel 8-bit processors: all designed with the methodology Faggin created for the 4004. Motorola released the competing 6800 in August 1974, and the similar MOS Technology 6502 was released in 1975 (both designed largely by the same people). The 6502 family rivaled the Z80 in popularity during the 1980s.
A low overall cost, little packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX81. A variation of the 6502, the MOS Technology 6510, was used in the Commodore 64, and yet another variant, the 8502, powered the Commodore 128. The Western Design Center, Inc (WDC) introduced the CMOS WDC 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable grade pacemakers and defibrillators, automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM (32-bit) and other microprocessor intellectual property (IP) providers in the 1990s. Motorola introduced the MC6809 in 1978. It was an ambitious and well thought-through 8-bit design that was source compatible with the 6800, and implemented using purely hard-wired logic (subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were becoming too complex for pure hard-wired logic). Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture. A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC) (introduced in 1976), which was used on board the "Galileo" probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to implement CMOS technology.
The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process, silicon on sapphire (SOS), which provided much better protection against cosmic radiation and electrostatic discharge than that of any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-hardened microprocessor. The RCA 1802 had a static design, meaning that the clock frequency could be made arbitrarily low, or even stopped. This let the "Galileo" spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication. Current versions of the Western Design Center 65C02 and 65C816 have static cores, and thus retain data even when the clock is completely halted. The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s. The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. Other early multi-chip 16-bit microprocessors include one that Digital Equipment Corporation (DEC) used in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer—and the Fairchild Semiconductor MicroFlame 9440, both introduced in 1975–76. 
In 1975, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900. Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the Texas Instruments TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110. The Western Design Center (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time. Intel "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an 8-bit external data bus, was the microprocessor in the first IBM PC. Intel then released the 80186 and 80188, the 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family's backwards compatibility. The 80186 and 80188 were essentially versions of the 8086 and 8088, enhanced with some onboard peripherals and a few new instructions. 
Although Intel's 80186 and 80188 were not used in IBM PC type designs, second-source versions from NEC, the V20 and V30, frequently were. The 8086 and its successors had an innovative but limited method of memory segmentation, while the 80286 introduced a full-featured segmented memory management unit (MMU). The 80386 introduced a flat 32-bit memory model with paged memory management. The Intel x86 processors up to and including the 80386 do not include floating-point units (FPUs). Intel introduced the 8087, 80187, 80287 and 80387 math coprocessors to add hardware floating-point and transcendental function capabilities to the 8086 through 80386 CPUs. The 8087 works with the 8086/8088 and 80186/80188, the 80187 works with the 80186 but not the 80188, the 80287 works with the 80286, and the 80387 works with the 80386. The combination of an x86 CPU and an x87 coprocessor forms a single multi-chip microprocessor; the two chips are programmed as a unit using a single integrated instruction set. The 8087 and 80187 coprocessors are connected in parallel with the data and address buses of their parent processor and directly execute instructions intended for them. The 80287 and 80387 coprocessors are interfaced to the CPU through I/O ports in the CPU's address space; this is transparent to the program, which does not need to know about or access these I/O ports directly, but instead accesses the coprocessor and its registers through normal instruction opcodes. 16-bit designs had only been on the market briefly when 32-bit implementations started to appear. The most significant of the 32-bit designs is the Motorola MC68000, introduced in 1979. The 68k, as it was widely known, had 32-bit registers in its programming model but used 16-bit internal data paths, three 16-bit arithmetic logic units, and a 16-bit external data bus (to reduce pin count), and externally supported only 24-bit addresses (internally it worked with full 32-bit addresses). 
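The "innovative but limited" 8086 segmentation scheme described above is easy to state concretely: a 16-bit segment register is shifted left four bits and added to a 16-bit offset to form a 20-bit physical address. A minimal sketch of that arithmetic (the function name is ours, not from any vendor documentation):

```python
def physical_address(segment: int, offset: int) -> int:
    """Compute an 8086 real-mode physical address.

    The 16-bit segment value is shifted left 4 bits and added to the
    16-bit offset, yielding a 20-bit (1 MB) address space. The mask
    models the 8086's wraparound at 1 MB.
    """
    return ((segment << 4) + offset) & 0xFFFFF

# Many segment:offset pairs alias the same physical address:
assert physical_address(0xF000, 0xFFF0) == 0xFFFF0
assert physical_address(0xFFFF, 0x0000) == 0xFFFF0
# Sums past 1 MB wrap around (the source of the later "A20 line" quirk):
assert physical_address(0xFFFF, 0x0010) == 0x00000
```

One consequence, visible in the asserts, is that segments overlap every 16 bytes and the address space is capped at 1 MB, which is precisely why the scheme was "limited" compared with the 80286's true MMU.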
In PC-based IBM-compatible mainframes the MC68000 internal microcode was modified to emulate the 32-bit System/370 IBM mainframe. Motorola generally described it as a 16-bit processor. The combination of high performance, large (16 megabytes, or 2^24 bytes) memory space, and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga. The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982. After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop super microcomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system. The first commercial, single-chip, fully 32-bit microprocessor available on the market was the HP FOCUS. Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the iAPX 432's results were partly due to a rushed and therefore suboptimal Ada compiler. Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1984, added full 32-bit data and address buses. 
The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems, Cromemco) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. The 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68k family faded from use in the early 1990s. Other large companies designed the 68020 and its follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs. The ColdFire processor cores are derivatives of the 68020. During this time (early to mid-1980s), National Semiconductor introduced a very similar microprocessor, the NS 16032 (later renamed 32016), with a 16-bit pinout and 32-bit internals, followed by the full 32-bit version, the NS 32032. Later, National Semiconductor produced the NS 32132, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332, which arrived at approximately the same time as the MC68020, did not have enough performance. The third-generation chip, the NS32532, was different: it had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (now both discontinued) influenced the architecture of the final core, the NS32764. Technically advanced, with a superscalar RISC core, a 64-bit bus, and internal overclocking, it could still execute Series 32000 instructions through real-time translation. When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish embedded processor with a set of on-chip peripherals. 
The chip turned out to be too expensive for the laser printer market and was killed. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. In the mid-1980s, Sequent introduced the first SMP server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s. The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others. Other designs included the Zilog Z80000, which arrived on the market too late to stand a chance and disappeared quickly. The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space, due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores and integrate them into their own system on a chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as symmetric multiprocessor (SMP) applications processors with virtual memory. From 1993 to 2003, the 32-bit x86 architectures became increasingly dominant in the desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. 
During this span, these processors increased in complexity (transistor count) and capability (instructions per second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the general public. While 64-bit microprocessor designs had been in use in several markets since the early 1990s (including the Nintendo 64 gaming console in 1996), the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market. With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty, as well as new 64-bit software. With operating systems such as Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD, and macOS that run 64-bit natively, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from IA-32, as it also doubles the number of general-purpose registers. The move to 64 bits by PowerPC had been intended since the architecture's design in the early 1990s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating-point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal. In 2011, ARM introduced a new 64-bit ARM architecture. 
In the mid-1980s to early 1990s, a crop of new high-performance reduced instruction set computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles. The first commercial RISC microprocessor design was released in 1984 by MIPS Computer Systems: the 32-bit R2000 (the R1000 was never released). In 1986, HP released its first system with a PA-RISC CPU. In 1987, the non-Unix, then cacheless, ARM2-based 32-bit Acorn Archimedes became the first commercial success using the ARM architecture, then known as Acorn RISC Machine (ARM); the first silicon, the ARM1, had appeared in 1985. The R3000 made the design truly practical, and the R4000 introduced the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, the AMD 29000, the Intel i860 and i960, the Motorola 88000, and the DEC Alpha. In the late 1990s, only two 64-bit RISC architectures were still produced in volume for non-embedded applications: SPARC and the Power ISA. As ARM became increasingly powerful, however, it emerged in the early 2010s as a third RISC architecture in the general computing segment. A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's law is becoming increasingly challenging as chip-making technologies approach their physical limits. In response, microprocessor manufacturers look for other ways to improve performance so they can maintain the momentum of constant upgrades. A multi-core processor is a single chip that contains more than one microprocessor core. 
Each core can simultaneously execute processor instructions in parallel. This effectively multiplies the processor's potential performance by the number of cores, provided the software is designed to take advantage of more than one processor core. Some components, such as the bus interface and cache, may be shared between cores. Because the cores are physically close to each other, they can communicate with each other much faster than separate (off-chip) processors in a multiprocessor system, which improves overall system performance. In 2001, IBM introduced the first commercial multi-core processor, the monolithic two-core POWER4. Personal computers did not receive multi-core processors until the 2005 introduction of the two-core Intel Pentium D. The Pentium D, however, was not a monolithic multi-core processor: it was constructed from two dies, each containing a core, packaged on a multi-chip module. The first monolithic multi-core processor in the personal computer market was the AMD Athlon X2, which was introduced a few weeks after the Pentium D. Dual- and quad-core processors are now widely used in home PCs and laptops, while quad-, six-, eight-, ten-, twelve-, and sixteen-core processors are common in the professional and enterprise markets for workstations and servers. Sun Microsystems released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz. High-end Intel Xeon processors on the LGA 775, LGA 1366, and LGA 2011 sockets and high-end AMD Opteron processors on the C32 and G34 sockets are DP (dual-processor) capable, as is the older Intel Core 2 Extreme QX9775, also used in an older Mac Pro by Apple and the Intel Skulltrail motherboard. AMD's G34 motherboards can support up to four CPUs and Intel's LGA 1567 motherboards can support up to eight CPUs. 
Modern desktop computers support systems with multiple CPUs, but few applications outside of the professional market can make good use of more than four cores. Both Intel and AMD currently offer fast quad-, hex-, and octa-core desktop CPUs, making multi-CPU systems obsolete for many purposes. The desktop market has been in a transition towards quad-core CPUs since Intel's Core 2 Quad was released, and these are now common, although dual-core CPUs are still more prevalent. Older and mobile computers are less likely than newer desktops to have more than two cores. Not all software is optimised for multi-core CPUs, making fewer, more powerful cores preferable. AMD offers CPUs with more cores for a given amount of money than similarly priced Intel CPUs, but the AMD cores are somewhat slower, so the two trade blows in different applications depending on how well-threaded the programs being run are. For example, Intel's cheapest Sandy Bridge quad-core CPUs often cost almost twice as much as AMD's cheapest Athlon II, Phenom II, and FX quad-core CPUs, but Intel has dual-core CPUs in the same price ranges as AMD's cheaper quad-core CPUs. In an application that uses one or two threads, the Intel dual-core CPUs outperform AMD's similarly priced quad-core CPUs; if a program supports three or four threads, the cheap AMD quad-core CPUs outperform the similarly priced Intel dual-core CPUs. Historically, AMD and Intel have switched places as the company with the fastest CPU several times. In 2012, Intel led on the desktop side of the computer CPU market with its Sandy Bridge and Ivy Bridge series, while at the same time AMD's Opterons had superior performance for their price point. AMD was therefore more competitive in low- to mid-end servers and workstations that more effectively used fewer cores and threads. Taken to the extreme, this trend also includes manycore designs, with hundreds of cores, with qualitatively different architectures. 
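The thread-count trade-offs described above are usually modeled with Amdahl's law (our framing; the article itself does not name it): the speedup from N cores is capped by whatever fraction of a program must run serially. A small illustrative sketch:

```python
def speedup(cores: int, parallel_fraction: float) -> float:
    """Amdahl's-law speedup for a program in which the given
    fraction of the work can be spread evenly across all cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A perfectly parallel program scales with core count:
assert speedup(4, 1.0) == 4.0
# But even 10% serial work caps four cores well below 4x,
# which is why a faster dual-core can beat a slower quad-core
# on poorly threaded software:
assert round(speedup(4, 0.9), 2) == 3.08
```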
In 1997, about 55% of all CPUs sold in the world were 8-bit microcontrollers, of which over 2 billion were sold. In 2002, less than 10% of all the CPUs sold in the world were 32-bit or more. Of all the 32-bit CPUs sold, about 2% were used in desktop or laptop personal computers. Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is only a few US dollars. In 2003, about $44 billion worth of microprocessors were manufactured and sold. Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those accounted for only about 2% of all CPUs sold. The quality-adjusted price of laptop microprocessors improved by 25% to 35% per year in 2004–2010, and the rate of improvement slowed to 15% to 25% per year in 2010–2013. About 10 billion CPUs were manufactured in 2008. Most new CPUs produced each year are embedded.
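The skew implied by those figures can be worked out directly: if roughly half the revenue comes from about 2% of the units, the average PC-class CPU must sell for a couple of orders of magnitude more than the average embedded part. A back-of-the-envelope sketch using the article's approximate 2003 numbers:

```python
unit_share_pc = 0.02     # PCs: ~2% of all CPUs sold
revenue_share_pc = 0.5   # ...but roughly half of the ~$44 billion revenue

# Relative average price of each segment = revenue share / unit share.
pc_price_index = revenue_share_pc / unit_share_pc                    # 25.0
embedded_price_index = (1 - revenue_share_pc) / (1 - unit_share_pc)  # ~0.51

# So the average PC-class CPU costs roughly 49x the average embedded CPU:
assert round(pc_price_index / embedded_price_index) == 49
```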
https://en.wikipedia.org/wiki?curid=19553
Molecule A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge. However, in quantum physics, organic chemistry, and biochemistry, the term "molecule" is often used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term "molecule" is often used for any gaseous particle regardless of its composition. According to this definition, noble gas atoms are considered molecules as they are monatomic molecules. A molecule may be homonuclear, that is, it consists of atoms of one chemical element, as with oxygen (O2); or it may be heteronuclear, a chemical compound composed of more than one element, as with water (H2O). Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules. Molecules as components of matter are common in organic substances (and therefore biochemistry). They also make up most of the oceans and atmosphere. However, the majority of familiar solid substances on Earth, including most of the minerals that make up the crust, mantle, and core of the Earth, contain many chemical bonds, but are "not" made of identifiable molecules. Also, no typical molecule can be defined for ionic crystals (salts) and covalent crystals (network solids), although these are often composed of repeating unit cells that extend either in a plane (such as in graphene) or three-dimensionally (such as in diamond, quartz, or sodium chloride). The theme of repeated unit-cellular-structure also holds for most condensed phases with metallic bonding, which means that solid metals are also not made of molecules. In glasses (solids that exist in a vitreous disordered state), atoms may also be held together by chemical bonds with no presence of any definable molecule, nor any of the regularity of repeating units that characterizes crystals. 
The science of molecules is called "molecular chemistry" or "molecular physics", depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague. In the molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term "unstable molecule" is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in a Bose–Einstein condensate. According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin "moles", "mass", via its diminutive, meaning a small unit of mass. The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down, since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions but are not made of discrete molecules. Molecules are held together by either covalent bonding or ionic bonding. Several types of non-metal elements exist only as molecules in the environment. For example, hydrogen exists only as the hydrogen molecule (H2). A molecule of a compound is made of atoms of two or more elements. A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. 
These electron pairs are termed "shared pairs" or "bonding pairs", and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed "covalent bonding". Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. The ions are atoms that have lost one or more electrons (termed cations) and atoms that have gained one or more electrons (termed anions). This transfer of electrons is termed "electrovalence", in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO4^2−. Most molecules are far too small to be seen with the naked eye, although molecules of many polymers can reach macroscopic sizes, including biopolymers such as DNA. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules. The smallest molecule is diatomic hydrogen (H2), with a bond length of 0.74 Å. Effective molecular radius is the size a molecule displays in solution. The table of permselectivity for different substances contains examples. The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and "plus" (+) and "minus" (−) signs. These are limited to one typographic line of symbols, which may include subscripts and superscripts. A compound's empirical formula is a very simple type of chemical formula. 
It is the simplest integer ratio of the chemical elements that constitute it. For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethanol (ethyl alcohol) is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely: dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Carbohydrates, for example, have the same ratio (carbon:hydrogen:oxygen = 1:2:1), and thus the same empirical formula, but different total numbers of atoms in the molecule. The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules; however, different isomers can have the same atomic composition while being different molecules. The empirical formula is often the same as the molecular formula, but not always. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH. The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units, equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations. For molecules with a complicated 3-dimensional structure, especially involving atoms bonded to four different substituents, a simple molecular formula or even a semi-structural chemical formula may not be enough to completely specify the molecule. In this case, a graphical type of formula called a structural formula may be needed. Structural formulas may in turn be represented with a one-dimensional chemical name, but such chemical nomenclature requires many words and terms which are not part of chemical formulas. Molecules have fixed equilibrium geometries (bond lengths and angles) about which they continuously oscillate through vibrational and rotational motions. 
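The reduction from molecular formula to empirical formula described above is just a greatest-common-divisor computation over the atom counts. A minimal sketch (parsing of formula strings is omitted; atom counts are supplied directly):

```python
from functools import reduce
from math import gcd

def empirical_formula(counts: dict[str, int]) -> dict[str, int]:
    """Divide every atom count by the GCD of all the counts."""
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

# Acetylene C2H2 reduces to CH, as in the text:
assert empirical_formula({"C": 2, "H": 2}) == {"C": 1, "H": 1}
# Ethanol C2H6O is already in lowest terms (GCD = 1):
assert empirical_formula({"C": 2, "H": 6, "O": 1}) == {"C": 2, "H": 6, "O": 1}
# Glucose C6H12O6 reduces to CH2O, the 1:2:1 carbohydrate ratio:
assert empirical_formula({"C": 6, "H": 12, "O": 6}) == {"C": 1, "H": 2, "O": 1}
```

Note how glucose and acetylene illustrate the text's point: the empirical formula discards exactly the information (total atom count, arrangement) that distinguishes one molecule from another with the same ratios.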
A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. Isomers share a chemical formula but normally have very different properties because of their different structures. Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time different biochemical activities. Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to Planck's formula). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission. Spectroscopy does not generally refer to diffraction studies where particles such as neutrons, electrons, or high-energy X-rays interact with a regular arrangement of molecules (as in a crystal). Microwave spectroscopy commonly measures changes in the rotation of molecules, and can be used to identify molecules in outer space. Infrared spectroscopy measures the vibration of molecules, including stretching, bending or twisting motions. It is commonly used to identify the kinds of bonds or functional groups in molecules. Changes in the arrangements of electrons yield absorption or emission lines in ultraviolet, visible or near-infrared light, and result in colour. Nuclear magnetic resonance spectroscopy measures the environment of particular nuclei in the molecule, and can be used to characterise the numbers of atoms in different positions in a molecule. The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule-ion, H2+, and the simplest of all the chemical bonds is the one-electron bond. 
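The Planck relation mentioned above, E = hν, is what ties each spectral region to a kind of molecular motion; a quick calculation shows the energy scales involved (the representative frequencies below are our illustrative choices, not values from the text):

```python
H = 6.62607015e-34    # Planck constant, J·s (exact, 2019 SI)
EV = 1.602176634e-19  # joules per electronvolt (exact, 2019 SI)

def photon_energy_ev(frequency_hz: float) -> float:
    """Photon energy E = h*f, converted to electronvolts."""
    return H * frequency_hz / EV

# Microwave (~10 GHz): micro-eV scale -> molecular rotation
# Infrared (~30 THz): ~0.1 eV scale   -> bond vibration
# UV/visible (~1 PHz): eV scale       -> electronic transitions
assert photon_energy_ev(10e9) < 1e-4
assert 0.01 < photon_energy_ev(30e12) < 1
assert photon_energy_ev(1e15) > 1
```

The four-orders-of-magnitude spread is why rotational, vibrational, and electronic spectra sit in entirely different instrument ranges.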
H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry. When trying to define rigorously whether an arrangement of atoms is "sufficiently stable" to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state". This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state and is so loosely bound that it is only likely to be observed at very low temperatures. Whether or not an arrangement of atoms is "sufficiently stable" to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe.
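The IUPAC criterion quoted above ("deep enough to confine at least one vibrational state") can be illustrated with the Morse potential, a standard model well (our choice of model, not mentioned in the text), whose number of bound vibrational levels has a closed form in the dimensionless depth parameter lam = sqrt(2*m*De)/(a*hbar):

```python
from math import floor

def morse_bound_levels(lam: float) -> int:
    """Bound vibrational levels of a Morse oscillator with
    dimensionless depth parameter lam = sqrt(2*m*De)/(a*hbar).
    Levels n = 0 .. floor(lam - 1/2) exist whenever lam > 1/2."""
    return max(0, floor(lam - 0.5) + 1)

# A well just deep enough confines a single vibrational state,
# so it counts as a molecule by the IUPAC criterion (as the
# weakly bound helium dimer does):
assert morse_bound_levels(0.6) == 1
# A slightly shallower well confines none, so no molecule:
assert morse_bound_levels(0.4) == 0
# A deep chemical bond holds many levels:
assert morse_bound_levels(10.2) == 10
```

The numeric parameters here are hypothetical; the point is only that the molecule/non-molecule boundary falls at the depth that first admits one bound state.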
https://en.wikipedia.org/wiki?curid=19555
Mode (music) In the theory of Western music, a mode is a type of musical scale coupled with a set of characteristic melodic behaviors. Musical modes have been a part of Western musical thought since the Middle Ages, and were inspired by the theory of ancient Greek music. The name "mode" derives from the Latin word "modus": "measure, standard, manner, way, size, limit of quantity, method" (OED). Regarding the concept of mode as applied to pitch relationships generally, Harold S. Powers proposed mode as a general term, but one limited to melody types, which were based on the modal interpretation of ancient Greek octave species called "tonos" (τόνος) or "harmonia" (ἁρμονία), with "most of the area between ... being in the domain of mode". This synthesis between tonus as a church tone and the older meaning associated with an octave species was made by medieval theorists for the Western monodic plainchant tradition (see Hucbald and Aurelian). Musicologists generally assume that Carolingian theorists imported the monastic Octoechos propagated in the patriarchates of Jerusalem (Mar Saba) and Constantinople (Stoudios Monastery), meaning the eight echoi they used for the composition of hymns, though direct adaptations of Byzantine chants in the surviving Gregorian repertoire are extremely rare. Since the end of the 18th century, the term "mode" has also been applied to pitch structures in non-European musical cultures, sometimes with doubtful compatibility. The concept is also heavily used with regard to Western polyphony before the onset of the common practice period, as for example in the "modale Mehrstimmigkeit" of Carl Dahlhaus or the "Tonarten" of the 16th and 17th centuries found by Bernhard Meier. The word encompasses several additional meanings, however. Authors from the 9th century until the early 18th century (e.g., Guido of Arezzo) sometimes employed the Latin "modus" for interval. 
In the theory of late-medieval mensural polyphony (e.g., that of Franco of Cologne), "modus" is a rhythmic relationship between long and short values or a pattern made from them; in mensural music, theorists most often applied it to the division of the longa into three or two breves. A musical scale is a series of pitches in a distinct order. The concept of "mode" in Western music theory has three successive stages: in Gregorian chant theory, in Renaissance polyphonic theory, and in tonal harmonic music of the common practice period. In all three contexts, "mode" incorporates the idea of the diatonic scale, but differs from it by also involving an element of melody type. This concerns particular repertories of short musical figures or groups of tones within a certain scale, so that, depending on the point of view, mode takes on the meaning of either a "particularized scale" or a "generalized tune". Modern musicological practice has extended the concept of mode to earlier musical systems, such as those of Ancient Greek music, Jewish cantillation, and the Byzantine system of "octoechos", as well as to other non-Western types of music. By the early 19th century, the word "mode" had taken on an additional meaning, in reference to the difference between major and minor keys, specified as "major mode" and "minor mode". At the same time, composers were beginning to conceive of "modality" as something outside of the major/minor system that could be used to evoke religious feelings or to suggest folk-music idioms. Early Greek treatises describe three interrelated concepts that are related to the later, medieval idea of "mode": (1) scales (or "systems"); (2) "tonos" (pl. "tonoi"), the more usual term used in medieval theory for what later came to be called "mode"; and (3) "harmonia" (harmony; pl. "harmoniai"), this third term subsuming the corresponding "tonoi" but not necessarily the converse. 
The Greek scales in the Aristoxenian tradition were named after peoples and regions: these names are derived from an Ancient Greek subgroup (Dorians), a small region in central Greece (Locris), and certain neighboring peoples (non-Greek but related to them) from Asia Minor (Lydia, Phrygia). The association of these ethnic names with the octave species appears to precede Aristoxenus, who criticized their application to the "tonoi" by the earlier theorists whom he called the "Harmonicists". Depending on the positioning (spacing) of the interposed tones in the tetrachords, three "genera" of the seven octave species can be recognized: the diatonic genus (composed of tones and semitones), the chromatic genus (semitones and a minor third), and the enharmonic genus (with a major third and two quarter tones or dieses). The framing interval of the perfect fourth is fixed, while the two internal pitches are movable. Within the basic forms, the intervals of the chromatic and diatonic genera were varied further by three and two "shades" ("chroai"), respectively. In contrast to the medieval modal system, these scales and their related "tonoi" and "harmoniai" appear to have had no hierarchical relationships amongst the notes that could establish contrasting points of tension and rest, although the "mese" ("middle note") may have had some sort of gravitational function. The term "tonos" (pl. "tonoi") was used in four senses: "as note, interval, region of the voice, and pitch. We use it of the region of the voice whenever we speak of Dorian, or Phrygian, or Lydian, or any of the other tones". Cleonides attributes thirteen "tonoi" to Aristoxenus, which represent a progressive transposition of the entire system (or scale) by semitone over the range of an octave between the Hypodorian and the Hypermixolydian. Aristoxenus's transpositional "tonoi" were named analogously to the octave species, supplemented with new terms to raise the number of degrees from seven to thirteen.
However, according to the interpretation of at least three modern authorities, in these transpositional "tonoi" the Hypodorian is the lowest, and the Mixolydian next-to-highest—the reverse of the case of the octave species, with nominal base pitches in descending order. Ptolemy, in his "Harmonics", ii.3–11, construed the "tonoi" differently, presenting all seven octave species within a fixed octave, through chromatic inflection of the scale degrees (comparable to the modern conception of building all seven modal scales on a single tonic). In Ptolemy's system, therefore, there are only seven "tonoi". Pythagoras also construed the intervals arithmetically (if somewhat more rigorously, initially allowing for 1:1 = unison, 2:1 = octave, 3:2 = fifth, 4:3 = fourth, and 5:4 = major third within the octave). In their diatonic genus, these "tonoi" and corresponding "harmoniai" correspond with the intervals of the familiar modern major and minor scales (see Pythagorean tuning and Pythagorean interval). In music theory the Greek word "harmonia" can signify the enharmonic genus of tetrachord, the seven octave species, or a style of music associated with one of the ethnic types or the "tonoi" named by them. Particularly in the earliest surviving writings, "harmonia" is regarded not as a scale, but as the epitome of the stylised singing of a particular district or people or occupation. When the late-6th-century poet Lasus of Hermione referred to the Aeolian "harmonia", for example, he was more likely thinking of a melodic style characteristic of Greeks speaking the Aeolic dialect than of a scale pattern. By the late 5th century BC, these regional types were being described in terms of differences in what is called "harmonia"—a word with several senses, but here referring to the pattern of intervals between the notes sounded by the strings of a lyra or a kithara.
However, there is no reason to suppose that, at this time, these tuning patterns stood in any straightforward and organised relations to one another. It was only around the year 400 that attempts were made by a group of theorists known as the harmonicists to bring these "harmoniai" into a single system and to express them as orderly transformations of a single structure. Eratocles was the most prominent of the harmonicists, though his ideas are known only at second hand, through Aristoxenus, from whom we learn that they represented the "harmoniai" as cyclic reorderings of a given series of intervals within the octave, producing seven octave species. We also learn that Eratocles confined his descriptions to the enharmonic genus. In the "Republic", Plato uses the term inclusively to encompass a particular type of scale, range and register, characteristic rhythmic pattern, textual subject, and so on. He held that playing music in a particular "harmonia" would incline one towards specific behaviors associated with it, and suggested that soldiers should listen to music in the Dorian or Phrygian "harmoniai" to help make them stronger, but avoid music in the Lydian, Mixolydian or Ionian "harmoniai" for fear of being softened. Plato believed that a change in the musical modes of the state would cause a wide-scale social revolution. The philosophical writings of Plato and Aristotle (c. 350 BC) include sections that describe the effect of different "harmoniai" on mood and character formation, for example Aristotle in the "Politics". Aristotle continues by describing the effects of rhythm, and concludes about the combined effect of rhythm and "harmonia" (viii:1340b:10–13). The word "ethos" (ἦθος) in this context means "moral character", and Greek ethos theory concerns the ways that music can convey, foster, and even generate ethical states.
Some treatises also describe "melic" composition (μελοποιΐα), "the employment of the materials subject to harmonic practice with due regard to the requirements of each of the subjects under consideration", which, together with the scales, "tonoi", and "harmoniai", resembles elements found in medieval modal theory. According to Aristides Quintilianus ("On Music", i.12), melic composition is subdivided into three classes: dithyrambic, nomic, and tragic. These parallel his three classes of rhythmic composition: systaltic, diastaltic, and hesychastic. Each of these broad classes of melic composition may contain various subclasses, such as erotic, comic and panegyric, and any composition might be elevating (diastaltic), depressing (systaltic), or soothing (hesychastic). According to Mathiesen, music as a performing art was called "melos", which in its perfect form (μέλος τέλειον) comprised not only the melody and the text (including its elements of rhythm and diction) but also stylized dance movement. Melic and rhythmic composition (respectively, μελοποιΐα and ῥυθμοποιΐα) were the processes of selecting and applying the various components of melos and rhythm to create a complete work. Tonaries, lists of chant titles grouped by mode, appear in western sources around the turn of the 9th century. The influence of developments in Byzantium, from Jerusalem and Damascus, for instance the works of Saints John of Damascus (d. 749) and Cosmas of Maiouma, is still not fully understood. The eight-fold division of the Latin modal system, in a four-by-two matrix, was certainly of Eastern provenance, originating probably in Syria or even in Jerusalem, and was transmitted from Byzantine sources to Carolingian practice and theory during the 8th century.
However, the earlier Greek model for the Carolingian system was probably ordered like the later Byzantine "oktōēchos", that is, with the four principal (authentic) modes first, then the four plagals, whereas the Latin modes were always grouped the other way, with the authentics and plagals paired. The 6th-century scholar Boethius had translated Greek music theory treatises by Nicomachus and Ptolemy into Latin. Later authors created confusion by applying mode as described by Boethius to explain plainchant modes, which were a wholly different system. In his "De institutione musica", book 4 chapter 15, Boethius, like his Hellenistic sources, twice used the term "harmonia" to describe what would likely correspond to the later notion of "mode", but also used the word "modus"—probably translating the Greek word τρόπος ("tropos"), which he also rendered as Latin "tropus"—in connection with the system of transpositions required to produce seven diatonic octave species, so the term was simply a means of describing transposition and had nothing to do with the church modes. Later, 9th-century theorists applied Boethius's terms "tropus" and "modus" (along with "tonus") to the system of church modes. The treatise "De Musica" (or "De harmonica institutione") of Hucbald synthesized the three previously disparate strands of modal theory: chant theory, the Byzantine "oktōēchos", and Boethius's account of Hellenistic theory. The late-9th- and early 10th-century compilation known as the "Alia musica" imposed the seven octave transpositions, known as "tropus" and described by Boethius, onto the eight church modes, but its compiler also mentions the Greek (Byzantine) echoi, translated by the Latin term "sonus". Thus, the names of the modes became associated with the eight church tones and their modal formulas—but this medieval interpretation does not fit the concept of the Ancient Greek harmonics treatises.
The modern understanding of mode does not reflect the fact that it is a composite of distinct concepts that do not all fit together neatly. According to Carolingian theorists the eight church modes, or Gregorian modes, can be divided into four pairs, where each pair shares the "final" note and the four notes above the final, but the pairs have different intervals concerning the species of the fifth. If the octave is completed by adding three notes above the fifth, the mode is termed "authentic", but if the octave is completed by adding three notes below, it is called "plagal" (from Greek πλάγιος, "oblique, sideways"). Otherwise explained: if the melody moves mostly above the final, with an occasional cadence to the sub-final, the mode is authentic. Plagal modes shift range and also explore the fourth below the final as well as the fifth above. In both cases, the strict ambitus of the mode is one octave. A melody that remains confined to the mode's ambitus is called "perfect"; if it falls short of it, "imperfect"; if it exceeds it, "superfluous"; and a melody that combines the ambituses of both the plagal and authentic is said to be in a "mixed mode". Although the earlier (Greek) model for the Carolingian system was probably ordered like the Byzantine "oktōēchos", with the four authentic modes first, followed by the four plagals, the earliest extant sources for the Latin system are organized in four pairs of authentic and plagal modes sharing the same final: protus authentic/plagal, deuterus authentic/plagal, tritus authentic/plagal, and tetrardus authentic/plagal. Each mode has, in addition to its final, a "reciting tone", sometimes called the "dominant". It is also sometimes called the "tenor", from Latin "tenere", "to hold", meaning the tone around which the melody principally centres. The reciting tones of all authentic modes began a fifth above the final, with those of the plagal modes a third above.
However, the reciting tones of modes 3, 4, and 8 rose one step during the 10th and 11th centuries, with those of 3 and 8 moving from B to C (a half step) and that of 4 moving from G to A (a whole step). After the reciting tone, every mode is distinguished by scale degrees called the "mediant" and the "participant". The mediant is named from its position between the final and reciting tone. In the authentic modes it is the third of the scale, unless that note should happen to be B, in which case C substitutes for it. In the plagal modes, its position is somewhat irregular. The participant is an auxiliary note, generally adjacent to the mediant in authentic modes and, in the plagal forms, coincident with the reciting tone of the corresponding authentic mode (some modes have a second participant). Only one accidental is used commonly in Gregorian chant: B may be lowered by a half-step to B♭. This usually (but not always) occurs in modes V and VI, as well as in the upper tetrachord of IV, and is optional in other modes except III, VII and VIII. In 1547, the Swiss theorist Henricus Glareanus published the "Dodecachordon", in which he solidified the concept of the church modes, and added four additional modes: the Aeolian (mode 9), Hypoaeolian (mode 10), Ionian (mode 11), and Hypoionian (mode 12). A little later in the century, the Italian Gioseffo Zarlino at first adopted Glarean's system in 1558, but later (1571 and 1573) revised the numbering and naming conventions in a manner he deemed more logical, resulting in the widespread promulgation of two conflicting systems. Zarlino's system reassigned the six pairs of authentic–plagal mode numbers to finals in the order of the natural hexachord, C–D–E–F–G–A, and transferred the Greek names as well, so that modes 1 through 8 now became C-authentic to F-plagal, and were now called by the names Dorian to Hypomixolydian.
The pair of G modes were numbered 9 and 10 and were named Ionian and Hypoionian, while the pair of A modes retained both the numbers and names (11, Aeolian, and 12, Hypoaeolian) of Glarean's system. While Zarlino's system became popular in France, Italian composers preferred Glarean's scheme because it retained the traditional eight modes while expanding them. Luzzasco Luzzaschi was an exception in Italy, in that he used Zarlino's new system. In the late-18th and 19th centuries, some chant reformers (notably the editors of the Mechlin, Pustet-Ratisbon (Regensburg), and Rheims-Cambrai Office-Books, collectively referred to as the Cecilian Movement) renumbered the modes once again, this time retaining the original eight mode numbers and Glareanus's modes 9 and 10, but assigning numbers 11 and 12 to the modes on the final B, which they named Locrian and Hypolocrian (even while rejecting their use in chant). The Ionian and Hypoionian modes (on C) become in this system modes 13 and 14. Given the confusion among ancient, medieval, and modern terminology, "today it is more consistent and practical to use the traditional designation of the modes with numbers one to eight", using Roman numerals (I–VIII), rather than using the pseudo-Greek naming system. Medieval terms, first used in Carolingian treatises, later in Aquitanian tonaries, are still used by scholars today: the Greek ordinals ("first", "second", etc.) transliterated into the Latin alphabet protus (πρῶτος), deuterus (δεύτερος), tritus (τρίτος), and tetrardus (τέταρτος). In practice they can be specified as authentic or as plagal, as in "protus authentus / plagalis". A mode indicated a primary pitch (a final); the organization of pitches in relation to the final; suggested range; melodic formulas associated with different modes; location and importance of cadences; and affect (i.e., emotional effect/character).
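The traditional system of eight church modes described above can be summarized compactly. The following Python sketch is illustrative only (it is not drawn from any cited source); it encodes each mode's conventional number, name, medieval pair designation, final, and reciting tone, with tenor values reflecting the 10th–11th-century shifts for modes III, IV, and VIII:

```python
# The eight medieval church modes: four authentic/plagal pairs, each
# pair sharing a final.  Tenor (reciting tone) values reflect the
# later medieval practice after the shifts for modes 3, 4, and 8.
CHURCH_MODES = [
    # (number, name,           pair,                  final, tenor)
    (1, "Dorian",         "protus authentic",    "D", "A"),
    (2, "Hypodorian",     "protus plagal",       "D", "F"),
    (3, "Phrygian",       "deuterus authentic",  "E", "C"),  # rose from B
    (4, "Hypophrygian",   "deuterus plagal",     "E", "A"),  # rose from G
    (5, "Lydian",         "tritus authentic",    "F", "C"),
    (6, "Hypolydian",     "tritus plagal",       "F", "A"),
    (7, "Mixolydian",     "tetrardus authentic", "G", "D"),
    (8, "Hypomixolydian", "tetrardus plagal",    "G", "C"),  # rose from B
]

def final_of(mode_number):
    """Return the final of the mode with the given traditional number."""
    return CHURCH_MODES[mode_number - 1][3]

# Each authentic mode (odd number) shares its final with the
# plagal mode that follows it (even number).
for n in (1, 3, 5, 7):
    assert final_of(n) == final_of(n + 1)
```

As the assertions confirm, the pairing by shared final is exactly the protus/deuterus/tritus/tetrardus grouping of the Carolingian sources.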
Liane Curtis writes that "Modes should not be equated with scales: principles of melodic organization, placement of cadences, and emotional affect are essential parts of modal content" in Medieval and Renaissance music. Carl Dahlhaus lists three factors that form the respective starting points for the modal theories of Aurelian of Réôme, Hermannus Contractus, and Guido of Arezzo. The oldest medieval treatise regarding modes is "Musica disciplina" by Aurelian of Réôme (dating from around 850), while Hermannus Contractus was the first to define modes as partitionings of the octave. However, the earliest Western source using the system of eight modes is the Tonary of St Riquier, dated between about 795 and 800. Various interpretations of the "character" imparted by the different modes have been suggested, among them those of Guido of Arezzo (995–1050), Adam of Fulda (1445–1505), and Juan de Espinosa Medrano (1632–1688). The modern Western modes, however, consist merely of seven scales related to the familiar major and minor keys. Although the names of the modern modes are Greek and some have names used in ancient Greek theory for some of the "harmoniai", the names of the modern modes are conventional and do not indicate a link between them and ancient Greek theory; they do not present the sequences of intervals found even in the diatonic genus of the Greek octave species sharing the same name. Modern Western modes use the same set of notes as the major scale, in the same order, but starting from one of its seven degrees in turn as a tonic, and so present a different sequence of whole and half steps.
The interval sequence of the major scale being W–W–H–W–W–W–H, where "H" means a semitone (half step) and "W" means a whole tone (whole step), it is thus possible to generate the following scales: For the sake of simplicity, the examples shown above are formed by natural notes (also called "white notes", as they can be played using the white keys of a piano keyboard). However, any transposition of each of these scales is a valid example of the corresponding mode. In other words, transposition preserves mode. Each mode has characteristic intervals and chords that give it its distinctive sound. The following is an analysis of each of the seven modern modes. The examples are provided in a key signature with no sharps or flats (scales composed of natural notes). The Ionian mode may arbitrarily be designated the first mode. It is the modern major scale. The example composed of natural notes begins on C, and is also known as the C-major scale: The Dorian mode is the second mode. The example composed of natural notes begins on D: The Dorian mode is very similar to the modern natural minor scale (see Aeolian mode below). The only difference with respect to the natural minor scale is in the sixth scale degree, which is a major sixth (M6) above the tonic, rather than a minor sixth (m6). The Phrygian mode is the third mode. The example composed of natural notes starts on E: The Phrygian mode is very similar to the modern natural minor scale (see Aeolian mode below). The only difference with respect to the natural minor scale is in the second scale degree, which is a minor second (m2) above the tonic, rather than a major second (M2). The Lydian mode is the fourth mode. The example composed of natural notes starts on F: The single tone that differentiates this scale from the major scale (Ionian mode) is its fourth degree, which is an augmented fourth (A4) above the tonic (F), rather than a perfect fourth (P4). The Mixolydian mode is the fifth mode. 
The example composed of natural notes begins on G: The single tone that differentiates this scale from the major scale (Ionian mode) is its seventh degree, which is a minor seventh (m7) above the tonic (G), rather than a major seventh (M7). Therefore, the seventh scale degree becomes a subtonic to the tonic because it is now a whole tone lower than the tonic, in contrast to the seventh degree in the major scale, which is a semitone lower than the tonic (a leading tone). The Aeolian mode is the sixth mode. It is also called the natural minor scale. The example composed of natural notes begins on A, and is also known as the A natural-minor scale: The Locrian mode is the seventh mode. The example composed of natural notes begins on B: The distinctive scale degree here is the diminished fifth (d5). This makes the tonic triad diminished, so this mode is the only one in which the chords built on the tonic and dominant scale degrees have their roots separated by a diminished, rather than perfect, fifth. Similarly, the tonic seventh chord is half-diminished. The modes can be arranged in the following sequence, which follows the circle of fifths. In this sequence, each mode has one more lowered interval relative to the tonic than the mode preceding it. Thus, taking Lydian as reference, Ionian (major) has a lowered fourth; Mixolydian, a lowered fourth and seventh; Dorian, a lowered fourth, seventh, and third; Aeolian (natural minor), a lowered fourth, seventh, third, and sixth; Phrygian, a lowered fourth, seventh, third, sixth, and second; and Locrian, a lowered fourth, seventh, third, sixth, second, and fifth. Put another way, the augmented fourth of the Lydian scale has been reduced to a perfect fourth in Ionian, the major seventh in Ionian to a minor seventh in Mixolydian, and so on. The first three modes are sometimes called major, the next three minor, and the last one diminished (Locrian), according to the quality of their tonic triads.
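The rotation scheme described above lends itself to a short numerical illustration. The following Python sketch (illustrative only, not part of any cited source) generates each mode's pitch offsets in semitones by rotating the major scale's W–W–H–W–W–W–H interval pattern, and checks that, in the circle-of-fifths ordering from Lydian to Locrian, each successive mode lowers exactly one more degree:

```python
# Generate the seven modern modes by rotating the major scale's
# interval pattern (W = 2 semitones, H = 1 semitone).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]          # W-W-H-W-W-W-H
MODE_NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
              "Mixolydian", "Aeolian", "Locrian"]

def mode_steps(degree):
    """Interval pattern of the mode starting on the given degree (0-based)."""
    return MAJOR_STEPS[degree:] + MAJOR_STEPS[:degree]

def mode_pitches(degree):
    """Semitone offsets of the mode's seven degrees above its tonic."""
    pitches, total = [0], 0
    for step in mode_steps(degree)[:-1]:
        total += step
        pitches.append(total)
    return pitches

# Lydian has an augmented fourth (6 semitones above the tonic) ...
assert mode_pitches(MODE_NAMES.index("Lydian")) == [0, 2, 4, 6, 7, 9, 11]
# ... and Locrian a diminished fifth (also 6 semitones, as degree 5).
assert mode_pitches(MODE_NAMES.index("Locrian")) == [0, 1, 3, 5, 6, 8, 10]

# Circle-of-fifths ordering: each mode lowers one more degree, so the
# sum of its pitch offsets drops by exactly one semitone per step.
fifths_order = [MODE_NAMES.index(n) for n in
                ["Lydian", "Ionian", "Mixolydian", "Dorian",
                 "Aeolian", "Phrygian", "Locrian"]]
sums = [sum(mode_pitches(d)) for d in fifths_order]
assert sums == list(range(39, 32, -1))
```

The final assertion makes the "one more lowered interval" property concrete: the total semitone content of the scale shrinks by exactly one at each step of the sequence.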
The Locrian mode is traditionally considered theoretical rather than practical because the triad built on the first scale degree is diminished. Because diminished triads are not consonant, they do not lend themselves to cadential endings and cannot be tonicized according to traditional practice. Use and conception of modes or modality today is different from that in early music. As Jim Samson explains, "Clearly any comparison of medieval and modern modality would recognize that the latter takes place against a background of some three centuries of harmonic tonality, permitting, and in the 19th century requiring, a dialogue between modal and diatonic procedure". Indeed, when 19th-century composers revived the modes, they rendered them more strictly than Renaissance composers had, to make their qualities distinct from the prevailing major–minor system. Renaissance composers routinely sharped leading tones at cadences and lowered the fourth in the Lydian mode. The Ionian, or Iastian, mode is another name for the major scale used in much Western music. The Aeolian mode forms the base of the most common Western minor scale; in modern practice the Aeolian mode is differentiated from the minor by using only the seven notes of the Aeolian scale. By contrast, minor-mode compositions of the common practice period frequently raise the seventh scale degree by a semitone to strengthen the cadences, and in conjunction also raise the sixth scale degree by a semitone to avoid the awkward interval of an augmented second. This is particularly true of vocal music. Traditional folk music provides countless examples of modal melodies. For example, Irish traditional music makes extensive use not only of the major mode, but also of the Mixolydian, Dorian, and Aeolian modes. Much flamenco music is in the Phrygian mode, though frequently with the third and seventh degrees raised by a semitone.
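The minor-mode alterations described above can be sketched in a few lines. The following Python fragment is illustrative only (it works with semitone offsets rather than note names); it derives the harmonic and ascending melodic minor forms from the Aeolian pattern and verifies the augmented second that the melodic form is designed to avoid:

```python
# Semitone offsets of the natural minor (Aeolian) scale above its tonic.
natural_minor = [0, 2, 3, 5, 7, 8, 10]

# Raising the seventh degree by a semitone strengthens cadences
# (harmonic minor) ...
harmonic_minor = natural_minor[:-1] + [11]
# ... but leaves an augmented second (3 semitones) between
# degrees 6 and 7:
assert harmonic_minor[6] - harmonic_minor[5] == 3

# Raising the sixth degree as well removes that awkward interval
# (ascending melodic minor).
melodic_minor_asc = natural_minor[:5] + [9, 11]
assert melodic_minor_asc[6] - melodic_minor_asc[5] == 2
```

The assertions show why the two degrees are raised together in common-practice minor writing, as the text notes for vocal music in particular.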
Zoltán Kodály, Gustav Holst, and Manuel de Falla use modal elements as modifications of a diatonic background, while in the music of Debussy and Béla Bartók modality replaces diatonic tonality. While the term "mode" is still most commonly understood to refer to the Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, or Locrian scales, in modern music theory the word is sometimes applied to scales other than the diatonic. This is seen, for example, in melodic minor scale harmony, which is based on the seven rotations of the ascending melodic minor scale, yielding some interesting scales; tables of these modes commonly include a "chord" row listing tetrads that can be built from the pitches in the given mode (in jazz notation, the symbol Δ is for a major seventh). The number of possible modes for any intervallic set is dictated by the pattern of intervals in the scale. For scales built of a pattern of intervals that only repeats at the octave (like the diatonic set), the number of modes is equal to the number of notes in the scale. Scales with a recurring interval pattern smaller than an octave, however, have only as many modes as notes within that subdivision: e.g., the diminished scale, which is built of alternating whole and half steps, has only two distinct modes, since all odd-numbered modes are equivalent to the first (starting with a whole step) and all even-numbered modes are equivalent to the second (starting with a half step). The chromatic and whole-tone scales, each containing only steps of uniform size, have only a single mode each, as any rotation of the sequence results in the same sequence. Another general definition excludes these equal-division scales, and defines modal scales as subsets of them: "If we leave out certain steps of a[n equal-step] scale we get a modal construction" (Karlheinz Stockhausen). In Messiaen's narrow sense, "a mode is any scale made up from the 'chromatic total', the twelve tones of the tempered system".
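The mode-counting rule stated above (the number of distinct modes equals the number of distinct rotations of the interval pattern) can be verified with a short sketch. The following Python fragment is illustrative only:

```python
# Count the distinct modes of a scale by counting the distinct
# rotations of its interval pattern (intervals in semitones).
def distinct_modes(steps):
    """Number of distinct rotations of an interval pattern."""
    rotations = {tuple(steps[i:] + steps[:i]) for i in range(len(steps))}
    return len(rotations)

# The diatonic pattern repeats only at the octave: seven modes.
assert distinct_modes([2, 2, 1, 2, 2, 2, 1]) == 7
# The diminished scale's pattern repeats every two notes: two modes.
assert distinct_modes([2, 1, 2, 1, 2, 1, 2, 1]) == 2
# Uniform-step scales have a single mode each.
assert distinct_modes([2, 2, 2, 2, 2, 2]) == 1   # whole-tone
assert distinct_modes([1] * 12) == 1             # chromatic
```

The assertions reproduce exactly the counts given in the text for the diatonic, diminished, whole-tone, and chromatic scales.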
https://en.wikipedia.org/wiki?curid=19556
Mechanics Mechanics is the area of physics concerned with the motions of macroscopic objects. Forces applied to objects result in displacements, or changes of an object's position relative to its environment. This branch of physics has its origins in Ancient Greece with the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. It is a branch of classical physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as a branch of science which deals with the motion of and forces on bodies not in the quantum realm. The field is today less widely understood in terms of quantum theory. Historically, classical mechanics came first and quantum mechanics is a comparatively recent development. Classical mechanics originated with Isaac Newton's laws of motion in the "Philosophiæ Naturalis Principia Mathematica"; quantum mechanics was developed in the early 20th century. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers.
Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult in quantum mechanics, and hence it remains useful and widely used. Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of the Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth; the sun, the moon, and the stars travel in circles around the earth because it is the nature of heavenly objects to travel in perfect circles. Often cited as the father of modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion.
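The scale of the relativistic correction to Newtonian mechanics can be illustrated numerically. The following Python sketch (illustrative only) computes the Lorentz factor, which multiplies the Newtonian expressions at high speed; it shows that the factor is indistinguishable from 1 at everyday speeds but grows large near the speed of light:

```python
import math

# The Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) quantifies how
# far relativistic mechanics departs from the Newtonian description.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v):
    """Lorentz factor for a speed v (in m/s), with |v| < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At the speed of a jet airliner (~250 m/s), gamma differs from 1
# by less than one part in a trillion: Newton is an excellent
# approximation for everyday phenomena.
assert abs(lorentz_factor(250.0) - 1.0) < 1e-12

# At 90% of the speed of light, the correction is large (gamma ~ 2.29).
assert abs(lorentz_factor(0.9 * C) - 2.294) < 0.001
```

This is why, as the text notes, relativistic corrections matter only as a body's velocity approaches the speed of light.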
In analogy to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. For instance, in Newtonian mechanics, Newton's laws of motion specify that F = ma, whereas in relativistic mechanics and the Lorentz transformations, which were first discovered by Hendrik Lorentz, F = γma (where γ is the Lorentz factor, which is almost equal to 1 for low speeds). Relativistic corrections are also needed for quantum mechanics, although general relativity has not been integrated with it; the two theories remain incompatible, a hurdle which must be overcome in developing a theory of everything. The main theory of mechanics in antiquity was Aristotelian mechanics. A later developer in this tradition was Hipparchus. In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus. The Persian Islamic polymath Ibn Sīnā published his theory of motion in "The Book of Healing" (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sīnā made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when it is in opposition to its natural motion. He concluded that continuation of motion is attributed to the inclination transferred to the object, and that the object will remain in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless acted upon.
This conception of motion is consistent with Newton's first law of motion, inertia, which states that an object in motion will stay in motion unless it is acted on by an external force. This idea, which dissented from the Aristotelian view, was later described as "impetus" by John Buridan, who was influenced by Ibn Sīnā's "Book of Healing". On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]." In the same century, Ibn Bajjah proposed that for every force there is always a reaction force. While he did not specify that these forces be equal, this is still an early version of the third law of motion, which states that for every action there is an equal and opposite reaction. Influenced by earlier writers such as Ibn Sīnā and al-Baghdaadi, the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies. The concept of uniformly accelerated motion (as of falling bodies) was worked out by the 14th-century Oxford Calculators. Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his "Two New Sciences" (1638).
Newton's 1687 "Philosophiæ Naturalis Principia Mathematica" provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics. There is some dispute over priority of various ideas: Newton's "Principia" is certainly the seminal work and has been tremendously influential, and the systematic mathematics therein did not and could not have been stated earlier because calculus had not been developed. However, many of the ideas, particularly as they pertain to inertia (impetus) and falling bodies, had been developed and stated by earlier researchers, both the then-recent Galileo and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are "equivalent" to modern statements or "sufficient" proof, or instead "similar" to modern statements and "hypotheses", is often debatable. Two main modern developments in mechanics are general relativity of Einstein, and quantum mechanics, both developed in the 20th century based in part on earlier 19th-century ideas. The development of modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics and thermodynamics of deformable media, started in the second half of the 20th century. The often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc. Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space. 
Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study. For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics. The following are two lists of various subjects that are studied in mechanics. Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function. The following are described as forming classical mechanics: The following are categorized as being part of quantum mechanics:
https://en.wikipedia.org/wiki?curid=19559
Mandelbrot set The Mandelbrot set is the set of complex numbers "c" for which the function "f""c"("z") = "z"2 + "c" does not diverge when iterated from "z" = 0, i.e., for which the sequence "f""c"(0), "f""c"("f""c"(0)), etc., remains bounded in absolute value. Its definition is credited to Adrien Douady who named it in tribute to the mathematician Benoit Mandelbrot. The set is connected to a Julia set, and related Julia sets produce similarly complex fractal shapes. Mandelbrot set images may be created by sampling the complex numbers and testing, for each sample point "c", whether the sequence "f""c"(0), "f""c"("f""c"(0)), ... goes to infinity (in practice – whether it leaves some predetermined bounded neighborhood of 0 after a predetermined number of iterations). Treating the real and imaginary parts of "c" as image coordinates on the complex plane, pixels may then be coloured according to how soon the sequence crosses an arbitrarily chosen threshold in absolute value, with a special color (usually black) used for the values of "c" for which the sequence has not crossed the threshold after the predetermined number of iterations (this is necessary to clearly distinguish the Mandelbrot set image from the image of its complement). If "c" is held constant and the initial value of "z"—denoted by "z"0—is variable instead, one obtains the corresponding Julia set for each point "c" in the parameter space of the simple function. Images of the Mandelbrot set exhibit an elaborate and infinitely complicated boundary that reveals progressively ever-finer recursive detail at increasing magnifications. In other words, the boundary of the Mandelbrot set is a "fractal curve". The "style" of this repeating detail depends on the region of the set being examined. The set's boundary also incorporates smaller versions of the main shape, so the fractal property of self-similarity applies to the entire set, and not just to its parts. 
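The sampling procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not an optimized renderer; `max_iter` and the escape radius 2 play the roles of the "predetermined number of iterations" and the "bounded neighborhood of 0":

```python
def escape_count(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return the number of iterations
    before |z| exceeds 2, or max_iter if the orbit stays bounded."""
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# Sample a coarse grid of the complex plane and print an ASCII "image":
# points whose orbit never escapes (likely in the set) are drawn as '#'.
for row_index in range(11):
    y = 1.0 - 0.2 * row_index
    row = ""
    for col_index in range(31):
        x = -2.0 + 0.1 * col_index
        row += "#" if escape_count(complex(x, y)) == 100 else " "
    print(row)
```

A real renderer would map the remaining iteration counts to a color gradient, as the text describes.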
The Mandelbrot set has become popular outside mathematics both for its aesthetic appeal and as an example of a complex structure arising from the application of simple rules. It is one of the best-known examples of mathematical visualization and mathematical beauty. The Mandelbrot set has its origin in complex dynamics, a field first investigated by the French mathematicians Pierre Fatou and Gaston Julia at the beginning of the 20th century. This fractal was first defined and drawn in 1978 by Robert W. Brooks and Peter Matelski as part of a study of Kleinian groups. On 1 March 1980, at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, Benoit Mandelbrot first saw a visualization of the set. Mandelbrot studied the parameter space of quadratic polynomials in an article that appeared in 1980. The mathematical study of the Mandelbrot set really began with work by the mathematicians Adrien Douady and John H. Hubbard (1985), who established many of its fundamental properties and named the set in honor of Mandelbrot for his influential work in fractal geometry. The mathematicians Heinz-Otto Peitgen and Peter Richter became well known for promoting the set with photographs, books (1986), and an internationally touring exhibit of the German Goethe-Institut (1985). The cover article of the August 1985 "Scientific American" introduced a wide audience to the algorithm for computing the Mandelbrot set. The cover featured an image located at −0.909 − 0.275"i" and was created by Peitgen et al. The Mandelbrot set became prominent in the mid-1980s as a computer graphics demo, when personal computers became powerful enough to plot and display the set in high resolution. The work of Douady and Hubbard coincided with a huge increase in interest in complex dynamics and abstract mathematics, and the study of the Mandelbrot set has been a centerpiece of this field ever since. 
An exhaustive list of all who have contributed to the understanding of this set since then is long but would include Mikhail Lyubich, Curt McMullen, John Milnor, Mitsuhiro Shishikura and Jean-Christophe Yoccoz. The Mandelbrot set is the set of values of "c" in the complex plane for which the orbit of 0 under iteration of the quadratic map "z""n"+1 = "z""n"2 + "c" remains bounded. Thus, a complex number "c" is a member of the Mandelbrot set if, when starting with "z"0 = 0 and applying the iteration repeatedly, the absolute value of "z""n" remains bounded for all "n" > 0. For example, for "c" = 1, the sequence is 0, 1, 2, 5, 26, ..., which tends to infinity, so 1 is not an element of the Mandelbrot set. On the other hand, for "c" = −1, the sequence is 0, −1, 0, −1, 0, ..., which is bounded, so −1 does belong to the set. The Mandelbrot set can also be defined as the connectedness locus of a family of polynomials. The Mandelbrot set is a compact set, since it is closed and contained in the closed disk of radius 2 around the origin. More specifically, a point "c" belongs to the Mandelbrot set if and only if |"z""n"| ≤ 2 for all "n" ≥ 0. In other words, if the absolute value of "z""n" ever becomes larger than 2, the sequence will escape to infinity. The intersection of the Mandelbrot set with the real axis is precisely the interval [−2, 1/4]. The parameters along this interval can be put in one-to-one correspondence with those of the real logistic family "x" ↦ λ"x"(1 − "x"). The correspondence is given by "c" = λ/2 (1 − λ/2). In fact, this gives a correspondence between the entire parameter space of the logistic family and that of the Mandelbrot set. Douady and Hubbard have shown that the Mandelbrot set is connected. In fact, they constructed an explicit conformal isomorphism between the complement of the Mandelbrot set and the complement of the closed unit disk. Mandelbrot had originally conjectured that the Mandelbrot set is disconnected. 
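The two example orbits above can be reproduced directly with a few lines of Python (a small sketch of the iteration "z"0 = 0, "z""n"+1 = "z""n"2 + "c"):

```python
def orbit(c, n):
    """First n values of the sequence z_0 = 0, z_{k+1} = z_k^2 + c."""
    z, values = 0, []
    for _ in range(n):
        values.append(z)
        z = z * z + c
    return values

print(orbit(1, 5))   # [0, 1, 2, 5, 26] -- unbounded, so c = 1 is not in the set
print(orbit(-1, 5))  # [0, -1, 0, -1, 0] -- bounded, so c = -1 is in the set
```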
This conjecture was based on computer pictures generated by programs that are unable to detect the thin filaments connecting different parts of the set. Upon further experiments, he revised his conjecture, deciding that the set should be connected. There also exists a topological proof of the connectedness that was discovered in 2001 by Jeremy Kahn. The dynamical formula for the uniformisation of the complement of the Mandelbrot set, arising from Douady and Hubbard's proof of its connectedness, gives rise to external rays of the Mandelbrot set. These rays can be used to study the Mandelbrot set in combinatorial terms and form the backbone of the Yoccoz parapuzzle. The boundary of the Mandelbrot set is exactly the bifurcation locus of the quadratic family; that is, the set of parameters "c" for which the dynamics changes abruptly under small changes of "c". It can be constructed as the limit set of a sequence of plane algebraic curves, the "Mandelbrot curves", of the general type known as polynomial lemniscates. The Mandelbrot curves are defined by setting "p"0 = "z", "p""n"+1 = "p""n"2 + "z", and then interpreting the set of points |"p""n"("z")| = 2 in the complex plane as a curve in the real Cartesian plane of degree 2"n"+1 in "x" and "y". These algebraic curves appear in images of the Mandelbrot set computed using the "escape time algorithm" mentioned below. Upon looking at a picture of the Mandelbrot set, one immediately notices the large cardioid-shaped region in the center. This "main cardioid" is the region of parameters "c" for which the map "f""c"("z") = "z"2 + "c" has an attracting fixed point. It consists of all parameters of the form "c" = μ/2 − μ2/4 for some μ in the open unit disk. To the left of the main cardioid, attached to it at the point "c" = −3/4, a circular-shaped bulb is visible. This bulb consists of those parameters "c" for which "f""c" has an attracting cycle of period 2. 
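The Mandelbrot curves can be evaluated directly from the recursion "p"0 = "z", "p""n"+1 = "p""n"2 + "z": a point lies on the "n"-th curve when |"p""n"("z")| = 2. A minimal sketch, using the real point "z" = −2 (an endpoint of the set's intersection with the real axis) as an example:

```python
def p_n(z, n):
    """Evaluate the n-th Mandelbrot-curve polynomial p_n at z,
    where p_0 = z and p_{n+1} = p_n^2 + z."""
    p = z
    for _ in range(n):
        p = p * p + z
    return p

# z = -2 lies on every Mandelbrot curve: p_0(-2) = -2, p_1(-2) = 2, p_2(-2) = 2, ...
for n in range(4):
    print(n, abs(p_n(-2, n)))  # all equal 2
```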
This set of parameters is an actual circle, namely that of radius 1/4 around −1. There are infinitely many other bulbs tangent to the main cardioid: for every rational number "p"/"q", with "p" and "q" coprime, there is such a bulb that is tangent at the parameter "c" = "e"2π"ip"/"q"/2 − "e"4π"ip"/"q"/4. This bulb is called the ""p"/"q"-bulb" of the Mandelbrot set. It consists of parameters that have an attracting cycle of period "q" and combinatorial rotation number "p"/"q". More precisely, the "q" periodic Fatou components containing the attracting cycle all touch at a common point (commonly called the "α-fixed point"). If we label these components "U"0, ..., "U""q"−1 in counterclockwise orientation, then "f""c" maps each component to the one "p" positions further around. The change of behavior occurring at the tangent parameter is known as a bifurcation: the attracting fixed point "collides" with a repelling period-"q" cycle. As we pass through the bifurcation parameter into the "p"/"q"-bulb, the attracting fixed point turns into a repelling fixed point (the α-fixed point), and the period-"q" cycle becomes attracting. All the bulbs we encountered in the previous section were interior components of the Mandelbrot set in which the maps "f""c" have an attracting periodic cycle. Such components are called "hyperbolic components". It is conjectured that these are the "only" interior regions of the Mandelbrot set. This problem, known as "density of hyperbolicity", may be the most important open problem in the field of complex dynamics. Hypothetical non-hyperbolic components of the Mandelbrot set are often referred to as "queer" or ghost components. For "real" quadratic polynomials, this question was answered positively in the 1990s independently by Lyubich and by Graczyk and Świątek. (Note that hyperbolic components intersecting the real axis correspond exactly to periodic windows in the Feigenbaum diagram. 
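Using the standard cardioid boundary parameterization "c" = μ/2 − μ2/4 with μ = "e"2π"ip"/"q", the tangent points of the bulbs can be computed directly. A small sketch (the 1/2-bulb, i.e. the period-2 disk, should come out tangent at "c" = −3/4):

```python
import cmath

def bulb_tangent(p, q):
    """Parameter at which the p/q-bulb touches the main cardioid:
    c = mu/2 - mu^2/4 with mu = exp(2*pi*i*p/q) on the cardioid boundary."""
    mu = cmath.exp(2j * cmath.pi * p / q)
    return mu / 2 - mu * mu / 4

print(bulb_tangent(1, 2))  # tangent point of the period-2 disk, c = -3/4
print(bulb_tangent(1, 3))  # tangent point of the period-3 bulb
```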
So this result states that such windows exist near every parameter in the diagram.) Not every hyperbolic component can be reached by a sequence of direct bifurcations from the main cardioid of the Mandelbrot set. However, such a component "can" be reached by a sequence of direct bifurcations from the main cardioid of a little Mandelbrot copy (see below). Each of the hyperbolic components has a "center", which is a point "c" such that the inner Fatou domain for "f""c" has a super-attracting cycle – that is, that the attraction is infinite. This means that the cycle contains the critical point 0, so that 0 is iterated back to itself after some iterations. We therefore have "f""c""n"(0) = 0 for some "n". If we call this polynomial "Q""n"("c") (letting it depend on "c" instead of "z"), we have that "Q""n"+1("c") = "Q""n"("c")2 + "c" and that the degree of "Q""n" is 2"n"−1. We can therefore construct the centers of the hyperbolic components by successively solving the equations "Q""n"("c") = 0. The number of new centers produced in each step is given by Sloane's . It is conjectured that the Mandelbrot set is locally connected. This famous conjecture is known as "MLC" (for "Mandelbrot locally connected"). By the work of Adrien Douady and John H. Hubbard, this conjecture would result in a simple abstract "pinched disk" model of the Mandelbrot set. In particular, it would imply the important "hyperbolicity conjecture" mentioned above. The work of Jean-Christophe Yoccoz established local connectivity of the Mandelbrot set at all finitely renormalizable parameters; that is, roughly speaking those contained only in finitely many small Mandelbrot copies. Since then, local connectivity has been proved at many other points of the Mandelbrot set, but the full conjecture is still open. The Mandelbrot set is self-similar under magnification in the neighborhoods of the Misiurewicz points. 
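The polynomials "Q""n" described above are just the iterates of the critical point 0, so they are easy to evaluate; centers are their roots. A minimal pure-Python sketch confirming the two simplest centers (0 for the main cardioid, −1 for the period-2 bulb; a numerical root-finder applied to Q_n would recover the rest):

```python
def Q(n, c):
    """Q_n(c): the n-th iterate of the critical point 0 under z -> z^2 + c,
    so Q_1(c) = c and Q_{n+1}(c) = Q_n(c)^2 + c. Centers of hyperbolic
    components are roots of Q_n."""
    z = 0
    for _ in range(n):
        z = z * z + c
    return z

print(Q(1, 0))   # 0: c = 0 is the center of the main cardioid (period 1)
print(Q(2, -1))  # 0: the super-attracting cycle 0 -> -1 -> 0 for the period-2 bulb
```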
It is also conjectured to be self-similar around generalized Feigenbaum points (e.g., −1.401155 or −0.1528 + 1.0397"i"), in the sense of converging to a limit set. The Mandelbrot set in general is not strictly self-similar but it is quasi-self-similar, as small slightly different versions of itself can be found at arbitrarily small scales. The little copies of the Mandelbrot set are all slightly different, mostly because of the thin threads connecting them to the main body of the set. The Hausdorff dimension of the boundary of the Mandelbrot set equals 2 as determined by a result of Mitsuhiro Shishikura. It is not known whether the boundary of the Mandelbrot set has positive planar Lebesgue measure. In the Blum–Shub–Smale model of real computation, the Mandelbrot set is not computable, but its complement is computably enumerable. However, many simple objects ("e.g.", the graph of exponentiation) are also not computable in the BSS model. At present, it is unknown whether the Mandelbrot set is computable in models of real computation based on computable analysis, which correspond more closely to the intuitive notion of "plotting the set by a computer". Hertling has shown that the Mandelbrot set is computable in this model if the hyperbolicity conjecture is true. As a consequence of the definition of the Mandelbrot set, there is a close correspondence between the geometry of the Mandelbrot set at a given point and the structure of the corresponding Julia set. For instance, a point is in the Mandelbrot set exactly when the corresponding Julia set is connected. This principle is exploited in virtually all deep results on the Mandelbrot set. For example, Shishikura proved that, for a dense set of parameters in the boundary of the Mandelbrot set, the Julia set has Hausdorff dimension two, and then transfers this information to the parameter plane. 
Similarly, Yoccoz first proved the local connectivity of Julia sets, before establishing it for the Mandelbrot set at the corresponding parameters. Adrien Douady phrases this principle as: For every rational number "p"/"q", where "p" and "q" are relatively prime, a hyperbolic component of period "q" bifurcates from the main cardioid. The part of the Mandelbrot set connected to the main cardioid at this bifurcation point is called the "p"/"q"-limb. Computer experiments suggest that the diameter of the limb tends to zero like 1/"q"2. The best current estimate known is the "Yoccoz-inequality", which states that the size tends to zero like 1/"q". A period-"q" limb will have "q" − 1 "antennae" at the top of its limb. We can thus determine the period of a given bulb by counting these antennae. In an attempt to demonstrate that the thickness of the "p"/"q"-limb is zero, David Boll carried out a computer experiment in 1991, where he computed the number of iterations required for the series to diverge for "z" = −3/4 + "iε" (−3/4 being the location thereof). As the series doesn't diverge for the exact value of "z" = −3/4, the number of iterations required increases with a small "ε". It turns out that multiplying the value of "ε" with the number of iterations required yields an approximation of π that becomes better for smaller "ε". For example, for "ε" = 0.0000001 the number of iterations is 31415928 and the product is 3.1415928. It can be shown that the Fibonacci sequence is located within the Mandelbrot Set and that a relation exists between the main cardioid and the Farey Diagram. Upon mapping the main cardioid to a disk, one can notice that the amount of antennae that extends from the next largest Hyperbolic component, and that is located between the two previously selected components, follows suit with the Fibonacci sequence. 
The amount of antennae also correlates with the Farey Diagram and the denominator amounts within the corresponding fractional values, which relate to the distance around the disk. Both the numerators and the denominators of these fractional values can be summed together, as in Farey addition, to produce the location of the next Hyperbolic component within the sequence. Thus, the Fibonacci sequence of 1, 2, 3, 5, 8, 13, and 21 can be found within the Mandelbrot set. The Mandelbrot set shows more intricate detail the closer one looks or magnifies the image, usually called "zooming in". The following example of an image sequence zooming to a selected "c" value gives an impression of the infinite richness of different geometrical structures and explains some of their typical rules. The magnification of the last image relative to the first one is about 10^10 to 1. Relating to an ordinary monitor, it represents a section of a Mandelbrot set with a diameter of 4 million kilometers. Its border would show an astronomical number of different fractal structures. The seahorse "body" is composed of 25 "spokes" consisting of two groups of 12 "spokes" each and one "spoke" connecting to the main cardioid. These two groups can be attributed by some kind of metamorphosis to the two "fingers" of the "upper hand" of the Mandelbrot set; therefore, the number of "spokes" increases from one "seahorse" to the next by 2; the "hub" is a so-called Misiurewicz point. Between the "upper part of the body" and the "tail" a distorted small copy of the Mandelbrot set called satellite may be recognized. The islands in the third-to-last step seem to consist of infinitely many parts like Cantor sets, as is actually the case for the corresponding Julia set "Jc". However, they are connected by tiny structures, so that the whole represents a simply connected set. The tiny structures meet each other at a satellite in the center that is too small to be recognized at this magnification. 
The value of "c" for the corresponding "Jc" is not that of the image center but, relative to the main body of the Mandelbrot set, has the same position as the center of this image relative to the satellite shown in the 6th zoom step. In addition to creating two dimensional images of the Mandelbrot set, various techniques can be used to render Mandelbrot and Julia sets as 3D Heightmap images, where each pixel in a 2D image is given a height value, and the resulting image is rendered as a 3D graphic. The simplest approach to 3D rendering uses the iteration value for each pixel as a height value. This produces images with distinct "steps" in the height value. If instead you use the fractional iteration value (also known as the potential function) to calculate the height value for each point, you avoid steps in the resulting image. However, images rendered in 3D using fractional iteration data still look rather bumpy and visually noisy. An alternative approach is to use Distance Estimate (DE) data for each point to calculate a height value. Non-linear mapping of distance estimate value using an exponential function can produce visually pleasing images. Images plotted using DE data are often visually striking, and more importantly, the 3D shape makes it easy to visualize the thin "tendrils" that connect points of the set. Color plates 29 and 30 on page 121 of "The Science of Fractal Images" show a 2D and 3D image plotted using External Distance Estimates. Below is a 3D version of the "Image gallery of a zoom sequence" gallery above, rendered as height maps using Distance Estimate data, and using similar cropping and coloring. The image below is similar to "zoom 5", above, but is an attempt to create a 3D version of the image "Map 44" from page 85 of the book "The Beauty of Fractals" using a visually similar color scheme that shows the details of the plot in 3D. 
Multibrot sets are bounded sets found in the complex plane for members of the general monic univariate polynomial family of recursions "z" ↦ "z""d" + "c". For an integer "d", these sets are connectedness loci for the Julia sets built from the same formula. The full cubic connectedness locus has also been studied; here one considers the two-parameter recursion "z" ↦ "z"3 − 3"k""z" + "c", whose two critical points are the complex square roots of the parameter "k". A parameter is in the cubic connectedness locus if both critical points are stable. For general families of holomorphic functions, the "boundary" of the Mandelbrot set generalizes to the bifurcation locus, which is a natural object to study even when the connectedness locus is not useful. The Multibrot set is obtained by varying the value of the exponent "d". The article has a video that shows the development from "d" = 0 to 7, at which point there are 6, i.e. ("d" − 1), lobes around the perimeter. A similar development with negative exponents results in (1 − "d") clefts on the inside of a ring. There is no perfect extension of the Mandelbrot set into 3D. This is because there is no 3D analogue of the complex numbers for it to iterate on. However, there is an extension of the complex numbers into 4 dimensions, called the quaternions, that creates a perfect extension of the Mandelbrot set and the Julia sets into 4 dimensions. These can then be either cross-sectioned or projected into a 3D structure. Of particular interest is the tricorn fractal, the connectedness locus of the anti-holomorphic family "z" ↦ "z̄"2 + "c". The tricorn (also sometimes called the "Mandelbar") was encountered by Milnor in his study of parameter slices of real cubic polynomials. It is "not" locally connected. This property is inherited by the connectedness locus of real cubic polynomials. 
Another non-analytic generalization is the Burning Ship fractal, which is obtained by iterating "z""n"+1 = (|Re("z""n")| + "i" |Im("z""n")|)2 + "c". There exists a multitude of algorithms for plotting the Mandelbrot set via a computing device. Here, the most widely used and simplest algorithm will be demonstrated, namely, the naïve "escape time algorithm". In the escape time algorithm, a repeating calculation is performed for each "x", "y" point in the plot area, and based on the behavior of that calculation, a color is chosen for that pixel. The "x" and "y" locations of each point are used as starting values in a repeating, or iterating, calculation (described in detail below). The result of each iteration is used as the starting values for the next. The values are checked during each iteration to see whether they have reached a critical "escape" condition, or "bailout". If that condition is reached, the calculation is stopped, the pixel is drawn, and the next "x", "y" point is examined. The color of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colors are used for points that escape. This gives a visual representation of how many cycles were required before reaching the escape condition. To render such an image, the region of the complex plane we are considering is subdivided into a certain number of pixels. To color any such pixel, let "c" be the midpoint of that pixel. We now iterate the critical point 0 under "f""c", checking at each step whether the orbit point has modulus larger than 2. When this is the case, we know that "c" does not belong to the Mandelbrot set, and we color our pixel according to the number of iterations used to find out. Otherwise, we keep iterating up to a fixed number of steps, after which we decide that our parameter is "probably" in the Mandelbrot set, or at least very close to it, and color the pixel black. 
In pseudocode, this algorithm would look as follows. The algorithm does not use complex numbers and manually simulates complex-number operations using two real numbers, for those who do not have a complex data type. The program may be simplified if the programming language includes complex-data-type operations. Here, relating the pseudocode to "c", "z" and "f""c": writing "z" = "x" + "iy" and "c" = "x"0 + "iy"0, we have "z"2 + "c" = ("x"2 − "y"2 + "x"0) + (2"xy" + "y"0)"i", and so, as can be seen in the pseudocode, "x" and "y" are updated as "x" → "x"2 − "y"2 + "x"0 and "y" → 2"xy" + "y"0. To get colorful images of the set, the assignment of a color to each value of the number of executed iterations can be made using one of a variety of functions (linear, exponential, etc.). The Mandelbrot set is considered by many the most popular fractal, and has been referenced several times in popular culture.
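The per-pixel escape-time loop described above, simulated with two real numbers as the text describes, can be sketched as follows (one pixel's computation; a full renderer would loop this over every pixel of the image):

```python
def escape_time(x0, y0, max_iteration=1000):
    """Escape-time count for the point c = x0 + i*y0, using only real
    arithmetic: with z = x + i*y, z^2 + c has real part x^2 - y^2 + x0
    and imaginary part 2*x*y + y0. Testing x^2 + y^2 <= 4 is the same
    as testing |z| <= 2 without a square root."""
    x, y, iteration = 0.0, 0.0, 0
    while x * x + y * y <= 4 and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        iteration += 1
    # max_iteration means "probably in the set" (the pixel is colored black);
    # smaller counts are mapped to a color gradient.
    return iteration

print(escape_time(-0.75, 0.0))  # bounded orbit: hits the iteration limit
print(escape_time(1.0, 0.0))    # escapes after a few iterations
```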
https://en.wikipedia.org/wiki?curid=19562
Michael Mann Michael Kenneth Mann (born February 5, 1943) is an American director, screenwriter, and producer of film and television who is best known for his distinctive brand of stylized crime drama. His most acclaimed works include the crime films "Thief" (1981), "Manhunter" (1986), "Heat" (1995), "Collateral" (2004), and "Public Enemies" (2009), the historical drama "The Last of the Mohicans" (1992), and the docudrama "The Insider" (1999). He is also known for his role as executive producer on the popular TV series "Miami Vice" (1984–89), which he later adapted into a 2006 feature film. For his work, he has received nominations from international organizations and juries, including the British Academy of Film and Television Arts, Cannes, and the Academy of Motion Picture Arts and Sciences. "Total Film" ranked Mann No. 28 on its list of the 100 Greatest Directors Ever, "Sight and Sound" ranked him No. 5 on their list of the 10 Best Directors of the Last 25 Years, and "Entertainment Weekly" ranked Mann No. 8 on their 25 Greatest Active Film Directors list. Mann was born February 5, 1943, in Chicago, Illinois, to a family of Jewish ancestry. He is the son of grocers Esther and Jack Mann. He received a B.A. in English at the University of Wisconsin–Madison, where he developed interests in history, philosophy and architecture. During this time, he saw Stanley Kubrick's "Dr. Strangelove" and fell in love with movies. In an "L.A. Weekly" interview, he described the film's impact on him: "It said to my whole generation of filmmakers that you could make an individual statement of high integrity and have that film be successfully seen by a mass audience all at the same time. In other words, you didn't have to be making "Seven Brides for Seven Brothers" if you wanted to work in the mainstream film industry, or be reduced to niche filmmaking if you wanted to be serious about cinema. So that's what Kubrick meant, aside from the fact that "Strangelove" was a revelation." 
His daughter Ami Canaan Mann is also a film director and producer. Mann moved to London in the mid-1960s to attend graduate school in cinema, receiving his M.A. from the London Film School in 1967. He spent seven years in the United Kingdom going to film school and then working on commercials along with contemporaries Alan Parker, Ridley Scott and Adrian Lyne. In 1968, footage he shot of the Paris student revolt for a documentary, "Insurrection", aired on NBC's "First Tuesday" news program, and he developed his '68 experiences into the short film "Jaunpuri", which won the Jury Prize at Cannes in 1970. Mann returned to the United States after divorcing his first wife in 1971. He went on to direct a road trip documentary, "17 Days Down the Line". Three years later, "Hawaii Five-O" veteran Robert Lewin gave Mann a shot and a crash course on television writing and story structure. Mann wrote four episodes of "Starsky and Hutch" (three in the first series and one in the second) and the pilot episode for "Vega$". Around this time, he worked on a show called "Police Story" with cop-turned-novelist Joseph Wambaugh. "Police Story" concentrated on the detailed realism of a real cop's life and taught Mann that first-hand research was essential to bring authenticity to his work. Mann also wrote an early draft of the 1978 film "Straight Time". His first feature was a television film, "The Jericho Mile", which was released theatrically in Europe. It won the Emmy for Outstanding Writing in a Limited Series or a Special in 1979 and the DGA Best Director award. His television work also includes being the executive producer on "Miami Vice" and "Crime Story". Contrary to popular belief, he was not the creator of these shows, but rather the executive producer and showrunner; the shows were produced by his production company. 
Mann's first cinema feature as director was "Thief" (1981), starring James Caan, a relatively accurate depiction of thieves operating in New York City and Chicago at that time. Mann used actual former professional burglars to keep the technical scenes as genuine as possible. His next film was "The Keep" (1983), a supernatural thriller set in Nazi-occupied Romania. Though it was a commercial flop, the film has since attained cult status amongst fans. In 1986, Mann was the first to bring Thomas Harris' character of serial killer Hannibal Lecter to the screen with "Manhunter", his adaptation of the novel "Red Dragon", which starred Brian Cox as Hannibal. In an interview on the "Manhunter" DVD, star William Petersen comments that because Mann is so focused on his creations, it takes several years for him to complete a film; Petersen believes that this is why Mann does not make films very often. Mann gained widespread recognition in 1992 for his adaptation of James Fenimore Cooper's novel into the epic film "The Last of the Mohicans". This was followed by "Heat", featuring Al Pacino co-starring with Robert De Niro, and "The Insider", starring Russell Crowe; both showcased Mann's cinematic style and garnered the most critical recognition of his career up to that point. "The Insider" was nominated for seven Academy Awards as a result, including a nomination for Mann's direction. With his next film, "Ali" (2001), starring Will Smith, Mann started experimenting with digital cameras. For his action thriller film "Collateral", which cast Tom Cruise against type by giving him the role of a hitman, Mann shot all of the exterior scenes digitally so that he could achieve more depth and detail during the night scenes, while shooting most of the interiors on film stock. Jamie Foxx was nominated for an Academy Award for his performance in "Collateral". 
In 2004, Mann produced "The Aviator", based on the life of Howard Hughes, which he had developed with Leonardo DiCaprio. "The Aviator" was nominated for the Academy Award for Best Picture but lost to "Million Dollar Baby". After "Collateral", Mann directed the film adaptation of "Miami Vice", which he also executive produced. It stars a completely new cast, with Colin Farrell as Don Johnson's character Sonny Crockett and Jamie Foxx taking over Philip Michael Thomas's role as Ricardo Tubbs. Mann served as a producer, with Peter Berg as director, for "The Kingdom" and "Hancock". "Hancock" stars Will Smith as a hard-drinking superhero who has fallen out of favor with the public and who begins to have a relationship with the wife (Charlize Theron) of a public relations expert (Jason Bateman) who is helping him to repair his image. Mann also makes a cameo appearance in the film as an executive. In 2009, Mann wrote and directed "Public Enemies" for Universal Pictures, about the Depression-era crime wave, based on Bryan Burrough's nonfiction book "Public Enemies: America's Greatest Crime Wave and the Birth of the FBI, 1933–34". It starred Johnny Depp as John Dillinger and Christian Bale as Melvin Purvis, the FBI agent in charge of capturing Dillinger. In January 2010, "Variety" reported that Mann, alongside David Milch, would serve as co-executive producer of the new TV series "Luck", an hour-long HBO production; Mann directed the series' pilot. Although initially renewed for a second season after the airing of the pilot, the series was eventually cancelled due to the death of three horses during production. On February 14, 2013, it was announced that Mann had been developing an untitled thriller film with screenwriter Morgan Davis Foehl for over a year, for Legendary Pictures. In May 2013, Mann started filming the action thriller, named "Blackhat", in Los Angeles, Kuala Lumpur, Hong Kong and Jakarta. 
The film, starring Chris Hemsworth as a hacker who is released from prison to pursue a cyberterrorist across the globe, was released on January 16, 2015 by Universal. It received mixed reviews and was a commercial disaster, although many critics included it in their year-end "best-of" lists. Mann will next direct "Ferrari", from a screenplay he wrote about Ferrari founder Enzo Ferrari, with Hugh Jackman set to portray Ferrari. Mann will direct the pilot episode of "Tokyo Vice" for HBO Max. His trademarks include powerfully lit nighttime scenes and unusual scores, such as Tangerine Dream's music in "Thief" or the new-age score of "Manhunter". Dante Spinotti is a frequent cinematographer of Mann's films. F.X. Feeney describes Mann's body of work in "DGA Quarterly" as "abundantly energetic in its precision and variety" and "psychologically layered". "Indiewire"'s 2014 retrospective of the director's filmography focused on the intensity of Mann's ongoing interest in "stories pitting criminals against those who seek to put them behind bars ("Heat, Public Enemies, Thief, Collateral, Miami Vice"). His films frequently suggest that in fact, at the top of their respective games, crooks and cops are not so dissimilar as men: they each live and die by their own codes and they each recognize themselves in the other." Mann directed the 2002 "Lucky Star" advertisement for Mercedes-Benz, which took the form of a film trailer for a purported thriller featuring Benicio del Toro. In the fall of 2007, Mann directed two commercials for Nike. The ad campaign "Leave Nothing" features football action scenes with former NFL players Shawne Merriman and Steven Jackson, as well as the score "Promontory" from the soundtrack of "The Last of the Mohicans". Mann also directed the 2008 promotional video for Ferrari's California sports car. 
https://en.wikipedia.org/wiki?curid=19565
Main-group element In chemistry and atomic physics, the main group is the group of elements (sometimes called the representative elements) whose lightest members are represented by helium, lithium, beryllium, boron, carbon, nitrogen, oxygen, and fluorine, as arranged in the periodic table of the elements. The main group includes the elements (except hydrogen, which is sometimes not included) in groups 1 and 2 (s-block) and groups 13 to 18 (p-block). The s-block elements are primarily characterised by one main oxidation state, while the p-block elements, when they have multiple oxidation states, often have common oxidation states separated by two units. Main-group elements (together with some of the lighter transition metals) are the most abundant elements on Earth, in the Solar System, and in the universe. Group 12 elements are often considered to be transition metals; however, zinc (Zn), cadmium (Cd), and mercury (Hg) share some properties of both groups, and many scientists believe they should be included in the main group. Occasionally, even the group 3 elements as well as the lanthanides and actinides have been included, because the group 3 elements and the lanthanides in particular are electropositive elements with only one main oxidation state, like the group 1 and 2 elements. The position of the actinides is more questionable, but the most common and stable of them, thorium (Th) and uranium (U), are similar to main-group elements: thorium is an electropositive element with only one main oxidation state (+4), and uranium has two main oxidation states separated by two units (+4 and +6). In older nomenclature, the main-group elements are groups IA and IIA, and groups IIIB to 0 (CAS groups IIIA to VIIIA). Group 12 is labelled as group IIB in both systems. Group 3 is labelled as group IIIA in the older nomenclature (CAS group IIIB).
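The oxidation-state patterns described above can be sketched in a short illustrative snippet. This is a simplified summary for demonstration, not an exhaustive chemistry reference; the selection of groups and states is my own:

```python
# Simplified, illustrative summary of main oxidation states by group.
# The s-block groups show one main state; for the p-block groups listed
# here, the common positive states differ by two units.
main_oxidation_states = {
    1: [+1],        # alkali metals, e.g. Na(+1)
    2: [+2],        # alkaline earth metals, e.g. Mg(+2)
    13: [+1, +3],   # e.g. Tl(I) and Tl(III); Al is essentially +3 only
    14: [+2, +4],   # e.g. Sn(II)/Sn(IV), Pb(II)/Pb(IV)
    15: [+3, +5],   # e.g. P(III)/P(V)
    16: [+4, +6],   # e.g. S(IV) in sulfites, S(VI) in sulfates
    17: [+5, +7],   # e.g. Cl(V) in chlorates, Cl(VII) in perchlorates
}

# Verify the "separated by two units" pattern for every listed group:
for group, states in main_oxidation_states.items():
    gaps = [b - a for a, b in zip(states, states[1:])]
    assert all(gap == 2 for gap in gaps), (group, gaps)
print("pattern holds for groups:", sorted(main_oxidation_states))
```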
https://en.wikipedia.org/wiki?curid=19566
Microscopy Microscopy is the technical field of using microscopes to view objects and areas of objects that cannot be seen with the naked eye (objects that are not within the resolution range of the normal eye). There are three well-known branches of microscopy: optical, electron, and scanning probe microscopy, along with the emerging field of X-ray microscopy. Optical microscopy and electron microscopy involve the diffraction, reflection, or refraction of electromagnetic radiation/electron beams interacting with the specimen, and the collection of the scattered radiation or another signal in order to create an image. This process may be carried out by wide-field irradiation of the sample (for example standard light microscopy and transmission electron microscopy) or by scanning a fine beam over the sample (for example confocal laser scanning microscopy and scanning electron microscopy). Scanning probe microscopy involves the interaction of a scanning probe with the surface of the object of interest. The development of microscopy revolutionized biology, gave rise to the field of histology, and so remains an essential technique in the life and physical sciences. X-ray microscopy is three-dimensional and non-destructive, allowing for repeated imaging of the same sample for in situ or 4D studies, and providing the ability to "see inside" the sample being studied before sacrificing it to higher-resolution techniques. A 3D X-ray microscope uses the technique of computed tomography (microCT), rotating the sample 360 degrees and reconstructing the images. CT is typically carried out with a flat panel detector. A 3D X-ray microscope employs a range of objectives, e.g., from 4X to 40X, and can also include a flat panel detector. The field of microscopy (optical microscopy) dates back to at least the 17th century. 
Earlier microscopes, single-lens magnifying glasses with limited magnification, date at least as far back as the widespread use of lenses in eyeglasses in the 13th century, but more advanced compound microscopes first appeared in Europe around 1620. The earliest practitioners of microscopy include Galileo Galilei, who found in 1610 that he could close-focus his telescope to view small objects close up, and Cornelis Drebbel, who may have invented the compound microscope around 1620. Antonie van Leeuwenhoek developed a very high magnification simple microscope in the 1670s and is often considered the first acknowledged microscopist and microbiologist. Optical or light microscopy involves passing visible light transmitted through or reflected from the sample through a single lens or multiple lenses to allow a magnified view of the sample. The resulting image can be detected directly by the eye, imaged on a photographic plate, or captured digitally. The single lens with its attachments, or the system of lenses and imaging equipment, along with the appropriate lighting equipment, sample stage, and support, makes up the basic light microscope. The most recent development is the digital microscope, which uses a CCD camera to focus on the exhibit of interest. The image is shown on a computer screen, so eyepieces are unnecessary. Standard optical microscopy (bright field microscopy) has limitations in several areas. Live cells in particular generally lack sufficient contrast to be studied successfully, since the internal structures of the cell are colorless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes, but this often involves killing and fixing the sample. Staining may also introduce artifacts: apparent structural details that are caused by the processing of the specimen and are thus not legitimate features of the specimen. 
In general, these techniques make use of differences in the refractive index of cell structures. Bright field microscopy is comparable to looking through a glass window: one sees not the glass but merely the dirt on the glass. There is a difference, as glass is a denser material, and this creates a difference in phase of the light passing through. The human eye is not sensitive to this difference in phase, but clever optical solutions have been devised to change this difference in phase into a difference in amplitude (light intensity). In order to improve specimen contrast or highlight certain structures in a sample, special techniques must be used. A huge selection of microscopy techniques is available to increase contrast or label a sample. Illustrative micrographs of the same paper sample show the main contrast mechanisms: bright field illumination, where sample contrast comes from absorbance of light in the sample; cross-polarized light illumination, where contrast comes from rotation of polarized light through the sample; dark field illumination, where contrast comes from light scattered by the sample; and phase contrast illumination, where contrast comes from interference of different path lengths of light through the sample. Bright field microscopy is the simplest of all the light microscopy techniques. Sample illumination is via transmitted white light, i.e. illuminated from below and observed from above. Limitations include low contrast of most biological samples and low apparent resolution due to the blur of out-of-focus material. The simplicity of the technique and the minimal sample preparation required are significant advantages. The use of oblique (from the side) illumination gives the image a three-dimensional (3D) appearance and can highlight otherwise invisible features. 
A more recent technique based on this method is "Hoffmann's modulation contrast", a system found on inverted microscopes for use in cell culture. Oblique illumination suffers from the same limitations as bright field microscopy (low contrast of many biological samples; low apparent resolution due to out-of-focus objects). Dark field microscopy is a technique for improving the contrast of unstained, transparent specimens. Dark field illumination uses a carefully aligned light source to minimize the quantity of directly transmitted (unscattered) light entering the image plane, collecting only the light scattered by the sample. Dark field can dramatically improve image contrast – especially of transparent objects – while requiring little equipment setup or sample preparation. However, the technique suffers from low light intensity in the final image of many biological samples and continues to be affected by low apparent resolution. "Rheinberg illumination" is a special variant of dark field illumination in which transparent, colored filters are inserted just before the condenser so that light rays at high aperture are differently colored than those at low aperture (i.e., the background to the specimen may be blue while the object appears self-luminous red). Other color combinations are possible, but their effectiveness is quite variable. Dispersion staining is an optical technique that results in a colored image of a colorless object. This is an optical staining technique and requires no stains or dyes to produce a color effect. There are five different microscope configurations used in the broader technique of dispersion staining: brightfield Becke line, oblique, darkfield, phase contrast, and objective stop dispersion staining. More sophisticated techniques will show proportional differences in optical density. Phase contrast is a widely used technique that shows differences in refractive index as differences in contrast. 
It was developed by the Dutch physicist Frits Zernike in the 1930s (for which he was awarded the Nobel Prize in 1953). The nucleus in a cell, for example, will show up darkly against the surrounding cytoplasm. Contrast is excellent; however, the technique is not suited to thick objects. Frequently, a halo is formed even around small objects, which obscures detail. The system consists of a circular annulus in the condenser, which produces a cone of light. This cone is superimposed on a similarly sized ring within the phase-objective. Every objective has a different size ring, so for every objective another condenser setting has to be chosen. The ring in the objective has special optical properties: first of all, it reduces the intensity of the direct light, but more importantly, it creates an artificial phase difference of about a quarter wavelength. As the physical properties of this direct light have changed, interference with the diffracted light occurs, resulting in the phase contrast image. One disadvantage of phase-contrast microscopy is halo formation (halo-light ring). Superior, and much more expensive, is the use of interference contrast. Differences in optical density will show up as differences in relief. A nucleus within a cell will actually show up as a globule in the most often used differential interference contrast system according to Georges Nomarski. However, it has to be kept in mind that this is an "optical effect", and the relief does not necessarily resemble the true shape. Contrast is very good and the condenser aperture can be used fully open, thereby reducing the depth of field and maximizing resolution. The system consists of a special prism (Nomarski prism, Wollaston prism) in the condenser that splits light into an ordinary and an extraordinary beam. The spatial difference between the two beams is minimal (less than the maximum resolution of the objective). After passage through the specimen, the beams are reunited by a similar prism in the objective. 
In a homogeneous specimen, there is no difference between the two beams, and no contrast is being generated. However, near a refractive boundary (say a nucleus within the cytoplasm), the difference between the ordinary and the extraordinary beam will generate a relief in the image. Differential interference contrast requires a polarized light source to function; two polarizing filters have to be fitted in the light path, one below the condenser (the polarizer), and the other above the objective (the analyzer). Note: In cases where the optical design of a microscope produces an appreciable lateral separation of the two beams we have the case of classical interference microscopy, which does not result in relief images, but can nevertheless be used for the quantitative determination of mass-thicknesses of microscopic objects. An additional technique using interference is interference reflection microscopy (also known as reflected interference contrast, or RIC). It relies on cell adhesion to the slide to produce an interference signal. If there is no cell attached to the glass, there will be no interference. Interference reflection microscopy can be obtained by using the same elements used by DIC, but without the prisms. Also, the light that is being detected is reflected and not transmitted as it is when DIC is employed. When certain compounds are illuminated with high energy light, they emit light of a lower frequency. This effect is known as fluorescence. Often specimens show their characteristic autofluorescence image, based on their chemical makeup. This method is of critical importance in the modern life sciences, as it can be extremely sensitive, allowing the detection of single molecules. Many different fluorescent dyes can be used to stain different structures or chemical compounds. One particularly powerful method is the combination of antibodies coupled to a fluorophore as in immunostaining. 
Examples of commonly used fluorophores are fluorescein or rhodamine. The antibodies can be tailor-made for a chemical compound. For example, one strategy often in use is the artificial production of proteins, based on the genetic code (DNA). These proteins can then be used to immunize rabbits, forming antibodies which bind to the protein. The antibodies are then coupled chemically to a fluorophore and used to trace the proteins in the cells under study. Highly efficient fluorescent proteins such as the green fluorescent protein (GFP) have been developed using the molecular biology technique of gene fusion, a process that links the expression of the fluorescent compound to that of the target protein. This combined fluorescent protein is, in general, non-toxic to the organism and rarely interferes with the function of the protein under study. Genetically modified cells or organisms directly express the fluorescently tagged proteins, which enables the study of the function of the original protein in vivo. Growth of protein crystals results in both protein and salt crystals. Both are colorless and microscopic. Recovery of the protein crystals requires imaging, which can be done by the intrinsic fluorescence of the protein or by using transmission microscopy. Both methods require an ultraviolet microscope, as protein absorbs light at 280 nm. Protein will also fluoresce at approximately 353 nm when excited with 280 nm light. Since fluorescence emission differs in wavelength (color) from the excitation light, an ideal fluorescent image shows only the structure of interest that was labeled with the fluorescent dye. This high specificity led to the widespread use of fluorescence light microscopy in biomedical research. Different fluorescent dyes can be used to stain different biological structures, which can then be detected simultaneously, while still being specific due to the individual color of the dye. 
To block the excitation light from reaching the observer or the detector, filter sets of high quality are needed. These typically consist of an excitation filter selecting the range of excitation wavelengths, a dichroic mirror, and an emission filter blocking the excitation light. Most fluorescence microscopes are operated in the epi-illumination mode (illumination and detection from one side of the sample) to further decrease the amount of excitation light entering the detector. See also: total internal reflection fluorescence microscope. Confocal laser scanning microscopy uses a focused laser beam (e.g. 488 nm) that is scanned across the sample to excite fluorescence in a point-by-point fashion. The emitted light is directed through a pinhole to prevent out-of-focus light from reaching the detector, typically a photomultiplier tube. The image is constructed in a computer, plotting the measured fluorescence intensities according to the position of the excitation laser. Compared to full sample illumination, confocal microscopy gives slightly higher lateral resolution and significantly improves optical sectioning (axial resolution). Confocal microscopy is, therefore, commonly used where 3D structure is important. A subclass of confocal microscopes are spinning disc microscopes, which are able to scan multiple points simultaneously across the sample. A corresponding disc with pinholes rejects out-of-focus light. The light detector in a spinning disc microscope is a digital camera, typically EM-CCD or sCMOS. A two-photon microscope is also a laser-scanning microscope, but instead of UV, blue or green laser light, a pulsed infrared laser is used for excitation. Only in the tiny focus of the laser is the intensity high enough to generate fluorescence by two-photon excitation, which means that no out-of-focus fluorescence is generated, and no pinhole is necessary to clean up the image. 
This allows imaging deep in scattering tissue, where a confocal microscope would not be able to collect photons efficiently. Two-photon microscopes with wide-field detection are frequently used for functional imaging, e.g. calcium imaging, in brain tissue. They are marketed as multiphoton microscopes by several companies, although the gains of using 3-photon instead of 2-photon excitation are marginal. Using a plane of light formed by focusing light through a cylindrical lens at a narrow angle, or by scanning a line of light in a plane perpendicular to the axis of the objective, high-resolution optical sections can be taken. Single plane illumination, or light sheet illumination, is also accomplished using beam shaping techniques incorporating multiple-prism beam expanders. The images are captured by CCDs. These variants allow very fast and high signal-to-noise ratio image capture. Wide-field multiphoton microscopy refers to an optical non-linear imaging technique in which a large area of the object is illuminated and imaged without the need for scanning. High intensities are required to induce non-linear optical processes such as two-photon fluorescence or second harmonic generation. In scanning multiphoton microscopes the high intensities are achieved by tightly focusing the light, and the image is obtained by beam scanning. In wide-field multiphoton microscopy the high intensities are best achieved using an optically amplified pulsed laser source to attain a large field of view (~100 µm). The image in this case is obtained as a single frame with a CCD camera without the need of scanning, making the technique particularly useful for visualizing dynamic processes simultaneously across the object of interest. With wide-field multiphoton microscopy the frame rate can be increased up to 1000-fold compared to multiphoton scanning microscopy. In scattering tissue, however, image quality rapidly degrades with increasing depth. 
Fluorescence microscopy is a powerful technique to show specifically labeled structures within a complex environment and to provide three-dimensional information on biological structures. However, this information is blurred by the fact that, upon illumination, all fluorescently labeled structures emit light, irrespective of whether they are in focus or not. An image of a certain structure is thus always blurred by the contribution of light from structures that are out of focus. This phenomenon results in a loss of contrast, especially when using objectives with a high resolving power, typically oil immersion objectives with a high numerical aperture. However, blurring is not caused by random processes, such as light scattering, but can be well defined by the optical properties of the image formation in the microscope imaging system. If one considers a small fluorescent light source (essentially a bright spot), light coming from this spot spreads out more the further it is from focus. Under ideal conditions, this produces an "hourglass" shape of this point source in the third (axial) dimension. This shape is called the point spread function (PSF) of the microscope imaging system. Since any fluorescence image is made up of a large number of such small fluorescent light sources, the image is said to be "convolved by the point spread function". The mathematically modeled PSF of a terahertz laser pulsed imaging system is shown on the right. The output of an imaging system can be described using the equation s = PSF ∗ o + n, where o is the object, ∗ denotes convolution, and n is the additive noise. Knowing this point spread function means that it is possible to reverse this process to a certain extent by computer-based methods commonly known as deconvolution microscopy. There are various algorithms available for 2D or 3D deconvolution. They can be roughly classified into "nonrestorative" and "restorative" methods. 
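The reversal of PSF blurring can be sketched with a toy Wiener-style deconvolution in the Fourier domain. This is a simplified stand-in for the algorithms used in practice; the Gaussian PSF and all parameter values here are illustrative:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Synthetic Gaussian point spread function, normalized to unit sum."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(image, psf, k=1e-3):
    """Approximately invert s = PSF * o + n in the Fourier domain;
    the small constant k regularizes division where |H| is tiny."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    S = np.fft.fft2(image)
    O = np.conj(H) / (np.abs(H) ** 2 + k) * S
    return np.real(np.fft.ifft2(O))

# Blur a point source with the PSF, then recover it.
truth = np.zeros((64, 64))
truth[32, 32] = 1.0
psf = gaussian_psf(truth.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
peak = tuple(int(i) for i in np.unravel_index(restored.argmax(), restored.shape))
print(peak)  # the brightest pixel is back at (32, 32), much sharper than in `blurred`
```

The regularization constant k plays the role of a noise-to-signal estimate: without it, dividing by near-zero values of H at high spatial frequencies would amplify noise without bound.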
While the nonrestorative methods can improve contrast by removing out-of-focus light from focal planes, only the restorative methods can actually reassign light to its proper place of origin. Processing fluorescent images in this manner can be an advantage over directly acquiring images without out-of-focus light, such as images from confocal microscopy, because light signals otherwise eliminated become useful information. For 3D deconvolution, one typically provides a series of images taken from different focal planes (called a Z-stack) plus the knowledge of the PSF, which can be derived either experimentally or theoretically from knowing all contributing parameters of the microscope. A multitude of super-resolution microscopy techniques have been developed in recent times which circumvent the diffraction barrier. This is mostly achieved by imaging a sufficiently static sample multiple times and either modifying the excitation light or observing stochastic changes in the image. The deconvolution methods described in the previous section, which remove the PSF-induced blur and assign a mathematically 'correct' origin of light, are used, albeit with a slightly different understanding of what the value of a pixel means. Assuming that, most of the time, a single fluorophore contributes to a single blob in a single acquired image, the blobs in the images can be replaced with their calculated positions, vastly improving resolution to well below the diffraction limit. To realize this assumption, knowledge of and chemical control over fluorophore photophysics is at the core of these techniques, by which resolutions of ~20 nanometers are regularly obtained. 
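The localization step — replacing each isolated blob with a computed position — can be illustrated with a toy centroid estimator. The intensity-weighted centroid is a crude stand-in for the Gaussian PSF fitting used in real localization microscopy, and the emitter position and spot size below are made up:

```python
import numpy as np

def localize_blob(image):
    """Estimate a single emitter's sub-pixel position as the intensity-weighted
    centroid of its diffraction-limited blob (a stand-in for Gaussian fitting)."""
    y, x = np.indices(image.shape)
    total = image.sum()
    return float((y * image).sum() / total), float((x * image).sum() / total)

# Simulate one fluorophore at (20.3, 11.7) blurred into a spot a few pixels wide.
yy, xx = np.indices((40, 40))
blob = np.exp(-((yy - 20.3) ** 2 + (xx - 11.7) ** 2) / (2 * 3.0 ** 2))
cy, cx = localize_blob(blob)
print(round(cy, 2), round(cx, 2))  # recovers ~ (20.3, 11.7), far below one pixel of error
```

Even though the blob itself is several pixels wide (diffraction-limited), its center can be located to a small fraction of a pixel, which is what allows the final reconstructed image to beat the diffraction limit.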
Serial time encoded amplified microscopy (STEAM) is an imaging method that provides ultrafast shutter speed and frame rate by using optical image amplification to circumvent the fundamental trade-off between sensitivity and speed, and a single-pixel photodetector to eliminate the need for a detector array and readout time limitations. The method is at least 1000 times faster than state-of-the-art CCD and CMOS cameras. Consequently, it is potentially useful for a broad range of scientific, industrial, and biomedical applications that require high image acquisition rates, including real-time diagnosis and evaluation of shockwaves, microfluidics, MEMS, and laser surgery. Most modern instruments provide simple solutions for micro-photography and electronic image recording. However, such capabilities are not always present, and the more experienced microscopist will, in many cases, still prefer a hand-drawn image to a photograph. This is because a microscopist with knowledge of the subject can accurately convert a three-dimensional image into a precise two-dimensional drawing. In a photograph or other image capture system, however, only one thin plane is ever in good focus. The creation of careful and accurate micrographs requires a microscopical technique using a monocular eyepiece. It is essential that both eyes are open and that the eye that is not observing down the microscope is instead concentrated on a sheet of paper on the bench beside the microscope. With practice, and without moving the head or eyes, it is possible to accurately record the observed details by tracing around the observed shapes while simultaneously "seeing" the pencil point in the microscopical image. Practicing this technique also establishes good general microscopical technique. It is always less tiring to observe with the microscope focused so that the image is seen at infinity and with both eyes open at all times. 
Microspectroscopy is spectroscopy performed with a microscope. Because resolution depends on the wavelength of the light, electron microscopy, which uses electron beams instead of light, has been developed since the 1930s; thanks to the much smaller wavelength of the electron beam, its resolution is far higher. Though less common, X-ray microscopy has also been developed, since the late 1940s. The resolution of X-ray microscopy lies between that of light microscopy and electron microscopy. Until the invention of sub-diffraction microscopy, the wavelength of the light limited the resolution of traditional microscopy to around 0.2 micrometers. In order to gain higher resolution, electron microscopes use an electron beam with a far smaller wavelength. Electron microscopes equipped for X-ray spectroscopy can provide qualitative and quantitative elemental analysis. This type of electron microscope, also known as an analytical electron microscope, can be a very powerful characterisation tool for the investigation of nanomaterials. Scanning probe microscopy is a sub-diffraction technique. Examples of scanning probe microscopes are the atomic force microscope (AFM), the scanning tunneling microscope, the photonic force microscope and the recurrence tracking microscope. All such methods use the physical contact of a solid probe tip to scan the surface of an object, which is supposed to be almost flat. Ultrasonic force microscopy (UFM) has been developed in order to improve the details and image contrast on "flat" areas of interest where AFM images are limited in contrast. The combination of AFM and UFM allows a near-field acoustic microscopic image to be generated. The AFM tip is used to detect the ultrasonic waves and overcomes the limitation of wavelength that occurs in acoustic microscopy. By using the elastic changes under the AFM tip, an image of much greater detail than the AFM topography can be generated. 
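The roughly 0.2-micrometer limit of traditional light microscopy quoted above follows from the Abbe diffraction formula, d = λ / (2·NA). A quick check, with illustrative values:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest resolvable feature d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (550 nm) through a high-NA (1.4) oil-immersion objective:
print(round(abbe_limit_nm(550, 1.4)))  # 196 (nm), i.e. about 0.2 micrometers

# The same formula explains why electron microscopes resolve far finer detail:
# a 100 keV electron beam has a de Broglie wavelength of only a few picometers.
```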
Ultrasonic force microscopy allows the local mapping of elasticity in atomic force microscopy by the application of ultrasonic vibration to the cantilever or sample. In an attempt to analyze the results of ultrasonic force microscopy in a quantitative fashion, a force-distance curve measurement is done with ultrasonic vibration applied to the cantilever base, and the results are compared with a model of the cantilever dynamics and tip-sample interaction based on the finite-difference technique. Ultraviolet microscopes have two main purposes. The first is to utilize the shorter wavelength of ultraviolet electromagnetic energy to improve the image resolution beyond that of the diffraction limit of standard optical microscopes. This technique is used for non-destructive inspection of devices with very small features such as those found in modern semiconductors. The second application for UV microscopes is contrast enhancement where the response of individual samples is enhanced, relative to their surrounding, due to the interaction of light with the molecules within the sample itself. One example is in the growth of protein crystals. Protein crystals are formed in salt solutions. As salt and protein crystals are both formed in the growth process, and both are commonly transparent to the human eye, they cannot be differentiated with a standard optical microscope. As the tryptophan of protein absorbs light at 280 nm, imaging with a UV microscope with 280 nm bandpass filters makes it simple to differentiate between the two types of crystals. The protein crystals appear dark while the salt crystals are transparent. The term "infrared microscopy" refers to microscopy performed at infrared wavelengths. In the typical instrument configuration, a Fourier Transform Infrared Spectrometer (FTIR) is combined with an optical microscope and an infrared detector. The infrared detector can be a single point detector, a linear array or a 2D focal plane array. 
FTIR provides the ability to perform chemical analysis via infrared spectroscopy, and the microscope and point or array detector enable this chemical analysis to be spatially resolved, i.e. performed at different regions of the sample. As such, the technique is also called infrared microspectroscopy. An alternative architecture, called Laser Direct Infrared (LDIR) imaging, involves the combination of a tuneable infrared light source and a single point detector on a flying objective. This technique is frequently used for infrared chemical imaging, where the image contrast is determined by the response of individual sample regions to particular IR wavelengths selected by the user, usually specific IR absorption bands and associated molecular resonances. A key limitation of conventional infrared microspectroscopy is that the spatial resolution is diffraction-limited. Specifically, the spatial resolution is limited to a figure related to the wavelength of the light. For practical IR microscopes, the spatial resolution is limited to 1–3 times the wavelength, depending on the specific technique and instrument used. For mid-IR wavelengths, this sets a practical spatial resolution limit of ~3–30 μm. IR versions of sub-diffraction microscopy also exist. These include IR near-field scanning optical microscopy (NSOM), photothermal microspectroscopy, Nano-FTIR and atomic force microscope based infrared spectroscopy (AFM-IR). In digital holographic microscopy (DHM), interfering wave fronts from a coherent (monochromatic) light source are recorded on a sensor. The image is digitally reconstructed by a computer from the recorded hologram. Besides the ordinary bright field image, a phase shift image is created. DHM can operate both in reflection and transmission mode. In reflection mode, the phase shift image provides a relative distance measurement and thus represents a topography map of the reflecting surface. 
In transmission mode, the phase shift image provides a label-free quantitative measurement of the optical thickness of the specimen. Phase shift images of biological cells are very similar to images of stained cells and have successfully been analyzed by high content analysis software. A unique feature of DHM is the ability to adjust focus after the image is recorded, since all focus planes are recorded simultaneously by the hologram. This feature makes it possible to image moving particles in a volume or to rapidly scan a surface. Another attractive feature is DHM's ability to use low cost optics by correcting optical aberrations by software. Digital pathology is an image-based information environment enabled by computer technology that allows for the management of information generated from a digital slide. Digital pathology is enabled in part by virtual microscopy, which is the practice of converting glass slides into digital slides that can be viewed, managed, and analyzed. Laser microscopy is a rapidly growing field that uses laser illumination sources in various forms of microscopy. For instance, laser microscopy focused on biological applications uses ultrashort pulse lasers, in a number of techniques labeled as nonlinear microscopy, saturation microscopy, and two-photon excitation microscopy. High-intensity, short-pulse laboratory x-ray lasers have been under development for several years. When this technology comes to fruition, it will be possible to obtain magnified three-dimensional images of elementary biological structures in the living state at a precisely defined instant. For optimum contrast between water and protein and for best sensitivity and resolution, the laser should be tuned near the nitrogen line at about 0.3 nanometers. Resolution will be limited mainly by the hydrodynamic expansion that occurs while the necessary number of photons is being registered. 
Thus, while the specimen is destroyed by the exposure, its configuration can be captured before it explodes. Scientists have been working on practical designs and prototypes for x-ray holographic microscopes, despite the prolonged development of the appropriate laser. Photoacoustic microscopy is a technique relying on the photoacoustic effect, i.e. the generation of (ultra)sound caused by light absorption. A focused and intensity-modulated laser beam is raster scanned over a sample. The generated (ultra)sound is detected via an ultrasound transducer. Commonly, piezoelectric ultrasound transducers are employed. The image contrast is related to the sample's absorption coefficient μ_a. This is in contrast to bright or dark field microscopy, where the image contrast is due to transmittance or scattering. In principle, the contrast of fluorescence microscopy is proportional to the sample's absorption too. However, in fluorescence microscopy the fluorescence quantum yield η_fl needs to be unequal to zero in order that a signal can be detected. In photoacoustic microscopy, however, every absorbing substance gives a photoacoustic signal PA, which is proportional to PA ∝ Γ(1 − η_fl·E_g/E_ν)μ_a. Here Γ is the Grüneisen coefficient, E_ν is the laser's photon energy and E_g is the sample's band gap energy. Therefore, photoacoustic microscopy seems well suited as a complementary technique to fluorescence microscopy, as a high fluorescence quantum yield leads to high fluorescence signals and a low fluorescence quantum yield leads to high photoacoustic signals. Neglecting non-linear effects, the lateral resolution dx is limited by the Abbe diffraction limit: dx = λ/(2·NA), where λ is the wavelength of the excitation laser and NA is the numerical aperture of the objective lens. The Abbe diffraction limit holds if the incoming wave front is parallel. In reality, however, the laser beam profile is Gaussian. 
Therefore, in order to calculate the achievable resolution, formulas for truncated Gaussian beams have to be used. "Amateur microscopy" is the investigation and observation of biological and non-biological specimens for recreational purposes. Collectors of minerals, insects, seashells, and plants may use microscopes as tools to uncover features that help them classify their collected items. Other amateurs may be interested in observing the life found in pond water and in other samples. Microscopes may also prove useful for water quality assessment for people who keep a home aquarium. Photographic documentation and drawing of the microscopic images are additional tasks that augment the spectrum of tasks of the amateur. There are even competitions for photomicrograph art. Participants in this pastime may either use commercially prepared microscopic slides or engage in the task of specimen preparation. While microscopy is a central tool in the documentation of biological specimens, it is, in general, insufficient to justify the description of a new species based on microscopic investigations alone. Often genetic and biochemical tests are necessary to confirm the discovery of a new species. A laboratory and access to academic literature is a necessity, which is specialized and, in general, not available to amateurs. There is, however, one huge advantage that amateurs have over professionals: time to explore their surroundings. Often, advanced amateurs team up with professionals to validate their findings and (possibly) describe new species. In the late 1800s, amateur microscopy became a popular hobby in the United States and Europe. Several 'professional amateurs' were paid for their sampling trips and microscopic explorations by philanthropists, to keep them amused on the Sunday afternoon (e.g., the diatom specialist A. Grunow, being paid by, among others, a Belgian industrialist). 
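The complementarity between fluorescence and photoacoustic contrast described above can be illustrated numerically. A minimal sketch, treating the stated proportionalities (fluorescence ∝ η_fl·μ_a and PA ∝ Γ(1 − η_fl·E_g/E_ν)μ_a) as equalities in arbitrary units; the function name and the chosen parameter values are illustrative assumptions:

```python
def relative_signals(eta_fl, e_photon_ev, e_gap_ev, mu_a=1.0, grueneisen=1.0):
    """Relative fluorescence and photoacoustic signal strengths (arbitrary
    units) for a single absorber, using the quoted proportionalities:
      fluorescence  ~ eta_fl * mu_a
      photoacoustic ~ Grueneisen * (1 - eta_fl * E_gap / E_photon) * mu_a
    Parameter values are illustrative, not measured data."""
    fluorescence = eta_fl * mu_a
    photoacoustic = grueneisen * (1 - eta_fl * e_gap_ev / e_photon_ev) * mu_a
    return fluorescence, photoacoustic

# A bright fluorophore vs. a non-fluorescent absorber, excited at 2.3 eV (~540 nm)
for name, eta in [("bright fluorophore", 0.9), ("dark absorber", 0.0)]:
    fl, pa = relative_signals(eta, e_photon_ev=2.3, e_gap_ev=2.0)
    print(f"{name}: fluorescence={fl:.2f}, photoacoustic={pa:.2f}")
```

The dark absorber, invisible to fluorescence microscopy, gives the largest photoacoustic signal, while the bright fluorophore does the reverse, which is the complementarity the text describes.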
Professor John Phin published "Practical Hints on the Selection and Use of the Microscope" (Second Edition, 1878), and was also the editor of the "American Journal of Microscopy." Microscopy has many applications in the forensic sciences; it provides precision, quality, accuracy, and reproducibility of results. These applications are almost limitless, owing to the ability of the microscope to detect, resolve and image the smallest items of evidence, often without any alteration or destruction. The microscope is used to identify and compare fibers, hairs, soils, dust and other trace evidence. The aim of any microscope is to magnify the image of a small object and to reveal fine detail. In forensic work, the type of specimen, the information one wishes to obtain from it and the type of microscope chosen for the task determine whether sample preparation is required. For example, ink lines, blood stains or bullets require no treatment, and the evidence can be examined directly under an appropriate microscope without any form of sample preparation; for traces of particulate matter, sample preparation must be done before microscopical examination occurs. A variety of microscopes are used in the forensic science laboratory. Light microscopes, which use photons to form images, are the most used in forensic work; those most applicable for examining forensic specimens are the compound microscope, the comparison microscope, the stereoscopic microscope, the polarizing microscope, and the microspectrophotometer. This diversity of microscope types in forensic applications comes mainly from their magnification ranges, which are roughly 1–1,200x for optical microscopy, 50–30,000x for the SEM and 500–250,000x for the TEM.
https://en.wikipedia.org/wiki?curid=19567
Microscope A microscope (from Ancient Greek μικρός, "mikrós", "small" and σκοπεῖν, "skopeîn", "to look" or "see") is an instrument used to see objects that are too small to be seen by the naked eye. Microscopy is the science of investigating small objects and structures using such an instrument. Microscopic means invisible to the eye unless aided by a microscope. There are many types of microscopes, and they may be grouped in different ways. One way is to describe the way the instruments interact with a sample to create images, either by sending a beam of light or electrons to a sample in its optical path, or by scanning across, and a short distance from, the surface of a sample using a probe. The most common microscope (and the first to be invented) is the optical microscope, which uses light passing through a sample to produce an image. Other major types of microscopes are the fluorescence microscope, the electron microscope (both the transmission electron microscope and the scanning electron microscope) and the various types of scanning probe microscopes. Although objects resembling lenses date back 4,000 years and there are Greek accounts of the optical properties of water-filled spheres (5th century BC) followed by many centuries of writings on optics, the earliest known use of simple microscopes (magnifying glasses) dates back to the widespread use of lenses in eyeglasses in the 13th century. The earliest known examples of compound microscopes, which combine an objective lens near the specimen with an eyepiece to view a real image, appeared in Europe around 1620. The inventor is unknown, although many claims have been made over the years. 
Several revolve around the spectacle-making centers in the Netherlands, including claims it was invented in 1590 by Zacharias Janssen (a claim made by his son) and/or Zacharias' father, Hans Martens; claims it was invented by their neighbor and rival spectacle maker, Hans Lippershey (who applied for the first telescope patent in 1608); and claims it was invented by expatriate Cornelis Drebbel, who was noted to have a version in London in 1619. Galileo Galilei (also sometimes cited as a compound microscope inventor) seems to have found after 1610 that he could close-focus his telescope to view small objects and, after seeing a compound microscope built by Drebbel exhibited in Rome in 1624, built his own improved version. Giovanni Faber coined the name "microscope" for the compound microscope Galileo submitted to the Accademia dei Lincei in 1625 (Galileo had called it the "occhiolino", or "little eye"). The first detailed account of the microscopic anatomy of organic tissue based on the use of a microscope did not appear until 1644, in Giambattista Odierna's "L'occhio della mosca", or "The Fly's Eye". The microscope was still largely a novelty until the 1660s and 1670s, when naturalists in Italy, the Netherlands and England began using them to study biology. The Italian scientist Marcello Malpighi, called the father of histology by some historians of biology, began his analysis of biological structures with the lungs. The publication in 1665 of Robert Hooke's "Micrographia" had a huge impact, largely because of its impressive illustrations. A significant contribution came from Antonie van Leeuwenhoek, who achieved up to 300 times magnification using a simple single-lens microscope. He sandwiched a very small glass ball lens between the holes in two metal plates riveted together, with an adjustable-by-screws needle attached to mount the specimen. 
Then, van Leeuwenhoek re-discovered red blood cells (after Jan Swammerdam) and spermatozoa, and helped popularise the use of microscopes to view biological ultrastructure. On 9 October 1676, van Leeuwenhoek reported the discovery of micro-organisms. The performance of a light microscope depends on the quality and correct use of the condenser lens system to focus light on the specimen, and of the objective lens to capture the light from the specimen and form an image. Early instruments were limited until this principle was fully appreciated and developed, from the late 19th to very early 20th century, and until electric lamps were available as light sources. In 1893 August Köhler developed a key principle of sample illumination, Köhler illumination, which is central to achieving the theoretical limits of resolution for the light microscope. This method of sample illumination produces even lighting and overcomes the limited contrast and resolution imposed by early techniques of sample illumination. Further developments in sample illumination came from the invention of phase contrast by Frits Zernike (recognized with the 1953 Nobel Prize in Physics), and of differential interference contrast illumination by Georges Nomarski in 1955; both allow imaging of unstained, transparent samples. In the early 20th century a significant alternative to the light microscope was developed, an instrument that uses a beam of electrons rather than light to generate an image. The German physicist Ernst Ruska, working with electrical engineer Max Knoll, developed the first prototype electron microscope in 1931, a transmission electron microscope (TEM). The transmission electron microscope works on similar principles to an optical microscope but uses electrons in place of light and electromagnets in place of glass lenses. Use of electrons, instead of light, allows for much higher resolution. 
Development of the transmission electron microscope was quickly followed in 1935 by the development of the scanning electron microscope by Max Knoll. Although TEMs were being used for research before WWII, and became popular afterwards, the SEM was not commercially available until 1965. Transmission electron microscopes became popular following the Second World War. Ernst Ruska, working at Siemens, developed the first commercial transmission electron microscope and, in the 1950s, major scientific conferences on electron microscopy started being held. In 1965, the first commercial scanning electron microscope was developed by Professor Sir Charles Oatley and his postgraduate student Gary Stewart, and marketed by the Cambridge Instrument Company as the "Stereoscan". One of the more recent applications of the electron microscope is the identification of viruses. Because this microscope produces a visible, clear image of small organelles, no reagents are needed to see the virus or harmful cells, resulting in a more efficient way to detect pathogens. From 1981 to 1983 Gerd Binnig and Heinrich Rohrer worked at IBM in Zurich, Switzerland to study the quantum tunnelling phenomenon. They created a practical instrument, a scanning probe microscope based on quantum tunnelling theory, that read very small forces exchanged between a probe and the surface of a sample. The probe approaches the surface so closely that electrons can flow continuously between probe and sample, making a current from surface to probe. The microscope was not initially well received due to the complex nature of the underlying theoretical explanations. In 1984 Jerry Tersoff and D.R. Hamann, while at AT&T's Bell Laboratories in Murray Hill, New Jersey, began publishing articles that tied theory to the experimental results obtained by the instrument. 
This was closely followed in 1985 with functioning commercial instruments, and in 1986 with the invention of the atomic force microscope by Gerd Binnig, Calvin Quate, and Christoph Gerber, and then with Binnig's and Rohrer's Nobel Prize in Physics for the STM. New types of scanning probe microscope have continued to be developed as the ability to machine ultra-fine probes and tips has advanced. The most recent developments in light microscopy largely centre on the rise of fluorescence microscopy in biology. During the last decades of the 20th century, particularly in the post-genomic era, many techniques for fluorescent staining of cellular structures were developed. The main groups of techniques involve targeted chemical staining of particular cell structures (for example, the chemical compound DAPI to label DNA), use of antibodies conjugated to fluorescent reporters (see immunofluorescence), and fluorescent proteins, such as green fluorescent protein. These techniques use these different fluorophores for analysis of cell structure at a molecular level in both live and fixed samples. The rise of fluorescence microscopy drove the development of a major modern microscope design, the confocal microscope. The principle was patented in 1957 by Marvin Minsky, although laser technology limited practical application of the technique. It was not until 1978 that Thomas and Christoph Cremer developed the first practical confocal laser scanning microscope, and the technique rapidly gained popularity through the 1980s. Much current research (in the early 21st century) on optical microscope techniques is focused on development of superresolution analysis of fluorescently labelled samples. Structured illumination can improve resolution by around two to four times, and techniques like stimulated emission depletion (STED) microscopy are approaching the resolution of electron microscopes. 
These techniques circumvent the diffraction limit of the excitation light by exploiting the saturable, non-linear response of the fluorophores. Stefan Hell was awarded the 2014 Nobel Prize in Chemistry for the development of the STED technique, along with Eric Betzig and William Moerner, who adapted fluorescence microscopy for single-molecule visualization. X-ray microscopes are instruments that use electromagnetic radiation, usually in the soft X-ray band, to image objects. Technological advances in X-ray lens optics in the early 1970s made the instrument a viable imaging choice. They are often used in tomography (see micro-computed tomography) to produce three-dimensional images of objects, including biological materials that have not been chemically fixed. Research is currently being done to improve optics for hard X-rays, which have greater penetrating power. Microscopes can be separated into several different classes. One grouping is based on what interacts with the sample to generate the image, i.e., light or photons (optical microscopes), electrons (electron microscopes) or a probe (scanning probe microscopes). Alternatively, microscopes can be classified based on whether they analyze the sample via a scanning point (confocal optical microscopes, scanning electron microscopes and scanning probe microscopes) or analyze the sample all at once (wide field optical microscopes and transmission electron microscopes). Wide field optical microscopes and transmission electron microscopes both use the theory of lenses (optics for light microscopes and electromagnet lenses for electron microscopes) in order to magnify the image generated by the passage of a wave transmitted through the sample, or reflected by the sample. The waves used are electromagnetic (in optical microscopes) or electron beams (in electron microscopes). 
Resolution in these microscopes is limited by the wavelength of the radiation used to image the sample, where shorter wavelengths allow for a higher resolution. Scanning optical and electron microscopes, like the confocal microscope and scanning electron microscope, use lenses to focus a spot of light or electrons onto the sample then analyze the signals generated by the beam interacting with the sample. The point is then scanned over the sample to analyze a rectangular region. Magnification of the image is achieved by displaying the data from scanning a physically small sample area on a relatively large screen. These microscopes have the same resolution limit as wide field optical, probe, and electron microscopes. Scanning probe microscopes also analyze a single point in the sample and then scan the probe over a rectangular sample region to build up an image. As these microscopes do not use electromagnetic or electron radiation for imaging they are not subject to the same resolution limit as the optical and electron microscopes described above. The most common type of microscope (and the first invented) is the optical microscope. This is an optical instrument containing one or more lenses producing an enlarged image of a sample placed in the focal plane. Optical microscopes have refractive glass (occasionally plastic or quartz), to focus light on the eye or on to another light detector. Mirror-based optical microscopes operate in the same manner. Typical magnification of a light microscope, assuming visible range light, is up to 1250x with a theoretical resolution limit of around 0.250 micrometres or 250 nanometres. This limits practical magnification to ~1500x. Specialized techniques (e.g., scanning confocal microscopy, Vertico SMI) may exceed this magnification but the resolution is diffraction limited. 
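The resolution and magnification figures quoted above can be checked directly from the Abbe relation d = λ/(2·NA). A minimal sketch (the function name and the chosen wavelength and aperture values are illustrative):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit d = wavelength / (2 * NA), in nanometres."""
    return wavelength_nm / (2 * numerical_aperture)

# Mid-visible (green) light through a high-NA oil-immersion objective
print(round(abbe_limit_nm(550, 1.1), 1))   # ~250 nm, the 0.250 micrometre figure
# Shorter (ultraviolet) wavelengths tighten the limit
print(round(abbe_limit_nm(280, 1.1), 1))   # ~127 nm
```

Magnifying further than the eye needs to perceive detail at this scale yields only "empty magnification", which is why the practical ceiling sits around 1500x despite lenses being able to magnify more.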
The use of shorter wavelengths of light, such as ultraviolet, is one way to improve the spatial resolution of the optical microscope, as are devices such as the near-field scanning optical microscope. Sarfus is a recent optical technique that increases the sensitivity of a standard optical microscope to a point where it is possible to directly visualize nanometric films (down to 0.3 nanometre) and isolated nano-objects (down to 2 nm-diameter). The technique is based on the use of non-reflecting substrates for cross-polarized reflected light microscopy. Ultraviolet light enables the resolution of microscopic features as well as the imaging of samples that are transparent to the eye. Near infrared light can be used to visualize circuitry embedded in bonded silicon devices, since silicon is transparent in this region of wavelengths. In fluorescence microscopy many wavelengths of light ranging from the ultraviolet to the visible can be used to cause samples to fluoresce, which allows viewing by eye or with specifically sensitive cameras. Phase-contrast microscopy is an optical microscopy illumination technique in which small phase shifts in the light passing through a transparent specimen are converted into amplitude or contrast changes in the image. The use of phase contrast does not require staining to view the slide. This microscope technique made it possible to study the cell cycle in live cells. The traditional optical microscope has more recently evolved into the digital microscope. In addition to, or instead of, directly viewing the object through the eyepieces, a type of sensor similar to those used in a digital camera is used to obtain an image, which is then displayed on a computer monitor. These sensors may use CMOS or charge-coupled device (CCD) technology, depending on the application. Digital microscopy with very low light levels to avoid damage to vulnerable biological samples is available using sensitive photon-counting digital cameras. 
It has been demonstrated that a light source providing pairs of entangled photons may minimize the risk of damage to the most light-sensitive samples. In this application of ghost imaging to photon-sparse microscopy, the sample is illuminated with infrared photons, each of which is spatially correlated with an entangled partner in the visible band for efficient imaging by a photon-counting camera. The two major types of electron microscopes are transmission electron microscopes (TEMs) and scanning electron microscopes (SEMs). They both have series of electromagnetic and electrostatic lenses to focus a high-energy beam of electrons on a sample. In a TEM the electrons pass through the sample, analogous to basic optical microscopy. This requires careful sample preparation, since electrons are scattered strongly by most materials. The samples must also be very thin (50–100 nm) in order for the electrons to pass through them. Cross-sections of cells stained with osmium and heavy metals reveal clear organelle membranes and proteins such as ribosomes. With a 0.1 nm level of resolution, detailed views of viruses (20–300 nm) and a strand of DNA (2 nm in width) can be obtained. In contrast, the SEM has raster coils to scan the surface of bulk objects with a fine electron beam. Therefore, specimens do not necessarily need to be sectioned, but coating with a nanometric metal or carbon layer may be needed for nonconductive samples. SEM allows fast surface imaging of samples, possibly in thin water vapor to prevent drying. The different types of scanning probe microscopes arise from the many different types of interactions that occur when a small probe is scanned over and interacts with a specimen. These interactions or modes can be recorded or mapped as a function of location on the surface to form a characterization map. 
The three most common types of scanning probe microscopes are atomic force microscopes (AFM), near-field scanning optical microscopes (NSOM or SNOM, scanning near-field optical microscopy), and scanning tunneling microscopes (STM). An atomic force microscope has a fine probe, usually of silicon or silicon nitride, attached to a cantilever; the probe is scanned over the surface of the sample, and the forces that cause an interaction between the probe and the surface of the sample are measured and mapped. A near-field scanning optical microscope is similar to an AFM, but its probe consists of a light source in an optical fiber covered with a tip that usually has an aperture for the light to pass through. The microscope can capture either transmitted or reflected light to measure very localized optical properties of the surface, commonly of a biological specimen. Scanning tunneling microscopes have a metal tip with a single apical atom; the tip is attached to a tube through which a current flows. The tip is scanned over the surface of a conductive sample until a tunneling current flows; the current is kept constant by computer movement of the tip, and an image is formed by the recorded movements of the tip. Scanning acoustic microscopes use sound waves to measure variations in acoustic impedance. Similar to sonar in principle, they are used for such jobs as detecting defects in the subsurfaces of materials, including those found in integrated circuits. On February 4, 2013, Australian engineers built a "quantum microscope" which provides unparalleled precision.
https://en.wikipedia.org/wiki?curid=19568
Midrash Midrash (pl. "midrashim") is biblical exegesis by ancient Judaic authorities, using a mode of interpretation prominent in the Talmud. The word itself means "textual interpretation" or "study". Midrash and rabbinic readings "discern value in texts, words, and letters, as potential revelatory spaces," writes the Hebrew scholar Wilda C. Gafney. "They reimagine dominant narratival readings while crafting new ones to stand alongside—not replace—former readings. Midrash also asks questions of the text; sometimes it provides answers, sometimes it leaves the reader to answer the questions." Vanessa Lovelace defines midrash as "a Jewish mode of interpretation that not only engages the words of the text, behind the text, and beyond the text, but also focuses on each letter, and the words left unsaid by each line." The term is also used of a rabbinic work that interprets Scripture in that manner. Such works contain early interpretations and commentaries on the Written Torah and Oral Torah (spoken law and sermons), as well as non-legalistic rabbinic literature ("aggadah") and occasionally Jewish religious laws ("halakha"), which usually form a running commentary on specific passages in the Hebrew Scripture ("Tanakh"). "Midrash", especially if capitalized, can refer to a specific compilation of these rabbinic writings composed between 400 and 1200 CE. According to Gary Porton and Jacob Neusner, "midrash" has three technical meanings: 1) Judaic biblical interpretation; 2) the method used in interpreting; 3) a collection of such interpretations. The Hebrew word "midrash" is derived from the root of the verb "darash", which means "resort to, seek, seek with care, enquire, require", forms of which appear frequently in the Bible. The word "midrash" occurs twice in the Hebrew Bible: 2 Chronicles 13:22 "in the "midrash" of the prophet Iddo", and 24:27 "in the "midrash" of the book of the kings". 
KJV and ESV translate the word as "story" in both instances; the Septuagint translates it as βιβλίον (book) in the first, as γραφή (writing) in the second. The meaning of the Hebrew word in these contexts is uncertain: it has been interpreted as referring to "a body of authoritative narratives, or interpretations thereof, concerning historically important figures" and seems to refer to a "book", perhaps even a "book of interpretation", which might make its use a foreshadowing of the technical sense that the rabbis later gave to the word. Since the early Middle Ages the function of much of midrashic interpretation has been distinguished from that of "peshat", straight or direct interpretation aiming at the original literal meaning of a scriptural text. A definition of "midrash" repeatedly quoted by other scholars is that given by Gary G. Porton in 1981: "a type of literature, oral or written, which stands in direct relationship to a fixed, canonical text, considered to be the authoritative and revealed word of God by the midrashist and his audience, and in which this canonical text is explicitly cited or clearly alluded to". Lieve M. Teugels, who would limit midrash to rabbinic literature, offered a definition of midrash as "rabbinic interpretation of Scripture that bears the lemmatic form", a definition that, unlike Porton's, has not been adopted by others. While some scholars agree with the limitation of the term "midrash" to rabbinic writings, others apply it also to certain Qumran writings, to parts of the New Testament, and of the Hebrew Bible (in particular the superscriptions of the Psalms, Deuteronomy, and Chronicles), and even modern compositions are called midrashim. Midrash is now viewed more as method than genre, although the rabbinic midrashim do constitute a distinct literary genre. According to the "Encyclopaedia Britannica", "Midrash was initially a philological method of interpreting the literal meaning of biblical texts. 
In time it developed into a sophisticated interpretive system that reconciled apparent biblical contradictions, established the scriptural basis of new laws, and enriched biblical content with new meaning. Midrashic creativity reached its peak in the schools of Rabbi Ishmael and Akiba, where two different hermeneutic methods were applied. The first was primarily logically oriented, making inferences based upon similarity of content and analogy. The second rested largely upon textual scrutiny, assuming that words and letters that seem superfluous teach something not openly stated in the text." Many different exegetical methods are employed in an effort to derive deeper meaning from a text. This is not limited to the traditional thirteen textual tools attributed to the Tanna Rabbi Ishmael, which are used in the interpretation of "halakha" (Jewish law). The presence of words or letters which are seen to be apparently superfluous, the chronology of events, and parallel narratives or what are seen as other textual "anomalies" are often used as a springboard for interpretation of segments of Biblical text. In many cases, a handful of lines in the Biblical narrative may become a long philosophical discussion. Jacob Neusner distinguishes three midrash processes. Numerous Jewish midrashim previously preserved in manuscript form have been published in print, including those denominated as smaller or minor midrashim. Bernard H. Mehlman and Seth M. Limmer deprecate this usage on the grounds that the term "minor" seems judgmental and "small" is inappropriate for midrashim, some of which are lengthy. They propose instead the term "medieval midrashim", since the period of their production extended from the twilight of the rabbinic age to the dawn of the Age of Enlightenment. 
Generally speaking, rabbinic midrashim either focus on religious law and practice ("halakha") or interpret biblical narrative in relation to non-legal ethics or theology, creating homilies and parables based on the text. In the latter case they are described as "aggadic". "Midrash halakha" is the name given to a group of tannaitic expositions on the first five books of the Hebrew Bible. These midrashim, written in Mishnaic Hebrew, clearly distinguish between the Biblical texts that they discuss and the rabbinic interpretation of that text. They often go well beyond simple interpretation and derive or provide support for halakha. This work is based on pre-set assumptions about the sacred and divine nature of the text, and on the belief in the legitimacy of rabbinic interpretation. Although this material treats the biblical texts as the authoritative word of God, it is clear that not all of the Hebrew Bible was fixed in its wording at this time, as some verses cited differ from the Masoretic text and accord with the Septuagint or Samaritan Torah instead. With the growing canonization of the contents of the Hebrew Bible, both in terms of the books it contained and the version of their text, and with an acceptance that new texts could not be added, there came a need to produce material that would clearly differentiate between that text and rabbinic interpretation of it. By collecting and compiling these thoughts, they could be presented in a manner that helped refute claims that they were only human interpretations: presenting the various collections of different schools of thought, each of which relied upon close study of the text, allowed the growing difference between early biblical law and its later rabbinic interpretation to be reconciled. Midrashim that seek to explain the non-legal portions of the Hebrew Bible are sometimes referred to as "aggadah" or "haggadah".
Aggadic discussions of the non-legal parts of Scripture are characterized by a much greater freedom of exposition than the halakhic midrashim (midrashim on Jewish law). Aggadic expositors availed themselves of various techniques, including sayings of prominent rabbis. These aggadic explanations could be philosophical or mystical disquisitions concerning angels, demons, paradise, hell, the messiah, Satan, feasts and fasts, parables, legends, satirical assaults on those who practice idolatry, etc. Some of these midrashim entail mystical teachings. The presentation is such that the midrash is a simple lesson to the uninitiated, and a direct allusion, or analogy, to a mystical teaching for those educated in this area. A wealth of literature and artwork has been created in the 20th and 21st centuries by people aspiring to create "contemporary midrash". Forms include poetry, prose, Bibliodrama (the acting out of Bible stories), murals, masks, and music, among others. The Institute for Contemporary Midrash was formed to facilitate these reinterpretations of sacred texts. The institute hosted several week-long intensives between 1995 and 2004, and published eight issues of "Living Text: The Journal of Contemporary Midrash" from 1997 to 2000. According to Carol Bakhos, recent studies that use literary-critical tools to concentrate on the cultural and literary aspects of midrash have led to a rediscovery of the importance of these texts for finding insights into the rabbinic culture that created them. Midrash is increasingly seen as a literary and cultural construction, responsive to literary means of analysis. Reverend Wilda C. Gafney has coined and expanded on "womanist midrash", a particular practice and method that Gafney defines as: "[...]
A set of interpretive practices, including translation, exegesis, and biblical narratives, that attends to marginalized characters in biblical narratives, especially women and girls, intentionally including and centering on non-Israelite peoples and enslaved persons. Womanist midrash listens to and for their voices in and through the Hebrew Bible, while acknowledging that often the text does not speak, or even intend to speak, to or for them, let alone hear them. In the tradition of rabbinic midrash and contemporary feminist biblical scholarship, womanist midrash offers names for anonymized characters and crafts/listens to/gives voice to those characters." Gafney's analysis also draws parallels between midrash as a Jewish exegetical practice and African American Christian practices of biblical interpretation, wherein both practices privilege "sacred imaginative interrogation," or "sanctified imagination" as it is referred to in the black preaching tradition, when exegeting and interpreting the text. "Like classical and contemporary Jewish midrash, the sacred imagination [as practiced in black preaching traditions] tells us the story behind the story, the story between the lines on the page," Gafney writes. Frank Kermode has written that midrash is an imaginative way of "updating, enhancing, augmenting, explaining, and justifying the sacred text". Because the Tanakh came to be seen as unintelligible or even offensive, midrash could be used as a means of rewriting it in a way that both makes it more acceptable to later ethical standards and renders it less obviously implausible. James L. Kugel, in "The Bible as It Was" (Cambridge, Massachusetts: Harvard University Press, 1997), examines a number of early Jewish and Christian texts that comment on, expand, or re-interpret passages from the first five books of the Tanakh between the third century BCE and the second century CE.
Kugel traces how and why biblical interpreters produced new meanings by the use of exegesis on ambiguities, syntactical details, unusual or awkward vocabulary, repetitions, etc. in the text. As an example, Kugel examines the different ways in which the biblical story that God's instructions are not to be found in heaven (Deut 30:12) has been interpreted. Baruch 3:29-4:1 states that this means that divine wisdom is not available anywhere other than in the Torah. Targum Neophyti (Deut 30:12) and b. Baba Metzia 59b claim that this text means that Torah is no longer hidden away, but has been given to humans who are then responsible for following it.
Missouri Missouri is a state in the Midwestern United States. With more than six million residents, it is the 18th-most populous state of the country. The largest urban areas are St. Louis, Kansas City, Springfield and Columbia; the capital is Jefferson City. The state is the 21st-most extensive in area. Missouri is bordered by eight states (tied for the most with Tennessee): Iowa to the north, Illinois, Kentucky and Tennessee (via the Mississippi River) to the east, Arkansas to the south and Oklahoma, Kansas and Nebraska to the west. In the south are the Ozarks, a forested highland, providing timber, minerals and recreation. The Missouri River, after which the state is named, flows through the center of the state into the Mississippi River, which makes up Missouri's eastern border. Humans have inhabited the land now known as Missouri for at least 12,000 years. The Mississippian culture built cities and mounds, before declining in the 14th century. When European explorers arrived in the 17th century, they encountered the Osage and Missouria nations. The French established Louisiana, a part of New France, founding Ste. Genevieve in 1735 and St. Louis in 1764. After a brief period of Spanish rule, the United States acquired the Louisiana Purchase in 1803. Americans from the Upland South, including enslaved African Americans, rushed into the new Missouri Territory. Missouri was admitted as a slave state as part of the Missouri Compromise. Many from Virginia, Kentucky and Tennessee settled in the Boonslick area of Mid-Missouri. Soon after, heavy German immigration formed the Missouri Rhineland. Missouri played a central role in the westward expansion of the United States, as memorialized by the Gateway Arch. The Pony Express, Oregon Trail, Santa Fe Trail and California Trail all began in Missouri. As a border state, Missouri's role in the American Civil War was complex and there were many conflicts within. After the war, both Greater St. 
Louis and the Kansas City metropolitan area became centers of industrialization and business. Today the state is divided into 114 counties and the independent city of St. Louis. Missouri's culture blends elements from the Midwestern and Southern United States. The musical styles of ragtime, Kansas City jazz and St. Louis blues developed in Missouri. The well-known Kansas City-style barbecue and lesser-known St. Louis-style barbecue can be found across the state and beyond. Missouri is also a major center of beer brewing; Anheuser-Busch is the largest producer in the world. Missouri wine is produced in the Missouri Rhineland and Ozarks. Missouri's alcohol laws are among the most permissive in the United States. Outside of the state's major cities, popular tourist destinations include the Lake of the Ozarks, Table Rock Lake and Branson. Well-known Missourians include Chuck Berry, Sheryl Crow, Walt Disney, Edwin Hubble, Nelly, Brad Pitt, Harry S. Truman, and Mark Twain. Some of the largest companies based in the state include Cerner, Express Scripts, Monsanto, Emerson Electric, Edward Jones, H&R Block, Wells Fargo Advisors, and O'Reilly Auto Parts. Universities in Missouri include the University of Missouri and the highly ranked Washington University in St. Louis. Missouri has been called the "Mother of the West" and the "Cave State", but its most famous nickname is the "Show Me State". The state is named for the Missouri River, which was named after the indigenous Missouri Indians, a Siouan-language tribe. It is said they were called the "ouemessourita" ("wimihsoorita"), meaning "those who have dugout canoes", by the Miami-Illinois language speakers.
This appears to be folk etymology: the Illinois spoke an Algonquian language, and the closest approximation that can be made in the language of their close neighbors, the Ojibwe, is "You Ought to Go Downriver and Visit Those People." Such a mistranslation would be odd, as the French who first explored and attempted to settle the Mississippi River usually rendered native terms fairly accurately at that time, often giving things French names that were exact translations of the native tongues. If "Missouri" derived from the Siouan language, it would translate as "It connects to the side of it," in reference to the river itself; this is not entirely likely either, as that phrase would come out as "Maya Sunni" (mah-yah soo-nee). Most likely, the name Missouri comes from Chiwere, a Siouan language spoken by people who resided in the modern-day states of Wisconsin, Iowa, South Dakota, Missouri, and Nebraska. The name "Missouri" has several different pronunciations even among its present-day natives, the two most common being and . Further pronunciations also exist in Missouri or elsewhere in the United States, involving the realization of the medial consonant as either or ; the vowel in the second syllable as either or ; and the third syllable as (phonetically , or ) or . Any combination of these phonetic realizations may be observed coming from speakers of American English. In British Received Pronunciation, the preferred variant is , with being a possible alternative. The linguistic history was treated definitively by Donald M. Lance, who acknowledged that the question is sociologically complex and that no pronunciation could be declared "correct", nor could any be clearly defined as native or outsider, rural or urban, southern or northern, educated or otherwise. Politicians often employ multiple pronunciations, even during a single speech, to appeal to a greater number of listeners.
Informal respellings of the state's name, such as "Missour-ee" or "Missour-uh", are sometimes used to distinguish pronunciations phonetically. There is no official state nickname. However, Missouri's unofficial nickname is the "Show Me State", which appears on its license plates. This phrase has several possible origins. One is popularly ascribed to an 1899 speech by Congressman Willard Vandiver, who declared: "I come from a state that raises corn and cotton, cockleburs and Democrats, and frothy eloquence neither convinces nor satisfies me. I'm from Missouri, and you have got to show me." This is in keeping with the saying "I'm from Missouri", which means "I'm skeptical of the matter and not easily convinced." However, according to researchers, the phrase "show me" was already in use before the 1890s. Another account traces the phrase to Missouri miners taken to Leadville, Colorado, to replace striking workers. Since the new men were unfamiliar with the mining methods, they required frequent instruction. Other nicknames for Missouri include "The Lead State", "The Bullion State", "The Ozark State", "The Mother of the West", "The Iron Mountain State", and "Pennsylvania of the West". It is also known as the "Cave State" because there are more than 7,300 recorded caves in the state (second to Tennessee). Perry County has the largest number of caves and the single longest cave in the state. The official state motto is "Salus populi suprema lex esto", which means "Let the welfare of the people be the supreme law." Indigenous peoples inhabited Missouri for thousands of years before European exploration and settlement. Archaeological excavations along the rivers have shown continuous habitation for more than 7,000 years. Beginning before 1000 CE, there arose the complex Mississippian culture, whose people created regional political centers at present-day St. Louis and across the Mississippi River at Cahokia, near present-day Collinsville, Illinois.
Their large cities included thousands of individual residences, but they are known for their surviving massive earthwork mounds, built for religious, political and social reasons, in platform, ridgetop and conical shapes. Cahokia was the center of a regional trading network that reached from the Great Lakes to the Gulf of Mexico. The civilization declined by 1400 CE, and most descendants left the area long before the arrival of Europeans. St. Louis was at one time known as Mound City by the European Americans, because of the numerous surviving prehistoric mounds, since lost to urban development. The Mississippian culture left mounds throughout the middle Mississippi and Ohio river valleys, extending into the southeast as well as the upper river. The first European settlers were mostly ethnic French Canadians, who created their first settlement in Missouri at present-day Ste. Genevieve, about an hour south of St. Louis. They had migrated about 1750 from the Illinois Country. They came from colonial villages on the east side of the Mississippi River, where soils were becoming exhausted and there was insufficient river bottom land for the growing population. Sainte-Geneviève became a thriving agricultural center, producing enough surplus wheat, corn and tobacco to ship tons of grain annually downriver to Lower Louisiana for trade. Grain production in the Illinois Country was critical to the survival of Lower Louisiana and especially the city of New Orleans. St. Louis was founded soon after, in 1764, by French fur traders Pierre Laclède and his stepson Auguste Chouteau, who came from New Orleans. From 1764 to 1803, European control of the area west of the Mississippi to the northernmost part of the Missouri River basin, called Louisiana, was assumed by the Spanish as part of the Viceroyalty of New Spain, due to the Treaty of Fontainebleau (intended to induce Spain to join France in the war against England). The Spanish arrived in St. Louis in September 1767. St.
Louis became the center of a regional fur trade with Native American tribes that extended up the Missouri and Mississippi rivers, which dominated the regional economy for decades. Trading partners of major firms shipped their furs from St. Louis by river down to New Orleans for export to Europe. They provided a variety of goods to traders, for sale and trade with their Native American clients. The fur trade and associated businesses made St. Louis an early financial center and provided the wealth for some to build fine houses and import luxury items. Its location near the confluence of the Illinois and Mississippi rivers meant it also handled produce from the agricultural areas. River traffic and trade along the Mississippi were integral to the state's economy, and as the area's first major city, St. Louis expanded greatly after the invention of the steamboat and the increased river trade. Napoleon Bonaparte had gained Louisiana for French ownership from Spain in 1800 under the Treaty of San Ildefonso, after it had been a Spanish colony since 1762. But the treaty was kept secret. Louisiana remained nominally under Spanish control until a transfer of power to France on November 30, 1803, just three weeks before the cession to the United States. Part of the 1803 Louisiana Purchase by the United States, Missouri earned the nickname "Gateway to the West" because it served as a major departure point for expeditions and settlers heading to the West during the 19th century. St. Charles, just west of St. Louis, was the starting point and the return destination of the Lewis and Clark Expedition, which ascended the Missouri River in 1804 to explore the western lands to the Pacific Ocean. For decades, St. Louis was a major supply point for parties of settlers heading west.
As many of the early settlers in western Missouri migrated from the Upper South, they brought enslaved African Americans as agricultural laborers, and they desired to continue their culture and the institution of slavery. They settled predominantly in 17 counties along the Missouri River, in an area of flatlands that enabled plantation agriculture and became known as "Little Dixie." The state was rocked by the 1811–12 New Madrid earthquakes. Casualties were few due to the sparse population. In 1821, the former Missouri Territory was admitted as a slave state, in accordance with the Missouri Compromise, and with a temporary state capital in St. Charles. In 1826, the capital was shifted to its current, permanent location of Jefferson City, also on the Missouri River. Originally the state's western border was a straight line, defined as the meridian passing through the Kawsmouth, the point where the Kansas River enters the Missouri River. The river has moved since this designation. This line is known as the Osage Boundary. In 1836 the Platte Purchase was added to the northwest corner of the state after purchase of the land from the native tribes, making the Missouri River the border north of the Kansas River. This addition increased the land area of what was already the largest state in the Union at the time (about to Virginia's 65,000 square miles, which then included West Virginia). In the early 1830s, Mormon migrants from northern states and Canada began settling near Independence and areas just north of there. Conflicts over religion and slavery arose between the 'old settlers' (mainly from the South) and the Mormons (mainly from the North). The Mormon War erupted in 1838. By 1839, with the help of an "Extermination Order" by Governor Lilburn Boggs, the old settlers forcefully expelled the Mormons from Missouri and confiscated their lands. Conflicts over slavery exacerbated border tensions among the states and territories. 
From 1838 to 1839, a border dispute with Iowa over the so-called Honey Lands resulted in both states calling up militias along the border. With increasing migration, from the 1830s to the 1860s Missouri's population almost doubled with every decade. Most of the newcomers were American-born, but many Irish and German immigrants arrived in the late 1840s and 1850s. As a majority were Catholic, they set up their own religious institutions in the state, which had been mostly Protestant. Many settled in cities, where they created a regional and then state network of Catholic churches and schools. Nineteenth-century German immigrants created the wine industry along the Missouri River and the beer industry in St. Louis. While many German immigrants were strongly anti-slavery, many Irish immigrants living in cities were pro-slavery, fearing that liberating African-American slaves would create a glut of unskilled labor, driving wages down. Most Missouri farmers practiced subsistence farming before the American Civil War. The majority of those who held slaves had fewer than five each. Planters, defined by some historians as those holding twenty slaves or more, were concentrated in the counties known as "Little Dixie", in the central part of the state along the Missouri River. The tensions over slavery chiefly had to do with the future of the state and nation. In 1860, enslaved African Americans made up less than 10% of the state's population of 1,182,012. To control the flooding of farmland and low-lying villages along the Mississippi, the state had completed construction of levees along the river by 1860. After the secession of Southern states began in 1861, the Missouri legislature called for the election of a special convention on secession. The convention voted decisively to remain within the Union. Pro-Southern Governor Claiborne F. Jackson ordered the mobilization of several hundred members of the state militia who had gathered in a camp in St.
Louis for training. Alarmed at this action, Union General Nathaniel Lyon struck first, encircling the camp and forcing the state troops to surrender. Lyon directed his soldiers, largely non-English-speaking German immigrants, to march the prisoners through the streets; the soldiers opened fire on the largely hostile crowds of civilians who gathered around them. Soldiers killed unarmed prisoners as well as men, women and children of St. Louis in the incident that became known as the "St. Louis Massacre". These events heightened Confederate support within the state. Governor Jackson appointed Sterling Price, president of the convention on secession, as head of the new Missouri State Guard. In the face of Union General Lyon's rapid advance through the state, Jackson and Price were forced to flee the capital of Jefferson City on June 14, 1861. In the town of Neosho, Missouri, Jackson called the state legislature into session. They enacted a secession ordinance. However, even under the Southern view of secession, only the state convention had the power to secede. Since the convention was dominated by unionists, and the state was more pro-Union than pro-Confederate in any event, the ordinance of secession adopted by the legislature is generally given little credence. The Confederacy nonetheless recognized it on October 30, 1861. With the elected governor absent from the capital and the legislators largely dispersed, the state convention was reassembled with most of its members present, save the 20 who had fled south with Jackson's forces. The convention declared all offices vacant, and installed Hamilton Gamble as the new governor of Missouri. President Lincoln's administration immediately recognized Gamble's government as the legal Missouri government. The federal government's decision enabled raising pro-Union militia forces for service within the state as well as volunteer regiments for the Union Army.
Fighting ensued between Union forces and a combined army of General Price's Missouri State Guard and Confederate troops from Arkansas and Texas under General Ben McCulloch. After winning victories at the Battle of Wilson's Creek and the Siege of Lexington, Missouri, and suffering losses elsewhere, the Confederate forces retreated to Arkansas and later Marshall, Texas, in the face of a greatly reinforced Union Army. Though regular Confederate troops staged some large-scale raids into Missouri, the fighting in the state for the next three years consisted chiefly of guerrilla warfare. "Citizen soldiers" or insurgents such as Captain William Quantrill, Frank and Jesse James, the Younger brothers, and William T. Anderson made use of quick, small-unit tactics. Pioneered by the Missouri Partisan Rangers, such insurgencies also arose in portions of the Confederacy occupied by the Union during the Civil War. Historians have portrayed stories of the James brothers' outlaw years as an American "Robin Hood" myth. The vigilante activities of the Bald Knobbers of the Ozarks in the 1880s were an unofficial continuation of insurgent mentality long after the official end of the war, and they are a favorite theme in Branson's self-image. The Progressive Era (1890s to 1920s) saw numerous prominent leaders from Missouri trying to end corruption and modernize politics, government and society. Joseph "Holy Joe" Folk was a key leader who made a strong appeal to middle class and rural evangelical Protestants. Folk was elected governor as a progressive reformer and Democrat in the 1904 election. He promoted what he called "the Missouri Idea", the concept of Missouri as a leader in public morality through popular control of law and strict enforcement.
He successfully conducted antitrust prosecutions, ended free railroad passes for state officials, extended bribery statutes, improved election laws, required formal registration for lobbyists, made racetrack gambling illegal, and enforced the Sunday-closing law. He helped enact Progressive legislation, including an initiative and referendum provision, regulation of elections, education, employment and child labor, railroads, food, business, and public utilities. A number of efficiency-oriented examiner boards and commissions were established during Folk's administration, including many agricultural boards and the Missouri library commission. Between the Civil War and the end of World War II, Missouri transitioned from a rural economy to a hybrid industrial-service-agricultural economy as the Midwest rapidly industrialized. The expansion of railroads to the West transformed Kansas City into a major transportation hub within the nation. The growth of the Texas cattle industry along with this increased rail infrastructure and the invention of the refrigerated boxcar also made Kansas City a major meatpacking center, as large cattle drives from Texas brought herds of cattle to Dodge City and other Kansas towns. There, the cattle were loaded onto trains destined for Kansas City, where they were butchered and distributed to the eastern markets. The first half of the twentieth century was the height of Kansas City's prominence and its downtown became a showcase for stylish Art Deco skyscrapers as construction boomed. In 1930, there was a diphtheria epidemic in the area around Springfield, which killed approximately 100 people. Serum was rushed to the area, and medical personnel stopped the epidemic. During the mid-1950s and 1960s, St. Louis and Kansas City suffered deindustrialization and loss of jobs in railroads and manufacturing, as did other Midwestern industrial cities. St. Charles claims to be the site of the first interstate highway project, begun in 1956.
Such highway construction made it easy for middle-class residents to leave the city for newer housing developed in the suburbs, often former farmland where land was available at lower prices. These major cities have gone through decades of readjustment to develop different economies and adjust to demographic changes. Suburban areas have developed separate job markets, both in knowledge industries and services, such as major retail malls. In 2014, Missouri received national attention for the protests and riots that followed the shooting of Michael Brown by a Ferguson police officer, which led Governor Jay Nixon to call out the Missouri National Guard. A grand jury declined to indict the officer, and the U.S. Department of Justice concluded, after careful investigation, that the police officer legitimately feared for his safety. However, in a separate investigation, the Department of Justice also found that the Ferguson Police Department and the City of Ferguson relied on unconstitutional practices in order to balance the city's budget through racially motivated excessive fines and punishments, that the Ferguson police "had used excessive and dangerous force and had disproportionately targeted blacks," and that the municipal court "emphasized revenue over public safety, leading to routine breaches of citizens' constitutional guarantees of due process and equal protection under the law." A series of student protests at the University of Missouri against what the protesters viewed as poor response by the administration to racist incidents on campus began in September 2015. On June 7, 2017, the National Association for the Advancement of Colored People issued a warning to prospective African-American travelers to Missouri. It was the first NAACP warning ever to cover an entire state.
According to a 2018 report by the Missouri Attorney General's office covering the previous 18 years, "African Americans, Hispanics and other people of color are disproportionately affected by stops, searches and arrests." The same report found that the biggest discrepancy was in 2017, when "black motorists were 85% more likely to be pulled over in traffic stops". In 2018 the USDA announced its plans to relocate the Economic Research Service (ERS) and the National Institute of Food and Agriculture (NIFA) to Kansas City. They have since decided on a specific location in downtown Kansas City, MO. With the addition of the KC Streetcar project and construction of the Sprint Center Arena, the downtown area in KC has attracted investment in new offices, hotels, and residential complexes. Both Kansas City and St. Louis are undergoing a rebirth in their downtown areas with the addition of the new Power & Light (KC) and Ballpark Village (STL) districts as well as the renovation of existing historic buildings in each downtown area. The 2019 announcement of an MLS expansion team in St. Louis is driving even more development in the downtown west area of St. Louis. Missouri is landlocked and borders eight different states, as does its neighbor Tennessee. No state in the U.S. touches more than eight. Missouri is bounded by Iowa on the north; by Illinois, Kentucky, and Tennessee across the Mississippi River on the east; on the south by Arkansas; and by Oklahoma, Kansas, and Nebraska (the last across the Missouri River) on the west. Whereas the northern and southern boundaries are straight lines, the Missouri Bootheel extends south between the St. Francis and the Mississippi rivers. The two largest rivers are the Mississippi (which defines the eastern boundary of the state) and the Missouri River (which flows from west to east through the state), essentially connecting the two largest metros of Kansas City and St. Louis.
Although today it is usually considered part of the Midwest, Missouri was historically seen by many as a border state, chiefly because of the settlement of migrants from the South and its status as a slave state before the Civil War, balanced by the influence of St. Louis. The counties that made up "Little Dixie" were those along the Missouri River in the center of the state, settled by Southern migrants who held the greatest concentration of slaves. In 2005, Missouri's national parks and other recreational areas received 16,695,000 visitors, generating $7.41 million in annual revenues, 26.6% of operating expenditures. North of, and in some cases just south of, the Missouri River lie the Northern Plains that stretch into Iowa, Nebraska, and Kansas. Here, rolling hills remain from the glaciation that once extended from the Canadian Shield to the Missouri River. Missouri has many large river bluffs along the Mississippi, Missouri, and Meramec Rivers. Southern Missouri rises to the Ozark Mountains, a dissected plateau surrounding the Precambrian igneous St. Francois Mountains. This region also hosts karst topography, characterized by high limestone content and the formation of sinkholes and caves. The southeastern part of the state is known as the Missouri Bootheel region, which is part of the Mississippi Alluvial Plain or Mississippi embayment. This region is the lowest, flattest, warmest, and wettest part of the state. It is also among the poorest, as the economy there is mostly agricultural. It is also the most fertile, with cotton and rice crops predominant. The Bootheel was the epicenter of the four New Madrid earthquakes of 1811 and 1812. Missouri generally has a humid continental climate with cool, and sometimes cold, winters and hot, humid, and wet summers. In the southern part of the state, particularly in the Bootheel, the climate becomes humid subtropical.
Located in the interior United States, Missouri often experiences extreme temperatures. Without high mountains or oceans nearby to moderate temperature, its climate is alternately influenced by air from the cold Arctic and the hot and humid Gulf of Mexico. Missouri's highest recorded temperature is at Warsaw and Union on July 14, 1954, while the lowest recorded temperature is also at Warsaw on February 13, 1905. Located in Tornado Alley, Missouri also receives extreme weather in the form of severe thunderstorms and tornadoes. On May 22, 2011, a massive EF5 tornado killed 158 people, injured more than a thousand, and destroyed roughly one-third of the city of Joplin, causing an estimated $1–3 billion in damages. It was the first EF5 to hit the state since 1957 and the deadliest tornado in the U.S. since 1947, making it the seventh-deadliest tornado in American history and the 27th-deadliest in the world. St. Louis and its suburbs also have a history of experiencing particularly severe tornadoes, the most recent memorable one being an EF4 that damaged Lambert-St. Louis International Airport on April 22, 2011. One of the worst tornadoes in American history struck St. Louis on May 27, 1896, killing at least 255 people and causing $10 million in damage (equivalent to $3.9 billion in 2009 dollars). Missouri is home to diverse flora and fauna, including several endemic species. There is a large amount of fresh water present due to the Mississippi River, Missouri River, Table Rock Lake and Lake of the Ozarks, with numerous smaller tributary rivers, streams, and lakes. North of the Missouri River, the state is primarily rolling hills of the Great Plains, whereas south of the Missouri River, the state is dominated by the Oak-Hickory Central U.S. hardwood forest. Recreational and commercial uses of public forests, including grazing, logging and mining, increased after World War II. 
Fishermen, hikers, campers and others started lobbying to protect areas of the forest that had a "wilderness character". During the 1930s and 1940s, Aldo Leopold, Arthur Carhart and Bob Marshall developed a "wilderness" policy for the Forest Service. Their efforts bore fruit with the Wilderness Act of 1964, which designated wilderness areas "where the earth and its community of life are untrammeled by man, where man himself is a visitor who does not remain". This included second-growth public forests like the Mark Twain National Forest. The United States Census Bureau estimates that the population of Missouri was 6,137,428 on July 1, 2019, a 2.48% increase since the 2010 United States Census. Missouri had a population of 5,988,927 according to the 2010 Census; by 2018 the population had grown by an estimated 137,525 (2.3 percent). This growth included a natural increase of 137,564 people since the last census (480,763 births less 343,199 deaths) and an increase of 88,088 people due to net migration into the state. Immigration from outside the United States resulted in a net increase of 50,450 people, and migration within the country produced a net increase of 37,638 people. More than half of Missourians (3,294,936 people, or 55.0%) live within the state's two largest metropolitan areas—St. Louis and Kansas City. The state's population density, 86.9 per square mile in 2009, is also closer to the national average (86.8 per square mile in 2009) than that of any other state. In 2011, 3.7% of the total population was of Hispanic or Latino origin (they may be of any race). The U.S. Census of 2010 found that the mean center of population of the United States is in Texas County, while the 2000 Census found the mean population center to be in Phelps County. The center of population of Missouri is in Osage County, in the city of Westphalia. In 2004, the population included 194,000 foreign-born residents (3.4 percent of the state population). 
The five largest ancestry groups in Missouri are: German (27.4 percent), Irish (14.8 percent), English (10.2 percent), American (8.5 percent) and French (3.7 percent). German Americans are an ancestry group present throughout Missouri. African Americans are a substantial part of the population in St. Louis (56.6% of African Americans in the state lived in St. Louis or St. Louis County as of the 2010 census), Kansas City, Boone County and in the southeastern Bootheel and some parts of the Missouri River Valley, where plantation agriculture was once important. Missouri Creoles of French ancestry are concentrated in the Mississippi River Valley south of St. Louis (see Missouri French). Kansas City is home to large and growing immigrant communities from Latin America, especially Mexico and Colombia; Africa, including Sudan, Somalia and Nigeria; Asia, including China and the Philippines; and Europe, notably the former Yugoslavia (see Bosnian American). A notable Cherokee Indian population exists in Missouri. In 2004, 6.6 percent of the state's population was reported as younger than 5, 25.5 percent younger than 18, and 13.5 percent 65 or older. Females were approximately 51.4 percent of the population. 81.3 percent of Missouri residents were high school graduates (more than the national average), and 21.6 percent had a bachelor's degree or higher. 3.4 percent of Missourians were foreign-born, and 5.1 percent reported speaking a language other than English at home. In 2010, there were 2,349,955 households in Missouri, with 2.45 people per household. The home ownership rate was 70.0 percent, and the median value of an owner-occupied housing unit was $137,700. The median household income for 2010 was $46,262, or $24,724 per capita. In 2010, 14.0 percent of Missourians (1,018,118 people) were living below the poverty line. The mean commute time to work was 23.8 minutes. In 2011, 28.1% of Missouri's population younger than age 1 were minorities. 
The vast majority of people in Missouri speak English. Approximately 5.1% of the population reported speaking a language other than English at home. The Spanish language is spoken in small Latino communities in the St. Louis and Kansas City metro areas. Missouri is home to an endangered dialect of the French language known as Missouri French. Speakers of the dialect, who call themselves "Créoles", are descendants of the French pioneers who settled the area then known as the Illinois Country beginning in the late 17th century. It developed in isolation from French speakers in Canada and Louisiana, becoming quite distinct from the varieties of Canadian French and Louisiana Creole French. Once widely spoken throughout the area, Missouri French is now nearly extinct, with only a few elderly speakers able to use it. According to a Pew Research study conducted in 2014, 80% of Missourians identify with a religion: 77% affiliate with Christianity and its various denominations, and the other 3% are adherents of non-Christian religions. The remaining 20% have no religion, with 2% specifically identifying as atheists and 3% identifying as agnostics (the other 15% do not identify as "anything in particular"). The largest denominations by number of adherents in 2010 were the Southern Baptist Convention with 749,685; the Roman Catholic Church with 724,315; and the United Methodist Church with 226,409. Among the other denominations there are approximately 93,000 Mormons in 253 congregations, 25,000 Jewish adherents in 21 synagogues, 12,000 Muslims in 39 masjids, 7,000 Buddhists in 34 temples, 20,000 Hindus in 17 temples, 2,500 Unitarians in nine congregations, 2,000 Baha'i in 17 temples, five Sikh temples, a Zoroastrian temple, a Jain temple and an uncounted number of neopagans. 
Several religious organizations have headquarters in Missouri, including the Lutheran Church–Missouri Synod, which has its headquarters in Kirkwood, as well as the United Pentecostal Church International in Hazelwood, both outside St. Louis. Independence, near Kansas City, is the headquarters for the Community of Christ (formerly the Reorganized Church of Jesus Christ of Latter Day Saints), the Church of Christ (Temple Lot) and the group Remnant Church of Jesus Christ of Latter Day Saints. This area and other parts of Missouri are also of significant religious and historical importance to The Church of Jesus Christ of Latter-day Saints (LDS Church), which maintains several sites and visitors centers. Springfield is the headquarters of the Assemblies of God USA and the Baptist Bible Fellowship International. The General Association of General Baptists has its headquarters in Poplar Bluff. The Unity Church is headquartered in Unity Village. The Hindu Temple of St. Louis is the largest Hindu temple in Missouri, serving more than 14,000 Hindus. The U.S. Department of Commerce's Bureau of Economic Analysis estimated Missouri's 2016 gross state product at $299.1 billion, ranking 22nd among U.S. states. Per capita personal income in 2006 was $32,705, ranking 26th in the nation. Major industries include aerospace, transportation equipment, food processing, chemicals, printing/publishing, electrical equipment, light manufacturing, financial services and beer. The agriculture products of the state are beef, soybeans, pork, dairy products, hay, corn, poultry, sorghum, cotton, rice, and eggs. Missouri is ranked 6th in the nation for the production of hogs and 7th for cattle. Missouri is ranked in the top five states in the nation for production of soybeans, and it is ranked fourth in the nation for the production of rice. In 2001, there were 108,000 farms, the second-largest number in any state after Texas. Missouri actively promotes its rapidly growing wine industry. 
According to the Missouri Partnership, Missouri's agriculture industry contributes $33 billion in GDP to Missouri's economy, and generates $88 billion in sales and more than 378,000 jobs. Missouri has vast quantities of limestone. Other resources mined are lead, coal, and crushed stone. Missouri produces the most lead of all the states. Most of the lead mines are in the central eastern portion of the state. Missouri also ranks first or near first in the production of lime, a key ingredient in Portland cement. Missouri also has a growing science, agricultural technology and biotechnology field. Monsanto, one of the largest biotech companies in America, is based in St. Louis. Tourism, services and wholesale/retail trade follow manufacturing in importance. Tourism benefits from the many rivers, lakes, caves, and parks throughout the state. In addition to a network of state parks, Missouri is home to the Gateway Arch National Park in St. Louis and the Ozark National Scenic Riverways. A much-visited show cave is Meramec Caverns in Stanton, Missouri. Missouri is the only state in the Union to have two Federal Reserve Banks: one in Kansas City (serving western Missouri, Kansas, Nebraska, Oklahoma, Colorado, northern New Mexico, and Wyoming) and one in St. Louis (serving eastern Missouri, southern Illinois, southern Indiana, western Kentucky, western Tennessee, northern Mississippi, and all of Arkansas). The state's seasonally adjusted unemployment rate in April 2017 was 3.9 percent. In 2017, Missouri enacted a right-to-work law, but in August 2018, Missouri voters rejected it by a margin of 67% to 33%. Personal income is taxed in ten different earning brackets, ranging from 1.5% to 6.0%. Missouri's sales tax rate for most items is 4.225% with some additional local levies. More than 2,500 Missouri local governments rely on property taxes levied on real property (real estate) and personal property. 
Most personal property is exempt, except for motorized vehicles. Exempt real estate includes property owned by governments and property used as nonprofit cemeteries, exclusively for religious worship, for schools and colleges and for purely charitable purposes. There is no inheritance tax and limited Missouri estate tax related to federal estate tax collection. In 2017, the Tax Foundation rated Missouri as having the 5th-best corporate tax index, and the 15th-best overall tax climate. Missouri's corporate income tax rate is 6.25%; however, 50% of federal income tax payments may be deducted before computing taxable income, leading to an effective rate of 5.2%. In 2012, Missouri had roughly 22,000 MW of installed electricity generation capacity. In 2011, 82% of Missouri's electricity was generated by coal. Ten percent was generated from the state's only nuclear power plant, the Callaway Plant in Callaway County, northeast of Jefferson City. Five percent was generated by natural gas. One percent was generated by hydroelectric sources, such as the dams for Truman Lake and Lake of the Ozarks. Missouri has a small but growing amount of wind and solar power—wind capacity increased from 309 MW in 2009 to 459 MW in 2011, while photovoltaics have increased from 0.2 MW to 1.3 MW over the same period. As of 2016, Missouri's solar installations had reached 141 MW. Oil wells in Missouri produced 120,000 barrels of crude oil in fiscal 2012. There are no oil refineries in Missouri. Missouri has two major airport hubs: Lambert–St. Louis International Airport and Kansas City International Airport. Southern Missouri has the Springfield–Branson National Airport (SGF) with multiple non-stop destinations. Residents of Mid-Missouri use Columbia Regional Airport (COU) to fly to Chicago (ORD), Dallas (DFW) or Denver (DEN). Two of the nation's three busiest rail centers are in Missouri. 
Kansas City is a major railroad hub for BNSF Railway, Norfolk Southern Railway, Kansas City Southern Railway, and Union Pacific Railroad, and every Class I railroad serves Missouri. Kansas City is the second-largest freight rail center in the US (but is first in the amount of tonnage handled). Like Kansas City, St. Louis is a major destination for train freight. Springfield remains an operational hub for BNSF Railway. Amtrak passenger trains serve Kansas City, La Plata, Jefferson City, St. Louis, Lee's Summit, Independence, Warrensburg, Hermann, Washington, Kirkwood, Sedalia, and Poplar Bluff. A proposed high-speed rail route in Missouri as part of the Chicago Hub Network has received $31 million in funding. The only urban light rail/subway system operating in Missouri is MetroLink, which connects the city of St. Louis with suburbs in Illinois and St. Louis County. It is one of the largest systems (by track mileage) in the United States. The KC Streetcar in downtown Kansas City opened in May 2016. The Gateway Multimodal Transportation Center in St. Louis is the largest active multi-use transportation center in the state. It is in downtown St. Louis, next to the historic Union Station complex. It serves as a hub center/station for MetroLink, the MetroBus regional bus system, Greyhound, Amtrak, and taxi services. The proposed Missouri Hyperloop would connect St. Louis, Kansas City, and Columbia, reducing travel times to around a half hour. Many cities have regular fixed-route systems, and many rural counties have rural public transit services. Greyhound and Trailways provide inter-city bus service in Missouri. Megabus serves St. Louis, but discontinued service to Columbia and Kansas City in 2015. The Mississippi River and Missouri River are commercially navigable over their entire lengths in Missouri. The Missouri was channelized through dredging and jetties, and the Mississippi was given a series of locks and dams to avoid rocks and deepen the river. St. 
Louis is a major destination for barge traffic on the Mississippi. Following the passage of Amendment 3 in late 2004, the Missouri Department of Transportation (MoDOT) began its Smoother, Safer, Sooner road-building program with a goal of bringing of highways up to good condition by December 2007. Traffic deaths decreased annually from 1,257 in 2005, to 1,096 in 2006, to 992 in 2007, to 960 in 2008, to 878 in 2009, to 821 in 2010. The current Constitution of Missouri, the fourth constitution for the state, was adopted in 1945. It provides for three branches of government: the legislative, judicial, and executive branches. The legislative branch consists of two bodies: the House of Representatives and the Senate. These bodies comprise the Missouri General Assembly. The House of Representatives has 163 members who are apportioned based on the last decennial census. The Senate consists of 34 members from districts of approximately equal populations. The judicial department comprises the Supreme Court of Missouri, which has seven judges; the Missouri Court of Appeals (an intermediate appellate court divided into three districts, sitting in Kansas City, St. Louis, and Springfield); and 45 Circuit Courts, which function as local trial courts. The executive branch is headed by the Governor of Missouri and includes five other statewide elected offices. Following the death of Tom Schweich in 2015, only one of Missouri's statewide elected offices is held by a Democrat. Harry S Truman (1884–1972), the 33rd President of the United States (Democrat, 1945–1953), was born in Lamar. He was a judge in Jackson County and then represented the state in the United States Senate for ten years, before being elected vice president in 1944. He lived in Independence after retiring. Missouri was widely regarded as a bellwether in American politics, often making it a swing state. 
The state had a longer stretch of supporting the winning presidential candidate than any other state, having voted with the nation in every election from 1904 to 2004 with a single exception: 1956, when Democratic candidate Adlai Stevenson of neighboring Illinois lost the election despite carrying Missouri. However, in recent years, areas of the state outside Kansas City, St. Louis, and Columbia have shifted heavily to the right, and so the state is no longer considered a bellwether by most analysts. Missouri twice voted against Democrat Barack Obama, who won in 2008 and 2012. Missouri voted for Romney by nearly 10% in 2012, and voted for Trump by nearly 18% in 2016. On October 24, 2012, there were 4,190,936 registered voters. At the state level, both Democratic Senator Claire McCaskill and Democratic Governor Jay Nixon were re-elected. On November 8, 2016, there were 4,223,787 registered voters, with 2,811,549 voting (66.6%). Missouri has been known for its population's generally "stalwart, conservative, noncredulous" attitude toward regulatory regimes, which is one of the origins of the state's unofficial nickname, the "Show-Me State". As a result, and combined with the fact that Missouri is one of America's leading alcohol states, regulation of alcohol and tobacco in Missouri is among the most laissez-faire in America. For 2013, the annual "Freedom in the 50 States" study prepared by the Mercatus Center at George Mason University ranked Missouri as #3 in America for alcohol freedom and #1 for tobacco freedom (#7 for freedom overall). The study notes that Missouri's "alcohol regime is one of the least restrictive in the United States, with no blue laws and taxes well below average", and that "Missouri ranks best in the nation on tobacco freedom". 
Missouri law makes it "an improper employment practice" for an employer to refuse to hire, to fire, or otherwise to disadvantage any person because that person lawfully uses alcohol and/or tobacco products when he or she is not at work. With a large German immigrant population and the development of a brewing industry, Missouri has always had among the most permissive alcohol laws in the United States. It never enacted statewide prohibition. Missouri voters rejected prohibition in three separate referenda in 1910, 1912, and 1918. Alcohol regulation did not begin in Missouri until 1934. Today, alcohol laws are controlled by the state government, and local jurisdictions are prohibited from going beyond those state laws. Missouri has no statewide open container law or prohibition on drinking in public, no alcohol-related blue laws, no local option, no restrictions on the places where liquor may be sold by the package (allowing even drug stores and gas stations to sell any kind of liquor), and no differentiation of laws based on alcohol percentage. State law protects persons from arrest or criminal penalty for public intoxication. Missouri law expressly prohibits any jurisdiction from going dry. Missouri law also expressly allows parents and guardians to serve alcohol to their children. The Power & Light District in Kansas City is one of the few places in the United States where a state law explicitly allows persons over 21 to possess and consume open containers of alcohol in the street (as long as the beverage is in a plastic cup). As for tobacco (as of July 2016), Missouri has the lowest cigarette excise taxes in the United States, at 17 cents per pack, and the state electorate voted in 2002, 2006, 2012, and twice in 2016 to keep it that way. In 2007, "Forbes" named Missouri's largest metropolitan area, St. Louis, America's "best city for smokers". 
According to the Centers for Disease Control and Prevention, in 2008 Missouri had the fourth-highest percentage of adult smokers among U.S. states, at 24.5%. Although Missouri's minimum age for purchase and distribution of tobacco products is 18, tobacco products can be distributed to persons under 18 by family members on private property. No statewide smoking ban has ever been seriously entertained before the Missouri General Assembly, and in October 2008, a statewide survey by the Missouri Department of Health and Senior Services found that only 27.5% of Missourians support a statewide ban on smoking in all bars and restaurants. Missouri state law permits restaurants seating fewer than 50 people, bars, bowling alleys, and billiard parlors to decide their own smoking policies, without limitation. In 2014, a Republican-led legislature and Democratic Governor Jay Nixon enacted a series of laws that partially decriminalized possession of cannabis, making first-time possession of up to 10 grams no longer punishable with jail time, and legalized CBD oil. In November 2018, 66% of voters approved a constitutional amendment that established a right to medical marijuana and a system for licensing, regulating, and taxing medical marijuana. Missouri has 114 counties and one independent city, St. Louis, which is Missouri's most densely populated—5,140 people per square mile. The largest counties by population are St. Louis (996,726), Jackson (698,895), and St. Charles (395,504). Worth County is the smallest (2,057). The largest counties by size are Texas (1,179 square miles) and Shannon (1,004). Worth County is the smallest (266). Jefferson City is the capital of Missouri. The five largest cities in Missouri are Kansas City, St. Louis, Springfield, Columbia, and Independence. St. Louis is the principal city of the largest metropolitan area in Missouri, composed of 17 counties and the independent city of St. Louis; eight of those counties lie in Illinois. As of 2017 St. 
Louis was the 21st-largest metropolitan area in the nation with 2.81 million people. However, if ranked using Combined Statistical Area, it is 19th-largest with 2.91 million people in 2017. Some of the major cities making up the St. Louis Metro area in Missouri are O'Fallon, St. Charles, St. Peters, Florissant, Chesterfield, Wentzville, Wildwood, University City, and Ballwin. Kansas City is Missouri's largest city and the principal city of the fourteen-county Kansas City Metropolitan Statistical Area, including five counties in the state of Kansas. As of 2017, it was the 30th-largest metropolitan area in the nation, with 2.13 million people. In the Combined Statistical Area in 2017, it ranked 25th with 2.47 million. Some of the other major cities comprising the Kansas City metro area in Missouri include Independence, Lee's Summit, Blue Springs, Liberty, Raytown, Gladstone, and Grandview. Springfield is Missouri's third-largest city and the principal city of the Springfield-Branson Metropolitan Area, which has a population of 549,423 and includes seven counties in southwestern Missouri. Branson is a major tourist attraction in the Ozarks of southwestern Missouri. Some of the other major cities comprising the Springfield-Branson metro area include Nixa, Ozark, and Republic. The Missouri State Board of Education has general authority over all public education in the state of Missouri. It is made up of eight citizens appointed by the governor and confirmed by the Missouri Senate. Education is compulsory from ages seven to seventeen: any parent, guardian or other person with custody of a child between the ages of seven and seventeen, the compulsory attendance age for the district, must ensure that the child is enrolled in and regularly attends public, private, or parochial school, home school, or a combination of schools for the full term of the school year. Compulsory attendance also ends when children complete sixteen credits in high school. 
Children in Missouri between the ages of five and seven are not required to be enrolled in school. However, if they are enrolled in a public school, their parent, guardian or custodian must ensure that they regularly attend. Missouri schools are commonly but not exclusively divided into three tiers of primary and secondary education: elementary school, middle school or junior high school, and high school. The public school system includes kindergarten to 12th grade. District territories are often complex in structure. In some cases, elementary, middle and junior high schools of a single district feed into high schools in another district. High school athletics and competitions are governed by the Missouri State High School Activities Association (MSHSAA). Homeschooling is legal in Missouri and is an option to meet the compulsory education requirement. It is neither monitored nor regulated by the state's Department of Elementary and Secondary Education. The Missouri Academy of Science, Mathematics and Computing, a school for gifted students, is at Northwest Missouri State University. The University of Missouri System is Missouri's statewide public university system. The flagship institution and largest university in the state is the University of Missouri in Columbia. The others in the system are University of Missouri–Kansas City, University of Missouri–St. Louis, and Missouri University of Science and Technology in Rolla. 
During the late nineteenth and early twentieth centuries, the state established a series of normal schools in each region of the state, originally named after the geographic districts: Northeast Missouri State University (now Truman State University) (1867), Central Missouri State University (now the University of Central Missouri) (1871), Southeast Missouri State University (1873), Southwest Missouri State University (now Missouri State University) (1905), Northwest Missouri State University (1905), Missouri Western State University (1915), Maryville University (1872) and Missouri Southern State University (1937). Lincoln University and Harris–Stowe State University were established in the mid-nineteenth century and are historically black colleges and universities. Among private institutions, Washington University in St. Louis and Saint Louis University are two top-ranked schools in the US. There are numerous junior colleges, trade schools, church universities and other private universities in the state. A.T. Still University was the first osteopathic medical school in the world. Hannibal–LaGrange University in Hannibal, Missouri, was one of the first colleges west of the Mississippi (founded 1858 in LaGrange, Missouri, and moved to Hannibal in 1928). The state funds a $2,000 renewable merit-based scholarship, Bright Flight, given to the top three percent of Missouri high school graduates who attend a university in-state. The 19th-century border wars between Missouri and Kansas have continued as a sports rivalry between the University of Missouri and the University of Kansas. The rivalry was chiefly expressed through football and basketball games between the two universities, but since Missouri left the Big 12 Conference in 2012, the teams no longer regularly play one another. It was the oldest college rivalry west of the Mississippi River and the second-oldest in the nation. Each year when the universities met to play, the game was dubbed the "Border War." 
Following each game, the winning school took possession of a historic Indian war drum, which had been passed back and forth for decades. Though Missouri and Kansas no longer have an annual game after the University of Missouri moved to the Southeastern Conference, tension still exists between the two schools. Many well-known musicians were born or have lived in Missouri. These include guitarist and rock pioneer Chuck Berry, singer and actress Josephine Baker, "Queen of Rock" Tina Turner, pop singer-songwriter Sheryl Crow, Michael McDonald of the Doobie Brothers, and rappers Nelly, Chingy and Akon, all of whom are either current or former residents of St. Louis. Country singers from Missouri include New Franklin native Sara Evans, Cantwell native Ferlin Husky, West Plains native Porter Wagoner, Tyler Farr of Garden City, and Mora native Leroy Van Dyke, along with bluegrass musician Rhonda Vincent, a native of Greentop. Rapper Eminem was born in St. Joseph and also lived in Savannah and Kansas City. Ragtime composer Scott Joplin lived in St. Louis and Sedalia. Jazz saxophonist Charlie Parker lived in Kansas City. Rock and roll singer Steve Walsh of the group Kansas was born in St. Louis and grew up in St. Joseph. The Kansas City Symphony and the St. Louis Symphony Orchestra are the state's major orchestras. The latter is the nation's second-oldest symphony orchestra and achieved prominence in recent years under conductor Leonard Slatkin. Branson is well known for its music theaters, most of which bear the name of a star performer or musical group. Missouri is the native state of Mark Twain. His novels "The Adventures of Tom Sawyer" and "The Adventures of Huckleberry Finn" are set in his boyhood hometown of Hannibal. Authors Kate Chopin, T. S. Eliot and Tennessee Williams were from St. Louis. Kansas City-born writer William Least Heat-Moon resides in Rocheport. 
He is best known for "Blue Highways", a chronicle of his travels to small towns across America, which was on The New York Times Bestseller list for 42 weeks in 1982–1983. Filmmaker, animator, and businessman Walt Disney spent part of his childhood in the Linn County town of Marceline before settling in Kansas City. Disney began his artistic career in Kansas City, where he founded the Laugh-O-Gram Studio. Several film versions of Mark Twain's novels "The Adventures of Tom Sawyer" and "The Adventures of Huckleberry Finn" have been made. "Meet Me in St. Louis", a musical involving the 1904 St. Louis World's Fair, starred Judy Garland. Part of the 1983 road movie "National Lampoon's Vacation" was shot on location in Missouri, for the Griswolds' trip from Chicago to Los Angeles. The Thanksgiving holiday film "Planes, Trains, and Automobiles" was partially shot at Lambert–St. Louis International Airport. "White Palace" was filmed in St. Louis. The award-winning 2010 film "Winter's Bone" was shot in the Ozarks of Missouri. "Up in the Air" starring George Clooney was filmed in St. Louis. John Carpenter's "Escape from New York" was filmed in St. Louis during the early 1980s due to the large number of abandoned buildings in the city. The 1973 movie "Paper Moon", which starred Ryan and Tatum O'Neal, was partly filmed in St. Joseph. Most of HBO's film "Truman" (1995) was filmed in Kansas City, Independence, and the surrounding area; Gary Sinise won an Emmy for his portrayal of Harry Truman in the film. "Ride With the Devil" (1999), starring Jewel and Tobey Maguire, was filmed in the countryside of Jackson County (where the historic events of the film actually took place). "Gone Girl", a 2014 film starring Ben Affleck, Rosamund Pike, Neil Patrick Harris, and Tyler Perry, was filmed in Cape Girardeau. Missouri hosted the 1904 Summer Olympics at St. Louis, the first time the games were hosted in the United States. 
https://en.wikipedia.org/wiki?curid=19571
Moses Moses, also called "Moshe Rabbenu" ("lit." "Moses our Teacher") in Hebrew, is the most important prophet in Judaism, and an important prophet in Christianity, Islam, the Bahá'í Faith, and a number of other Abrahamic religions. In the biblical narrative he was the leader of the Israelites and lawgiver, to whom the authorship of the Torah, the first five books of the Bible, or the "acquisition of the Torah from heaven", is attributed. Rabbinical Judaism calculated a lifespan of Moses corresponding to 1391–1271 BCE; Jerome gives 1592 BCE, and James Ussher 1571 BCE, as his birth year. Scholarly consensus sees Moses as a legendary figure and not a historical person, while retaining the possibility that a Moses-like figure existed in the 13th century BCE. According to the Book of Exodus, Moses was born in a time when his people, the Israelites, an enslaved minority, were increasing in population and, as a result, the Egyptian Pharaoh worried that they might ally themselves with Egypt's enemies. Moses' Hebrew mother, Jochebed, secretly hid him when the Pharaoh ordered all newborn Hebrew boys to be killed in order to reduce the population of the Israelites. Through the Pharaoh's daughter (identified as Queen Bithia in the Midrash), the child was adopted as a foundling from the Nile river and grew up with the Egyptian royal family. After killing an Egyptian slave-master who was beating a Hebrew, Moses fled across the Red Sea to Midian, where he encountered the Angel of the Lord, speaking to him from within a burning bush on Mount Horeb, which he regarded as the Mountain of God. God sent Moses back to Egypt to demand the release of the Israelites from slavery. Moses said that he could not speak eloquently, so God allowed Aaron, his elder brother, to become his spokesperson. After the Ten Plagues, Moses led the Exodus of the Israelites out of Egypt and across the Red Sea, after which they based themselves at Mount Sinai, where Moses received the Ten Commandments. 
After 40 years of wandering in the desert, Moses died within sight of the Promised Land on Mount Nebo. Several etymologies for the name "Moses" have been proposed. An Egyptian root "msy", "child of", has been considered as a possible etymology, arguably an abbreviation of a theophoric name, as for example in Egyptian names like Thutmoses (Thoth created him) and Ramesses (Ra created him), with the god's name omitted. Abraham Yahuda, based on the spelling given in the Tanakh, argues that it combines "water" or "seed" and "pond, expanse of water", thus yielding the sense of "child of the Nile" ("mw-še"). The Biblical account of Moses' birth provides him with a folk etymology to explain the ostensible meaning of his name. He is said to have received it from the Pharaoh's daughter: "he became her son. She named him Moses (Moshe), saying, 'I drew him out ("meshitihu") of the water.'" This explanation links it to a verb "mashah", meaning "to draw out", which makes the Pharaoh's daughter's declaration a play on words. The princess made a grammatical mistake which is prophetic of his future role in legend, as someone who will "draw the people of Israel out of Egypt through the waters of the Red Sea." The Hebrew etymology in the Biblical story may reflect an attempt to cancel out traces of Moses' Egyptian origins. The Egyptian character of his name was recognized as such by ancient Jewish writers like Philo and Josephus. Philo linked Moses's name to the Egyptian (Coptic) word for water ("möu"), in reference to his finding in the Nile and the biblical folk etymology. Josephus, in his "Antiquities of the Jews", claims that the second element, "-esês", meant 'those who are saved'. The problem of how an Egyptian princess, known to Josephus as Thermutis (identified as Tharmuth) and in later Jewish tradition as Bithiah, could have known Hebrew puzzled medieval Jewish commentators like Abraham ibn Ezra and Hezekiah ben Manoah. 
Hezekiah suggested she either converted or took a tip from Jochebed. The Israelites had settled in the Land of Goshen in the time of Joseph and Jacob, but a new Pharaoh arose who oppressed the children of Israel. At this time Moses was born to his father Amram, son (or descendant) of Kehath the Levite, who entered Egypt with Jacob's household; his mother was Jochebed (also Yocheved), who was kin to Kehath. Moses had one older (by seven years) sister, Miriam, and one older (by three years) brother, Aaron. The Pharaoh had commanded that all male Hebrew children born would be drowned in the river Nile, but Moses' mother placed him in an ark and concealed the ark in the bulrushes by the riverbank, where the baby was discovered and adopted by Pharaoh's daughter, and raised as an Egyptian. One day, after Moses had reached adulthood, he killed an Egyptian who was beating a Hebrew. Moses, in order to escape the Pharaoh's death penalty, fled to Midian (a desert country south of Judah), where he married Zipporah. There, on Mount Horeb, God appeared to Moses as a burning bush, revealed to Moses his name YHWH (probably pronounced Yahweh) and commanded him to return to Egypt and bring his chosen people (Israel) out of bondage and into the Promised Land (Canaan). During the journey, God tried to kill Moses, but Zipporah saved his life. Moses returned to carry out God's command, but God caused the Pharaoh to refuse, and only after God had subjected Egypt to ten plagues did the Pharaoh relent. Moses led the Israelites to the border of Egypt, but there God hardened the Pharaoh's heart once more, so that he could destroy the Pharaoh and his army at the Red Sea Crossing as a sign of his power to Israel and the nations. After defeating the Amalekites in Rephidim, Moses led the Israelites to biblical Mount Sinai, where he was given the Ten Commandments from God, written on stone tablets. 
However, since Moses remained a long time on the mountain, some of the people feared that he might be dead, so they made a statue of a golden calf and worshipped it, thus disobeying and angering God and Moses. Moses, out of anger, broke the tablets, and later ordered the elimination of those who had worshiped the golden statue, which was melted down and fed to the idolaters. He also wrote the ten commandments on a new set of tablets. Later at Mount Sinai, Moses and the elders entered into a covenant, by which Israel would become the people of YHWH, obeying his laws, and YHWH would be their god. Moses delivered the laws of God to Israel, instituted the priesthood under the sons of Moses' brother Aaron, and destroyed those Israelites who fell away from his worship. In his final act at Sinai, God gave Moses instructions for the Tabernacle, the mobile shrine by which he would travel with Israel to the Promised Land. From Sinai, Moses led the Israelites to the Desert of Paran on the border of Canaan. From there he sent twelve spies into the land. The spies returned with samples of the land's fertility, but warned that its inhabitants were giants. The people were afraid and wanted to return to Egypt, and some rebelled against Moses and against God. Moses told the Israelites that they were not worthy to inherit the land, and would wander the wilderness for forty years until the generation who had refused to enter Canaan had died, so that it would be their children who would possess the land. When the forty years had passed, Moses led the Israelites east around the Dead Sea to the territories of Edom and Moab. There they escaped the temptation of idolatry, conquered the lands of Og and Sihon in Transjordan, received God's blessing through Balaam the prophet, and massacred the Midianites, who by the end of the Exodus journey had become the enemies of the Israelites due to their notorious role in enticing the Israelites to sin against God. 
Moses was twice given notice that he would die before entry to the Promised Land: in Numbers 27:13, once he had seen the Promised Land from a viewpoint on Mount Abarim, and again in Numbers 31:1 once battle with the Midianites had been won. On the banks of the Jordan River, in sight of the land, Moses assembled the tribes. After recalling their wanderings he delivered God's laws by which they must live in the land, sang a song of praise and pronounced a blessing on the people, and passed his authority to Joshua, under whom they would possess the land. Moses then went up Mount Nebo to the top of Pisgah, looked over the promised land of Israel spread out before him, and died, at the age of one hundred and twenty. More humble than any other man (Num. 12:3), "there hath not arisen a prophet since in Israel like unto Moses, whom YHWH knew face to face" (Deuteronomy 34:10). The New Testament states that after Moses' death, Michael the Archangel and the Devil disputed over his body (Epistle of Jude 1:9). Moses is honoured among Jews today as the "lawgiver of Israel", and he delivers several sets of laws in the course of the four books. The first is the Covenant Code, the terms of the covenant which God offers to the Israelites at biblical Mount Sinai. Embedded in the covenant are the Decalogue (the Ten Commandments, Exodus 20:1–17) and the Book of the Covenant (Exodus 20:22–23:19). The entire Book of Leviticus constitutes a second body of law, the Book of Numbers begins with yet another set, and the Book of Deuteronomy another. Moses has traditionally been regarded as the author of those four books and the Book of Genesis, which together comprise the Torah, the first section of the Hebrew Bible. The modern scholarly consensus is that Moses is a mythical figure, and while, as William G. 
Dever writes, "a Moses-like figure may have existed somewhere in the southern Transjordan in the mid-late 13th century B.C.", archaeology cannot confirm his existence. Certainly no Egyptian sources mention Moses or the events of Exodus–Deuteronomy, nor has any archaeological evidence been discovered in Egypt or the Sinai wilderness to support the story in which he is the central figure. The story of his discovery picks up a familiar motif in ancient Near Eastern mythological accounts of the ruler who rises from humble origins. Thus Sargon of Akkad's Akkadian account of his own origins runs:
My mother, the high priestess, conceived; in secret she bore me.
She set me in a basket of rushes, with bitumen she sealed my lid.
She cast me into the river which rose over me.
Despite the imposing fame associated with Moses, no source mentions him until he emerges in texts associated with the Babylonian exile. A theory developed by Cornelis Tiele in 1872, which has proved influential, argued that Yahweh was a Midianite god, introduced to the Israelites by Moses, whose father-in-law Jethro was a Midianite priest. It was to such a Moses that Yahweh reveals his real name, hidden from the Patriarchs who knew him only as El Shaddai. Against this view is the modern consensus that most of the Israelites were native to Palestine. Martin Noth argued that the Pentateuch uses the figure of Moses, originally linked to legends of a Transjordan conquest, as a narrative bracket or late redactional device to weld together four of the five originally independent themes of that work. Others have suggested, in one case in a somewhat sensationalist manner, that the Moses story is a distortion or transmogrification of the historical pharaoh Amenmose (c. 1200 BCE), who was dismissed from office and whose name was later simplified to "msy" (Mose). Aidan Dodson regards this hypothesis as "intriguing, but beyond proof." 
Rudolf Smend argues that the two details about Moses that were most likely to be historical are his name, of Egyptian origin, and his marriage to a Midianite woman, details which seem unlikely to have been invented by the Israelites; in Smend's view, all other details given in the biblical narrative are too mythically charged to be seen as accurate data. The name of King Mesha of Moab has been linked to that of Moses. Mesha also is associated with narratives of an exodus and a conquest, and several motifs in stories about him are shared with the Exodus tale and that regarding Israel's war with Moab (2 Kings 3). Mesha rebels against oppression, like Moses; leads his people out of Israel, as Moses does from Egypt; and his first-born son is slaughtered at the wall of Kir-hareseth, as the firstborn of Israel are condemned to slaughter in the Exodus story, "an infernal passover that delivers Mesha while wrath burns against his enemies". An Egyptian version of the tale that crosses over with the Moses story is found in Manetho who, according to the summary in Josephus, wrote that a certain Osarseph, a Heliopolitan priest, became overseer of a band of lepers when Amenophis, following indications by Amenhotep, son of Hapu, had all the lepers in Egypt quarantined in order to cleanse the land so that he might see the gods. The lepers are bundled into Avaris, the former capital of the Hyksos, where Osarseph prescribes for them everything forbidden in Egypt, while proscribing everything permitted in Egypt. They invite the Hyksos to reinvade Egypt, rule with them for 13 years – Osarseph then assumes the name Moses – and are then driven out. Non-biblical writings about Jews, with references to the role of Moses, first appear at the beginning of the Hellenistic period, from 323 BCE to about 146 BCE. Shmuel notes that "a characteristic of this literature is the high honour in which it holds the peoples of the East in general and some specific groups among these peoples." 
In addition to the Judeo-Roman or Judeo-Hellenic historians Artapanus, Eupolemus, Josephus, and Philo, a few non-Jewish historians including Hecataeus of Abdera (quoted by Diodorus Siculus), Alexander Polyhistor, Manetho, Apion, Chaeremon of Alexandria, Tacitus and Porphyry also make reference to him. The extent to which any of these accounts rely on earlier sources is unknown. Moses also appears in other religious texts such as the Mishnah (c. 200 CE), Midrash (200–1200 CE), and the Quran (c. 610–653). The figure of Osarseph in Hellenistic historiography is a renegade Egyptian priest who leads an army of lepers against the pharaoh and is finally expelled from Egypt, changing his name to Moses. The earliest existing reference to Moses in Greek literature occurs in the Egyptian history of Hecataeus of Abdera (4th century BCE). All that remains of his description of Moses are two references made by Diodorus Siculus, wherein, writes historian Arthur Droge, he "describes Moses as a wise and courageous leader who left Egypt and colonized Judaea." Among the many accomplishments described by Hecataeus, Moses had founded cities, established a temple and religious cult, and issued laws. Droge also points out that this statement by Hecataeus was similar to statements made subsequently by Eupolemus. The Jewish historian Artapanus of Alexandria (2nd century BCE) portrayed Moses as a cultural hero, alien to the Pharaonic court. According to theologian John Barclay, the Moses of Artapanus "clearly bears the destiny of the Jews, and in his personal, cultural and military splendor, brings credit to the whole Jewish people." Artapanus goes on to relate how Moses returns to Egypt with Aaron, and is imprisoned, but miraculously escapes through the name of YHWH in order to lead the Exodus. This account further testifies that all Egyptian temples of Isis thereafter contained a rod, in remembrance of that used for Moses' miracles. 
He describes Moses as 80 years old, "tall and ruddy, with long white hair, and dignified." Some historians, however, point out the "apologetic nature of much of Artapanus' work," with his addition of extra-biblical details, such as his references to Jethro: the non-Jewish Jethro expresses admiration for Moses' gallantry in helping his daughters, and chooses to adopt Moses as his son. Strabo, a Greek historian, geographer and philosopher, in his "Geographica" (c. 24 CE), wrote in detail about Moses, whom he considered to be an Egyptian who deplored the situation in his homeland, and thereby attracted many followers who respected the deity. He writes, for example, that Moses opposed the picturing of the deity in the form of man or animal, and was convinced that the deity was an entity which encompassed everything – land and sea. In Strabo's writings of the history of Judaism as he understood it, he describes various stages in its development: from the first stage, including Moses and his direct heirs; to the final stage where "the Temple of Jerusalem continued to be surrounded by an aura of sanctity." Strabo's "positive and unequivocal appreciation of Moses' personality is among the most sympathetic in all ancient literature." His portrayal of Moses is said to be similar to the writing of Hecataeus who "described Moses as a man who excelled in wisdom and courage." Egyptologist Jan Assmann concludes that Strabo was the historian "who came closest to a construction of Moses' religion as monotheistic and as a pronounced counter-religion." It recognized "only one divine being whom no image can represent... [and] the only way to approach this god is to live in virtue and in justice." The Roman historian Tacitus (c. 56–120 CE) refers to Moses by noting that the Jewish religion was monotheistic and without a clear image. His primary work, wherein he describes Jewish philosophy, is his "Histories" (c. 
100), where, according to 18th-century translator and Irish dramatist Arthur Murphy, as a result of the Jewish worship of one God, "pagan mythology fell into contempt." Tacitus states that, despite various opinions current in his day regarding the Jews' ethnicity, most of his sources are in agreement that there was an Exodus from Egypt. By his account, the Pharaoh Bocchoris, suffering from a plague, banished the Jews in response to an oracle of the god Zeus-Amun. In this version, Moses and the Jews wander through the desert for only six days, capturing the Holy Land on the seventh. The Septuagint, the Greek version of the Hebrew Bible, influenced Longinus, who may have been the author of the great book of literary criticism, "On the Sublime". The date of composition is unknown, but it is commonly assigned to the late 1st century CE. The writer quotes Genesis in a "style which presents the nature of the deity in a manner suitable to his pure and great being"; however, he does not mention Moses by name, calling him 'no chance person' but "the Lawgiver" ("thesmothete") of the Jews, a term that puts him on a par with Lycurgus and Minos. Aside from a reference to Cicero, Moses is the only non-Greek writer quoted in the work; contextually he is put on a par with Homer, and he is described "with far more admiration than even Greek writers who treated Moses with respect, such as Hecataeus and Strabo." In Josephus' (37 – c. 100 CE) "Antiquities of the Jews", Moses is mentioned throughout. For example, Book VIII, Ch. IV describes Solomon's Temple, also known as the First Temple, at the time the Ark of the Covenant was first moved into the newly built temple. According to Feldman, Josephus also attaches particular significance to Moses' possession of the "cardinal virtues of wisdom, courage, temperance, and justice." He also includes piety as an added fifth virtue. In addition, he "stresses Moses' willingness to undergo toil and his careful avoidance of bribery. 
Like Plato's philosopher-king, Moses excels as an educator." Numenius, a Greek philosopher who was a native of Apamea, in Syria, wrote during the latter half of the 2nd century CE. Historian Kenneth Guthrie writes that "Numenius is perhaps the only recognized Greek philosopher who explicitly studied Moses, the prophets, and the life of Jesus..." The Christian saint and religious philosopher Justin Martyr (103–165 CE) drew the same conclusion as Numenius, according to other experts. Theologian Paul Blackham notes that Justin considered Moses to be "more trustworthy, profound and truthful because he is "older" than the Greek philosophers." Most of what is known about Moses from the Bible comes from the books of Exodus, Leviticus, Numbers and Deuteronomy. The majority of scholars consider the compilation of these books to go back to the Persian period, 538–332 BCE, but based on earlier written and oral traditions. There is a wealth of stories and additional information about Moses in the Jewish apocrypha and in the genre of rabbinical exegesis known as Midrash, as well as in the primary works of the Jewish oral law, the Mishnah and the Talmud. Moses is also given a number of bynames in Jewish tradition. The Midrash identifies Moses as one of seven biblical personalities who were called by various names. Moses' other names were Jekuthiel (by his mother), Heber (by his father), Jered (by Miriam), Avi Zanoah (by Aaron), Avi Gedor (by Kohath), Avi Soco (by his wet-nurse), and Shemaiah ben Nethanel (by the people of Israel). Moses is also attributed the names Toviah (as a first name), and Levi (as a family name) (Vayikra Rabbah 1:3), Heman, Mechoqeiq (lawgiver) and Ehl Gav Ish (Numbers 12:3). In another exegesis, Moses ascended from the first heaven to the seventh, and even visited Paradise and Hell alive, after he saw the Divine vision on Mount Horeb. 
Jewish historians who lived at Alexandria, such as Eupolemus, attributed to Moses the feat of having taught the Phoenicians their alphabet, similar to legends of Thoth. Artapanus of Alexandria explicitly identified Moses not only with Thoth/Hermes, but also with the Greek figure Musaeus (whom he called "the teacher of Orpheus"), and ascribed to him the division of Egypt into 36 districts, each with its own liturgy. He named the princess who adopted Moses as Merris, wife of Pharaoh Chenephres. Jewish tradition considers Moses to be the greatest prophet who ever lived. Despite his importance, Judaism stresses that Moses was a human being, and is therefore not to be worshipped. Only God is worthy of worship in Judaism. To Orthodox Jews, Moses is called "Moshe Rabbenu, `Eved HaShem, Avi haNeviim zya"a": "Our Leader Moshe, Servant of God, Father of all the Prophets (may his merit shield us, amen)". In the orthodox view, Moses received not only the Torah, but also the revealed (written and oral) and the hidden (the "`hokhmat nistar" teachings, which gave Judaism the Zohar of the Rashbi, the Torah of the Ari haQadosh and all that is discussed in the Heavenly Yeshiva between the Ramhal and his masters). Arising in part from his age of death (120 according to Deut. 34:7) and the statement that "his eye had not dimmed, and his vigor had not diminished," the phrase "may you live to 120" has become a common blessing among Jews, especially since 120 is elsewhere stated as the maximum age for Noah's descendants (by one interpretation). Moses is mentioned more often in the New Testament than any other Old Testament figure. For Christians, Moses is often a symbol of God's law, as reinforced and expounded on in the teachings of Jesus. New Testament writers often compared Jesus' words and deeds with Moses' to explain Jesus' mission. 
In Acts 7:39–43, 51–53, for example, the rejection of Moses by the Jews who worshipped the golden calf is likened to the rejection of Jesus by the Jews that continued in traditional Judaism. Moses also figures in several of Jesus' messages. When he met the Pharisee Nicodemus at night in the third chapter of the Gospel of John, he compared Moses' lifting up of the bronze serpent in the wilderness, which any Israelite could look at and be healed, to his own lifting up (by his death and resurrection) for the people to look at and be healed. In the sixth chapter, Jesus responded to the people's claim that Moses provided them "manna" in the wilderness by saying that it was not Moses, but God, who provided. Calling himself the "bread of life", Jesus stated that He was provided to feed God's people. Moses, along with Elijah, is presented as meeting with Jesus in all three Synoptic Gospel accounts of the Transfiguration of Jesus, in Matthew 17, Mark 9, and Luke 9. In Matthew 23, Jesus refers to the scribes and the Pharisees of the Temple as "seated in the chair of Moses" ("epi tēs Mōuseōs kathedras"). His relevance to modern Christianity has not diminished. Moses is considered to be a saint by several churches, and is commemorated as a prophet in the respective Calendars of Saints of the Eastern Orthodox Church, the Roman Catholic Church, and the Lutheran churches on September 4. In Eastern Orthodox liturgics for September 4, Moses is commemorated as the "Holy Prophet and God-seer Moses, on Mount Nebo". The Orthodox Church also commemorates him on the Sunday of the Forefathers, two Sundays before the Nativity. The Armenian Apostolic Church commemorates him as one of the Holy Forefathers in their Calendar of Saints on July 30. Members of The Church of Jesus Christ of Latter-day Saints (colloquially called Mormons) generally view Moses in the same way that other Christians do. 
However, in addition to accepting the biblical account of Moses, Mormons include Selections from the Book of Moses as part of their scriptural canon. This book is believed to be the translated writings of Moses, and is included in the Pearl of Great Price. Latter-day Saints are also unique in believing that Moses was taken to heaven without having tasted death (translated). In addition, Joseph Smith and Oliver Cowdery stated that on April 3, 1836, Moses appeared to them in the Kirtland Temple (located in Kirtland, Ohio) in a glorified, immortal, physical form and bestowed upon them the "keys of the gathering of Israel from the four parts of the earth, and the leading of the ten tribes from the land of the north." Moses is mentioned more in the Quran than any other individual, and his life is narrated and recounted more than that of any other Islamic prophet. In general, Moses is described in ways which parallel the Islamic prophet Muhammad. Like Muhammad, Moses is defined in the Quran as both prophet ("nabi") and messenger ("rasul"), the latter term indicating that he was one of those prophets who brought a scripture and law to his people. One of the hadith, or traditional narratives about Muhammad's life, describes a meeting in heaven between Moses and Muhammad, which resulted in Muslims observing five daily prayers. Huston Smith says this was "one of the crucial events in Muhammad's life". Moses is mentioned 502 times in the Quran; passages mentioning Moses include 2.49–61, 7.103–60, 10.75–93, 17.101–04, 20.9–97, 26.10–66, 27.7–14, 28.3–46, 40.23–30, 43.46–55, 44.17–31, and 79.15–25, among many others. Most of the key events in Moses' life which are narrated in the Bible are to be found dispersed through the different Surahs of the Quran, along with a story about meeting Khidr which is not found in the Bible. 
In the Moses story related by the Quran, Jochebed is commanded by God to place Moses in an ark and cast him on the waters of the Nile, thus abandoning him completely to God's protection. The Pharaoh's wife Asiya, not his daughter, found Moses floating in the waters of the Nile. She convinced the Pharaoh to keep him as their son because they were not blessed with any children. The Quran's account has emphasized Moses' mission to invite the Pharaoh to accept God's divine message as well as give salvation to the Israelites. According to the Quran, Moses encourages the Israelites to enter Canaan, but they are unwilling to fight the Canaanites, fearing certain defeat. Moses responds by pleading to Allah that he and his brother Aaron be separated from the rebellious Israelites, after which the Israelites are made to wander for 40 years. According to some Islamic tradition, Moses is believed to be buried at Maqam El-Nabi Musa, Jericho. Moses is one of the most important of God's messengers in the Bahá'í Faith, being designated a Manifestation of God. An epithet of Moses in Bahá'í scriptures is the One Who Conversed with God. According to the Bahá'í Faith, Bahá'u'lláh, the founder of the faith, is the one who spoke to Moses from the burning bush. ‘Abdu’l-Bahá has highlighted the fact that Moses, like Abraham, had none of the makings of a great man of history, but through God's assistance he was able to achieve many great things. He is described as having been "for a long time a shepherd in the wilderness," of having had a stammer, and of being "much hated and detested" by the Pharaoh and the ancient Egyptians of his time. He is said to have been raised in an oppressive household, and to have been known, in Egypt, as a man who had committed murder – though he had done so in order to prevent an act of cruelty. Nevertheless, like Abraham, through the assistance of God, he achieved great things and gained renown even beyond the Levant. 
Chief among these achievements was the freeing of his people, the Hebrews, from bondage in Egypt and leading "them to the Holy Land." He is viewed as the one who bestowed on Israel "the religious and the civil law," which gave them "honour among all nations," and which spread their fame to different parts of the world. Furthermore, through the law, Moses is believed to have led the Hebrews "to the highest possible degree of civilization at that period." ‘Abdu’l-Bahá asserts that the ancient Greek philosophers regarded "the illustrious men of Israel as models of perfection." Chief among these philosophers, he says, was Socrates, who "visited Syria, and took from the children of Israel the teachings of the Unity of God and of the immortality of the soul." Moses is further described as paving the way for Bahá'u'lláh and his ultimate revelation, and as a teacher of truth, whose teachings were in line with the customs of his time. In a metaphorical sense in the Christian tradition, a "Moses" has been referred to as the leader who delivers the people from a terrible situation. Among the Presidents of the United States known to have used the symbolism of Moses were Harry S. Truman, Jimmy Carter, Ronald Reagan, Bill Clinton, George W. Bush and Barack Obama, who referred to his supporters as "the Moses generation." In subsequent years, theologians linked the Ten Commandments with the formation of early democracy. Scottish theologian William Barclay described them as "the universal foundation of all things… the law without which nationhood is impossible. …Our society is founded upon it." Pope Francis addressed the United States Congress in 2015, stating that all people need to "keep alive their sense of unity by means of just legislation... [and] the figure of Moses leads us directly to God and thus to the transcendent dignity of the human being." 
References to Moses were used by the Puritans, who relied on the story of Moses to give meaning and hope to the lives of Pilgrims seeking religious and personal freedom in North America. John Carver was the first governor of Plymouth colony and first signer of the Mayflower Compact, which he wrote in 1620 during the ship "Mayflower"'s three-month voyage. He inspired the Pilgrims with a "sense of earthly grandeur and divine purpose," notes historian Jon Meacham, and was called the "Moses of the Pilgrims." American writer James Russell Lowell noted the similarity of the founding of America by the Pilgrims to that of ancient Israel by Moses. After Carver's death the following year, William Bradford was made governor. He feared that the remaining Pilgrims would not survive the hardships of the new land, with half their people having already died within months of arriving. Bradford evoked the symbol of Moses to the weakened and desperate Pilgrims to help calm them and give them hope: "Violence will break all. Where is the meek and humble spirit of Moses?" William G. Dever explains the attitude of the Pilgrims: "We considered ourselves the 'New Israel,' particularly we in America. And for that reason we knew who we were, what we believed in and valued, and what our 'manifest destiny' was." On July 4, 1776, immediately after the Declaration of Independence was officially passed, the Continental Congress asked John Adams, Thomas Jefferson, and Benjamin Franklin to design a seal that would clearly represent a symbol for the new United States. They chose the symbol of Moses leading the Israelites to freedom. The Founding Fathers of the United States inscribed the words of Moses on the Liberty Bell: "Proclaim Liberty thro' all the Land to all the Inhabitants thereof." 
(Leviticus 25) After the death of George Washington in 1799, two-thirds of his eulogies referred to him as "America's Moses," with one orator saying that "Washington has been the same to us as Moses was to the Children of Israel." Benjamin Franklin, in 1788, saw the difficulties that some of the newly independent American states were having in forming a government, and proposed that until a new code of laws could be agreed to, they should be governed by "the laws of Moses," as contained in the Old Testament. He justified his proposal by explaining that the laws had worked in biblical times: "The Supreme Being ... having rescued them from bondage by many miracles, performed by his servant Moses, he personally delivered to that chosen servant, in the presence of the whole nation, a constitution and code of laws for their observance." John Adams, second President of the United States, stated why he relied on the laws of Moses over Greek philosophy for establishing the United States Constitution: "As much as I love, esteem, and admire the Greeks, I believe the Hebrews have done more to enlighten and civilize the world. Moses did more than all their legislators and philosophers." Swedish historian Hugo Valentin credited Moses as the "first to proclaim the rights of man." Historian Gladys L. Knight describes how leaders who emerged during and after the period in which slavery in the United States was legal often personified the Moses symbol: "The symbol of Moses was empowering in that it served to amplify a need for freedom." Therefore, when Abraham Lincoln was assassinated in 1865 after the passage of the amendment to the Constitution outlawing slavery, Black Americans said they had lost "their Moses." Lincoln biographer Charles Carleton Coffin writes, "The millions whom Abraham Lincoln delivered from slavery will ever liken him to Moses, the deliverer of Israel."
Similarly, Harriet Tubman, who rescued approximately seventy enslaved family members and friends, was also described as the "Moses" of her people. In the 1960s, a leading figure in the civil rights movement was Martin Luther King Jr., who was called "a modern Moses" and often referred to Moses in his speeches: "The struggle of Moses, the struggle of his devoted followers as they sought to get out of Egypt. This is something of the story of every people struggling for freedom." Michelangelo's statue of Moses (1513–1515), in the Church of San Pietro in Vincoli, Rome, is one of the most familiar statues in the world. The horns the sculptor included on Moses' head are the result of a mistranslation of the Hebrew Bible into the Latin Vulgate Bible with which Michelangelo was familiar. The Hebrew word taken from "Exodus" means either a "horn" or an "irradiation." Experts at the Archaeological Institute of America show that the term was used when Moses "returned to his people after seeing as much of the Glory of the Lord as human eye could stand," and his face "reflected radiance." In early Jewish art, moreover, Moses is often "shown with rays coming out of his head." Another author explains, "When Saint Jerome translated the Old Testament into Latin, he thought no one but Christ should glow with rays of light – so he advanced the secondary translation." However, writer J. Stephen Lang points out that Jerome's version actually described Moses as "giving off hornlike rays," and he "rather clumsily translated it to mean 'having horns.'" It has also been noted that Michelangelo had Moses seated on a throne, although Moses was never given the title of king and never sat on such a throne. Moses is depicted in several U.S. government buildings because of his legacy as a lawgiver. In the Library of Congress stands a large statue of Moses alongside a statue of Paul the Apostle. Moses is one of the 23 lawgivers depicted in marble bas-reliefs in the chamber of the U.S.
House of Representatives in the United States Capitol. The plaque's overview states: "Moses (c. 1350–1250 B.C.) Hebrew prophet and lawgiver; transformed a wandering people into a nation; received the Ten Commandments." The other twenty-two figures have their profiles turned to Moses, whose bas-relief is the only one facing forward. Moses appears eight times in carvings that ring the Supreme Court Great Hall ceiling. His face is presented along with those of other ancient figures such as Solomon, the Greek god Zeus, and the Roman goddess of wisdom, Minerva. The Supreme Court Building's east pediment depicts Moses holding two tablets. Tablets representing the Ten Commandments can be found carved in the oak courtroom doors, on the support frame of the courtroom's bronze gates, and in the library woodwork. A controversial image is one that sits directly above the head of the Chief Justice of the United States: in the center of the 40-foot-long Spanish marble carving is a tablet displaying Roman numerals I through X, with some numbers partially hidden. In the late eighteenth century, the deist Thomas Paine commented at length on Moses' Laws in "The Age of Reason" (1794, 1795, and 1807). Paine considered Moses to be a "detestable villain" and cited Numbers 31 as an example of his "unexampled atrocities"; in that passage, the Jewish army had returned from conquering the Midianites, and Moses went angrily to meet it. The prominent atheist Richard Dawkins also made reference to these verses in his 2006 book, "The God Delusion", concluding that Moses was "not a great role model for modern moralists". Rabbi Joel Grossman argued that the story is a "powerful fable of lust and betrayal", and that Moses' execution of the women was a symbolic condemnation of those who seek to turn sex and desire to evil purposes.
Alan Levin, an educational specialist with the Reform movement, has similarly suggested that the story should be taken as a cautionary tale, to "warn successive generations of Jews to watch their own idolatrous behavior". However, some Jewish sources defend Moses' role. The Chasam Sofer emphasizes that this war was not fought at Moses' behest, but was commanded by God as an act of revenge against the Midianite women, who, according to the Biblical account, had seduced the Israelites and led them to sin. In "Legends of the Jews", Phinehas son of Eleazar defends the soldiers' decision to leave the women alive, since Moses had instructed them to take revenge only "on the Midianites," without mentioning Midianite women.
https://en.wikipedia.org/wiki?curid=19577
Lambda phage Enterobacteria phage λ (lambda phage, coliphage λ, officially Escherichia virus Lambda) is a bacterial virus, or bacteriophage, that infects the bacterial species "Escherichia coli" ("E. coli"). It was discovered by Esther Lederberg in 1950. The wild type of this virus has a temperate life cycle that allows it to either reside within the genome of its host through lysogeny or enter into a lytic phase, during which it kills and lyses the cell to produce offspring. Lambda strains, mutated at specific sites, are unable to lysogenize cells; instead, they grow and enter the lytic cycle after superinfecting an already lysogenized cell. The phage particle consists of a head (also known as a capsid), a tail, and tail fibers (see image of virus below). The head contains the phage's double-stranded linear DNA genome. During infection, the phage particle recognizes and binds to its host, "E. coli", causing DNA in the head of the phage to be ejected through the tail into the cytoplasm of the bacterial cell. Usually, a "lytic cycle" ensues, where the lambda DNA is replicated and new phage particles are produced within the cell. This is followed by cell lysis, releasing the cell contents, including virions that have been assembled, into the environment. However, under certain conditions, the phage DNA may integrate itself into the host cell chromosome in the lysogenic pathway. In this state, the λ DNA is called a prophage and stays resident within the host's genome without apparent harm to the host. The host is termed a lysogen when a prophage is present. This prophage may enter the lytic cycle when the lysogen enters a stressed condition. The virus particle consists of a head and a tail that can have tail fibers. The whole particle consists of 12–14 different proteins with more than 1000 protein molecules total and one DNA molecule located in the phage head. However, it is still not entirely clear whether the L and M proteins are part of the virion.
The genome contains 48,490 base pairs of double-stranded, linear DNA, with 12-base single-stranded segments at both 5' ends. These two single-stranded segments are the "sticky ends" of what is called the "cos" site. The "cos" site circularizes the DNA in the host cytoplasm. In its circular form, the phage genome, therefore, is 48,502 base pairs in length. The lambda genome can be inserted into the "E. coli" chromosome and is then called a prophage. See section below for details. Lambda phage is a non-contractile tailed phage, meaning during an infection event it cannot 'force' its DNA through a bacterial cell membrane. It must instead use an existing pathway to invade the host cell, having evolved the tip of its tail to interact with a specific pore to allow entry of its DNA into the host. On initial infection, the stability of cII determines the lifestyle of the phage; stable cII will lead to the lysogenic pathway, whereas if cII is degraded the phage will go into the lytic pathway. Low temperature, starvation of the cells, and high multiplicity of infection (MOI) are known to favor lysogeny (see later discussion). This occurs without the N protein interacting with the DNA; the protein instead binds to the freshly transcribed mRNA. Nut sites contain three conserved "boxes", of which only BoxB is essential. This is the lifecycle that the phage follows after most infections, where the cII protein does not reach a high enough concentration due to degradation, so does not activate its promoters. Rightward transcription expresses the "O", "P" and "Q" genes. O and P are responsible for initiating replication, and Q is another antiterminator that allows the expression of head, tail, and lysis genes from "PR'". Q is similar to N in its effect: Q binds to RNA polymerase at "Qut" sites and the resulting complex can ignore terminators; however, the mechanism is very different, as the Q protein first associates with a DNA sequence rather than an mRNA sequence.
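The circularization arithmetic above can be restated as a throwaway check (variable names here are mine, not standard nomenclature):

```python
# Lambda's linear genome: 48,490 bp of duplex DNA, plus a 12-base
# single-stranded overhang (the "cos" sticky end) at each 5' end.
linear_duplex_bp = 48490
cos_overhang_bases = 12

# When the two complementary overhangs anneal in the host cytoplasm,
# they form 12 new base pairs, so the circle gains exactly one
# overhang's worth of length over the fully paired linear form.
circular_bp = linear_duplex_bp + cos_overhang_bases
print(circular_bp)  # 48502, the circular genome length given in the text
```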
Leftward transcription expresses the "gam", "red", "xis", and "int" genes. Gam and red proteins are involved in recombination. Gam is also important in that it inhibits the host RecBCD nuclease from degrading the 3’ ends in rolling circle replication. Int and xis are integration and excision proteins vital to lysogeny. The lysogenic lifecycle begins once the cI protein reaches a high enough concentration to activate its promoters, after a small number of infections. The prophage is duplicated with every subsequent cell division of the host. The phage genes expressed in this dormant state code for proteins that repress expression of other phage genes (such as the structural and lysis genes) in order to prevent entry into the lytic cycle. These repressive proteins are broken down when the host cell is under stress, resulting in the expression of the repressed phage genes. Stress can be from starvation, poisons (like antibiotics), or other factors that can damage or destroy the host. In response to stress, the activated prophage is excised from the DNA of the host cell by one of the newly expressed gene products and enters its lytic pathway. The integration of phage λ takes place at a special attachment site in the bacterial and phage genomes, called "attλ". The sequence of the bacterial att site is called "attB", between the "gal" and "bio" operons, and consists of the parts B-O-B', whereas the complementary sequence in the circular phage genome is called "attP" and consists of the parts P-O-P'. The integration itself is a sequential exchange (see genetic recombination) via a Holliday junction and requires both the phage protein Int and the bacterial protein IHF ("integration host factor"). Both Int and IHF bind to "attP" and form an intasome, a DNA-protein-complex designed for site-specific recombination of the phage and host DNA. The original B-O-B' sequence is changed by the integration to B-O-P'-phage DNA-P-O-B'. The phage DNA is now part of the host's genome. 
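The att-site bookkeeping above can be illustrated with a toy string model. The `integrate` helper below is purely hypothetical (it mimics the B-O-B' × P-O-P' crossover as text manipulation, and is not any real bioinformatics API):

```python
def integrate(att_b: str, att_p: str, core: str = "O") -> str:
    """Toy model of Int/IHF-mediated site-specific recombination:
    a crossover at the shared core O turns attB (B-O-B') on the host
    chromosome and attP (P-O-P') on the circular phage genome into
    the hybrid sites B-O-P' and P-O-B' flanking the inserted phage DNA."""
    b, b_prime = att_b.split(core)   # host arms:  B | B'
    p, p_prime = att_p.split(core)   # phage arms: P | P'
    # Exchange at the core linearizes the phage circle into the host.
    return b + core + p_prime + "-phage DNA-" + p + core + b_prime

print(integrate("B-O-B'", "P-O-P'"))  # B-O-P'-phage DNA-P-O-B'
```

The output reproduces the integrated arrangement B-O-P'-phage DNA-P-O-B' described in the text.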
The classic induction of a lysogen involved irradiating the infected cells with UV light. Any situation where a lysogen undergoes DNA damage, or where the SOS response of the host is otherwise stimulated, leads to induction. Multiplicity reactivation (MR) is the process by which multiple viral genomes, each containing inactivating genome damage, interact within an infected cell to form a viable viral genome. MR was originally discovered with phage T4, but was subsequently found in phage λ (as well as in numerous other bacterial and mammalian viruses). MR of phage λ inactivated by UV light depends on the recombination function of either the host or of the infecting phage. Absence of both recombination systems leads to a loss of MR. Survival of UV-irradiated phage λ is increased when the "E. coli" host is lysogenic for a homologous prophage, a phenomenon termed prophage reactivation. Prophage reactivation in phage λ appears to occur by a recombinational repair process similar to that of MR. The repressor found in phage lambda is a notable example of the level of control possible over gene expression by a very simple system. It forms a 'binary switch' with two genes under mutually exclusive expression, as discovered by Barbara J. Meyer. The lambda repressor gene system consists of a series of elements arranged from left to right on the chromosome. The lambda repressor, also known as the cI protein, is a self-assembling dimer. It binds DNA in the helix-turn-helix binding motif. It regulates the transcription of the cI and cro genes. The life cycle of lambda phages is controlled by the cI and Cro proteins. The lambda phage will remain in the lysogenic state if cI proteins predominate, but will be transformed into the lytic cycle if Cro proteins predominate. The cI dimer may bind to any of three operators, OR1, OR2, and OR3, in the order OR1 > OR2 > OR3. Binding of a cI dimer to OR1 enhances binding of a second cI dimer to OR2, an effect called cooperativity.
Thus, OR1 and OR2 are almost always simultaneously occupied by cI. However, this does not increase the affinity between cI and OR3, which will be occupied only when the cI concentration is high. At high concentrations of cI, the dimers will also bind to operators OL1 and OL2 (which are over 2 kb away from the R operators). When cI dimers are bound to OL1, OL2, OR1, and OR2, a loop is induced in the DNA, allowing these dimers to bind together to form an octamer, a phenomenon called long-range cooperativity. Upon formation of the octamer, cI dimers may cooperatively bind to OL3 and OR3, repressing transcription of cI. This "autonegative" regulation ensures a stable minimum concentration of the repressor molecule and, should SOS signals arise, allows for more efficient prophage induction. An important distinction here is that between the two decisions: lysogeny versus lysis on infection, and continued lysogeny versus lysis from a prophage. The latter is determined solely by the activation of RecA in the SOS response of the cell, as detailed in the section on induction. The former will also be affected by this; a cell undergoing an SOS response will always be lysed, as no cI protein will be allowed to build up. However, the initial lytic/lysogenic decision on infection is also dependent on the cII and cIII proteins. In cells with sufficient nutrients, protease activity is high, which breaks down cII; this leads to the lytic lifestyle. In cells with limited nutrients, protease activity is low, making cII stable; this leads to the lysogenic lifestyle. cIII appears to stabilize cII, both directly and by acting as a competitive inhibitor to the relevant proteases. This means that a cell "in trouble", i.e. lacking in nutrients and in a more dormant state, is more likely to lysogenise.
This would be selected for because the phage can lie dormant in the bacterium until the host falls on better times, at which point the phage can create more copies of itself with the additional resources available and with the more likely proximity of further infectable cells. A full biophysical model for lambda's lysis-lysogeny decision remains to be developed. Computer modeling and simulation suggest that random processes during infection drive the selection of lysis or lysogeny within individual cells. However, recent experiments suggest that physical differences among cells, existing prior to infection, predetermine whether a cell will lyse or become a lysogen. Lambda phage has been used heavily as a model organism, and has been a rich source of useful tools in microbial genetics and, later, in molecular genetics. Uses include its application as a vector for the cloning of recombinant DNA; the use of its site-specific recombinase (Int) for the shuffling of cloned DNAs by the Gateway method; and the application of its Red operon, including the proteins Red alpha (also called 'exo'), beta, and gamma, in the DNA engineering method called recombineering. A sizable portion of the 48 kb lambda genome is not essential for productive infection and can be replaced by foreign DNA. Lambda phage will enter bacteria more easily than plasmids, making it a useful vector that can destroy or can become part of the host's DNA. Lambda phage can be manipulated and used as an anti-cancer vaccine nanoparticle targeting human aspartyl (asparaginyl) β-hydroxylase (ASPH, HAAH). Lambda phage has also been of major importance in the study of specialized transduction.
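As a rough illustration of the nutrient-dependent lytic/lysogenic decision discussed above, the toy simulation below treats cII stability as a simple threshold on noisy protease activity. Every rate, threshold, and noise term here is an invented placeholder for the sketch, not a value from the modeling literature:

```python
import random

def infection_outcome(nutrient_level, rng):
    """Toy model of the cII switch: well-fed cells have high protease
    activity, which degrades cII and favors lysis; starved cells keep
    cII stable, favoring lysogeny. All numbers are illustrative only."""
    protease = nutrient_level + rng.gauss(0, 0.15)  # cell-to-cell noise
    cII = max(0.0, 1.0 - protease)                  # cII surviving degradation
    return "lysogenic" if cII > 0.5 else "lytic"

rng = random.Random(42)
rich = [infection_outcome(0.9, rng) for _ in range(1000)]
starved = [infection_outcome(0.2, rng) for _ in range(1000)]
# Starved cells lysogenize far more often than well-fed ones,
# while the noise term makes each individual infection stochastic.
print(rich.count("lysogenic"), starved.count("lysogenic"))
```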
https://en.wikipedia.org/wiki?curid=18310
Louis Armstrong Louis Daniel Armstrong (August 4, 1901 – July 6, 1971), nicknamed "Satchmo", "Satch", and "Pops", was an American trumpeter, composer, vocalist, and actor who was among the most influential figures in jazz. His career spanned five decades, from the 1920s to the 1960s, and different eras in the history of jazz. In 2017, he was inducted into the Rhythm & Blues Hall of Fame. Armstrong was born and raised in New Orleans. Coming to prominence in the 1920s as an inventive trumpet and cornet player, Armstrong was a foundational influence in jazz, shifting the focus of the music from collective improvisation to solo performance. Around 1922, he followed his mentor, Joe "King" Oliver, to Chicago to play in the Creole Jazz Band. In Chicago, he spent time with other popular jazz musicians, reconnecting with his friend Bix Beiderbecke and spending time with Hoagy Carmichael and Lil Hardin. He earned a reputation at "cutting contests", and relocated to New York in order to join Fletcher Henderson's band. With his instantly recognizable rich, gravelly voice, Armstrong was also an influential singer and skillful improviser, bending the lyrics and melody of a song. He was also skilled at scat singing. Armstrong is renowned for his charismatic stage presence and voice as well as his trumpet playing. By the end of Armstrong's career in the 1960s, his influence had spread to popular music in general. Armstrong was one of the first popular African-American entertainers to "cross over", meaning his music transcended his skin color in a racially divided America. He rarely publicly politicized his race, to the dismay of fellow African Americans, but took a well-publicized stand for desegregation in the Little Rock crisis. He was able to access the upper echelons of American society at a time when this was difficult for black men. Armstrong appeared in films such as "High Society" (1956) alongside Bing Crosby, Grace Kelly, and Frank Sinatra, and "Hello, Dolly!" 
(1969) starring Barbra Streisand. He received many accolades including three Grammy Award nominations and a win for his vocal performance of "Hello, Dolly!" in 1964. Armstrong often stated that he was born on July 4, 1900. Although he died in 1971, it was not until the mid-1980s that his true birth date, August 4, 1901, was discovered by Tad Jones through research into baptismal records. At least three other biographies treat the July 4 birth date as a myth. Armstrong was born in New Orleans to Mary Albert and William Armstrong. Albert was from Boutte, Louisiana, and gave birth at home when she was about sixteen. William Armstrong abandoned the family shortly after. About two years later, William had a daughter, Beatrice "Mama Lucy" Armstrong, who was raised by Albert. Louis Armstrong was raised by his grandmother until the age of five, when he was returned to his mother. He spent his youth in poverty in a rough neighborhood known as The Battlefield. At six he attended the Fisk School for Boys, a school that accepted black children in the racially segregated system of New Orleans. He did odd jobs for the Karnoffskys, a family of Lithuanian Jews. While selling coal in Storyville, he heard spasm bands, groups that played music out of household objects. He heard the early sounds of jazz from bands that played in brothels and dance halls such as Pete Lala's, where King Oliver performed. The Karnoffskys took him in and treated him like family. Knowing he lived without a father, they fed and nurtured him. In his memoir "Louis Armstrong + the Jewish Family in New Orleans, La., the Year of 1907", he described his discovery that this family was also subject to discrimination by "other white folks" who felt that they were better than Jews: "I was only seven years old but I could easily see the ungodly treatment that the white folks were handing the poor Jewish family whom I worked for."
He wore a Star of David pendant for the rest of his life and wrote about what he learned from them: "how to live—real life and determination." His first musical performance may have been at the side of the Karnoffskys' junk wagon. To distinguish them from other hawkers, he tried playing a tin horn to attract customers. Morris Karnoffsky gave Armstrong an advance toward the purchase of a cornet from a pawn shop. When Armstrong was eleven, he dropped out of school. His mother moved into a one-room house on Perdido Street with him, Lucy, and her common-law husband, Tom Lee, next door to her brother Ike and his two sons. Armstrong joined a quartet of boys who sang in the streets for money. He also got into trouble. Cornetist Bunk Johnson said he taught the eleven-year-old to play by ear at Dago Tony's honky tonk. (In his later years Armstrong credited King Oliver.) He said about his youth, "Every time I close my eyes blowing that trumpet of mine—I look right in the heart of good old New Orleans ... It has given me something to live for." Borrowing his stepfather's gun without permission, he fired a blank into the air and was arrested on December 31, 1912. He spent the night at New Orleans Juvenile Court, then was sentenced the next day to detention at the Colored Waif's Home. Life at the home was spartan: mattresses were absent, and meals were often little more than bread and molasses. Captain Joseph Jones ran the home like a military camp and used corporal punishment. Armstrong developed his cornet skills by playing in the band. Peter Davis, who frequently appeared at the home at the request of Captain Jones, became Armstrong's first teacher and chose him as bandleader. With this band, the thirteen-year-old Armstrong attracted the attention of Kid Ory. On June 14, 1914, Armstrong was released into the custody of his father and his new stepmother, Gertrude. He lived in this household with two stepbrothers for several months.
After Gertrude gave birth to a daughter, Armstrong's father never welcomed him, so he returned to his mother, Mary Albert. In her small home, he had to share a bed with his mother and sister. His mother still lived in The Battlefield, leaving him open to old temptations, but he sought work as a musician. He found a job at a dance hall owned by Henry Ponce, who had connections to organized crime. He met the six-foot-tall drummer Black Benny, who became his guide and bodyguard. Around the age of fifteen, he pimped for a prostitute named Nootsy, but that relationship failed after she stabbed Armstrong in the shoulder and his mother nearly choked her to death. Armstrong played in brass bands and on riverboats in New Orleans, first on an excursion boat in September 1918. He traveled with the band of Fate Marable, which toured on the steamboat "Sidney" with the Streckfus Steamers line up and down the Mississippi River. Marable was proud of his musical knowledge, and he insisted that Armstrong and the other musicians in his band learn sight reading. Armstrong described his time with Marable as "going to the University", since it gave him a wider experience working with written arrangements. He did return to New Orleans periodically. In 1919, Oliver decided to go north and resigned his position in Kid Ory's band; Armstrong replaced him. He also became second trumpet for the Tuxedo Brass Band. Throughout his riverboat experience, Armstrong's musicianship began to mature and expand. At twenty, he could read music. He became one of the first jazz musicians to be featured on extended trumpet solos, injecting his own personality and style. He also started singing in his performances. In 1922, he moved to Chicago at the invitation of King Oliver. With Oliver's Creole Jazz Band he could make enough money to quit his day jobs. Although race relations were poor, Chicago was booming, and the city had jobs for blacks making good wages at factories, with some left over for entertainment.
Oliver's band was among the most influential jazz bands in Chicago in the early 1920s. Armstrong lived luxuriously in his own apartment with his first private bath. Excited as he was to be in Chicago, he began his career-long pastime of writing letters to friends in New Orleans. Armstrong could blow two hundred high Cs in a row. As his reputation grew, he was challenged to cutting contests by other musicians. His first studio recordings were with Oliver for Gennett Records on April 5–6, 1923. They endured several hours on the train to remote Richmond, Indiana, and the band was paid little. The quality of the performances was affected by lack of rehearsal, crude recording equipment, bad acoustics, and a cramped studio. In addition, Richmond was associated with the Ku Klux Klan. Lil Hardin Armstrong urged him to seek more prominent billing and to develop his style apart from the influence of Oliver. She encouraged him to play classical music in church concerts to broaden his skills, and she prodded him into wearing more stylish attire to offset his girth. Her influence eventually undermined Armstrong's relationship with his mentor, especially concerning his salary and additional money that Oliver held back from Armstrong and other band members. Armstrong's mother, May Ann Albert, came to visit him in Chicago during the summer of 1923 after being told that Armstrong was "out of work, out of money, hungry, and sick"; Hardin located and decorated an apartment for her to live in while she stayed. Armstrong and Oliver parted amicably in 1924. Shortly afterward, Armstrong received an invitation to go to New York City to play with the Fletcher Henderson Orchestra, the top African-American band of the time. He switched to the trumpet to blend in better with the other musicians in his section. His influence on Henderson's tenor sax soloist, Coleman Hawkins, can be judged by listening to the records made by the band during this period.
Armstrong adapted to the tightly controlled style of Henderson, playing trumpet and experimenting with the trombone. The other members were affected by Armstrong's emotional style. His act included singing and telling tales of New Orleans characters, especially preachers. The Henderson Orchestra played in prominent venues for white patrons only, including the Roseland Ballroom, with arrangements by Don Redman. Duke Ellington's orchestra went to Roseland to catch Armstrong's performances. During this time, Armstrong recorded with Clarence Williams (a friend from New Orleans), the Williams Blue Five, Sidney Bechet, and blues singers Alberta Hunter, Ma Rainey, and Bessie Smith. In 1925, Armstrong returned to Chicago largely at the insistence of Lil, who wanted to expand his career and his income. In publicity, much to his chagrin, she billed him as "the World's Greatest Trumpet Player". For a time he was a member of the Lil Hardin Armstrong Band, working for his wife. He formed Louis Armstrong and his Hot Five and recorded the hits "Potato Head Blues" and "Muggles". The word "muggles" was a slang term for marijuana, something he used often during his life. The Hot Five included Kid Ory (trombone), Johnny Dodds (clarinet), Johnny St. Cyr (banjo), Lil Armstrong on piano, and usually no drummer. Over a twelve-month period starting in November 1925, this quintet produced twenty-four records. Armstrong's band-leading style was easygoing, as St. Cyr noted: "One felt so relaxed working with him, and he was very broad-minded ... always did his best to feature each individual." Among the most notable of the Hot Five and Hot Seven records were "Cornet Chop Suey", "Struttin' With Some Barbecue", "Hotter Than That" and "Potato Head Blues", all featuring highly creative solos by Armstrong.
His recordings soon after with pianist Earl "Fatha" Hines (most famously their 1928 "Weather Bird" duet) and Armstrong's trumpet introduction to and solo in "West End Blues" remain some of the most famous and influential improvisations in jazz history. Armstrong was now free to develop his personal style as he wished, which included a heavy dose of effervescent jive, such as "Whip That Thing, Miss Lil" and "Mr. Johnny Dodds, Aw, Do That Clarinet, Boy!" Armstrong also played with Erskine Tate's Little Symphony, which played mostly at the Vendome Theatre. They furnished music for silent movies and live shows, including jazz versions of classical music, such as "Madame Butterfly", which gave Armstrong experience with longer forms of music and with hosting before a large audience. He began to scat sing (improvised vocal jazz using nonsensical words) and was among the first to record it, on the Hot Five recording "Heebie Jeebies" in 1926. The recording was so popular that the group became the most famous jazz band in the United States, even though they had not performed live to any great extent. Young musicians across the country, black or white, were turned on by Armstrong's new type of jazz. After separating from Lil, Armstrong started to play at the Sunset Café for Al Capone's associate Joe Glaser in the Carroll Dickerson Orchestra, with Earl Hines on piano; the band was renamed Louis Armstrong and his Stompers, though Hines was the music director and Glaser managed the orchestra. Hines and Armstrong became fast friends and successful collaborators. It was at the Sunset Café that Armstrong accompanied singer Adelaide Hall. It was during Hall's tenure at the venue that she experimented with, developed, and expanded her use and art of scat singing with Armstrong's guidance and encouragement. In the first half of 1927, Armstrong assembled his Hot Seven group, which added drummer Warren "Baby" Dodds and tuba player Pete Briggs while preserving most of his original Hot Five lineup.
John Thomas replaced Kid Ory on trombone. Later that year he organized a series of new Hot Five sessions which resulted in nine more records. In the last half of 1928, he started recording with a new group: Zutty Singleton (drums), Earl Hines (piano), Jimmy Strong (clarinet), Fred Robinson (trombone), and Mancy Carr (banjo). Armstrong returned to New York in 1929, where he played in the pit orchestra for the musical "Hot Chocolates", an all-black revue written by Andy Razaf and pianist Fats Waller. He also made a cameo appearance as a vocalist, regularly stealing the show with his rendition of "Ain't Misbehavin'". His version of the song became his biggest selling record to date. Armstrong started to work at Connie's Inn in Harlem, chief rival to the Cotton Club, a venue for elaborately staged floor shows, and a front for gangster Dutch Schultz. Armstrong also had considerable success with vocal recordings, including versions of famous songs composed by his old friend Hoagy Carmichael. His 1930s recordings took full advantage of the new RCA ribbon microphone, introduced in 1931, which imparted a characteristic warmth to vocals and immediately became an intrinsic part of the 'crooning' sound of artists like Bing Crosby. Armstrong's famous interpretation of Carmichael's "Stardust" became one of the most successful versions of this song ever recorded, showcasing Armstrong's unique vocal sound and style and his innovative approach to singing songs that had already become standards. Armstrong's radical re-working of Sidney Arodin and Carmichael's "Lazy River" (recorded in 1931) encapsulated many features of his groundbreaking approach to melody and phrasing. The song begins with a brief trumpet solo, then the main melody is introduced by sobbing horns, memorably punctuated by Armstrong's growling interjections at the end of each bar: "Yeah! ..."Uh-huh"..."Sure"..."Way down, way down." 
In the first verse, he ignores the notated melody entirely and sings as if playing a trumpet solo, pitching most of the first line on a single note and using strongly syncopated phrasing. In the second stanza he breaks into an almost fully improvised melody, which then evolves into a classic passage of Armstrong "scat singing". As with his trumpet playing, Armstrong's vocal innovations served as a foundation stone for the art of jazz vocal interpretation. The uniquely gravelly coloration of his voice became a musical archetype that was much imitated and endlessly impersonated. His scat singing style was enriched by his matchless experience as a trumpet soloist. His resonant, velvety lower-register tone and bubbling cadences on sides such as "Lazy River" exerted a huge influence on younger white singers such as Bing Crosby. The Great Depression of the early 1930s was especially hard on the jazz scene. The Cotton Club closed in 1936 after a long downward spiral, and many musicians stopped playing altogether as club dates evaporated. Bix Beiderbecke died and Fletcher Henderson's band broke up. King Oliver made a few records but otherwise struggled. Sidney Bechet became a tailor and later moved to Paris, while Kid Ory returned to New Orleans and raised chickens. Armstrong moved to Los Angeles in 1930 to seek new opportunities. He played at the New Cotton Club in Los Angeles with Lionel Hampton on drums. The band drew the Hollywood crowd, which could still afford a lavish night life, while radio broadcasts from the club connected with younger audiences at home. Bing Crosby and many other celebrities were regulars at the club. In 1931, Armstrong appeared in his first movie, "Ex-Flame", and was also convicted of marijuana possession but received a suspended sentence. He returned to Chicago in late 1931 and played in bands more in the Guy Lombardo vein, and he recorded more standards. 
When the mob insisted that he get out of town, Armstrong visited New Orleans, had a hero's welcome, and saw old friends. He sponsored a local baseball team known as Armstrong's Secret Nine and had a cigar named after him. But soon he was on the road again. After a tour across the country shadowed by the mob, he fled to Europe. After returning to the United States, he undertook several exhausting tours. His agent Johnny Collins's erratic behavior and his own spending ways left Armstrong short of cash. Breach-of-contract claims plagued him. He hired Joe Glaser, a tough, mob-connected wheeler-dealer, as his new manager; Glaser began to straighten out his legal mess, his mob troubles, and his debts. Armstrong also began to experience problems with his fingers and lips, which were aggravated by his unorthodox playing style. As a result, he branched out, developing his vocal style and making his first theatrical appearances. He appeared in movies again, including Crosby's 1936 hit "Pennies from Heaven". In 1937, Armstrong substituted for Rudy Vallee on the CBS radio network and became the first African American to host a sponsored, national broadcast. During the 1920s, Louis Armstrong had a major impact on the jazz world of the Harlem Renaissance. His influence reached many figures of the era, including the writer Langston Hughes, who admired Armstrong and regarded him as one of the most recognized musicians of the time. In his writings, Hughes returned repeatedly to jazz, acknowledging Armstrong as one of the most important figures in this newfound celebration of Black culture. The sound of jazz, through Armstrong and many other musicians, helped shape Hughes as a writer: like the musicians, Hughes wrote his words with jazz. Armstrong transformed jazz during the Harlem Renaissance. 
Known as "the world's greatest trumpet player" during this time, he continued his legacy while increasingly focusing on his own vocal career. His popularity brought together black and white audiences to watch him perform. After spending many years on the road, Armstrong settled permanently in Queens, New York, in 1943 in contentment with his fourth wife, Lucille. Although subject to the vicissitudes of Tin Pan Alley and the gangster-ridden music business, as well as anti-black prejudice, he continued to develop his playing. He recorded Hoagy Carmichael's "Rockin' Chair" for Okeh Records. During the next 30 years, Armstrong played more than 300 performances a year. Bookings for big bands tapered off during the 1940s due to changes in public tastes: ballrooms closed, and there was competition from television and from other types of music becoming more popular than big band music. It became impossible under such circumstances to finance a 16-piece touring band. During the 1940s, a widespread revival of interest in the traditional jazz of the 1920s made it possible for Armstrong to consider a return to the small-group musical style of his youth. Armstrong was featured as a guest artist with Lionel Hampton's band at the famed second Cavalcade of Jazz concert, held at Wrigley Field in Los Angeles and produced by Leon Hefflin Sr., on October 12, 1946. Following a highly successful small-group jazz concert at New York Town Hall on May 17, 1947, featuring Armstrong with trombonist/singer Jack Teagarden, Armstrong's manager, Joe Glaser, dissolved the Armstrong big band on August 13, 1947, and established a six-piece traditional jazz group featuring Armstrong with (initially) Teagarden, Earl Hines and other top swing and Dixieland musicians, most of whom were previously leaders of big bands. The new group was announced at the opening of Billy Berg's Supper Club. 
This group was called Louis Armstrong and His All Stars and included at various times Earl "Fatha" Hines, Barney Bigard, Edmond Hall, Jack Teagarden, Trummy Young, Arvell Shaw, Billy Kyle, Marty Napoleon, Big Sid Catlett, Cozy Cole, Tyree Glenn, Barrett Deems, Mort Herbert, Joe Darensbourg, Eddie Shu, Joe Muranyi and percussionist Danny Barcelona. During this period, Armstrong made many recordings and appeared in over thirty films. He was the first jazz musician to appear on the cover of "Time" magazine, on February 21, 1949. Louis Armstrong and His All Stars were featured at the ninth Cavalcade of Jazz concert, also at Wrigley Field in Los Angeles and produced by Leon Hefflin Sr., on June 7, 1953, along with Shorty Rogers, Roy Brown, Don Tosti and His Mexican Jazzmen, Earl Bostic, and Nat "King" Cole. By the 1950s, Armstrong was a widely beloved American icon and cultural ambassador who commanded an international fanbase. However, a growing generation gap became apparent between him and the young jazz musicians who emerged in the postwar era, such as Charlie Parker, Miles Davis, and Sonny Rollins. The postwar generation regarded their music as abstract art and considered Armstrong's vaudevillian style, half-musician and half-stage entertainer, outmoded and a form of Uncle Tomism: "... he seemed a link to minstrelsy that we were ashamed of." He called bebop "Chinese music". While touring Australia in 1954, he was asked if he could play bebop. "Bebop?" he husked. "I just play music. Guys who invent terms like that are walking the streets with their instruments under their arms." On February 28, 1948, Suzy Delair sang the French song "C'est si bon" at the Hotel Negresco during the first Nice Jazz Festival. Louis Armstrong was present and loved the song. On June 26, 1950, he recorded the American version of the song (English lyrics by Jerry Seelen) in New York City with Sy Oliver and his Orchestra. 
When it was released, the disc was a worldwide success and the song was then performed by the greatest international singers. In the 1960s, he toured Ghana and Nigeria. After finishing his contract with Decca Records, he became a freelance artist and recorded for other labels. He continued an intense international touring schedule, but in 1959 he suffered a heart attack in Italy and had to rest. In 1964, after over two years without setting foot in a studio, he recorded his biggest-selling record, "Hello, Dolly!", a song by Jerry Herman, originally sung by Carol Channing. Armstrong's version remained on the Hot 100 for 22 weeks, longer than any other record produced that year, and went to No. 1 making him, at 62 years, 9 months and 5 days, the oldest person ever to accomplish that feat. In the process, he dislodged the Beatles from the No. 1 position they had occupied for 14 consecutive weeks with three different songs. Armstrong kept touring well into his 60s, even visiting part of the communist bloc in 1965. He also toured Africa, Europe, and Asia under the sponsorship of the US State Department with great success, earning the nickname "Ambassador Satch" and inspiring Dave Brubeck to compose his jazz musical "The Real Ambassadors". By 1968, he was approaching 70 and his health began to give out. He suffered heart and kidney ailments that forced him to stop touring. He did not perform publicly at all in 1969 and spent most of the year recuperating at home. Meanwhile, his longtime manager Joe Glaser died. By the summer of 1970, his doctors pronounced him fit enough to resume live performances. He embarked on another world tour, but a heart attack forced him to take a break for two months. Armstrong made his last recorded trumpet performances on his 1968 album "Disney Songs the Satchmo Way". 
The Louis Armstrong House Museum website states: In a memoir written for Robert Goffin between 1943 and 1944, Armstrong states, "All white folks call me Louie," perhaps suggesting that he himself did not or, on the other hand, that no whites addressed him by one of his nicknames such as Pops. That said, Armstrong was registered as "Lewie" for the 1920 U.S. Census. On various live records he is called "Louie" on stage, such as on the 1952 "Can Anyone Explain?" from the live album "In Scandinavia vol.1". The same applies to his 1952 studio recording of the song "Chloe", where the choir in the background sings "Louie ... Louie", with Armstrong responding "What was that? Somebody called my name?" "Lewie" is the French pronunciation of "Louis" and is commonly used in Louisiana. Armstrong was performing at the Brick House in Gretna, Louisiana, when he met Daisy Parker, a local prostitute. He started the affair as a client and returned to Gretna on several occasions to visit her. He eventually worked up the courage to seek her out at her home, to see her away from work. It was on this occasion that he found out that she had a common-law husband. Not long after this fiasco, Parker traveled to Armstrong's home on Perdido Street. They checked into Kid Green's hotel that evening. On the next day, March 19, 1919, Armstrong and Parker married at City Hall. They adopted a three-year-old boy, Clarence, whose mother, Armstrong's cousin Flora, had died soon after giving birth. Clarence Armstrong was mentally disabled as the result of a head injury at an early age, and Armstrong spent the rest of his life taking care of him. His marriage to Parker ended when they separated in 1923. On February 4, 1924, he married Lil Hardin Armstrong, King Oliver's pianist. She had divorced her first husband a few years earlier. His second wife helped him develop his career, but they separated in 1931 and divorced in 1938. Armstrong then married Alpha Smith. 
His relationship with Alpha, however, began while he was playing at the Vendome during the 1920s and continued long after. His marriage to his third wife lasted four years, and they divorced in 1942. Louis then married Lucille Wilson, a singer at the Cotton Club, in October 1942; they remained married until his death in 1971. Armstrong's marriages never produced any offspring. However, in December 2012, 57-year-old Sharon Preston-Folta claimed to be his daughter from a 1950s affair between Armstrong and Lucille "Sweets" Preston, a dancer at the Cotton Club. In a 1955 letter to his manager, Joe Glaser, Armstrong affirmed his belief that Preston's newborn baby was his daughter, and ordered Glaser to pay a monthly allowance of $400 to mother and child. Armstrong was noted for his colorful and charismatic personality. His autobiography vexed some biographers and historians, as he had a habit of telling tales, particularly of his early childhood when he was less scrutinized, and his embellishments of his history often lack consistency. In addition to being an entertainer, Armstrong was a leading personality of the day. He was beloved by an American public that gave even the greatest African American performers little access beyond their public celebrity, and he was able to live a private life of access and privilege afforded to few other African Americans during that era. He generally remained politically neutral, which at times alienated him from members of the black community who looked to him to use his prominence with white America to become more of an outspoken figure during the civil rights movement. However, he did criticize President Eisenhower for not acting forcefully enough on civil rights. 
The trumpet is a notoriously hard instrument on the lips, and Armstrong suffered from lip damage over much of his life due to his aggressive style of playing and preference for narrow mouthpieces that would stay in place more easily, but which tended to dig into the soft flesh of his inner lip. During his 1930s European tour, he suffered an ulceration so severe that he had to stop playing entirely for a year. Eventually he took to using salves and creams on his lips and also cutting off scar tissue with a razor blade. By the 1950s, he was an official spokesman for Ansatz-Creme Lip Salve. During a backstage meeting with trombonist Marshall Brown in 1959, Armstrong received the suggestion that he should go to a doctor and receive proper treatment for his lips instead of relying on home remedies, but he did not get around to doing it until the final years of his life, by which point his health was failing and doctors considered surgery too risky. The nicknames "Satchmo" and "Satch" are short for "Satchelmouth". The nickname has many possible origins. The most common tale that biographers tell is the story of Armstrong as a young boy in New Orleans dancing for pennies. He scooped the coins off the street and stuck them into his mouth to prevent bigger children from stealing them. Someone dubbed him "satchel mouth" for his mouth acting as a satchel. Another tale is that because of his large mouth, he was nicknamed "satchel mouth", which was shortened to "Satchmo". Early on he was also known as "Dipper", short for "Dippermouth", a reference to the piece "Dippermouth Blues" and something of a riff on his unusual embouchure. The nickname "Pops" came from Armstrong's own tendency to forget people's names and simply call them "Pops" instead. The nickname was eventually turned on Armstrong himself, and it was used as the title of a 2010 biography of Armstrong by Terry Teachout. Armstrong was largely accepted into white society, both on stage and off, a rarity for a black person at the time. 
Some musicians criticized Armstrong for playing in front of segregated audiences, and for not taking a strong enough stand in the American civil rights movement. When he did speak out, it made national news, including his criticism of President Eisenhower, calling him "two-faced" and "gutless" because of his inaction during the conflict over school desegregation in Little Rock, Arkansas, in 1957. As a protest, Armstrong canceled a planned tour of the Soviet Union on behalf of the State Department saying: "The way they're treating my people in the South, the government can go to hell" and that he could not represent his government abroad when it was in conflict with its own people.
https://en.wikipedia.org/wiki?curid=18313
Long Island Long Island is a densely populated island in the southeast part of the U.S. state of New York, in the northeastern United States, beginning at New York Harbor approximately 0.35 miles (0.56 km) from Manhattan Island and extending eastward into the Atlantic Ocean. The island comprises four counties: Kings and Queens counties (the New York City boroughs of Brooklyn and Queens, respectively) and Nassau County share the western third of the island, while Suffolk County occupies the eastern two-thirds. More than half of New York City's residents now live on Long Island, in Brooklyn and in Queens. However, many people in the New York metropolitan area (even in Brooklyn and Queens) colloquially use the term "Long Island" (or the Island) to refer exclusively to Nassau and Suffolk Counties, and conversely, employ the term "the City" to mean Manhattan alone. Broadly speaking, "Long Island" may refer both to the main island and the surrounding outer barrier islands. North of the island is Long Island Sound, across which lie Westchester County, New York, and the state of Connecticut. Across the Block Island Sound to the northeast is the state of Rhode Island. To the west, Long Island is separated from the Bronx and the island of Manhattan by the East River. To the extreme southwest, it is separated from Staten Island and the state of New Jersey by Upper New York Bay, the Narrows, and Lower New York Bay. To the east lie Block Island—which is part of the State of Rhode Island—and numerous smaller islands. Both the longest and the largest island in the contiguous United States, Long Island extends eastward from New York Harbor to Montauk Point, between Long Island Sound and the Atlantic coast. With a land area of 1,401 square miles (3,630 km2), Long Island is the 11th-largest island in the United States and the 149th-largest island in the world—larger than Rhode Island, the smallest U.S. state. 
With a Census-estimated population of 7,869,820 in 2017, constituting nearly 40% of New York State's population, Long Island is the most populous island in any U.S. state or territory, and the 18th-most populous island in the world (ahead of Ireland, Jamaica, and Hokkaidō). If Long Island geographically constituted an independent metropolitan statistical area, it would rank as the fourth most populous in the United States; while if it were a U.S. state, Long Island would rank thirteenth in population and first in population density. Long Island is culturally and ethnically diverse, featuring some of the wealthiest and most expensive neighborhoods in the Western Hemisphere near the shorelines as well as working-class areas in all four counties. As a hub of commercial aviation, Long Island is home to two of the New York City metropolitan area's three busiest airports, JFK International Airport and LaGuardia Airport, in addition to Islip MacArthur Airport, as well as two major air traffic control radar facilities, the New York TRACON and the New York ARTCC. Nine bridges and thirteen tunnels (including railroad tunnels) connect Brooklyn and Queens to the three other boroughs of New York City. Ferries connect Suffolk County northward across Long Island Sound to the state of Connecticut. The Long Island Rail Road is the busiest commuter railroad in North America and operates 24/7. Nassau County high school students often feature prominently as winners of the Intel International Science and Engineering Fair and similar STEM-based academic awards. 
Biotechnology companies and scientific research play a significant role in Long Island's economy, including research facilities at Brookhaven National Laboratory, Cold Spring Harbor Laboratory, New York Institute of Technology, Plum Island Animal Disease Center, State University of New York at Stony Brook, the New York University Tandon School of Engineering, the City University of New York, and Hofstra Northwell School of Medicine. By the year 1643, there were 13 Indigenous tribes living on Long Island: the Canarsie, Rockaway, Matinecock, Merrick, Massapequa, Nissequoge, Secatoag, Seatauket, Patchoag, Corchaug, Shinnecock, Manhasset and Montauk. They used canoes for transportation, and since they lived by the shore, they fished. The fishermen used bows, arrows, and hooks to catch seafood such as crabs, scallops, and lobster. The farmers used fish for fertilizer and planted vegetables such as corn, beans, and squash, which were staples among the indigenous people. They were exceptional farmers, with a deep understanding of how the weather and soil affected their crops. Many of them hunted animals such as deer, raccoon, and turkey in the forest. When they killed an animal, they used all of its parts, so nothing went to waste. The government they set up was a participatory democracy, and there was an alliance between the tribes. Each tribe had its own territory and a chief who was respected by the other tribes. Prior to European contact, the Lenape people (named the "Delaware" by Europeans) inhabited the western end of Long Island and spoke the Munsee dialect of Lenape, a language of the Algonquian family. The Lenape (who were part of the Shinnecock Tribe) practiced record keeping, using wooden tablets, trees, and stones. They also used wampum belts to record important messages, and traded wampum with the Europeans. 
The Lenape people, in particular, were seen as peacemakers by other indigenous tribes, although they would defend themselves if necessary. The Europeans admired their friendliness and their skills in mediation. Giovanni da Verrazzano was the first European to record an encounter with the Lenape, after entering what is now New York Bay in 1524. The island's eastern portion was inhabited by speakers of the Mohegan-Montauk-Narragansett language group of Algonquian languages; they were part of the Pequot and Narragansett peoples inhabiting the area that now includes Connecticut and Rhode Island. In 1609, the English navigator Henry Hudson explored the harbor and purportedly landed at Coney Island. Adriaen Block followed in 1615, and is credited as the first European to determine that both Manhattan and Long Island are islands. In 1655, the European settlers purchased land from the Indigenous people. They split the land amongst themselves and continued to search the island for more land for settlement. Other parts of indigenous land were bought in areas now known as Brookhaven, Bellport, and South Haven. These purchases occurred on June 10, 1664; the exchange was four coats and what is now $16.25. The indigenous people did not fully understand that the Europeans claimed ownership of the land; they thought that the exchange was made for them to share it. The white settlers and the indigenous people lived amicably together for a while, although some of the white settlers did not trust some of the indigenous people. During King Philip's War in 1675, the English governor of New York ordered all canoes east of Hell Gate to be confiscated. This was done to prevent the Indigenous people from helping their allies on the mainland. After the Dutch began to move into Manhattan, many of the indigenous people moved to Pennsylvania and Delaware. They felt pressure to move, as they did not fit in with the European culture or religion of the time. 
Many of those who stayed behind died from smallpox. Native American land deeds recorded by the Dutch from 1636 state that the Indians referred to Long Island by a Lenape name with several variant spellings in transliteration. The name was one of the terms for wampum (commemorative stringed shell beads, for a while also used as currency by colonists in trades with the Lenape), and is also translated as "loose" or "scattered", which may refer either to the wampum or to Long Island. The name "'t Lange Eylandt alias Matouwacs" appears in Dutch maps from the 1650s. Later, the English referred to the land as "Nassau Island", after the Dutch Prince William of Nassau, Prince of Orange (who later also ruled as King William III of England). It is unclear when the name "Nassau Island" was discontinued. Another indigenous name from colonial times, Paumanok, comes from the Native American name for Long Island and means "the island that pays tribute." The first European settlements on Long Island were by settlers from England and its colonies in present-day New England. Lion Gardiner settled nearby Gardiners Island. The first settlement on the geographic Long Island itself came on October 21, 1640, when Southold was established by the Rev. John Youngs and settlers from New Haven, Connecticut. Peter Hallock, one of the settlers, drew the long straw and was granted the honor of stepping ashore first. He is considered the first New World settler on Long Island. Southampton was settled in the same year. Hempstead followed in 1644, East Hampton in 1648, Huntington in 1653, Brookhaven in 1655, and Smithtown in 1665. While the eastern region of Long Island was first settled by the English, the western portion was settled by the Dutch; until 1664, jurisdiction over Long Island was split between the Dutch and English, roughly at the present border between Nassau County and Suffolk County. The Dutch founded six towns in present-day Brooklyn beginning in 1645. 
These included: Brooklyn, Gravesend, Flatlands, Flatbush, New Utrecht, and Bushwick. The Dutch had granted an English settlement in Hempstead, New York (now in Nassau County) in 1644, but after a boundary dispute they drove out English settlers from the Oyster Bay area. However, in 1664, the English returned to take over the Dutch colony of New Netherland, including Long Island. The 1664 land patent granted to the Duke of York included all islands in Long Island Sound. The Duke of York held a grudge against Connecticut, as New Haven had hidden three of the judges (John Dixwell, Edward Whalley and William Goffe) who sentenced the Duke's father, King Charles I, to death in 1649. Settlers throughout Suffolk County pressed to stay part of Connecticut, but Governor Sir Edmund Andros threatened to eliminate the settlers' rights to land if they did not yield, which they did by 1676. All of Long Island (as well as the islands between it and Connecticut) became part of the Province of New York within the Shire of York. Present-day Suffolk County was designated as the "East Riding" (of Yorkshire), present-day Brooklyn was part of the "West Riding", and present-day Queens and Nassau were part of the larger "North Riding". In 1683, Yorkshire was dissolved and the three original counties on Long Island were established: Kings, Queens, and Suffolk. Early in the American Revolutionary War, the island was captured by the British from General George Washington in the Battle of Long Island, a decisive battle after which Washington narrowly evacuated his troops from Brooklyn Heights under a dense fog. After the British victory on Long Island, many Patriots fled, leaving mostly Loyalists behind. The island was a British stronghold until the end of the war in 1783. General Washington based his espionage activities on Long Island, due to the western part of the island's proximity to the British military headquarters in New York City. 
The Culper Spy Ring included agents operating between Setauket and Manhattan. This ring alerted Washington to valuable British secrets, including the treason of Benedict Arnold and a plan to use counterfeiting to induce economic sabotage. Long Island's colonists served both Loyalist and Patriot causes, with many prominent families divided among both sides. During the occupation British troops used a number of civilian structures for defense and demanded to be quartered in the homes of civilians. A number of structures from this era remain. Among these are Raynham Hall, the Oyster Bay home of patriot spy Robert Townsend, and the Caroline Church in Setauket, which contains bullet holes from a skirmish known as the Battle of Setauket. Also in existence is a reconstruction of Brooklyn's Old Stone House, on the site of the Maryland 400's celebrated last stand during the Battle of Long Island. In the 19th century, Long Island was still mainly rural and devoted to agriculture. The predecessor to the Long Island Rail Road (LIRR) began service in 1836 from the South Ferry in Brooklyn, through the remainder of Brooklyn, to Jamaica in Queens. The line was completed to the east end of Long Island in 1844 (as part of a plan for transportation to Boston). Competing railroads (soon absorbed by the LIRR) were built along the south shore to accommodate travellers from those more populated areas. For the century from 1830 until 1930, total population roughly doubled every twenty years, with more dense development in areas near Manhattan. Several cities were incorporated, such as the 'City of Brooklyn' in Kings County, and Long Island City in Queens. Until the 1883 completion of the Brooklyn Bridge, the only means of travel between Long Island and the rest of the United States was by boat or ship. As other bridges and tunnels were constructed, areas of the island began to be developed as residential suburbs, first around the railroads that offered commuting into the city. 
On January 1, 1898, Kings County and portions of Queens were consolidated into the 'City of Greater New York', abolishing all cities and towns within them. The easternmost towns of Queens County, which were not part of the consolidation plan, separated from Queens in 1899 to form Nassau County. At the close of the 19th century, wealthy industrialists who made vast fortunes during the Gilded Age began to construct large "baronial" country estates in Nassau County communities along the North Shore of Long Island, favoring the many properties with water views. Proximity to Manhattan attracted such men as J. P. Morgan, William K. Vanderbilt, and Charles Pratt, whose estates led to this area being nicknamed the Gold Coast. This period and the area were immortalized in fiction, such as F. Scott Fitzgerald's "The Great Gatsby", which has also been adapted in films. Charles Lindbergh lifted off from Roosevelt Field with his "Spirit of St. Louis" for his historic 1927 solo flight to Europe, one of the events that helped to establish Long Island as an early center of aviation during the 20th century. Other famous aviators such as Wiley Post originated notable flights from Floyd Bennett Field in Brooklyn, which became the first major airport serving New York City before it was superseded by the opening of La Guardia Airport in 1939. Long Island was also the site of Mitchel Air Force Base and was a major center of military aircraft production by companies such as Grumman and Fairchild Aircraft during World War II and for some decades afterward. Aircraft production on Long Island extended all the way into the Space Age – Grumman was one of the major contractors that helped to build the early lunar flight and space shuttle vehicles. 
Although the aircraft companies eventually ended their Long Island operations and the early airports were all later closed – Roosevelt Field, for instance, became the site of a major shopping mall – the Cradle of Aviation Museum on the site of the former Mitchel Field documents the Island's key role in the history of aviation. From the 1920s to the 1940s, Long Island began the transformation from backwoods and farms to suburbs as developers created numerous new communities. Numerous branches of the LIRR already enabled commuting from the suburbs to Manhattan. Robert Moses engineered various automobile parkway projects to span the island, and developed beaches and state parks for the enjoyment of residents and visitors from the city. Gradually, development also followed these parkways, with various communities springing up along the more traveled routes. After World War II, suburban development increased with incentives under the G.I. Bill, and Long Island's population skyrocketed, mostly in Nassau County and western Suffolk County. Second- and third-generation children of immigrants moved out to eastern Long Island to settle in new housing developments built during the post-war boom. Levittown became noted as a suburb where housing construction was simplified to be produced on a large scale, providing opportunities for World War II military veterans returning home to buy houses and start families. In his 1966 book "My Private America" ("Moja prywatna Ameryka"), Kazimierz Wierzyński, a Polish poet who could not return to Poland after World War II, describes Polish farmers living there as "walking novels". By the start of the 21st century, a number of Long Island communities had converted their assets from industrial uses to post-industrial roles. Brooklyn reversed decades of population decline and factory closings to resurface as a globally renowned cultural and intellectual hotbed. 
Gentrification has affected much of Brooklyn and a portion of Queens, relocating a sizeable swath of New York City's population. On eastern Long Island, villages such as Port Jefferson, Patchogue, and Riverhead have been transformed from inactive shipbuilding and mill towns into tourist-centric commercial centers with cultural attractions. The descendants of late 19th and early 20th-century immigrants from southern and eastern Europe, and black migrants from the South, have been followed by more recent immigrants from Asia and Latin America. Long Island has many ethnic Irish, Jews, and Italians, as well as increasing numbers of Asians and Hispanics, reflecting later migrations. The westernmost end of Long Island contains the New York City boroughs of Brooklyn (Kings County) and Queens (Queens County). The central and eastern portions contain the suburban Nassau and Suffolk Counties. However, colloquial usage of the term "Long Island" usually refers only to Nassau and Suffolk Counties. For example, the Federal Reserve Bank of New York has a district named "Long Island (Nassau-Suffolk Metro Division)." At least as late as 1911, locations in Queens were still commonly referred to as being on Long Island. Some institutions in the New York City section of the island use the island's name, such as Long Island University and Long Island Jewish Medical Center. In 1985, the United States Supreme Court ruled in "United States v. Maine" that Long Island is legally not an island, because New York State's boundaries contained its offshore soil and seabeds. Despite the legal decision, the United States Board on Geographic Names still considers Long Island an island, because it is surrounded by water. Nassau County is more densely developed than Suffolk County. While affluent overall, Nassau County has pockets of more pronounced wealth with estates covering greater acreage within the Gold Coast of the North Shore and the Five Towns area on the South Shore. 
South Shore communities are built along protected wetlands of the island and contain white sandy beaches of the outer barrier islands fronting on the Atlantic Ocean. Dutch and English settlers from the time before the American Revolutionary War, as well as communities of Native Americans, populated the island. The 19th century saw the infusion of the wealthiest Americans in the so-called Gold Coast of the North Shore, where wealthy Americans and Europeans in the Gilded Age built lavish country homes. In its easternmost sections, Suffolk County remains semi-rural, as in Greenport on the North Fork and some of the periphery of the area prominently known as The Hamptons, although summer tourism swells the population in those areas. The North Fork peninsula of Suffolk County's East End has developed a burgeoning wine region. In addition, the South Fork peninsula is known for beach communities, including the Hamptons, and for the Montauk Point Lighthouse at the eastern tip of the island. The Pine Barrens is a preserved pine forest encompassing much of eastern Suffolk County. A detailed geomorphological study of Long Island provides evidence of the glacial history of the island's kame and terminal moraines, which were formed by the advance and retreat of two ice sheets. Long Island, as part of the Outer Lands region, is formed largely of two spines of glacial moraine, with a large, sandy outwash plain beyond. These moraines consist of gravel and loose rock left behind during the two most recent pulses of Wisconsin glaciation during the ice ages some 21,000 years ago (19,000 BC). The northern moraine, which directly abuts the North Shore of Long Island at points, is known as the Harbor Hill moraine. The more southerly moraine, known as the Ronkonkoma moraine, forms the "backbone" of Long Island; it runs primarily through the very center of Long Island, roughly coinciding with the length of the Long Island Expressway. 
The land to the south of this moraine to the South Shore is the outwash plain of the last glacier. One part of the outwash plain was known as the Hempstead Plains, and this land contained one of the few natural prairies to exist east of the Appalachian Mountains. The glaciers melted and receded to the north, resulting in the difference between the topography of the North Shore beaches and the South Shore beaches. The North Shore beaches are rocky from the remaining glacial debris, while the South Shore's consist of crisp, clear outwash sand. Jayne's Hill, at , within Suffolk County near its border with Nassau County, is the highest hill along either moraine; another well-known summit is Bald Hill in Brookhaven Town, not far from the island's geographical center at Middle Island. The glaciers also formed Lake Ronkonkoma in Suffolk County and Lake Success in Nassau County, each a deep kettle lake. Under the Köppen climate classification, Long Island lies in a transition zone between a humid subtropical climate (Cfa) and a humid continental climate (Dfa). The climate features hot, usually humid summers with occasional thunderstorms, mild spring and fall weather, and cold winters with a mix of snow and rain and stormier conditions. Spring can be cool due to the relatively cooler temperatures of the Atlantic Ocean. Thunderstorms rarely form directly over Long Island, but can form over inland areas and then move eastward. Some storms may weaken as they approach Long Island due to the moderating effects of the Atlantic Ocean. The ocean also brings afternoon sea breezes to the immediate South Shore areas (within ) that temper the heat in the warmer months. The temperatures south of Sunrise Highway (Route 27) tend to be significantly cooler than the rest of Long Island in the spring and summer months because of the relatively cooler temperatures of the Atlantic Ocean. Long Island has a moderately sunny climate, averaging 2,400 to 2,800 hours of sunshine annually. 
Due to its coastal location, Long Island winter temperatures are milder than most of the state. The coldest month is January, when average temperatures range from , and the warmest month is July, when average temperatures range from . Temperatures seldom fall below or rise above . Long Island temperatures vary from west to east, with the western part (Nassau County, Queens, and Brooklyn) generally warmer than the east (Suffolk County). This is due to several factors: the western part is closer to the mainland and more densely developed, causing the "urban heat island" effect, and Long Island's land mass veers northward as one travels east. Also, daytime high temperatures on the eastern part of Long Island are cooler on most occasions, due to the moderating effect of the Atlantic Ocean and Long Island Sound. On dry nights with no clouds or wind, the central part of Suffolk County and the Pine Barrens forest of eastern Suffolk County can be almost cooler than the rest of the island, due to radiational cooling. Average dew points, a measure of atmospheric moisture, typically lie in the range during July and August. Precipitation is distributed uniformly throughout the year, with approximately on average during each month. Average yearly snowfall totals range from approximately , with the north shore and western parts averaging more than the immediate south shore (south of Sunrise Highway) and the east end. In any given winter, however, some parts of the island can see up to of snow or more. There are also milder winters, in which much of the island sees less than of snow. On August 13, 2014, flash flooding occurred in western-central Suffolk County after a record-setting rainfall deposited more than three months' worth of precipitation on the area within a few hours. Long Island is somewhat vulnerable to tropical cyclones. 
While it lies north of where most tropical cyclones turn eastward and out to sea (most landfalls on the East Coast of the USA occur from North Carolina southward), several tropical cyclones have struck Long Island, including a devastating Category 3, the 1938 New England Hurricane (also known as the "Long Island Express"), and another Category 3, Hurricane Carol in 1954. Other 20th-century storms that made landfall on Long Island at hurricane intensity include the Great Atlantic Hurricane of 1944, Hurricane Donna in 1960, Hurricane Belle in 1976, and Hurricane Gloria in 1985. Also, the eyewall of Hurricane Bob in 1991 brushed the eastern tip. In August 2011, portions of Long Island were evacuated in preparation for Hurricane Irene, a Category 1 hurricane which weakened to a tropical storm before it reached Long Island. On October 29, 2012, Hurricane Sandy caused extensive damage to low-lying coastal areas of Nassau and Suffolk Counties, Brooklyn, and Queens, destroying or severely damaging thousands of area homes and other structures by ocean and bay storm surges. Hundreds of thousands of residents were left without electric power for periods of time ranging up to several weeks while the damage was being repaired. The slow-moving "Superstorm Sandy" (so-nicknamed because it merged with a nor'easter before it made landfall) caused 90% of Long Island households to lose power and an estimated $18 billion in damages in Nassau and Suffolk Counties alone. The storm also had a devastating impact on coastal communities in the Brooklyn and Queens portions of the island, including Coney Island in Brooklyn and the Rockaway Peninsula in Queens, although estimates of monetary damages there are usually calculated as part of the overall losses suffered in New York City as a whole. When allowance is made for inflation, the extent of Sandy's damages is second only to that of those caused by the 1938 Long Island Express. 
Although a lower central pressure was recorded in Sandy, the National Hurricane Center estimates that the 1938 hurricane had a lower pressure at landfall. Hurricane Sandy and its profound impacts have prompted the discussion of constructing seawalls and other coastal barriers around the shorelines of Long Island and New York City to minimize the risk of destructive consequences from another such event in the future. Several smaller islands, though geographically distinct, are in proximity to Long Island and are often grouped with it. These islands include Fire Island, the largest of the outer barrier islands that parallels the southern shore of Long Island for approximately ; Plum Island, which was home to the Plum Island Animal Disease Center, a biological weapons research facility; as well as Robins Island, Gardiners Island, Fishers Island, Long Beach Barrier Island, Jones Beach Island, Great Gull Island, Little Gull Island, and Shelter Island. Long Island is one of the most densely populated regions in the United States. As of the United States 2010 Census, the total population of all four counties of Long Island was 7,568,304, which was 39% of the population of the State of New York. As of 2017, the proportion of New York City residents living on Long Island had risen to 58%, given the 5,007,353 residents living in Brooklyn or Queens. Furthermore, the proportion of New York State's population residing on Long Island has also been increasing, with Long Island's Census-estimated population increasing 4.0% since 2010, to 7,869,820 in 2017, representing 39.6% of New York State's Census-estimated 2017 population of 19,849,399 and with a population density of on Long Island. Long Island's population is greater than 37 of the 50 U.S. states. As of the 2010 census, the combined population of Nassau and Suffolk Counties was 2,832,882 people; Suffolk County's share being 1,493,350 and Nassau County's 1,339,532. 
Nassau County had a larger population for decades, but Suffolk County surpassed it in the 1990 census as growth and development continued to spread eastward. As Suffolk County has more than three times the land area of Nassau County, the latter still has a much higher population density and is growing faster in the 21st century, given its proximity to New York City. According to the U.S. Census Bureau's 2008 American Community Survey, Nassau and Suffolk Counties had the 10th and 26th highest median household incomes in the nation, respectively. Population figures from the U.S. Census Bureau "Census 2010" show that whites are the largest racial group in all four counties, and are in the majority in Nassau and Suffolk Counties. In 2002, "The New York Times" cited a study by the non-profit group ERASE Racism, which determined that Nassau and Suffolk Counties constitute the most racially segregated suburbs in the United States. In contrast, Queens is the most ethnically diverse county in the United States and the most diverse urban area in the world. According to a 2000 report on religion, which asked congregations to respond, Catholics are the largest religious group on Long Island, with non-affiliated in second place. Catholics make up 52% of the population of Nassau and Suffolk, versus 22% for the country as a whole, with Jews at 16% and 7%, respectively, versus 1.7% nationwide. Only a small percentage of Protestants responded, 7% and 8% respectively, for Nassau and Suffolk Counties. This is in contrast with 23% for the entire country on the same survey, and 50% on self-identification surveys. A growing population of nearly half a million Chinese Americans now live on Long Island. Rapidly expanding Chinatowns have developed in Brooklyn (布魯克林) and Queens, with Chinese immigrants also moving into Nassau County, as did earlier European immigrants, such as the Irish and Italians. 
The busy intersection of Main Street, Kissena Boulevard, and 41st Avenue defines the center of Downtown Flushing and the Flushing Chinatown, known as the "Chinese Times Square" or the "Chinese Manhattan". The segment of Main Street between Kissena Boulevard and Roosevelt Avenue, punctuated by the Long Island Rail Road trestle overpass, represents the cultural heart of the Flushing Chinatown. Housing over 30,000 individuals born in China alone, the largest such population outside Asia, Flushing has become home to the largest and one of the fastest-growing Chinatowns in the world as the heart of over 250,000 ethnic Chinese in Queens, representing the largest Chinese population of any U.S. municipality other than New York City itself. Conversely, the Flushing Chinatown has also become the epicenter of organized prostitution in the United States, importing women from China, Korea, Thailand, and Eastern Europe to sustain the underground North American sex trade. More recently, a Little India community has emerged in Hicksville, Nassau County, spreading eastward from the more established Little India enclaves in Queens. Likewise, the Long Island Koreatown originated in Flushing, Queens, and is expanding eastward along Northern Boulevard and into Nassau County. Long Island is home to two Native American reservations, Poospatuck Reservation and Shinnecock Reservation, both in Suffolk County. Numerous island place names are Native American in origin. A 2010 article in "The New York Times" stated that the expansion of the immigrant workforce on Long Island has not displaced any jobs from other Long Island residents. Half of the immigrants on Long Island hold white-collar positions. The counties of Nassau and Suffolk have been long renowned for their affluence. 
Long Island is home to some of the wealthiest communities in the United States, including The Hamptons, on the East End of the South Shore of Suffolk County; the Gold Coast, in the vicinity of the island's North Shore, along Long Island Sound; and increasingly, the western shoreline of Brooklyn, facing Manhattan. In 2016, according to "Business Insider", the 11962 zip code encompassing Sagaponack, within Southampton, was listed as the most expensive in the U.S., with a median home sale price of $8.5 million. Long Island has played a prominent role in scientific research and in engineering. It is home to Brookhaven National Laboratory, a center for nuclear physics and Department of Energy research. Long Island is also home to the Cold Spring Harbor Laboratory, which was directed for 35 years by James D. Watson (who, along with Francis Crick and Rosalind Franklin, discovered the double helix structure of DNA). Companies such as Sperry Rand, Computer Associates (headquartered in Islandia), and Zebra Technologies (now occupying the former headquarters of Symbol Technologies, and a former Grumman plant in Holtsville) have made Long Island a center for the computer industry. Stony Brook University of the State University of New York and New York Institute of Technology conduct advanced medical and technological research. Long Island is home to the East Coast's largest industrial park, the Hauppauge Industrial Park, hosting over 1,300 companies which employ more than 71,000 individuals. Companies in the park and beyond are represented by the Hauppauge Industrial Association. As many as 20% of Long Islanders commute to jobs in Manhattan. The island's eastern end is still partly agricultural. Development of vineyards on the North Fork has spawned a major viticultural industry, replacing potato fields. Pumpkin farms have been added to traditional truck farming. Farms allow fresh fruit picking by Long Islanders for much of the year. 
Fishing continues to be an important industry, especially at Huntington, Northport, Montauk, and other coastal communities of the East End and South Shore. From about 1930 to about 1990, Long Island was considered one of the aerospace manufacturing centers of the United States, with companies such as Grumman Aircraft, Republic, Fairchild, and Curtiss having their headquarters and factories on Long Island. These operations have largely been phased out or significantly diminished. Nassau County and Suffolk County each have their own governments, with a County Executive leading each. Each has a county legislature and countywide-elected officials, including district attorney, county clerk, and county comptroller. The towns in both counties have their own governments as well, with town supervisors and a town council. Nassau County is divided into three towns and two small incorporated cities (Glen Cove and Long Beach). Suffolk County is divided into ten towns. Brooklyn and Queens, on the other hand, do not have county governments. As boroughs of New York City, both have borough presidents, which have been largely ceremonial offices since the shutdown of the New York City Board of Estimate. The respective Borough Presidents are responsible for appointing individuals to the Brooklyn Community Boards and Queens Community Boards, each of which serves an advisory function on local issues. Brooklyn's sixteen members and Queens' fourteen members represent the first and second largest borough contingents of the New York City Council. Queens and Brooklyn are patrolled by the New York City Police Department. Nassau and Suffolk Counties are served by the Nassau County Police Department and Suffolk County Police Department, respectively, although several dozen villages and the two cities in Nassau County have their own police departments. 
The Nassau County Sheriff's Department and Suffolk County Sheriff's Office handle civil procedure, evictions, warrant service and enforcement, prisoner transport and detention, and operation of the county jail. New York State Police patrol state parks and parkways. The SUNY colleges and universities on the island are patrolled by the New York State University Police. The secession of Nassau and Suffolk Counties on Long Island from New York State was proposed as early as 1896, but talk was revived towards the latter part of the twentieth century. On March 28, 2008, Suffolk County Comptroller Joseph Sawicki proposed a plan that would make Nassau and Suffolk Counties the 51st state of the United States of America. Sawicki claimed that all of Nassau and Suffolk taxpayers' money would remain local, rather than being dispersed across the entire state of New York; the two counties send over three billion dollars more to Albany than they receive. The state of Long Island would have included nearly 3 million people (a larger population than that of fifteen other states). Nassau County executive Ed Mangano came out in support of such a proposal in April 2010 and commissioned a study on it. Many major forms of transportation serve Long Island, including aviation via John F. Kennedy International Airport, LaGuardia Airport, and Long Island MacArthur Airport, and multiple smaller airports; rail transportation via the Long Island Rail Road and the New York City Subway; bus routes via MTA Regional Bus Operations, Nassau Inter-County Express, and Suffolk County Transit; ferry service via NYC Ferry and multiple smaller ferry companies; and several major highways. There are historic and modern bridges, and recreational and commuter trails, serving various parts of Long Island. There are ten road crossings out of Long Island, all within New York City limits at the island's extreme western end. 
Plans for a Long Island Crossing at locations in Nassau and Suffolk Counties (a proposed bridge or tunnel that would link Long Island to the south with New York or Connecticut to the north across Long Island Sound) have been discussed for decades, but there are no plans to construct such a crossing. The MTA implements mass transportation for the New York metropolitan area including all five boroughs of New York City, the suburban counties of Dutchess, Nassau, Orange, Putnam, Rockland, Suffolk, and Westchester, all of which together are the "Metropolitan Commuter Transportation District (MCTD)". The MTA considers itself to be the largest regional public transportation provider in the Western Hemisphere. MTA agencies move about 8.6 million customers per day (translating to 2.65 billion rail and bus customers a year). The MTA's systems carry over 11 million passengers on an average weekday systemwide, and over 850,000 vehicles on its seven toll bridges and two tunnels per weekday. The Long Island Rail Road (LIRR) is North America's busiest commuter railroad system, carrying an average of 282,400 passengers each weekday on 728 daily trains. Chartered on April 24, 1834, and operating continuously since, it is also the oldest railroad in the U.S. that still operates under its original charter and name. The Metropolitan Transportation Authority has operated the LIRR as one of its two commuter railroads since 1966, and the LIRR is one of the few railroads worldwide that provides service all the time, year round. In July 2017, a $2 billion plan to add a third railroad track to the LIRR Main Line between the Floral Park and Hicksville stations in Nassau County was approved. Other LIRR projects, such as the Ronkonkoma Branch Double Track Project, are also underway. 
Five "readiness projects" across the LIRR system, which will cost a combined $495 million, are also under construction in preparation for expanded peak-hour LIRR service after the completion of East Side Access, which will bring LIRR trains to Grand Central Terminal. Nassau Inter-County Express (NICE) provides bus service in Nassau County, while Suffolk County Transit, an agency of the Suffolk County government, provides bus service in Suffolk County. In 2012, NICE replaced the former MTA Long Island Bus in transporting Long Islanders across Nassau County while allowing them to use MTA MetroCards as payment. The Long Island Expressway, Northern State Parkway, and Southern State Parkway, all products of the automobile-centered planning of Robert Moses, are the island's primary east-west high-speed controlled-access highways. Several hundred transportation companies service the Long Island/New York City area. Winston Airport Shuttle, the oldest of these companies in business since 1973, was the first to introduce door-to-door shared-ride service to and from the major airports, a model which almost all transportation companies now use. Many public and private high schools on Long Island are ranked among the best in the United States. Nassau and Suffolk Counties are home to 125 public school districts containing 656 public schools. They also host private schools such as Friends Academy, Chaminade High School, Kellenberg Memorial High School, St. Anthony's High School, and North Shore Hebrew Academy, as well as parochial schools, many of which are operated by the Catholic Diocese of Rockville Centre. In contrast, all of Brooklyn and Queens are served by the New York City Department of Education, the largest school district in the United States. 
Three of the nine specialized high schools in New York City are in the two Long Island boroughs, those being Brooklyn Latin School, Brooklyn Technical High School (one of the original three specialized schools), and Queens High School for the Sciences. Like Nassau and Suffolk Counties, they are home to private schools such as Poly Prep Country Day School, Packer Collegiate Institute, Saint Ann's School, and Berkeley Carroll School, and parochial schools operated by the Catholic Diocese of Brooklyn. Long Island is home to a range of higher-education institutions, both public and private. Brooklyn and Queens contain five of the eleven senior colleges within CUNY, the public university system of New York City and one of the largest in the country. Among these are the notable institutions of Brooklyn College and Queens College. Brooklyn also contains private colleges such as Pratt Institute and the New York University Polytechnic School of Engineering, an engineering college that merged with New York University in 2014. Several colleges and universities within the State University of New York system are on Long Island, including Stony Brook (noted for its health sciences research and medical center), as well as Nassau Community College and Suffolk County Community College, which serve their respective counties. Private institutions include Molloy College in Rockville Centre, the New York Institute of Technology, Hofstra University and Adelphi University (both in the Town of Hempstead), as well as Long Island University (with its C.W. Post campus, on a former Gold Coast estate in Brookville, and a satellite campus in downtown Brooklyn). Long Island also contains the Webb Institute, a small naval architecture college in Glen Cove. The island is also home to the United States Merchant Marine Academy, a federal service academy in Kings Point, on the North Shore. 
Music on Long Island (Nassau and Suffolk) is strongly influenced by the proximity to New York City and by the youth culture of the suburbs. Psychedelic rock was widely popular in the 1960s as flocks of disaffected youth traveled to NYC to participate in protest and the culture of the time. R&B also has a history on Long Island, most notably Huntington-born Mariah Carey, one of the top-selling musicians of all time. In the late 1970s through the 1980s, the influence of radio station WLIR made Long Island one of the first places in the U.S. to hear and embrace European New Wave bands such as Depeche Mode, the Pet Shop Boys, and Culture Club. In the 1990s, hip-hop became popular, with rap pioneers Rakim, EPMD, MF DOOM, and Public Enemy having grown up on Long Island. Long Island was the home of a bustling emo scene in the 2000s, with bands such as Brand New, Taking Back Sunday, Straylight Run, From Autumn to Ashes and As Tall as Lions. More recently, newer acts from Long Island, including Austin Schoeffel, Jon Bellion, and Envy on the Coast, have made a name for themselves. Famous rock bands from Long Island include The Rascals, The Ramones (from Queens), Dream Theater, Blue Öyster Cult, and Twisted Sister, as well as guitar virtuosos Donald "Buck Dharma" Roeser, John Petrucci, Steve Vai and Joe Satriani, and drummer Mike Portnoy. Rock and pop singer Billy Joel grew up in Hicksville, and his music often reflects Long Island and his youth. The Nassau Coliseum and Northwell Health at Jones Beach Theater are venues used by national touring acts as performance spaces for concerts. Northwell Health at Jones Beach Theater is an outdoor amphitheatre at Jones Beach State Park. It is a popular place to view summer concerts that feature new and classic artists. It hosts a large Fourth of July fireworks show every year which fills the stands. People also park cars along the highway leading to the show, and others watch from the nearby beaches. 
Long Island is also known for its school music programs. Many schools in Suffolk County have distinguished music programs, with high numbers of students who are accepted into the statewide All-State music groups, or even the National All-Eastern Coast music groups. Both the Suffolk County and Nassau County Music Educators' Associations are recognized by the National Association for Music Education (NAfME), and host numerous events, competitions, and other music-related activities. Long Island has historically been a center for fishing and seafood. This legacy continues in the Blue Point oyster, a now ubiquitous variety originally harvested on the Great South Bay that was the favorite oyster of Queen Victoria. Clams are also a popular food, and clam digging a popular recreational pursuit, with Manhattan clam chowder reputed to have Long Island origins. Of land-based produce, Long Island duck has a history of national recognition since the 19th century, with four duck farms continuing to produce 2 million ducks a year. Two symbols of Long Island's duck farming heritage are the Long Island Ducks minor-league baseball team and the Big Duck, a 1931 duck-shaped building that is a historic landmark and tourist attraction. In addition to Long Island's duck industry, Riverhead contains one of the largest buffalo farms on the East Coast. Long Island is well known for its production of alcoholic beverages. Eastern Long Island is a significant producer of wine. Vineyards are most heavily concentrated on Long Island's North Fork, which contains 38 wineries. Most of these contain tasting rooms, which are popular attractions for visitors from across the New York metropolitan area. Long Island has also become a producer of diverse craft beers, with 15 microbreweries across Nassau and Suffolk Counties. The largest of these is Blue Point Brewing Company, best known for its "toasted lager". 
Long Island is also globally known for its signature cocktail, the Long Island Iced Tea, which was purportedly invented at the popular Oak Beach Inn nightclub in Babylon in the 1970s. Long Island's eateries are largely a product of the region's local ethnic populations. Asian cuisines, Italian cuisine, Jewish cuisine, and Latin American cuisines were the most popular ethnic cuisines on Long Island as of the second decade of the 2000s. Asian cuisines are predominantly represented by East Asian, South Asian, and Middle Eastern cuisines. Italian cuisine is found in ubiquitous pizzerias throughout the island, with the region hosting an annual competition, the Long Island Pizza Festival & Bake-Off. Jewish cuisine is likewise represented by delicatessens and bagel stores. Latin American cuisines span their geographical origins, from Brazilian rodizios to Mexican taquerias. The New York Mets baseball team plays at Citi Field in Flushing Meadows-Corona Park, Queens. Their former stadium, Shea Stadium, was also home for the New York Jets football team from 1964 until 1983. The new stadium has an exterior façade and main entry rotunda inspired by Brooklyn's famous Ebbets Field (see below). The New York Mets planned to move their Double-A farm team to Long Island, as part of the ambitious but now-defunct plan for Nassau County called The Lighthouse Project. The Brooklyn Cyclones are a minor league baseball team, affiliated with the New York Mets. The Cyclones play at MCU Park just off the boardwalk on Coney Island in Brooklyn. An artificial turf baseball complex named Baseball Heaven is in Yaphank. The Barclays Center, a sports arena, business, and residential complex built partly on a platform over the Atlantic Yards at Atlantic Avenue in Brooklyn, is the home of the Brooklyn Nets basketball team and the New York Islanders hockey team. 
The move from New Jersey in the summer of 2012 marked the return to Long Island for the Nets franchise, which played at Nassau Veterans Memorial Coliseum in Uniondale from 1972 to 1977. The Islanders played at Nassau Coliseum from their 1972 inception through 2015, and then split time between Nassau Coliseum and Barclays Center from 2017 to the present. The Islanders anticipate moving full time to a new hockey arena at Belmont Park, in Elmont, NY, in 2022. Ebbets Field, which stood in Brooklyn from 1913 until its demolition in 1960, was the home of the Brooklyn Dodgers baseball team, who moved to California after the 1957 Major League Baseball season to become the Los Angeles Dodgers. The Dodgers won several National League pennants in the 1940s and 1950s, losing several times in the World Series—often called "Subway Series"—to their Bronx rivals, the New York Yankees. The Dodgers won their lone championship in Brooklyn in the 1955 World Series versus the Yankees. Despite this success during the latter part of the team's stay in Brooklyn, they were a second-division team with an unspectacular winning record for much of their history there – but they nonetheless became legendary for the almost-fanatical devotion of the Brooklynites who packed the relatively small ballpark to vigorously root for the team they affectionately called "Dem Bums". The loss of the Dodgers to California was locally considered a civic tragedy that negatively affected the community far more than the similar moves of other established teams to new cities in the 1950s, including the Dodgers' long-time arch-rival New York Giants, who also left for California after 1957. Long Island is also home to the Long Island Ducks minor league baseball team of the Atlantic League. Their stadium, Bethpage Ballpark, is in Central Islip. The Brooklyn Cyclones minor league baseball team, affiliated with the New York Mets, plays in the Short-Season A classification New York–Penn League. 
The Cyclones play at MCU Park just off the Coney Island boardwalk in the New York City borough of Brooklyn. The New York Dragons of the Arena Football League played their home games at Nassau Coliseum. The two main rugby union teams are the Long Island RFC in East Meadow and the Suffolk Bull Moose in Stony Brook. The New York Sharks is a women's American football team that is a member of the Women's Football Alliance. The New York Sharks' home field is at Aviator Sports Complex in Brooklyn. Long Island's professional soccer club, the New York Cosmos, plays in the Division 2 North American Soccer League at James M. Shuart Stadium in Hempstead. Long Island has historically been a hotbed of lacrosse at the youth and college level, which paved the way for a Major League Lacrosse team in 2001, the Long Island Lizards. The Lizards play at Mitchel Athletic Complex in Uniondale. Long Island has a wide variety of golf courses found all over the island. Two of the most well-known are the Shinnecock Hills Golf Club and the public Bethpage Black Course, which has hosted multiple U.S. Open tournaments as well as several other top-level international championships. Queens also hosts one of the four tennis grand slams, the US Open. Every August (September, in Olympic years), the best tennis players in the world travel to Long Island to play the championships held at the USTA National Tennis Center, adjacent to Citi Field in Flushing Meadows Park. The complex also contains the biggest tennis stadium in the world, Arthur Ashe Stadium. Long Island also has two horse racing tracks, Aqueduct Racetrack in Ozone Park, Queens, and Belmont Park on the Queens/Nassau border in Elmont, home of the Belmont Stakes. The longest dirt thoroughbred racecourse in the world is also at Belmont Park. Another category of sporting events popular in this region is firematic racing, involving many local volunteer fire departments. 
Long Island is home to numerous famous athletes, including Hall of Famers Jim Brown, Julius Erving, John Mackey, Whitey Ford, Nick Drahos, and Carl Yastrzemski. Others include Gold Medalists Sue Bird, Sarah Hughes and Derrick Adkins, USWNT World Cup champions Crystal Dunn and Alexandra Long, D'Brickashaw Ferguson, Billy Donovan, Larry Brown, Rick Pitino, John McEnroe, Jumbo Elliott, Mick Foley, Zack Ryder, Matt Serra, Boomer Esiason, Vinny Testaverde, Craig Biggio, Frank Catalanotto, Greg Sacks, Gilles Villemure, Rob Burnett, Steve Park, Frank Viola, Chris Weidman, Marques Colston and Speedy Claxton. Several NHL players were born and/or raised on Long Island, such as Vancouver Canucks Christopher Higgins and Matt Gilroy, Nashville Predator Eric Nystrom, Toronto Maple Leaf Mike Komisarek, Pittsburgh Penguin Rob Scuderi, and New Jersey Devil Keith Kinkaid. Both Komisarek and Higgins played on the same Suffolk County Hockey League team at an early age, and later played together on the Montreal Canadiens. Nick Drahos was an All-Scholastic and All-Long Island honoree at Lawrence High School in Nassau County in 1936 and 1937, and a two-time unanimous National College All-American in 1939 and 1940 at Cornell University.
https://en.wikipedia.org/wiki?curid=18315
Lower Peninsula of Michigan The Lower Peninsula of Michigan – also known as Lower Michigan – is the southern and less elevated of the two major landmasses that make up the U.S. state of Michigan, the other being the Upper Peninsula. It is surrounded by water on all sides except its southern border, which it shares with Indiana and Ohio. Although the Upper Peninsula is commonly referred to as "the U.P.", it is uncommon for the Lower Peninsula to be called "the L.P." Because of its recognizable shape, the Lower Peninsula is nicknamed "the mitten", with the eastern region identified as "The Thumb". This has led to several folkloric creation myths for the area, one being that it is a hand print of Paul Bunyan, a giant lumberjack and popular European-American folk character in Michigan. When asked where they live, Lower Peninsula residents may hold up their right palm and point to a spot on it to indicate the location. The peninsula is sometimes divided into the Northern Lower Peninsula—which is more sparsely populated and largely forested—and the Southern Lower Peninsula—which is largely urban or farmland. Southern Lower Michigan is sometimes further divided into economic and cultural subregions. The more culturally and economically diverse Lower Peninsula dominates Michigan politics, and maps of it without the Upper Peninsula are sometimes mistakenly presented as "Michigan", which contributes to resentment by "Yoopers" (residents of "the U.P."). Yoopers jokingly refer to residents of the Lower Peninsula as "flat-landers" (referring to the region's less rugged terrain) or "trolls" (because, being south of the Mackinac Bridge, they "live under the bridge"). The Lower Peninsula is bounded on the west by Lake Michigan and on the northeast by Lake Huron, which connect at the Straits of Mackinac. In the southeast, the waterway consisting of the St. Clair River, Lake St. Clair, Detroit River, and Lake Erie separates it from the province of Ontario, Canada. 
It is bounded on the south by the states of Indiana and Ohio. This border is irregular: the border with Indiana was moved 10 miles northward from its territorial position to give Indiana access to Lake Michigan, and its slightly angled border with Ohio was part of the compromise which ended the Toledo War. At its widest points, the Lower Peninsula is long from north to south and from east to west. It contains nearly two-thirds of Michigan's total land area. The surface of the peninsula is generally level, broken by conical hills and glacial moraines usually not more than a few hundred feet tall, most common in the north. The highest point in the Lower Peninsula is not definitely established but is either Briar Hill at , or one of several points nearby in the vicinity of Cadillac. The lowest point is at the shore of Lake Erie at . The western coast features extensive sandy beaches and dunes produced by Lake Michigan and the prevailing winds from the west. The relatively shallow Saginaw Bay is surrounded by a similarly shallow drainage basin. Several large river systems flow into the adjacent Great Lakes, including the Kalamazoo, Grand, Muskegon, and Manistee rivers (Lake Michigan), and the Au Sable and Tittabawassee–Shiawassee–Saginaw rivers (Lake Huron). Because of the networks of rivers and numerous lakes, no point on land is more than from one of these bodies of water, and at most from one of the Great Lakes (near Lansing). The American Bird Conservancy and the National Audubon Society have designated several locations as internationally Important Bird Areas. The Lower Peninsula is dominated by a geological basin known as the Michigan Basin. That feature is represented by a nearly circular pattern of geologic sedimentary strata in the area with a nearly uniform structural dip toward the center of the peninsula. The basin is centered in Gladwin County where the Precambrian basement rocks are deep. 
Around the margins, such as under Mackinaw City, Michigan, the Precambrian surface is around down. This contour on the bedrock clips the northern part of the lower peninsula and continues under Lake Michigan along the west. It crosses the southern counties of Michigan and continues on to the north beneath Lake Huron. Primary Interstate Highways have two digits on their shields; auxiliary Interstate Highways have three digits. Interstate Highways include: U.S. Highways include: The Great Lakes Circle Tour is a designated scenic road system connecting all of the Great Lakes and the St. Lawrence River. Michigan's Lower Peninsula can be divided into four main regions based on geological, soil, and vegetation differences; amount of urban areas or rural areas; minority populations; and agriculture. The four principal regions listed below can further be separated into sub-regions and overlapping areas.
https://en.wikipedia.org/wiki?curid=18317
Lake Toba Lake Toba () is a large natural lake in North Sumatra, Indonesia, occupying the caldera of a supervolcano. The lake is located in the middle of the northern part of the island of Sumatra, with a surface elevation of about . The lake stretches from to , and is about long, wide, and up to deep. It is the largest lake in Indonesia and the largest volcanic lake in the world. Lake Toba Caldera is one of the nineteen Geoparks in Indonesia and has been proposed for inclusion as a UNESCO Global Geopark. Lake Toba is the site of a supervolcanic eruption estimated at VEI 8 that occurred 69,000 to 77,000 years ago, representing a climate-changing event. Recent advances in dating methods suggest a more accurate date of 74,000 years ago. It is the largest-known explosive eruption on Earth in the last 25 million years. According to the Toba catastrophe theory, it had global consequences for human populations; it killed most humans living at that time and is believed to have created a population bottleneck in central east Africa and India, which affects the genetic make-up of the worldwide human population to the present. It has been accepted that the eruption of Toba led to a volcanic winter with a worldwide decrease in temperature between , and up to in higher latitudes. Additional studies in Lake Malawi in East Africa show significant amounts of ash being deposited from the Toba eruptions, even at that great distance, but little indication of a significant climatic effect in East Africa. The Toba caldera complex in North Sumatra comprises four overlapping volcanic craters that adjoin the Sumatran "volcanic front". At it is the world's largest Quaternary caldera, and the fourth and youngest caldera intersects the three older calderas. An estimated of dense-rock equivalent pyroclastic material, known as the youngest Toba tuff, was released during one of the largest explosive volcanic eruptions in recent geological history. 
Following this eruption, a resurgent dome formed within the new caldera, joining two half-domes separated by a longitudinal graben. At least four cones, four stratovolcanoes, and three craters are visible in the lake. The Tandukbenua cone on the northwestern edge of the caldera has only sparse vegetation, suggesting a young age of several hundred years. Also, the Pusubukit (Hill Center) volcano ( above sea level) on the south edge of the caldera is solfatarically active. The "Toba eruption" (the "Toba event") occurred at what is now Lake Toba about 75,000±900 years ago. It was the last in a series of at least four caldera-forming eruptions at this location, with earlier calderas having formed around 788,000±2,200 years ago. This last eruption had an estimated VEI=8, making it the largest-known explosive volcanic eruption within the last 25 million years. Bill Rose and Craig Chesner of Michigan Technological University have estimated that the total amount of material released in the eruption was about —about of ignimbrite that flowed over the ground, and approximately that fell as ash mostly to the west. However, based on a newer method (crystal concentration and exponential), Toba may have erupted of ignimbrite and co-ignimbrite. The pyroclastic flows of the eruption destroyed an area of at least , with ash deposits as thick as by the main vent. The eruption was large enough to have deposited an ash layer approximately thick over all of South Asia; at one site in central India, the Toba ash layer today is up to thick and parts of Malaysia were covered with of ash fall. In addition, it has been variously calculated that of sulfurous acid or of sulfur dioxide were ejected into the atmosphere by the event. The subsequent collapse formed a caldera that filled with water, creating Lake Toba. The island in the center of the lake is formed by a resurgent dome. 
The exact year of the eruption is unknown, but the pattern of ash deposits suggests that it occurred during the northern summer because only the summer monsoon could have deposited Toba ashfall in the South China Sea. The eruption lasted perhaps two weeks, and the ensuing volcanic winter resulted in a decrease in average global temperatures by for several years. Ice cores from Greenland record a pulse of starkly reduced levels of organic carbon sequestration. Very few plants or animals in southeast Asia would have survived, and it is possible that the eruption caused a planet-wide die-off. However, the global cooling has been questioned by Rampino and Self, who concluded that the cooling had already started before Toba's eruption. This conclusion was supported by Lane and Zielinski, who studied lake cores from Africa and the GISP2 ice core and concluded that there was no volcanic winter after the Toba eruption and that the high H2SO4 deposits did not cause long-term effects. Evidence from studies of mitochondrial DNA suggests that humans may have passed through a genetic bottleneck around this time that reduced genetic diversity below what would be expected given the age of the species. According to the Toba catastrophe theory, proposed by Stanley H. Ambrose of the University of Illinois at Urbana–Champaign in 1998, the effects of the Toba eruption may have decreased the size of human populations to only a few tens of thousands of individuals. However, this hypothesis is not widely accepted because similar effects on other animal species have not been observed, and paleoanthropology suggests there was no population bottleneck. Since the major eruption ~70,000 years ago, eruptions of smaller magnitude have also occurred at Toba. The small cone of Pusukbukit and several lava domes formed on the southwestern margin of the caldera. 
The most recent eruption may have been at Tandukbenua on the northwestern caldera edge, suggested by a lack of vegetation that could be due to an eruption within the last few hundred years. Some parts of the caldera have shown uplift due to partial refilling of the magma chamber, for example, pushing Samosir Island and the Uluan Peninsula above the surface of the lake. The lake sediments on Samosir Island show that it has risen by at least since the cataclysmic eruption. Such uplifts are common in very large calderas, apparently due to the upward pressure of below-ground magma. Toba is probably the largest resurgent caldera on Earth. Large earthquakes have recently occurred in the vicinity of the volcano, notably in 1987 along the southern shore of the lake at a depth of . Such earthquakes have also been recorded in 1892, 1916, and 1920–1922. Lake Toba lies near the Great Sumatran fault, which runs along the centre of Sumatra in the Sumatra Fracture Zone. The volcanoes of Sumatra and Java are part of the Sunda Arc, a result of the northeasterly movement of the Indo-Australian Plate, which is sliding under the eastward-moving Eurasian Plate. The subduction zone in this area is very active: the seabed near the west coast of Sumatra has had several major earthquakes since 1995, including the 9.1 2004 Indian Ocean earthquake and the 8.7 2005 Nias–Simeulue earthquake, the epicenters of which were around from Toba. Most of the people who live around Lake Toba are ethnically Bataks. Traditional Batak houses are noted for their distinctive roofs (which curve upwards at each end, as a boat's hull does) and their colorful decor. The flora of the lake includes various types of phytoplankton, emerged macrophytes, floating macrophytes, and submerged macrophytes, while the surrounding countryside is rainforest including areas of Sumatran tropical pine forests on the higher mountainsides. The fauna includes several species of zooplankton and benthic animals. 
Since the lake is oligotrophic (nutrient-poor), the native fish fauna is relatively scarce, and the only endemics are "Rasbora tobana" (strictly speaking near-endemic, since also found in some tributary rivers that run into the lake) and "Neolissochilus thienemanni", locally known as the Batak fish. The latter species is threatened by deforestation (causing siltation), pollution, changes in water level and the numerous fish species that have been introduced to the lake. Other native fishes include species such as "Aplocheilus panchax", "Nemacheilus pfeifferae", "Homaloptera gymnogaster", "Channa gachua", "Channa striata", "Clarias batrachus", "Barbonymus gonionotus", "Barbonymus schwanenfeldii", "Danio albolineatus", "Osteochilus vittatus", "Puntius binotatus", "Rasbora jacobsoni", "Tor tambra", "Betta imbellis", "Betta taeniata" and "Monopterus albus". Among the many introduced species are "Anabas testudineus", "Oreochromis mossambicus", "Oreochromis niloticus", "Ctenopharyngodon idella", "Cyprinus carpio", "Osphronemus goramy", "Trichogaster pectoralis", "Trichopodus trichopterus", "Poecilia reticulata" and "Xiphophorus hellerii". On 18 June 2018, Lake Toba was the scene of a ferry disaster in which some 190 people drowned and a number of others were injured. MV Sinar Bangun, a vessel operating irregularly on the lake, capsized with many passengers on board. Preliminary reports found that the vessel had been operated with irregularities: overloading and operation in rough weather conditions were concluded to be the main causes of the disaster. Around 50 cars and 100 motorbikes that were aboard also sank into the lake that day.
https://en.wikipedia.org/wiki?curid=18318
Lens A lens is a transmissive optical device that focuses or disperses a light beam by means of refraction. A simple lens consists of a single piece of transparent material, while a compound lens consists of several simple lenses ("elements"), usually arranged along a common axis. Lenses are made from materials such as glass or plastic, and are ground and polished or molded to a desired shape. A lens can focus light to form an image, unlike a prism, which refracts light without focusing. Devices that similarly focus or disperse waves and radiation other than visible light are also called lenses, such as microwave lenses, electron lenses, acoustic lenses, or explosive lenses. The word "lens" comes from "lēns", the Latin name of the lentil, because a double-convex lens is lentil-shaped. The lentil plant also gives its name to a geometric figure.
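As a brief illustration of the focusing behavior described above, the thin-lens approximation (the lensmaker's equation) relates a lens's focal length to its refractive index and the curvature of its two surfaces; this is the standard textbook form, not a formula taken from this article:

```latex
% Lensmaker's equation in the thin-lens approximation:
%   f        -- focal length
%   n        -- refractive index of the lens material relative to the surrounding medium
%   R_1, R_2 -- radii of curvature of the front and back surfaces
%               (positive when the surface bulges toward the incoming light)
\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
```

For the lentil-shaped double-convex lens mentioned above, R_1 > 0 and R_2 < 0 under this sign convention, so the focal length f is positive and the lens converges light.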
https://en.wikipedia.org/wiki?curid=18320
Lamorna Birch Samuel John "Lamorna" Birch, RA, RWS (7 June 1869 – 7 January 1955) was an English artist in oils and watercolours. At the suggestion of fellow artist Stanhope Forbes, Birch adopted the "soubriquet" "Lamorna" to distinguish himself from Lionel Birch, an artist who was also working in the area at that time. Lamorna Birch was born in Egremont, Cheshire, England. He was self-taught as an artist, except for a brief period of study at the Académie Colarossi in Paris during 1895. Birch settled in Lamorna, Cornwall in 1892, initially lodging at nearby Boleigh Farm. Many of his most famous pictures date from this time and the beautiful Lamorna Cove is usually their subject matter. He was attracted to Cornwall by the Newlyn group of artists but he ended up starting a second group based around his adopted home of Lamorna. He married Houghton (Mouse) Emily Vivian, the daughter of a mining agent from Camborne, and they lived at Flagstaff Cottage, Lamorna. He exhibited at the Royal Academy from 1893, was elected as an Associate (ARA) in 1926 and made a Royal Academician (RA) in 1934, and showed more than two hundred paintings there. He held his first one-man exhibition at the Fine Art Society in 1906 and is said to have produced more than 20,000 pictures. Like a number of his contemporaries, he was profiled as an 'Artist of Note' in "The Artist" magazine, by Richard Seddon, in the June 1944 edition. Birch has paintings at Penlee House and in the collection of Derby Art Gallery.
https://en.wikipedia.org/wiki?curid=18322
Library classification A library classification is a system of knowledge organization by which library resources are arranged and ordered systematically. Library classifications use a notational system that represents the order of topics in the classification and allows items to be stored in that order. Library classification systems group related materials together, typically arranged as a hierarchical tree structure. A different kind of classification system, called a faceted classification system, is also widely used, which allows the assignment of multiple classifications to an object, enabling the classifications to be ordered in many ways. Library classification is an aspect of library and information science. It is distinct from scientific classification in that it has as its goal to provide a useful ordering of documents rather than a theoretical organization of knowledge. Although it has the practical purpose of creating a physical ordering of documents, it does generally attempt to adhere to accepted scientific knowledge. Library classification is distinct from the application of subject headings in that classification organizes knowledge into a systematic order, while subject headings provide access to intellectual materials through vocabulary terms that may or may not be organized as a knowledge system. The characteristics that a bibliographic classification demands for the sake of reaching these purposes are: a useful sequence of subjects at all levels, a concise memorable notation, and a host of techniques and devices of number synthesis. Library classifications were preceded by classifications used by bibliographers such as Conrad Gessner. The earliest library classification schemes organized books in broad subject categories. The earliest known library classification scheme is the Pinakes by Callimachus, a scholar at the Library of Alexandria during the third century BC. 
During the Renaissance and Reformation era, "Libraries were organized according to the whims or knowledge of individuals in charge." This changed the format in which various materials were classified. Some collections were classified by language and others by how they were printed. After the printing revolution in the sixteenth century, the increase in available printed materials made such broad classification unworkable, and more granular classifications for library materials had to be developed in the nineteenth century. In 1627 Gabriel Naudé published a book called "Advice on Establishing a Library". At the time, he was working in the private library of President Henri de Mesmes II. Mesmes had around 8,000 printed books and many more Greek, Latin and French written manuscripts. Although it was a private library, scholars with references could access it. The purpose of "Advice on Establishing a Library" was to identify rules for private book collectors to organize their collections in a more orderly way to increase the collection's usefulness and beauty. Naudé developed a classification system based on seven different classes: theology, medicine, jurisprudence, history, philosophy, mathematics and the humanities. These seven classes would later be increased to twelve. "Advice on Establishing a Library" was about a private library, but within the same book, Naudé encouraged the idea of public libraries open to all people regardless of their ability to pay for access to the collection. One of the most famous libraries that Naudé helped improve was the Bibliothèque Mazarine in Paris. Naudé spent ten years there as a librarian. Because of Naudé's strong belief in free access to libraries to all people, the Bibliothèque Mazarine became the first public library in France around 1644. 
Although libraries created order within their collections from as early as the fifth century BC, the Paris Bookseller's classification, developed in 1842 by Jacques Charles Brunet, is generally seen as the first of the modern book classifications. Brunet provided five major classes: theology, jurisprudence, sciences and arts, belles-lettres, and history. There are many standard systems of library classification in use, and many more have been proposed over the years. However, in general, classification systems can be divided into three types depending on how they are used: In terms of functionality, classification systems are often described as: There are few completely enumerative systems or faceted systems; most systems are a blend that favours one type or the other. The most common classification systems, LCC and DDC, are essentially enumerative, though with some hierarchical and faceted elements (more so for DDC), especially at the broadest and most general level. The first true faceted system was the colon classification of S. R. Ranganathan. A distinction is sometimes drawn between classification types and classification methods (also called systems): a classification type characterizes a scheme by its form, characteristics, or qualities, and is useful for identification, study, and research, whereas a classification method or system refers to a specific scheme such as the Dewey Decimal Classification (DDC) or the Universal Decimal Classification (UDC). The most common systems in English-speaking countries are: Other systems include: Newer classification systems tend to use the principle of synthesis (combining codes from different lists to represent the different attributes of a work) heavily, which is comparatively lacking in LC or DDC. 
Library classification is associated with library (descriptive) cataloging under the rubric of "cataloging and classification", sometimes grouped together as "technical services". The library professional who engages in the process of cataloging and classifying library materials is called a "cataloger" or "catalog librarian". Library classification systems are one of the two tools used to facilitate subject access. The other consists of alphabetical indexing languages such as thesauri and subject headings systems. Library classification of a piece of work consists of two steps. First, the subject or topic of the material is ascertained. Next, a call number (essentially a book's address) based on the classification system in use at the particular library is assigned to the work using the notation of the system. Unlike subject headings or thesauri, where multiple terms can be assigned to the same work, in library classification systems each work can be placed in only one class. This is due to shelving purposes: a book can have only one physical place. However, in classified catalogs one may have main entries as well as added entries. Most classification systems, like the Dewey Decimal Classification (DDC) and Library of Congress Classification, also add a cutter number to each work which adds a code for the author of the work. Classification systems in libraries generally play two roles. First, they facilitate subject access by allowing the user to find out what works or documents the library has on a certain subject. Second, they provide a known location for the information source to be located (e.g. where it is shelved). Until the 19th century, most libraries had closed stacks, so the library classification only served to organize the subject catalog. 
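The two-step process of classifying a work (ascertain the subject, then build a call number from the scheme's class notation plus a cutter number for the author) can be sketched in miniature. This is a toy illustration only: the subject-to-class mapping and the cutter function below are invented stand-ins, not the real DDC schedules or Cutter-Sanborn tables.

```python
# Toy sketch of call-number assignment. DDC_CLASSES and cutter_number
# are invented stand-ins for real cataloging tables.
DDC_CLASSES = {
    "astronomy": "520",
    "library science": "020",
    "ornithology": "598",
}

def cutter_number(author_surname: str) -> str:
    # Stand-in for a Cutter-Sanborn lookup: initial letter plus digits
    # derived from the next letters of the surname.
    tail = author_surname[1:3].lower()
    digits = "".join(str((ord(c) - ord("a")) % 10) for c in tail)
    return author_surname[0].upper() + digits

def call_number(subject: str, author_surname: str) -> str:
    # Step 1: ascertain the subject of the work.
    class_notation = DDC_CLASSES[subject.lower()]
    # Step 2: build the call number from the class notation plus a cutter.
    return f"{class_notation} .{cutter_number(author_surname)}"

print(call_number("astronomy", "Sagan"))  # 520 .S06
```

Note that the function returns exactly one class notation per work, mirroring the one-place-per-book constraint that shelving imposes.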
In the 20th century, libraries opened their stacks to the public and started to shelve library material itself according to some library classification to simplify subject browsing. Some classification systems are more suitable for aiding subject access, rather than for shelf location. For example, Universal Decimal Classification, which uses a complicated notation of pluses and colons, is more difficult to use for the purpose of shelf arrangement but is more expressive compared to DDC in terms of showing relationships between subjects. Similarly faceted classification schemes are more difficult to use for shelf arrangement, unless the user has knowledge of the citation order. Depending on the size of the library collection, some libraries might use classification systems solely for one purpose or the other. In extreme cases, a public library with a small collection might just use a classification system for location of resources but might not use a complicated subject classification system. Instead all resources might just be put into a couple of wide classes (travel, crime, magazines etc.). This is known as a "mark and park" classification method, more formally called reader interest classification. As a result of differences in notation, history, use of enumeration, hierarchy, and facets, classification systems can differ in the following ways:
https://en.wikipedia.org/wiki?curid=18328
Lexus Lexus is the luxury vehicle division of the Japanese automaker Toyota. Created at around the same time as Japanese rivals Honda and Nissan created their Acura and Infiniti luxury divisions respectively, Lexus originated from a corporate project to develop a new premium sedan, code-named F1, which began in 1983 and culminated in the launch of the Lexus LS in 1989. Subsequently, the division added sedan, coupé, convertible and SUV models. Lexus did not exist as a brand in its home market until 2005, and all vehicles marketed internationally as Lexus from 1989 to 2005 were released in Japan under the Toyota marque and an equivalent model name. In 2005, a hybrid version of the RX crossover debuted and additional hybrid models later joined the division's lineup. Lexus launched its own F marque performance division in 2007 with the debut of the IS F sport sedan, followed by the LFA supercar in 2009. Lexus vehicles are largely produced in Japan, with manufacturing centered in the Chūbu and Kyūshū regions, and in particular at Toyota's Tahara, Aichi, Chūbu and Miyata, Fukuoka, Kyūshū plants. Assembly of the first Lexus built outside the country, the Ontario-produced RX 330, began in 2003. Following a corporate reorganization from 2001 to 2005, Lexus began operating its own design, engineering and manufacturing centers. Since the 2000s, Lexus has increased sales outside its largest market, the United States. The division inaugurated dealerships in the Japanese domestic market in 2005, becoming the first Japanese premium car marque to launch in its country of origin. The brand was introduced in Southeast Asia, Latin America, Europe and other regions. The Lexus brand was created around the same time as Japanese rivals Nissan and Honda developed their Infiniti and Acura premium brands. The Japanese government imposed voluntary export restraints for the U.S. market, so it was more profitable for Japanese automakers to export more expensive cars to the U.S. In 1983, Toyota chairman Eiji Toyoda issued a challenge to build the world's best car. 
The project, code-named F1 ("Flagship One"), developed the Lexus LS 400 to expand Toyota's product line in the premium segment. The F1 project followed the Toyota Supra sports car and the premium Toyota Mark II models. Both the Supra and Mark II were rear-wheel drive cars with a powerful 7M-GE or 7M-GTE inline-six engine. The largest sedan Toyota built at the time was the limited-production, 1960s-vintage Toyota Century, a domestic, hand-built, V8-powered limousine, followed by the inline-six-engined Toyota Crown premium sedan. The Century was conservatively styled for the Japanese market and, along with the Crown, was not slated for export after a 1982 restyle. The F1 designers targeted their new sedan at international markets and began development of a new V8 engine. Japanese manufacturers exported more expensive models in the 1980s due to voluntary export restraints, negotiated by the Japanese government and U.S. trade representatives, that restricted mainstream car sales. In 1986, Honda launched its Acura marque in the U.S., influencing Toyota's plans for a luxury division. The initial Acura model was an export version of the Honda Legend, itself launched in Japan in 1985 as a rival to the Toyota Crown, Nissan Cedric/Gloria and Mazda Luce. In 1987, Nissan unveiled its plans for a premium brand, Infiniti, and revised its Nissan President sedan in standard wheelbase form for export as the Infiniti Q45, which it launched in 1990. Mazda began selling the Luce as the Mazda 929 in North America in 1988 and later began plans to develop an upscale marque to be called Amati, but its plans did not come to fruition. Toyota researchers visited the U.S. in May 1985 to conduct focus groups and market research on luxury consumers. During that time, several F1 designers rented a home in Laguna Beach, California, to observe the lifestyles and tastes of American upper-class consumers.
Meanwhile, F1 engineering teams conducted prototype testing on locations ranging from the German autobahn to U.S. roads. Toyota's market research concluded that a separate brand and sales channel were needed to present its new sedan, and plans were made to develop a new network of dealerships in the U.S. market. In 1986, Toyota's longtime advertising agency Saatchi & Saatchi formed a specialized unit, Team One, to handle marketing for the new brand. Image consulting firm Lippincott & Margulies was hired to develop a list of 219 prospective names; Vectre, Verone, Chaparel, Calibre and Alexis were chosen as top candidates. While Alexis quickly became the front-runner, concerns were raised that the name applied to people more than cars (being associated with the Alexis Carrington character on the popular 1980s prime time drama "Dynasty"). As a result, the first letter was removed and the "i" replaced with a "u" to morph the name into Lexus. Theories of the etymology of the Lexus name have suggested it is a combination of the words "luxury" and "elegance," or an acronym for "luxury exports to the U.S." According to Team One interviews, the brand name has no specific meaning and simply denotes a luxurious and technological image. Prior to the release of the first vehicles, database service LexisNexis obtained a temporary injunction forbidding the name Lexus from being used because it might cause product confusion. The injunction threatened to delay the division's launch and marketing efforts. The U.S. appeals court lifted the injunction, deciding that there was little likelihood of confusion between the two products. The original Lexus slogan, developed after Team One representatives visited Lexus designers in Japan and noted an obsessive attention to detail, became "The Relentless Pursuit of Perfection." Three firms were involved in the final phase of logo development: Saatchi & Saatchi, Molly Designs and Hunter/Korobkin, Inc.
The finished logo was a combination of two firms’ final designs: the Lexus logo typeface came from Saatchi & Saatchi and the “L” was Hunter/Korobkin, Inc.’s design. According to Toyota, the automaker made some refinements so the logo would be easier to manufacture, rendering it using a mathematical formula. The first teaser ads featuring the Lexus name and logo appeared at the Chicago, Los Angeles and New York auto shows in 1988. The F1 project was completed in 1989, involving 60 designers, 24 engineering teams, 1,400 engineers, 2,300 technicians, 220 support workers, approximately 450 prototypes and more than $1 billion in costs. The resulting car, the Lexus LS 400, had a design that shared no major elements with previous Toyota vehicles, with a new 4.0 L V8 gasoline engine and rear-wheel drive. The car debuted in January 1989 at the North American International Auto Show in Detroit and official sales of the vehicle began the following September at a network of 81 new Lexus dealerships in the U.S. The LS 400 was sold along with the smaller ES 250, a rebadged version of the Japanese market Toyota Camry Prominent/Toyota Vista. The launch of Lexus was accompanied by a multimillion-dollar advertising campaign. The LS 400 was praised for its quietness, well-appointed and ergonomic interior, engine performance, build quality, aerodynamics, fuel economy and value. However, it was criticized by some automobile columnists for derivative styling and a suspension regarded as too compromising of handling for ride comfort. In some markets it was priced against mid-size, six-cylinder Mercedes-Benz and BMW models. It was rated by "Car and Driver" magazine as better than the higher-priced Mercedes-Benz 420 SEL and BMW 735i in terms of ride, handling and performance. The LS 400 also won motoring awards from automotive publications including "Automobile Magazine" and "Wheels Magazine". 
Lexus quickly established customer loyalty and its debut was generally regarded as a shock to existing luxury marques. BMW's and Mercedes-Benz's U.S. sales figures dropped 29 percent and 19 percent, respectively, with BMW executives accusing Lexus of dumping in that market, while 35 percent of Lexus buyers traded in a Lincoln or Cadillac. In December 1989, Lexus initiated a voluntary recall of all 8,000 LS 400s based upon two customer complaints over defective wiring and an overheated brake light. In a 20-day operation to replace the parts on affected vehicles, technicians picked up, repaired and returned cars to customers free of charge, with personnel flown in and garage space rented for owners in remote locations. This response was covered in media publications and helped establish the marque's early reputation for customer service. By the end of 1989, a total of 16,392 LS 400 and ES 250 sedans were sold in the four months following the U.S. launch. Although sales had begun at a slower pace than expected, the final tally matched the division's target of 16,000 units for that year. Following the initial models, plans called for the addition of a sports coupe along with a redesigned ES sedan. In 1990, during its first full year of sales, Lexus sold 63,594 LS 400 and ES 250 sedans in the U.S., the majority being the LS model. That year, Lexus also began limited exports to the United Kingdom, Switzerland, Canada and Australia. In 1991, Lexus launched its first sports coupe, the SC 400, which shared the LS 400's V8 engine and rear-wheel drive design. This was followed by the second generation ES 300 sedan, which succeeded the ES 250 and became Lexus' top seller. At the conclusion of 1991, Lexus had become the top-selling premium car import in the U.S., with sales reaching 71,206 vehicles. That year, Lexus ranked highest in J.D. Power and Associates' studies on initial vehicle quality, customer satisfaction and sales satisfaction for the first time.
The marque also began increasing U.S. model prices past those of comparable American premium makes, but still below high-end European models. By 1992, the LS 400's base price had risen 18 percent. In 1993, Lexus launched the mid-size GS 300 sports sedan, based on the Toyota Aristo, which used the Toyota "S" platform from the Toyota Crown and had been sold in Japan for the previous two years. The GS 300 was priced below the LS 400 in the marque's lineup. That same year, Lexus became one of the first marques to debut a certified pre-owned program, with the aim of improving trade-in model values. The marque introduced the second generation LS 400 in 1994. In May 1995, sales were threatened by the U.S. government's proposal of 100 percent tariffs on upscale Japanese cars in response to the widening U.S.-Japan trade deficit. SUVs were exempt from the proposed sanctions. Normal sales operations resumed by late 1995 when the Japanese auto manufacturers collectively agreed to greater American investments and the tariffs were not enacted. In 1996, Lexus debuted its first sport utility vehicle, the LX 450, followed by the third generation ES 300 sedan, and the second generation GS 300 and GS 400 sedans in 1997. The marque's plans for developing an SUV model had accelerated during the U.S.-Japan tariff discussions of 1995. Lexus added the first luxury-branded crossover SUV, the RX 300, in 1998. The RX crossover targeted suburban buyers who desired an upmarket SUV but did not need the LX's off-road capability. It was particularly successful, eventually becoming the marque's top-selling model ahead of the ES sedan. The same year, Lexus made its debut in South America's most populous country when it launched sales in Brazil. In 1999, the IS, an entry-level sport sedan, was introduced. Lexus also recorded its 1 millionth vehicle sold in the U.S. market and was ranked as the top-selling premium car maker in the U.S. overall.
In July 2000, Lexus introduced the IS 300 in North America (following the model's global launch in 1999 as the IS 200), along with the third generation LS 430. In 2001, the marque's first convertible, the SC 430, was introduced, as well as a redesigned ES 300. The GX 470 mid-size SUV debuted in 2002, followed by the second generation RX 330 in 2003. The following year, Lexus recorded its 2 millionth U.S. vehicle sale, and in 2005 the first luxury-branded production hybrid SUV, the RX 400h, went on sale. This vehicle used Toyota's Hybrid Synergy Drive system, which combined a gasoline engine with electric motors. In 2005, Lexus completed an organizational separation from parent company Toyota, with dedicated design, engineering, training, and manufacturing centers working exclusively for the division. This effort coincided with Lexus' launch in its home market of Japan and an expanded global launch of the brand in markets such as China. Executives aimed to increase Lexus sales outside its largest market, the U.S. To accompany this expansion, next generation Lexus vehicles were redesigned as "global models" for international release. In the European market, where Lexus had long faced struggling sales owing to low brand recognition, few dedicated dealerships, and 1990s import quotas, the marque announced plans to introduce hybrid and diesel powertrains, increase the number of Lexus dealerships, and expand operations in emerging markets such as Russia. Lexus' arrival in the Japanese market in July 2005 marked the first introduction of a Japanese premium car marque in its domestic market. New generation LS, IS, ES, GS, and RX models subsequently became available in Japan along with the SC 430, ending domestic sales of Toyota-branded models under the Celsior, Altezza, Windom, Aristo, Harrier, and Soarer nameplates, respectively.
The Altezza and Aristo were previously exclusive to Japanese Toyota retail sales channels called Toyota Vista Store, the Windom was exclusive to Toyota Corolla Store, the Celsior and Harrier were exclusive to Toyopet Store, and the Soarer was previously available at both Toyota Store and Toyopet Store locations. Lexus models sold in Japan featured higher specifications and a price premium compared with their discontinued Toyota counterparts. Sales for the first half-year were slower than expected, affected by the contraction of the domestic auto market and price increases, but improved in subsequent months with an expanded lineup. Through the mid-2000s, Lexus experienced sales successes in South Korea and Taiwan, becoming the top-selling import make in both markets in 2005; the marque also sold well in the Middle East, where it ranked first or second among rivals in multiple countries, and in Australia, where Lexus reached third in luxury car sales in 2006. Division executives in 2006 announced an expansion goal from 68 countries to 76 worldwide by 2010. By the end of the decade, this expansion resulted in official launches in Malaysia and South Africa in 2006, Indonesia in 2007, Chile in 2008, and the Philippines in 2009. In 2006, Lexus began sales of the GS 450h, a V6 hybrid performance sedan, and launched the fourth generation LS line, comprising both standard- and long-wheelbase V8 (LS 460 and LS 460 L) and hybrid (LS 600h and LS 600h L) versions. The fifth generation ES 350 also debuted in the same year. The LS 600h L subsequently went on sale as the most expensive sedan ever produced in Japan. By the end of 2006, Lexus' annual sales had reached 475,000 vehicles worldwide. In January 2007, Lexus announced a new F marque performance division, which would produce racing-inspired versions of its performance models. The IS F made its debut at the 2007 North American International Auto Show, accompanied by a concept car, the LF-A.
In October 2007, Lexus entered the Specialty Equipment Market Association show in the U.S. for the first time with the IS F, and announced its F-Sport performance trim level and factory-sanctioned accessory line. Increased emphasis on sporty models was an effort to target rivals from Mercedes-Benz's AMG and BMW's M divisions. While models such as the SC 400 and GS 400 had received favorable reactions from sport luxury buyers, most Lexus models had been characterized as favoring comfort over sporty road feel and handling compared with European rivals. By the end of 2007, Lexus' annual worldwide sales had surpassed 500,000 vehicles, and the marque ranked as the top-selling premium import in China for the first time. The largest sales markets in order of size for 2007 were the U.S., Japan, the UK, China, Canada, and Russia. In 2008, amidst the late-2000s recession and a weakened world car market, global sales fell 16 percent to 435,000, with declines in markets such as the U.S. and Europe where deliveries fell by 21 percent and 27.5 percent, respectively. In 2009, the marque launched the HS 250h, a dedicated hybrid sedan for North America and Japan, and the RX 450h, the second generation hybrid SUV replacing the earlier RX 400h, and later that year debuted the production LFA exotic coupe. In late 2009, citing higher sales of hybrid models over their gasoline counterparts, Lexus announced plans to become a hybrid-only marque in Europe. By the end of the decade, Lexus ranked as the fourth-largest premium car make in the world by volume, and had been the number one selling premium car marque in the U.S. for 10 consecutive years. In 2010, Lexus underwent a gradual sales recovery in North America and Asia as the marque focused on adding hybrids and new model derivatives. Sales in the U.S. held steady despite the 2009–2010 Toyota vehicle recalls, several of which included Lexus models.
The ES 350 and certain IS models were affected by a recall for potentially jamming floor mats, while parent company Toyota bore the brunt of negative publicity amid investigations over its series of product recalls and problem rates per-vehicle. The redesigned GX 460 was also voluntarily recalled in April 2010 for a software update, one week after "Consumer Reports" issued a recommendation not to buy the SUV, citing a possible rollover risk following the slow stability control response to a high-speed emergency turn. Although the publication knew of no reported incidents, the GX 460 received updated stability control software. In late 2010 and early 2011, Lexus began sales of the CT 200h, a compact four-door hybrid hatchback designed for Europe, in multiple markets. Sales of lower-displacement regional models were also expanded, beginning with the ES 240 in China followed by the RX 270; Japan, Russia, and Taiwan were among markets which received model variants intended for reduced emissions or import taxes. In March 2011, the Tōhoku earthquake and tsunami caused severe disruption to Lexus' Japan-based production lines, hindering the marque's near-term sales prospects. Lexus' U.S. executives stated that due to vehicle shortages amidst close competition from BMW, Mercedes-Benz, and Audi, the marque would not remain the country's top-selling premium car brand. Cumulative sales results for 2011 indicated a 14 percent sales drop in the U.S. market, along with sales increases of 40 percent and 27 percent in Europe and Japan respectively, for a global sales total of 410,000 units. Lexus' streak of 11 consecutive years as the best-selling luxury marque in the U.S. ended that year, with the title going to BMW followed by Mercedes-Benz. While 45 percent of Lexus sales in the U.S. in 2011 relied upon the RX luxury crossover SUV, rival Mercedes-Benz's best-selling offering was the E-Class mid-luxury sedan, which commands considerably higher prices. 
Subsequently, Toyota chairman Akio Toyoda vowed to restore passion to the marque and further increase its organizational independence, admitting that "...back then we did not regard Lexus as a brand, but as a distribution channel". As a result of Toyoda's organizational changes, Lexus senior managers report directly to the chairman for the first time in the marque's history. In January 2012, the marque began sales of the fourth generation GS line, including GS 350 and GS 450h variants, as well as a lower-displacement GS 250 model for select markets. In April 2012, the sixth generation ES line, including ES 350 and ES 300h variants, debuted at the New York International Auto Show. In April 2014, Lexus unveiled the five-seater NX crossover, which featured a first for a Lexus vehicle: a turbocharged engine, denoted by the 200t designation. In August 2014, Toyota announced it would be cutting its Lexus spare parts prices in China by up to 35 percent. The company admitted the move was in response to a probe foreshadowed earlier in the month by China's National Development and Reform Commission of Lexus spare parts policies, as part of an industry-wide investigation into what the Chinese regulator considers exorbitantly high prices being charged by automakers for spare parts and after-sales servicing. In March 2016, Lexus announced a new flagship vehicle, the two-door LC 500, to be produced for late 2017 in a V8 version putting out 467 horsepower. The LC 500h, a V6 hybrid variant, was expected to become available in late 2017 or early 2018. In April 2019, Lexus announced that a rebadged limousine version of the third-generation Alphard would be sold as the Lexus LM. In October 2019, Lexus announced that it would launch the brand's first all-battery electric vehicle in 2020. It was also announced in April 2019 that Lexus would enter the Mexican market in 2021 with part of its lineup.
Lexus International, headed by managing officer Tokuo Fukuichi, coordinates the worldwide operations of Toyota's luxury division. Other executives at Lexus' global headquarters, located in Nagoya, Aichi, include Mark Templin, executive vice president of Lexus International, and managers of the marque's Japan Sales and Marketing and global Product and Marketing Planning divisions. While organizationally separate from its parent company, Lexus International reports directly to Toyota chief executive officer Akio Toyoda. In the U.S., Lexus operations are headed by David Christ, group vice president and general manager of the U.S. Lexus division, located in Plano, Texas. In Europe, Lexus operations are headed by Alain Uyttenhoven, vice president of Lexus Europe, located in Brussels. Companion design facilities are located in Southern California and central Japan, with the head design studio devoted entirely to Lexus models in Toyota City, Aichi. Lexus sales operations vary in structure by region. In many markets, such as the U.S., the dealership network is a distinct organization from corporate headquarters, with separately owned and operated Lexus showrooms. By contrast, in Japan all 143 dealerships in the country are owned and operated by Lexus. Several markets have a designated, third party regional distributor; for example, in the United Arab Emirates, sales operations are managed by Al-Futtaim Motors LLC, and in Costa Rica, Lexus vehicles are sold via regional distributor Purdy Motors S.A. Other officially sanctioned regional distributors have sold Lexus models prior to the launch of, or in absence of, a dedicated dealership network. The Lexus brand launched in the Indian market in 2017 with the RX 450h, LX 450d, LX 570, ES 300h, NX and LS models. Dealerships in Mumbai, Delhi, Gurgaon and Bangalore became operational in March 2017, when the brand began sales in India.
A second set of dealerships, in Chandigarh, Cochin and Chennai, was planned to become operational toward the end of 2017. This made Lexus the fifth luxury brand to be launched in India, after Mercedes-Benz, Porsche, BMW and Audi. Global sales of Lexus vehicles reached an all-time high in 2007, with a total of 518,000. Sales decreased in subsequent years due to the effects of the 2008 recession and the Japanese tsunami of 2011. Following this, sales recovered and reached a new high of 523,000 in 2013. In 2014, the Lexus brand set a new global sales record after selling 582,000 vehicles. This made Lexus the fourth best-selling luxury brand in the world, trailing BMW, Audi and Mercedes-Benz. Global sales of Lexus vehicles increased by 12 percent in 2015 to reach another annual sales record with 652,000 units sold worldwide. Global cumulative sales of Lexus brand hybrid electric cars reached the 500,000 mark in November 2012. The 1 million sales milestone was achieved in March 2016. The Lexus RX 400h/RX 450h ranks as the top-selling Lexus hybrid with 335,000 units delivered worldwide, followed by the Lexus CT 200h with 267,000 units. Lexus has not sold as well in Europe, where it suffers from lower brand recognition, a weaker image, and a less-developed dealership network. In European markets, the Lexus LS has ranked behind Jaguar, Mercedes-Benz, Audi, and BMW in flagship luxury car sales. Automotive analysts have suggested a possible rationale for the sales disparity, in that European buyers place less emphasis on vehicle reliability and have more brand loyalty to established domestic marques. In contrast, the Lexus LS has ranked second in sales to the Mercedes-Benz S-Class (and ahead of rivals from BMW, Audi, and Jaguar) in markets outside Europe, such as South Africa.
Currently, all of Lexus' models for the U.S. market are imported from Japan, with the exception of the RX, which is also produced in Cambridge, Ontario for North America, and the ES, which is also produced in Georgetown, Kentucky. The RX mid-size crossover is Lexus' best-selling model in the United States, while the ES mid-size car is the most popular sedan in the line-up. Financial data of Lexus operations are not disclosed publicly. However, automotive analysts estimate that the Lexus division contributes a disproportionate share of Toyota's profits, relative to its limited production and sales volume. Interviews with retired division officials indicate that depending on sales volume, vehicle product development cycles, and exchange rates, Lexus sales have accounted for as much as half of Toyota's annual U.S. profit in certain years. Division executives have employed pricing strategies aimed at sustaining profit margins rather than sales volume, with historically fewer price incentives than rival brands. In 2006, Lexus entered Interbrand's list of the Top 100 Global Brands for the first time, with an estimated brand value of approximately $3 billion. In 2009, Interbrand ranked Lexus as Japan's seventh largest brand, between Panasonic and Nissan, based on revenue, earnings, and market value. The global Lexus lineup features sedans of different size classes, including the compact IS model, mid-size ES and GS models, and the full-size LS. The 2-door coupe range consists of the RC and the LC. Former convertibles include the SC and IS C models. Sport-utility vehicles range in size from the subcompact UX, compact NX and mid-size RX crossovers, to the full-size GX and LX. Hybrid models include the CT hatchback, and variants of the IS, ES, GS, LS, RC, LC, UX, NX, RX and LM. The F marque line formerly produced a variant of the IS and the LFA and currently produces a variant of the GS sedan and the RC coupe.
Lexus produces its highest-performance models under its F marque division. The name refers to Flagship and Fuji Speedway in Japan, whose first corner, 27R, inspired the shape of the "F" emblem. F marque models are developed by the Lexus Vehicle Performance Development Division. The first F marque model, the IS F, went on sale in 2007, followed by the LFA in 2009. A related F-Sport performance trim level and factory-sanctioned accessory line is available for standard Lexus models such as the IS 250 and IS 350. F-Sport succeeded an earlier in-house tuning effort, the TRD-based L-Tuned, which had offered performance packages on the IS and GS sedans in the early 2000s. The latest additions to the F-Sport line include the Lexus RC F Sport, GS F Sport and LS F Sport. Lexus production models are named alphanumerically using two-letter designations followed by three digits. The first letter indicates relative status in the Lexus model range (ranking), and the second letter refers to car body style or type (e.g. LS for 'luxury sedan'). The three digits indicate engine displacement in liters multiplied by a factor of one hundred (e.g. 350 for a 3.5 L engine). A space is used between the letters and numbers. The same letter may be used differently depending on the model; 'S' can refer to 'sedan' or 'sport' (e.g. in LS and SC), while 'X' refers to 'luxury utility vehicle' or SUV. On hybrids, the three digits refer to the combined gasoline-electric output. For certain models, a lower case letter placed after the alphanumeric designation indicates powerplant type ('h' for hybrid, 'd' for diesel, 't' for turbocharged), while capital letters placed at the end indicate a class subtype (e.g. 'L' for long-wheelbase, 'C' for coupe, 'AWD' for all-wheel drive). On F marque models, the two-letter designation and the letter 'F' are used with no numbers or hyphens (e.g. IS F).
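The naming scheme described above is regular enough to be decoded mechanically. The following Python sketch is illustrative only: the function name, the regular expression, and the simplified rules (which ignore less common designations such as 'IS C' or 'AWD' suffixes) are assumptions of this example, not an official Toyota or Lexus specification.

```python
import re

def decode_lexus_name(name: str):
    """Decode a Lexus designation per the conventions described above.

    Handles the common 'XX NNN[h|d|t][ L]' pattern plus F marque names
    like 'IS F'. A simplification for illustration, not an official API.
    """
    # F marque models: two letters, a space, and 'F' with no digits.
    m = re.fullmatch(r"([A-Z]{2}) F", name)
    if m:
        return {"series": m.group(1), "f_marque": True}
    # Standard models: two letters, three digits, optional powertrain
    # letter ('h', 'd', 't') and optional ' L' long-wheelbase suffix.
    m = re.fullmatch(r"([A-Z]{2}) (\d{3})([hdt])?( L)?", name)
    if not m:
        raise ValueError(f"unrecognized designation: {name}")
    series, digits, power, lwb = m.groups()
    return {
        "series": series,
        # Three digits = displacement in liters x 100 (note: on hybrids
        # the digits instead indicate combined gasoline-electric output).
        "displacement_l": int(digits) / 100,
        "powertrain": {"h": "hybrid", "d": "diesel",
                       "t": "turbocharged"}.get(power, "gasoline"),
        "long_wheelbase": lwb is not None,
        "f_marque": False,
    }
```

For example, `decode_lexus_name("LS 600h L")` reports the LS series, a hybrid powertrain, and a long-wheelbase body, while `decode_lexus_name("IS F")` is recognized as an F marque model.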
Lexus design has traditionally placed an emphasis on targeting specific vehicle development standards. Since the marque's inception, design targets have ranged from aerodynamics and ride quality to interior ergonomics. The backronym "IDEAL" ("Impressive, Dynamic, Elegant, Advanced, and Lasting") is used in the development process. Each vehicle is designed according to approximately 500 specific product standards, known as "Lexus Musts," on criteria such as leather seat stitching. Design elements from the marque's concept vehicle line, the LF series (including the 2003 LF-S and 2004 LF-C), have been incorporated in production models. Vehicle cabins have incorporated electroluminescent Optitron gauges, SmartAccess, a smart key entry and startup system, and multimedia features. Beginning with the 2010 RX and HS models, the Remote Touch system, featuring a computer mouse-like controller with haptic feedback, was introduced; other models have featured touchscreen controls (through the 2009 model year) as a navigation screen interface. In 2014, the next iteration of Lexus' remote-touch controls, the Remote Touch Interface Touchpad, was introduced in the new RC coupe. In 1989, Lexus was among the first premium car marques to equip models with premium audio systems, in partnership with stereo firm Nakamichi. Since 2001, optional surround sound systems have been offered via high-end audio purveyor Mark Levinson. For reduced cabin noise, the first LS 400 introduced sandwich steel plating, and later models added acoustic glass. In 2006, the LS 460 debuted the first ceiling air diffusers and infrared body temperature sensors in a car. Telematics services include G-Book with G-Link in Asia and Lexus Enform in North America. In 2006, Lexus incorporated the first production eight-speed automatic transmission in an automobile with the LS 460, and the gearbox was later adapted for the GS 460 and IS F models.
Continuously variable transmissions, regenerative brakes, and electric motors have been used on all Lexus hybrid models. In 2007, Lexus executives signaled intentions to equip further models with hybrid powertrains, catering to demands for a decrease in both carbon pollution and oil reliance. Hybrid models have been differentiated by separate badging and lighting technology; in 2008, the LS 600h L became the first production vehicle to use LED headlamps. Safety features on Lexus models range from stability and handling programs (Vehicle Stability Control and Vehicle Dynamics Integrated Management) to backup cameras, swivel headlights, and sonar warning systems. The Lexus Pre-Collision System (PCS) integrates multiple safety systems. In 2007, Lexus introduced the first car safety systems with infrared and pedestrian detection capabilities, lane keep assist, a Driver Monitoring System with facial recognition monitoring of driver attentiveness, and rear pre-collision whiplash protection, as part of the LS 460 PCS. As a safety precaution, Lexus GPS navigation systems in many regions feature a motion lockout when the vehicle reaches a set speed; to prevent distraction, navigation inputs are limited, while voice input and certain buttons are still accessible. This safety feature has attracted criticism because passengers cannot use certain functions when the vehicle is in motion. Pre-2007 models came with a hidden manufacturer override option, and updated European models allow operation in motion. Production models in development have included convertibles, crossovers, and dedicated hybrids. Under the F marque, Lexus plans to produce high-performance vehicles with its first expressions being the IS F and the LFA. Lexus officials have also discussed standard production model usage of varying platforms. 
The LS uses a dedicated platform, while the entry-level Lexus ES had been criticized for being too similar to the Toyota Camry, with which it shared platforms until its sixth generation, in both styling and powertrain design. The Nürburgring test track in Germany has also seen Lexus prototype testing. Lexus introduced a new design language known as "L-finesse" in the mid-2000s with its LF series concepts and the 2006 Lexus GS. L-finesse is represented by three Japanese kanji characters which translate as "Intriguing Elegance, Incisive Simplicity, and Seamless Anticipation". Design characteristics, including a fastback profile, lower-set grille, and the use of both convex and concave surfaces, are derived from Japanese cultural motifs (e.g. the phrase "kirikaeshi" in arrowhead shapes). While earlier Lexus models were criticized for reserved and derivative styling, and often mistaken for understated domestic market cars, automotive design analyses described L-finesse as adding a distinctive nature and embrace of Japanese design identity. Opinions varied for L-finesse's debut on the GS; "Sports Car International" analysis praised the vehicle's in-person appearance; "Automobile Magazine" criticized the daring of its forward styling, and compared subsequent rival models for design similarities. In 2012, the arrival of the redesigned fourth generation Lexus GS featured the introduction of a spindle-shaped grille design, intended to be used on all forthcoming Lexus models. L-finesse exhibitions were presented at Milan's Salone del Mobile from 2005 through 2009. The first Lexus vehicles were manufactured in Toyota's Tahara plant, a highly sophisticated, computerized manufacturing plant in Japan. Lexus production techniques include methods and standards of quality control that differ from Toyota models. At the Tahara plant, separate assembly lines were developed for Lexus vehicles, along with new molds and specialized manufacturing equipment. 
Welding processes, body panel fit tolerances, and paint quality requirements are more stringent. Lexus plant workers, typically veteran technicians, are identified via repeated performance evaluations and ranked according to skill grade, with limited applicants accepted. The highest-level "takumi" (Japanese for "artisan") engineers are responsible for maintaining production standards at key points in the assembly process, such as testing engine performance. Production vehicles are given visual inspections for flaws, individually test-driven at high speeds, and subjected to vibration tests. Through the 2000s, most Lexus sedan and SUV production occurred in Japan at the Tahara plant in Aichi and the Miyata plant in Fukuoka. In addition to the Tahara factory, Lexus vehicles have been produced at the Miyata plant (Toyota Motor Kyushu, Inc.) in Miyawaka, Fukuoka, the Higashi Fuji plant (Kanto Auto Works, Ltd.) in Susono, Shizuoka, and the Sanage plant (Toyota Boshoku Corp.; Araco) in Toyota City, Aichi. Front-wheel drive cars, such as the ES and HS, are produced in Fukuoka Prefecture. The Kokura plant in Kitakyushu, Fukuoka, which opened in 2008, is a dedicated hybrid production site for Lexus models such as the gasoline-electric RX. The North American–market RX 350 (since the 2004 model year) is produced at the Cambridge plant (Toyota Canada, Inc.) in the city of Cambridge, in Ontario, Canada, which is the first Lexus production site located outside Japan. In late 2015, Lexus started to assemble North American-spec ES 350 sedans at the Georgetown plant (TMMK, Inc.). In January 2020, Toyota Kirloskar Motor of India started assembling the ES sedan at its Bidadi plant. In the 2000s, "Consumer Reports" named Lexus among the top five most reliable brands in its Annual Car Reliability Surveys of over one million vehicles across the U.S. Lexus has become known for efforts to provide an upscale image, particularly with service provided after the sale. 
The waiting areas in service departments are replete with amenities, ranging from refreshment bars to indoor putting greens. Dealerships typically offer complimentary loaner cars or "courtesy cars" and free car washes, and some have added on-site cafes and designer boutiques. Service bays are lined with large picture windows for owners to watch the servicing of their vehicle. In 2005, Lexus also began reserving parking lots at major sporting arenas, entertainment events, and shopping malls, with the only requirement for free entry being the ownership of a Lexus vehicle. An online owner publication, "Lexus Magazine", features automotive and lifestyle articles and is published online monthly and on a mobile site. Since 2002, Lexus has scored consecutive top ratings in the "Auto Express" and 76,000-respondent "Top Gear" customer satisfaction surveys in the UK. Lexus has also repeatedly topped the 79,000-respondent J.D. Power Customer Service Index and Luxury Institute, New York surveys in the U.S. As a result of service satisfaction levels, the marque has one of the highest customer loyalty rates in the industry. To improve customer service, employees are instructed to follow the "Lexus Covenant," the marque's founding promise (which states that "Lexus will treat each customer as we would a guest in our home"), and some dealerships have incorporated training at upscale establishments such as Nordstrom department stores and Ritz-Carlton hotels. Lexus first entered the motorsport arena in 1999 when its racing unit, Team Lexus, fielded two GS 400 race vehicles in the Motorola Cup North American Street Stock Championship touring car series. In its 1999 inaugural season, Team Lexus achieved its first victory in its sixth race, at Road Atlanta. 
Led by Sports Car Club of America and International Motor Sports Association driver Chuck Goldsborough, based in Baltimore, Maryland, Team Lexus capitalized on the debut of the first generation Lexus IS by entering three IS 300s in the third race of the 2001 Grand-Am Cup season at Phoenix, Arizona. Team Lexus scored its first IS 300 victory that year at the Virginia International Raceway. In 2002, Team Lexus' competitive efforts in the Grand-Am Cup ST1 (Street Tuner) class achieved victories in the Drivers' and Team Championships, as well as a sweep of the top three finishes at Circuit Mont-Tremblant in Quebec, Canada. After the launch of the Lexus brand in the Japanese domestic market in 2005, Lexus sanctioned the entry of four SC 430 coupes in the Super GT series of the All Japan Grand Touring Car Championship in the GT500 class. In the first race of the 2006 series, an SC 430 took the chequered flag, and drivers André Lotterer and Juichi Wakisaka raced the SC 430 to capture the GT500 championship for that year. In 2007, another SC 430 won the GT500 opening round race. In 2006, Lexus raced a hybrid vehicle for the first time, entering a GS 450h performance hybrid sedan in partnership with Sigma Advanced Racing Development at the 24 Hours of Tokachi race in Hokkaido, Japan. Lexus Canada also entered the GS 450h in 2007's Targa Newfoundland event. In 2009, Lexus Super GT Team SC 430 and IS 350 racers won the GT500 and GT300 championships, respectively. Lexus' participation in endurance racing further includes the Rolex 24 Hours of Daytona, sanctioned by the Grand American Road Racing Association. After entering the Rolex Sports Car Series in 2004, Lexus has won over 15 Rolex Series event races. In 2005, Lexus was runner-up, and in 2006, it won the championship. Although Toyota had won this race in the past, it was the first time that its luxury arm emerged as the winner. 
In 2007, six Lexus-powered Daytona prototypes were entered in the Rolex 24 Hours of Daytona event at the Daytona International Speedway. Lexus was a repeat winner of the event, with a Lexus-Riley prototype driven by Scott Pruett, Juan Pablo Montoya, and Salvador Durán of Chip Ganassi Racing finishing first; Lexus-Riley prototypes also took three of the top ten spots. In 2008, Lexus took its third consecutive win at Daytona. For the 2010 season, Lexus departed from the Rolex Sports Car Series, and Ganassi Racing switched to BMW/Dinan engines. The LF-A prototype also competed on the Nürburgring from 2008 to 2011 in VLN endurance races and in the 24 Hours Nürburgring, alongside the IS F. On 14 May 2011, a CT 200h tuned by Gazoo Racing competed in the Adenauer ADAC Rundstrecken-Trophy, a six-hour endurance race. 3GT Racing, a partnership of Lexus and Paul Gentilozzi, entered two Lexus RC F GT3 cars in the GT Daytona class of the North American 2017 WeatherTech SportsCar Championship. Their first win came in the 2018 WeatherTech SportsCar Championship with Dominik Baumann and Kyle Marcelli at Mid-Ohio, followed by another victory with the same pairing at Virginia International Raceway. Lexus finished eighth in the GT Daytona Manufacturers' Championship in its first season in 2017, then improved to fifth in 2018, with the #14 car finishing fifth and the #15 tenth in the Teams' Championship. For the 2019 WeatherTech SportsCar Championship, the running of the Lexus GT3 cars was transferred to AIM Vasser Sullivan, a debuting partnership; the driver pairing of Jack Hawksworth and Richard Heistand won at Mid-Ohio and again at the following race at Belle Isle. As of the Lime Rock Park round, Lexus stood second in the 2019 Manufacturers' Championship, with the #12 car third and the #14 fifth, despite the #14 being the race-winning car. 
From its inception, Lexus has been advertised to luxury consumers using specific marketing strategies, with a consistent motif used for the marque's advertisements. Beginning in 1989, television ads were narrated by actor James Sloyan (the voice of "Mr. Lexus" until 2009), and accompanied by vehicles that performed unusual stunts onscreen. The first decade of Lexus commercials (1989–99) paired terse verbal descriptions, such as "relentless," "pursuit," and "perfection," with on-camera demonstrations of superiority in precision, smooth idling, and interior quiet and comfort. Examples included the champagne glass "Balance" (1989) and rolling "Ball Bearing" (1992). In the 2000s, commercials included descriptions of features, or a narration of the events onscreen, and were often targeted at the marque's German competitors. An annual "December to Remember" campaign featured scenes of family members surprising loved ones with the gift of a new Lexus. The marque returned to the champagne glass theme in a 2006 LS 460 spot showing the sedan maneuvering between two stacks of glasses using its self-parking system, and in a 2010 LFA spot showing its engine sound shattering a glass at its resonant frequency. Industry observers have attributed Lexus' early marketing successes to higher levels of perceived quality and lower prices than competitors, which have enabled the marque to attract customers upgrading from mass-market cars. A reputation for dependability, bolstered by reliability surveys, also became a primary factor in attracting new customers from rival premium makes. Lexus has since grown to command higher price premiums than rival Japanese makes, with new models further increasing in price and reaching the more than $100,000 ultra-luxury category long dominated by rival European marques. 
Automotive analysts have also noted Lexus' relative newcomer status as a marketing challenge for the brand, although some have questioned whether a long history is required. European rivals have marketed their decades of heritage and pedigree, whereas Lexus' reputation rests primarily upon its perceived quality and shared history with parent company Toyota. Several analysts have stated that Lexus will have to develop its own heritage over time by highlighting technological innovations and producing substantial products. Lexus' marketing efforts have extended to sporting and charity event sponsorships, including the U.S. Open tennis Grand Slam event from 2005 to 2009, and the United States Golf Association's U.S. Open, U.S. Women's Open, U.S. Senior Open, and U.S. Amateur tournaments since 2007. Lexus has organized an annual Champions for Charity golf series in the U.S. since 1989. Endorsement contracts have also been signed with professional athletes Hideki Matsuyama, Andy Roddick, Annika Sörenstam, and Peter Jacobsen. Since 2008, Lexus has run the video website L Studio. Shows on L Studio include "Web Therapy". Lexus unveiled its new "Experience Amazing" tagline in the U.S. in a 60-second advertisement at the February 2017 Super Bowl LI. The new tagline replaced Lexus' previous slogan, "The Pursuit of Perfection". On 30 March 2018, as an April Fool's Day joke, Lexus premiered a fake partnership with 23andMe during a spot on Saturday Night Live, for a pretend program that allows buyers to customize vehicles based on their DNA.