Camelopardalis Camelopardalis is a large but faint constellation of the northern sky representing a giraffe. The constellation was introduced in 1612 or 1613 by Petrus Plancius. Some older astronomy books give Camelopardalus or Camelopardus as alternative forms of the name, but the version recognized by the International Astronomical Union matches the genitive form, seen suffixed to most of its key stars. First attested in English in 1785, the word "camelopardalis" comes from Latin, and it is the romanization of the Greek "καμηλοπάρδαλις" meaning "giraffe", from "κάμηλος" ("kamēlos"), "camel", + "πάρδαλις" ("pardalis"), "leopard", because the giraffe has a long neck like a camel and spots like a leopard. Although Camelopardalis is the 18th-largest constellation, it is not particularly bright, as its brightest stars are only of fourth magnitude; it contains only four stars brighter than magnitude 5.0. Its variable stars include U Camelopardalis, VZ Camelopardalis, and the Mira variables T Camelopardalis, X Camelopardalis, and R Camelopardalis. RU Camelopardalis is one of the brighter Type II Cepheids visible in the night sky. In 2011 a supernova was discovered in the constellation. Camelopardalis lies in the part of the celestial sphere facing away from the galactic plane; accordingly, many distant galaxies are visible within its borders. The annual May meteor shower, the Camelopardalids, produced by comet 209P/LINEAR, has its radiant in Camelopardalis. The space probe "Voyager 1" is moving in the direction of this constellation, though it will not come near any of its stars for many thousands of years, by which time its power source will be long dead. Camelopardalis is not one of Ptolemy's 48 constellations in the "Almagest". It was created by Petrus Plancius and first appeared in a 1613 globe designed by him and produced by Pieter van den Keere. One year later, Jakob Bartsch featured it in his atlas. 
Johannes Hevelius depicted this constellation in his works, which were so influential that it was referred to as Camelopardali Hevelii, abbreviated Camelopard. Hevel. Part of the constellation was hived off to form the constellation Sciurus Volans, the Flying Squirrel, by William Croswell in 1810; however, this was not taken up by later cartographers. In Chinese astronomy, the stars of Camelopardalis are located within a group of circumpolar stars called the Purple Forbidden Enclosure (紫微垣 "Zǐ Wēi Yuán").
https://en.wikipedia.org/wiki?curid=6364
Convention of Kanagawa The Convention of Kanagawa, also known as the Japan–US Treaty of Peace and Amity, was signed on March 31, 1854, becoming the first treaty between the United States and the Tokugawa shogunate. Concluded under threat of force, it effectively meant the end of Japan's 220-year-old policy of national seclusion ("sakoku") by opening the ports of Shimoda and Hakodate to American vessels. It also ensured the safety of American castaways and established the position of an American consul in Japan. The treaty also precipitated the signing of similar treaties establishing diplomatic relations with other Western powers. Since the beginning of the seventeenth century, the Tokugawa shogunate had pursued a policy of isolating the country from outside influences. Foreign trade was maintained only with the Dutch and the Chinese and was conducted exclusively at Nagasaki under a strict government monopoly. This policy had two main objectives. The first was the fear that trade with western powers and the spread of Christianity would serve as a pretext for an invasion of Japan by imperialist forces, as had been the case with most of the nations of Asia. The second was the fear that foreign trade and the wealth it developed would lead to the rise of a "daimyō" powerful enough to overthrow the ruling Tokugawa clan. By the early nineteenth century, this policy of isolation was increasingly under challenge. In 1844, King William II of the Netherlands sent a letter urging Japan to end the isolation policy on its own before change would be forced from the outside. In 1846, an official American expedition led by Commodore James Biddle arrived in Japan asking for ports to be opened for trade, but was sent away. In 1853, United States Navy Commodore Matthew C. Perry was sent with a fleet of warships by US President Millard Fillmore to force the opening of Japanese ports to American trade, through the use of gunboat diplomacy if necessary. 
The growing commerce between America and China, the presence of American whalers in waters off Japan, and the increasing monopolization of potential coaling stations by the British and French in Asia were all contributing factors. The Americans were also driven by concepts of Manifest Destiny and the desire to impose the benefits of western civilization on what they perceived as backward Asian nations. From the Japanese standpoint, increasing contacts with foreign warships and the increasing disparity between western military technology and the Japanese feudal armies created growing concern. The Japanese had been keeping abreast of world events via information gathered from Dutch traders in Dejima and had been forewarned by the Dutch of Perry's voyage. There was considerable internal debate in Japan on how best to meet this potential threat to Japan's economic and political sovereignty in light of events occurring in China with the Opium Wars. Perry arrived with four warships at Uraga, at the mouth of Edo Bay, on July 8, 1853. After refusing Japanese demands that he proceed to Nagasaki, which was the designated port for foreign contact, and after threatening to continue directly on to Edo, the nation's capital, and to burn it to the ground if necessary, he was allowed to land at nearby Kurihama on July 14 and to deliver his letter. Despite years of debate on the isolation policy, Perry's letter created great controversy within the highest levels of the Tokugawa shogunate. The "shōgun" himself, Tokugawa Ieyoshi, died days after Perry's departure and was succeeded by his sickly young son, Tokugawa Iesada, leaving effective administration in the hands of the Council of Elders ("rōjū") led by Abe Masahiro. Abe felt that it was currently impossible for Japan to resist the American demands by military force, yet he was reluctant to take any action on his own authority in such an unprecedented situation. 
Attempting to legitimize any decision taken, Abe polled all of the "daimyō" for their opinions. This was the first time that the Tokugawa shogunate had allowed its decision-making to be a matter of public debate, and it had the unforeseen consequence of portraying the shogunate as weak and indecisive. The results of the poll also failed to provide Abe with an answer: of the 61 known responses, 19 were in favor of accepting the American demands and 19 were opposed. Of the remainder, 14 gave vague responses expressing concern about possible war, seven suggested making temporary concessions, and two advised that they would simply go along with whatever was decided. Perry returned on February 13, 1854, with an even larger force of eight warships and made it clear that he would not be leaving until a treaty was signed. Negotiations began on March 8 and proceeded for around one month. The Japanese side gave in to almost all of Perry's demands, with the exception of a commercial agreement modeled after previous American treaties with China, which Perry agreed to defer to a later time. The main controversy centered on the selection of the ports to open, with Perry adamantly rejecting Nagasaki. The treaty, written in English, Dutch, Chinese and Japanese, was signed on March 31, 1854, at what is now known as Kaikō Hiroba (Port Opening Square) in Yokohama, a site adjacent to the current Yokohama Archives of History. The "Japan–US Treaty of Peace and Amity" has twelve articles. The final article, Article Twelve, stipulated that the terms of the treaty were to be ratified by the President of the United States and the "August Sovereign of Japan" within 18 months. At the time, "shōgun" Tokugawa Iesada was the de facto ruler of Japan; for the Emperor to interact in any way with foreigners was out of the question. Perry concluded the treaty with representatives of the shogun, led by a plenipotentiary, and the text was endorsed subsequently, albeit reluctantly, by Emperor Kōmei. 
The treaty was ratified on February 21, 1855. In the short term, the US was content with the agreement, since Perry had achieved his primary objective of breaking Japan's "sakoku" policy and setting the grounds for protection of American citizens and an eventual commercial agreement. The Japanese, on the other hand, were forced into this trade, and many saw it as a sign of weakness. The Tokugawa shogunate could point out that the treaty was not actually signed by the Shogun, or indeed any of his "rōjū", and that it had at least temporarily averted the possibility of immediate military confrontation. Externally, the treaty led to the United States–Japan Treaty of Amity and Commerce, the "Harris Treaty" of 1858, which allowed the establishment of foreign concessions, extraterritoriality for foreigners, and minimal import taxes for foreign goods. The Japanese chafed under the "unequal treaty system" which characterized Asian and western relations during this period. The Kanagawa treaty was also followed by similar agreements with the United Kingdom (Anglo-Japanese Friendship Treaty, October 1854), Russia (Treaty of Shimoda, February 7, 1855), and France (Treaty of Amity and Commerce between France and Japan, October 9, 1858). Internally, the treaty had far-reaching consequences. Decisions to suspend previous restrictions on military activities led to re-armament by many domains and further weakened the position of the Shogun. Debate over foreign policy and popular outrage over perceived appeasement of the foreign powers were catalysts for the "sonnō jōi" movement and a shift in political power from Edo back to the Imperial Court in Kyoto. The opposition of Emperor Kōmei to the treaties further lent support to the "tōbaku" (overthrow the Shogunate) movement, and eventually to the Meiji Restoration. The Convention was negotiated and then signed in a purpose-built house in Yokohama, Japan, the site of which is now the Yokohama Archives of History.
https://en.wikipedia.org/wiki?curid=6365
Canis Major Canis Major is a constellation in the southern celestial hemisphere. In the second century, it was included in Ptolemy's 48 constellations, and is counted among the 88 modern constellations. Its name is Latin for "greater dog" in contrast to Canis Minor, the "lesser dog"; both figures are commonly represented as following the constellation of Orion the hunter through the sky. The Milky Way passes through Canis Major and several open clusters lie within its borders, most notably M41. Canis Major contains Sirius, the brightest star in the night sky, known as the "dog star". It is bright because of its proximity to the Solar System. In contrast, the other bright stars of the constellation are stars of great distance and high luminosity. At magnitude 1.5, Epsilon Canis Majoris (Adhara) is the second-brightest star of the constellation and the brightest source of extreme ultraviolet radiation in the night sky. Next in brightness are the yellow-white supergiant Delta (Wezen) at 1.8, the blue-white giant Beta (Mirzam) at 2.0, blue-white supergiants Eta (Aludra) at 2.4 and Omicron2 at 3.0, and white spectroscopic binary Zeta (Furud), also at 3.0. The red hypergiant VY Canis Majoris is one of the largest stars known, while the neutron star RX J0720.4-3125 has a radius of a mere 5 km. In ancient Mesopotamia, Sirius, named KAK.SI.DI by the Babylonians, was seen as an arrow aiming towards Orion, while the southern stars of Canis Major and a part of Puppis were viewed as a bow, named BAN in the "Three Stars Each" tablets, dating to around 1100 BC. In the later compendium of Babylonian astronomy and astrology titled "MUL.APIN", the arrow, Sirius, was also linked with the warrior Ninurta, and the bow with Ishtar, daughter of Enlil. Ninurta was linked to the later deity Marduk, who was said to have slain the ocean goddess Tiamat with a great bow, and worshipped as the principal deity in Babylon. 
The Ancient Greeks replaced the bow-and-arrow depiction with that of a dog. In Greek mythology, Canis Major represented the dog Laelaps, a gift from Zeus to Europa; or sometimes the hound of Procris, Diana's nymph; or the one given by Aurora to Cephalus, so famed for its speed that Zeus elevated it to the sky. It was also considered to represent one of Orion's hunting dogs, pursuing Lepus the Hare or helping Orion fight Taurus the Bull, and is referred to in this way by Aratos, Homer and Hesiod. The ancient Greeks referred to only one dog, but by Roman times Canis Minor appears as Orion's second dog. Alternative names include Canis Sequens and Canis Alter. Canis Syrius was the name used in the 1521 "Alfonsine tables". Roman myth refers to Canis Major as "Custos Europae", the dog guarding Europa but failing to prevent her abduction by Jupiter in the form of a bull, and as "Janitor Lethaeus", "the watchdog". In medieval Arab astronomy, the constellation became "al-Kalb al-Akbar", "the Greater Dog", transcribed as "Alcheleb Alachbar" by the 17th-century writer Edmund Chilmead. The Islamic scholar Abū Rayḥān al-Bīrūnī referred to Orion as "Kalb al-Jabbār", "the Dog of the Giant". Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called "Merzem", includes the stars of Canis Major and Canis Minor and is the herald of two weeks of hot weather. In Chinese astronomy, the modern constellation of Canis Major lies in the Vermilion Bird (), where its stars were grouped into several separate asterisms. The Military Market () was a circular pattern of stars containing Nu3, Beta, Xi1 and Xi2, and some stars from Lepus. The Wild Cockerel () was at the centre of the Military Market, although it is uncertain which stars depicted what. Schlegel reported that the stars Omicron and Pi Canis Majoris might have been them, while Beta or Nu2 have also been proposed. 
Sirius was ' (), the Celestial Wolf, denoting invasion and plunder. Southeast of the Wolf was the asterism ' (), the celestial Bow and Arrow, which was interpreted as containing Delta, Epsilon, Eta and Kappa Canis Majoris and Delta Velorum. Alternatively, the arrow was depicted by Omicron2 and Eta and aimed at Sirius (the Wolf), while the bow comprised Kappa, Epsilon, Sigma, Delta and 164 Canis Majoris, and Pi and Omicron Puppis. Both the Māori people and the people of the Tuamotus recognized the figure of Canis Major as a distinct entity, though it was sometimes absorbed into other constellations. ', also called ' and ', ("The Assembly of " or "The Assembly of Sirius") was a Māori constellation that included both Canis Minor and Canis Major, along with some surrounding stars. Related was ', also called ', the Mirror of , formed from an undefined group of stars in Canis Major. They called Sirius ' and ', corresponding to two of the names for the constellation, though ' was a name applied to other stars in various Māori groups and other Polynesian cosmologies. The Tuamotu people called Canis Major "", "the abiding assemblage of ". The Tharumba people of the Shoalhaven River saw three stars of Canis Major as ' (Bat) and his two wives ' (Mrs Brown Snake) and ' (Mrs Black Snake); bored of following their husband around, the women try to bury him while he is hunting a wombat down its hole. He spears them, and all three are placed in the sky as the constellation '. To the Boorong people of Victoria, Sigma Canis Majoris was ' (which has become the official name of this star), and its flanking stars Delta and Epsilon were his two wives. The moon (', "native cat") sought to lure the further wife (Epsilon) away, but her husband assaulted him, and he has been wandering the sky ever since. 
Canis Major is a constellation in the Southern Hemisphere's summer (or northern hemisphere's winter) sky, bordered by Monoceros (which lies between it and Canis Minor) to the north, Puppis to the east and southeast, Columba to the southwest, and Lepus to the west. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CMa". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a quadrilateral; in the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −11.03° and −33.25°. Covering 380 square degrees or 0.921% of the sky, it ranks 43rd of the 88 currently-recognized constellations in size. Canis Major is a prominent constellation because of its many bright stars. These include Sirius (Alpha Canis Majoris), the brightest star in the night sky, as well as three other stars above magnitude 2.0. Furthermore, two other stars are thought to have previously outshone all others in the night sky—Adhara (Epsilon Canis Majoris) shone at −3.99 around 4.7 million years ago, and Mirzam (Beta Canis Majoris) peaked at −3.65 around 4.42 million years ago. Another, NR Canis Majoris, will be brightest at magnitude −0.88 in about 2.87 million years' time. The German cartographer Johann Bayer used the Greek letters Alpha through Omicron to label the most prominent stars in the constellation, including three adjacent stars as Nu and two further pairs as Xi and Omicron, while subsequent observers designated further stars in the southern parts of the constellation that were hard to discern from Central Europe. Bayer's countryman Johann Elert Bode later added Sigma, Tau and Omega; the French astronomer Nicolas Louis de Lacaille added lettered stars a to k (though none are in use today). 
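The 0.921% sky fraction quoted above follows directly from dividing the constellation's 380 square degrees by the area of the whole celestial sphere (4π steradians, or about 41,253 square degrees). A minimal sketch of that arithmetic, not from the source:

```python
# Sanity-check the quoted sky fraction for Canis Major.
# The celestial sphere covers 4*pi steradians; one steradian is
# (180/pi)**2 square degrees, giving ~41,253 square degrees in total.
import math

total_sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2  # ~41,252.96
canis_major_area = 380.0                               # square degrees, as quoted

fraction = canis_major_area / total_sky_sq_deg
print(f"{fraction:.3%}")  # prints "0.921%", matching the text
```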
John Flamsteed numbered 31 stars, with 3 Canis Majoris being placed by Lacaille into Columba as Delta Columbae (Flamsteed had not recognised Columba as a distinct constellation). He also labelled two stars—his 10 and 13 Canis Majoris—as Kappa1 and Kappa2 respectively, but subsequent cartographers such as Francis Baily and John Bevis dropped the fainter former star, leaving Kappa2 as the sole Kappa. Flamsteed's designations Nu1, Nu2, Nu3, Xi1, Xi2, Omicron1 and Omicron2 have all remained in use. Sirius is the brightest star in the night sky at apparent magnitude −1.46 and one of the closest stars to Earth at a distance of 8.6 light-years. Its name comes from the Greek word for "scorching" or "searing". Sirius is also a binary star; its companion Sirius B is a white dwarf with a magnitude of 8.4, roughly 10,000 times fainter than Sirius A to observers on Earth. The two orbit each other every 50 years. Their closest approach last occurred in 1993 and they will be at their greatest separation between 2020 and 2025. Sirius was the basis for the ancient Egyptian calendar. The star marked the Great Dog's mouth on Bayer's star atlas. Flanking Sirius are Beta and Gamma Canis Majoris. Also called Mirzam or Murzim, Beta is a blue-white Beta Cephei variable star of magnitude 2.0, which varies by a few hundredths of a magnitude over a period of six hours. Mirzam is 500 light-years from Earth, and its traditional name means "the announcer", referring to its position as the "announcer" of Sirius, as it rises a few minutes before Sirius does. Gamma, also known as Muliphein, is a fainter star of magnitude 4.12, in reality a blue-white bright giant of spectral type B8IIe located 441 light-years from Earth. Iota Canis Majoris, lying between Sirius and Gamma, is another star that has been classified as a Beta Cephei variable, varying from magnitude 4.36 to 4.40 over a period of 1.92 hours. 
It is a remote blue-white supergiant star of spectral type B3Ib, around 46,000 times as luminous as the Sun and, at 2500 light-years distant, 300 times further away than Sirius. Epsilon, Omicron2, Delta, and Eta Canis Majoris were called "Al Adzari", "the virgins", in medieval Arabic tradition. Marking the dog's right thigh on Bayer's atlas is Epsilon Canis Majoris, also known as Adhara. At magnitude 1.5, it is the second-brightest star in Canis Major and the 23rd-brightest star in the sky. It is a blue-white supergiant of spectral type B2Iab, around 404 light-years from Earth. This star is one of the brightest known extreme ultraviolet sources in the sky. It is a binary star; the secondary is of magnitude 7.4. Its traditional name means "the virgins", having been transferred from the group of stars to Epsilon alone. Nearby is Delta Canis Majoris, also called Wezen. It is a yellow-white supergiant of spectral type F8Iab and magnitude 1.84, around 1605 light-years from Earth. With a traditional name meaning "the weight", Wezen is 17 times as massive and 50,000 times as luminous as the Sun. If located in the centre of the Solar System, it would extend out to Earth's orbit, as its diameter is 200 times that of the Sun. Only around 10 million years old, Wezen has stopped fusing hydrogen in its core. Its outer envelope is beginning to expand and cool, and in the next 100,000 years it will become a red supergiant as its core fuses heavier and heavier elements. Once it has a core of iron, it will collapse and explode as a supernova. Nestled between Adhara and Wezen lies Sigma Canis Majoris, known as Unurgunite to the Boorong and Wotjobaluk people, a red supergiant of spectral type K7Ib that varies irregularly between magnitudes 3.43 and 3.51. Also called Aludra, Eta Canis Majoris is a blue-white supergiant of spectral type B5Ia with a luminosity 176,000 times and a diameter around 80 times that of the Sun. 
Classified as an Alpha Cygni type variable star, Aludra varies in brightness from magnitude 2.38 to 2.48 over a period of 4.7 days. It is located 1120 light-years away. To the west of Adhara lies 3.0-magnitude Zeta Canis Majoris or Furud, around 362 light-years distant from Earth. It is a spectroscopic binary, whose components orbit each other every 1.85 years, the combined spectrum indicating a main star of spectral type B2.5V. Between these stars and Sirius lie Omicron1, Omicron2, and Pi Canis Majoris. Omicron2 is a massive supergiant star about 21 times as massive as the Sun. Only 7 million years old, it has exhausted the supply of hydrogen at its core and is now processing helium. It is an Alpha Cygni variable that undergoes periodic non-radial pulsations, which cause its brightness to cycle from magnitude 2.93 to 3.08 over a 24.44-day interval. Omicron1 is an orange K-type supergiant of spectral type K2.5Iab that is an irregular variable star, varying between apparent magnitudes 3.78 and 3.99. Around 18 times as massive as the Sun, it shines with 65,000 times its luminosity. North of Sirius lie Theta and Mu Canis Majoris, Theta being the most northerly star with a Bayer designation in the constellation. Around 8 billion years old, it is an orange giant of spectral type K4III that is around as massive as the Sun but has expanded to 30 times the Sun's diameter. Mu is a multiple star system located around 1244 light-years distant, its components discernible in a small telescope as a 5.3-magnitude yellow-hued and 7.1-magnitude bluish star. The brighter star is a giant of spectral type K2III, while the companion is a main sequence star of spectral type B9.5V. Nu Canis Majoris is a yellow-hued giant star of magnitude 5.7, 278 light-years away; it is at the threshold of naked-eye visibility. It has a companion of magnitude 8.1. At the southern limits of the constellation lie Kappa and Lambda Canis Majoris. 
Although of similar spectra and near each other as viewed from Earth, they are unrelated. Kappa is a Gamma Cassiopeiae variable of spectral type B2Vne, which brightened by 50% between 1963 and 1978, from magnitude 3.96 or so to 3.52. It is around 659 light-years distant. Lambda is a blue-white B-type main sequence dwarf with an apparent magnitude of 4.48 located around 423 light-years from Earth. It is 3.7 times as wide as and 5.5 times as massive as the Sun, and shines with 940 times its luminosity. Canis Major is also home to many variable stars. EZ Canis Majoris is a Wolf–Rayet star of spectral type WN4 that varies between magnitudes 6.71 and 6.95 over a period of 3.766 days; the cause of its variability is unknown but thought to be related to its stellar wind and rotation. VY Canis Majoris is a remote red hypergiant located approximately 3,800 light-years away from Earth. It is one of the largest stars known (sometimes described as the largest known) and also one of the most luminous, with a radius varying from 1,420 to 2,200 times the Sun's radius and a luminosity around 300,000 times greater than the Sun's. Its current mass is about 17 ± 8 solar masses, having shed material from an initial mass of 25–32 solar masses. VY CMa is also surrounded by a red reflection nebula formed by the material expelled by the strong stellar winds of its central star. W Canis Majoris is a type of red giant known as a carbon star—a semiregular variable, it ranges between magnitudes 6.27 and 7.09 over a period of 160 days. A cool star, it has a surface temperature of around 2,900 K and a radius 234 times that of the Sun, its distance estimated at 1,444–1,450 light-years from Earth. At the other extreme in size is RX J0720.4-3125, a neutron star with a radius of around 5 km. Exceedingly faint, it has an apparent magnitude of 26.6. Its spectrum and temperature appear to be mysteriously changing over several years. 
The nature of the changes is unclear, but it is possible they were caused by an event such as the star's absorption of an accretion disc. Tau Canis Majoris is a Beta Lyrae-type eclipsing multiple star system that varies from magnitude 4.32 to 4.37 over 1.28 days. Its four main component stars are hot O-type stars, with a combined mass 80 times that of the Sun, shining with 500,000 times its luminosity, but little is known of their individual properties. A fifth component, a magnitude 10 star, lies at a distance of . The system is only 5 million years old. UW Canis Majoris is another Beta Lyrae-type star 3000 light-years from Earth; it is an eclipsing binary that ranges in magnitude from a minimum of 5.3 to a maximum of 4.8. It has a period of 4.4 days; its components are two massive hot blue stars, one a blue supergiant of spectral type O7.5–8 Iab, while its companion is a slightly cooler, less evolved and less luminous supergiant of spectral type O9.7Ib. The stars are 200,000 and 63,000 times as luminous as the Sun. However, the fainter star is the more massive, at 19 solar masses to the primary's 16. R Canis Majoris is another eclipsing binary that varies from magnitude 5.7 to 6.34 over 1.13 days, with a third star orbiting these two every 93 years. The shortness of the orbital period and the low mass ratio between the two main components make this an unusual Algol-type system. Seven star systems have been found to have planets. Nu2 Canis Majoris is an ageing orange giant of spectral type K1III of apparent magnitude 3.91 located around 64 light-years distant. Around 1.5 times as massive and 11 times as luminous as the Sun, it is orbited over a period of 763 days by a planet 2.6 times as massive as Jupiter. HD 47536 is likewise an ageing orange giant found to have a planetary system—echoing the fate of the Solar System in a few billion years as the Sun ages and becomes a giant. 
Conversely, HD 45364 is a star 107 light-years distant that is a little smaller and cooler than the Sun, of spectral type G8V, which has two planets discovered in 2008. With orbital periods of 228 and 342 days, the planets have a 3:2 orbital resonance, which helps stabilise the system. HD 47186 is another sunlike star with two planets; the inner—HD 47186 b—takes four days to complete an orbit and has been classified as a Hot Neptune, while the outer—HD 47186 c—has an eccentric 3.7-year period orbit and has a similar mass to Saturn. HD 43197 is a sunlike star around 183 light-years distant that has a Jupiter-size planet with an eccentric orbit. Z Canis Majoris is a star system a mere 300,000 years old composed of two pre-main-sequence stars—a FU Orionis star and a Herbig Ae/Be star, which has brightened episodically by two magnitudes to magnitude 8 in 1987, 2000, 2004 and 2008. The more massive Herbig Ae/Be star is enveloped in an irregular roughly spherical cocoon of dust that has an inner diameter of and outer diameter of . The cocoon has a hole in it through which light shines that covers an angle of 5 to 10 degrees of its circumference. Both stars are surrounded by a large envelope of in-falling material left over from the original cloud that formed the system. Both stars are emitting jets of material, that of the Herbig Ae/Be star being much larger—11.7 light-years long. Meanwhile, FS Canis Majoris is another star with infra-red emissions indicating a compact shell of dust, but it appears to be a main-sequence star that has absorbed material from a companion. These stars are thought to be significant contributors to interstellar dust. The band of the Milky Way goes through Canis Major, with only patchy obscurement by interstellar dust clouds. It is bright in the northeastern corner of the constellation, as well as in a triangular area between Adhara, Wezen and Aludra, with many stars visible in binoculars. Canis Major boasts several open clusters. 
The only Messier object is M41 (NGC 2287), an open cluster with a combined visual magnitude of 4.5, around 2300 light-years from Earth. Located 4 degrees south of Sirius, it contains contrasting blue, yellow and orange stars and covers an area the apparent size of the full moon—in reality around 25 light-years in diameter. Its most luminous stars have already evolved into giants. The brightest is a 6.3-magnitude star of spectral type K3. Located in the field is 12 Canis Majoris, though this star is only 670 light-years distant. NGC 2360, known as Caroline's Cluster after its discoverer Caroline Herschel, is an open cluster located 3.5 degrees west of Muliphein with a combined apparent magnitude of 7.2. Around 15 light-years in diameter, it lies 3700 light-years away from Earth and has been dated to around 2.2 billion years old. NGC 2362 is a small, compact open cluster, 5200 light-years from Earth. It contains about 60 stars, of which Tau Canis Majoris is the brightest member. Located around 3 degrees northeast of Wezen, it covers an area around 12 light-years in diameter, though the stars appear huddled around Tau when seen through binoculars. It is a very young open cluster, as its member stars are only a few million years old. Lying 2 degrees southwest of NGC 2362 is NGC 2354, a fainter open cluster of magnitude 6.5 with around 15 member stars visible through binoculars. Located around 30' northeast of NGC 2360, NGC 2359 (Thor's Helmet or the Duck Nebula) is a relatively bright emission nebula in Canis Major, with an approximate magnitude of 10, lying 10,000 light-years from Earth. The nebula is shaped by HD 56925, an unstable Wolf–Rayet star embedded within it. In 2003, an overdensity of stars in the region was announced to be the Canis Major Dwarf, the closest satellite galaxy to Earth. 
However, there remains debate over whether it represents a disrupted dwarf galaxy or in fact a variation in the thin and thick disk and spiral arm populations of the Milky Way. Investigation of the area yielded only ten RR Lyrae variables—consistent with the Milky Way's halo and thick disk populations rather than a separate dwarf spheroidal galaxy. On the other hand, a globular cluster in Puppis, NGC 2298—which appears to be part of the Canis Major dwarf system—is extremely metal-poor, suggesting it did not arise from the Milky Way's thick disk and is instead of extragalactic origin. NGC 2207 and IC 2163 are a pair of face-on interacting spiral galaxies located 125 million light-years from Earth. About 40 million years ago, the two galaxies had a close encounter and are now moving farther apart; nevertheless, the smaller IC 2163 will eventually be incorporated into NGC 2207. As the interaction continues, gas and dust will be perturbed, sparking extensive star formation in both galaxies. Supernovae have been observed in NGC 2207 in 1975 (type Ia SN 1975a), 1999 (type Ib SN 1999ec), 2003 (type Ib SN 2003H), and 2013 (type II SN 2013ai). Located 16 million light-years distant, ESO 489-056 is an irregular dwarf and low-surface-brightness galaxy that has one of the lowest metallicities known.
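The brightness comparisons running through this article (Sirius B described as roughly 10,000 times fainter than Sirius A, Adhara's peak of −3.99, and so on) all rest on the logarithmic magnitude scale, in which a difference of 5 magnitudes corresponds to a factor of exactly 100 in flux. A minimal sketch of that relation, using the Sirius figures quoted above:

```python
# Flux ratio implied by a magnitude difference: ratio = 100**(delta_m / 5),
# since 5 magnitudes = a factor of 100 in brightness by definition.
def flux_ratio(m_faint: float, m_bright: float) -> float:
    """How many times fainter the first object is than the second."""
    return 100 ** ((m_faint - m_bright) / 5)

# Sirius A (m = -1.46) versus its white-dwarf companion Sirius B (m = 8.4):
ratio = flux_ratio(8.4, -1.46)
print(f"Sirius B is about {ratio:,.0f} times fainter than Sirius A")
```

The computed ratio comes out near 8,800, which is the order-of-magnitude behind the "roughly 10,000 times fainter" figure quoted in the text.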
https://en.wikipedia.org/wiki?curid=6366
Canis Minor Canis Minor is a small constellation in the northern celestial hemisphere. In the second century, it was included as an asterism, or pattern, of two stars in Ptolemy's 48 constellations, and it is counted among the 88 modern constellations. Its name is Latin for "lesser dog", in contrast to Canis Major, the "greater dog"; both figures are commonly represented as following the constellation of Orion the hunter. Canis Minor contains only two stars brighter than the fourth magnitude, Procyon (Alpha Canis Minoris), with a magnitude of 0.34, and Gomeisa (Beta Canis Minoris), with a magnitude of 2.9. The constellation's dimmer stars were noted by Johann Bayer, who named eight stars including Alpha and Beta, and John Flamsteed, who numbered fourteen. Procyon is the seventh-brightest star in the night sky, as well as one of the closest. A yellow-white main sequence star, it has a white dwarf companion. Gomeisa is a blue-white main sequence star. Luyten's Star is a ninth-magnitude red dwarf and the Solar System's next closest stellar neighbour in the constellation after Procyon. The fourth-magnitude HD 66141, which has evolved into an orange giant towards the end of its life cycle, was discovered to have a planet in 2012. There are two faint deep-sky objects within the constellation's borders. The 11 Canis-Minorids are a meteor shower that can be seen in early December. Though strongly associated with the Classical Greek uranographic tradition, Canis Minor originates from ancient Mesopotamia. Procyon and Gomeisa were called "MASH.TAB.BA" or "twins" in the "Three Stars Each" tablets, dating to around 1100 BC. In the later "MUL.APIN", this name was also applied to the pairs of Pi3 and Pi4 Orionis and Zeta and Xi Orionis. The meaning of "MASH.TAB.BA" evolved as well, becoming the twin deities Lulal and Latarak, who are on the opposite side of the sky from "Papsukal", the True Shepherd of Heaven in Babylonian mythology. 
Canis Minor was also given the name "DAR.LUGAL", its position defined as "the star which stands behind it [Orion]", in the "MUL.APIN"; the constellation represents a rooster. This name may have also referred to the constellation Lepus. "DAR.LUGAL" was also denoted "DAR.MUŠEN" and "DAR.LUGAL.MUŠEN" in Babylonia. Canis Minor was then called "tarlugallu" in Akkadian astronomy. Canis Minor was one of the original 48 constellations formulated by Ptolemy in his second-century Almagest, in which it was defined as a specific pattern (asterism) of stars; Ptolemy identified only two stars and hence no depiction was possible. The Ancient Greeks called the constellation προκύων ("Procyon"), "coming before the dog", transliterated into Latin as "Antecanis", "Praecanis", or variations thereof, by Cicero and others. Roman writers also appended the descriptors "parvus", "minor" or "minusculus" ("small" or "lesser", for its faintness), "septentrionalis" ("northerly", for its position in relation to Canis Major), "primus" (rising "first") or "sinister" (rising to the "left") to its name "Canis". In Greek mythology, Canis Minor was sometimes connected with the Teumessian Fox, a beast turned into stone with its hunter, Laelaps, by Zeus, who placed them in heaven as Canis Major (Laelaps) and Canis Minor (Teumessian Fox). Eratosthenes associated the Little Dog with Orion, while Hyginus linked the constellation with Maera, a dog owned by Icarius of Athens. On discovering the latter's death, the dog and Icarius' daughter Erigone took their lives and all three were placed in the sky—Erigone as Virgo and Icarius as Boötes. As a reward for his faithfulness, the dog was placed along the "banks" of the Milky Way, which the ancients believed to be a heavenly river, where he would never suffer from thirst. 
The medieval Arabic astronomers maintained the depiction of Canis Minor ("al-Kalb al-Asghar" in Arabic) as a dog; in his Book of the Fixed Stars, Abd al-Rahman al-Sufi included a diagram of the constellation with a canine figure superimposed. There was one slight difference between the Ptolemaic vision of Canis Minor and the Arabic; al-Sufi claimed Mirzam, now assigned to Orion, as part of both Canis Minor—the collar of the dog—and its modern home. The Arabic names for both Procyon and Gomeisa alluded to their proximity and resemblance to Sirius, though they were not direct translations of the Greek; Procyon was called "ash-Shi'ra ash-Shamiya", the "Syrian Sirius", and Gomeisa was called "ash-Shira al-Ghamisa", the "Sirius with bleary eyes". Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called "Merzem", includes the stars of Canis Minor and Canis Major and is the herald of two weeks of hot weather. The ancient Egyptians thought of this constellation as Anubis, the jackal god. Alternative names have been proposed: Johann Bayer in the early 17th century termed the constellation "Fovea" ("The Pit") and "Morus" ("Sycamine Tree"). Seventeenth-century German poet and author Philippus Caesius linked it to the dog of Tobias from the Apocrypha. Richard A. Proctor gave the constellation the name "Felis" ("the Cat") in 1870 (contrasting with Canis Major, which he had abbreviated to "Canis", "the Dog"), explaining that he sought to shorten the constellation names to make them more manageable on celestial charts. Occasionally, Canis Minor is confused with Canis Major and given the name "Canis Orionis" ("Orion's Dog"). In Chinese astronomy, the stars corresponding to Canis Minor lie in the Vermilion Bird of the South (南方朱雀, "Nán Fāng Zhū Què"). Procyon, Gomeisa and Eta Canis Minoris form an asterism known as Nánhé, the Southern River. 
With its counterpart, the Northern River Beihe (Castor and Pollux), Nánhé was also associated with a gate or sentry. Along with Zeta and 8 Cancri, 6 Canis Minoris and 11 Canis Minoris formed the asterism "Shuiwei", which literally means "water level". Combined with additional stars in Gemini, Shuiwei represented an official who managed floodwaters or a marker of the water level. Neighboring Korea recognized four stars in Canis Minor as part of a different constellation, "the position of the water". This constellation was located in the Red Bird, the southern portion of the sky. Polynesian peoples often did not recognize Canis Minor as a constellation, but they saw Procyon as significant and often named it; in the Tuamotu Archipelago it was known as "Hiro", meaning "twist as a thread of coconut fiber", and "Kopu-nui-o-Hiro" ("great paunch of Hiro"), which was either a name for the modern figure of Canis Minor or an alternative name for Procyon. Other names included "Vena" (after a goddess), on Mangaia and "Puanga-hori" (false "Puanga", the name for Rigel), in New Zealand. In the Society Islands, Procyon was called "Ana-tahua-vahine-o-toa-te-manava", literally "Aster the priestess of brave heart", figuratively the "pillar for elocution". The Wardaman people of the Northern Territory in Australia gave Procyon and Gomeisa the names "Magum" and "Gurumana", describing them as humans who were transformed into gum trees in the dreamtime. Although their skin had turned to bark, they were able to speak with a human voice by rustling their leaves. The Aztec calendar was related to their cosmology. The stars of Canis Minor were incorporated along with some stars of Orion and Gemini into an asterism associated with the day called "Water". Lying directly south of Gemini's bright stars Castor and Pollux, Canis Minor is a small constellation bordered by Monoceros to the south, Gemini to the north, Cancer to the northeast, and Hydra to the east. 
It does not border Canis Major; Monoceros is in between the two. Covering 183 square degrees, Canis Minor ranks seventy-first of the 88 constellations in size. It appears prominently in the southern sky during the Northern Hemisphere's winter. The constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 14 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . Most visible in the evening sky from January to March, Canis Minor is most prominent at 10 PM during mid-February. It is then seen earlier in the evening until July, when it is only visible after sunset before setting itself, and rising in the morning sky before dawn. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "CMi". Canis Minor contains only two stars brighter than fourth magnitude. At magnitude 0.34, Procyon, or Alpha Canis Minoris, is the seventh-brightest star in the night sky, as well as one of the closest. Its name means "before the dog" or "preceding the dog" in Greek, as it rises an hour before the "Dog Star", Sirius, of Canis Major. It is a binary star system, consisting of a yellow-white main sequence star of spectral type F5 IV-V, named Procyon A, and a faint white dwarf companion of spectral type DA, named Procyon B. Procyon B, which orbits the more massive star every 41 years, is of magnitude 10.7. Procyon A is 1.4 times the Sun's mass, while its smaller companion is 0.6 times as massive as the Sun. The system is from Earth, the shortest distance to a northern-hemisphere star of the first magnitude. Gomeisa, or Beta Canis Minoris, with a magnitude of 2.89, is the second-brightest star in Canis Minor. Lying from the Solar System, it is a blue-white main sequence star of spectral class B8 Ve. 
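The Procyon figures quoted above (a 41-year orbital period and component masses of 1.4 and 0.6 solar masses) can be cross-checked with Kepler's third law, which in units of years, astronomical units and solar masses reduces to a³ = P²·M. The following is only a back-of-envelope sketch using those quoted values:

```python
# Back-of-envelope check of the Procyon system using Kepler's third law
# (a^3 = P^2 * M_total in units of AU, years, and solar masses).
# The 41-year period and component masses (1.4 + 0.6 solar masses)
# are the figures quoted in the text above.

def semi_major_axis_au(period_years: float, total_mass_msun: float) -> float:
    """Mean separation of a binary from its period and total mass."""
    return (period_years ** 2 * total_mass_msun) ** (1 / 3)

a = semi_major_axis_au(41.0, 1.4 + 0.6)
print(f"Procyon A-B mean separation: {a:.1f} AU")  # roughly 15 AU
```

The result, roughly 15 AU, places Procyon B at a mean distance from its primary comparable to Uranus's distance from the Sun.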
Although fainter than Procyon to Earth observers, it is intrinsically much brighter, being 250 times as luminous and three times as massive as the Sun. Although its variations are slight, Gomeisa is classified as a shell star (Gamma Cassiopeiae variable), with a maximum magnitude of 2.84 and a minimum magnitude of 2.92. It is surrounded by a disk of gas which it heats and causes to emit radiation. Johann Bayer used the Greek letters Alpha to Eta to label the most prominent eight stars in the constellation, designating two stars as Delta (named Delta1 and Delta2). John Flamsteed numbered fourteen stars, discerning a third star he named Delta3; his star 12 Canis Minoris was not found subsequently. In Bayer's 1603 work "Uranometria", Procyon is located on the dog's belly, and Gomeisa on its neck. Gamma, Epsilon and Eta Canis Minoris lie nearby, marking the dog's neck, crown and chest respectively. With an apparent magnitude of 4.34, Gamma Canis Minoris is an orange K-type giant of spectral class K3-III C, which lies away. Its colour is obvious when seen through binoculars. It is a multiple system, consisting of the spectroscopic binary Gamma A and three optical companions, Gamma B, magnitude 13; Gamma C, magnitude 12; and Gamma D, magnitude 10. The two components of Gamma A orbit each other every 389.2 days, with an eccentric orbit that varies their separation between 1.4 and 2.3 astronomical units (AU). Epsilon Canis Minoris is a yellow bright giant of spectral class G6.5IIb with a magnitude of 4.99. It lies from Earth, with 13 times the diameter and 750 times the luminosity of the Sun. Eta Canis Minoris is a giant of spectral class F0III of magnitude 5.24, which has a yellowish hue when viewed through binoculars as well as a faint companion of magnitude 11.1. Located 4 arcseconds from the primary, the companion star is actually around 440 AU from the main star and takes around 5000 years to orbit it. Near Procyon, three stars share the name Delta Canis Minoris. 
Delta1 is a yellow-white F-type giant of magnitude 5.25 located around from Earth. About 360 times as luminous and 3.75 times as massive as the Sun, it is expanding and cooling as it ages, having spent much of its life as a main sequence star of spectrum B6V. Also known as 8 Canis Minoris, Delta2 is an F-type main-sequence star of spectral type F2V and magnitude 5.59 which is distant. The last of the trio, Delta3 (also known as 9 Canis Minoris), is a white main sequence star of spectral type A0Vnn and magnitude 5.83 which is distant. These stars mark the paws of the Lesser Dog's left hind leg, while magnitude 5.13 Zeta marks the right. A blue-white bright giant of spectral type B8II, Zeta lies around away from the Solar System. Lying 222 ± 7 light-years away with an apparent magnitude of 4.39, HD 66141 is 6.8 billion years old and has evolved into an orange giant of spectral type K2III with a diameter around 22 times that of the Sun, and weighing 1.1 solar masses. It is 174 times as luminous as the Sun, with an absolute magnitude of −0.15. HD 66141 was mistakenly named 13 Puppis, as its celestial coordinates were recorded incorrectly when catalogued and hence mistakenly thought to be in the constellation of Puppis; Bode gave it the name Lambda Canis Minoris, which is now obsolete. The orange giant is orbited by a planet, HD 66141b, which was detected in 2012 by measuring the star's radial velocity. The planet has a mass around 6 times that of Jupiter and a period of 480 days. A red giant of spectral type M4III, BC Canis Minoris lies around distant from the Solar System. It is a semiregular variable star that varies between a maximum magnitude of 6.14 and minimum magnitude of 6.42. Periods of 27.7, 143.3 and 208.3 days have been recorded in its pulsations. AZ, AD and BI Canis Minoris are Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. 
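The radial-velocity detection of HD 66141b described above can be illustrated with the standard semi-amplitude formula for a circular orbit where the planet is much lighter than the star. The sketch below plugs in the quoted period (480 days), planet mass (about 6 Jupiter masses) and stellar mass (1.1 solar masses), and assumes an edge-on orbit (sin i = 1), so it is only an order-of-magnitude estimate:

```python
# Rough estimate of the radial-velocity "wobble" HD 66141's planet induces,
# using the standard semi-amplitude approximation for a circular orbit with
# m_planet << M_star:
#   K ~ 28.43 m/s * (P / 1 yr)^(-1/3) * (m_p sin i / M_Jup) * (M_* / M_sun)^(-2/3)
# Inputs are the figures quoted in the text; sin i is taken as 1.

def rv_semi_amplitude(period_days: float, planet_mass_mjup: float,
                      star_mass_msun: float) -> float:
    """Radial-velocity semi-amplitude in m/s for a circular, edge-on orbit."""
    period_years = period_days / 365.25
    return 28.43 * period_years ** (-1 / 3) * planet_mass_mjup * star_mass_msun ** (-2 / 3)

K = rv_semi_amplitude(480, 6, 1.1)
print(f"expected RV semi-amplitude: {K:.0f} m/s")  # on the order of 150 m/s
```

A swing of this size is easily measurable with modern spectrographs, which is why a giant planet around a bright giant star is a comparatively straightforward radial-velocity detection.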
AZ is of spectral type A5IV, and ranges between magnitudes 6.44 and 6.51 over a period of 2.3 hours. AD has a spectral type of F2III, and has a maximum magnitude of 9.21 and minimum of 9.51, with a period of approximately 2.95 hours. BI is of spectral type F2 with an apparent magnitude varying around 9.19 and a period of approximately 2.91 hours. At least three red giants are Mira variables in Canis Minor. S Canis Minoris, of spectral type M7e, is the brightest, ranging from magnitude 6.6 to 13.2 over a period of 332.94 days. V Canis Minoris ranges from magnitude 7.4 to 15.1 over a period of 366.1 days. Similar in magnitude is R Canis Minoris, which has a maximum of 7.3, but a significantly brighter minimum of 11.6. An S-type star, it has a period of 337.8 days. YZ Canis Minoris is a red dwarf of spectral type M4.5V and magnitude 11.2, roughly three times the size of Jupiter and from Earth. It is a flare star, emitting unpredictable outbursts of energy for mere minutes, which might be much more powerful analogues of solar flares. Luyten's Star (GJ 273) is a red dwarf star of spectral type M3.5V and close neighbour of the Solar System. Its visual magnitude of 9.9 renders it too faint to be seen with the naked eye, even though it is only away. Fainter still is PSS 544-7, an eighteenth-magnitude red dwarf around 20 percent the mass of the Sun, located from Earth. First noticed in 1991, it is thought to be a cannonball star, shot out of a star cluster and now moving rapidly through space directly away from the galactic disc. The WZ Sagittae-type dwarf nova DY Canis Minoris (also known as VSX J074727.6+065050) flared up to magnitude 11.4 over January and February 2008 before dropping eight magnitudes to around 19.5 over approximately 80 days. It is a remote binary star system where a white dwarf and low mass star orbit each other close enough for the former star to draw material off the latter and form an accretion disc. 
This material builds up until it erupts dramatically. The Milky Way passes through much of Canis Minor, yet it has few deep-sky objects. William Herschel recorded four objects in his 1786 work "Catalogue of Nebulae and Clusters of Stars", including two he mistakenly believed were star clusters. NGC 2459 is a group of five thirteenth- and fourteenth-magnitude stars that appear to lie close together in the sky but are not related. A similar situation has occurred with NGC 2394, also in Canis Minor. This is a collection of fifteen unrelated stars of ninth magnitude and fainter. Herschel also observed three faint galaxies, two of which are interacting with each other. NGC 2508 is a thirteenth-magnitude lenticular galaxy, estimated at 205 million light-years (63 million parsecs) distance with a diameter of 80 thousand light-years (25 thousand parsecs). Named as a single object by Herschel, NGC 2402 is actually a pair of near-adjacent galaxies that appear to be interacting with each other. Of only fourteenth and fifteenth magnitude respectively, the elliptical and the spiral galaxy are thought to be approximately 245 million light-years distant, and each measures 55,000 light-years in diameter. The 11 Canis-Minorids, also called the Beta Canis Minorids, are a meteor shower arising near the fifth-magnitude star 11 Canis Minoris, discovered in 1964 by Keith Hindley, who investigated their trajectory and proposed a common origin with the comet D/1917 F1 Mellish. However, this conclusion has been refuted subsequently, as the number of orbits analysed was low and their trajectories too disparate to confirm a link. They last from 4 to 15 December, peaking over 10 and 11 December.
https://en.wikipedia.org/wiki?curid=6367
Centaurus Centaurus is a bright constellation in the southern sky. One of the largest constellations, Centaurus was included among the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. In Greek mythology, Centaurus represents a centaur: a creature that is half human, half horse (another constellation named after a centaur is one from the zodiac: Sagittarius). Notable stars include Alpha Centauri, the nearest star system to the Solar System, its neighbour in the sky Beta Centauri, and V766 Centauri, one of the largest stars yet discovered. The constellation also contains Omega Centauri, the brightest globular cluster visible from Earth and the largest identified in the Milky Way, possibly a remnant of a dwarf galaxy. Centaurus contains several very bright stars. Its alpha and beta stars are used as "pointer stars" to help observers find the constellation Crux. Centaurus has 281 stars above magnitude 6.5, meaning that they are visible to the unaided eye, the most of any constellation. Alpha Centauri, the closest star system to the Sun, has a high proper motion; it will be a mere half-degree from Beta Centauri in approximately 4000 years. Alpha Centauri is a triple star system, a binary around which orbits Proxima Centauri, currently the nearest star to the Sun. Traditionally called Rigil Kentaurus or Toliman, meaning "foot of the centaur", the system has an overall magnitude of −0.28 and is 4.4 light-years from Earth. The primary and secondary are both yellow-hued stars; the first is of magnitude −0.01 and the second of magnitude 1.35. Proxima, the tertiary star, is a red dwarf of magnitude 11.0; it appears almost 2 degrees away from the close pairing of Alpha and has an orbital period of approximately one million years. Also a flare star, Proxima has minutes-long outbursts where it brightens by over a magnitude. 
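The half-degree approach of Alpha and Beta Centauri mentioned above follows from simple arithmetic. The proper motion (about 3.7 arcseconds per year) and current separation (about 4.5 degrees) used below are assumed values not stated in the text, and the motion is treated as directed straight toward Beta Centauri, which is only approximately true:

```python
# Sanity check on the "half a degree in ~4000 years" claim.
# Assumed inputs (not stated in the text above): Alpha Centauri's proper
# motion of ~3.7 arcsec/yr and its current ~4.5-degree separation from
# Beta Centauri, with the motion idealized as aimed straight at Beta.

proper_motion_arcsec_per_yr = 3.7
current_separation_deg = 4.5
years = 4000

travelled_deg = proper_motion_arcsec_per_yr * years / 3600  # arcsec -> degrees
remaining = current_separation_deg - travelled_deg
print(f"separation after {years} years: ~{remaining:.1f} degrees")  # ~0.4 degrees
```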
The two stars of Alpha Centauri revolve with a period of about 80 years and will next appear closest together, as seen through telescopes from Earth, in 2037 and 2038; to the naked eye they appear as a single point of light, the third-brightest "star" in the night sky. The constellation's other first-magnitude star, Beta Centauri, lies beyond Proxima toward the narrow axis of Crux, forming with Alpha the far-southern limb of the constellation. Also called Hadar and Agena, it is a double star; the primary is a blue-hued giant star of magnitude 0.6, 525 light-years from Earth. The secondary, of magnitude 4.0, is separated from the primary so slightly that it can be seen only at high magnification. The next-brightest star is Gamma Centauri, another binary star, which appears to the naked eye at magnitude 2.2. The primary and secondary are both blue-white hued stars of magnitude 2.9; their period is 84 years. Centaurus also has many dimmer double stars and binary stars. 3 Centauri is a double star with a blue-white hued primary of magnitude 4.5 and a secondary of magnitude 6.0. The primary is 344 light-years away. Centaurus is home to many variable stars. R Centauri is a Mira variable star with a minimum magnitude of 11.8 and a maximum magnitude of 5.3; it is about 1,250 light-years from Earth and has a period of 18 months. V810 Centauri is a semiregular variable. BPM 37093 is a white dwarf star whose carbon atoms are thought to have formed a crystalline structure. Since diamond also consists of carbon arranged in a crystalline lattice (though of a different configuration), scientists have nicknamed this star "Lucy" after the Beatles song "Lucy in the Sky with Diamonds". PDS 70 (V1032 Centauri), a low-mass T Tauri star, is also found in the constellation. In July 2018 astronomers captured the first conclusive image of a protoplanetary disk containing a nascent exoplanet, named PDS 70b. 
ω Centauri (NGC 5139), despite being listed as the constellation's "omega" star, is in fact a naked-eye globular cluster, 17,000 light-years away with a diameter of 150 light-years. It is the largest and brightest globular cluster in the Milky Way; at ten times the size of the next-largest cluster, it has a magnitude of 3.7. It is also the most luminous globular cluster in the Milky Way, at over one million solar luminosities. Omega Centauri is classified as a Shapley class VIII cluster, which means that its center is loosely concentrated. It is also the only globular cluster to be designated with a Bayer letter; the globular cluster 47 Tucanae is the only one designated with a Flamsteed number. It contains several million stars, most of which are yellow dwarf stars, but also possesses red giants and blue-white stars; the stars have an average age of 12 billion years. This has prompted suspicion that Omega Centauri was the core of a dwarf galaxy that had been absorbed by the Milky Way. Omega Centauri was determined to be nonstellar in 1677 by the English astronomer Edmond Halley, though it was visible as a star to the ancients. Its status as a globular cluster was determined by James Dunlop in 1827. To the unaided eye, Omega Centauri appears fuzzy and is obviously non-circular; it is approximately half a degree in diameter, the same size as the full Moon. Centaurus is also home to open clusters. NGC 3766 is an open cluster 6,300 light-years from Earth that is visible to the unaided eye. It contains approximately 100 stars, the brightest of which are 7th magnitude. NGC 5460 is another naked-eye open cluster, 2,300 light-years from Earth, that has an overall magnitude of 6 and contains approximately 40 stars. There is one bright planetary nebula in Centaurus, NGC 3918, also known as the Blue Planetary. It has an overall magnitude of 8.0 and a central star of magnitude 11.0; it is 2600 light-years from Earth. 
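The figures quoted above for Omega Centauri (roughly half a degree across at 17,000 light-years, with a physical diameter of about 150 light-years) are mutually consistent under the small-angle approximation, where physical size is approximately distance times angular size in radians:

```python
import math

# Consistency check on the Omega Centauri figures quoted in the text:
# physical diameter ~ distance * angular diameter (in radians).

distance_ly = 17_000
angular_diameter_deg = 0.5  # "approximately half a degree", as stated above

diameter_ly = distance_ly * math.radians(angular_diameter_deg)
print(f"physical diameter: ~{diameter_ly:.0f} light-years")  # ~148, matching the ~150 quoted
```

The same formula works for any of the clusters and nebulae in this article whose distance and apparent size are both given.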
The Blue Planetary was discovered by John Herschel and named for its color's similarity to Uranus, though the nebula is apparently three times larger than the planet. Centaurus is rich in galaxies as well. NGC 4622 is a face-on spiral galaxy located 200 million light-years from Earth (redshift 0.0146). Its spiral arms wind in both directions, which makes it nearly impossible for astronomers to determine the rotation of the galaxy. Astronomers theorize that a collision with a smaller companion galaxy near the core of the main galaxy could have led to the unusual spiral structure. NGC 5253, a peculiar irregular galaxy, is located near the border with Hydra and M83, with which it likely had a close gravitational interaction 1–2 billion years ago. This may have sparked the galaxy's high rate of star formation, which continues today and contributes to its high surface brightness. NGC 5253 includes a large nebula and at least 12 large star clusters. In the eyepiece, it is a small galaxy of magnitude 10 with dimensions of 5 arcminutes by 2 arcminutes and a bright nucleus. NGC 4945 is a spiral galaxy seen edge-on from Earth, 13 million light-years away. It is visible with any amateur telescope, as well as binoculars under good conditions; it has been described as "shaped like a candle flame", being long and thin (16' by 3'). In the eyepiece of a large telescope, its southeastern dust lane becomes visible. Another galaxy is NGC 5102, found by star-hopping from Iota Centauri. In the eyepiece, it appears as an elliptical object 9 arcminutes by 2.5 arcminutes tilted on a southwest-northeast axis. One of the closest active galaxies to Earth is the Centaurus A galaxy, NGC 5128, at 11 million light-years away (redshift 0.00183). It has a supermassive black hole at its core, which expels massive jets of matter that emit radio waves due to synchrotron radiation. 
Astronomers posit that its dust lanes, not common in elliptical galaxies, are due to a previous merger with another galaxy, probably a spiral galaxy. NGC 5128 appears in the optical spectrum as a fairly large elliptical galaxy with a prominent dust lane. Its overall magnitude is 7.0 and it has been seen under perfect conditions with the naked eye, making it one of the most distant objects visible to the unaided observer. In equatorial and southern latitudes, it is easily found by star hopping from Omega Centauri. In small telescopes, the dust lane is not visible; it begins to appear with about 4 inches of aperture under good conditions. In large amateur instruments, above about 12 inches in aperture, the dust lane's west-northwest to east-southeast direction is easily discerned. Another dim dust lane on the east side of the 12-arcminute-by-15-arcminute galaxy is also visible. ESO 270-17, also called the Fourcade-Figueroa Object, is a low-surface brightness object believed to be the remnants of a galaxy; it does not have a core and is very difficult to observe with an amateur telescope. It measures 7 arcminutes by 1 arcminute. It likely originated as a spiral galaxy and underwent a catastrophic gravitational interaction with Centaurus A around 500 million years ago, stopping its rotation and destroying its structure. NGC 4650A is a polar-ring galaxy 136 million light-years from Earth (redshift 0.01). It has a central core made of older stars that resembles an elliptical galaxy, and an outer ring of young stars that orbits around the core. The plane of the outer ring is distorted, which suggests that NGC 4650A is the result of a galaxy collision about a billion years ago. This galaxy has also been cited in studies of dark matter, because the stars in the outer ring orbit too quickly for their collective mass. This suggests that the galaxy is surrounded by a dark matter halo, which provides the necessary mass. 
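The dark-matter inference for NGC 4650A rests on the enclosed-mass relation for a circular orbit, M = v²r/G: if stars at radius r orbit at speed v, at least that much mass must lie inside their orbit. The velocity and radius in the sketch below are purely illustrative placeholders, not measurements of NGC 4650A:

```python
# Dynamical (enclosed) mass from a circular-orbit rotation measurement:
#   M_enc = v^2 * r / G
# The speed and radius below are hypothetical illustration values, NOT
# measured properties of NGC 4650A; the point is that a measured v and r
# yield a mass to compare against the mass visible in stars.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

v = 100e3          # orbital speed, m/s (hypothetical)
r = 10 * KPC       # orbital radius (hypothetical)

m_enc_solar = v**2 * r / G / M_SUN
print(f"dynamical mass within r: ~{m_enc_solar:.1e} solar masses")
```

When the dynamical mass from such a calculation exceeds the mass accounted for by stars and gas, the difference is attributed to a dark matter halo, which is the argument made for NGC 4650A's ring.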
One of the closest galaxy clusters to Earth is the Centaurus Cluster at 160 million light-years away, having redshift 0.0114. It has a cooler, denser central region of gas and a hotter, more diffuse outer region. The intracluster medium in the Centaurus Cluster has a high concentration of metals (elements heavier than helium) due to a large number of supernovae. This cluster also possesses a plume of gas whose origin is unknown. While Centaurus now has a high southern latitude, at the dawn of civilization it was an equatorial constellation. Precession has been slowly shifting it southward for millennia, and it is now close to its maximal southern declination. In a little over 7000 years it will be at maximum visibility for those in the northern hemisphere, visible at times in the year up to quite a high northern latitude. The figure of Centaurus can be traced back to a Babylonian constellation known as the Bison-man (MUL.GUD.ALIM). This being was depicted in two major forms: firstly, as a 4-legged bison with a human head, and secondly, as a being with a man's head and torso attached to the rear legs and tail of a bull or bison. It has been closely associated with the Sun god Utu-Shamash from very early times. The Greeks depicted the constellation as a centaur and gave it its current name. It was mentioned by Eudoxus in the 4th century BC and Aratus in the 3rd century BC. In the 2nd century AD, Claudius Ptolemy catalogued 37 stars in Centaurus, including Alpha Centauri. Large as it is now, in earlier times it was even larger, as the constellation Lupus was treated as an asterism within Centaurus, portrayed in illustrations as an unspecified animal either in the centaur's grasp or impaled on its spear. The Southern Cross, which is now regarded as a separate constellation, was treated by the ancients as a mere asterism formed of the stars composing the centaur's legs. 
Additionally, what is now the minor constellation Circinus was treated as undefined stars under the centaur's front hooves. According to the Roman poet Ovid ("Fasti" v.379), the constellation honors the centaur Chiron, who was tutor to many of the earlier Greek heroes including Heracles (Hercules), Theseus, and Jason, the leader of the Argonauts. It is not to be confused with the more warlike centaur represented by the zodiacal constellation Sagittarius. The legend associated with Chiron says that he was accidentally poisoned with an arrow shot by Hercules, and was subsequently placed in the heavens. In Chinese astronomy, the stars of Centaurus are found in three areas: the Azure Dragon of the East (東方青龍, "Dōng Fāng Qīng Lóng"), the Vermillion Bird of the South (南方朱雀, "Nán Fāng Zhū Què"), and the Southern Asterisms (近南極星區, "Jìnnánjíxīngōu"). Not all of the stars of Centaurus can be seen from China, and the unseen stars were classified among the Southern Asterisms by Xu Guangqi, based on his study of western star charts. However, most of the brightest stars of Centaurus, including α Centauri, θ Centauri (or Menkent), ε Centauri and η Centauri, can be seen in the Chinese sky. Some Polynesian peoples considered the stars of Centaurus to be a constellation as well. On Pukapuka, Centaurus had two names: "Na Mata-o-te-tokolua" and "Na Lua-mata-o-Wua-ma-Velo". In Tonga, the constellation was called by four names: "O-nga-tangata", "Tautanga-ufi", "Mamangi-Halahu", and "Mau-kuo-mau". Alpha and Beta Centauri were not named specifically by the people of Pukapuka or Tonga, but they were named by the people of Hawaii and the Tuamotus. In Hawaii, the name for Alpha Centauri was either "Melemele" or "Ka Maile-hope" and the name for Beta Centauri was either "Polapola" or "Ka Maile-mua". In the Tuamotu islands, Alpha was called "Na Kuhi" and Beta was called "Tere". 
The Pointer (α Centauri and β Centauri) is one of the asterisms used by Bugis sailors for navigation, called "bintoéng balué", meaning "the widowed-before-marriage". It is also called "bintoéng sallatang", meaning "southern star". Two United States Navy ships, and , were named after Centaurus, the constellation. The Centaurus is also the name of a mega mall and commercial/residential complex in Islamabad, Pakistan. Construction started in 2005 and the three 41-storey towers, the tallest structures today in Islamabad, were completed by late 2012. The shopping mall was officially opened on February 17, 2013. The Centaurus originally included a seven-star hotel, construction of which is yet to begin.
https://en.wikipedia.org/wiki?curid=6371
Impact crater An impact crater is an approximately circular depression in the surface of a planet, moon, or other solid body in the Solar System or elsewhere, formed by the hypervelocity impact of a smaller body. In contrast to volcanic craters, which result from explosion or internal collapse, impact craters typically have raised rims and floors that are lower in elevation than the surrounding terrain. Impact craters range from small, simple, bowl-shaped depressions to large, complex, multi-ringed impact basins. Meteor Crater is a well-known example of a small impact crater on Earth. Impact craters are the dominant geographic features on many solid Solar System objects including the Moon, Mercury, Callisto, Ganymede and most small moons and asteroids. On other planets and moons that experience more active surface geological processes, such as Earth, Venus, Mars, Europa, Io and Titan, visible impact craters are less common because they become eroded, buried or transformed by tectonics over time. Where such processes have destroyed most of the original crater topography, the terms impact structure or astrobleme are more commonly used. In early literature, before the significance of impact cratering was widely recognized, the terms cryptoexplosion or cryptovolcanic structure were often used to describe what are now recognized as impact-related features on Earth. The cratering records of very old surfaces, such as Mercury, the Moon, and the southern highlands of Mars, record a period of intense early bombardment in the inner Solar System around 3.9 billion years ago. The rate of crater production on Earth has since been considerably lower, but it is appreciable nonetheless; on average, Earth experiences one to three impacts large enough to produce a large crater every million years. This indicates that there should be far more relatively young craters on the planet than have been discovered so far. 
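The claim that many young craters await discovery is simple arithmetic: at a rate of one to three crater-forming impacts per million years, sustained over the roughly 500-million-year age span of most known craters, Earth should carry far more craters than the approximately 190 identified so far:

```python
# Expected versus identified terrestrial craters, using the impact rate and
# figures quoted in the surrounding text (1-3 crater-forming impacts per
# million years; most surviving craters younger than ~500 million years;
# ~190 craters identified).

rate_low, rate_high = 1, 3   # crater-forming impacts per million years
window_myr = 500             # preservation window in millions of years
identified = 190

expected_low = rate_low * window_myr
expected_high = rate_high * window_myr
print(f"expected: {expected_low}-{expected_high} craters; identified: {identified}")
```

Even the low end of this estimate exceeds the identified count severalfold, which is the basis for the statement above; erosion, burial, and the difficulty of surveying the sea floor account for much of the gap.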
The cratering rate in the inner solar system fluctuates as a consequence of collisions in the asteroid belt that create a family of fragments that are often sent cascading into the inner solar system. Formed in a collision 80 million years ago, the Baptistina family of asteroids is thought to have caused a large spike in the impact rate. Note that the rate of impact cratering in the outer Solar System could be different from the inner Solar System. Although Earth's active surface processes quickly destroy the impact record, about 190 terrestrial impact craters have been identified. These range in diameter from a few tens of meters up to about , and they range in age from recent times (e.g. the Sikhote-Alin craters in Russia whose creation was witnessed in 1947) to more than two billion years, though most are less than 500 million years old because geological processes tend to obliterate older craters. They are also selectively found in the stable interior regions of continents. Few undersea craters have been discovered because of the difficulty of surveying the sea floor, the rapid rate of change of the ocean bottom, and the subduction of the ocean floor into Earth's interior by processes of plate tectonics. Impact craters are not to be confused with landforms that may appear similar, including calderas, sinkholes, glacial cirques, ring dikes, salt domes, and others. Daniel M. Barringer, a mining engineer, was convinced that the crater he owned, Meteor Crater, was of cosmic origin. Yet, most geologists at the time assumed it formed as the result of a volcanic steam eruption. In the 1920s, the American geologist Walter H. Bucher studied a number of sites now recognized as impact craters in the United States. He concluded they had been created by some great explosive event, but believed that this force was probably volcanic in origin. However, in 1936, the geologists John D. Boon and Claude C. Albritton Jr. 
revisited Bucher's studies and concluded that the craters that he studied were probably formed by impacts. Grove Karl Gilbert suggested in 1893 that the Moon's craters were formed by large asteroid impacts. Ralph Baldwin in 1949 wrote that the Moon's craters were mostly of impact origin. Around 1960, Gene Shoemaker revived the idea. According to David H. Levy, Gene "saw the craters on the Moon as logical impact sites that were formed not gradually, in eons, but explosively, in seconds." For his Ph.D. degree at Princeton (1960), under the guidance of Harry Hammond Hess, Shoemaker studied the impact dynamics of Barringer Meteor Crater. Shoemaker noted Meteor Crater had the same form and structure as two explosion craters created from atomic bomb tests at the Nevada Test Site, notably Jangle U in 1951 and Teapot Ess in 1955. In 1960, Edward C. T. Chao and Shoemaker identified coesite at Meteor Crater, proving the crater was formed from an impact generating extremely high temperatures and pressures. They followed this discovery with the identification of coesite within suevite at Nördlinger Ries, proving its impact origin. Armed with the knowledge of shock-metamorphic features, Carlyle S. Beals and colleagues at the Dominion Astrophysical Observatory in Victoria, British Columbia, Canada and Wolf von Engelhardt of the University of Tübingen in Germany began a methodical search for impact craters. By 1970, they had tentatively identified more than 50. Although their work was controversial, the American Apollo Moon landings, which were in progress at the time, provided supportive evidence by recognizing the rate of impact cratering on the Moon. Because the processes of erosion on the Moon are minimal, craters persist. Since the Earth could be expected to have roughly the same cratering rate as the Moon, it became clear that the Earth had suffered far more impacts than could be seen by counting evident craters. 
Impact cratering involves high-velocity collisions between solid objects, typically at speeds much greater than the speed of sound in those objects. Such hypervelocity impacts produce physical effects such as melting and vaporization that do not occur in familiar subsonic collisions. On Earth, ignoring the slowing effects of travel through the atmosphere, the lowest impact velocity for an object from space is equal to the gravitational escape velocity of about 11 km/s. The fastest impacts occur at about 72 km/s in the "worst case" scenario in which an object in a retrograde near-parabolic orbit hits Earth. The median impact velocity on Earth is about 20 km/s. However, the slowing effects of travel through the atmosphere rapidly decelerate any potential impactor, especially in the lowest 12 kilometres, where 90% of Earth's atmospheric mass lies. Meteorites of up to 7,000 kg lose all their cosmic velocity due to atmospheric drag at a certain altitude (the retardation point), and then accelerate again under Earth's gravity until the body reaches its terminal velocity of 0.09 to 0.16 km/s. The larger the meteoroid (i.e. asteroids and comets), the more of its initial cosmic velocity it preserves. While an object of 9,000 kg maintains about 6% of its original velocity, one of 900,000 kg already preserves about 70%. Extremely large bodies (about 100,000 tonnes) are not slowed by the atmosphere at all, and impact with their initial cosmic velocity if no prior disintegration occurs. Impacts at these high speeds produce shock waves in solid materials, and both the impactor and the material impacted are rapidly compressed to high density. Following initial compression, the high-density, over-compressed region rapidly depressurizes, exploding violently, to set in train the sequence of events that produces the impact crater. Impact-crater formation is therefore more closely analogous to cratering by high explosives than by mechanical displacement.
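The velocity figures above translate directly into impact energy via the kinetic-energy relation E = ½mv². A minimal sketch (the 1,000-tonne impactor mass is a hypothetical choice for illustration; the velocities are the minimum, median, and maximum Earth-impact speeds quoted above):

```python
# Kinetic energy of a hypothetical impactor at the quoted impact velocities.
# TNT equivalence used for scale: 1 kt TNT = 4.184e12 J.

def impact_energy_joules(mass_kg, velocity_m_s):
    """Kinetic energy E = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_m_s ** 2

mass = 1.0e6  # hypothetical 1,000-tonne body, in kg
for v_km_s in (11, 20, 72):  # minimum, median, and "worst case" speeds
    e = impact_energy_joules(mass, v_km_s * 1000)
    print(f"{v_km_s:2d} km/s -> {e:.2e} J ({e / 4.184e12:.1f} kt TNT)")
```

Because energy grows with the square of velocity, the retrograde "worst case" impact carries roughly forty times the energy of a minimum-velocity impact for the same mass.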
Indeed, the energy density of some material involved in the formation of impact craters is many times higher than that generated by high explosives. Since craters are caused by explosions, they are nearly always circular – only very low-angle impacts cause significantly elliptical craters. This describes impacts on solid surfaces. Impacts on porous surfaces, such as that of Hyperion, may produce internal compression without ejecta, punching a hole in the surface without filling in nearby craters. This may explain the 'sponge-like' appearance of that moon. It is convenient to divide the impact process conceptually into three distinct stages: (1) initial contact and compression, (2) excavation, (3) modification and collapse. In practice, there is overlap between the three processes with, for example, the excavation of the crater continuing in some regions while modification and collapse is already underway in others. In the absence of atmosphere, the impact process begins when the impactor first touches the target surface. This contact accelerates the target and decelerates the impactor. Because the impactor is moving so rapidly, the rear of the object moves a significant distance during the short-but-finite time taken for the deceleration to propagate across the impactor. As a result, the impactor is compressed, its density rises, and the pressure within it increases dramatically. Peak pressures in large impacts exceed 1 TPa to reach values more usually found deep in the interiors of planets, or generated artificially in nuclear explosions. In physical terms, a shock wave originates from the point of contact. As this shock wave expands, it decelerates and compresses the impactor, and it accelerates and compresses the target. Stress levels within the shock wave far exceed the strength of solid materials; consequently, both the impactor and the target close to the impact site are irreversibly damaged. 
Many crystalline minerals can be transformed into higher-density phases by shock waves; for example, the common mineral quartz can be transformed into the higher-pressure forms coesite and stishovite. Many other shock-related changes take place within both impactor and target as the shock wave passes through, and some of these changes can be used as diagnostic tools to determine whether particular geological features were produced by impact cratering. As the shock wave decays, the shocked region decompresses towards more usual pressures and densities. The damage produced by the shock wave raises the temperature of the material. In all but the smallest impacts this increase in temperature is sufficient to melt the impactor, and in larger impacts to vaporize most of it and to melt large volumes of the target. As well as being heated, the target near the impact is accelerated by the shock wave, and it continues moving away from the impact behind the decaying shock wave. Contact, compression, decompression, and the passage of the shock wave all occur within a few tenths of a second for a large impact. The subsequent excavation of the crater occurs more slowly, and during this stage the flow of material is largely subsonic. During excavation, the crater grows as the accelerated target material moves away from the point of impact. The target's motion is initially downwards and outwards, but it becomes outwards and upwards. The flow initially produces an approximately hemispherical cavity that continues to grow, eventually producing a paraboloid (bowl-shaped) crater in which the centre has been pushed down, a significant volume of material has been ejected, and a topographically elevated crater rim has been pushed up. When this cavity has reached its maximum size, it is called the transient cavity. The depth of the transient cavity is typically a quarter to a third of its diameter. 
Ejecta thrown out of the crater do not include material excavated from the full depth of the transient cavity; typically the depth of maximum excavation is only about a third of the total depth. As a result, about one third of the volume of the transient crater is formed by the ejection of material, and the remaining two thirds is formed by the displacement of material downwards, outwards and upwards, to form the elevated rim. For impacts into highly porous materials, a significant crater volume may also be formed by the permanent compaction of the pore space. Such compaction craters may be important on many asteroids, comets and small moons. In large impacts, as well as material displaced and ejected to form the crater, significant volumes of target material may be melted and vaporized together with the original impactor. Some of this impact melt rock may be ejected, but most of it remains within the transient crater, initially forming a layer of impact melt coating the interior of the transient cavity. In contrast, the hot dense vaporized material expands rapidly out of the growing cavity, carrying some solid and molten material within it as it does so. As this hot vapor cloud expands, it rises and cools much like the archetypal mushroom cloud generated by large nuclear explosions. In large impacts, the expanding vapor cloud may rise to many times the scale height of the atmosphere, effectively expanding into free space. Most material ejected from the crater is deposited within a few crater radii, but a small fraction may travel large distances at high velocity, and in large impacts it may exceed escape velocity and leave the impacted planet or moon entirely. The majority of the fastest material is ejected from close to the center of impact, and the slowest material is ejected close to the rim at low velocities to form an overturned coherent flap of ejecta immediately outside the rim. 
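The quoted proportions (transient-cavity depth about a quarter to a third of its diameter, with roughly one third of the cavity volume ejected and two thirds displaced) can be turned into a quick volume budget. A minimal sketch, assuming a paraboloid cavity and a hypothetical 10 km transient-cavity diameter:

```python
import math

# Rough transient-cavity volume budget using the ratios quoted above:
# depth ~ 1/4 to 1/3 of diameter; ~1/3 of volume ejected, ~2/3 displaced.

def paraboloid_volume(diameter_km, depth_km):
    """Volume of a paraboloid of revolution: V = (1/2) * pi * r^2 * depth."""
    r = diameter_km / 2
    return 0.5 * math.pi * r * r * depth_km

d = 10.0  # hypothetical transient-cavity diameter, km
for ratio in (0.25, 1 / 3):  # quoted depth-to-diameter range
    v = paraboloid_volume(d, d * ratio)
    print(f"depth/diameter {ratio:.2f}: volume {v:.0f} km^3, "
          f"ejected ~{v / 3:.0f} km^3, displaced ~{2 * v / 3:.0f} km^3")
```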
As ejecta escapes from the growing crater, it forms an expanding curtain in the shape of an inverted cone. The trajectory of individual particles within the curtain is thought to be largely ballistic. Small volumes of un-melted and relatively un-shocked material may be spalled at very high relative velocities from the surface of the target and from the rear of the impactor. Spalling provides a potential mechanism whereby material may be ejected into inter-planetary space largely undamaged, and whereby small volumes of the impactor may be preserved undamaged even in large impacts. Small volumes of high-speed material may also be generated early in the impact by jetting. This occurs when two surfaces converge rapidly and obliquely at a small angle, and high-temperature highly shocked material is expelled from the convergence zone with velocities that may be several times larger than the impact velocity. In most circumstances, the transient cavity is not stable and collapses under gravity. In small craters, less than about 4 km diameter on Earth, there is some limited collapse of the crater rim coupled with debris sliding down the crater walls and drainage of impact melts into the deeper cavity. The resultant structure is called a simple crater, and it remains bowl-shaped and superficially similar to the transient crater. In simple craters, the original excavation cavity is overlain by a lens of collapse breccia, ejecta and melt rock, and a portion of the central crater floor may sometimes be flat. Above a certain threshold size, which varies with planetary gravity, the collapse and modification of the transient cavity is much more extensive, and the resulting structure is called a complex crater. The collapse of the transient cavity is driven by gravity, and involves both the uplift of the central region and the inward collapse of the rim. 
The central uplift is not the result of "elastic rebound", which is a process in which a material with elastic strength attempts to return to its original geometry; rather, the collapse is a process in which a material with little or no strength attempts to return to a state of gravitational equilibrium. Complex craters have uplifted centers, typically broad, flat, shallow crater floors, and terraced walls. At the largest sizes, one or more exterior or interior rings may appear, and the structure may be labeled an "impact basin" rather than an impact crater. Complex-crater morphology on rocky planets appears to follow a regular sequence with increasing size: small complex craters with a central topographic peak are called "central peak craters", for example Tycho; intermediate-sized craters, in which the central peak is replaced by a ring of peaks, are called "peak-ring craters", for example Schrödinger; and the largest craters contain multiple concentric topographic rings, and are called "multi-ringed basins", for example Orientale. On icy (as opposed to rocky) bodies, other morphological forms appear, which may have central pits rather than central peaks and at the largest sizes may contain many concentric rings. Valhalla on Callisto is an example of this type. Non-explosive volcanic craters can usually be distinguished from impact craters by their irregular shape and the association of volcanic flows and other volcanic materials. Impact craters produce melted rocks as well, but usually in smaller volumes and with different characteristics. The distinctive mark of an impact crater is the presence of rock that has undergone shock-metamorphic effects, such as shatter cones, melted rocks, and crystal deformations. The problem is that these materials tend to be deeply buried, at least for simple craters. They tend to be revealed in the uplifted center of a complex crater, however.
Impacts produce distinctive shock-metamorphic effects that allow impact sites to be reliably identified. On Earth, impact craters have resulted in useful minerals. Some of the ores produced from impact-related effects on Earth include ores of iron, uranium, gold, copper, and nickel. It is estimated that the value of materials mined from impact structures is five billion dollars per year for North America alone. The eventual usefulness of impact craters depends on several factors, especially the nature of the materials that were impacted and when the materials were affected. In some cases, the deposits were already in place and the impact brought them to the surface; these are called “progenetic economic deposits.” Others were created during the actual impact: the great energy involved caused melting, and useful minerals formed as a result of this energy are classified as “syngenetic deposits.” The third type, called “epigenetic deposits,” is caused by the creation of a basin from the impact. Many of the minerals that our modern lives depend on are associated with impacts in the past. The Vredefort Dome in the center of the Witwatersrand Basin is the largest goldfield in the world, which has supplied about 40% of all the gold ever mined from an impact structure. The asteroid that struck the region was wide. The Sudbury Basin was caused by an impacting body over in diameter. This basin is famous for its deposits of nickel, copper, and platinum-group elements. An impact was involved in making the Carswell structure in Saskatchewan, Canada; it contains uranium deposits. Hydrocarbons are common around impact structures. Fifty percent of the impact structures in North America that lie in hydrocarbon-bearing sedimentary basins contain oil or gas fields. Because of the many missions studying Mars since the 1960s, there is good coverage of its surface, which contains large numbers of craters.
Many of the craters on Mars differ from those on the Moon and other moons because Mars contains ice under the ground, especially in the higher latitudes. Some of the types of craters that have special shapes due to impact into ice-rich ground are pedestal craters, rampart craters, expanded craters, and LARLE craters. On Earth, the recognition of impact craters is a branch of geology, and is related to planetary geology in the study of other worlds. Out of many proposed craters, relatively few are confirmed. The following twenty are a sample of confirmed and well-documented impact sites. See the Earth Impact Database, a website concerned with 190 scientifically confirmed impact craters on Earth. There are approximately twelve more impact craters/basins larger than 300 km on the Moon, five on Mercury, and four on Mars. Large basins, some unnamed but mostly smaller than 300 km, can also be found on Saturn's moons Dione, Rhea and Iapetus.
https://en.wikipedia.org/wiki?curid=6416
Corona Borealis Corona Borealis is a small constellation in the Northern Celestial Hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Its brightest stars form a semicircular arc. Its Latin name, inspired by its shape, means "northern crown". In classical mythology Corona Borealis generally represented the crown given by the god Dionysus to the Cretan princess Ariadne and set by him in the heavens. Other cultures likened the pattern to a circle of elders, an eagle's nest, a bear's den, or even a smokehole. Ptolemy also listed a southern counterpart, Corona Australis, with a similar pattern. The brightest star is the magnitude 2.2 Alpha Coronae Borealis. The yellow supergiant R Coronae Borealis is the prototype of a rare class of giant stars—the R Coronae Borealis variables—that are extremely hydrogen deficient, and thought to result from the merger of two white dwarfs. T Coronae Borealis, also known as the Blaze Star, is another unusual type of variable star known as a recurrent nova. Normally of magnitude 10, it last flared up to magnitude 2 in 1946. ADS 9731 and Sigma Coronae Borealis are multiple star systems with six and five components respectively. Five star systems have been found to have Jupiter-sized exoplanets. Abell 2065 is a highly concentrated galaxy cluster one billion light-years from the Solar System containing more than 400 members, and is itself part of the larger Corona Borealis Supercluster. Covering 179 square degrees and hence 0.433% of the sky, Corona Borealis ranks 73rd of the 88 modern constellations by area. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 50°S. It is bordered by Boötes to the north and west, Serpens Caput to the south, and Hercules to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrB". 
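The 179-square-degree figure quoted above corresponds to the stated fraction of the sky, since the whole celestial sphere covers 4π steradians, or about 41,253 square degrees. A quick arithmetic check:

```python
import math

# The whole sky spans 4*pi steradians = (180/pi)^2 * 4*pi = 129600/pi sq deg.
TOTAL_SQ_DEG = 129600 / math.pi  # ~41,252.96 square degrees

def sky_fraction(area_sq_deg):
    """Fraction of the celestial sphere covered by the given area."""
    return area_sq_deg / TOTAL_SQ_DEG

# Corona Borealis covers 179 sq deg -> ~0.434%, matching the quoted
# 0.433% figure to rounding.
print(f"{sky_fraction(179) * 100:.3f}%")
```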
The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of eight segments ("illustrated in infobox"). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 39.71° and 25.54°. It has a counterpart—Corona Australis—in the Southern Celestial Hemisphere. The seven stars that make up the constellation's distinctive crown-shaped pattern are all 4th-magnitude stars except for the brightest of them, Alpha Coronae Borealis. The other six stars are Theta, Beta, Gamma, Delta, Epsilon and Iota Coronae Borealis. The German cartographer Johann Bayer gave twenty stars in Corona Borealis Bayer designations from Alpha to Upsilon in his 1603 star atlas "Uranometria". Zeta Coronae Borealis was noted to be a double star by later astronomers and its components designated Zeta1 and Zeta2. John Flamsteed did likewise with Nu Coronae Borealis; classed by Bayer as a single star, it was noted to be two close stars by Flamsteed. He named them 20 and 21 Coronae Borealis in his catalogue, alongside the designations Nu1 and Nu2 respectively. Chinese astronomers deemed nine stars to make up the asterism, adding Pi and Rho Coronae Borealis. Within the constellation's borders, there are 37 stars brighter than or equal to apparent magnitude 6.5. Alpha Coronae Borealis (officially named Alphecca by the IAU, but sometimes also known as Gemma) appears as a blue-white star of magnitude 2.2. In fact, it is an Algol-type eclipsing binary that varies by 0.1 magnitude with a period of 17.4 days. The primary is a white main-sequence star of spectral type A0V that is 2.91 times the mass of the Sun () and 57 times as luminous (), and is surrounded by a debris disk out to a radius of around 60 astronomical units (AU). The secondary companion is a yellow main-sequence star of spectral type G5V that is a little smaller (0.9 times) the diameter of the Sun. 
Lying 75±0.5 light-years from Earth, Alphecca is believed to be a member of the Ursa Major Moving Group of stars that have a common motion through space. Located 112±3 light-years away, Beta Coronae Borealis or Nusakan is a spectroscopic binary system whose two components are separated by 10 AU and orbit each other every 10.5 years. The brighter component is a rapidly oscillating Ap star, pulsating with a period of 16.2 minutes. Of spectral type A5V with a surface temperature of around 7980 K, it has around , 2.6 solar radii (), and . The smaller star is of spectral type F2V with a surface temperature of around 6750 K, and has around , , and between 4 and . Near Nusakan is Theta Coronae Borealis, a binary system that shines with a combined magnitude of 4.13 located 380±20 light-years distant. The brighter component, Theta Coronae Borealis A, is a blue-white star that spins extremely rapidly—at a rate of around 393 km per second. A Be star, it is surrounded by a debris disk. Flanking Alpha to the east is Gamma Coronae Borealis, yet another binary star system, whose components orbit each other every 92.94 years and are roughly as far apart from each other as the Sun and Neptune. The brighter component has been classed as a Delta Scuti variable star, though this view is not universal. The components are main sequence stars of spectral types B9V and A3V. Located 170±2 light-years away, 4.06-magnitude Delta Coronae Borealis is a yellow giant star of spectral type G3.5III that is around and has swollen to . It has a surface temperature of 5180 K. For most of its existence, Delta Coronae Borealis was a blue-white main-sequence star of spectral type B before it ran out of hydrogen fuel in its core. Its luminosity and spectrum suggest it has just crossed the Hertzsprung gap, having finished burning core hydrogen and just begun burning hydrogen in a shell that surrounds the core. 
Zeta Coronae Borealis is a double star with two blue-white components 6.3 arcseconds apart that can be readily separated at 100x magnification. The primary is of magnitude 5.1 and the secondary is of magnitude 6.0. Nu Coronae Borealis is an optical double, whose components are a similar distance from Earth but have different radial velocities, hence are assumed to be unrelated. The primary, Nu1 Coronae Borealis, is a red giant of spectral type M2III and magnitude 5.2, lying 640±30 light-years distant, and the secondary, Nu2 Coronae Borealis, is an orange-hued giant star of spectral type K5III and magnitude 5.4, estimated to be 590±30 light-years away. Sigma Coronae Borealis, on the other hand, is a true multiple star system divisible by small amateur telescopes. It is actually a complex system composed of two stars around as massive as the Sun that orbit each other every 1.14 days, orbited by a third Sun-like star every 726 years. The fourth and fifth components are a binary red dwarf system that is 14,000 AU distant from the other three stars. ADS 9731 is an even rarer multiple system in the constellation, composed of six stars, two of which are spectroscopic binaries. Corona Borealis is home to two remarkable variable stars. T Coronae Borealis is a cataclysmic variable star also known as the Blaze Star. Normally placid around magnitude 10—it has a minimum of 10.2 and maximum of 9.9—it brightens to magnitude 2 in a period of hours, caused by a nuclear chain reaction and the subsequent explosion. T Coronae Borealis is one of a handful of stars called recurrent novae, which include T Pyxidis and U Scorpii. An outburst of T Coronae Borealis was first recorded in 1866; its second recorded outburst was in February 1946. T Coronae Borealis is a binary star with a red-hued giant primary and a white dwarf secondary, the two stars orbiting each other over a period of approximately 8 months. 
R Coronae Borealis is a yellow-hued variable supergiant star, over 7000 light-years from Earth, and prototype of a class of stars known as R Coronae Borealis variables. Normally of magnitude 6, its brightness periodically drops as low as magnitude 15 and then slowly increases over the next several months. These declines in magnitude come about as dust that has been ejected from the star obscures it. Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 AU from the star, corresponding with a stream of fine dust (composed of grains 5 nm in diameter) associated with the star's stellar wind and coarser dust (composed of grains with a diameter of around 0.14 µm) ejected periodically. There are several other variables of reasonable brightness for amateur astronomers to observe, including three Mira-type long-period variables: S Coronae Borealis ranges between magnitudes 5.8 and 14.1 over a period of 360 days. Located around 1946 light-years distant, it shines with a luminosity 16,643 times that of the Sun and has a surface temperature of 3033 K. One of the reddest stars in the sky, V Coronae Borealis is a cool star with a surface temperature of 2877 K that shines with a luminosity 102,831 times that of the Sun and is a remote 8810 light-years distant from Earth. Varying between magnitudes 6.9 and 12.6 over a period of 357 days, it is located near the junction of the border of Corona Borealis with Hercules and Boötes. Located 1.5° northeast of Tau Coronae Borealis, W Coronae Borealis ranges between magnitudes 7.8 and 14.3 over a period of 238 days. Another red giant, RR Coronae Borealis is a M3-type semiregular variable star that varies between magnitudes 7.3 and 8.2 over 60.8 days. RS Coronae Borealis is yet another semiregular variable red giant, which ranges between magnitudes 8.7 and 11.6 over 332 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year).
Meanwhile, U Coronae Borealis is an Algol-type eclipsing binary star system whose magnitude varies between 7.66 and 8.79 over a period of 3.45 days. TY Coronae Borealis is a pulsating white dwarf (of ZZ Ceti type), which is around 70% as massive as the Sun, yet has only 1.1% of its diameter. Discovered in 1990, UW Coronae Borealis is a low-mass X-ray binary system composed of a star less massive than the Sun and a neutron star surrounded by an accretion disk that draws material from the companion star. It varies in brightness in an unusually complex manner: the two stars orbit each other every 111 minutes, yet there is another cycle of 112.6 minutes, which corresponds to the orbit of the disk around the degenerate star. The beat period of 5.5 days indicates the time the accretion disk—which is asymmetrical—takes to precess around the star. Extrasolar planets have been confirmed in five star systems, four of which were found by the radial velocity method. The spectrum of Epsilon Coronae Borealis was analysed for seven years from 2005 to 2012, revealing a planet around 6.7 times as massive as Jupiter () orbiting every 418 days at an average distance of around 1.3 AU. Epsilon itself is an orange giant of spectral type K2III that has swollen to and . Kappa Coronae Borealis is a spectral type K1IV orange subgiant nearly twice as massive as the Sun; around it lie a dust debris disk, and one planet with a period of 3.4 years. This planet's mass is estimated at . The dimensions of the debris disk indicate it is likely there is a second substellar companion. Omicron Coronae Borealis is a K-type clump giant with one confirmed planet with a mass of that orbits every 187 days—one of the two least massive planets known around clump giants. HD 145457 is an orange giant of spectral type K0III found to have one planet of . Discovered by the Doppler method in 2010, it takes 176 days to complete an orbit.
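The beat period quoted for UW Coronae Borealis follows from its two nearly equal cycles via the standard relation 1/P_beat = 1/P_short - 1/P_long. A minimal check using the periods quoted above:

```python
# Beat period of two nearly equal periods: 1/P_beat = 1/P_short - 1/P_long.

def beat_period(p_short, p_long):
    """Beat period in the same units as the inputs (requires p_long > p_short)."""
    return (p_short * p_long) / (p_long - p_short)

p_orbit = 111.0  # minutes: orbital period of the two stars
p_disk = 112.6   # minutes: cycle tied to the precessing accretion disk
days = beat_period(p_orbit, p_disk) / (60 * 24)
print(f"beat period: {days:.1f} days")  # ~5.4 days, close to the quoted 5.5
```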
XO-1 is a magnitude 11 yellow main-sequence star located approximately light-years away, of spectral type G1V with a mass and radius similar to the Sun. In 2006 the hot Jupiter exoplanet XO-1b was discovered orbiting XO-1 by the transit method using the XO Telescope. Roughly the size of Jupiter, it completes an orbit around its star every three days. The discovery of a Jupiter-sized planetary companion was announced in 1997 via analysis of the radial velocity of Rho Coronae Borealis, a yellow main sequence star and Solar analog of spectral type G0V, around 57 light-years distant from Earth. More accurate measurement of data from the Hipparcos satellite subsequently showed it instead to be a low-mass star somewhere between 100 and 200 times the mass of Jupiter. Possible stable planetary orbits in the habitable zone were calculated for the binary star Eta Coronae Borealis, which is composed of two stars—yellow main sequence stars of spectral type G1V and G3V respectively—similar in mass and spectrum to the Sun. No planet has been found, but a brown dwarf companion about 63 times as massive as Jupiter with a spectral type of L8 was discovered at a distance of 3640 AU from the pair in 2001. Corona Borealis contains few galaxies observable with amateur telescopes. NGC 6085 and 6086 are a faint spiral and elliptical galaxy respectively close enough to each other to be seen in the same visual field through a telescope. Abell 2142 is a huge (six million light-year diameter), X-ray luminous galaxy cluster that is the result of an ongoing merger between two galaxy clusters. It has a redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light-years away. Another galaxy cluster in the constellation, RX J1532.9+3021, is approximately 3.9 billion light-years from Earth. 
At the cluster's center is a large elliptical galaxy containing one of the most massive and most powerful supermassive black holes yet discovered. Abell 2065 is a highly concentrated galaxy cluster containing more than 400 members, the brightest of which are 16th magnitude; the cluster is more than one billion light-years from Earth. On a larger scale still, Abell 2065, along with Abell 2061, Abell 2067, Abell 2079, Abell 2089, and Abell 2092, makes up the Corona Borealis Supercluster. Another galaxy cluster, Abell 2162, is a member of the Hercules Superclusters. In Greek mythology, Corona Borealis was linked to the legend of Theseus and the Minotaur. It was generally considered to represent a crown given by Dionysus to Ariadne, the daughter of Minos of Crete, after she had been abandoned by the Athenian prince Theseus. When she wore the crown at her marriage to Dionysus, he placed it in the heavens to commemorate their wedding. An alternative version has the besotted Dionysus give the crown to Ariadne, who in turn gives it to Theseus after he arrives in Crete to kill the Minotaur, which the Cretans had demanded tribute from Athens to feed. The hero uses the crown's light to escape the labyrinth after disposing of the creature, and Dionysus later sets it in the heavens. The Latin author Hyginus linked it to a crown or wreath worn by Bacchus (Dionysus) to disguise his appearance when first approaching Mount Olympus and revealing himself to the gods, having previously been hidden as yet another child of Jupiter's trysts with a mortal, in this case Semele. Corona Borealis was one of the 48 constellations mentioned in the "Almagest" of classical astronomer Ptolemy. In Welsh mythology, it was called Caer Arianrhod, "the Castle of the Silver Circle", and was the heavenly abode of the Lady Arianrhod. To the ancient Balts, Corona Borealis was known as "Darželis", the "flower garden".
The Arabs called the constellation Alphecca (a name later given to Alpha Coronae Borealis), which means "separated" or "broken up", a reference to the resemblance of the stars of Corona Borealis to a loose string of jewels. This was also interpreted as a broken dish. Among the Bedouins, the constellation was known as "the dish/bowl of the poor people". The Skidi people of North America saw the stars of Corona Borealis as representing a council of stars whose chief was Polaris. The constellation also symbolised the smokehole over a fireplace, which conveyed their messages to the gods, as well as how chiefs should come together to consider matters of importance. The Shawnee people saw the stars as the "Heavenly Sisters", who descended from the sky every night to dance on earth. Alphecca signifies the youngest and most comely sister, who was seized by a hunter who transformed into a field mouse to get close to her. They married, though she later returned to the sky, with her heartbroken husband and son following later. The Mi'kmaq of eastern Canada saw Corona Borealis as "Mskegwǒm", the den of the celestial bear (Alpha, Beta, Gamma and Delta Ursae Majoris). Polynesian peoples often recognized Corona Borealis; the people of the Tuamotus named it "Na Kaua-ki-tokerau" and probably "Te Hetu". The constellation was likely called "Kaua-mea" in Hawaii, "Rangawhenua" in New Zealand, and "Te Wale-o-Awitu" in the Cook Islands atoll of Pukapuka. Its name in Tonga was uncertain; it was either called "Ao-o-Uvea" or "Kau-kupenga". In Australian Aboriginal astronomy, the constellation is called "womera" ("the boomerang") due to the shape of the stars. The Wailwun people of northwestern New South Wales saw Corona Borealis as "mullion wollai", "eagle's nest", with Altair and Vega—each called "mullion"—the pair of eagles accompanying it. 
The Wardaman people of northern Australia held the constellation to be a gathering point where Men's Law, Women's Law and the Law of both sexes come together to consider matters of existence. Corona Borealis was renamed Corona Firmiana in honour of the Archbishop of Salzburg in the 1730 Atlas "Mercurii Philosophicii Firmamentum Firminianum Descriptionem" by Corbinianus Thomas, but this was not taken up by subsequent cartographers. The constellation was featured as a main plot ingredient in the short story "Hypnos" by H. P. Lovecraft, published in 1923; it is the object of fear of one of the protagonists in the short story. Finnish band Cadacross released an album titled "Corona Borealis" in 2002.
https://en.wikipedia.org/wiki?curid=6420
Cygnus (constellation) Cygnus is a northern constellation lying on the plane of the Milky Way, deriving its name from the Latinized Greek word for swan. Cygnus is one of the most recognizable constellations of the northern summer and autumn, and it features a prominent asterism known as the Northern Cross (in contrast to the Southern Cross). Cygnus was among the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations. Cygnus contains Deneb (ذنب, translit. "ḏanab", meaning "tail"), one of the brightest stars in the night sky and the most distant first-magnitude star, as its "tail star" and one corner of the Summer Triangle. It also has some notable X-ray sources and the giant stellar association of Cygnus OB2. One of the stars of this association, NML Cygni, is one of the largest stars currently known. The constellation is also home to Cygnus X-1, a distant X-ray binary containing a supergiant and unseen massive companion that was the first object widely held to be a black hole. Many star systems in Cygnus have known planets as a result of the Kepler Mission observing one patch of the sky in the area of Cygnus. In the deep sky, the eastern part of the constellation contains part of the Hercules–Corona Borealis Great Wall, a giant galaxy filament that is the largest known structure in the observable universe, covering most of the northern sky. In Hinduism, the period of time (or Muhurta) between 4:24 AM and 5:12 AM is called the Brahmamuhurtha, which means "the moment of the Universe"; the star system associated with it is the Cygnus constellation. This is believed to be a highly auspicious time to meditate, do any task, or start the day. In Polynesia, Cygnus was often recognized as a separate constellation. In Tonga it was called "Tuula-lupe", and in the Tuamotus it was called "Fanui-tai". 
In New Zealand it was called "Mara-tea", in the Society Islands it was called "Pirae-tea" or "Taurua-i-te-haapa-raa-manu", and in the Tuamotus it was called "Fanui-raro". Beta Cygni was named in New Zealand; it was likely called "Whetu-kaupo". Gamma Cygni was called "Fanui-runga" in the Tuamotus. Deneb was also often a given name in the Islamic world of astronomy. The name "Deneb" comes from the Arabic name "dhaneb", meaning "tail", from the phrase "Dhanab ad-Dajājah", which means "the tail of the hen". In Greek mythology, Cygnus has been identified with several different legendary swans. Zeus disguised himself as a swan to seduce Leda, Spartan king Tyndareus's wife, who gave birth to the Gemini, Helen of Troy, and Clytemnestra; Orpheus was transformed into a swan after his murder, and was said to have been placed in the sky next to his lyre (Lyra); and King Cygnus was transformed into a swan. The Greeks also associated this constellation with the tragic story of Phaethon, the son of Helios the sun god, who demanded to ride his father's sun chariot for a day. Phaethon, however, was unable to control the reins, forcing Zeus to destroy the chariot (and Phaethon) with a thunderbolt, causing it to plummet to the earth into the river Eridanus. According to the myth, Phaethon's close friend or lover, Cygnus, grieved bitterly and spent many days diving into the river to collect Phaethon's bones to give him a proper burial. The gods were so touched by Cygnus's devotion that they turned him into a swan and placed him among the stars. In Ovid's "Metamorphoses", there are three people named Cygnus, all of whom are transformed into swans. Alongside Cygnus, noted above, he mentions a boy from Tempe who commits suicide when Phyllius refuses to give him a tamed bull that he demands, but is transformed into a swan and flies away. 
He also mentions a son of Neptune, an invulnerable warrior in the Trojan War who is eventually defeated by Achilles but saved by Neptune, who transforms him into a swan. Together with other avian constellations near the summer solstice, Vultur cadens and Aquila, Cygnus may be a significant part of the origin of the myth of the Stymphalian Birds, one of The Twelve Labours of Hercules. A very large constellation, Cygnus is bordered by Cepheus to the north and east, Draco to the north and west, Lyra to the west, Vulpecula to the south, Pegasus to the southeast and Lacerta to the east. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Cyg". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined as a polygon of 28 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 27.73° and 61.36°. Covering 804 square degrees and around 1.9% of the night sky, Cygnus ranks 16th of the 88 constellations in size. Cygnus culminates at midnight on 29 June, and is most visible in the evening from the early summer to mid-autumn in the Northern Hemisphere. Normally, Cygnus is depicted with Delta and Epsilon Cygni as its wings. Deneb, the brightest star in the constellation, is at its tail, and Albireo is at the tip of its beak. There are several asterisms in Cygnus. In the 17th-century German celestial cartographer Johann Bayer's star atlas the "Uranometria", Alpha, Beta and Gamma Cygni form the pole of a cross, while Delta and Epsilon form the cross beam. The nova P Cygni was then considered to be the body of Christ. Bayer catalogued many stars in the constellation, giving them the Bayer designations from Alpha to Omega and then using lowercase Roman letters through g. 
John Flamsteed added the Roman letters h, i, k, l and m (these stars were considered "informes" by Bayer as they lay outside the asterism of Cygnus), but they were dropped by Francis Baily. There are several bright stars in Cygnus. Alpha Cygni, called Deneb, is the brightest star in Cygnus. It is a white supergiant star of spectral type A2Iae that varies between magnitudes 1.21 and 1.29, one of the largest and most luminous A-class stars known. It is located about 3200 light-years away. Its traditional name means "tail" and refers to its position in the constellation. Albireo, designated Beta Cygni, is a celebrated binary star among amateur astronomers for its contrasting hues. The primary is an orange-hued giant star of magnitude 3.1 and the secondary is a blue-green hued star of magnitude 5.1. The system is 380 light-years away and is visible in large binoculars and all amateur telescopes. Gamma Cygni, traditionally named Sadr, is a yellow-tinged supergiant star of magnitude 2.2, 1500 light-years away. Its traditional name means "breast" and refers to its position in the constellation. Delta Cygni (the proper name is Fawaris) is another bright binary star in Cygnus, 171 light-years away, with an orbital period of 800 years. The primary is a blue-white hued giant star of magnitude 2.9, and the secondary is a star of magnitude 6.6. The two components are visible in a medium-sized amateur telescope. The fifth star in Cygnus above magnitude 3 is Aljanah, designated Epsilon Cygni. It is an orange-hued giant star of magnitude 2.5, 72 light-years from Earth. There are several other dimmer double and binary stars in Cygnus. Mu Cygni is a binary star with an optical tertiary component. The binary system has a period of 790 years and is 73 light-years from Earth. The primary and secondary, both white stars, are of magnitude 4.8 and 6.2, respectively. The unrelated tertiary component is of magnitude 6.9. 
Though the tertiary component is visible in binoculars, the primary and secondary currently require a medium-sized amateur telescope to split, as they will through the year 2020. The two stars will be closest between 2043 and 2050, when they will require a telescope with larger aperture to split. The stars 30 and 31 Cygni form a contrasting double star similar to the brighter Albireo. The two are visible in binoculars. The primary, 31 Cygni, is an orange-hued star of magnitude 3.8, 1400 light-years from Earth. The secondary, 30 Cygni, appears blue-green. It is of spectral type A5IIIn and magnitude 4.83, and is around 610 light-years from Earth. 31 Cygni itself is a binary star; the tertiary component is a blue star of magnitude 7.0. Psi Cygni is a binary star visible in small amateur telescopes, with two white components. The primary is of magnitude 5.0 and the secondary is of magnitude 7.5. 61 Cygni is a binary star visible in large binoculars or a small amateur telescope. It is 11.4 light-years from Earth and has a period of 750 years. Both components are orange-hued dwarf (main sequence) stars; the primary is of magnitude 5.2 and the secondary is of magnitude 6.1. 61 Cygni is significant because Friedrich Wilhelm Bessel determined its parallax in 1838, the first star to have a known parallax. Located near Eta Cygni is the X-ray source Cygnus X-1, which is now thought to be caused by a black hole accreting matter in a binary star system. This was the first X-ray source widely believed to be a black hole. Cygnus also contains several other noteworthy X-ray sources. Cygnus X-3 is a microquasar containing a Wolf–Rayet star in orbit around a very compact object, with a period of only 4.8 hours. The system is one of the most intrinsically luminous X-ray sources observed. The system undergoes periodic outbursts of unknown nature, and during one such outburst, the system was found to be emitting muons, likely caused by neutrinos. 
While the compact object is thought to be a neutron star or possibly a black hole, it is possible that the object is instead a more exotic stellar remnant, possibly the first discovered quark star, hypothesized due to its production of cosmic rays that cannot be explained if the object is a normal neutron star. The system also emits cosmic rays and gamma rays, and has helped shed light on the formation of such rays. Cygnus X-2 is another X-ray binary, containing an A-type giant in orbit around a neutron star with a 9.8 day period. The system is interesting due to the rather small mass of the companion star, as most millisecond pulsars have much more massive companions. Another black hole in Cygnus is V404 Cygni, which consists of a K-type star orbiting around a black hole of around 12 solar masses. The black hole, similar to that of Cygnus X-3, has been hypothesized to be a quark star. 4U 2129+47 is another X-ray binary containing a neutron star which undergoes outbursts, as is EXO 2030+375. Cygnus is also home to several variable stars. SS Cygni is a dwarf nova which undergoes outbursts every 7–8 days. The system's total magnitude varies from 12th magnitude at its dimmest to 8th magnitude at its brightest. The two objects in the system are incredibly close together, with an orbital period of less than 0.28 days. Chi Cygni is a red giant and the second-brightest Mira variable star at its maximum. It ranges between magnitudes 3.3 and 14.2, and spectral types S6,2e to S10,4e (MSe) over a period of 408 days; it has a diameter of 300 solar diameters and is 350 light-years from Earth. P Cygni is a luminous blue variable that brightened suddenly to 3rd magnitude in 1600 AD. Since 1715, the star has been of 5th magnitude, despite being more than 5000 light-years from Earth. The star's spectrum is unusual in that it contains very strong emission lines resulting from surrounding nebulosity. 
W Cygni is a semi-regular variable red giant star, 618 light-years from Earth. It has a maximum magnitude of 5.10, a minimum magnitude of 6.83, and a period of 131 days; it is a red giant ranging between spectral types M4e and M6e(Tc:)III. NML Cygni is a red hypergiant semi-regular variable star located about 5,300 light-years away from Earth. It is one of the largest stars currently known in the galaxy, with a radius exceeding 1,000 solar radii; its magnitude is around 16.6 and its period is about 940 days. Cygnus contains the binary star system KIC 9832227. It is predicted that the two stars will coalesce in about 2022, briefly forming a new naked-eye object. Cygnus is one of the constellations that the Kepler satellite surveyed in its search for extrasolar planets, and as a result, there are about a hundred stars in Cygnus with known planets, the most of any constellation. One of the most notable systems is the Kepler-11 system, containing six transiting planets, all within a plane of approximately one degree. With a spectral type of G6V, the star is somewhat cooler than the Sun. The planets are very close to the star; all but the last planet are closer to Kepler-11 than Mercury is to the Sun, and all the planets are more massive than Earth. The naked-eye star 16 Cygni, a triple star approximately 70 light-years from Earth composed of two Sun-like stars and a red dwarf, contains a planet orbiting one of the Sun-like stars, found due to variations in the star's radial velocity. Gliese 777, another naked-eye multiple star system containing a yellow star and a red dwarf, also contains a planet. The planet is somewhat similar to Jupiter, but with slightly more mass and a more eccentric orbit. The Kepler-22 system is also notable, in that its extrasolar planet is believed to be the first "Earth-twin" planet ever discovered. 
There is an abundance of deep-sky objects, with many open clusters, nebulae of various types and supernova remnants found in Cygnus due to its position on the Milky Way. Some open clusters can be difficult to make out from a rich background of stars. M39 (NGC 7092) is an open cluster 950 light-years from Earth that is visible to the unaided eye under dark skies. It is loose, with about 30 stars arranged over a wide area; their conformation appears triangular. The brightest stars of M39 are of the 7th magnitude. Another open cluster in Cygnus is NGC 6910, also called the Rocking Horse Cluster, possessing 16 stars with a diameter of 5 arcminutes visible in a small amateur instrument; it is of magnitude 7.4. The brightest of these are two gold-hued stars, which represent the bottom of the toy it is named for. A larger amateur instrument reveals 8 more stars, nebulosity to the east and west of the cluster, and a diameter of 9 arcminutes. The nebulosity in this region is part of the Gamma Cygni Nebula. The other stars, approximately 3700 light-years from Earth, are mostly blue-white and very hot. Other open clusters in Cygnus include Dolidze 9, Collinder 421, Dolidze 11, and Berkeley 90. Dolidze 9, 2800 light-years from Earth and relatively young at 20 million years old, is a faint open cluster with up to 22 stars visible in small and medium-sized amateur telescopes. Nebulosity is visible to the north and east of the cluster, which is 7 arcminutes in diameter. The brightest star appears in the eastern part of the cluster and is of the 7th magnitude; another bright star has a yellow hue. Dolidze 11 is an open cluster 400 million years old, farthest away of the three at 3700 light-years. More than 10 stars are visible in an amateur instrument in this cluster, of similar size to Dolidze 9 at 7 arcminutes in diameter, whose brightest star is of magnitude 7.5. It, too, has nebulosity in the east. 
Collinder 421 is a particularly old open cluster at an age of approximately 1 billion years; it is of magnitude 10.1. Lying 3100 light-years from Earth, it has more than 30 stars visible within a diameter of 8 arcminutes. The prominent star in the north of the cluster has a golden color, whereas the stars in the south of the cluster appear orange. Collinder 421 appears to be embedded in nebulosity, which extends past the cluster's borders to its west. Berkeley 90 is a smaller open cluster, with a diameter of 5 arcminutes. More than 16 members appear in an amateur telescope. NGC 6826, the Blinking Planetary Nebula, is a planetary nebula with a magnitude of 8.5, 3200 light-years from Earth. It appears to "blink" in the eyepiece of a telescope because its central star is unusually bright (10th magnitude). When an observer focuses on the star, the nebula appears to fade away. Less than one degree from the Blinking Planetary is the double star 16 Cygni. The North America Nebula (NGC 7000) is one of the most well-known nebulae in Cygnus, because it is visible to the unaided eye under dark skies, as a bright patch in the Milky Way. However, its characteristic shape is only visible in long-exposure photographs – it is difficult to observe in telescopes because of its low surface brightness. It has low surface brightness because it is so large; at its widest, the North America Nebula is 2 degrees across. Illuminated by a hot embedded star of magnitude 6, NGC 7000 is 1500 light-years from Earth. To the south of Epsilon Cygni is the Veil Nebula (NGC 6960, 6962, 6979, 6992, and 6995), a 5,000-year-old supernova remnant covering approximately 3 degrees of the sky; it is over 50 light-years long. Because of its appearance, it is also called the Cygnus Loop. The Loop is only visible in long-exposure astrophotographs. However, the brightest portion, NGC 6992, is faintly visible in binoculars, and a dimmer portion, NGC 6960, is visible in wide-angle telescopes. 
The DR 6 cluster is also nicknamed the "Galactic Ghoul" because of the nebula's resemblance to a human face. The Northern Coalsack Nebula, also called the Cygnus Rift, is a dark nebula located in the Cygnus Milky Way. The Gamma Cygni Nebula (IC 1318) includes both bright and dark nebulae in an area of over 4 degrees. DWB 87 is another of the many bright emission nebulae in Cygnus, 7.8 by 4.3 arcminutes. It is in the Gamma Cygni area. Two other emission nebulae include Sharpless 2-112 and Sharpless 2-115. When viewed in an amateur telescope, Sharpless 2-112 appears to be in a teardrop shape. More of the nebula's eastern portion is visible with an O III (doubly ionized oxygen) filter. There is an orange star of magnitude 10 nearby and a star of magnitude 9 near the nebula's northwest edge. Further to the northwest, there is a dark rift and another bright patch. The whole nebula measures 15 arcminutes in diameter. Sharpless 2-115 is another emission nebula with a complex pattern of light and dark patches. Two pairs of stars appear in the nebula; it is larger near the southwestern pair. The open cluster Berkeley 90 is embedded in this large nebula, which measures 30 by 20 arcminutes. Also of note is the Crescent Nebula (NGC 6888), located between Gamma and Eta Cygni, which was formed by the Wolf–Rayet star HD 192163. In recent years, amateur astronomers have made some notable Cygnus discoveries. The "Soap bubble nebula" (PN G75.5+1.7), near the Crescent nebula, was discovered on a digital image by Dave Jurasevich in 2007. In 2011, Austrian amateur Matthias Kronberger discovered a planetary nebula (Kronberger 61, now nicknamed "The Soccer Ball") on old survey photos, confirmed recently in images by the Gemini Observatory; both of these are likely too faint to be detected by eye in a small amateur scope. 
But a much more obscure and relatively 'tiny' object—one which is readily seen in dark skies by amateur telescopes, under good conditions—is the newly discovered nebula (likely reflection type) associated with the star 4 Cygni (HD 183056): an approximately fan-shaped glowing region of several arcminutes' diameter, to the south and west of the fifth-magnitude star. It was first discovered visually near San Jose, California and publicly reported by amateur astronomer Stephen Waldee in 2007, and was confirmed photographically by Al Howard in 2010. California amateur astronomer Dana Patchick also says he detected it on the Palomar Observatory survey photos in 2005 but had not published it for others to confirm and analyze at the time of Waldee's first official notices and later 2010 paper. Cygnus X is the largest star-forming region in the Solar neighborhood and includes not only some of the brightest and most massive stars known (such as Cygnus OB2-12), but also Cygnus OB2, a massive stellar association classified by some authors as a young globular cluster. More supernovae have been seen in the Fireworks Galaxy (NGC 6946) than in any other galaxy. Cygnus A is the first radio galaxy discovered; at a distance of 730 million light-years from Earth, it is the closest powerful radio galaxy. In the visible spectrum, it appears as an elliptical galaxy in a small cluster. It is classified as an active galaxy because the supermassive black hole at its nucleus is accreting matter, which produces two jets of matter from the poles. The jets' interaction with the interstellar medium creates radio lobes, one source of radio emissions. Cygnus is also the apparent source of the WIMP-wind due to the orientation of the solar system's rotation through the galactic halo.
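Bessel's 1838 parallax measurement of 61 Cygni, mentioned above, converts to a distance directly through d (parsecs) = 1 / p (arcseconds). A minimal sketch; the parallax value of roughly 0.286 arcseconds is an assumed approximate modern figure, not one stated in the text:

```python
LY_PER_PARSEC = 3.26156  # light-years in one parsec

def distance_ly(parallax_arcsec: float) -> float:
    """Distance in light-years implied by an annual parallax in arcseconds."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# With an assumed parallax of ~0.286 arcsec, 61 Cygni comes out at roughly
# 11.4 light-years, consistent with the distance quoted above.
d = distance_ly(0.286)
```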
https://en.wikipedia.org/wiki?curid=6421
Calorie The calorie is a unit of energy widely used in nutrition. For historical reasons, two main definitions of calorie are in wide use. The small calorie or gram calorie (usually denoted cal) is the amount of heat energy needed to raise the temperature of one "gram" of water by one degree Celsius (or one kelvin). The large calorie, food calorie, or kilocalorie (Cal, calorie or kcal) is the amount of heat needed to cause the same increase in one "kilogram" of water. Thus, 1 kilocalorie (kcal) = 1000 calories (cal). By convention in food science, the large calorie is commonly called calorie (written with a capital C by some authors to distinguish it from the smaller unit). In most countries, labels of industrialized food products are required to indicate the nutritional energy value in (kilo or large) calories per serving or per weight. The calorie relates directly to the metric system, and therefore to the SI system. It has been regarded as obsolete within the scientific community since the adoption of the SI system, but is still in some use. The SI unit of energy is the joule, with symbol "J": one small calorie is defined as exactly 4.184 J; one large calorie is 4184 J. The calorie was first introduced by Nicolas Clément, as a unit of heat energy, in lectures during the years 1819–1824. This was the "large" calorie, viz. the modern kilocalorie. The term entered French and English dictionaries between 1841 and 1867. The "small" calorie (modern calorie) was introduced by Pierre Antoine Favre (chemist) and Johann T. Silbermann (physicist) in 1852. In 1879, Marcellin Berthelot distinguished between gram-calorie (modern calorie) and kilogram-calorie (modern kilocalorie). Berthelot also introduced the convention of capitalizing the kilogram-calorie, as "Calorie". The use of the kilogram-calorie (kcal) for nutrition was introduced to the American public by Wilbur Olin Atwater, a professor at Wesleyan University, in 1887. 
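The two definitions differ only by a factor of 1000, and both relate to the joule through the exact definition 1 cal = 4.184 J. A minimal conversion sketch (the function names are mine):

```python
SMALL_CAL_J = 4.184  # one small (gram) calorie in joules, exact by definition

def cal_to_joules(cal: float) -> float:
    """Convert small (gram) calories to joules."""
    return cal * SMALL_CAL_J

def kcal_to_joules(kcal: float) -> float:
    """Convert large calories (kilocalories) to joules: 1 kcal = 1000 cal = 4184 J."""
    return kcal * 1000.0 * SMALL_CAL_J
```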
The modern calorie (cal) was first recognized as a unit of the cm-g-s system (cgs) in 1896, alongside the already-existing cgs unit of energy, the erg (first suggested by Clausius in 1864, under the name "ergon", and officially adopted in 1882). Already in 1928 there were serious complaints about the possible confusion arising from the two main definitions of the calorie and whether the notion of using the capital letter to distinguish them was sound. Use of the calorie was officially deprecated by the ninth General Conference on Weights and Measures, in 1948. The alternate spelling "calory" is archaic. The modern (small) calorie is defined as the amount of energy needed to increase the temperature of 1 gram of water by 1 °C (or 1 K, which is the same increment). The definition depends on the atmospheric pressure and the starting temperature. Accordingly, several different precise definitions of the calorie have been used. The two definitions most common in older literature appear to be the "15 °C calorie" and the "thermochemical calorie". Until 1948, the latter was defined as 4.1833 international joules; the current standard of 4.184 J was chosen to have the new thermochemical calorie represent the same quantity of energy as before. The calorie was first defined specifically to measure energy in the form of heat, especially in experimental calorimetry. In a nutritional context, the kilojoule (kJ) is the SI unit of food energy, although the "calorie" is commonly used. The word "calorie" is commonly used with the number of kilocalories (kcal) of nutritional energy measured. In the United States, most nutritionists prefer the unit kilocalorie to the unit kilojoules, whereas most physiologists prefer to use kilojoules. In the majority of other countries, nutritionists prefer the kilojoule to the kilocalorie. 
US food labelling laws require the use of kilocalories (under the name of "Calories"); kilojoules are permitted to be included on food labels alongside kilocalories, but most food labels do not do so. In Australia, kilojoules are officially preferred over kilocalories, but kilocalories retain some degree of popular use. Australian and New Zealand food labelling laws require the use of kilojoules; kilocalories are allowed to be included on labels in addition to kilojoules, but are not required. EU food labelling laws require both kilojoules and kilocalories on all nutritional labels, with the kilojoules listed first. To facilitate comparison, specific energy or energy density figures are often quoted as "calories per serving" or "kcal per 100 g". A nutritional requirement or consumption is often expressed in calories or kilocalories per day. Food nutrients such as fat (lipids) contain 9 kilocalories per gram (kcal/g), while carbohydrate (sugar) or protein contains approximately 4 kcal/g. Alcohol in food contains 7 kcal/g. Food nutrients are also often quoted "per 100 g". In other scientific contexts, the term "calorie" almost always refers to the small calorie. Even though it is not an SI unit, it is still used in chemistry. For example, the energy released in a chemical reaction per mole of reagent is occasionally expressed in kilocalories per mole. Typically, this use was largely due to the ease with which it could be calculated in laboratory reactions, especially in aqueous solution: a volume of reagent dissolved in water forming a solution, with concentration expressed in moles per litre (1 litre weighing 1 kilogram), will induce a temperature change in degrees Celsius in the total volume of water solvent, and these quantities (volume, molar concentration and temperature change) can then be used to calculate energy per mole. 
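The laboratory convenience described above can be made concrete: taking water's specific heat as 1 cal/(g·°C) and 1 litre as 1 kg, the measured temperature change together with the volume and molar concentration gives the energy per mole directly. A simplified sketch under those idealizing assumptions (it neglects the heat capacity of the solutes and of the calorimeter itself):

```python
def energy_kcal_per_mol(volume_l: float, molarity_mol_per_l: float,
                        delta_t_c: float) -> float:
    """Reaction energy in kcal/mol from a temperature change in aqueous solution.

    Idealized: the solution is treated as pure water (1 kg per litre,
    specific heat 1 cal per gram per degree Celsius), and the calorimeter's
    own heat capacity is ignored.
    """
    mass_g = volume_l * 1000.0          # 1 litre of water weighs ~1000 g
    q_cal = mass_g * 1.0 * delta_t_c    # q = m * c * dT, with c = 1 cal/(g.C)
    moles = volume_l * molarity_mol_per_l
    return (q_cal / 1000.0) / moles     # convert cal to kcal, then per mole

# e.g. 1 L of a 1 mol/L solution warming by 2.5 C corresponds to 2.5 kcal/mol
```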
It is also occasionally used to specify energy quantities that relate to reaction energy, such as enthalpy of formation and the size of activation barriers. However, its use is being superseded by the SI unit, the joule, and multiples thereof such as the kilojoule. In the past, a bomb calorimeter was used to determine the energy content of food by burning a sample and measuring a temperature change in the surrounding water. Today, this method is not commonly used in the United States and has been replaced by calculating the energy content indirectly, by adding up the energy provided by the energy-containing nutrients of the food (such as protein, carbohydrates, and fats). The fibre content is also subtracted to account for the fact that fibre is not digested by the body.
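The indirect method amounts to summing the nutrient masses weighted by their standard energy densities (9 kcal/g for fat, approximately 4 kcal/g for protein and carbohydrate, 7 kcal/g for alcohol). A simplified sketch that, as described above, subtracts fibre from the carbohydrate mass and assigns it no energy; note that some labelling systems instead credit fibre about 2 kcal/g, which this sketch does not model:

```python
def label_kcal(protein_g: float = 0.0, carbohydrate_g: float = 0.0,
               fat_g: float = 0.0, alcohol_g: float = 0.0,
               fibre_g: float = 0.0) -> float:
    """Nutritional energy in kcal from nutrient masses in grams.

    Simplified model: fibre is subtracted from the carbohydrate total and
    counted as contributing zero energy.
    """
    digestible_carb_g = carbohydrate_g - fibre_g
    return (protein_g * 4.0 + digestible_carb_g * 4.0
            + fat_g * 9.0 + alcohol_g * 7.0)

# e.g. 10 g protein, 30 g carbohydrate (of which 5 g fibre), 10 g fat -> 230 kcal
```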
https://en.wikipedia.org/wiki?curid=6423
Corona Australis Corona Australis is a constellation in the Southern Celestial Hemisphere. Its Latin name means "southern crown", and it is the southern counterpart of Corona Borealis, the northern crown. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The Ancient Greeks saw Corona Australis as a wreath rather than a crown and associated it with Sagittarius or Centaurus. Other cultures have likened the pattern to a turtle, ostrich nest, a tent, or even a hut belonging to a rock hyrax. Although fainter than its northern counterpart, the oval- or horseshoe-shaped pattern of its brighter stars renders it distinctive. Alpha and Beta Coronae Australis are the two brightest stars with an apparent magnitude of around 4.1. Epsilon Coronae Australis is the brightest example of a W Ursae Majoris variable in the southern sky. Lying alongside the Milky Way, Corona Australis contains one of the closest star-forming regions to the Solar System—a dusty dark nebula known as the Corona Australis Molecular Cloud, lying about 430 light years away. Within it are stars at the earliest stages of their lifespan. The variable stars R and TY Coronae Australis light up parts of the nebula, which varies in brightness accordingly. The name of the constellation was entered as "Corona Australis" when the International Astronomical Union (IAU) established the 88 modern constellations in 1922. In 1932, the name was instead recorded as "Corona Austrina" when the IAU's commission on notation approved a list of four-letter abbreviations for the constellations. The four-letter abbreviations were repealed in 1955. The IAU presently uses "Corona Australis" exclusively. Corona Australis is a small constellation bordered by Sagittarius to the north, Scorpius to the west, Telescopium to the south, and Ara to the southwest. 
The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrA". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of four segments ("illustrated in infobox"). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.77° and −45.52°. Covering 128 square degrees, Corona Australis culminates at midnight around the 30th of June and ranks 80th in area. Only visible at latitudes south of 53° north, Corona Australis cannot be seen from the British Isles as it lies too far south, but it can be seen from southern Europe and readily from the southern United States. While not a bright constellation, Corona Australis is nonetheless distinctive due to its easily identifiable pattern of stars, which has been described as horseshoe- or oval-shaped. Though it has no stars brighter than 4th magnitude, it still has 21 stars visible to the unaided eye (brighter than magnitude 5.5). Nicolas Louis de Lacaille used the Greek letters Alpha through to Lambda to label the most prominent eleven stars in the constellation, designating two stars as Eta and omitting Iota altogether. Mu Coronae Australis, a yellow star of spectral type G5.5III and apparent magnitude 5.21, was labelled by Johann Elert Bode and retained by Benjamin Gould, who deemed it bright enough to warrant naming. The only star in the constellation to have received a name is Alfecca Meridiana or Alpha CrA. The name combines the Arabic name of the constellation with the Latin for "southern". In Arabic, "Alfecca" means "break", and refers to the shape of both Corona Australis and Corona Borealis. Also called simply "Meridiana", it is a white main sequence star located 125 light years away from Earth, with an apparent magnitude of 4.10 and spectral type A2Va. 
A rapidly rotating star, it spins at almost 200 km per second at its equator, making a complete revolution in around 14 hours. Like the star Vega, it has excess infrared radiation, which indicates it may be ringed by a disk of dust. It is currently a main-sequence star, but will eventually evolve into a white dwarf; currently, it has a luminosity 31 times that of the Sun, and a radius and mass each 2.3 times the Sun's. Beta Coronae Australis is an orange giant 474 light years from Earth. Its spectral type is K0II, and it is of apparent magnitude 4.11. Since its formation, it has evolved from a B-type star to a K-type star. Its luminosity class places it as a bright giant; its luminosity is 730 times that of the Sun, making it one of the highest-luminosity K0-type stars visible to the naked eye. At 100 million years old, it has a radius of 43 solar radii and a mass of between 4.5 and 5 solar masses. Alpha and Beta are so similar as to be indistinguishable in brightness to the naked eye. Some of the more prominent double stars include Gamma Coronae Australis, a pair of yellowish white stars 58 light years away from Earth, which orbit each other every 122 years. Widening since 1990, the two stars can be seen as separate with a 100 mm aperture telescope; they are separated by 1.3 arcseconds at an angle of 61 degrees. They have a combined visual magnitude of 4.2; each component is an F8V dwarf star with a magnitude of 5.01. Epsilon Coronae Australis is an eclipsing binary belonging to a class of stars known as W Ursae Majoris variables. These star systems, known as contact binaries, have component stars so close together that they touch. Varying by a quarter of a magnitude around an average apparent magnitude of 4.83 every seven hours, the star system lies 98 light years away. Its spectral type is F4VFe-0.8+. At the southern end of the crown asterism are the stars Eta¹ and Eta² Coronae Australis, which form an optical double.
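Two of the figures quoted above can be cross-checked with a few lines of arithmetic: the roughly 200 km/s equatorial speed and 14-hour rotation period of Alpha Coronae Australis imply its equatorial radius, and two equal magnitude-5.01 components imply the combined magnitude of the Gamma Coronae Australis pair. A minimal sketch in Python; the nominal solar radius constant is a standard value, not taken from the text:

```python
import math

# Rotation check: circumference = speed x period, radius = circumference / 2*pi.
R_SUN_KM = 695_700        # nominal solar radius in km (standard value)
v_eq_km_s = 200           # quoted equatorial rotation speed
period_s = 14 * 3600      # quoted rotation period, 14 hours in seconds

radius_km = v_eq_km_s * period_s / (2 * math.pi)
print(radius_km / R_SUN_KM)   # ~2.3 solar radii, matching the stated radius

# Magnitude check: doubling the flux brightens a magnitude by 2.5*log10(2).
m_component = 5.01
m_combined = m_component - 2.5 * math.log10(2)
print(round(m_combined, 2))   # ~4.26, consistent with the quoted 4.2
```

Both results agree with the article's figures to the precision quoted, which suggests the numbers were derived from one another rather than measured independently.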
Of magnitude 5.1 and 5.5, they are separable with the naked eye and are both white. Kappa Coronae Australis is an easily resolved optical double—the components are of apparent magnitudes 6.3 and 5.6 and are about 1000 and 150 light years away respectively. They appear at an angle of 359 degrees, separated by 21.6 arcseconds. Kappa² is actually the brighter of the pair and is more bluish white, with a spectral type of B9V, while Kappa¹ is of spectral type A0III. Lying 202 light years away, Lambda Coronae Australis is a double splittable in small telescopes. The primary is a white star of spectral type A2Vn and magnitude of 5.1, while the companion star has a magnitude of 9.7. The two components are separated by 29.2 arcseconds at an angle of 214 degrees. Zeta Coronae Australis is a rapidly rotating main sequence star with an apparent magnitude of 4.8, 221.7 light years from Earth. The star has blurred lines in its hydrogen spectrum due to its rotation. Its spectral type is B9V. Theta Coronae Australis lies further to the west, a yellow giant of spectral type G8III and apparent magnitude 4.62. Corona Australis harbours RX J1856.5-3754, an isolated neutron star that is thought to lie 140 (±40) parsecs, or 460 (±130) light years, away, with a diameter of 14 km. It was once suspected to be a strange star, but this has been discounted. In the north of the constellation is the Corona Australis Molecular Cloud, a dark molecular cloud with many embedded reflection nebulae, including NGC 6729, NGC 6726–7, and IC 4812. A star-forming region of around , it contains Herbig–Haro objects (protostars) and some very young stars. About 430 light years (130 parsecs) away, it is one of the closest star-forming regions to the Solar System. The related NGC 6726 and 6727, along with unrelated NGC 6729, were first recorded by Johann Friedrich Julius Schmidt in 1865. 
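The distance to RX J1856.5-3754 is quoted above in both parsecs and light years; the two figures are consistent under the standard conversion 1 parsec ≈ 3.2616 light years, as a quick check shows:

```python
PC_TO_LY = 3.2616   # standard parsec-to-light-year conversion factor

dist_pc, err_pc = 140, 40            # quoted distance and uncertainty in parsecs
dist_ly = dist_pc * PC_TO_LY         # ~457, rounding to the quoted ~460 ly
err_ly = err_pc * PC_TO_LY           # ~130, matching the quoted uncertainty
print(round(dist_ly), round(err_ly))
```

The same factor links the Corona Australis Molecular Cloud figures: 130 parsecs is about 424 light years, consistent with the "about 430 light years" given above.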
The Coronet cluster, about 554 light years (170 parsecs) away at the edge of the Gould Belt, is also used in studying star and protoplanetary disk formation. R Coronae Australis is an irregular variable star ranging from magnitudes 9.7 to 13.9. Blue-white, it is of spectral type B5IIIpe. A very young star, it is still accumulating interstellar material. It is obscured by, and illuminates, the surrounding nebula, NGC 6729, which brightens and darkens with it. The nebula is often compared to a comet for its appearance in a telescope, as its length is five times its width. S Coronae Australis is a G-class dwarf in the same field as R and is a T Tauri star. Nearby, another young variable star, TY Coronae Australis, illuminates another nebula: the reflection nebula NGC 6726–7. TY Coronae Australis ranges irregularly between magnitudes 8.7 and 12.4, and the brightness of the nebula varies with it. Blue-white, it is of spectral type B8e. The largest young stars in the region, R, S, T, TY and VV Coronae Australis, are all ejecting jets of material which cause surrounding dust and gas to coalesce and form Herbig–Haro objects, many of which have been identified nearby. Lying adjacent to the nebulosity is the globular cluster known as NGC 6723, which is actually in the neighbouring constellation of Sagittarius and is much further away. Near Epsilon and Gamma Coronae Australis is Bernes 157, a dark nebula and star-forming region. It is a large nebula, 55 by 18 arcminutes, that possesses several stars around magnitude 13. These stars have been dimmed by up to 8 magnitudes by its dust clouds. IC 1297 is a planetary nebula of apparent magnitude 10.7, which appears as a green-hued roundish object in higher-powered amateur instruments. The nebula surrounds the variable star RU Coronae Australis, which has an average apparent magnitude of 12.9 and is a WC class Wolf–Rayet star.
IC 1297 is small, at only 7 arcseconds in diameter; it has been described as "a square with rounded edges" in the eyepiece, elongated in the north–south direction. Descriptions of its colour encompass blue, blue-tinged green, and green-tinged blue. Corona Australis' location near the Milky Way means that galaxies are seldom seen within its borders. NGC 6768 is a magnitude 11.2 object 35′ south of IC 1297. It is made up of two merging galaxies, one an elongated elliptical galaxy of classification E4 and the other a lenticular galaxy of classification S0. IC 4808 is a galaxy of apparent magnitude 12.9 located on the border of Corona Australis with the neighbouring constellation of Telescopium, 3.9 degrees west-southwest of Beta Sagittarii. Amateur telescopes show only a suggestion of its spiral structure. It is 1.9 arcminutes by 0.8 arcminutes. The central area of the galaxy does appear brighter in an amateur instrument, which shows it to be tilted northeast–southwest. Southeast of Theta and southwest of Eta lies the open cluster ESO 281-SC24, which is composed of the yellow 9th magnitude star GSC 7914 178 1 and five 10th to 11th magnitude stars. Halfway between Theta Coronae Australis and Theta Scorpii is the dense globular cluster NGC 6541. Described as between magnitude 6.3 and magnitude 6.6, it is visible in binoculars and small telescopes. Around 22,000 light years away, it is around 100 light years in diameter. It is estimated to be around 14 billion years old. NGC 6541 appears 13.1 arcminutes in diameter and is somewhat resolvable in large amateur instruments; a 12-inch telescope reveals approximately 100 stars but the core remains unresolved. The Corona Australids are a meteor shower that takes place between 14 and 18 March each year, peaking around 16 March. This meteor shower does not have a high peak hourly rate.
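As a rough plausibility check, the quoted physical diameter and distance of NGC 6541 can be converted into an angular size with the small-angle formula. The result is of the same order as the 13.1 arcminutes given above; exact agreement is not expected, since a cluster's quoted diameter depends on where its edge is drawn:

```python
import math

# Angular size of NGC 6541 from the quoted figures:
# ~100 light years across at ~22,000 light years away.
diameter_ly = 100
distance_ly = 22_000

theta_rad = 2 * math.atan(diameter_ly / (2 * distance_ly))
theta_arcmin = math.degrees(theta_rad) * 60
print(round(theta_arcmin, 1))  # ~15.6 arcmin, same order as the quoted 13.1'
```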
In 1953 and 1956, observers noted a maximum of 6 meteors per hour and 4 meteors per hour respectively; in 1955 the shower was "barely resolved". However, in 1992, astronomers detected a peak rate of 45 meteors per hour. The Corona Australids' rate varies from year to year. At only six days, the shower's duration is particularly short, and its meteoroids are small; the stream is devoid of large meteoroids. The Corona Australids were first seen with the unaided eye in 1935 and first observed with radar in 1955. Corona Australid meteors have an entry velocity of 45 kilometers per second. In 2006, a shower originating near Beta Coronae Australis was designated as the Beta Coronae Australids. They appear in May, the same month as a nearby shower known as the May Microscopids, but the two showers have different trajectories and are unlikely to be related. Corona Australis may have been recorded by ancient Mesopotamians in the MUL.APIN, as a constellation called MA.GUR ("The Bark"). However, this constellation, adjacent to SUHUR.MASH ("The Goat-Fish", modern Capricornus), may instead have been modern Epsilon Sagittarii. As a part of the southern sky, MA.GUR was one of the fifteen "stars of Ea". In the 3rd century BC, the Greek didactic poet Aratus wrote of, but did not name the constellation, instead calling the two crowns Στεφάνοι ("Stephanoi"). The Greek astronomer Ptolemy described the constellation in the 2nd century AD, though with the inclusion of Alpha Telescopii, since transferred to Telescopium. Ascribing 13 stars to the constellation, he named it Στεφάνος νοτιος ("Stephanos notios"), "Southern Wreath", while other authors associated it with either Sagittarius (having fallen off his head) or Centaurus; with the former, it was called "Corona Sagittarii". Similarly, the Romans called Corona Australis the "Golden Crown of Sagittarius". It was known as "Parvum Coelum" ("Canopy", "Little Sky") in the 5th century. 
The 18th-century French astronomer Jérôme Lalande gave it the names "Sertum Australe" ("Southern Garland") and "Orbiculus Capitis", while German poet and author Philippus Caesius called it "Corolla" ("Little Crown") or "Spira Australis" ("Southern Coil"), and linked it with the Crown of Eternal Life from the New Testament. Seventeenth-century celestial cartographer Julius Schiller linked it to the Diadem of Solomon. Sometimes, Corona Australis was not the wreath of Sagittarius but arrows held in his hand. Corona Australis has been associated with the myth of Bacchus and Stimula. Jupiter had impregnated Stimula, causing Juno to become jealous. Juno convinced Stimula to ask Jupiter to appear in his full splendor, which the mortal woman could not handle, causing her to burn. After Bacchus, Stimula's unborn child, became an adult and the god of wine, he honored his deceased mother by placing a wreath in the sky. In Chinese astronomy, the stars of Corona Australis are located within the Black Tortoise of the North (北方玄武, "Běi Fāng Xuán Wǔ"). The constellation itself was known as "ti'en pieh" ("Heavenly Turtle") and during the Western Zhou period, marked the beginning of winter. However, precession over time has meant that the "Heavenly River" (Milky Way) became the more accurate marker to the ancient Chinese and hence supplanted the turtle in this role. Arabic names for Corona Australis include "Al Ķubbah" "the Tortoise", "Al Ĥibā" "the Tent" or "Al Udḥā al Na'ām" "the Ostrich Nest". It was later given the name "Al Iklīl al Janūbiyyah", which the European authors Chilmead, Riccioli and Caesius transliterated as Alachil Elgenubi, Elkleil Elgenubi and Aladil Algenubi respectively. The ǀXam speaking San people of South Africa knew the constellation as "≠nabbe ta !nu" "house of branches"—owned originally by the Dassie (rock hyrax), and the star pattern depicting people sitting in a semicircle around a fire. 
The indigenous Boorong people of northwestern Victoria saw it as "Won", a boomerang thrown by "Totyarguil" (Altair). The Aranda people of Central Australia saw Corona Australis as a coolamon carrying a baby, which was accidentally dropped to earth by a group of sky-women dancing in the Milky Way. The impact of the coolamon created Gosses Bluff crater, 175 km west of Alice Springs. The Torres Strait Islanders saw Corona Australis as part of a larger constellation encompassing part of Sagittarius and the tip of Scorpius's tail; the Pleiades and Orion were also associated. This constellation was Tagai's canoe, crewed by the Pleiades, called the "Usiam", and Orion, called the "Seg". The myth of Tagai says that he was in charge of this canoe, but his crewmen consumed all of the supplies onboard without asking permission. Enraged, Tagai bound the Usiam with a rope and tied them to the side of the boat, then threw them overboard. Scorpius's tail represents a suckerfish, while Eta Sagittarii and Theta Coronae Australis mark the bottom of the canoe. On the island of Futuna, the figure of Corona Australis was called "Tanuma" and in the Tuamotus, it was called "Na Kaua-ki-Tonga".
Cheddar, Somerset Cheddar is a large village and civil parish in the Sedgemoor district of the English county of Somerset. It is situated on the southern edge of the Mendip Hills, north-west of Wells. The civil parish includes the hamlets of Nyland and Bradley Cross. The parish had a population of 5,755 in 2011 and an acreage of as of 1961. Cheddar Gorge, on the northern edge of the village, is the largest gorge in the United Kingdom and includes several show caves, among them Gough's Cave. The area has been a centre of human settlement since Neolithic times and was later the site of a Saxon palace. It has a temperate climate and provides a unique geological and biological environment that has been recognised by the designation of several Sites of Special Scientific Interest. It is also the site of several limestone quarries. The village gave its name to Cheddar cheese and has been a centre for strawberry growing. The crop was formerly transported on the Cheddar Valley rail line, which closed in the late 1960s but is now a cycle path. The village is now a major tourist destination with several cultural and community facilities, including the Cheddar Show Caves Museum. The village supports a variety of community groups including religious, sporting and cultural organisations. Several of these are based on the site of The Kings of Wessex Academy, which is the largest educational establishment. The name Cheddar comes from the Old English word "ceodor", meaning deep dark cavity or pouch. There is evidence of occupation from the Neolithic period in Cheddar. Britain's oldest complete human skeleton, Cheddar Man, estimated to be 9,000 years old, was found in Cheddar Gorge in 1903. Older remains from the Late Upper Palaeolithic era (12,000–13,000 years ago) have been found. There is some evidence of a Bronze Age field system at the Batts Combe quarry site. There is also evidence of Bronze Age barrows at a mound in the Longwood valley which, if man-made, is likely to be associated with a field system.
The remains of a Roman villa have been excavated in the grounds of the current vicarage. The village of Cheddar was important during the Roman and Saxon eras. There was a royal palace at Cheddar during the Saxon period, which was used on three occasions in the 10th century to host the Witenagemot. The ruins of the palace were excavated in the 1960s. They are located on the grounds of The Kings of Wessex Academy, together with a 14th-century chapel dedicated to St. Columbanus. Roman remains have also been uncovered at the site. Cheddar was listed in the Domesday Book of 1086 as "Ceder", meaning "Shear Water", from the Old English "scear" and Old Welsh "dŵr". An alternative spelling in earlier documents, common through the 1850s, is "Chedder". As early as 1130 AD, the Cheddar Gorge was recognised as one of the "Four wonders of England". Historically, Cheddar's source of wealth was farming and cheese making, for which it was famous as early as 1170 AD. The parish was part of the Winterstoke Hundred. The manor of Cheddar was deforested in 1337, and Bishop Ralph was granted a licence by the King to create a hunting forest. As early as 1527 there are records of watermills on the river. In the 17th and 18th centuries, there were several watermills which ground corn and made paper, with 13 mills on the Yeo at the peak, declining to seven by 1791 and just three by 1915. In the Victorian era it also became a centre for the production of clothing. The last mill, used as a shirt factory, closed in the early 1950s. William Wilberforce saw the poor conditions of the locals when he visited Cheddar in 1789. He inspired Hannah More in her work to improve the conditions of the Mendip miners and agricultural workers. In 1801, of common land were enclosed under the Inclosure Acts. Tourism to the Cheddar gorge and caves began with the opening of the Cheddar Valley Railway in 1869. Cheddar, its surrounding villages and specifically the gorge have been subject to flooding.
In the Chew Stoke flood of 1968 the flow of water washed large boulders down the gorge, washed away cars, and damaged the cafe and the entrance to Gough's Cave. Cheddar is recognised as a village. The adjacent settlement of Axbridge, although it has only about a third of the population of Cheddar, is a town. This apparently illogical situation is explained by the relative importance of the two places in historic times. While Axbridge grew in importance as a centre for cloth manufacturing in the Tudor period and gained a charter from King John, Cheddar remained a more dispersed mining and dairy-farming village. Its population grew with the arrival of the railways in the Victorian era and the advent of tourism. The parish council, which has 15 members who are elected for four years, is responsible for local issues, including setting an annual precept (local rate) to cover the council's operating costs and producing annual accounts for public scrutiny. The parish council evaluates local planning applications and works with the police, district council officers, and neighbourhood watch groups on matters of crime, security, and traffic. The parish council's role also includes initiating projects for the maintenance and repair of parish facilities, as well as consulting with the district council on the maintenance, repair, and improvement of highways, drainage, footpaths, public transport, and street cleaning. Conservation matters (including trees and listed buildings) and environmental issues are also the responsibility of the council. The village is in the 'Cheddar and Shipham' electoral ward. Including Shipham, the total population of the ward at the 2011 census was 6,842. The village falls within the non-metropolitan district of Sedgemoor, which was formed on 1 April 1974 under the Local Government Act 1972. It was previously part of Axbridge Rural District.
Sedgemoor is responsible for local planning and building control, local roads, council housing, environmental health, markets and fairs, refuse collection and recycling, cemeteries and crematoria, leisure services, parks, and tourism. Somerset County Council is responsible for running the largest and most expensive local services such as education, social services, the library, roads, public transport, trading standards, waste disposal and strategic planning, although fire, police and ambulance services are provided jointly with other authorities through the Devon and Somerset Fire and Rescue Service, Avon and Somerset Constabulary and the South Western Ambulance Service. It is also part of the Wells county constituency represented in the House of Commons of the Parliament of the United Kingdom. It elects one Member of Parliament (MP) by the first past the post system of election, and is part of the South West England constituency of the European Parliament which elects six MEPs using the d'Hondt method of party-list proportional representation. Cheddar is twinned with Felsberg, Germany and Vernouillet, France, and it has an active programme of exchange visits. Initially, Cheddar twinned with Felsberg in 1984. In 2000, Cheddar twinned with Vernouillet, which had also been twinned with Felsberg. Cheddar also has a friendship link with Ocho Rios in Saint Ann Parish, Jamaica. The area is underlain by Black Rock slate, Burrington Oolite and Clifton Down Limestone of the Carboniferous Limestone Series, which contain ooliths and fossil debris on top of Old Red Sandstone, and by Dolomitic Conglomerate of the Keuper. Evidence for Variscan orogeny is seen in the sheared rock and cleaved shales. In many places weathering of these strata has resulted in the formation of immature calcareous soils. Cheddar Gorge, which is located on the edge of the village, is the largest gorge in the United Kingdom. 
The gorge is the site of the Cheddar Caves, where Cheddar Man was found in 1903. Older remains from the Upper Late Palaeolithic era (12,000–13,000 years ago) have been found. The caves, produced by the activity of an underground river, contain stalactites and stalagmites. Gough's Cave, which was discovered in 1903, leads around into the rock-face, and contains a variety of large rock chambers and formations. Cox's Cave, discovered in 1837, is smaller but contains many intricate formations. A further cave houses a children's entertainment walk known as the "Crystal Quest". Cheddar Gorge, including Cox's Cave, Gough's Cave and other attractions, has become a tourist destination, attracting about 500,000 visitors per year. In a 2005 poll of "Radio Times" readers, following its appearance on the 2005 television programme "Seven Natural Wonders", Cheddar Gorge was named as the second greatest natural wonder in Britain, surpassed only by the Dan yr Ogof caves. There are several large and unique Sites of Special Scientific Interest (SSSI) around the village. Cheddar Reservoir is a near-circular artificial reservoir operated by Bristol Water. Dating from the 1930s, it has a capacity of 135 million gallons (614,000 cubic metres). The reservoir is supplied with water taken from the Cheddar Yeo, which rises in Gough's Cave in Cheddar Gorge and is a tributary of the River Axe. The inlet grate for the water pipe that is used to transport the water can be seen next to the sensory garden in Cheddar Gorge. It has been designated as a Site of Special Scientific Interest (SSSI) due to its wintering waterfowl populations. Cheddar Wood and the smaller Macall's Wood form a biological Site of Special Scientific Interest from what remains of the wood of the Bishops of Bath and Wells in the 13th century and of King Edmund the Magnificent's wood in the 10th. During the 19th century, its lower fringes were grubbed out to make strawberry fields. 
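The reservoir capacity above is quoted as 135 million gallons (614,000 cubic metres). Assuming imperial gallons (1 gallon = 4.54609 litres), the conversion can be verified directly:

```python
GALLON_L = 4.54609  # imperial gallon in litres (assumed; the text does not specify)

# 135 million gallons, converted litres -> cubic metres (1 m^3 = 1000 L)
capacity_m3 = 135e6 * GALLON_L / 1000
print(round(capacity_m3))  # 613722, which rounds to the quoted ~614,000 m^3
```

The close agreement confirms imperial rather than US gallons were intended (135 million US gallons would be only about 511,000 cubic metres).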
Most of these have been allowed to revert to woodland. The wood was coppiced until 1917. This site comprises a wide range of habitats which include ancient and secondary semi-natural broadleaved woodland, unimproved neutral grassland, and a complex mosaic of calcareous grassland and acidic dry dwarf-shrub heath. Cheddar Wood is one of only a few English stations for starved wood-sedge ("Carex depauperata"). Purple gromwell ("Lithospermum purpurocaeruleum"), a nationally rare plant, also grows in the wood. Butterflies include silver-washed fritillary ("Argynnis paphia"), dark green fritillary ("Argynnis aglaja"), pearl-bordered fritillary ("Boloria euphrosyne"), holly blue ("Celastrina argiolus") and brown argus ("Aricia agestis"). The slug "Arion fasciatus", which has a restricted distribution in the south of England, and the soldier beetle "Cantharis fusca" also occur. By far the largest of the SSSIs is called Cheddar Complex and covers of the gorge, caves and the surrounding area. It is important because of both biological and geological features. It includes four SSSIs, formerly known as Cheddar Gorge SSSI, August Hole/Longwood Swallet SSSI, GB Cavern Charterhouse SSSI and Charterhouse on-Mendip SSSI. It is partly owned by the National Trust, which acquired it in 1910, and partly managed by the Somerset Wildlife Trust. Close to the village and gorge are Batts Combe quarry and Callow Rock quarry, two of the active quarries of the Mendip Hills where limestone is still extracted. Operating since the early 20th century, Batts Combe is owned and operated by Hanson Aggregates. The output in 2005 was around 4,000 tonnes of limestone per day, one third of which was supplied to an on-site lime kiln, which closed in 2009; the remainder was sold as coated or dusted aggregates. The limestone at this site is close to 99 percent carbonate of calcium and magnesium (dolomite).
The Chelmscombe Quarry finished its work as a limestone quarry in the 1950s and was then used by the Central Electricity Generating Board as a tower testing station. During the 1970s and 1980s it was also used to test the ability of containers of radioactive material to withstand impacts and other accidents. Along with the rest of South West England, Cheddar has a temperate climate which is generally wetter and milder than the rest of the country. The annual mean temperature is approximately . Seasonal temperature variation is less extreme than in most of the United Kingdom because of the adjacent sea, which moderates temperature. The summer months of July and August are the warmest, with mean daily maxima of approximately . In winter mean minimum temperatures of or are common. In the summer the Azores high-pressure system affects the south-west of England. Convective cloud sometimes forms inland, reducing the number of hours of sunshine; annual sunshine rates are slightly less than the regional average of 1,600 hours. In December 1998 there were 20 days without sun recorded at Yeovilton. Most of the rainfall in the south-west is caused by Atlantic depressions or by convection. The Atlantic depressions are most active in autumn and winter, when they account for most of the rainfall. In summer, a large proportion of the rainfall is caused by sun heating the ground, leading to convection, showers and thunderstorms. Average rainfall is around . About 8–15 days of snowfall per year is typical. November to March have the highest mean wind speeds, and June to August have the lightest winds. The predominant wind direction is from the south-west. The parish had a population in 2011 of 5,093, with a mean age of 43 years. Residents lived in 2,209 households. The vast majority of households (2,183) gave their ethnic status at the 2001 census as white. The village gave its name to Cheddar cheese, which is the most popular type of cheese in the United Kingdom.
The cheese is now made and consumed worldwide, and only one producer remains in the village. Since the 1880s, Cheddar's other main produce has been the strawberry, which is grown on the south-facing lower slopes of the Mendip hills. As a consequence of its use for transporting strawberries to market, the Cheddar Valley line, which opened in 1869, became known as "The Strawberry Line". The line ran from Yatton to Wells. When the rest of the line was closed and all passenger services ceased, the section between Cheddar and Yatton remained open for goods traffic. It provided a fast link with the main markets for the strawberries in Birmingham and London, but finally closed in 1964, becoming part of the Cheddar Valley Railway Nature Reserve. Cheddar Ales is a small brewery based in the village, producing beer for local public houses. Tourism is a significant source of employment. Around 15 percent of employment in Sedgemoor is provided by tourism, but within Cheddar it is estimated to employ as many as 1,000 people. The village also has a youth hostel, and a number of camping and caravan sites. Cheddar has a number of active service clubs including Cheddar Vale Lions Club, Mendip Rotary and Mendip Inner Wheel Club. The clubs raise money for projects in the local community and hold annual events such as a fireworks display, duck races in the Gorge, a dragon boat race on the reservoir and concerts on the grounds of the nearby St Michael's Cheshire Home. Several notable people have been born or lived in Cheddar. Musician Jack Bessant, the bass guitarist with the band Reef, grew up on his parents' strawberry farm, and Matt Goss and Luke Goss, former members of Bros, lived in Cheddar for nine months as children. Trina Gulliver, eight-time World Professional Darts Champion, currently lives in Cheddar. The comedian Richard Herring grew up in Cheddar.
His 2008 Edinburgh Festival Fringe show, "The Headmaster's Son", is based on his time at The Kings of Wessex School, where his father Keith was the headmaster. The final performance of this show was held at the school in November 2009. He also visited the school in March 2010 to perform his show "Hitler Moustache". In May 2013, a community radio station called Pulse was launched. The market cross in Bath Street dates from the 15th century, with the shelter having been rebuilt in 1834. It has a central octagonal pier, a socket raised on four steps, a hexagonal shelter with six arched four-centred openings, shallow two-stage buttresses at each angle, and an embattled parapet. The shaft is crowned by an abacus with figures in niches, probably from the late 19th century, although the cross is now missing. It was rebuilt by Thomas, Marquess of Bath. It is a Scheduled Ancient Monument (Somerset County No 21) and Grade II* listed building. In January 2000, the cross was seriously damaged in a traffic accident. By 2002, the cross had been rebuilt and the area around it was redesigned to protect and enhance its appearance. The cross was badly damaged again in March 2012, when a taxi crashed into it late at night, demolishing two sides. Repair work, which included the addition of wooden-clad steel posts to protect against future crashes, was completed in November 2012 at a cost of £60,000. Hannah More, a philanthropist and educator, founded a school in the village in the late 18th century for the children of miners. Her first school was located in a 17th-century house. Now named "Hannah More's Cottage", the Grade II-listed building is used by the local community as a meeting place. The village is situated on the A371 road which runs from Wincanton to Weston-super-Mare. It is approximately from the route of the M5 motorway with around a drive to junction 21 or 22. It was on the Cheddar Valley line, a railway line that was opened in 1869 and closed in 1963.
It became known as The Strawberry Line because of the large volume of locally-grown strawberries that it carried. It ran from Yatton railway station through to Wells (Tucker Street) railway station and joined the East Somerset Railway to make a through route via Shepton Mallet (High Street) railway station to Witham. Sections of the now-disused railway have been opened as the Strawberry Line Trail, which currently runs from Yatton to Cheddar. The Cheddar Valley line survived until the "Beeching Axe". Towards the end of its life there were so few passengers that diesel railcars were sometimes used. The Cheddar branch closed to passengers on 9 September 1963 and to goods in 1964. The trackbed subsequently became part of the Cheddar Valley Railway Nature Reserve and of National Cycle Network route 26. The cycle route also intersects with the West Mendip Way and various other footpaths. The first school in Cheddar was set up by Hannah More during the 18th century. Cheddar now has three schools belonging to the Cheddar Valley Group of Schools, a group of twelve schools that provides Cheddar Valley's three-tier education system. Cheddar First School has ten classes for children between 4 and 9 years. Fairlands Middle School, a middle school categorised as a middle-deemed-secondary school, has 510 pupils between 9 and 13. Fairlands takes children moving up from Cheddar First School as well as other first schools in the Cheddar Valley. The Kings of Wessex Academy, a coeducational comprehensive school, has been rated as "good" by Ofsted. It has 1,176 students aged 13 to 18, including 333 in the sixth form. Kings is a faith school linked to the Church of England. It was awarded the specialist status of Technology College in 2001, enabling it to develop its Information Technology (IT) facilities and improve courses in science, mathematics and design technology. In 2007 it became a foundation school, giving it more control over its own finances.
The academy owns and runs a sports centre and swimming pool, Kings Fitness & Leisure, with facilities that are used by students as well as residents. It has since November 2016 been part of the Wessex Learning Trust, which incorporates eight academies from the surrounding area. The Church of St Andrew dates from the 14th century. It was restored in 1873 by William Butterfield. It is a Grade I listed building and contains some 15th-century stained glass and an altar table of 1631. The chest tomb in the chancel is believed to contain the remains of Sir Thomas Cheddar and is dated 1442. The tower contains a bell dating from 1759 made by Thomas Bilbie of the Bilbie family. There are also churches for Roman Catholic, Methodist and other denominations, including Cheddar Valley Community Church, which meets at The Kings of Wessex School on Sundays and also has its own site at Tweentown for meetings during the week. The Baptist chapel was built in 1831. Kings Fitness & Leisure, situated on the grounds of The Kings of Wessex School, provides a venue for various sports and includes a 20-metre swimming pool, racket sport courts, a sports hall, dance studios and a gym. A youth sports festival was held on Sharpham Road Playing Fields in 2009. In 2010 a skatepark was built in the village, funded by the Cheddar Local Action Team. Cheddar Football Club, founded in 1892 and nicknamed "The Cheesemen", play in the Western Football League Division One. In 2009 plans were revealed to move the club from its present home at Bowdens Park on Draycott Road to a new larger site. Cheddar Cricket Club was formed in the late 19th century and moved to Sharpham Road Playing Fields in 1964. They now play in the West of England Premier League Somerset Division. Cheddar Rugby Club, who own part of the Sharpham playing fields, was formed in 1836. The club organises an annual Cheddar Rugby Tournament. 
Cheddar Lawn Tennis Club, formed in 1924, plays in the North Somerset League and also offers social tennis and coaching. Cheddar Running Club organised an annual half marathon until 2009. The village is on the routes of both the West Mendip Way and the Samaritans Way South West.
https://en.wikipedia.org/wiki?curid=6427
Compact disc Compact disc (CD) is a digital optical disc data storage format that was co-developed by Philips and Sony and released in 1982. The format was originally developed to store and play only sound recordings (CD-DA) but was later adapted for storage of data (CD-ROM). Several other formats were further derived from these, including write-once audio and data storage (CD-R), rewritable media (CD-RW), Video Compact Disc (VCD), Super Video Compact Disc (SVCD), Photo CD, PictureCD, Compact Disc-Interactive (CD-i), and Enhanced Music CD. The first commercially available audio CD player, the Sony CDP-101, was released in October 1982 in Japan. Standard CDs have a diameter of 120 mm and can hold up to about 1 hour and 20 minutes of uncompressed audio or about 700 MiB of data. The Mini CD has various diameters ranging from 60 to 80 mm; they are sometimes used for CD singles, storing up to 24 minutes of audio, or delivering device drivers. At the time of the technology's introduction in 1982, a CD could store much more data than a personal computer hard drive, which would typically hold 10 MB. By 2010, hard drives commonly offered as much storage space as a thousand CDs, while their prices had plummeted to commodity level. In 2004, worldwide sales of audio CDs, CD-ROMs, and CD-Rs reached about 30 billion discs. By 2007, 200 billion CDs had been sold worldwide. From the early 2000s, CDs were increasingly being replaced by other forms of digital storage and distribution, with the result that by 2010 the number of audio CDs being sold in the U.S. had dropped about 50% from their peak; however, they remained one of the primary distribution methods for the music industry. In 2014, revenues from digital music services matched those from physical format sales for the first time. American inventor James T. Russell has been credited with inventing the first system to record digital information on an optical transparent foil that is lit from behind by a high-power halogen lamp. 
Russell's patent application was filed in 1966, and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents (then held by a Canadian company, Optical Recording Corp.) in the 1980s. The compact disc is an evolution of LaserDisc technology, where a focused laser beam is used that enables the high information density required for high-quality digital audio signals. Prototypes were developed by Philips and Sony independently in the late 1970s. Although originally dismissed by Philips Research management as a trivial pursuit, the CD became the primary focus for Philips as the LaserDisc format struggled. In 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. After a year of experimentation and discussion, the "Red Book" CD-DA standard was published in 1980. After their commercial release in 1982, compact discs and their players were extremely popular. Despite costing up to $1,000, over 400,000 CD players were sold in the United States between 1983 and 1984. By 1988, CD sales in the United States surpassed those of vinyl LPs, and by 1992 CD sales surpassed those of prerecorded music cassette tapes. The success of the compact disc has been credited to the cooperation between Philips and Sony, which together agreed upon and developed compatible hardware. The unified design of the compact disc allowed consumers to purchase any disc or player from any company, and allowed the CD to dominate the at-home music market unchallenged. In 1974, Lou Ottens, director of the audio division of Philips, started a small group to develop an analog optical audio disc with a diameter of and a sound quality superior to that of the vinyl record. However, due to the unsatisfactory performance of the analog format, two Philips research engineers recommended a digital format in March 1974. In 1977, Philips then established a laboratory with the mission of creating a digital audio disc. 
The diameter of Philips's prototype compact disc was set at 11.5 cm, the diagonal of an audio cassette. Heitaro Nakajima, who developed an early digital audio recorder within Japan's national public broadcasting organization NHK in 1970, became general manager of Sony's audio department in 1971. His team developed a digital PCM adaptor audio tape recorder using a Betamax video recorder in 1973. After this, in 1974 the leap to storing digital audio on an optical disc was easily made. Sony first publicly demonstrated an optical digital audio disc in September 1976. A year later, in September 1977, Sony showed the press a disc that could play an hour of digital audio (44,100 Hz sampling rate and 16-bit resolution) using MFM modulation. In September 1978, the company demonstrated an optical digital audio disc with a 150-minute playing time, 44,056 Hz sampling rate, 16-bit linear resolution, and cross-interleaved error correction code—specifications similar to those later settled upon for the standard compact disc format in 1980. Technical details of Sony's digital audio disc were presented during the 62nd AES Convention, held on 13–16 March 1979, in Brussels. Sony's AES technical paper was published on 1 March 1979. A week later, on 8 March, Philips publicly demonstrated a prototype of an optical digital audio disc at a press conference called "Philips Introduce Compact Disc" in Eindhoven, Netherlands. Sony executive Norio Ohga, later CEO and chairman of Sony, and Heitaro Nakajima were convinced of the format's commercial potential and pushed further development despite widespread skepticism. As a result, in 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. Led by engineers Kees Schouhamer Immink and Toshitada Doi, the research pushed forward laser and optical disc technology. After a year of experimentation and discussion, the task force produced the "Red Book" CD-DA standard. 
First published in 1980, the standard was formally adopted by the IEC as an international standard in 1987, with various amendments becoming part of the standard in 1996. Philips coined the term "compact disc" in line with another audio product, the Compact Cassette, and contributed the general manufacturing process, based on video LaserDisc technology. Philips also contributed 8-to-14 modulation (EFM), while Sony contributed the error-correction method, CIRC, which offers a certain resilience to defects such as scratches and fingerprints. The "Compact Disc Story", told by a former member of the task force, gives background information on the many technical decisions made, including the choice of the sampling frequency, playing time, and disc diameter. The task force consisted of around six people, though according to Philips, the compact disc was "invented collectively by a large group of people working as a team." Philips established the Polydor Pressing Operations plant in Langenhagen near Hannover, Germany, and quickly passed a series of milestones. The format's Japanese launch in October 1982 was followed in March 1983 by the introduction of CD players and discs to Europe and North America (where CBS Records released sixteen titles). This 1983 event is often seen as the "Big Bang" of the digital audio revolution. The new audio disc was enthusiastically received, especially in the early-adopting classical music and audiophile communities, and its handling quality received particular praise. As the price of players gradually came down, and with the introduction of the portable Discman, the CD began to gain popularity in the larger popular and rock music markets. With the rise in CD sales, pre-recorded cassette tape sales began to decline in the late 1980s; CD sales overtook cassette sales in the early 1990s. The first artist to sell a million copies on CD was Dire Straits, with their 1985 album "Brothers in Arms". 
One of the first CD markets was devoted to reissuing popular music whose commercial potential was already proven. The first major artist to have their entire catalog converted to CD was David Bowie, whose first fourteen studio albums of (then) sixteen were made available by RCA Records in February 1985, along with four greatest hits albums; his fifteenth and sixteenth albums had already been issued on CD by EMI Records in 1983 and 1984, respectively. On February 26, 1987, the first four UK albums by the Beatles were released in mono on compact disc. In 1988, 400 million CDs were manufactured by 50 pressing plants around the world. The CD was planned to be the successor of the vinyl record for playing music, rather than primarily as a data storage medium. From its origins as a musical format, CDs have grown to encompass other applications. In 1983, following the CD's introduction, Immink and Braat presented the first experiments with erasable compact discs during the 73rd AES Convention. In June 1985, the computer-readable CD-ROM (read-only memory) and, in 1990, CD-Recordable were introduced, also developed by both Sony and Philips. Recordable CDs were a new alternative to tape for recording music and copying music albums without defects introduced in compression used in other digital recording methods. Other newer video formats such as DVD and Blu-ray use the same physical geometry as CD, and most DVD and Blu-ray players are backward compatible with audio CD. CD sales in the United States peaked by 2000. By the early 2000s, the CD player had largely replaced the audio cassette player as standard equipment in new automobiles, with 2010 being the final model year for any car in the United States to have a factory-equipped cassette player. 
With the increasing popularity of portable digital audio players, such as mobile phones, and solid state music storage, CD players are being phased out of automobiles in favor of minijack auxiliary inputs, wired connection to USB devices and wireless Bluetooth connection. Meanwhile, with the advent and popularity of Internet-based distribution of files in lossily-compressed audio formats such as MP3, sales of CDs began to decline in the 2000s. For example, between 2000 and 2008, despite overall growth in music sales and one anomalous year of increase, major-label CD sales declined overall by 20%, although independent and DIY music sales may have been tracking better according to figures released on March 30, 2009, and CDs continued to sell in large numbers. As of 2012, CDs and DVDs made up only 34% of music sales in the United States. Within a few years, only 24% of music in the United States was purchased on physical media, two-thirds of this consisting of CDs; however, in the same year in Japan, over 80% of music was bought on CDs and other physical formats. In 2018, U.S. CD sales were 52 million units—less than 6% of the peak sales volume in 2000. Despite the rapidly declining sales year-over-year, the pervasiveness of the technology remained for a time, with companies placing CDs in pharmacies, supermarkets, and filling station convenience stores targeting buyers least able to use Internet-based distribution. In 2018, Best Buy announced plans to decrease its focus on CD sales, while continuing to sell vinyl records, sales of which were growing during the vinyl revival. Sony and Philips received praise for the development of the compact disc from professional organizations. A CD is made from 1.2 mm-thick polycarbonate plastic and weighs 15–20 grams. From the center outward, components are: the center spindle hole (15 mm), the first-transition area (clamping ring), the clamping area (stacking ring), the second-transition area (mirror band), the program (data) area, and the rim. 
The inner program area occupies a radius from 25 to 58 mm. A thin layer of aluminum or, more rarely, gold is applied to the surface, making it reflective. The metal is protected by a film of lacquer normally spin coated directly on the reflective layer. The label is printed on the lacquer layer, usually by screen printing or offset printing. CD data is represented as tiny indentations known as "pits", encoded in a spiral track moulded into the top of the polycarbonate layer. The areas between pits are known as "lands". Each pit is approximately 100 nm deep by 500 nm wide, and varies from 850 nm to 3.5 µm in length. The distance between the tracks (the "pitch") is 1.6 µm. When playing an audio CD, a motor within the CD player spins the disc to a scanning velocity of 1.2–1.4 m/s (constant linear velocity, CLV)—equivalent to approximately 500 RPM at the inside of the disc, and approximately 200 RPM at the outside edge. The track on the CD begins at the inside and spirals outward so a disc played from beginning to end slows its rotation rate during playback. The program area is 86.05 cm² and the length of the recordable spiral is about 5.38 km (the program area divided by the 1.6 µm track pitch). With a scanning speed of 1.2 m/s, the playing time is 74 minutes, or 650 MiB of data on a CD-ROM. A disc with data packed slightly more densely is tolerated by most players (though some old ones fail). Using a linear velocity of 1.2 m/s and a narrower track pitch of 1.5 µm increases the playing time to 80 minutes, and data capacity to 700 MiB. A CD is read by focusing a 780 nm wavelength (near infrared) semiconductor laser through the bottom of the polycarbonate layer. The change in height between pits and lands results in a difference in the way the light is reflected. Because the pits are indented into the top layer of the disc and are read through the transparent polycarbonate base, the pits form bumps when read. 
The laser hits the disc, casting a circle of light wider than the modulated spiral track reflecting partially from the lands and partially from the top of any bumps where they are present. As the laser passes over a pit (bump), its height means that the part of the light reflected from its peak is 1/2 wavelength out of phase with the light reflected from the land around it. This causes partial cancellation of the laser's reflection from the surface. By measuring the reflected intensity change with a photodiode, a modulated signal is read back from the disc. To accommodate the spiral pattern of data, the laser is placed on a mobile mechanism within the disc tray of any CD player. This mechanism typically takes the form of a sled that moves along a rail. The sled can be driven by a worm gear or linear motor. Where a worm gear is used, a second shorter-throw linear motor, in the form of a coil and magnet, makes fine position adjustments to track eccentricities in the disc at high speed. Some CD drives (particularly those manufactured by Philips during the 1980s and early 1990s) use a swing arm similar to that seen on a gramophone. This mechanism allows the laser to read information from the center to the edge of a disc without having to interrupt the spinning of the disc itself. The pits and lands do "not" directly represent the 0's and 1's of binary data. Instead, non-return-to-zero, inverted encoding is used: a change from either pit to land or land to pit indicates a 1, while no change indicates a series of 0's. There must be at least 2, and no more than 10 0's between each 1, which is defined by the length of the pit. This, in turn, is decoded by reversing the eight-to-fourteen modulation used in mastering the disc, and then reversing the cross-interleaved Reed–Solomon coding, finally revealing the raw data stored on the disc. 
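The pit/land reading convention described above can be sketched in a few lines. This is a minimal illustration, not decoder code from any real drive; the "P"/"L" level labels and the function names are ours:

```python
def nrzi_decode(levels):
    """NRZI reading: a pit/land transition decodes to a 1, no change to a 0."""
    return [1 if a != b else 0 for a, b in zip(levels, levels[1:])]

def run_lengths_ok(bits):
    """Red Book channel constraint: between consecutive 1s there are at
    least two and at most ten 0s, set by the physical pit/land lengths."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    return all(2 <= j - i - 1 <= 10 for i, j in zip(ones, ones[1:]))

# A land-pit-land stretch of track: only the two pit edges decode to 1s.
track = "LLLPPPLLLL"
bits = nrzi_decode(track)  # [0, 0, 1, 0, 0, 1, 0, 0, 0]
assert run_lengths_ok(bits)
```

Note that the constraint check explains why pits vary from 850 nm to 3.5 µm in length: a pit or land must span at least 3 and at most 11 channel-bit positions.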
These encoding techniques (defined in the "Red Book") were originally designed for CD Digital Audio, but they later became a standard for almost all CD formats (such as CD-ROM). CDs are susceptible to damage during handling and from environmental exposure. Pits are much closer to the label side of a disc, enabling defects and contaminants on the clear side to be out of focus during playback. Consequently, CDs are more likely to suffer damage on the label side of the disc. Scratches on the clear side can be repaired by refilling them with similar refractive plastic or by careful polishing. The edges of CDs are sometimes incompletely sealed, allowing gases and liquids to enter the CD and corrode the metal reflective layer and/or interfere with the focus of the laser on the pits, a condition known as disc rot. The fungus "Geotrichum candidum" has been found—under conditions of high heat and humidity—to consume the polycarbonate plastic and aluminium found in CDs. The digital data on a CD begins at the center of the disc and proceeds toward the edge, which allows adaptation to the different size formats available. Standard CDs are available in two sizes. By far, the most common is 120 mm in diameter, with a 74- or 80-minute audio capacity and a 650 or 700 MiB (737,280,000-byte) data capacity. Discs are 1.2 mm thick, with a 15 mm center hole. The official Philips history says this capacity was specified by Sony executive Norio Ohga to be able to contain the entirety of Beethoven's Ninth Symphony on one disc. This is a myth according to Kees Immink, as the code format had not yet been decided in December 1979. The adoption of EFM in June 1980 would have allowed a playing time of 97 minutes for 120 mm diameter or 74 minutes for a disc as small as 100 mm, but instead the information density was lowered by 30 percent to keep the playing time at 74 minutes. The 120 mm diameter has been adopted by subsequent formats, including Super Audio CD, DVD, HD DVD, and Blu-ray Disc. 
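The capacity figures quoted in this section can be cross-checked from the stated geometry and the Red Book audio rate. This is a rough back-of-the-envelope sketch; the 2,352/2,048-byte split assumed here is the standard CD-ROM Mode 1 sector framing:

```python
import math

# Geometry from the text: program area between radii 25 mm and 58 mm,
# 1.6 um track pitch, 1.2 m/s constant linear velocity.
area_mm2 = math.pi * (58**2 - 25**2)   # ~8,605 mm^2, i.e. 86.05 cm^2
spiral_m = area_mm2 / 1.6e-3 / 1000    # spiral length = area / pitch, ~5.4 km
minutes = spiral_m / 1.2 / 60          # ~74 minutes of playback at CLV

# Red Book audio: 2 channels x 16 bits x 44,100 samples per second.
audio_rate = 44_100 * 2 * 2            # 176,400 bytes per second
raw_bytes = audio_rate * 74 * 60       # ~783 MB of raw samples in 74 minutes

# Read as CD-ROM Mode 1 sectors, only 2,048 of every 2,352 bytes are user
# data (the rest is sync, header, and extra error correction), which is
# where the familiar ~650 MiB figure for a 74-minute disc comes from.
data_mib = raw_bytes // 2352 * 2048 / 2**20
```

The same arithmetic with an 80-minute disc and the 2,048-byte payload lands near the 700 MiB figure quoted for the denser 1.5 µm pitch.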
The 80-mm-diameter discs ("Mini CDs") can hold up to 24 minutes of music or 210 MiB. The logical format of an audio CD (officially Compact Disc Digital Audio or CD-DA) is described in a document produced in 1980 by the format's joint creators, Sony and Philips. The document is known colloquially as the "Red Book" CD-DA after the color of its cover. The format is a two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate per channel. Four-channel sound was to be an allowable option within the "Red Book" format, but has never been implemented. Monaural audio has no existing standard on a "Red Book" CD; thus, the mono source material is usually presented as two identical channels in a standard "Red Book" stereo track (i.e., mirrored mono); an MP3 CD, however, can have audio file formats with mono sound. CD-Text is an extension of the "Red Book" specification for an audio CD that allows for the storage of additional text information (e.g., album name, song name, artist) on a standards-compliant audio CD. The information is stored either in the lead-in area of the CD, where there is roughly five kilobytes of space available or in the subcode channels R to W on the disc, which can store about 31 megabytes. Compact Disc + Graphics is a special audio compact disc that contains graphics data in addition to the audio data on the disc. The disc can be played on a regular audio CD player, but when played on a special CD+G player, it can output a graphics signal (typically, the CD+G player is hooked up to a television set or a computer monitor); these graphics are almost exclusively used to display lyrics on a television set for karaoke performers to sing along with. The CD+G format takes advantage of the channels R through W. These six bits store the graphics information. CD + Extended Graphics (CD+EG, also known as CD+XG) is an improved variant of the Compact Disc + Graphics (CD+G) format. 
Like CD+G, CD+EG uses basic CD-ROM features to display text and video information in addition to the music being played. This extra data is stored in subcode channels R-W. Very few, if any, CD+EG discs have been published. Super Audio CD (SACD) is a high-resolution read-only optical audio disc format that was designed to provide higher-fidelity digital audio reproduction than the "Red Book". Introduced in 1999, it was developed by Sony and Philips, the same companies that created the "Red Book". SACD was in a format war with DVD-Audio, but neither has replaced audio CDs. The SACD standard is referred to as the "Scarlet Book" standard. Titles in the SACD format can be issued as hybrid discs; these discs contain the SACD audio stream as well as a standard audio CD layer which is playable in standard CD players, thus making them backward compatible. CD-MIDI is a format used to store music-performance data, which upon playback is performed by electronic instruments that synthesize the audio. Hence, unlike the original "Red Book" CD-DA, these recordings are not digitally sampled audio recordings. The CD-MIDI format is defined as an extension of the original "Red Book". For the first few years of its existence, the CD was a medium used purely for audio. However, in 1988, the "Yellow Book" CD-ROM standard was established by Sony and Philips, which defined a non-volatile optical data computer data storage medium using the same physical format as audio compact discs, readable by a computer with a CD-ROM drive. Video CD (VCD, View CD, and Compact Disc digital video) is a standard digital format for storing video media on a CD. VCDs are playable in dedicated VCD players, most modern DVD-Video players, personal computers, and some video game consoles. The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC and is referred to as the "White Book" standard. Overall picture quality is intended to be comparable to VHS video. 
Poorly compressed VCD video can sometimes be lower quality than VHS video, but VCD exhibits block artifacts rather than analog noise and does not deteriorate further with each use. 352×240 (or SIF) resolution was chosen because it is half the vertical and half the horizontal resolution of the NTSC video. 352×288 is similarly one-quarter PAL/SECAM resolution. This approximates the (overall) resolution of an analog VHS tape, which, although it has double the number of (vertical) scan lines, has a much lower horizontal resolution. Super Video CD (Super Video Compact Disc or SVCD) is a format used for storing video media on standard compact discs. SVCD was intended as a successor to VCD and an alternative to DVD-Video and falls somewhere between both in terms of technical capability and picture quality. SVCD has two thirds the resolution of DVD, and over 2.7 times the resolution of VCD. One CD-R disc can hold up to 60 minutes of standard-quality SVCD-format video. While no specific limit on SVCD video length is mandated by the specification, one must lower the video bit rate, and therefore quality, to accommodate very long videos. It is usually difficult to fit much more than 100 minutes of video onto one SVCD without incurring significant quality loss, and many hardware players are unable to play video with an instantaneous bit rate lower than 300 to 600 kilobits per second. Photo CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed to hold nearly 100 high-quality images, scanned prints and slides using special proprietary encoding. Photo CDs are defined in the "Beige Book" and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended to play on CD-i players, Photo CD players, and any computer with suitable software (irrespective of operating system). The images can also be printed out on photographic paper with a special Kodak machine. 
This format is not to be confused with Kodak Picture CD, which is a consumer product in CD-ROM format. The Philips "Green Book" specifies a standard for interactive multimedia compact discs designed for CD-i players (1993). CD-i discs can contain audio tracks that can be played on regular CD players, but CD-i discs are not compatible with most CD-ROM drives and software. The CD-i Ready specification was later created to improve compatibility with audio CD players, and the CD-i Bridge specification was added to create CD-i compatible discs that can be accessed by regular CD-ROM drives. Philips defined a format similar to CD-i called CD-i Ready, which puts CD-i software and data into the pregap of track 1. This format was supposed to be more compatible with older audio CD players. Enhanced Music CD, also known as CD Extra or CD Plus, is a format which combines audio tracks and data tracks on the same disc by putting audio tracks in a first session and data in a second session. It was developed by Philips and Sony, and it is defined in the "Blue Book". VinylDisc is the hybrid of a standard audio CD and the vinyl record. The vinyl layer on the disc's label side can hold approximately three minutes of music. In 1995, material costs were 30 cents for the jewel case and 10 to 15 cents for the CD. Wholesale cost of CDs was $0.75 to $1.15, while the typical retail price of a prerecorded music CD was $16.98. On average, the store received 35 percent of the retail price, the record company 27 percent, the artist 16 percent, the manufacturer 13 percent, and the distributor 9 percent. When 8-track tapes, cassette tapes, and CDs were introduced, each was marketed at a higher price than the format they succeeded, even though the cost to produce the media was reduced. This was done because the apparent value increased. This continued from vinyl to CDs but was broken when Apple marketed MP3s for $0.99, and albums for $9.99. 
The incremental cost, though, to produce an MP3 is negligible. Recordable Compact Discs, CD-Rs, are injection-molded with a "blank" data spiral. A photosensitive dye is then applied, after which the discs are metalized and lacquer-coated. The write laser of the CD recorder changes the color of the dye to allow the read laser of a standard CD player to see the data, just as it would with a standard stamped disc. The resulting discs can be read by most CD-ROM drives and played in most audio CD players. CD-Rs follow the "Orange Book" standard. CD-R recordings are designed to be permanent. Over time, the dye's physical characteristics may change, causing read errors and data loss, until the reading device can no longer recover the data with error-correction methods. The design life is from 20 to 100 years, depending on the quality of the discs, the quality of the writing drive, and storage conditions. However, testing has demonstrated such degradation of some discs in as little as 18 months under normal storage conditions. This failure is known as disc rot, for which there are several, mostly environmental, reasons. The recordable audio CD is designed to be used in a consumer audio CD recorder. These consumer audio CD recorders use SCMS (Serial Copy Management System), an early form of digital rights management (DRM), to conform to the AHRA (Audio Home Recording Act). The Recordable Audio CD is typically somewhat more expensive than CD-R due to lower production volume and a 3 percent AHRA royalty used to compensate the music industry for the making of a copy. High-capacity recordable CD is a higher-density recording format that can hold 20% more data than conventional discs. The higher capacity is incompatible with some recorders and recording software. CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write laser, in this case, is used to heat and alter the properties (amorphous vs. crystalline) of the alloy, and hence change its reflectivity. 
A CD-RW does not have as great a difference in reflectivity as a pressed CD or a CD-R, and so many earlier CD audio players "cannot" read CD-RW discs, although "most" later CD audio players and stand-alone DVD players can. CD-RWs follow the "Orange Book" standard. The ReWritable Audio CD is designed to be used in a consumer audio CD recorder, which will not (without modification) accept standard CD-RW discs. These consumer audio CD recorders use the Serial Copy Management System (SCMS), an early form of digital rights management (DRM), to conform to the United States' Audio Home Recording Act (AHRA). The ReWritable Audio CD is typically somewhat more expensive than CD-RW due to (a) lower volume and (b) a 3 percent AHRA royalty used to compensate the music industry for the making of a copy. The "Red Book" audio specification, except for a simple "anti-copy" statement in the subcode, does not include any copy protection mechanism. Known at least as early as 2001, attempts were made by record companies to market "copy-protected" non-standard compact discs, which cannot be ripped, or copied, to hard drives or easily converted to other formats (like FLAC, MP3 or Vorbis). One major drawback to these copy-protected discs is that most will not play on either computer CD-ROM drives or some standalone CD players that use CD-ROM mechanisms. Philips has stated that such discs are not permitted to bear the trademarked "Compact Disc Digital Audio" logo because they violate the "Red Book" specifications. Numerous copy-protection systems have been countered by readily available, often free, software, or even by simply turning off automatic AutoPlay to prevent the running of the DRM executable program. After the fall in popularity of CDs, old discs and failed CD-Rs have been repurposed as bird deterrents, since sunlight reflecting off a moving disc can scare birds away.
https://en.wikipedia.org/wiki?curid=6429
Charles Farrar Browne Charles Farrar Browne (April 26, 1834 – March 6, 1867) was an American humor writer, better known under his "nom de plume", Artemus Ward. He is considered to be America's first stand-up comedian. His birth name was Brown but he added the "e" after he became famous. Browne was born in Waterford, Maine. He began his career as a compositor and occasional contributor to the daily and weekly journals. In 1858, he published in "The Plain Dealer" (Cleveland, Ohio) the first of the "Artemus Ward" series, which, in a collected form, achieved great popularity in both America and England. Browne's companion at the "Plain Dealer", George Hoyt, wrote: "his desk was a rickety table which had been whittled and gashed until it looked as if it had been the victim of lightning. His chair was a fit companion thereto, a wabbling, unsteady affair, sometimes with four and sometimes with three legs. But Browne saw neither the table, nor the chair, nor any person who might be near, nothing, in fact, but the funny pictures which were tumbling out of his brain. When writing, his gaunt form looked ridiculous enough. One leg hung over the arm of his chair like a great hook, while he would write away, sometimes laughing to himself, and then slapping the table in the excess of his mirth." In 1860, he became editor of "Vanity Fair", a humorous New York weekly, which proved a failure. About the same time, he began to appear as a lecturer and, by his droll and eccentric humor, attracted large audiences. In 1863, Browne came as Artemus Ward to San Francisco to perform. Browne was an expert at publicity and by the time of his arrival, his manager had already been there for weeks advertising with notices in the local papers and talking with prominent citizens for endorsements. On November 13, 1863, he performed to a packed crowd at Platt's Music Hall. Ward played the part of Artemus as an illiterate rube but with "Yankee common sense." 
Writer Bret Harte was in the audience that night, and he described it in "The Golden Era" as capturing American speech: "humor that belongs to the country of boundless prairies, limitless rivers, and stupendous cataracts--that fun which overlies the surface of our national life, which is met in the stage, rail-car, canal and flat-boat, which bursts out over camp-fires and around bar-room stoves." "Artemus Ward" was the favorite author of U.S. President Abraham Lincoln. Before presenting "The Emancipation Proclamation" to his Cabinet, Lincoln read to them the latest episode, "Outrage in Utiky", also known as "High-Handed Outrage at Utica". Browne was also known as a member of the New York Bohemian set, which included leader Henry Clapp Jr., Walt Whitman, Fitz Hugh Ludlow, and actress Adah Isaacs Menken. Ward met Mark Twain when Ward performed in Virginia City, Nevada, and the two became friends. In his correspondence with Twain, Browne called him "My Dearest Love." Legend has it that, following Ward's stage performance, he, Mark Twain, and Dan De Quille were taking a drunken rooftop tour of Virginia City until a town constable threatened to blast all three of them with a shotgun loaded with rock salt. Browne recommended Twain to the editors of the New York "Saturday Press" and urged him to journey to New York. In 1866, Ward visited England, where he became exceedingly popular both as a lecturer and as a contributor to "Punch". In the spring of the following year, Ward's health gave way and he died of tuberculosis at Southampton on March 6, 1867. After initially being buried at Kensal Green Cemetery, Ward's remains were removed to the United States on May 20, 1868. He is buried at Elm Vale Cemetery in Waterford, Maine.
https://en.wikipedia.org/wiki?curid=6431
Caelum Caelum is a faint constellation in the southern sky, introduced in the 1750s by Nicolas Louis de Lacaille and counted among the 88 modern constellations. Its name means "chisel" in Latin, and it was formerly known as Caelum Scalptorium ("the engravers' chisel"). It is a rare word, unrelated to the far more common Latin "caelum", meaning "sky, heaven, atmosphere". It is the eighth-smallest constellation, and subtends a solid angle of around 0.038 steradians, just less than that of Corona Australis. Due to its small size and location away from the plane of the Milky Way, Caelum is a rather barren constellation, with few objects of interest. The constellation's brightest star, Alpha Caeli, is only of magnitude 4.45, and only one other star, (Gamma) γ 1 Caeli, is brighter than magnitude 5. Other notable objects in Caelum are RR Caeli, a binary star with one known planet approximately away; X Caeli, a Delta Scuti variable that forms an optical double with γ 1 Caeli; and HE0450-2958, a Seyfert galaxy that at first appeared as just a jet, with no host galaxy visible. Caelum was introduced as one of fourteen southern constellations in the 18th century by Nicolas Louis de Lacaille, a French astronomer of the Age of Enlightenment. It retains its name "Burin" among French speakers, latinized in his catalogue of 1763 as "Caelum Sculptoris" ("Engraver's Chisel"). Francis Baily shortened this name to "Caelum", as suggested by John Herschel. In Lacaille's original chart, it was shown as a pair of engraver's tools: a standard burin and a more specific shape-forming échoppe tied by a ribbon, but it came to be ascribed a simple chisel. Johann Elert Bode rendered the name as plural with a singular possessor, "Caela Scalptoris" – in German ("die") "Grabstichel" ("the Engraver's Chisels") – but this did not stick. Caelum is bordered by Dorado and Pictor to the south, Horologium and Eridanus to the east, Lepus to the north, and Columba to the west. 
Covering only 125 square degrees, it ranks 81st of the 88 modern constellations in size. Its main asterism consists of four stars, and twenty stars in total are brighter than magnitude 6.5. The constellation's boundaries, as set by Eugène Delporte in 1930, are a 12-sided polygon. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , and the declinations of to . The International Astronomical Union (IAU) adopted the three-letter abbreviation "Cae" for the constellation in 1922. Its main stars are visible, in favourable conditions and with a clear southern horizon, for part of the year as far north as about the 41st parallel. These stars avoid being engulfed by daylight for some of every day (when above the horizon) to viewers in mid- and well-inhabited higher latitudes of the Southern Hemisphere, because Caelum, like Taurus, Eridanus and Orion to its north, culminates at midnight in December (high summer in the Southern Hemisphere). In winter (such as June) the constellation can be observed, sufficiently clear of the horizon, rising before dawn or setting after dusk, as it then culminates at around midday, well above the Sun. In South Africa, Argentina, their subtropical neighbouring areas and parts of Australia, the key stars may be traced before dawn in the east in June; near the equator the stars lose night potential in May to June; they compete poorly with the Sun in the northern tropics and subtropics from late February to mid-September, with March evenings being unfavorable due to the light of the Milky Way. Caelum is a faint constellation: It has no star brighter than magnitude 4 and only two stars brighter than magnitude 5. Lacaille gave six stars Bayer designations, labeling them Alpha (α) to Zeta (ζ) in 1756, but omitted Epsilon (ε) and designated two adjacent stars as Gamma (γ). Bode extended the designations to Rho (ρ) for other stars, but most of these have fallen out of use. 
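The area figures quoted above can be checked with simple arithmetic: converting square degrees to steradians confirms that 125 square degrees is about 0.038 steradians, and shows what fraction of the celestial sphere that covers. A minimal, purely illustrative Python sketch:

```python
import math

# Illustrative check of the area figures quoted for Caelum: convert
# 125 square degrees to steradians and to a fraction of the full sky.
DEG2_PER_SR = (180 / math.pi) ** 2      # square degrees per steradian

area_deg2 = 125
area_sr = area_deg2 / DEG2_PER_SR       # about 0.038 sr, as stated
sky_fraction = area_sr / (4 * math.pi)  # the sphere subtends 4*pi sr

print(f"{area_sr:.4f} sr")              # 0.0381 sr
print(f"{sky_fraction:.2%} of the sky") # about 0.30%
```

The result agrees with the 0.038-steradian figure given for the constellation.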
Caelum is too far south for any of its stars to bear Flamsteed designations. The brightest star, (Alpha) α Caeli, is a double star, containing an F-type main-sequence star of magnitude 4.45 and a red dwarf of magnitude 12.5, from Earth. (Beta) β Caeli, another F-type star of magnitude 5.05, is further away, being located from Earth. Unlike α, β Caeli is a subgiant star, slightly evolved from the main sequence. (Delta) δ Caeli, also of magnitude 5.05, is a B-type subgiant and is much farther from Earth, at . (Gamma) γ 1 Caeli is a double star with a red giant primary of magnitude 4.58 and a secondary of magnitude 8.1. The primary is from Earth. The two components are difficult to resolve with small amateur telescopes because of their difference in visual magnitude and their close separation. This star system forms an optical double with the unrelated X Caeli (previously named γ 2 Caeli), a Delta Scuti variable located from Earth. These are a class of short-period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. X Caeli itself is also a binary star, specifically a contact binary, meaning that the stars are so close that they share envelopes. The only other variable star in Caelum visible to the naked eye is RV Caeli, a pulsating red giant of spectral type M1III, which varies between magnitudes 6.44 and 6.56. Three other stars in Caelum are still occasionally referred to by their Bayer designations, although they are only on the edge of naked-eye visibility. (Nu) ν Caeli is another double star, containing a white giant of magnitude 6.07 and a star of magnitude 10.66, of unknown spectral type. The system is approximately away. (Lambda) λ Caeli, at magnitude 6.24, is much redder and farther away, being a red giant around from Earth. (Zeta) ζ Caeli is even fainter, being only of magnitude 6.36. This star, located away, is a K-type subgiant of spectral type K1. 
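The difficulty of resolving γ 1 Caeli's components follows from the logarithmic magnitude scale: a difference of Δm magnitudes corresponds to a brightness ratio of 100^(Δm/5). A short illustrative Python check, using the magnitudes quoted above:

```python
# Illustrative use of the astronomical magnitude scale: a difference of
# dm magnitudes means a brightness ratio of 100 ** (dm / 5).  Applied to
# the two components of Gamma-1 Caeli (magnitudes 4.58 and 8.1):
def brightness_ratio(m_faint, m_bright):
    return 100 ** ((m_faint - m_bright) / 5)

print(round(brightness_ratio(8.1, 4.58), 1))  # primary is ~25.6x brighter
```

A twenty-five-fold difference in brightness between two closely separated stars is why small amateur telescopes struggle to split the pair.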
The other twelve naked-eye stars in Caelum are not referred to by Bode's Bayer designations anymore, including RV Caeli. One of the nearest stars in Caelum is the eclipsing binary star RR Caeli, at a distance of . This star system consists of a dim red dwarf and a white dwarf. Despite its closeness to the Earth, the system's apparent magnitude is only 14.40 due to the faintness of its components, and thus it cannot be easily seen with amateur equipment. In 2012, the system was found to contain a giant planet, and there is evidence for a second substellar body. The system is a post-common-envelope binary and is losing angular momentum over time, which will eventually cause mass transfer from the red dwarf to the white dwarf. In approximately 9–20 billion years, this will cause the system to become a cataclysmic variable. Due to its small size and location away from the plane of the Milky Way, Caelum is rather devoid of deep-sky objects, and contains no Messier objects. The only deep-sky object in Caelum to receive much attention is HE0450-2958, an unusual Seyfert galaxy. Originally, the jet's host galaxy proved elusive to find, and this jet appeared to be emanating from nothing. Although it has been suggested that the object is an ejected supermassive black hole, the host is now agreed to be a small galaxy that is difficult to see due to light from the jet and a nearby starburst galaxy. The 13th magnitude planetary nebula PN G243-37.1 is also in the eastern regions of the constellation. It is one of only a few planetary nebulae found in the galactic halo, being light-years below the Milky Way's 1000 light-year-thick disk.
https://en.wikipedia.org/wiki?curid=6432
Clarinet The clarinet is a family of woodwind instruments. It has a single-reed mouthpiece, a straight, cylindrical tube with an almost cylindrical bore, and a flared bell. A person who plays a clarinet is called a "clarinetist" (sometimes spelled "clarinettist"). While the similarity in sound between the earliest clarinets and the trumpet may hold a clue to its name, other factors may have been involved. During the Late Baroque era, composers such as Bach and Handel were making new demands on the skills of their trumpeters, who were often required to play difficult melodic passages in the high, or as it came to be called, "clarion" register. Since the trumpets of this time had no valves or pistons, melodic passages would often require the use of the highest part of the trumpet's range, where the harmonics were close enough together to produce scales of adjacent notes as opposed to the gapped scales or arpeggios of the lower register. The trumpet parts that required this specialty were known by the term "clarino" and this in turn came to apply to the musicians themselves. It is probable that the term clarinet may stem from the diminutive version of the 'clarion' or 'clarino' and it has been suggested that clarino players may have helped themselves out by playing particularly difficult passages on these newly developed "mock trumpets". Johann Christoph Denner is generally believed to have invented the clarinet in Germany around the year 1700 by adding a register key to the earlier chalumeau, usually in the key of C. Over time, additional keywork and airtight pads were added to improve the tone and playability. In modern times, the most common clarinet is the B♭ clarinet. However, the clarinet in A, just a semitone lower, is regularly used in orchestral, chamber and solo music. An orchestral clarinetist must own both a clarinet in A and B♭ since the repertoire is divided fairly evenly between the two. 
Since the middle of the 19th century the bass clarinet (nowadays invariably in B♭ but with extra keys to extend the register down to low written C3) has become an essential addition to the orchestra. The clarinet family ranges from the (extremely rare) BBB♭ octo-contrabass to the A♭ piccolo clarinet. The clarinet has proved to be an exceptionally flexible instrument, used in the classical repertoire as well as in concert bands, military bands, marching bands, klezmer, jazz, and other styles. The word "clarinet" may have entered the English language via the French "clarinette" (the feminine diminutive of Old French "clarin" or "clarion"), or from Provençal "", "oboe". It would seem, however, that its real roots are to be found among some of the various names for trumpets used around the Renaissance and Baroque eras. "Clarion", "clarin", and the Italian "clarino" are all derived from the medieval term "claro", which referred to an early form of trumpet. This is probably the origin of the Italian "clarinetto", itself a diminutive of "clarino", and consequently of the European equivalents such as "clarinette" in French or the German "Klarinette". According to Johann Gottfried Walther, writing in 1732, the reason for the name is that "it sounded from far off not unlike a trumpet". The English form "clarinet" is found as early as 1733, and the now-archaic "clarionet" appears from 1784 until the early years of the 20th century. The cylindrical bore is primarily responsible for the clarinet's distinctive timbre, which varies between its three main registers, known as the "chalumeau", "clarion", and "altissimo". The tone quality can vary greatly with the clarinetist, music, instrument, mouthpiece, and reed. The differences in instruments and geographical isolation of clarinetists led to the development from the last part of the 18th century onwards of several different schools of playing. The most prominent were the German/Viennese traditions and the French school. 
The latter was centered on the clarinetists of the Conservatoire de Paris. The proliferation of recorded music has made examples of different styles of playing available. The modern clarinetist has a diverse palette of "acceptable" tone qualities to choose from. The A and B♭ clarinets have nearly the same bore and use the same mouthpiece. Orchestral clarinetists using the A and B♭ instruments in a concert could use the same mouthpiece (and often the same barrel) (see 'usage' below). The A and B♭ have nearly identical tonal quality, although the A typically has a slightly warmer sound. The tone of the E♭ clarinet is brighter and can be heard even through loud orchestral or concert band textures. The bass clarinet has a characteristically deep, mellow sound, while the alto clarinet is similar in tone to the bass (though not as dark). Clarinets have the largest pitch range of common woodwinds. The intricate key organization that makes this possible can make the playability of some passages awkward. The bottom of the clarinet's written range is defined by the keywork on each instrument, standard keywork schemes allowing a low E on the common B♭ clarinet. The lowest concert pitch depends on the transposition of the instrument in question. The nominal highest note of the B♭ clarinet is a semitone higher than the highest note of the oboe, but this depends on the setup and skill of the player. Since the clarinet has a wider range of notes, the lowest note of the B♭ clarinet is significantly deeper (a minor or major sixth) than the lowest note of the oboe. Nearly all soprano and piccolo clarinets have keywork enabling them to play the E below middle C as their lowest written note (in scientific pitch notation that sounds D3 on a soprano clarinet or C4, i.e. concert middle C, on a piccolo clarinet), though some B♭ clarinets go down to E♭3 to enable them to match the range of the A clarinet. 
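The transposition arithmetic described here can be sketched in a few lines of Python. The note-naming helper and MIDI-style note numbers are illustrative conventions, not from any music library: a written note on a B-flat clarinet sounds two semitones lower than written, and on an A clarinet three semitones lower.

```python
# Sketch of transposition arithmetic for clarinets (names and values are
# illustrative).  MIDI convention: note 60 = C4, one unit = one semitone.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def written_to_concert(midi_note, semitones_down):
    """Concert pitch of a written note on a transposing instrument."""
    return midi_note - semitones_down

def name(midi_note):
    octave = midi_note // 12 - 1            # MIDI 60 = C4
    return f"{NOTE_NAMES[midi_note % 12]}{octave}"

written_e3 = 52                              # written low E (E3)
print(name(written_to_concert(written_e3, 2)))  # B-flat clarinet -> D3
print(name(written_to_concert(written_e3, 3)))  # A clarinet -> C#3
```

This reproduces the statement in the text that a written low E sounds D3 on the B-flat soprano clarinet, a whole tone below the written pitch.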
On the B♭ soprano clarinet, the concert pitch of the lowest note is D3, a whole tone lower than the written pitch. Most alto and bass clarinets have an extra key to allow a (written) E♭3. Modern professional-quality bass clarinets generally have additional keywork to written C3. Among the less commonly encountered members of the clarinet family, contra-alto and contrabass clarinets may have keywork to written E♭3, D3, or C3; the basset clarinet and basset horn generally go to low C3. Defining the top end of a clarinet's range is difficult, since many advanced players can produce notes well above the highest notes commonly found in method books. G6 is usually the highest note clarinetists encounter in classical repertoire. The C above that (C7, i.e. resting on the fifth ledger line above the treble staff) is attainable by advanced players and is shown on many fingering charts, and fingerings as high as A7 exist. The range of a clarinet can be divided into three distinct registers: the chalumeau, the clarion, and the altissimo. All three registers have characteristically different sounds. The chalumeau register is rich and dark. The clarion register is brighter and sweet, like a trumpet ("clarion") heard from afar. The altissimo register can be piercing and sometimes shrill. Sound is a wave that propagates through the air as a result of a local variation in air pressure. In the clarinet, sound is produced by a repeating cycle of reed vibration and pressure waves in the bore. The cycle repeats at a frequency relative to how long it takes a wave to travel to the first open hole and back twice (i.e. four times the length of the pipe). For example: when all the holes bar the very top one are open (i.e. the trill 'B♭' key is pressed), the note A4 (440 Hz) is produced. This represents a repeat of the cycle 440 times per second. In addition to this primary compression wave, other waves, known as harmonics, are created. 
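The closed-pipe relationship described above, with the frequency set by the round-trip time over four pipe lengths, can be checked numerically. A minimal sketch, assuming a round-figure speed of sound of 343 m/s:

```python
# Sketch of the stopped-pipe estimate: fundamental f = v / (4 * L), so the
# A4 = 440 Hz example in the text implies an effective reed-to-open-hole
# length of roughly 19.5 cm.  The 343 m/s speed of sound is an assumption.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def effective_length(freq_hz):
    """Effective pipe length of a stopped cylinder at a given fundamental."""
    return SPEED_OF_SOUND / (4 * freq_hz)

print(f"{effective_length(440):.4f} m")  # about 0.195 m
```

The result is a plausible distance from the reed to the first open hole on a soprano clarinet, which is what the closed-pipe model predicts.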
Harmonics are caused by factors including the imperfect wobbling and shaking of the reed, the reed sealing the mouthpiece opening for part of the wave cycle (which creates a flattened section of the sound wave), and imperfections (bumps and holes) in the bore. A wide variety of compression waves are created, but only some (primarily the odd harmonics) are reinforced. These extra waves are what gives the clarinet its characteristic tone. The bore is cylindrical for most of the tube with an inner bore diameter between , but there is a subtle hourglass shape, with the thinnest part below the junction between the upper and lower joint. The reduction is depending on the maker. This hourglass shape, although invisible to the naked eye, helps to correct the pitch/scale discrepancy between the chalumeau and clarion registers (perfect twelfth). The diameter of the bore affects characteristics such as available harmonics, timbre, and pitch stability (how far the player can bend a note in the manner required in jazz and other music). The bell at the bottom of the clarinet flares out to improve the tone and tuning of the lowest notes. Most modern clarinets have "undercut" tone holes that improve intonation and sound. Undercutting means chamfering the bottom edge of tone holes inside the bore. Acoustically, this makes the tone hole function as if it were larger, but its main function is to allow the air column to follow the curve up through the tone hole (surface tension) instead of "blowing past" it under the increasingly directional frequencies of the upper registers. The fixed reed and fairly uniform diameter of the clarinet give the instrument an acoustical behavior approximating that of a cylindrical stopped pipe. Recorders use a tapered internal bore to overblow at the octave when the thumb/register hole is pinched open, while the clarinet, with its cylindrical bore, overblows at the twelfth. 
Adjusting the angle of the bore taper controls the frequencies of the overblown notes (harmonics). Changing the mouthpiece's tip opening and the length of the reed changes aspects of the harmonic timbre or voice of the clarinet because this changes the speed of reed vibrations. Generally, the goal of the clarinetist when producing a sound is to make as much of the reed vibrate as possible, making the sound fuller, warmer, and potentially louder. The lip position and pressure, shaping of the vocal tract, choice of reed and mouthpiece, amount of air pressure created, and evenness of the airflow account for most of the clarinetist's ability to control the tone of a clarinet. A highly skilled clarinetist will provide the ideal lip and air pressure for each frequency (note) being produced. They will have an embouchure which places an even pressure across the reed by carefully controlling their lip muscles. The airflow will also be carefully controlled by using the strong stomach muscles (as opposed to the weaker and erratic chest muscles) and they will use the diaphragm to oppose the stomach muscles to achieve a tone softer than a forte rather than weakening the stomach muscle tension to lower air pressure. Their vocal tract will be shaped to resonate at frequencies associated with the tone being produced. Covering or uncovering the tone holes varies the length of the pipe, changing the resonant frequencies of the enclosed air column and hence the pitch. A clarinetist moves between the chalumeau and clarion registers through use of the register key; clarinetists call the change from chalumeau register to clarion register "the break". The open register key stops the fundamental frequency from being reinforced, and the reed is forced to vibrate at three times the speed it was originally. This produces a note a twelfth above the original note. 
Most instruments overblow at two times the speed of the fundamental frequency (the octave), but as the clarinet acts as a closed pipe system, the reed cannot vibrate at twice its original speed because it would be creating a 'puff' of air at the time the previous 'puff' is returning as a rarefaction. This means it cannot be reinforced and so would die away. The chalumeau register plays fundamentals, whereas the clarion register, aided by the register key, plays third harmonics (a perfect twelfth higher than the fundamentals). The first several notes of the altissimo range, aided by the register key and venting with the first left-hand hole, play fifth harmonics (a major seventeenth, a perfect twelfth plus a major sixth, above the fundamentals). The clarinet is therefore said to overblow at the twelfth and, when moving to the altissimo register, the seventeenth. By contrast, nearly all other woodwind instruments overblow at the octave or (like the ocarina and tonette) do not overblow at all. A clarinet must have holes and keys for nineteen notes, a chromatic octave and a half from bottom E to B♭, in its lowest register to play the chromatic scale. This overblowing behavior explains the clarinet's great range and complex fingering system. The fifth and seventh harmonics are also available, sounding a further major sixth and a flat diminished fifth higher, respectively; these are the notes of the altissimo register. This is also why the inner "waist" measurement is so critical to these harmonic frequencies. The highest notes can have a shrill, piercing quality and can be difficult to tune accurately. Different instruments often play differently in this respect due to the sensitivity of the bore and reed measurements. Using alternate fingerings and adjusting the embouchure helps correct the pitch of these notes. Since approximately 1850, clarinets have been nominally tuned according to twelve-tone equal temperament. Older clarinets were nominally tuned to meantone. 
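The intervals quoted above follow directly from the odd-harmonic series of a stopped pipe: a frequency ratio r spans 12·log2(r) equal-tempered semitones, so the third harmonic lies about 19 semitones (a perfect twelfth) and the fifth about 28 semitones (a major seventeenth) above the fundamental. A small illustrative Python check:

```python
import math

# Odd harmonics of a stopped pipe expressed as equal-tempered intervals:
# a frequency ratio r spans 12 * log2(r) semitones.
def semitones(ratio):
    return 12 * math.log2(ratio)

print(round(semitones(3), 2))  # 19.02 -> a perfect twelfth (19 semitones)
print(round(semitones(5), 2))  # 27.86 -> ~a major seventeenth (28 semitones)
```

This is why the register key raises a chalumeau note by a twelfth rather than the octave produced by most other woodwinds.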
Skilled performers can use their embouchures to considerably alter the tuning of individual notes or produce vibrato, a pulsating change of pitch often employed in jazz. Vibrato is rare in classical or concert band literature; however, certain clarinetists, such as Richard Stoltzman, use vibrato in classical music. Special fingerings may be used to play quarter tones and other microtonal intervals. Around 1900, Dr. Richard H. Stein, a Berlin musicologist, made a quarter-tone clarinet, which was soon abandoned. Years later, another German, Fritz Schüller of Markneukirchen, built a quarter tone clarinet, with two parallel bores of slightly different lengths whose tone holes are operated using the same keywork and a valve to switch from one bore to the other. Clarinet bodies have been made from a variety of materials including wood, plastic, hard rubber, metal, resin, and ivory. The vast majority of clarinets used by professionals are made from African hardwood, mpingo (African Blackwood) or grenadilla, rarely (because of diminishing supplies) Honduran rosewood, and sometimes even cocobolo. Historically other woods, notably boxwood, were used. Most inexpensive clarinets are made of plastic resin, such as ABS. "Resonite" is Selmer's trademark name for its type of plastic. Metal soprano clarinets were popular in the early 20th century until plastic instruments supplanted them; metal construction is still used for the bodies of some contra-alto and contrabass clarinets and the necks and bells of nearly all alto and larger clarinets. Ivory was used for a few 18th-century clarinets, but it tends to crack and does not keep its shape well. Buffet Crampon's Greenline clarinets are made from a composite of grenadilla wood powder and carbon fiber. Such clarinets are less affected by humidity and temperature changes than wooden instruments but are heavier. Hard rubber, such as ebonite, has been used for clarinets since the 1860s, although few modern clarinets are made of it. 
Clarinet designers Alastair Hanson and Tom Ridenour are strong advocates of hard rubber. The Hanson Clarinet Company manufactures clarinets using a grenadilla compound reinforced with ebonite, known as "BTR" (bithermal-reinforced) grenadilla. This material is also not affected by humidity, and the weight is the same as that of a wooden clarinet. Mouthpieces are generally made of hard rubber, although some inexpensive mouthpieces may be made of plastic. Other materials such as crystal/glass, wood, ivory, and metal have also been used. Ligatures are often made of metal and plated in nickel, silver, or gold. Other materials include wire, wire mesh, plastic, naugahyde, string, or leather. The clarinet uses a single reed made from the cane of "Arundo donax", a type of grass. Reeds may also be manufactured from synthetic materials. The ligature fastens the reed to the mouthpiece. When air is blown through the opening between the reed and the mouthpiece facing, the reed vibrates and produces the clarinet's sound. Basic reed measurements are as follows: tip, wide; lay, long (distance from the place where the reed touches the mouthpiece to the tip); gap, (distance between the underside of the reed tip and the mouthpiece). Adjustment to these measurements is one method of affecting tone color. Most clarinetists buy manufactured reeds, although many make adjustments to these reeds, and some make their own reeds from cane "blanks". Reeds come in varying degrees of hardness, generally indicated on a scale from one (soft) through five (hard). This numbering system is not standardized—reeds with the same number often vary in hardness across manufacturers and models. Reed and mouthpiece characteristics work together to determine ease of playability, pitch stability, and tonal characteristics. Note: A Böhm system soprano clarinet is shown in the photos illustrating this section. However, all modern clarinets have similar components. 
The "reed" is attached to the "mouthpiece" by the "ligature", and the top half-inch or so of this assembly is held in the player's mouth. In the past, clarinetists used to wrap a string around the mouthpiece and reed instead of using a ligature. The formation of the mouth around the mouthpiece and reed is called the "embouchure". The reed is on the underside of the mouthpiece, pressing against the player's lower lip, while the top teeth normally contact the top of the mouthpiece (some players roll the upper lip under the top teeth to form what is called a 'double-lip' embouchure). Adjustments in the strength and shape of the embouchure change the tone and intonation (tuning). It is not uncommon for clarinetists to employ methods to relieve the pressure on the upper teeth and inner lower lip by attaching pads to the top of the mouthpiece or putting (temporary) padding on the front lower teeth, commonly from folded paper. Next is the short "barrel"; this part of the instrument may be extended to fine-tune the clarinet. As the pitch of the clarinet is fairly temperature-sensitive, some instruments have interchangeable barrels whose lengths vary slightly. Additional compensation for pitch variation and tuning can be made by pulling out the barrel and thus increasing the instrument's length, particularly common in group playing in which clarinets are tuned to other instruments (such as in an orchestra or concert band). Some performers use a plastic barrel with a thumbwheel that adjusts the barrel length. On basset horns and lower clarinets, the barrel is normally replaced by a curved metal neck. The main body of most clarinets is divided into the "upper joint", the holes and most keys of which are operated by the left hand, and the "lower joint" with holes and most keys operated by the right hand. Some clarinets have a single joint: on some basset horns and larger clarinets the two joints are held together with a screw clamp and are usually not disassembled for storage. 
The left thumb operates both a "tone hole" and the "register key". On some models of clarinet, such as many Albert system clarinets and increasingly some higher-end Böhm system clarinets, the register key is a 'wraparound' key, with the key on the back of the clarinet and the pad on the front. Advocates of the wraparound register key say it improves sound, and it is harder for moisture to accumulate in the tube beneath the pad. Nevertheless, there is a consensus among repair techs that this type of register key is harder to keep in adjustment, i.e., it is hard to have enough spring pressure to close the hole securely. The body of a modern soprano clarinet is equipped with numerous "tone holes" of which seven (six front, one back) are covered with the fingertips, and the rest are opened or closed using a set of keys. These tone holes let the player produce every note of the chromatic scale. On alto and larger clarinets, and a few soprano clarinets, key-covered holes replace some or all finger holes. The most common system of keys was named the Böhm system by its designer Hyacinthe Klosé in honour of flute designer Theobald Böhm, but it is not the same as the Böhm system used on flutes. The other main system of keys is called the Oehler system and is used mostly in Germany and Austria (see History). The related Albert system is used by some jazz, klezmer, and eastern European folk musicians. The Albert and Oehler systems are both based on the early Mueller system. The cluster of keys at the bottom of the upper joint (protruding slightly beyond the cork of the joint) are known as the "trill keys" and are operated by the right hand. These give the player alternative fingerings that make it easy to play ornaments and trills. The entire weight of the smaller clarinets is supported by the right thumb behind the lower joint on what is called the "thumb-rest". Basset horns and larger clarinets are supported with a neck strap or a floor peg. 
Finally, the flared end is known as the "bell". Contrary to popular belief, the bell does not amplify the sound; rather, it improves the uniformity of the instrument's tone for the lowest notes in each register. For the other notes, the sound is produced almost entirely at the tone holes, and the bell is irrelevant. On basset horns and larger clarinets, the bell curves up and forward and is usually made of metal. Theobald Böhm did not directly invent the key system of the clarinet. Böhm was a flautist who created the key system that is now used for the transverse flute. Klosé and Buffet applied Böhm's system to the clarinet. Although the credit for the adaptation goes to Klosé and Buffet, Böhm's name was given to the key system because it was based on the one used for the flute. The current Böhm key system consists of generally 6 rings, on the thumb, 1st, 2nd, 4th, 5th, and 6th holes, and a register key just above the thumb hole, easily accessible with the thumb. Above the 1st hole, there is a key that lifts two covers, creating the note A in the throat register (the high part of the low register) of the clarinet. A key at the side of the instrument at the same height as the A key lifts only one of the two covers, producing G♯, a semitone lower. The A key can be used in conjunction solely with the register key to produce A♯/B♭. The clarinet has its roots in the early single-reed instruments or hornpipes used in Ancient Greece, Ancient Egypt, the Middle East, and Europe since the Middle Ages, such as the albogue, alboka, and double clarinet. The modern clarinet developed from a Baroque instrument called the chalumeau. This instrument was similar to a recorder, but with a single-reed mouthpiece and a cylindrical bore. Lacking a register key, it was played mainly in its fundamental register, with a limited range of about one and a half octaves. It had eight finger holes, like a recorder, and two keys for its two highest notes. 
At this time, contrary to modern practice, the reed was placed in contact with the upper lip. Around the turn of the 18th century, the chalumeau was modified by converting one of its keys into a register key to produce the first clarinet. This development is usually attributed to German instrument maker Johann Christoph Denner, though some have suggested his son Jacob Denner was the inventor. This instrument played well in the middle register with a loud, shrill sound, so it was given the name "clarinetto", meaning "little trumpet" (from "clarino" + "-etto"). Early clarinets did not play well in the lower register, so players continued to use the chalumeau for low notes. As clarinets improved, the chalumeau fell into disuse, and these notes became known as the "chalumeau register". Original Denner clarinets had two keys and could play a chromatic scale, but various makers added more keys to get improved tuning, easier fingerings, and a slightly larger range. The classical clarinet of Mozart's day typically had eight finger holes and five keys. Clarinets were soon accepted into orchestras. Later models had a mellower tone than the originals. Mozart (d. 1791) liked the sound of the clarinet (he considered its tone the closest in quality to the human voice) and wrote numerous pieces for the instrument, and by the time of Beethoven (c. 1800–1820), the clarinet was a standard fixture in the orchestra. The next major development in the history of the clarinet was the invention of the modern pad. Because early clarinets used felt pads to cover the tone holes, they leaked air. This required pad-covered holes to be kept to a minimum, restricting the number of notes the clarinet could play with good tone. In 1812, Iwan Müller, a Baltic German clarinetist and inventor, developed a new type of pad that was covered in leather or fish bladder. It was airtight and let makers increase the number of pad-covered holes.
Müller designed a new type of clarinet with seven finger holes and thirteen keys. This allowed the instrument to play in any key with near-equal ease. Over the course of the 19th century, makers made many enhancements to Müller's clarinet, such as the Albert system and the Baermann system, all keeping the same basic design. Modern instruments may also have cork or synthetic pads. The final development in the modern design of the clarinet used in most of the world today was introduced by Hyacinthe Klosé in 1839. He devised a different arrangement of keys and finger holes, which allowed simpler fingering. It was inspired by the Boehm system developed for flutes by Theobald Böhm. Klosé was so impressed by Böhm's invention that he named his own system for clarinets the Boehm system, although it is different from the one used on flutes. This new system was slow to gain popularity but gradually became the standard, and today the Boehm system is used everywhere in the world except Germany and Austria. These countries still use a direct descendant of the Mueller clarinet known as the Oehler system clarinet. Also, some contemporary Dixieland players continue to use Albert system clarinets. Other key systems have been developed, many built around modifications to the basic Böhm system: Full Böhm, Mazzeo, McIntyre, Benade NX, and the Reform Boehm system, for example. Each of these addressed—and often improved—issues of particular "weak" tones or simplified awkward fingerings, but none has caught on widely among players, and the Boehm system remains the standard to date. The modern orchestral standard of using soprano clarinets in B♭ and A has to do partly with the history of the instrument and partly with acoustics, aesthetics, and economics. Before about 1800, due to the lack of airtight pads "(see History)", practical woodwinds could have only a few keys to control accidentals (notes outside their diatonic home scales).
The low (chalumeau) register of the clarinet spans a twelfth (an octave plus a perfect fifth), so the clarinet needs keys/holes to produce all nineteen notes in this range. This involves more keywork than on instruments that "overblow" at the octave—oboes, flutes, bassoons, and saxophones, for example, which need only twelve notes before overblowing. Clarinets with few keys therefore cannot easily play chromatically, limiting any such instrument to a few closely related keys. For example, an eighteenth-century clarinet in C could be played in F, C, and G (and their relative minors) with good intonation, but with progressive difficulty and poorer intonation as the key moved away from this range. In contrast, for octave-overblowing instruments, an instrument in C with few keys could much more readily be played in any key. This problem was overcome by using three clarinets—in A, B♭, and C—so that early 19th-century music, which rarely strayed into the remote keys (five or six sharps or flats), could be played as follows: music in 5 to 2 sharps (B major to D major concert pitch) on the A clarinet (D major to F major for the player), music in 1 sharp to 1 flat (G to F) on the C clarinet, and music in 2 flats to 4 flats (B♭ to A♭) on the B♭ clarinet (C to B♭ for the clarinetist). Difficult key signatures and numerous accidentals were thus largely avoided. With the invention of the airtight pad, and as key technology improved and more keys were added to woodwinds, the need for clarinets in multiple keys was reduced. However, the use of multiple instruments in different keys persisted, with the three instruments in C, B♭, and A all used as specified by the composer. The lower-pitched clarinets sound "mellower" (less bright), and the C clarinet—being the highest and therefore brightest of the three—fell out of favour as the other two could cover its range and their sound was considered better.
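The concert-to-written key mapping described above is a shift along the circle of fifths: the B♭ clarinet's written key is two positions sharper than concert pitch, and the A clarinet's is three positions flatter. A minimal sketch (the function and table are illustrative, not from the article, though the offsets follow the standard transpositions it describes):

```python
# Sketch of the key-signature arithmetic for transposing clarinets.
# A transposing instrument shifts the written key along the circle of
# fifths relative to concert pitch: +2 for the Bb clarinet (sounds a
# major 2nd below written), -3 for the A clarinet (minor 3rd below).

MAJOR_KEYS = {  # circle-of-fifths position -> major key name
    -5: "Db", -4: "Ab", -3: "Eb", -2: "Bb", -1: "F",
    0: "C", 1: "G", 2: "D", 3: "A", 4: "E", 5: "B", 6: "F#",
}
FIFTHS = {name: pos for pos, name in MAJOR_KEYS.items()}

CLARINET_OFFSET = {"C": 0, "Bb": +2, "A": -3}

def written_key(concert_key: str, clarinet: str) -> str:
    """Return the written major key for a given concert major key."""
    return MAJOR_KEYS[FIFTHS[concert_key] + CLARINET_OFFSET[clarinet]]

# Examples from the text: concert B major (5 sharps) is written D major
# on the A clarinet; concert Ab major is written Bb on the Bb clarinet.
print(written_key("B", "A"))    # -> D
print(written_key("Ab", "Bb"))  # -> Bb
```

This reproduces the ranges in the text: concert B–D major becomes written D–F major for the A clarinetist, and concert B♭–A♭ becomes written C–B♭ on the B♭ instrument.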
While the clarinet in C began to fall out of general use around 1850, some composers continued to write C parts after this date, e.g., Bizet's Symphony in C (1855), Tchaikovsky's Symphony No. 2 (1872), Smetana's overture to "The Bartered Bride" (1866) and "Má Vlast" (1874), Dvořák's "Slavonic Dance" Op. 46, No. 1 (1878), Brahms' Symphony No. 4 (1885), and Mahler's Symphony No. 6 (1906), and Richard Strauss deliberately reintroduced it to take advantage of its brighter tone, as in "Der Rosenkavalier" (1911). While technical improvements and an equal-tempered scale reduced the need for two clarinets, the technical difficulty of playing in remote keys persisted, and the A clarinet has thus remained a standard orchestral instrument. In addition, by the late 19th century, the orchestral clarinet repertoire contained so much music for clarinet in A that the disuse of this instrument was not practical. Attempts were made to standardise on the B♭ instrument between 1930 and 1950 (e.g., tutors recommended learning routine transposition of orchestral A parts on the B♭ clarinet, including solos written for A clarinet, and some manufacturers provided a low E♭ on the B♭ to match the range of the A), but this failed in the orchestral sphere. Similarly, there have been E♭ and D instruments in the upper soprano range, B♭, A, and C instruments in the bass range, and so forth; but over time the E♭ and B♭ instruments have become predominant. The B♭ instrument remains dominant in concert bands and jazz. B♭ and C instruments are used in some ethnic traditions, such as klezmer. In classical music, clarinets are part of standard orchestral and concert band instrumentation. The orchestra frequently includes two clarinetists playing individual parts—each player is usually equipped with a pair of standard clarinets in B♭ and A, and clarinet parts commonly alternate between B♭ and A instruments several times over the course of a piece, or less commonly, a movement (e.g., the 1st movement of Brahms' 3rd symphony).
Clarinet sections grew larger during the last few decades of the 19th century, often employing a third clarinetist, an E♭ clarinet, or a bass clarinet. In the 20th century, composers such as Igor Stravinsky, Richard Strauss, Gustav Mahler, and Olivier Messiaen enlarged the clarinet section on occasion to up to nine players, employing many different clarinets including the E♭ or D soprano clarinets, basset horn, alto clarinet, bass clarinet, and/or contrabass clarinet. In concert bands, clarinets are an important part of the instrumentation. The E♭ clarinet, B♭ clarinet, alto clarinet, bass clarinet, and contra-alto/contrabass clarinet are commonly used in concert bands. Concert bands generally have multiple B♭ clarinets; there are commonly three B♭ clarinet parts with 2–3 players per part. There is generally only one player per part on the other clarinets. There are not always E♭ clarinet, alto clarinet, and contra-alto/contrabass clarinet parts in concert band music, but all three are quite common. This practice of using a variety of clarinets to achieve coloristic variety was common in 20th-century classical music and continues today. However, many clarinetists and conductors prefer to play parts originally written for obscure instruments on B♭ or E♭ clarinets, which are often of better quality and more prevalent and accessible. The clarinet is widely used as a solo instrument. The relatively late evolution of the clarinet (when compared to other orchestral woodwinds) has left solo repertoire from the Classical period and later, but few works from the Baroque era. Many clarinet concertos have been written to showcase the instrument, with the concerti by Mozart, Copland, and Weber being well known. Many works of chamber music have also been written for the clarinet in a variety of common combinations. The clarinet was originally a central instrument in jazz, beginning with the New Orleans players in the 1910s.
It remained a signature instrument of jazz music through much of the big band era into the 1940s. American players Alphonse Picou, Larry Shields, Jimmie Noone, Johnny Dodds, and Sidney Bechet were all pioneers of the instrument in jazz. The B♭ soprano was the most common instrument, but a few early jazz musicians such as Louis Nelson Delisle and Alcide Nunez preferred the C soprano, and many New Orleans jazz brass bands have used the E♭ soprano. Swing clarinetists such as Benny Goodman, Artie Shaw, and Woody Herman led successful big bands and smaller groups from the 1930s onward. Duke Ellington, active from the 1920s to the 1970s, used the clarinet as a lead instrument in his works, with several players of the instrument (Barney Bigard, Jimmy Hamilton, and Russell Procope) spending significant portions of their careers in his orchestra. Harry Carney, primarily Ellington's baritone saxophonist, occasionally doubled on bass clarinet. Meanwhile, Pee Wee Russell had a long and successful career in small groups. With the decline of the big bands' popularity in the late 1940s, the clarinet faded from its prominent position in jazz. By that time, an interest in Dixieland or traditional New Orleans jazz had revived; Pete Fountain was one of the best-known performers in this genre. Bob Wilber, active since the 1950s, is a more eclectic jazz clarinetist, playing in several classic jazz styles. During the 1950s and 1960s, Britain underwent a surge in the popularity of what was termed 'trad jazz'. In 1956 the British clarinetist Acker Bilk founded his own ensemble. Several singles recorded by Bilk reached the British pop charts, including the ballad "Stranger on the Shore". The clarinet's place in the jazz ensemble was usurped by the saxophone, which projects a more powerful sound and uses a less complicated fingering system. The requirement for increased speed of execution in modern jazz also did not favour the clarinet, but the clarinet did not entirely disappear.
The clarinetist Stan Hasselgård made a transition from swing to bebop in the mid-1940s. A few players such as Buddy DeFranco, Tony Scott, and Jimmy Giuffre emerged during the 1950s playing bebop or other styles. A little later, Eric Dolphy (on bass clarinet), Perry Robinson, John Carter, Theo Jörgensmann, and others used the clarinet in free jazz. The French composer and clarinetist Jean-Christian Michel initiated a jazz-classical crossover on the clarinet with the drummer Kenny Clarke. In the U.S., the prominent players on the instrument since the 1980s have included Eddie Daniels, Don Byron, Marty Ehrlich, Ken Peplowski, and others playing the clarinet in more contemporary contexts. The clarinet is uncommon, but not unheard of, in rock music. Jerry Martini played clarinet on Sly and the Family Stone's 1968 hit, "Dance to the Music"; Don Byron, a founder of the Black Rock Coalition who was a member of hard rock guitarist Vernon Reid's band, plays clarinet on the "Mistaken Identity" album (1996). The Beatles, Pink Floyd, Radiohead, Aerosmith, Billy Joel, and Tom Waits have also all used clarinet on occasion. A clarinet is prominently featured in two different solos in "Breakfast in America", the title song from the Supertramp album of the same name. Clarinets feature prominently in klezmer music, which entails a distinctive style of playing. The use of quarter-tones requires a different embouchure. Some klezmer musicians prefer Albert system clarinets. The popular Brazilian music styles of choro and samba use the clarinet. Prominent contemporary players include Paulo Moura, Naylor 'Proveta' Azevedo, Paulo Sérgio dos Santos, and Cuban-born Paquito D'Rivera. Although adopted into Albanian folklore relatively recently (around the 18th century), the clarinet, or "gërneta" as it is called, is one of the most important instruments in Albania, especially in the central and southern areas.
The clarinet plays a crucial role in "saze" (folk) ensembles that perform at weddings and other celebrations. The "kaba", an instrumental Albanian isopolyphony included in UNESCO's intangible cultural heritage list, is characteristic of these ensembles. Prominent Albanian clarinet players include Selim Leskoviku, Gaqo Lena, Remzi Lela (Çobani), Laver Bariu (Ustai), and Nevruz Nure (Lulushi i Korçës). The clarinet is also prominent in Bulgarian wedding music, an offshoot of Roma traditional music; Ivo Papazov is a well-known clarinetist in this genre. In Moravian dulcimer bands, the clarinet is usually the only wind instrument among string instruments. In old-town folk music in North Macedonia (called čalgija ("чалгија")), the clarinet has the most important role in wedding music; clarinet solos mark the high point of dancing euphoria. One of the most renowned Macedonian clarinet players is Tale Ognenovski, who gained worldwide fame for his virtuosity. In Greece, the clarinet (usually referred to as "κλαρίνο"—"clarino") is prominent in traditional music, especially in central, northwest, and northern Greece (Thessaly, Epirus, and Macedonia). The double-reed zurna was the dominant woodwind instrument before the clarinet arrived in the country, although many Greeks regard the clarinet as a native instrument. Traditional dance music, wedding music, and laments include a clarinet soloist and quite often improvisations. Petroloukas Chalkias is a famous clarinetist in this genre. The instrument is equally famous in Turkey, especially the lower-pitched clarinet in G. The western European clarinet crossed via Turkey to Arabic music, where it is widely used in Arabic pop, especially if the intention of the arranger is to imitate the Turkish style. Also in Turkish folk music, a clarinet-like woodwind instrument, the sipsi, is used.
However, it is far rarer than the soprano clarinet and is mainly limited to folk music of the Aegean Region. Groups of clarinets playing together have become increasingly popular among clarinet enthusiasts in recent years. Common forms include clarinet choirs and quartets, which often play arrangements of both classical and popular music, in addition to a body of literature specially written for a combination of clarinets by composers such as Arnold Cooke, Alfred Uhl, Lucien Caillet, and Václav Nelhýbel. There is a family of many differently pitched clarinet types, some of which are very rare, spanning a wide range from highest to lowest; EEE♭ and BBB♭ octocontra-alto and octocontrabass clarinets have also been built. There have also been soprano clarinets in C, A, and B♭ with curved barrels and bells marketed under the names saxonette, claribel, and clariphon.
https://en.wikipedia.org/wiki?curid=6433
Chojnów Chojnów () (Silesian German: Hoyn) is a small town in Legnica County, Lower Silesian Voivodeship, in south-western Poland. It is located on the Skora river, a tributary of the Kaczawa. Chojnów is the administrative seat of the rural gmina called Gmina Chojnów, although the town is not part of its territory and forms a separate urban gmina. It has 13,355 inhabitants. Chojnów is located west of Legnica, east of Bolesławiec and north of Złotoryja, near the A4 motorway. It has railroad connections to Bolesławiec and Legnica. The coat of arms of Chojnów is a blue escutcheon bearing a white tower with three bastions; the central tower has two windows and each side tower one. A moon is placed to the right of the towers and a sun to the left, and in the gate is the Silesian Eagle on a yellow background. The motto of Chojnów is "Friendly City". Chojnów is located in the central-western part of the Lower Silesia region. The Skora ("Leather") River flows through the town in a westerly direction. Agricultural land makes up 41% of the town's area. Chojnów has road and rail connections with the major cities of the country, and the A4 autostrada runs south of the town. To the south of the town lies the surrounding Chojnowska Plain. The town is first mentioned in a Latin mediaeval document issued in Wrocław on February 26, 1253 by the Silesian Duke Henry III, in which the town appears under the name Honowo, possibly related to the name of nearby Hainau Island. The name is of Polish origin, and in more modern records from the 19th century the Polish name appears as "Hajnów", while "Haynau" is the Germanized version of the original Polish name. The settlement of "Haynow" was mentioned in a 1272 deed. It was already called a "civitas" in a 1288 document issued by the Piast duke Henry V of Legnica, and officially received town privileges in 1333 from Duke Bolesław III the Generous.
It was part of the duchies of Wrocław, Głogów and Legnica of fragmented Poland and remained under the rule of the Piast dynasty until 1675. Its population was predominantly Polish. In 1292 the first castellan of Chojnów, Bronisław Budziwojowic, was mentioned. In the 14th and early 15th centuries Chojnów was granted various privileges, including staple right and gold mining right, thanks to which it flourished. The town survived the Hussites, who burned almost the entire town center and castle, but it quickly recovered its former glory. Chojnów experienced its largest boom in the 16th century; by the end of that century, however, it began to decline due to fires and an epidemic, which claimed many victims in 1613. During the Thirty Years' War (1618–1648) there was another epidemic, and the town was occupied by the Austrians and Swedes; in 1642 it was also plundered by the Swedes. It remained part of the Piast-ruled Duchy of Legnica until its dissolution in 1675, when it was incorporated into Habsburg-ruled Bohemia. In the 18th century, cloth production developed and a clothmaking school was established in the town. One of two main routes connecting Warsaw and Dresden ran through the town in the 18th century, and Kings Augustus II the Strong and Augustus III of Poland traveled that route numerous times. In 1740 the town was captured by Prussia and subsequently annexed in 1742. In 1804 it suffered a flood. During the Napoleonic Wars there were more epidemics. In 1813 in Chojnów, Napoleon Bonaparte issued instructions regarding the reorganization of the 8th Polish Corps of Prince Józef Poniatowski. The event is commemorated by a plaque on the facade of the Piast Castle. A railway line was opened in the 19th century. A sewer system, gas lighting, a newspaper, and a hospital soon followed as the town's economy improved. The town was not spared in World War II: 30% of it was destroyed on February 10, 1945, when Soviet Red Army troops took the abandoned town.
After World War II and the implementation of the Oder-Neisse line in 1945, the town passed to the Republic of Poland. It was repopulated by Poles expelled from former eastern Poland annexed by the Soviet Union. In 1946 it was renamed "Chojnów", a more modern version of the old Polish "Hajnów". Greeks, refugees of the Greek Civil War, also settled in Chojnów. Chojnów is an industrial and agricultural town. Among local products are paper, agricultural machinery, chains, metal furniture for hospitals, equipment for the meat industry, beer, wine, leather clothing, and clothing for infants, children and adults. Among the interesting monuments of Chojnów are the 13th-century castle of the Dukes of Legnica (currently used as a museum), two old churches, the "Baszta Tkaczy" ("Weavers' Tower") and preserved fragments of the city walls. The biggest green area in Chojnów is the small forest "Park Piastowski" ("Piast's Park"), named after the Piast dynasty. Wild animals that can be found in the Chojnów area include roe deer, foxes, rabbits, and feral domestic animals, especially cats. Every year in the first days of June, the "Days of Chojnów" ("Dni Chojnowa") are celebrated. The all-Poland "Masters" bike race has been organized yearly in Chojnów for the past few years. Chojnów has a municipal sports and recreation center, formed in 2008, which holds various events, festivals, reviews, exhibitions, and competitions. The regional museum is housed in the old Piast-era castle. The collections include tiles, relics, and the castle garden. Next to the museum there is a municipal library. The amphitheatre stands in the Śródmiejski Park, near the Town Hall. The local government-run newspaper, "Gazeta Chojnowska", has been published biweekly since 1992; editions have a run of 900 copies, and it is one of the oldest newspapers in Poland issued without interruption. "Chojnów", the town's official newspaper, has a run of 750 copies.
In Chojnów, there are two kindergartens, two elementary schools and two middle schools. Chojnów is in the Catholic deanery of Chojnów and has two parishes, Immaculate Conception of the Blessed Virgin Mary and the Holy Apostles Peter and Paul. Both parishes have active congregations. There are also two congregations of Jehovah's Witnesses. Chojnów is twinned with:
https://en.wikipedia.org/wiki?curid=6434
Chamaeleon Chamaeleon () is a small constellation in the southern sky. It is named after the chameleon, a kind of lizard. It was first defined in the 16th century. Chamaeleon was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius and Jodocus Hondius. Johann Bayer was the first uranographer to put Chamaeleon in a celestial atlas. It was one of many constellations created by European explorers in the 15th and 16th centuries out of unfamiliar Southern Hemisphere stars. There are four bright stars in Chamaeleon that form a compact diamond shape approximately 10 degrees from the South Celestial Pole and about 15 degrees south of Acrux, along the axis formed by Acrux and Gamma Crucis. Alpha Chamaeleontis is a white-hued star of magnitude 4.1, 63 light-years from Earth. Beta Chamaeleontis is a blue-white hued star of magnitude 4.2, 271 light-years from Earth. Gamma Chamaeleontis is a red-hued giant star of magnitude 4.1, 413 light-years from Earth. The other bright star in Chamaeleon is Delta Chamaeleontis, a wide double star. The brighter component is Delta2 Chamaeleontis, a blue-hued star of magnitude 4.4; Delta1 Chamaeleontis, the dimmer component, is an orange-hued giant star of magnitude 5.5. They both lie about 350 light years away. Chamaeleon is also the location of Cha 110913, a unique dwarf star or proto-solar system. In 1999, a nearby open cluster was discovered centered on the star η Chamaeleontis. The cluster, known as either the Eta Chamaeleontis cluster or Mamajek 1, is 8 million years old and lies 316 light years from Earth. The constellation contains a number of molecular clouds (the Chamaeleon dark clouds) that are forming low-mass T Tauri stars. The cloud complex lies some 400 to 600 light years from Earth, and contains tens of thousands of solar masses of gas and dust.
The most prominent cluster of T Tauri stars and young B-type stars is in the Chamaeleon I cloud, and is associated with the reflection nebula IC 2631. Chamaeleon contains one planetary nebula, NGC 3195, which is fairly faint. It appears in a telescope at about the same apparent size as Jupiter. In Chinese astronomy, the stars that form Chamaeleon were classified as the Little Dipper (小斗, "Xiǎodǒu") among the Southern Asterisms (近南極星區, "Jìnnánjíxīngqū") by Xu Guangqi. Chamaeleon is sometimes also called the Frying Pan in Australia.
https://en.wikipedia.org/wiki?curid=6436
Cholesterol Cholesterol (from the Ancient Greek "chole-" (bile) and "stereos" (solid), followed by the chemical suffix "-ol" for an alcohol) is an organic molecule. It is a sterol (or modified steroid), a type of lipid. Cholesterol is biosynthesized by all animal cells and is an essential structural component of animal cell membranes. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acids, and vitamin D. Cholesterol is the principal sterol synthesized by all animals. In vertebrates, hepatic cells typically produce the greatest amounts. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as "Mycoplasma", which require cholesterol for growth. François Poulletier de la Salle first identified cholesterol in solid form in gallstones in 1769. However, it was not until 1815 that chemist Michel Eugène Chevreul named the compound "cholesterine". Cholesterol is essential for all animal life, with each cell capable of synthesizing it by way of a complex 37-step process. This begins with the mevalonate or HMG-CoA reductase pathway, the target of statin drugs, which encompasses the first 18 steps. This is followed by 19 additional steps to convert the resulting lanosterol into cholesterol. A human male weighing 68 kg (150 lb) normally synthesizes about 1 gram (1,000 mg) of cholesterol per day, and his body contains about 35 g, mostly contained within the cell membranes. Typical daily cholesterol dietary intake for a man in the United States is 307 mg. Most ingested cholesterol is esterified, which causes it to be poorly absorbed by the gut. The body also compensates for absorption of ingested cholesterol by reducing its own cholesterol synthesis. For these reasons, cholesterol in food, seven to ten hours after ingestion, has little, if any, effect on concentrations of cholesterol in the blood.
However, during the first seven hours after ingestion of cholesterol, as absorbed fats are being distributed around the body within extracellular water by the various lipoproteins (which transport all fats in the water outside cells), the concentrations increase. Plants do not make cholesterol but manufacture phytosterols, chemically similar substances that can compete with cholesterol for reabsorption in the intestinal tract, thus potentially reducing cholesterol reabsorption. When intestinal lining cells absorb phytosterols in place of cholesterol, they usually excrete the phytosterol molecules back into the GI tract, an important protective mechanism. The intake of naturally occurring phytosterols, which encompass plant sterols and stanols, ranges between ≈200–300 mg/day depending on eating habits. Specially designed vegetarian experimental diets have been produced yielding upwards of 700 mg/day. Cholesterol, given that it composes about 30% of all animal cell membranes, is required to build and maintain membranes and modulates membrane fluidity over the range of physiological temperatures. The hydroxyl group of each cholesterol molecule interacts with water molecules surrounding the membrane, as do the polar heads of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty-acid chains of the other lipids. Through the interaction with the phospholipid fatty-acid chains, cholesterol increases membrane packing, which both alters membrane fluidity and maintains membrane integrity so that animal cells do not need to build cell walls (like plants and most bacteria). The membrane remains stable and durable without being rigid, allowing animal cells to change shape and animals to move.
The structure of the tetracyclic ring of cholesterol contributes to the fluidity of the cell membrane, as the molecule is in a "trans" conformation making all but the side chain of cholesterol rigid and planar. In this structural role, cholesterol also reduces the permeability of the plasma membrane to neutral solutes, hydrogen ions, and sodium ions. Within the cell membrane, cholesterol also functions in intracellular transport, cell signaling, and nerve conduction. Cholesterol is essential for the structure and function of invaginated caveolae and clathrin-coated pits, including caveola-dependent and clathrin-dependent endocytosis. The role of cholesterol in endocytosis of these types can be investigated by using methyl beta cyclodextrin (MβCD) to remove cholesterol from the plasma membrane. Cholesterol regulates the biological process of substrate presentation and the enzymes that use substrate presentation as a mechanism of their activation. Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated, causing it to traffic to cholesterol-dependent lipid domains sometimes called "lipid rafts". The substrate of phospholipase D is phosphatidylcholine (PC), which is unsaturated and is of low abundance in lipid rafts. PC localizes to the disordered region of the cell along with the polyunsaturated lipid phosphatidylinositol 4,5-bisphosphate (PIP2). PLD2 has a PIP2 binding domain. When PIP2 concentration in the membrane increases, PLD2 leaves the cholesterol-dependent domains and binds to PIP2, where it then gains access to its substrate PC and commences catalysis based on substrate presentation. Cholesterol is also implicated in cell signaling processes, assisting in the formation of lipid rafts in the plasma membrane, which brings receptor proteins in close proximity with high concentrations of second messenger molecules.
In multiple layers, cholesterol and phospholipids, both electrical insulators, can facilitate the transmission of electrical impulses along nerve tissue. For many neuron fibers, a myelin sheath, rich in cholesterol since it is derived from compacted layers of Schwann cell membrane, provides insulation for more efficient conduction of impulses. Demyelination (loss of this myelin sheath) is believed to be part of the basis for multiple sclerosis. Cholesterol binds to and affects the gating of a number of ion channels such as the nicotinic acetylcholine receptor, GABAA receptor, and the inward-rectifier potassium channel. Cholesterol also activates the estrogen-related receptor alpha (ERRα), and may be the endogenous ligand for the receptor. The constitutively active nature of the receptor may be explained by the fact that cholesterol is ubiquitous in the body. Inhibition of ERRα signaling by reduction of cholesterol production has been identified as a key mediator of the effects of statins and bisphosphonates on bone, muscle, and macrophages. On the basis of these findings, it has been suggested that ERRα should be de-orphanized and classified as a receptor for cholesterol. Within cells, cholesterol is also a precursor molecule for several biochemical pathways. For example, it is the precursor molecule for the synthesis of vitamin D in the calcium metabolism and all steroid hormones, including the adrenal gland hormones cortisol and aldosterone, as well as the sex hormones progesterone, estrogens, and testosterone, and their derivatives. Cholesterol is recycled in the body. The liver excretes cholesterol into biliary fluids, which are then stored in the gallbladder, which then excretes it in a non-esterified form (via bile) into the digestive tract. Typically, about 50% of the excreted cholesterol is reabsorbed by the small intestine back into the bloodstream.
All animal cells manufacture cholesterol, for both membrane structure and other uses, with relative production rates varying by cell type and organ function. About 80% of total daily cholesterol production occurs in the liver and the intestines; other sites of higher synthesis rates include the adrenal glands and reproductive organs. Synthesis within the body starts with the mevalonate pathway, where two molecules of acetyl CoA condense to form acetoacetyl-CoA. This is followed by a second condensation between acetyl CoA and acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl CoA (HMG-CoA). This molecule is then reduced to mevalonate by the enzyme HMG-CoA reductase. Production of mevalonate is the rate-limiting and irreversible step in cholesterol synthesis and is the site of action for statins (a class of cholesterol-lowering drugs). Mevalonate is finally converted to isopentenyl pyrophosphate (IPP) through two phosphorylation steps and one decarboxylation step that requires ATP. Three molecules of isopentenyl pyrophosphate condense to form farnesyl pyrophosphate through the action of geranyl transferase. Two molecules of farnesyl pyrophosphate then condense to form squalene by the action of squalene synthase in the endoplasmic reticulum. Oxidosqualene cyclase then cyclizes squalene to form lanosterol. Finally, lanosterol is converted to cholesterol through a 19-step process. These final 19 steps involve NADPH and oxygen to help oxidize methyl groups for removal of carbons, mutases to move alkene groups, and NADH to help reduce ketones. Konrad Bloch and Feodor Lynen shared the Nobel Prize in Physiology or Medicine in 1964 for their discoveries concerning some of the mechanisms and methods of regulation of cholesterol and fatty acid metabolism. Biosynthesis of cholesterol is directly regulated by the cholesterol levels present, though the homeostatic mechanisms involved are only partly understood. 
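The route just described can be summarized as a simple ordered data structure. This is an illustrative sketch only: the substrate, product, and enzyme names are taken from the text above, and steps whose enzymes the text does not name are marked None.

```python
# Sketch of the cholesterol biosynthesis route described in the text.
# Each step records its input, output, and (where the text names one) its enzyme.
MEVALONATE_PATHWAY = [
    {"from": "2 x acetyl-CoA",                "to": "acetoacetyl-CoA",           "enzyme": None},
    {"from": "acetyl-CoA + acetoacetyl-CoA",  "to": "HMG-CoA",                   "enzyme": None},
    {"from": "HMG-CoA",                       "to": "mevalonate",
     "enzyme": "HMG-CoA reductase", "rate_limiting": True},  # site of action of statins
    {"from": "mevalonate",                    "to": "isopentenyl pyrophosphate", "enzyme": None},
    {"from": "3 x isopentenyl pyrophosphate", "to": "farnesyl pyrophosphate",    "enzyme": "geranyl transferase"},
    {"from": "2 x farnesyl pyrophosphate",    "to": "squalene",                  "enzyme": "squalene synthase"},
    {"from": "squalene",                      "to": "lanosterol",                "enzyme": "oxidosqualene cyclase"},
    {"from": "lanosterol",                    "to": "cholesterol",               "enzyme": None},  # ~19 further steps
]

def rate_limiting_step(pathway):
    """Return the first step flagged as rate-limiting (mevalonate production)."""
    return next(step for step in pathway if step.get("rate_limiting"))
```

Representing the route this way makes the regulatory point explicit: statins act precisely at the step returned by `rate_limiting_step`.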
A higher intake from food leads to a net decrease in endogenous production, whereas lower intake from food has the opposite effect. The main regulatory mechanism is the sensing of intracellular cholesterol in the endoplasmic reticulum by the SREBP proteins (sterol regulatory element-binding proteins 1 and 2). In the presence of cholesterol, SREBP is bound to two other proteins: SCAP (SREBP cleavage-activating protein) and INSIG-1. When cholesterol levels fall, INSIG-1 dissociates from the SREBP-SCAP complex, which allows the complex to migrate to the Golgi apparatus. Here SREBP is cleaved by S1P and S2P (site-1 protease and site-2 protease), two enzymes that are activated by SCAP when cholesterol levels are low. The cleaved SREBP then migrates to the nucleus, and acts as a transcription factor to bind to the sterol regulatory element (SRE), which stimulates the transcription of many genes. Among these are the low-density lipoprotein (LDL) receptor and HMG-CoA reductase. The LDL receptor scavenges circulating LDL from the bloodstream, whereas HMG-CoA reductase leads to an increase of endogenous production of cholesterol. A large part of this signaling pathway was clarified by Dr. Michael S. Brown and Dr. Joseph L. Goldstein in the 1970s. In 1985, they received the Nobel Prize in Physiology or Medicine for their work. Their subsequent work shows how the SREBP pathway regulates expression of many genes that control lipid formation and metabolism and body fuel allocation. Cholesterol synthesis can also be turned off when cholesterol levels are high. HMG-CoA reductase contains both a cytosolic domain (responsible for its catalytic function) and a membrane domain. The membrane domain senses signals for its degradation. Increasing concentrations of cholesterol (and other sterols) cause a change in this domain's oligomerization state, which makes it more susceptible to destruction by the proteasome. 
This enzyme's activity can also be reduced by phosphorylation by an AMP-activated protein kinase. Because this kinase is activated by AMP, which is produced when ATP is hydrolyzed, it follows that cholesterol synthesis is halted when ATP levels are low. As an isolated molecule, cholesterol is only minimally soluble in water. Because of this, it dissolves in blood at exceedingly small concentrations. To be transported effectively, cholesterol is instead packaged within lipoproteins, complex discoidal particles with exterior amphiphilic proteins and lipids, whose outward-facing surfaces are water-soluble and inward-facing surfaces are lipid-soluble. This allows it to travel through the blood via emulsification. Unbound cholesterol, being amphipathic, is transported in the monolayer surface of the lipoprotein particle along with phospholipids and proteins. Cholesterol esters, in which cholesterol is bound to a fatty acid, are instead transported within the fatty hydrophobic core of the lipoprotein, along with triglyceride. There are several types of lipoproteins in the blood. In order of increasing density, they are chylomicrons, very-low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). Lower protein/lipid ratios make for less dense lipoproteins. Cholesterol within different lipoproteins is identical, although some is carried as its native "free" alcohol form (the cholesterol-OH group facing the water surrounding the particles), while the rest is carried as fatty acyl esters, known also as cholesterol esters, within the particles. Lipoprotein particles are organized by complex apolipoproteins, typically 80–100 different proteins per particle, which can be recognized and bound by specific receptors on cell membranes, directing their lipid payload into the specific cells and tissues currently ingesting these fat transport particles. 
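The density ordering described above can be encoded in a small sketch. The class names and their order come straight from the text; the helper function is hypothetical.

```python
# Lipoprotein classes in order of increasing density, as listed in the text.
# Denser classes have higher protein/lipid ratios.
LIPOPROTEINS_BY_DENSITY = [
    "chylomicron",  # least dense
    "VLDL",         # very-low-density lipoprotein
    "IDL",          # intermediate-density lipoprotein
    "LDL",          # low-density lipoprotein
    "HDL",          # high-density lipoprotein (densest)
]

def denser(a, b):
    """True if lipoprotein class `a` is denser than class `b`."""
    return LIPOPROTEINS_BY_DENSITY.index(a) > LIPOPROTEINS_BY_DENSITY.index(b)
```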
These surface receptors serve as unique molecular signatures, which then help determine fat distribution and delivery throughout the body. Chylomicrons, the least dense cholesterol transport molecules, contain apolipoprotein B-48, apolipoprotein C, and apolipoprotein E (the principal cholesterol carrier in the brain) in their shells. Chylomicrons carry fats from the intestine to muscle and other tissues in need of fatty acids for energy or fat production. Unused cholesterol remains in the more cholesterol-rich chylomicron remnants, which are taken up from the bloodstream by the liver. VLDL molecules are produced by the liver from triacylglycerol and cholesterol that was not used in the synthesis of bile acids. These molecules contain apolipoprotein B100 and apolipoprotein E in their shells, and can be degraded by lipoprotein lipase on the artery wall to IDL. This arterial wall cleavage allows absorption of triacylglycerol and increases the concentration of circulating cholesterol. IDL molecules are then consumed in two processes: half is metabolized by HTGL and taken up by the LDL receptor on the liver cell surfaces, while the other half continues to lose triacylglycerols in the bloodstream until they become cholesterol-laden LDL particles. LDL particles are the major blood cholesterol carriers. Each one contains approximately 1,500 molecules of cholesterol ester. LDL molecule shells contain just one molecule of apolipoprotein B100, recognized by LDL receptors in peripheral tissues. Upon binding of apolipoprotein B100, many LDL receptors concentrate in clathrin-coated pits. LDL and its receptor are then internalized into vesicles within the cell via endocytosis. These vesicles then fuse with a lysosome, where the lysosomal acid lipase enzyme hydrolyzes the cholesterol esters. The cholesterol can then be used for membrane biosynthesis or esterified and stored within the cell, so as not to interfere with the cell membranes. 
LDL receptors are used up during cholesterol absorption, and their synthesis is regulated by SREBP, the same protein that controls the synthesis of cholesterol "de novo", according to its presence inside the cell. A cell with abundant cholesterol will have its LDL receptor synthesis blocked, to prevent new cholesterol in LDL molecules from being taken up. Conversely, LDL receptor synthesis proceeds when a cell is deficient in cholesterol. When this process becomes unregulated, LDL molecules without receptors begin to appear in the blood. These LDL molecules are oxidized and taken up by macrophages, which become engorged and form foam cells. These foam cells often become trapped in the walls of blood vessels and contribute to atherosclerotic plaque formation. Differences in cholesterol homeostasis affect the development of early atherosclerosis (carotid intima-media thickness). These plaques are the main causes of heart attacks, strokes, and other serious medical problems, leading to the association of so-called LDL cholesterol (actually a lipoprotein) with "bad" cholesterol. HDL particles are thought to transport cholesterol back to the liver, either for excretion or for other tissues that synthesize hormones, in a process known as reverse cholesterol transport (RCT). Large numbers of HDL particles correlate with better health outcomes, whereas low numbers of HDL particles are associated with atheromatous disease progression in the arteries. Cholesterol is susceptible to oxidation and easily forms oxygenated derivatives called oxysterols. Three different mechanisms can form these: autoxidation, secondary oxidation to lipid peroxidation, and cholesterol-metabolizing enzyme oxidation. Great interest in oxysterols arose when they were shown to exert inhibitory actions on cholesterol biosynthesis. This finding became known as the "oxysterol hypothesis". 
Additional roles for oxysterols in human physiology include their participation in bile acid biosynthesis, function as transport forms of cholesterol, and regulation of gene transcription. In biochemical experiments, radiolabelled forms of cholesterol, such as tritiated cholesterol, are used. These derivatives undergo degradation upon storage, and it is essential to purify cholesterol prior to use. Cholesterol can be purified using small Sephadex LH-20 columns. Cholesterol is oxidized by the liver into a variety of bile acids. These, in turn, are conjugated with glycine, taurine, glucuronic acid, or sulfate. A mixture of conjugated and nonconjugated bile acids, along with cholesterol itself, is excreted from the liver into the bile. Approximately 95% of the bile acids are reabsorbed from the intestines, and the remainder are lost in the feces. The excretion and reabsorption of bile acids forms the basis of the enterohepatic circulation, which is essential for the digestion and absorption of dietary fats. Under certain circumstances, when more concentrated, as in the gallbladder, cholesterol crystallizes and is the major constituent of most gallstones (lecithin and bilirubin gallstones also occur, but less frequently). Every day, up to 1 g of cholesterol enters the colon. This cholesterol originates from the diet, bile, and desquamated intestinal cells, and can be metabolized by the colonic bacteria. Cholesterol is converted mainly into coprostanol, a nonabsorbable sterol that is excreted in the feces. Although cholesterol is a steroid generally associated with mammals, the human pathogen "Mycobacterium tuberculosis" is able to completely degrade this molecule and contains a large number of genes that are regulated by its presence. Many of these cholesterol-regulated genes are homologues of fatty acid β-oxidation genes, but have evolved in such a way as to bind large steroid substrates like cholesterol. 
Animal fats are complex mixtures of triglycerides, with lesser amounts of both the phospholipids and cholesterol molecules from which all animal (and human) cell membranes are constructed. Since all animal cells manufacture cholesterol, all animal-based foods contain cholesterol in varying amounts. Major dietary sources of cholesterol include red meat, egg yolks and whole eggs, liver, kidney, giblets, fish oil, and butter. Human breast milk also contains significant quantities of cholesterol. Plant cells synthesize cholesterol as a precursor for other compounds, such as phytosterols and steroidal glycoalkaloids, so cholesterol remains in plant foods only in minor amounts, or is absent. Some plant foods, such as avocado, flax seeds and peanuts, contain phytosterols, which compete with cholesterol for absorption in the intestines, thereby reducing the absorption of both dietary and bile cholesterol. A typical diet contributes on the order of 0.2 grams of phytosterols, which is not enough to have a significant impact on blocking cholesterol absorption. Phytosterol intake can be supplemented through the use of phytosterol-containing functional foods or dietary supplements that are recognized as having potential to reduce levels of LDL-cholesterol. Some supplemental guidelines have recommended doses of phytosterols in the 1.6–3.0 grams per day range (Health Canada, EFSA, ATP III, FDA). A recent meta-analysis demonstrated a 12% reduction in LDL-cholesterol at a mean dose of 2.1 grams per day. However, the benefits of a diet supplemented with phytosterols have been questioned. In 2016, the United States Department of Agriculture Dietary Guidelines Advisory Committee recommended that Americans eat as little dietary cholesterol as possible. Increased dietary intake of industrial trans fats is associated with an increased risk of all-cause mortality and cardiovascular diseases. Trans fats have been shown to correlate with reduced levels of HDL and increased levels of LDL. 
Based on this evidence, along with other claims implicating low HDL and high LDL levels in cardiovascular disease, many health authorities advocate reducing LDL-cholesterol through changes in diet in addition to other lifestyle modifications. The studies correlating trans fats, as well as saturated fats, with unhealthy serum cholesterol levels have since been contested on numerous points. The most notable challenge to these standards comes from a meta-analysis, published on NCBI, of the data used in the development of these guidelines, in which the correlation between serum cholesterol and saturated fat intake was similar to or weaker than the correlation with visceral fat. Other analyses followed, one of which concluded that current evidence "does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats." Other evidence, such as metabolic ward and laboratory studies, including a study in which rats fed high-fat or fructose diets became dyslipidemic, is similarly questionable, given indications of an increase in visceral fat, which occurs as a result of metabolic differences in the processing of fructose. A general inconsistency of conclusions regarding the impact of simple carbohydrates on visceral fat, and a lack of data on the causal relationship between serum cholesterol and either saturated fat or visceral fat, make drawing a definitive conclusion unreasonable, especially given the presence of numerous correlations. 
As such, well-designed, adequately powered randomized controlled trials investigating patient-relevant outcomes of low-fat diets for otherwise healthy people with hypercholesterolaemia are lacking, and large, parallel, randomized controlled trials are still needed to investigate the effectiveness of a cholesterol-lowering diet and the addition of omega-3 fatty acids, soya protein, and plant sterols or stanols, especially in the case of familial hypercholesterolemia. According to the lipid hypothesis, elevated levels of cholesterol in the blood lead to atherosclerosis, which may increase the risk of heart attack, stroke, and peripheral artery disease. Since higher blood LDL – especially higher LDL concentrations and smaller LDL particle size – contributes to this process more than the cholesterol content of the HDL particles, LDL particles are often termed "bad cholesterol". High concentrations of functional HDL, which can remove cholesterol from cells and atheromas, offer protection and are commonly referred to as "good cholesterol". These balances are mostly genetically determined, but can be changed by body composition, medications, diet, and other factors. A 2007 study demonstrated that blood total cholesterol levels have an exponential effect on cardiovascular and total mortality, with the association more pronounced in younger subjects. Because cardiovascular disease is relatively rare in the younger population, the impact of high cholesterol on health is larger in older people. Elevated levels of the lipoprotein fractions LDL, IDL and VLDL, rather than the total cholesterol level, correlate with the extent and progress of atherosclerosis. Conversely, the total cholesterol can be within normal limits, yet be made up primarily of small LDL and small HDL particles, under which conditions atheroma growth rates are high. 
A "post hoc" analysis of the IDEAL and the EPIC prospective studies found an association between high levels of HDL cholesterol (adjusted for apolipoprotein A-I and apolipoprotein B) and increased risk of cardiovascular disease, casting doubt on the cardioprotective role of "good cholesterol". About one in 250 adults has a genetic mutation of the LDL cholesterol receptor that causes them to have familial hypercholesterolemia. Inherited high cholesterol can also include genetic mutations in the PCSK9 gene and the gene for apolipoprotein B. Elevated cholesterol levels are treated with a strict diet consisting of low saturated fat, trans fat-free, low cholesterol foods, often followed by one of various hypolipidemic agents, such as statins, fibrates, cholesterol absorption inhibitors, nicotinic acid derivatives or bile acid sequestrants. There are several international guidelines on the treatment of hypercholesterolaemia. Human trials using HMG-CoA reductase inhibitors, known as statins, have repeatedly confirmed that changing lipoprotein transport patterns from unhealthy to healthier patterns significantly lowers cardiovascular disease event rates, even for people with cholesterol values currently considered low for adults. Studies have shown that reducing LDL cholesterol levels by about 38.7 mg/dL with the use of statins can reduce cardiovascular disease and stroke risk by about 21%. Studies have also found that statins reduce atheroma progression. As a result, people with a history of cardiovascular disease may derive benefit from statins irrespective of their cholesterol levels (total cholesterol below 5.0 mmol/L [193 mg/dL]), and in men without cardiovascular disease, there is benefit from lowering abnormally high cholesterol levels ("primary prevention"). 
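As a rough sanity check on the figures quoted above (38.7 mg/dL and 5.0 mmol/L [193 mg/dL]), cholesterol concentrations can be converted between the two common units with a factor of about 38.67 mg/dL per mmol/L, which follows from cholesterol's molar mass of roughly 386.7 g/mol. A minimal sketch; the helper names are hypothetical, not from any clinical library:

```python
# 1 mmol/L of cholesterol is about 38.67 mg/dL
# (molar mass ~386.7 g/mol, and 1 dL = 0.1 L, so 386.7 mg/L / 10).
MGDL_PER_MMOLL = 38.67

def mmoll_to_mgdl(mmol_per_l):
    """Convert a cholesterol concentration from mmol/L to mg/dL."""
    return mmol_per_l * MGDL_PER_MMOLL

def mgdl_to_mmoll(mg_per_dl):
    """Convert a cholesterol concentration from mg/dL to mmol/L."""
    return mg_per_dl / MGDL_PER_MMOLL
```

With this factor, 5.0 mmol/L comes out at about 193 mg/dL, matching the bracketed value in the text.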
Primary prevention in women was originally practiced only by extension of the findings in studies on men, since, in women, none of the large statin trials conducted prior to 2007 demonstrated a significant reduction in overall mortality or in cardiovascular endpoints. Meta-analyses have demonstrated significant reductions in all-cause and cardiovascular mortality, without significant heterogeneity by sex. The 1987 report of the National Cholesterol Education Program Adult Treatment Panel suggests the total blood cholesterol level should be: < 200 mg/dL normal blood cholesterol, 200–239 mg/dL borderline-high, > 240 mg/dL high cholesterol. The American Heart Association provides a similar set of guidelines for total (fasting) blood cholesterol levels and risk for heart disease. Statins are effective in lowering LDL cholesterol and are widely used for primary prevention in people at high risk of cardiovascular disease, as well as in secondary prevention for those who have developed cardiovascular disease. More current testing methods determine LDL ("bad") and HDL ("good") cholesterol separately, allowing cholesterol analysis to be more nuanced. The desirable LDL level is considered to be less than 130 mg/dL (2.6 mmol/L), although a newer upper limit of 70 mg/dL (1.8 mmol/L) can be considered in higher-risk individuals based on some of the above-mentioned trials. A ratio of total cholesterol to HDL—another useful measure—of far less than 5:1 is thought to be healthier. Total cholesterol is defined as the sum of HDL, LDL, and VLDL. Usually, only the total, HDL, and triglycerides are measured. For cost reasons, the VLDL is usually estimated as one-fifth of the triglycerides and the LDL is estimated using the Friedewald formula (or a variant): estimated LDL = [total cholesterol] − [total HDL] − [estimated VLDL]. Direct LDL measures are used when triglycerides exceed 400 mg/dL. 
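The Friedewald estimate and the 1987 NCEP categories described above can be written out as a short sketch. The function names are hypothetical; all values are in mg/dL, and the estimate is treated as invalid above 400 mg/dL triglycerides, as the text notes.

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimate LDL via the Friedewald formula (all values in mg/dL).

    VLDL is approximated as triglycerides / 5; a direct LDL assay is
    needed when triglycerides exceed 400 mg/dL.
    """
    if triglycerides > 400:
        raise ValueError("Friedewald estimate invalid above 400 mg/dL triglycerides")
    estimated_vldl = triglycerides / 5.0
    return total_chol - hdl - estimated_vldl

def ncep_total_category(total_chol):
    """Classify total cholesterol (mg/dL) per the 1987 NCEP ATP categories."""
    if total_chol < 200:
        return "normal"
    if total_chol < 240:
        return "borderline-high"
    return "high"
```

For example, a total cholesterol of 200, HDL of 50, and triglycerides of 150 give an estimated VLDL of 30 and an estimated LDL of 120 mg/dL.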
The estimated VLDL and LDL have more error when triglycerides are above 400 mg/dL. In the Framingham Heart Study, among subjects over 50 years of age, researchers found an 11% increase in overall mortality and a 14% increase in cardiovascular disease mortality per 1 mg/dL per year drop in total cholesterol levels. The researchers attributed this phenomenon to the fact that people with severe chronic diseases or cancer tend to have below-normal cholesterol levels. This explanation is not supported by the Vorarlberg Health Monitoring and Promotion Programme, in which men of all ages and women over 50 with very low cholesterol were likely to die of cancer, liver diseases, and mental diseases. This result indicates the low-cholesterol effect occurs even among younger respondents, contradicting the previous assessment among cohorts of older people that this is a proxy or marker for frailty occurring with age. Although there is a link between cholesterol and atherosclerosis as discussed above, a 2014 meta-analysis concluded there is insufficient evidence to support the recommendation of high consumption of polyunsaturated fatty acids and low consumption of total saturated fats for cardiovascular health. A 2016 review concluded there was either no link between LDL and mortality or that lower LDL was linked to a higher mortality risk, especially in older adults. Abnormally low levels of cholesterol are termed "hypocholesterolemia". Research into the causes of this state is relatively limited, but some studies suggest a link with depression, cancer, and cerebral hemorrhage. In general, the low cholesterol levels seem to be a consequence, rather than a cause, of an underlying illness. A genetic defect in cholesterol synthesis causes Smith–Lemli–Opitz syndrome, which is often associated with low plasma cholesterol levels. Hyperthyroidism, or any other endocrine disturbance which causes upregulation of the LDL receptor, may result in hypocholesterolemia. 
The American Heart Association recommends testing cholesterol every 4–6 years for people aged 20 years or older. A separate set of American Heart Association guidelines issued in 2013 indicates that patients taking statin medications should have their cholesterol tested 4–12 weeks after their first dose and then every 3–12 months thereafter. A blood sample after 12-hour fasting is taken by a doctor, or a home cholesterol-monitoring device is used to measure a lipid profile, an approach used to estimate a person's lipoproteins, the more important issue because lipoprotein measures are concordant with outcomes even where the lipid profile is discordant. The lipid profile measures: (a) total cholesterol, (b) cholesterol associated with HDL (higher-density-than-water lipoprotein) particles ("which can regress arterial disease"), (c) triglycerides, and (d) (by calculation and assumptions) cholesterol carried by LDL (lower-density-than-water lipoprotein) particles ("which drive arterial disease"). It is recommended to test cholesterol at least every five years if a person has total cholesterol of 5.2 mmol/L or more (200+ mg/dL), or if a man over age 45 or a woman over age 50 has HDL-C values less than 1 mmol/L (40 mg/dL), or if there are other drivers of heart disease and stroke. Additional drivers of heart disease include diabetes mellitus, hypertension (or use of anti-hypertensive medication), low HDL level, family history of coronary artery disease (CAD) and hypercholesterolemia, and cigarette smoking. Some cholesterol derivatives (among other simple cholesteric lipids) are known to generate the liquid crystalline "cholesteric phase". The cholesteric phase is, in fact, a chiral nematic phase, and it changes colour when its temperature changes. 
This makes cholesterol derivatives useful for indicating temperature in liquid-crystal display thermometers and in temperature-sensitive paints. Cholesterol has 256 stereoisomers that arise from its 8 stereocenters, although only two of the stereoisomers are of biochemical significance ("nat"-cholesterol and "ent"-cholesterol, for "natural" and "enantiomer", respectively), and only one occurs naturally ("nat"-cholesterol).
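The stereoisomer count follows from simple combinatorics: each stereocenter can take one of two configurations, so n stereocenters allow at most 2^n stereoisomers. A one-line check (function name is illustrative):

```python
def max_stereoisomers(stereocenters):
    """Upper bound on stereoisomers: two configurations per stereocenter."""
    return 2 ** stereocenters

# Cholesterol's 8 stereocenters give the 256 stereoisomers cited above,
# though only "nat"-cholesterol occurs naturally.
```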
Chromosome A chromosome is a DNA (deoxyribonucleic acid) molecule with part or all of the genetic material (genome) of an organism. Most eukaryotic chromosomes include packaging proteins which, aided by chaperone proteins, bind to and condense the DNA molecule to prevent it from becoming an unmanageable tangle. This three-dimensional genome structure plays a significant role in transcriptional regulation. Chromosomes are normally visible under a light microscope only when the cell is undergoing the metaphase of cell division (where all chromosomes are aligned in the center of the cell in their condensed form). Before this happens, every chromosome is copied once (S phase), and the copy is joined to the original by a centromere, resulting either in an X-shaped structure if the centromere is located in the middle of the chromosome or a two-arm structure if the centromere is located near one of the ends. The original chromosome and the copy are now called sister chromatids. During metaphase the X-shaped structure is called a metaphase chromosome. In this highly condensed form chromosomes are easiest to distinguish and study. In animal cells, chromosomes reach their highest compaction level in anaphase during chromosome segregation. Chromosomal recombination during meiosis and subsequent sexual reproduction play a significant role in genetic diversity. If these structures are manipulated incorrectly, through processes known as chromosomal instability and translocation, the cell may undergo mitotic catastrophe. Usually, this will make the cell initiate apoptosis leading to its own death, but sometimes mutations in the cell hamper this process and thus cause progression of cancer. Some use the term chromosome in a wider sense, to refer to the individualized portions of chromatin in cells, either visible or not under light microscopy. 
Others use the concept in a narrower sense, to refer to the individualized portions of chromatin during cell division, visible under light microscopy due to high condensation. The word "chromosome" comes from the Greek "χρῶμα" ("chroma", "colour") and "σῶμα" ("soma", "body"), describing their strong staining by particular dyes. The term was coined by the German scientist von Waldeyer-Hartz, referring to the term chromatin, which was itself introduced by Walther Flemming, who discovered cell division. Some of the early karyological terms have become outdated. For example, Chromatin (Flemming 1880) and Chromosom (Waldeyer 1888) both ascribe color to a non-colored state. The German scientists Schleiden, Virchow and Bütschli were among the first scientists who recognized the structures now familiar as chromosomes. In a series of experiments beginning in the mid-1880s, Theodor Boveri gave the definitive demonstration that chromosomes are the vectors of heredity; his two principles or postulates were the "continuity" of chromosomes and the "individuality" of chromosomes. It is the second of these principles that was so original. Wilhelm Roux suggested that each chromosome carries a different genetic configuration, and Boveri was able to test and confirm this hypothesis. Aided by the rediscovery at the start of the 1900s of Gregor Mendel's earlier work, Boveri was able to point out the connection between the rules of inheritance and the behaviour of the chromosomes. Boveri influenced two generations of American cytologists: Edmund Beecher Wilson, Nettie Stevens, Walter Sutton and Theophilus Painter were all influenced by Boveri (Wilson, Stevens, and Painter actually worked with him). In his famous textbook "The Cell in Development and Heredity", Wilson linked together the independent work of Boveri and Sutton (both around 1902) by naming the chromosome theory of inheritance the Boveri–Sutton chromosome theory (the names are sometimes reversed). 
Ernst Mayr remarks that the theory was hotly contested by some famous geneticists: William Bateson, Wilhelm Johannsen, Richard Goldschmidt and T.H. Morgan, all of a rather dogmatic turn of mind. Eventually, complete proof came from chromosome maps in Morgan's own lab. The number of human chromosomes was published in 1923 by Theophilus Painter. By inspection through the microscope, he counted 24 pairs, which would mean 48 chromosomes. His error was copied by others, and it was not until 1956 that the true number, 46, was determined by the Indonesia-born cytogeneticist Joe Hin Tjio. The prokaryotes – bacteria and archaea – typically have a single circular chromosome, but many variations exist. The chromosomes of most bacteria, which some authors prefer to call genophores, can range in size from only 130,000 base pairs in the endosymbiotic bacteria "Candidatus Hodgkinia cicadicola" and "Candidatus Tremblaya princeps", to more than 14,000,000 base pairs in the soil-dwelling bacterium "Sorangium cellulosum". Spirochaetes of the genus "Borrelia" are a notable exception to this arrangement, with bacteria such as "Borrelia burgdorferi", the cause of Lyme disease, containing a single "linear" chromosome. Prokaryotic chromosomes have less sequence-based structure than eukaryotes. Bacteria typically have a single point (the origin of replication) from which replication starts, whereas some archaea contain multiple replication origins. The genes in prokaryotes are often organized in operons, and do not usually contain introns, unlike eukaryotes. Prokaryotes do not possess nuclei. Instead, their DNA is organized into a structure called the nucleoid. The nucleoid is a distinct structure and occupies a defined region of the bacterial cell. This structure is, however, dynamic and is maintained and remodeled by the actions of a range of histone-like proteins, which associate with the bacterial chromosome. 
In archaea, the DNA in chromosomes is even more organized, with the DNA packaged within structures similar to eukaryotic nucleosomes. Certain bacteria also contain plasmids or other extrachromosomal DNA. These are circular structures in the cytoplasm that contain cellular DNA and play a role in horizontal gene transfer. In prokaryotes (see nucleoids) and viruses, the DNA is often densely packed and organized; in the case of archaea, by homology to eukaryotic histones, and in the case of bacteria, by histone-like proteins. Bacterial chromosomes tend to be tethered to the plasma membrane of the bacteria. In molecular biology applications, this allows for their isolation from plasmid DNA by centrifugation of lysed bacteria and pelleting of the membranes (and the attached DNA). Prokaryotic chromosomes and plasmids are, like eukaryotic DNA, generally supercoiled. The DNA must first be released into its relaxed state for access for transcription, regulation, and replication. Chromosomes in eukaryotes are composed of chromatin fiber. Chromatin fiber is made of nucleosomes (histone octamers with part of a DNA strand attached to and wrapped around them). These fibers are further packaged by proteins into increasingly condensed structures. Chromatin contains the vast majority of a cell's DNA; a small amount, inherited maternally, is found in the mitochondria. Chromatin is present in most cells, with a few exceptions, for example, red blood cells. Chromatin allows the very long DNA molecules to fit into the cell nucleus. During cell division chromatin condenses further to form microscopically visible chromosomes. The structure of chromosomes varies through the cell cycle. During cellular division chromosomes are replicated, divided, and passed successfully to their daughter cells so as to ensure the genetic diversity and survival of their progeny. Chromosomes may exist as either duplicated or unduplicated. 
Unduplicated chromosomes are single double helixes, whereas duplicated chromosomes contain two identical copies (called chromatids or sister chromatids) joined by a centromere. Eukaryotes (cells with nuclei such as those found in plants, fungi, and animals) possess multiple large linear chromosomes contained in the cell's nucleus. Each chromosome has one centromere, with one or two arms projecting from the centromere, although, under most circumstances, these arms are not visible as such. In addition, most eukaryotes have a small circular mitochondrial genome, and some eukaryotes may have additional small circular or linear cytoplasmic chromosomes. In the nuclear chromosomes of eukaryotes, the uncondensed DNA exists in a semi-ordered structure, where it is wrapped around histones (structural proteins), forming a composite material called chromatin. During interphase (the period of the cell cycle where the cell is not dividing), two types of chromatin can be distinguished: euchromatin and heterochromatin. In the early stages of mitosis or meiosis (cell division), the chromatin double helix becomes more and more condensed. It ceases to function as accessible genetic material (transcription stops) and becomes a compact transportable form. This compact form makes the individual chromosomes visible, and they form the classic four-arm structure, a pair of sister chromatids attached to each other at the centromere. The shorter arms are called "p arms" (from the French "petit", small) and the longer arms are called "q arms" ("q" follows "p" in the Latin alphabet; q is also said to stand for "grande", meaning large; alternatively it is sometimes said q is short for "queue", meaning tail in French). This is the only natural context in which individual chromosomes are visible with an optical microscope. Mitotic metaphase chromosomes are best described as a linearly organized, longitudinally compressed array of consecutive chromatin loops. 
During mitosis, microtubules grow from centrosomes located at opposite ends of the cell and also attach to the centromere at specialized structures called kinetochores, one of which is present on each sister chromatid. A special DNA base sequence in the region of the kinetochores provides, along with special proteins, longer-lasting attachment in this region. The microtubules then pull the chromatids apart toward the centrosomes, so that each daughter cell inherits one set of chromatids. Once the cells have divided, the chromatids are uncoiled and DNA can again be transcribed. In spite of their appearance, chromosomes are structurally highly condensed, which enables these giant DNA structures to be contained within a cell nucleus. Chromosomes in humans can be divided into two types: autosomes (body chromosomes) and allosomes (sex chromosomes). Certain genetic traits are linked to a person's sex and are passed on through the sex chromosomes. The autosomes contain the rest of the genetic hereditary information. All act in the same way during cell division. Human cells have 23 pairs of chromosomes (22 pairs of autosomes and one pair of sex chromosomes), giving a total of 46 per cell. In addition to these, human cells have many hundreds of copies of the mitochondrial genome. Sequencing of the human genome has provided a great deal of information about each of the chromosomes. Below is a table compiling statistics for the chromosomes, based on the Sanger Institute's human genome information in the Vertebrate Genome Annotation (VEGA) database. The number of genes is an estimate, as it is in part based on gene predictions. Total chromosome length is an estimate as well, based on the estimated size of unsequenced heterochromatin regions. These tables give the total number of chromosomes (including sex chromosomes) in a cell nucleus. 
For example, most eukaryotes are diploid, like humans, who have 22 different types of autosomes, each present as a homologous pair, and two sex chromosomes. This gives 46 chromosomes in total. Other organisms have more than two copies of their chromosome types, such as bread wheat, which is "hexaploid" and has six copies of seven different chromosome types – 42 chromosomes in total. Normal members of a particular eukaryotic species all have the same number of nuclear chromosomes (see the table). Other eukaryotic chromosomes, i.e., mitochondrial and plasmid-like small chromosomes, are much more variable in number, and there may be thousands of copies per cell. Asexually reproducing species have one set of chromosomes that are the same in all body cells. However, asexual species can be either haploid or diploid. Sexually reproducing species have somatic cells (body cells), which are diploid [2n], having two sets of chromosomes (23 pairs in humans), one set from the mother and one from the father. Gametes, reproductive cells, are haploid [n]: they have one set of chromosomes. Gametes are produced by meiosis of a diploid germ line cell. During meiosis, the matching chromosomes of father and mother can exchange small parts of themselves (crossover), and thus create new chromosomes that are not inherited solely from either parent. When a male and a female gamete merge (fertilization), a new diploid organism is formed. Some animal and plant species are polyploid [Xn]: they have more than two sets of homologous chromosomes. Plants important in agriculture such as tobacco or wheat are often polyploid, compared to their ancestral species. Wheat has a haploid number of seven chromosomes, still seen in some cultivars as well as the wild progenitors. The more-common pasta and bread wheat types are polyploid, having 28 (tetraploid) and 42 (hexaploid) chromosomes, compared to the 14 (diploid) chromosomes in the wild wheat. 
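The ploidy arithmetic above (total chromosome count = number of chromosome sets × chromosomes per set) can be sketched in a few lines of Python. This is an illustrative snippet using only counts stated in the text; the function name is an assumption, not from any genetics library.

```python
def total_chromosomes(ploidy: int, monoploid_number: int) -> int:
    """Somatic chromosome count: number of sets times chromosomes per set."""
    return ploidy * monoploid_number

# Humans: diploid (2 sets of 23 chromosomes)
assert total_chromosomes(2, 23) == 46
# Pasta wheat: tetraploid (4 sets of the base number 7)
assert total_chromosomes(4, 7) == 28
# Bread wheat: hexaploid (6 sets of the base number 7)
assert total_chromosomes(6, 7) == 42
```

The same relation recovers the diploid count of 14 for wild wheat, matching the figures given above.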
Prokaryote species generally have one copy of each major chromosome, but most cells can easily survive with multiple copies. For example, "Buchnera", a symbiont of aphids, has multiple copies of its chromosome, ranging from 10 to 400 copies per cell. However, in some large bacteria, such as "Epulopiscium fishelsoni", up to 100,000 copies of the chromosome can be present. Plasmids and plasmid-like small chromosomes are, as in eukaryotes, highly variable in copy number. The number of plasmids in the cell is almost entirely determined by the rate of division of the plasmid – fast division causes high copy number. In general, the karyotype is the characteristic chromosome complement of a eukaryote species. The preparation and study of karyotypes is part of cytogenetics. Although the replication and transcription of DNA is highly standardized in eukaryotes, "the same cannot be said for their karyotypes", which are often highly variable. There may be variation between species in chromosome number and in detailed organization. In some cases, there is significant variation within species. Variation in karyotype may also occur during development from the fertilized egg. The technique of determining the karyotype is usually called "karyotyping". Cells can be locked part-way through division (in metaphase) in vitro (in a reaction vial) with colchicine. These cells are then stained, photographed, and arranged into a "karyogram", with the set of chromosomes arranged, autosomes in order of length, and sex chromosomes (X/Y) at the end. Like many sexually reproducing species, humans have special gonosomes (sex chromosomes, in contrast to autosomes). These are XX in females and XY in males. Investigation into the human karyotype took many years to settle the most basic question: "How many chromosomes does a normal diploid human cell contain?" 
In 1912, Hans von Winiwarter reported 47 chromosomes in spermatogonia and 48 in oogonia, and concluded there was an XX/XO sex determination mechanism. Painter in 1922 was not certain whether the diploid number of man was 46 or 48, at first favouring 46. He revised his opinion later from 46 to 48, and he correctly insisted on humans having an XX/XY system. New techniques were needed to definitively solve the problem: it took until 1954 before the human diploid number was confirmed as 46. Considering the techniques of Winiwarter and Painter, their results were quite remarkable. Chimpanzees, the closest living relatives to modern humans, have 48 chromosomes, as do the other great apes: in humans two chromosomes fused to form chromosome 2. Chromosomal aberrations are disruptions in the normal chromosomal content of a cell and are a major cause of genetic conditions in humans, such as Down syndrome, although most aberrations have little to no effect. Some chromosome abnormalities do not cause disease in carriers, such as translocations, or chromosomal inversions, although they may lead to a higher chance of bearing a child with a chromosome disorder. Abnormal numbers of chromosomes or chromosome sets, called aneuploidy, may be lethal or may give rise to genetic disorders. Genetic counseling is offered for families that may carry a chromosome rearrangement. The gain or loss of DNA from chromosomes can lead to a variety of genetic disorders. Exposure of males to certain lifestyle, environmental and/or occupational hazards may increase the risk of aneuploid spermatozoa. In particular, risk of aneuploidy is increased by tobacco smoking, and occupational exposure to benzene, insecticides, and perfluorinated compounds. Increased aneuploidy is often associated with increased DNA damage in spermatozoa.
https://en.wikipedia.org/wiki?curid=6438
Colonna family The Colonna family, also known as Sciarrillo or Sciarra, is an Italian papal noble family. It was powerful in medieval and Renaissance Rome, supplying one Pope (Martin V) and many other church and political leaders. The family is notable for its bitter feud with the Orsini family over influence in Rome, until it was stopped by Papal Bull in 1511. In 1571, the heads of both families married nieces of Pope Sixtus V. Thereafter, historians recorded that "no peace had been concluded between the princes of Christendom, in which they had not been included by name". According to tradition, the Colonna family is a branch of the Counts of Tusculum — by Peter (1099–1151), son of Gregory III, called Peter "de Columna" from his property the Columna Castle in Colonna, Alban Hills. Further back, they trace their lineage past the Counts of Tusculum via Lombard and Italo-Roman nobles, merchants, and clergy through the Early Middle Ages — ultimately claiming origins from the Julio-Claudian dynasty. The first cardinal from the family was appointed in 1206, when Giovanni Colonna di Carbognano was made Cardinal Deacon of SS. Cosma e Damiano. For many years, Cardinal Giovanni di San Paolo (elevated in 1193) was identified as a member of the Colonna family and therefore its first representative in the College of Cardinals, but modern scholars have established that this was based on false information from the beginning of the 16th century. Giovanni Colonna (born c. 1206), nephew of Cardinal Giovanni Colonna di Carbognano, made his solemn vows as a Dominican around 1228 and received his theological and philosophical training at the Roman "studium" of Santa Sabina, the forerunner of the Pontifical University of Saint Thomas Aquinas, "Angelicum". He served as the Provincial of the Roman province of the Dominican Order and led the provincial chapter of 1248 at Anagni. Colonna was appointed as Archbishop of Messina in 1255. 
Margherita Colonna (died 1280) was a member of the Franciscan Order. She was beatified by Pope Pius IX in 1848. At this time, a rivalry began with the pro-papal Orsini family, leaders of the Guelph faction. This reinforced the pro-Emperor Ghibelline course that the Colonna family followed throughout the period of conflict between the Papacy and the Holy Roman Empire. In 1297, Cardinal Jacopo (Giacomo Colonna) disinherited his brothers Ottone, Matteo, and Landolfo of their lands. The latter three appealed to Pope Boniface VIII, who ordered Jacopo to return the land, and furthermore hand over the family's strongholds of Colonna, Palestrina, and other towns to the Papacy. Jacopo refused; in May, Boniface removed him from the College of Cardinals and excommunicated him and his followers. The Colonna family (aside from the three brothers allied with the Pope) declared that Boniface had been elected illegally following the unprecedented abdication of Pope Celestine V. The dispute led to open warfare, and in September, Boniface appointed Landolfo to the command of his army, to put down the revolt of Landolfo's own Colonna relatives. By the end of 1298, Landolfo had captured Colonna, Palestrina and other towns, and razed them to the ground. The family's lands were distributed among Landolfo and his loyal brothers; the rest of the family fled Italy. The exiled Colonnas allied with the Pope's other great enemy, Philip IV of France, who in his youth had been tutored by Cardinal Egidio Colonna. In September 1303, Sciarra and Philip's advisor, Guillaume de Nogaret, led a small force into Anagni to arrest Boniface VIII and bring him to France, where he was to stand trial. The two managed to apprehend the pope, and Sciarra reportedly slapped the pope in the face in the process, which was accordingly dubbed the "Outrage of Anagni". The attempt eventually failed after a few days, when locals freed the pope. 
However, Boniface VIII died on 11 October, allowing France to dominate his weaker successors during the Avignon papacy. The family remained at the centre of civic and religious life throughout the late Middle Ages. Cardinal Egidio Colonna died at the papal court in Avignon in 1314. An Augustinian, he had studied theology in Paris under St. Thomas Aquinas to become one of the most authoritative thinkers of his time. In the 14th century, the family sponsored the decoration of the Church of San Giovanni, most notably the floor mosaics. In 1328, Louis IV of Germany marched into Italy for his coronation as Holy Roman Emperor. As Pope John XXII was residing in Avignon and had publicly declared that he would not crown Louis, the King decided to be crowned by a member of the Roman aristocracy, and Sciarra Colonna was proposed. In honor of this event, the Colonna family was granted the privilege of using the imperial pointed crown on top of their coat of arms. The celebrated poet Petrarch was a great friend of the family, in particular of Giovanni Colonna, and often lived in Rome as a guest of the family. He composed a number of sonnets for special occasions within the Colonna family, including "Colonna the Glorious, the great Latin name upon which all our hopes rest". In this period, the Colonna started claiming they were descendants of the Julio-Claudian dynasty. At the Council of Constance, the Colonna finally succeeded in their papal ambitions when Oddone Colonna was elected on 14 November 1417. As Martin V, he reigned until his death on 20 February 1431. Vittoria Colonna became famous in the sixteenth century as a poet and a figure in literate circles. In 1627 Anna Colonna, daughter of Filippo I Colonna, married Taddeo Barberini of the family Barberini, nephew of Pope Urban VIII. 
In 1728, the Carbognano branch (Colonna di Sciarra) of the Colonna family added the name Barberini to its family name when Giulio Cesare Colonna di Sciarra married Cornelia Barberini, daughter of the last male Barberini to hold the name and granddaughter of Maffeo Barberini (son of Taddeo Barberini). The Colonna family have been Prince Assistants to the Papal Throne since 1710, though their papal princely title only dates from 1854. The family residence in Rome, the Palazzo Colonna, is open to the public every Saturday morning. The main 'Colonna di Paliano' family is represented today by Prince Marcantonio Colonna di Paliano, Prince and Duke of Paliano (b. 1948), whose heir is Don Giovanni Andrea Colonna di Paliano (b. 1975), and by Don Prospero Colonna di Paliano, Prince of Avella (b. 1956), whose heir is Don Filippo Colonna di Paliano (b. 1995). The 'Colonna di Stigliano' line is represented by Don Prospero Colonna di Stigliano, Prince of Stigliano (b. 1938), whose heir is his nephew Don Stefano Colonna di Stigliano (b. 1975). Príncipe Frederico Giuseppe (b. 1954) is a great-grandson of Maria Giulia Colonna (1783–1867).
https://en.wikipedia.org/wiki?curid=6440
Ceuta Ceuta is a Spanish autonomous city on the north coast of Africa. Bordered by Morocco, it lies along the boundary between the Mediterranean Sea and the Atlantic Ocean and is one of nine populated Spanish territories in Africa and, along with Melilla, one of two populated Spanish territories on mainland Africa. It was part of the province of Cádiz until 14 March 1995, when Statutes of Autonomy were passed for both Ceuta and Melilla. Ceuta, like Melilla and the Canary Islands, was classified as a free port before Spain joined the European Union. Its population consists of Christians, Muslims and small minorities of Sephardic Jews and ethnic Sindhi Hindus. Spanish is the official language. Darija Arabic is also spoken by the 40–50% of the population who are of Moroccan origin. The name Abyla has been said to have been a Punic name ("Lofty Mountain" or "Mountain of God") for Jebel Musa, the southern Pillar of Hercules. The name of the mountain was in fact "Habenna" ("Stone" or "Stele") or "ʾAbin-ḥīq" ("Rock of the Bay"), in reference to the nearby Bay of Benzú. The name was hellenized variously as "Ápini", "Abýla", "Abýlē", "Ablýx", and "Abílē Stḗlē" ("Pillar of Abyla") and in Latin as Mount Abyla or the Pillar of Abyla. The settlement below Jebel Musa was later renamed for the seven hills around the site, collectively referred to as the "Seven Brothers". In particular, the Roman stronghold at the site took the name "Fort at the Seven Brothers". This was gradually shortened to Septem or, occasionally, Septum or Septa. These clipped forms continued as Berber "Sebta" and Arabic "Sabtan" or "Sabtah", which themselves became "Ceuta" in Portuguese and Spanish. Controlling access between the Atlantic Ocean and the Mediterranean Sea, the Strait of Gibraltar is an important military and commercial chokepoint. 
The Phoenicians realized the extremely narrow isthmus joining the Peninsula of Almina to the African mainland makes Ceuta eminently defensible and established an outpost there in the early 1st millennium BC. The Greek geographers record it by variations of "Abyla", the ancient name of nearby Jebel Musa. Beside Calpe, the other Pillar of Hercules now known as the Rock of Gibraltar, the Phoenicians established Kart at what is now San Roque, Spain. Other good anchorages nearby became Phoenician and then Carthaginian ports at what are now Tangiers and Cadiz. After Carthage's destruction in the Punic Wars, most of northwest Africa was left to the Roman client states of Numidia and, around Abyla, Mauretania. Punic culture continued to thrive in what the Romans knew as "Septem". After the Battle of Thapsus in 46 BC, Caesar and his heirs began annexing north Africa directly as Roman provinces but, as late as Augustus, most of Septem's Berber residents continued to speak and write in Punic. Caligula assassinated the Mauretanian king Ptolemy in AD 40 and seized his kingdom, which Claudius organized in AD 42, placing Septem in the province of Tingitana and raising it to the level of a colony. It subsequently romanized and thrived into the late 3rd century, trading heavily with Roman Spain and becoming well known for its salted fish. Roads connected it overland with Tingis (Tangiers) and Volubilis. In the late 4th century, Septem still had 10,000 inhabitants, nearly all Christian citizens speaking Latin and African Romance. Vandals, probably invited by Count Boniface as protection against the empress dowager, crossed the strait near Tingis around 425 and swiftly overran Roman North Africa. Their king Gaiseric focused his attention on the rich lands around Carthage; although the Romans eventually accepted his conquests and he continued to raid them anyway, he soon lost control of Tingis and Septem in a series of Berber revolts. 
When Justinian decided to reconquer the Vandal lands, his victorious general Belisarius continued along the coast, making Septem an outpost of the Byzantine Empire around 533. Unlike the Roman administration, however, the Byzantines did not push far into the hinterland and made the more defensible Septem their regional capital in place of Tingis. Epidemics, less capable successors, and overstretched supply lines forced a retrenchment and left Septem isolated. It is likely that its count was obliged to pay homage to the Visigoth Kingdom in Spain in the early 7th century. There are no reliable contemporary accounts of the end of the Islamic conquest of the Maghreb around 710. Instead, the rapid Muslim conquest of Spain produced romances concerning Count Julian of Septem and his betrayal of Christendom in revenge for the dishonor that befell his daughter at King Roderick's court. Allegedly with Julian's encouragement and instructions, the Berber convert and freedman Tariq ibn Ziyad took his garrison from Tangiers across the strait and overran the Spanish so swiftly that both he and his master Musa bin Nusayr fell afoul of a jealous caliph, who stripped them of their wealth and titles. After the death of Julian, sometimes also described as a king of the Ghomara Berbers, Berber converts to Islam took direct control of what they called Sebta. It was then destroyed during their great revolt against the Umayyad Caliphate around 740. Sebta subsequently remained a small village of Muslims and Christians surrounded by ruins until its resettlement in the 9th century by Mâjakas, chief of the Majkasa Berber tribe, who started the short-lived Banu Isam dynasty. His great-grandson briefly allied his tribe with the Idrisids, but Banu Isam rule ended in 931 when he abdicated in favor of Abd ar-Rahman III, the Umayyad caliph of Cordoba. Ceuta reverted to Moorish Andalusian rule in 927 along with Melilla, and later Tangier, in 951. 
Chaos ensued with the fall of the Spanish Umayyads in 1031. Following this, Ceuta and Muslim Iberia were controlled by successive North African dynasties. Starting in 1084, the Almoravid Berbers ruled the region until 1147, when the Almohads conquered the land. Apart from Ibn Hud's rebellion in 1232, they ruled until the Tunisian Hafsids established control. The Hafsids' influence in the west rapidly waned, and Ceuta's inhabitants eventually expelled them in 1249. After this, a period of political instability persisted, under competing interests from the kingdoms of Fez and Granada as well as autonomous rule under the native Banu al-Azafi. The kingdom of Fez finally conquered the region in 1387, with assistance from Aragon. On the morning of 21 August 1415, King John I of Portugal led his sons and their assembled forces in a surprise assault that would come to be known as the Conquest of Ceuta. The battle was almost anti-climactic, because the 45,000 men who traveled on 200 Portuguese ships caught the defenders of Ceuta off guard and suffered only eight casualties. By nightfall the town was captured. On the morning of 22 August, Ceuta was in Portuguese hands. Álvaro Vaz de Almada, 1st Count of Avranches was asked to hoist what was to become the flag of Ceuta, which is identical to the flag of Lisbon, but in which the coat of arms of the Kingdom of Portugal was added to the center; the original Portuguese flag and coat of arms of Ceuta remained unchanged, and the modern-day Ceuta flag features the configuration of the Portuguese shield. John's son Henry the Navigator distinguished himself in the battle, being wounded during the conquest. The looting of the city proved to be less profitable than expected for John I; he decided to keep the city to pursue further enterprises in the area. From 1415 to 1437, Pedro de Meneses served as the first governor of Ceuta. 
The Marinid sultan started the 1418 siege but was defeated by the first governor of Ceuta before reinforcements arrived in the form of John, Constable of Portugal and his brother Henry the Navigator, who were sent with troops to defend Ceuta. Under King John I's son, Duarte, the colony at Ceuta rapidly became a drain on the Portuguese treasury. Trans-Saharan trade journeyed instead to Tangier. It was soon realized that without the city of Tangier, possession of Ceuta was worthless. In 1437, Duarte's brothers Henry the Navigator and Fernando, the Saint Prince persuaded him to launch an attack on the Marinid sultanate. The resulting Battle of Tangier (1437), led by Henry, was a debacle. In the resulting treaty, Henry promised to deliver Ceuta back to the Marinids in return for allowing the Portuguese army to depart unmolested, a promise he reneged on. Possession of Ceuta would indirectly lead to further Portuguese expansion. The main area of Portuguese expansion, at this time, was the coast of the Maghreb, where there was grain, cattle, sugar, and textiles, as well as fish, hides, wax, and honey. Ceuta had to endure alone for 43 years, until the position of the city was consolidated with the taking of Ksar es-Seghir (1458), Arzila and Tangier (1471) by the Portuguese. The city was recognized as a Portuguese possession by the Treaty of Alcáçovas (1479) and by the Treaty of Tordesilhas (1494). In the 1540s the Portuguese began building the Royal Walls of Ceuta as they are today, including bastions, a navigable moat and a drawbridge. Some of these bastions are still standing, like the bastions of Coraza Alta, Bandera and Mallorquines. Luís de Camões lived in Ceuta between 1549 and 1551, losing his right eye in battle, which influenced his work of poetry "Os Lusíadas". 
In 1578 King Sebastian of Portugal died at the Battle of Alcácer Quibir (known as the Battle of Three Kings) in what is today northern Morocco, without descendants, triggering the 1580 Portuguese succession crisis. His granduncle, the elderly Cardinal Henry, succeeded him as King, but Henry also had no descendants, having taken holy orders. When the cardinal-king died two years after Sebastian's disappearance, three grandchildren of King Manuel I of Portugal claimed the throne: Infanta Catarina, Duchess of Braganza; António, Prior of Crato; and Philip II of Spain (uncle of the former King Sebastian of Portugal), who would go on to be crowned King Philip I of Portugal in 1581, uniting the two crowns and overseas empires in what became known as the Iberian Union, under which the two kingdoms continued without being merged. During the Iberian Union of 1580 to 1640, Ceuta attracted many residents of Spanish origin. Ceuta became the only city of the Portuguese Empire that sided with Spain when Portugal regained its independence in the Portuguese Restoration War of 1640. On 1 January 1668, King Afonso VI of Portugal recognized the formal allegiance of Ceuta to Spain and formally ceded Ceuta to King Carlos II of Spain by the Treaty of Lisbon. The city was attacked by Moroccan forces under Moulay Ismail during the Siege of Ceuta (1694–1727). During the longest siege in history, the city underwent changes leading to the loss of its Portuguese character. While most of the military operations took place around the Royal Walls of Ceuta, there were also small-scale penetrations by Spanish forces at various points on the Moroccan coast, and seizure of shipping in the Strait of Gibraltar. Disagreements regarding the border of Ceuta resulted in the Hispano-Moroccan War (1859–60), which ended at the Battle of Tetuán. 
In July 1936, General Francisco Franco took command of the Spanish Army of Africa and rebelled against the Spanish republican government; his military uprising led to the Spanish Civil War of 1936–1939. Franco transported troops to mainland Spain in an airlift using transport aircraft supplied by Germany and Italy. Ceuta became one of the first casualties of the uprising: General Franco's rebel nationalist forces seized Ceuta, while at the same time the city came under fire from the air and sea forces of the official republican government. The Llano Amarillo monument was erected to honor Francisco Franco; it was inaugurated on 13 July 1940. The tall obelisk has since been abandoned, but the shield symbols of the Falange and Imperial Eagle remain visible. When Spain recognized the independence of Spanish Morocco in 1956, Ceuta and the other Spanish territories in North Africa remained under Spanish rule. Spain considered them integral parts of the Spanish state, but Morocco has disputed this point. Culturally, modern Ceuta is part of the Spanish region of Andalusia. It was attached to the province of Cádiz until 1925, the Spanish coast being only 20 km (12.5 miles) away. It is a cosmopolitan city, with a large ethnic Arab Muslim minority as well as Sephardic Jewish and Hindu minorities. On 5 November 2007, King Juan Carlos I visited the city, sparking great enthusiasm from the local population and protests from the Moroccan government. It was the first time a Spanish head of state had visited Ceuta in 80 years. Since 2010, Ceuta (and Melilla) have declared the Muslim holiday of Eid al-Adha, or Feast of the Sacrifice, an official public holiday. It is the first time a non-Christian religious festival has been officially celebrated in Spain since the Reconquista. Ceuta is separated from the province of Cádiz on the Spanish mainland by the Strait of Gibraltar and shares a land border with M'diq-Fnideq Prefecture in the Kingdom of Morocco. 
Ceuta is dominated by Monte Anyera, a hill along its western frontier with Morocco. The mountain is guarded by a military fort. Monte Hacho on the Peninsula of Almina overlooking the port is one of the possible locations for the southern pillar of the Pillars of Hercules of Greek legend (the other possibility being Jebel Musa). Ceuta has a maritime-influenced subtropical/Mediterranean climate, similar to nearby Spanish and Moroccan cities such as Tarifa, Algeciras or Tangiers. The average diurnal temperature variation is relatively low, though the Ceuta weather station has only been in operation since 2003. Ceuta has relatively mild winters for the latitude, while summers are warm yet milder than in the interior of Southern Spain, due to the moderating effect of the Straits of Gibraltar. Summers are very dry, but enough rain falls over the rest of the year that the climate could be considered humid were the summers not so arid. Since 1995, Ceuta has been, along with Melilla, one of the two autonomous cities of Spain. Ceuta is known officially in Spanish as "Ciudad Autónoma de Ceuta" (English: "Autonomous City of Ceuta"), with a rank between a standard Spanish city and an autonomous community. Ceuta is part of the territory of the European Union. The city was a free port before Spain joined the European Union in 1986. Now it has a low-tax system within the Economic and Monetary Union of the European Union. As of 2018, its population was 85,144. Ceuta has held elections every four years since 1979, for its 25-seat assembly. The leader of its government was the Mayor until the Autonomy Statute had the title changed to the Mayor-President. At the most recent election, the People's Party (PP) won 18 seats, keeping Juan Jesús Vivas as Mayor-President, a post he has held since 2001. The remaining seats are held by the regionalist Caballas Coalition (4) and the Socialist Workers' Party (PSOE, 3). 
Due to its small population, Ceuta elects only one member of the Congress of Deputies, the lower house of the Spanish legislature. As of the most recent election, this post is held by María Teresa López of Vox. Ceuta is subdivided into 63 "barriadas" ("neighborhoods"), such as Barriada de Berizu, Barriada de P. Alfonso, Barriada del Sarchal, and El Hacho. The government of Morocco has repeatedly called for Spain to transfer the sovereignty of Ceuta and Melilla, along with uninhabited islets such as the islands of Alhucemas, Velez and the Perejil island, drawing comparisons with Spain's territorial claim to Gibraltar. In both cases, the national governments and local populations of the disputed territories reject these claims by a large majority. The Spanish position states that both Ceuta and Melilla are integral parts of Spain, and have been since the 16th century, centuries prior to Morocco's independence from France in 1956, whereas Gibraltar, being a British Overseas Territory, is not and never has been part of the United Kingdom. Morocco has claimed the territories are colonies. One of the chief arguments used by Morocco to reclaim Ceuta comes from geography, as this enclave, which is surrounded by Morocco and the Mediterranean Sea, has no territorial continuity with the rest of Spanish territory. This argument was originally developed by one of the founders of the Moroccan Istiqlal Party, Alal-El Faasi, who openly advocated the Moroccan conquest of Ceuta and other territories under Spanish rule. The official currency of Ceuta is the euro. It is part of a special low-tax zone in Spain. Ceuta is one of two Spanish port cities on the northern shore of Africa, along with Melilla. They are historically military strongholds, free ports, oil ports, and also fishing ports. Today the economy of the city depends heavily on its port (now in expansion) and its industrial and retail centers. Ceuta Heliport is now used to connect the city to mainland Spain by air. 
Lidl, Decathlon and El Corte Inglés (hardware) have branches in Ceuta. There is also a casino. Border trade between Ceuta and Morocco is active because it takes advantage of Ceuta's tax-free status. Thousands of Moroccan women are involved in the porter trade daily. The Moroccan dirham is used in this trade, even though prices are marked in euros. The city's Port of Ceuta receives high numbers of ferries each day from Algeciras in Andalusia in the south of Spain, along with Melilla and the Canary Islands. The closest airport is Sania Ramel Airport in Morocco. A single road border checkpoint to the south of Ceuta near Fnideq allows cars and pedestrians to travel between Morocco and Ceuta. An additional border crossing for pedestrians also exists between Benzú and Belyounech on the northern coast. The rest of the border is closed and inaccessible. There is a bus service throughout the city; while it does not pass into neighboring Morocco, it serves both frontier crossings. Due to its location, Ceuta is home to a mixed ethnic and religious population. The two main religious groups are Christians and Muslims. As of 2006 approximately 50% of the population was Christian and approximately 48% Muslim. However, by 2012, the portion of Ceuta's population that identified as Roman Catholic was 68.0%, while the portion that identified as Muslim was 28.3%. Spanish is the primary and official language of the enclave. Moroccan Arabic is widely spoken, as are Berber and French. Christianity has been present in Ceuta continuously since late antiquity, as evidenced by the ruins of a basilica in downtown Ceuta and accounts of the martyrdom of St. Daniel Fasanella and his Franciscans in 1227 during the Almohad Caliphate. The town's Grand Mosque had been built over a Byzantine-era church. In 1415, the year of the city's conquest, the Portuguese converted the Grand Mosque into Ceuta Cathedral. 
The present form of the cathedral dates to refurbishments undertaken in the late 17th century, combining baroque and neoclassical elements. It was dedicated to St Mary of the Assumption in 1726. The Roman Catholic Diocese of Ceuta was established in 1417. It incorporated the suppressed Diocese of Tanger in 1570. The Diocese of Ceuta was a suffragan of Lisbon until 1675, when it became a suffragan of Seville. In 1851, Ceuta's administration was notionally merged into the Diocese of Cadiz and Ceuta as part of a concordat between Spain and the Holy See; the union was not actually accomplished, however, until 1879. Small Jewish and Hindu minorities are also present in the city. The University of Granada offers undergraduate programs at its campus in Ceuta. Like all areas of Spain, Ceuta is also served by the National University of Distance Education (UNED). Primary and secondary education is available only in Spanish; however, a growing number of schools are joining the Bilingual Education Program. Like Melilla, Ceuta attracts African migrants who try to use it as an entry point to Europe. As a result, the enclave is surrounded by high double fences, and hundreds of migrants congregate near the fences waiting for a chance to cross them. The fences are regularly stormed by migrants trying to claim asylum once they enter Ceuta. Ceuta is twinned with several cities abroad.
https://en.wikipedia.org/wiki?curid=6443
Carcinogen A carcinogen is any substance, radionuclide, or radiation that promotes carcinogenesis, the formation of cancer. This may be due to the ability to damage the genome or to the disruption of cellular metabolic processes. Several radioactive substances are considered carcinogens, but their carcinogenic activity is attributed to the radiation, for example gamma rays and alpha particles, which they emit. Common examples of non-radioactive carcinogens are inhaled asbestos, certain dioxins, and tobacco smoke. Although the public generally associates carcinogenicity with synthetic chemicals, it is equally likely to arise in both natural and synthetic substances. Carcinogens are not necessarily immediately toxic; thus, their effect can be insidious. Cancer is any disease in which normal cells are damaged and do not undergo programmed cell death as fast as they divide via mitosis. Carcinogens may increase the risk of cancer by altering cellular metabolism or damaging DNA directly in cells, which interferes with biological processes and induces uncontrolled, malignant division, ultimately leading to the formation of tumors. Usually, severe DNA damage leads to programmed cell death, but if the programmed cell death pathway is damaged, then the cell cannot prevent itself from becoming a cancer cell. There are many natural carcinogens. Aflatoxin B1, which is produced by the fungus "Aspergillus flavus" growing on stored grains, nuts and peanut butter, is an example of a potent, naturally occurring microbial carcinogen. Certain viruses such as hepatitis B and human papilloma virus have been found to cause cancer in humans. The first virus shown to cause cancer in animals was the Rous sarcoma virus, discovered in 1910 by Peyton Rous. Other infectious organisms which cause cancer in humans include some bacteria (e.g. "Helicobacter pylori") and helminths (e.g. "Opisthorchis viverrini" and "Clonorchis sinensis"). 
Dioxins and dioxin-like compounds, benzene, kepone, EDB, and asbestos have all been classified as carcinogenic. As far back as the 1930s, industrial smoke and tobacco smoke were identified as sources of dozens of carcinogens, including benzo["a"]pyrene, tobacco-specific nitrosamines such as nitrosonornicotine, and reactive aldehydes such as formaldehyde, which is also a hazard in embalming and making plastics. Vinyl chloride, from which PVC is manufactured, is a carcinogen and thus a hazard in PVC production. Co-carcinogens are chemicals that do not necessarily cause cancer on their own, but promote the activity of other carcinogens in causing cancer. After a carcinogen enters the body, the body makes an attempt to eliminate it through a process called biotransformation. The purpose of these reactions is to make the carcinogen more water-soluble so that it can be removed from the body. However, in some cases, these reactions can also convert a less toxic carcinogen into a more toxic one. DNA is nucleophilic; therefore, soluble carbon electrophiles are carcinogenic, because DNA attacks them. For example, some alkenes are toxicated by human enzymes to produce an electrophilic epoxide. DNA attacks the epoxide and is bound permanently to it. This is the mechanism behind the carcinogenicity of benzo["a"]pyrene in tobacco smoke, other aromatics, aflatoxin and mustard gas. CERCLA identifies all radionuclides as carcinogens, although the nature of the emitted radiation (alpha, beta, gamma, or neutron), its radioactive strength, its consequent capacity to cause ionization in tissues, and the magnitude of radiation exposure determine the potential hazard. Carcinogenicity of radiation depends on the type of radiation, type of exposure, and penetration. For example, alpha radiation has low penetration and is not a hazard outside the body, but alpha emitters are carcinogenic when inhaled or ingested. 
For example, Thorotrast, an (incidentally radioactive) suspension previously used as a contrast medium in x-ray diagnostics, is known to be a potent human carcinogen because of its retention within various organs and persistent emission of alpha particles. Low-level ionizing radiation may induce irreparable DNA damage (leading to the replicational and transcriptional errors needed for neoplasia, or triggering viral interactions), leading to premature aging and cancer. Not all types of electromagnetic radiation are carcinogenic. Low-energy waves on the electromagnetic spectrum including radio waves, microwaves, infrared radiation and visible light are thought not to be, because they have insufficient energy to break chemical bonds. Evidence for carcinogenic effects of non-ionizing radiation is generally inconclusive, though there are some documented cases of radar technicians with prolonged high exposure experiencing significantly higher cancer incidence. Higher-energy radiation, including ultraviolet radiation (present in sunlight), x-rays, and gamma radiation, generally "is" carcinogenic, if received in sufficient doses. For most people, ultraviolet radiation from sunlight is the most common cause of skin cancer. In Australia, where people with pale skin are often exposed to strong sunlight, melanoma is the most common cancer diagnosed in people aged 15–44 years. Substances or foods irradiated with electrons or electromagnetic radiation (such as microwave, X-ray or gamma) are not carcinogenic. In contrast, non-electromagnetic neutron radiation produced inside nuclear reactors can produce secondary radiation through nuclear transmutation. Chemicals used in processed and cured meat such as some brands of bacon, sausages and ham may produce carcinogens. For example, nitrites used as food preservatives in cured meat such as bacon have also been noted as being carcinogenic, with demographic links, but not causation, to colon cancer. 
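The claim above that radio waves, microwaves, infrared and visible light lack the energy to break chemical bonds can be checked with a back-of-the-envelope photon-energy calculation. This is a sketch: the frequencies and the ~3.6 eV carbon-carbon bond energy are representative textbook values chosen for illustration, not figures from this article.

```python
# Photon energy E = h * f, compared to a typical chemical bond energy.
# Only photons at or above the bond energy can break bonds directly,
# which is why low-frequency electromagnetic radiation is non-ionizing.

PLANCK_EV = 4.135667696e-15  # Planck constant in eV·s

examples = {
    "FM radio (100 MHz)": 100e6,
    "microwave oven (2.45 GHz)": 2.45e9,
    "visible light (~540 THz)": 540e12,
    "UV-C (~1.1 PHz)": 1.1e15,
}

BOND_ENERGY_EV = 3.6  # approximate C–C single-bond energy

for name, freq_hz in examples.items():
    energy_ev = PLANCK_EV * freq_hz
    print(f"{name}: {energy_ev:.2e} eV, "
          f"can break C–C bond: {energy_ev >= BOND_ENERGY_EV}")
```

Running this shows microwave photons carry on the order of 10 micro-electronvolts, many orders of magnitude below bond energies, while only the ultraviolet example exceeds the threshold, matching the article's division between non-ionizing and carcinogenic radiation.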
Cooking food at high temperatures, for example grilling or barbecuing meats, may also lead to the formation of minute quantities of many potent carcinogens that are comparable to those found in cigarette smoke (i.e., benzo["a"]pyrene). Charring of food resembles coking and tobacco pyrolysis, and produces similar carcinogens. There are several carcinogenic pyrolysis products, such as polynuclear aromatic hydrocarbons, which are converted by human enzymes into epoxides, which attach permanently to DNA. Pre-cooking meats in a microwave oven for 2–3 minutes before grilling shortens the time on the hot pan and removes heterocyclic amine (HCA) precursors, which can help minimize the formation of these carcinogens. Reports from the Food Standards Agency have found that the known animal carcinogen acrylamide is generated in fried or overheated carbohydrate foods (such as french fries and potato chips). Studies are underway at the FDA and European regulatory agencies to assess its potential risk to humans. There is a strong association of smoking with lung cancer; the lifetime risk of developing lung cancer increases significantly in smokers. A large number of known carcinogens are found in cigarette smoke. Potent carcinogens found in cigarette smoke include polycyclic aromatic hydrocarbons (PAHs, such as benzo[a]pyrene), benzene, and nitrosamines. Carcinogens can be classified as genotoxic or nongenotoxic. Genotoxins cause irreversible genetic damage or mutations by binding to DNA. Genotoxins include chemical agents like N-nitroso-N-methylurea (NMU) or non-chemical agents such as ultraviolet light and ionizing radiation. Certain viruses can also act as carcinogens by interacting with DNA. Nongenotoxins do not directly affect DNA but act in other ways to promote growth. These include hormones and some organic compounds. 
The International Agency for Research on Cancer (IARC) is an intergovernmental agency established in 1965, which forms part of the World Health Organization of the United Nations. It is based in Lyon, France. Since 1971 it has published a series of "Monographs on the Evaluation of Carcinogenic Risks to Humans" that have been highly influential in the classification of possible carcinogens. The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is a United Nations initiative to attempt to harmonize the different systems of assessing chemical risk which currently exist (as of March 2009) around the world. It classifies carcinogens into two categories, of which the first may be divided again into subcategories if so desired by the competent regulatory authority. The National Toxicology Program of the U.S. Department of Health and Human Services is mandated to produce a biennial "Report on Carcinogens". As of June 2011, the latest edition was the 12th report (2011). It classifies carcinogens into two groups. The American Conference of Governmental Industrial Hygienists (ACGIH) is a private organization best known for its publication of threshold limit values (TLVs) for occupational exposure and monographs on workplace chemical hazards. It assesses carcinogenicity as part of a wider assessment of the occupational hazards of chemicals. The European Union classification of carcinogens is contained in the Dangerous Substances Directive and the Dangerous Preparations Directive. It consists of three categories. This assessment scheme is being phased out in favor of the GHS scheme (see above), to which it is very close in category definitions. Under a previous name, the NOHSC, in 1999 Safe Work Australia published the Approved Criteria for Classifying Hazardous Substances [NOHSC:1008(1999)]. Section 4.76 of this document outlines the criteria for classifying carcinogens as approved by the Australian government. 
This classification consists of three categories. Occupational carcinogens are agents that pose a risk of cancer in several specific work locations. In this section, the carcinogens implicated as the main causative agents of the four most common cancers worldwide are briefly described. These four cancers are lung, breast, colon, and stomach cancers. Together they account for about 41% of worldwide cancer incidence and 42% of cancer deaths (for more detailed information on the carcinogens implicated in these and other cancers, see references). Lung cancer (pulmonary carcinoma) is the most common cancer in the world, both in terms of cases (1.6 million cases; 12.7% of total cancer cases) and deaths (1.4 million deaths; 18.2% of total cancer deaths). Lung cancer is largely caused by tobacco smoke. Risk estimates for lung cancer in the United States indicate that tobacco smoke is responsible for 90% of lung cancers. Other factors are implicated in lung cancer, and these factors can interact synergistically with smoking so that total attributable risk adds up to more than 100%. These factors include occupational exposure to carcinogens (about 9-15%), radon (10%) and outdoor air pollution (1-2%). Tobacco smoke is a complex mixture of more than 5,300 identified chemicals. The most important carcinogens in tobacco smoke have been determined by a "Margin of Exposure" approach. Using this approach, the most important tumorigenic compounds in tobacco smoke were, in order of importance, acrolein, formaldehyde, acrylonitrile, 1,3-butadiene, cadmium, acetaldehyde, ethylene oxide, and isoprene. Most of these compounds cause DNA damage by forming DNA adducts or by inducing other alterations in DNA. DNA damage is subject to error-prone DNA repair or can cause replication errors. Such errors in repair or replication can result in mutations in tumor suppressor genes or oncogenes leading to cancer. 
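The point above that attributable risks can sum to more than 100% follows from joint causation: a cancer caused by both smoking and radon counts toward both factors' attributable fractions. The toy cohort below is a sketch with made-up counts that only loosely echo the percentages quoted in the text; it is not epidemiological data.

```python
# Toy cohort of 100 lung-cancer cases. Each case records the set of
# factors without which it would not have occurred. Jointly caused
# cases are counted toward every contributing factor, so the
# attributable fractions overlap rather than partition the cases.
cases = (
    [{"smoking"}] * 78
    + [{"smoking", "radon"}] * 8
    + [{"smoking", "occupational"}] * 4
    + [{"radon"}] * 2
    + [{"occupational"}] * 5
    + [{"air pollution"}] * 3
)

def attributable_fraction(factor):
    """Fraction of cases to which this factor contributed."""
    return sum(factor in case for case in cases) / len(cases)

fractions = {f: attributable_fraction(f)
             for f in ("smoking", "radon", "occupational", "air pollution")}
total = sum(fractions.values())
print(fractions)
print(f"sum of attributable fractions = {total:.0%}")  # exceeds 100%
```

Here smoking comes out at 90% on its own, yet the four fractions sum to 112%, illustrating how synergy between factors pushes the total past 100% without any arithmetic error.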
Breast cancer is the second most common cancer (1.4 million cases, 10.9%), but ranks fifth as a cause of death (458,000 deaths, 6.1%). Increased risk of breast cancer is associated with persistently elevated blood levels of estrogen. Estrogen appears to contribute to breast carcinogenesis by three processes: (1) the metabolism of estrogen to genotoxic, mutagenic carcinogens, (2) the stimulation of tissue growth, and (3) the repression of phase II detoxification enzymes that metabolize ROS, leading to increased oxidative DNA damage. The major estrogen in humans, estradiol, can be metabolized to quinone derivatives that form adducts with DNA. These derivatives can cause depurination, the removal of bases from the phosphodiester backbone of DNA, followed by inaccurate repair or replication of the apurinic site leading to mutation and eventually cancer. This genotoxic mechanism may interact in synergy with estrogen receptor-mediated, persistent cell proliferation to ultimately cause breast cancer. Genetic background, dietary practices and environmental factors also likely contribute to the incidence of DNA damage and breast cancer risk. Colorectal cancer is the third most common cancer (1.2 million cases, 9.4%; 608,000 deaths, 8.0%). Tobacco smoke may be responsible for up to 20% of colorectal cancers in the United States. In addition, substantial evidence implicates bile acids as an important factor in colon cancer. Twelve studies (summarized in Bernstein et al.) indicate that the bile acids deoxycholic acid (DCA) or lithocholic acid (LCA) induce production of DNA-damaging reactive oxygen species or reactive nitrogen species in human or animal colon cells. Furthermore, 14 studies showed that DCA and LCA induce DNA damage in colon cells. Also, 27 studies reported that bile acids cause programmed cell death (apoptosis). Increased apoptosis can result in selective survival of cells that are resistant to induction of apoptosis. 
Colon cells with reduced ability to undergo apoptosis in response to DNA damage would tend to accumulate mutations, and such cells may give rise to colon cancer. Epidemiologic studies have found that fecal bile acid concentrations are increased in populations with a high incidence of colon cancer. Dietary increases in total fat or saturated fat result in elevated DCA and LCA in feces and elevated exposure of the colon epithelium to these bile acids. When the bile acid DCA was added to the standard diet of wild-type mice, invasive colon cancer was induced in 56% of the mice after 8 to 10 months. Overall, the available evidence indicates that DCA and LCA are centrally important DNA-damaging carcinogens in colon cancer. Stomach cancer is the fourth most common cancer (990,000 cases, 7.8%; 738,000 deaths, 9.7%). "Helicobacter pylori" infection is the main causative factor in stomach cancer. Chronic gastritis (inflammation) caused by "H. pylori" is often long-standing if not treated. Infection of gastric epithelial cells with "H. pylori" results in increased production of reactive oxygen species (ROS). ROS cause oxidative DNA damage including the major base alteration 8-hydroxydeoxyguanosine (8-OHdG). 8-OHdG resulting from ROS is increased in chronic gastritis. The altered DNA base can cause errors during DNA replication that have mutagenic and carcinogenic potential. Thus "H. pylori"-induced ROS appear to be the major carcinogens in stomach cancer because they cause oxidative DNA damage leading to carcinogenic mutations. Diet is thought to be a contributing factor in stomach cancer: in Japan, where very salty pickled foods are popular, the incidence of stomach cancer is high. Preserved meat such as bacon, sausages, and ham increases the risk, while a diet high in fresh fruit and vegetables may reduce it. The risk also increases with age.
https://en.wikipedia.org/wiki?curid=6445
Camouflage Camouflage is the use of any combination of materials, coloration, or illumination for concealment, either by making animals or objects hard to see (crypsis), or by disguising them as something else (mimesis). Examples include the leopard's spotted coat, the battledress of a modern soldier, and the leaf-mimic katydid's wings. A third approach, motion dazzle, confuses the observer with a conspicuous pattern, making the object visible but momentarily harder to locate. The majority of camouflage methods aim for crypsis, often through a general resemblance to the background, high contrast disruptive coloration, eliminating shadow, and countershading. In the open ocean, where there is no background, the principal methods of camouflage are transparency, silvering, and countershading, while the ability to produce light is among other things used for counter-illumination on the undersides of cephalopods such as squid. Some animals, such as chameleons and octopuses, are capable of actively changing their skin pattern and colours, whether for camouflage or for signalling. It is possible that some plants use camouflage to evade being eaten by herbivores. Military camouflage was spurred by the increasing range and accuracy of firearms in the 19th century. In particular the replacement of the inaccurate musket with the rifle made personal concealment in battle a survival skill. In the 20th century, military camouflage developed rapidly, especially during the First World War. On land, artists such as André Mare designed camouflage schemes and observation posts disguised as trees. At sea, merchant ships and troop carriers were painted in dazzle patterns that were highly visible, but designed to confuse enemy submarines as to the target's speed, range, and heading. During and after the Second World War, a variety of camouflage schemes were used for aircraft and for ground vehicles in different theatres of war. 
The use of radar since the mid-20th century has largely made camouflage for fixed-wing military aircraft obsolete. Non-military use of camouflage includes making cell telephone towers less obtrusive and helping hunters to approach wary game animals. Patterns derived from military camouflage are frequently used in fashion clothing, exploiting their strong designs and sometimes their symbolism. Camouflage themes recur in modern art, and both figuratively and literally in science fiction and works of literature. In ancient Greece, Aristotle (384–322 BC) commented in his "Historia animalium" on the colour-changing abilities, both for camouflage and for signalling, of cephalopods including the octopus. Camouflage has been a topic of interest and research in zoology for well over a century. According to Charles Darwin's 1859 theory of natural selection, features such as camouflage evolved by providing individual animals with a reproductive advantage, enabling them to leave more offspring, on average, than other members of the same species; Darwin made this argument in "On the Origin of Species". The English zoologist Edward Bagnall Poulton studied animal coloration, especially camouflage. In his 1890 book "The Colours of Animals", he classified different types such as "special protective resemblance" (where an animal looks like another object), or "general aggressive resemblance" (where a predator blends in with the background, enabling it to approach prey). His experiments showed that swallow-tailed moth pupae were camouflaged to match the backgrounds on which they were reared as larvae. Poulton's "general protective resemblance" was at that time considered to be the main method of camouflage, as when Frank Evers Beddard wrote in 1892 that "tree-frequenting animals are often green in colour. Among vertebrates numerous species of parrots, iguanas, tree-frogs, and the green tree-snake are examples". 
Beddard did however briefly mention other methods, including the "alluring coloration" of the flower mantis and the possibility of a different mechanism in the orange tip butterfly. He wrote that "the scattered green spots upon the under surface of the wings might have been intended for a rough sketch of the small flowerets of the plant [an umbellifer], so close is their mutual resemblance." He also explained the coloration of sea fish such as the mackerel: "Among pelagic fish it is common to find the upper surface dark-coloured and the lower surface white, so that the animal is inconspicuous when seen either from above or below." The artist Abbott Handerson Thayer formulated what is sometimes called Thayer's Law, the principle of countershading. However, he overstated the case in the 1909 book "Concealing-Coloration in the Animal Kingdom", arguing that "All patterns and colors whatsoever of all animals that ever preyed or are preyed on are under certain normal circumstances obliterative" (that is, cryptic camouflage), and that "Not one 'mimicry' mark, not one 'warning color'... nor any 'sexually selected' color, exists anywhere in the world where there is not every reason to believe it the very best conceivable device for the concealment of its wearer", and using paintings such as "Peacock in the Woods" (1907) to reinforce his argument. Thayer was roundly mocked for these views by critics including Teddy Roosevelt. The English zoologist Hugh Cott's 1940 book "Adaptive Coloration in Animals" corrected Thayer's errors, sometimes sharply: "Thus we find Thayer straining the theory to a fantastic extreme in an endeavour to make it cover almost every type of coloration in the animal kingdom." Cott built on Thayer's discoveries, developing a comprehensive view of camouflage based on "maximum disruptive contrast", countershading and hundreds of examples. 
The book explained how disruptive camouflage worked, using streaks of boldly contrasting colour, paradoxically making objects less visible by breaking up their outlines. While Cott was more systematic and balanced in his view than Thayer, and did include some experimental evidence on the effectiveness of camouflage, his 500-page textbook was, like Thayer's, mainly a natural history narrative which illustrated theories with examples. Experimental evidence that camouflage helps prey avoid being detected by predators was first provided in 2016, when ground-nesting birds (plovers and coursers) were shown to survive according to how well their egg contrast matched the local environment. Camouflage is a soft-tissue feature that is rarely preserved in the fossil record, but rare fossilised skin samples from the Cretaceous period show that some marine reptiles were countershaded. The skins, pigmented with dark-coloured eumelanin, reveal that both leatherback turtles and mosasaurs had dark backs and light bellies. There is fossil evidence of camouflaged insects going back over 100 million years, for example lacewing larvae that stuck debris all over their bodies much as their modern descendants do, hiding them from their prey. Dinosaurs appear to have been camouflaged, as a 120-million-year-old fossil of a "Psittacosaurus" has been preserved with countershading. Camouflage can be achieved by different methods, described below. Most of the methods contribute to crypsis, helping to hide against a background; but mimesis and motion dazzle protect without hiding. Methods may be applied on their own or in combination. Crypsis means making the animal or military equipment hard to see (or to detect in other ways, such as by sound or scent). Visual crypsis can be achieved in many different ways, such as by living underground or by being active only at night, as well as by a variety of methods of camouflage. 
Some animals' colours and patterns resemble a particular natural background. This is an important component of camouflage in all environments. For instance, tree-dwelling parakeets are mainly green; woodcocks of the forest floor are brown and speckled; reedbed bitterns are streaked brown and buff; in each case the animal's coloration matches the hues of its habitat. Similarly, desert animals are almost all desert coloured in tones of sand, buff, ochre, and brownish grey, whether they are mammals like the gerbil or fennec fox, birds such as the desert lark or sandgrouse, or reptiles like the skink or horned viper. Military uniforms, too, generally resemble their backgrounds; for example khaki uniforms are a muddy or dusty colour, originally chosen for service in South Asia. Many moths show industrial melanism, including the peppered moth which has coloration that blends in with tree bark. The coloration of these insects evolved between 1860 and 1940 to match the changing colour of the tree trunks on which they rest, from pale and mottled to almost black in polluted areas. This is taken by zoologists as evidence that camouflage is influenced by natural selection, as well as demonstrating that it changes where necessary to resemble the local background. Disruptive patterns use strongly contrasting, non-repeating markings such as spots or stripes to break up the outlines of an animal or military vehicle, or to conceal telltale features, especially by masking the eyes, as in the common frog. Disruptive patterns may use more than one method to defeat visual systems such as edge detection. Predators like the leopard use disruptive camouflage to help them approach prey, while potential prey like the Egyptian nightjar use it to avoid detection by predators. Disruptive patterning is common in military usage, both for uniforms and for military vehicles. 
Disruptive patterning, however, does not always achieve crypsis on its own, as an animal or a military target may be given away by factors like shape, shine, and shadow. The presence of bold skin markings does not in itself prove that an animal relies on camouflage, as that depends on its behaviour. For example, although giraffes have a high contrast pattern that could be disruptive coloration, the adults are very conspicuous when in the open. Some authors have argued that adult giraffes are cryptic, since when standing among trees and bushes they are hard to see at even a few metres distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves, even from lions, rather than on camouflage. A different explanation is implied by young giraffes being far more vulnerable to predation than adults: more than half of all giraffe calves die within a year, and giraffe mothers hide their calves, which spend much of the time lying down in cover while their mothers are away feeding. Since the presence of a mother nearby does not affect survival, it is argued that young giraffes must be very well camouflaged; this is supported by coat markings being strongly inherited. The possibility of camouflage in plants has been little studied until the late 20th century. Leaf variegation with white spots may serve as camouflage in forest understory plants, where there is a dappled background; leaf mottling is correlated with closed habitats. Disruptive camouflage would have a clear evolutionary advantage in plants: they would tend to escape from being eaten by herbivores. Another possibility is that some plants have leaves differently coloured on upper and lower surfaces or on parts such as veins and stalks to make green-camouflaged insects conspicuous, and thus benefit the plants by favouring the removal of herbivores by carnivores. These hypotheses are testable. 
Some animals, such as the horned lizards of North America, have evolved elaborate measures to eliminate shadow. Their bodies are flattened, with the sides thinning to an edge; the animals habitually press their bodies to the ground; and their sides are fringed with white scales which effectively hide and disrupt any remaining areas of shadow there may be under the edge of the body. The theory that the body shape of the horned lizards which live in open desert is adapted to minimise shadow is supported by the one species which lacks fringe scales, the roundtail horned lizard, which lives in rocky areas and resembles a rock. When this species is threatened, it makes itself look as much like a rock as possible by curving its back, emphasizing its three-dimensional shape. Some species of butterflies, such as the speckled wood, "Pararge aegeria", minimise their shadows when perched by closing the wings over their backs, aligning their bodies with the sun, and tilting to one side towards the sun, so that the shadow becomes a thin inconspicuous line rather than a broad patch. Similarly, some ground-nesting birds, including the European nightjar, select a resting position facing the sun. Eliminating shadow was identified as a principle of military camouflage during the Second World War. Many prey animals have conspicuous high-contrast markings which paradoxically attract the predator's gaze. These distractive markings serve as camouflage by distracting the predator's attention from recognising the prey as a whole, for example by keeping the predator from identifying the prey's outline. Experimentally, search times for blue tits increased when artificial prey had distractive markings. Some animals actively seek to hide by decorating themselves with materials such as twigs, sand, or pieces of shell from their environment, to break up their outlines, to conceal the features of their bodies, and to match their backgrounds. 
For example, a caddisfly larva builds a decorated case and lives almost entirely inside it; a decorator crab covers its back with seaweed, sponges, and stones. The nymph of the predatory masked bug uses its hind legs and a 'tarsal fan' to decorate its body with sand or dust. There are two layers of bristles (trichomes) over the body. On these, the nymph spreads an inner layer of fine particles and an outer layer of coarser particles. The camouflage may conceal the bug from both predators and prey. Similar principles can be applied for military purposes, for instance when a sniper wears a ghillie suit designed to be further camouflaged by decoration with materials such as tufts of grass from the sniper's immediate environment. Such suits were used as early as 1916, the British army having adopted "coats of motley hue and stripes of paint" for snipers. Cott takes the example of the larva of the blotched emerald moth, which fixes a screen of fragments of leaves to its specially hooked bristles, to argue that military camouflage uses the same method, pointing out that the "device is ... essentially the same as one widely practised during the Great War for the concealment, not of caterpillars, but of caterpillar-tractors, [gun] battery positions, observation posts and so forth." Movement catches the eye of prey animals on the lookout for predators, and of predators hunting for prey. Most methods of crypsis therefore also require suitable cryptic behaviour, such as lying down and keeping still to avoid being detected, or in the case of stalking predators such as the tiger, moving with extreme stealth, both slowly and quietly, watching its prey for any sign they are aware of its presence. 
As an example of the combination of behaviours and other methods of crypsis involved, young giraffes seek cover, lie down, and keep still, often for hours until their mothers return; their skin pattern blends with the pattern of the vegetation, while the chosen cover and lying position together hide the animals' shadows. The flat-tail horned lizard similarly relies on a combination of methods: it is adapted to lie flat in the open desert, relying on stillness, its cryptic coloration, and concealment of its shadow to avoid being noticed by predators. In the ocean, the leafy sea dragon sways mimetically, like the seaweeds amongst which it rests, as if rippled by wind or water currents. Swaying is seen also in some insects, like Macleay's spectre stick insect, "Extatosoma tiaratum". The behaviour may be motion crypsis, preventing detection, or motion masquerade, promoting misclassification (as something other than prey), or a combination of the two. Most forms of camouflage are ineffective when the camouflaged animal or object moves, because the motion is easily seen by the observing predator, prey or enemy. However, insects such as hoverflies and dragonflies use motion camouflage: the hoverflies to approach possible mates, and the dragonflies to approach rivals when defending territories. Motion camouflage is achieved by moving so as to stay on a straight line between the target and a fixed point in the landscape; the pursuer thus appears not to move, but only to loom larger in the target's field of vision. The same method can be used for military purposes, for example by missiles to minimise their risk of detection by an enemy. However, missile engineers, and animals such as bats, use the method mainly for its efficiency rather than camouflage. 
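The geometric constraint behind motion camouflage can be sketched in a few lines. This is a toy illustration with made-up coordinates, not a model of real insect or missile guidance: the pursuer keeps itself on the line joining a fixed background point to the target at every instant, so from the target's viewpoint it appears not to move, only to loom larger.

```python
# Motion-camouflage constraint: at each step the pursuer lies on the
# line between a fixed point F and the target's current position T,
# with its fractional distance s along that line growing toward 1.

def motion_camouflage_path(fixed, target_path, steps):
    """Return pursuer positions that stay on the F-T(t) line each step."""
    fx, fy = fixed
    path = []
    for i, (tx, ty) in enumerate(target_path):
        s = (i + 1) / steps          # fraction of the way from F to T
        px = fx + s * (tx - fx)
        py = fy + s * (ty - fy)
        path.append((px, py))
    return path

# Hypothetical numbers: target moves in a straight line; F = (0, 0).
target = [(10.0, 5.0 + t) for t in range(5)]
pursuer = motion_camouflage_path((0.0, 0.0), target, steps=5)
print(pursuer[-1])  # (10.0, 9.0) — at s = 1 the pursuer reaches the target
```

Because every pursuer position is collinear with F and the target, the target sees the pursuer against the same fixed background point throughout the approach.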
Animals such as chameleons, frogs, flatfish such as the peacock flounder, squid, and octopuses actively change their skin patterns and colours using special chromatophore cells to resemble their current background, or, as in most chameleons, for signalling. However, Smith's dwarf chameleon does use active colour change for camouflage. Each chromatophore contains pigment of only one colour. In fish and frogs, colour change is mediated by a type of chromatophore known as a melanophore, which contains dark pigment. A melanophore is star-shaped; it contains many small pigmented organelles which can be dispersed throughout the cell, or aggregated near its centre. When the pigmented organelles are dispersed, the cell makes a patch of the animal's skin appear dark; when they are aggregated, most of the cell, and the animal's skin, appears light. In frogs, the change is controlled relatively slowly, mainly by hormones. In fish, the change is controlled by the brain, which sends signals directly to the chromatophores, as well as producing hormones. The skins of cephalopods such as the octopus contain complex units, each consisting of a chromatophore with surrounding muscle and nerve cells. The cephalopod chromatophore has all its pigment grains in a small elastic sac, which can be stretched or allowed to relax under the control of the brain to vary its opacity. By controlling chromatophores of different colours, cephalopods can rapidly change their skin patterns and colours. On a longer timescale, animals like the Arctic hare, Arctic fox, stoat, and rock ptarmigan have snow camouflage, changing their coat colour (by moulting and growing new fur or feathers) from brown or grey in the summer to white in the winter; the Arctic fox is the only species in the dog family to do so. However, Arctic hares which live in the far north of Canada, where summer is very short, remain white year-round. 
The principle of varying coloration either rapidly or with the changing seasons has military applications. "Active camouflage" could in theory make use of both dynamic colour change and counterillumination. Simple methods such as changing uniforms and repainting vehicles for winter have been in use since World War II. In 2011, BAE Systems announced their Adaptiv infrared camouflage technology. It uses about 1,000 hexagonal panels to cover the sides of a tank. The Peltier plate panels are heated and cooled to match either the vehicle's surroundings (crypsis), or an object such as a car (mimesis), when viewed in infrared. Countershading uses graded colour to counteract the effect of self-shadowing, creating an illusion of flatness. Self-shadowing makes an animal appear darker below than on top, grading from light to dark; countershading 'paints in' tones which are darkest on top, lightest below, making the countershaded animal nearly invisible against a suitable background. Thayer observed that "Animals are painted by Nature, darkest on those parts which tend to be most lighted by the sky's light, and "vice versa"". Accordingly, the principle of countershading is sometimes called "Thayer's Law". Countershading is widely used by terrestrial animals, such as gazelles and grasshoppers; marine animals, such as sharks and dolphins; and birds, such as snipe and dunlin. Countershading is less often used for military camouflage, despite Second World War experiments that showed its effectiveness. English zoologist Hugh Cott encouraged the use of methods including countershading, but despite his authority on the subject, failed to persuade the British authorities. Soldiers often wrongly viewed camouflage netting as a kind of invisibility cloak, and they had to be taught to look at camouflage practically, from an enemy observer's viewpoint. 
At the same time in Australia, zoologist William John Dakin advised soldiers to copy animals' methods, using their instincts for wartime camouflage. The term countershading has a second meaning unrelated to "Thayer's Law". It is that the upper and undersides of animals such as sharks, and of some military aircraft, are different colours to match the different backgrounds when seen from above or from below. Here the camouflage consists of two surfaces, each with the simple function of providing concealment against a specific background, such as a bright water surface or the sky. The body of a shark or the fuselage of an aircraft is not gradated from light to dark to appear flat when seen from the side. The camouflage methods used are the matching of background colour and pattern, and disruption of outlines. Counter-illumination means producing light to match a background that is brighter than an animal's body or military vehicle; it is a form of active camouflage. It is notably used by some species of squid, such as the firefly squid and the midwater squid. The latter has light-producing organs (photophores) scattered all over its underside; these create a sparkling glow that prevents the animal from appearing as a dark shape when seen from below. Counterillumination camouflage is the likely function of the bioluminescence of many marine organisms, though light is also produced to attract or to detect prey and for signalling. Counterillumination has rarely been used for military purposes. "Diffused lighting camouflage" was trialled by Canada's National Research Council during the Second World War. It involved projecting light on to the sides of ships to match the faint glow of the night sky, requiring awkward external platforms to support the lamps. The Canadian concept was refined in the American Yehudi lights project, and trialled in aircraft including B-24 Liberators and naval Avengers. 
The planes were fitted with forward-pointing lamps automatically adjusted to match the brightness of the night sky. This enabled them to approach much closer to a target before being seen. Counterillumination was made obsolete by radar, and neither diffused lighting camouflage nor Yehudi lights entered active service. Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at depth; better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters. Some tissues such as muscles can be made transparent, provided either they are very thin or organised as regular layers or fibrils that are small compared to the wavelength of visible light. A familiar example is the transparency of the lens of the vertebrate eye, which is made of the protein crystallin, and the vertebrate cornea, which is made of the protein collagen. Other structures cannot be made transparent, notably the retinas or equivalent light-absorbing structures of eyes – they must absorb light to be able to function. 
The camera-type eye of vertebrates and cephalopods must be completely opaque. Finally, some structures are visible for a reason, such as to lure prey. For example, the nematocysts (stinging cells) of the transparent siphonophore "Agalma okenii" resemble small copepods. Examples of transparent marine animals include a wide variety of larvae, including radiata (coelenterates), siphonophores, salps (floating tunicates), gastropod molluscs, polychaete worms, many shrimplike crustaceans, and fish; whereas the adults of most of these are opaque and pigmented, resembling the seabed or shores where they live. Adult comb jellies and jellyfish obey the rule, often being mainly transparent. Cott suggests this follows the more general rule that animals resemble their background: in a transparent medium like seawater, that means being transparent. The small Amazon river fish "Microphilypnus amazonicus" and the shrimps it associates with, "Pseudopalaemon gouldingi", are so transparent as to be "almost invisible"; further, these species appear to select whether to be transparent or more conventionally mottled (disruptively patterned) according to the local background in the environment. Where transparency cannot be achieved, it can be imitated effectively by silvering to make an animal's body highly reflective. At medium depths at sea, light comes from above, so a mirror oriented vertically makes animals such as fish invisible from the side. Most fish in the upper ocean such as sardine and herring are camouflaged by silvering. The marine hatchetfish is extremely flattened laterally, leaving the body just millimetres thick, and the body is so silvery as to resemble aluminium foil. The mirrors consist of microscopic structures similar to those used to provide structural coloration: stacks of between 5 and 10 crystals of guanine spaced about ¼ of a wavelength apart to interfere constructively and achieve nearly 100 per cent reflection. 
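The quarter-wave spacing of the guanine stacks can be checked with a line of arithmetic. This minimal sketch follows the article's round figures and ignores the refractive index of the tissue, which a physically exact treatment would include.

```python
# Quarter-wave mirror rule: layers spaced at about a quarter of the
# reflected wavelength interfere constructively. Round figures only;
# the tissue's refractive index is deliberately ignored here.

def quarter_wave_spacing(wavelength_nm):
    """Layer spacing (nm) for constructive reflection of one wavelength."""
    return wavelength_nm / 4

# Deep water: only ~500 nm blue light remains, so one spacing suffices.
print(quarter_wave_spacing(500))  # 125.0 nm, as in the hatchetfish

# Shallower water: a mixture of wavelengths must be reflected,
# hence stacks with a range of spacings, as in the herring.
for wl in (450, 500, 550, 600):
    print(f"{wl} nm light -> {quarter_wave_spacing(wl):.1f} nm spacing")
```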
In the deep waters that the hatchetfish lives in, only blue light with a wavelength of 500 nanometres percolates down and needs to be reflected, so mirrors 125 nanometres apart provide good camouflage. In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multilayer mirrors made of protein rather than guanine. In mimesis (also called "masquerade"), the camouflaged object looks like something else which is of no special interest to the observer. Mimesis is common in prey animals, for example when a peppered moth caterpillar mimics a twig, or a grasshopper mimics a dry leaf. It is also found in nest structures; some eusocial wasps, such as "Leipomeles dorsata", build a nest envelope in patterns that mimic the leaves surrounding the nest. Mimesis is also employed by some predators and parasites to lure their prey. For example, a flower mantis mimics a particular kind of flower, such as an orchid. This tactic has occasionally been used in warfare, for example with heavily armed Q-ships disguised as merchant ships. The common cuckoo, a brood parasite, provides examples of mimesis both in the adult and in the egg. The female lays her eggs in nests of other, smaller species of bird, one per nest. The female mimics a sparrowhawk. The resemblance is sufficient to make small birds take action to avoid the apparent predator. The female cuckoo then has time to lay her egg in their nest without being seen to do so. 
The cuckoo's egg itself mimics the eggs of the host species, reducing its chance of being rejected. Most forms of camouflage are made ineffective by movement: a deer or grasshopper may be highly cryptic when motionless, but instantly seen when it moves. But one method, motion dazzle, relies on rapidly moving bold patterns of contrasting stripes. Motion dazzle may degrade predators' ability to estimate the prey's speed and direction accurately, giving the prey an improved chance of escape. Motion dazzle distorts speed perception and is most effective at high speeds; stripes can also distort perception of size (and so, perceived range to the target). As of 2011, motion dazzle had been proposed for military vehicles, but never applied. Since motion dazzle patterns would make animals more difficult to locate accurately when moving, but easier to see when stationary, there would be an evolutionary trade-off between motion dazzle and crypsis. An animal that is commonly thought to be dazzle-patterned is the zebra. The bold stripes of the zebra have been claimed to be disruptive camouflage, background-blending and countershading. After many years in which the purpose of the coloration was disputed, an experimental study by Tim Caro suggested in 2012 that the pattern reduces the attractiveness of stationary models to biting flies such as horseflies and tsetse flies. However, a simulation study by Martin How and Johannes Zanker in 2014 suggests that when moving, the stripes may confuse observers, such as mammalian predators and biting insects, by two visual illusions: the wagon-wheel effect, where the perceived motion is inverted, and the barberpole illusion, where the perceived motion is in the wrong direction. Ship camouflage was occasionally used in ancient times. Philostratus wrote in his "Imagines" that Mediterranean pirate ships could be painted blue-gray for concealment. 
Vegetius says that "Venetian blue" (sea green) was used in the Gallic Wars, when Julius Caesar sent his "speculatoria navigia" (reconnaissance boats) to gather intelligence along the coast of Britain; the ships were painted entirely in bluish-green wax, with sails, ropes and crew the same colour. There is little evidence of military use of camouflage on land before 1800, but two unusual ceramics show men in Peru's Mochica culture from before 500 AD, hunting birds with blowpipes which are fitted with a kind of shield near the mouth, perhaps to conceal the hunters' hands and faces. Another early source is a 15th-century French manuscript, "The Hunting Book of Gaston Phebus", showing a horse pulling a cart which contains a hunter armed with a crossbow under a cover of branches, perhaps serving as a hide for shooting game. Jamaican Maroons are said to have used plant materials as camouflage in the First Maroon War. The development of military camouflage was driven by the increasing range and accuracy of infantry firearms in the 19th century. In particular the replacement of the inaccurate musket with weapons such as the Baker rifle made personal concealment in battle essential. Two Napoleonic War skirmishing units of the British Army, the 95th Rifle Regiment and the 60th Rifle Regiment, were the first to adopt camouflage in the form of a rifle green jacket, while the Line regiments continued to wear scarlet tunics. A contemporary study in 1800 by the English artist and soldier Charles Hamilton Smith provided evidence that grey uniforms were less visible than green ones at a range of 150 yards. In the American Civil War, rifle units such as the 1st United States Sharp Shooters (in the Federal army) similarly wore green jackets while other units wore more conspicuous colours. The first British Army unit to adopt khaki uniforms was the Corps of Guides at Peshawar, when Sir Harry Lumsden and his second-in-command, William Hodson, introduced a "drab" uniform in 1848. 
Hodson wrote that it would be more appropriate for the hot climate, and help make his troops "invisible in a land of dust". Later they improvised by dyeing cloth locally. Other regiments in India soon adopted the khaki uniform, and by 1896 khaki drill uniform was used everywhere outside Europe; by the Second Boer War six years later it was used throughout the British Army. During the late 19th century camouflage was applied to British coastal fortifications. The fortifications around Plymouth, England were painted in the late 1880s in "irregular patches of red, brown, yellow and green." From 1891 onwards British coastal artillery was permitted to be painted in suitable colours "to harmonise with the surroundings" and by 1904 it was standard practice that artillery and mountings should be painted with "large irregular patches of different colours selected to suit local conditions." In the First World War, the French army formed a camouflage corps, led by Lucien-Victor Guirand de Scévola, employing artists known as "camoufleurs" to create schemes such as tree observation posts and covers for guns. Other armies soon followed them. The term "camouflage" probably comes from "camoufler", a Parisian slang term meaning "to disguise", and may have been influenced by "camouflet", a French term meaning "smoke blown in someone's face". The English zoologist John Graham Kerr, artist Solomon J. Solomon and the American artist Abbott Thayer led attempts to introduce scientific principles of countershading and disruptive patterning into military camouflage, with limited success. In early 1916 the Royal Naval Air Service began to create dummy air fields to draw the attention of enemy planes to empty land. They created decoy homes and lined fake runways with flares, which were meant to help protect real towns from night raids. This strategy was not common practice and did not succeed at first, but in 1918 it caught the Germans off guard multiple times. 
Ship camouflage was introduced in the early 20th century as the range of naval guns increased, with ships painted grey all over. In April 1917, when German U-boats were sinking many British ships with torpedoes, the marine artist Norman Wilkinson devised dazzle camouflage, which paradoxically made ships more visible but harder to target. In Wilkinson's own words, dazzle was designed "not for low visibility, but in such a way as to break up her form and thus confuse a submarine officer as to the course on which she was heading". In the Second World War, the zoologist Hugh Cott, a protégé of Kerr, worked to persuade the British army to use more effective camouflage methods, including countershading, but, like Kerr and Thayer in the First World War, with limited success. For example, he painted two rail-mounted coastal guns, one in conventional style, one countershaded. In aerial photographs, the countershaded gun was essentially invisible. The power of aerial observation and attack led every warring nation to camouflage targets of all types. The Soviet Union's Red Army created the comprehensive doctrine of "Maskirovka" for military deception, including the use of camouflage. For example, during the Battle of Kursk, General Katukov, the commander of the Soviet 1st Tank Army, remarked that the enemy "did not suspect that our well-camouflaged tanks were waiting for him. As we later learned from prisoners, we had managed to move our tanks forward unnoticed". The tanks were concealed in previously prepared defensive emplacements, with only their turrets above ground level. In the air, Second World War fighters were often painted in ground colours above and sky colours below, attempting two different camouflage schemes for observers above and below. Bombers and night fighters were often black, while maritime reconnaissance planes were usually white, to avoid appearing as dark shapes against the sky. 
For ships, dazzle camouflage was mainly replaced with plain grey in the Second World War, though experimentation with colour schemes continued. As in the First World War, artists were pressed into service; for example, the surrealist painter Roland Penrose became a lecturer at the newly founded Camouflage Development and Training Centre at Farnham Castle, writing the practical "Home Guard Manual of Camouflage". The film-maker Geoffrey Barkas ran the Middle East Command Camouflage Directorate during the 1941–1942 war in the Western Desert, including the successful deception of Operation Bertram. Hugh Cott was chief instructor; the artist camouflage officers, who called themselves "camoufleurs", included Steven Sykes and Tony Ayrton. In Australia, artists were also prominent in the Sydney Camouflage Group, formed under the chairmanship of Professor William John Dakin, a zoologist from Sydney University. Max Dupain, Sydney Ure Smith, and William Dobell were among the members of the group, which worked at Bankstown Airport, RAAF Base Richmond and Garden Island Dockyard. In the United States, artists like John Vassos took a certificate course in military and industrial camouflage at the American School of Design with Baron Nicholas Cerkasoff, and went on to create camouflage for the Air Force. Camouflage has been used to protect military equipment such as vehicles, guns, ships, aircraft and buildings as well as individual soldiers and their positions. Vehicle camouflage methods begin with paint, which offers at best only limited effectiveness. Other methods for stationary land vehicles include covering with improvised materials such as blankets and vegetation, and erecting nets, screens and soft covers which may suitably reflect, scatter or absorb near infrared and radar waves. Some military textiles and vehicle camouflage paints also reflect infrared to help provide concealment from night vision devices. 
After the Second World War, radar made camouflage generally less effective, though coastal boats are sometimes painted like land vehicles. Aircraft camouflage too came to be seen as less important because of radar, and aircraft of different air forces, such as the Royal Air Force's Lightning, were often uncamouflaged. Many camouflaged textile patterns have been developed to suit the need to match combat clothing to different kinds of terrain (such as woodland, snow, and desert). The design of a pattern effective in all terrains has proved elusive. The American Universal Camouflage Pattern of 2004 attempted to suit all environments, but was withdrawn after a few years of service. Terrain-specific patterns have sometimes been developed but are ineffective in other terrains. The problem of making a pattern that works at different ranges has been solved with multiscale designs, often with a pixellated appearance and designed digitally, that provide a fractal-like range of patch sizes so they appear disruptively coloured both at close range and at a distance. The first genuinely digital camouflage pattern was the Canadian Disruptive Pattern (CADPAT), issued to the army in 2002, soon followed by the American Marine pattern (MARPAT). A pixellated appearance is not essential for this effect, though it is simpler to design and to print. Hunters of game have long made use of camouflage in the form of materials such as animal skins, mud, foliage, and green or brown clothing to enable them to approach wary game animals. Field sports such as driven grouse shooting conceal hunters in hides (also called blinds or shooting butts). Modern hunting clothing makes use of fabrics that provide a disruptive camouflage pattern; for example, in 1986 the hunter Bill Jordan created cryptic clothing for hunters, printed with images of specific kinds of vegetation such as grass and branches. 
Camouflage is occasionally used to make built structures less conspicuous: for example, in South Africa, towers carrying cell telephone antennae are sometimes camouflaged as tall trees with plastic branches, in response to "resistance from the community". Since this method is costly (a figure of three times the normal cost is mentioned), alternative forms of camouflage can include using neutral colours or familiar shapes such as cylinders and flagpoles. Conspicuousness can also be reduced by siting masts near, or on, other structures. Automotive manufacturers often use patterns to disguise upcoming products. This camouflage is designed to obfuscate the vehicle's visual lines, and is used along with padding, covers, and decals. The patterns' purpose is to prevent visual observation (and to a lesser degree photography), that would subsequently enable reproduction of the vehicle's form factors. Military camouflage patterns influenced fashion and art from the time of the First World War onwards. Gertrude Stein recalled the cubist artist Pablo Picasso's reaction in around 1915: In 1919, the attendants of a "dazzle ball", hosted by the Chelsea Arts Club, wore dazzle-patterned black and white clothing. The ball influenced fashion and art via postcards and magazine articles. The "Illustrated London News" announced: More recently, fashion designers have often used camouflage fabric for its striking designs, its "patterned disorder" and its symbolism. Camouflage clothing can be worn largely for its symbolic significance rather than for fashion, as when, during the late 1960s and early 1970s in the United States, anti-war protestors often ironically wore military clothing during demonstrations against the American involvement in the Vietnam War. Modern artists such as Ian Hamilton Finlay have used camouflage to reflect on war. 
His 1973 screenprint of a tank camouflaged in a leaf pattern, "Arcadia", is described by the Tate as drawing "an ironic parallel between this idea of a natural paradise and the camouflage patterns on a tank". The title refers to the Utopian Arcadia of poetry and art, and the "memento mori" Latin phrase "Et in Arcadia ego" which recurs in Hamilton Finlay's work. In science fiction, "Camouflage" is a novel about shapeshifting alien beings by Joe Haldeman. The word is used more figuratively in works of literature such as Thaisa Frank's collection of stories of love and loss, "A Brief History of Camouflage".
https://en.wikipedia.org/wiki?curid=6446
Clock A clock is a device used to measure, keep, and indicate time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units: the day, the lunar month, and the year. Devices operating on several physical processes have been used over the millennia. Some predecessors to the modern clock may be considered as "clocks" that are based on movement in nature: A sundial shows the time by displaying the position of a shadow on a flat surface. There is a range of duration timers, a well-known example being the hourglass. Water clocks, along with the sundials, are possibly the oldest time-measuring instruments. A major advance occurred with the invention of the verge escapement, which made possible the first mechanical clocks around 1300 in Europe, which kept time with oscillating timekeepers like balance wheels. Traditionally in horology, the term "clock" was used for a striking clock, while a clock that did not strike the hours audibly was called a timepiece. In general usage today, a "clock" refers to any device for measuring and displaying the time. Watches and other timepieces that can be carried on one's person are often distinguished from clocks. Spring-driven clocks appeared during the 15th century. During the 15th and 16th centuries, clockmaking flourished. The next development in accuracy occurred after 1656 with the invention of the pendulum clock by Christiaan Huygens. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The electric clock was patented in 1840. The development of electronics in the 20th century led to clocks with no clockwork parts at all. The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates at a particular frequency. 
This object can be a pendulum, a tuning fork, a quartz crystal, or the vibration of electrons in atoms as they emit microwaves. Clocks have different ways of displaying the time. Analog clocks indicate time with a traditional clock face, with moving hands. Digital clocks display a numeric representation of time. Two numbering systems are in use: 24-hour notation and 12-hour notation. Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays. For the blind, and for use over telephones, speaking clocks state the time audibly in words. There are also clocks for the blind that have displays that can be read by touch. The study of timekeeping is known as horology. The word "clock" derives from the medieval Latin word for "bell", "clocca", and has cognates in many European languages. Clocks spread to England from the Low Countries, so the English word came from the Middle Low German and Middle Dutch "Klocke". The apparent position of the Sun in the sky moves over the course of each day, reflecting the rotation of the Earth. Shadows cast by stationary objects move correspondingly, so their positions can be used to indicate the time of day. A sundial shows the time by displaying the position of a shadow on a (usually) flat surface, which has markings that correspond to the hours. Sundials can be horizontal, vertical, or in other orientations. Sundials were widely used in ancient times. With the knowledge of latitude, a well-constructed sundial can measure local solar time with reasonable accuracy, within a minute or two. Sundials continued to be used to monitor the performance of clocks until the 1830s, when the telegraph and railways were used to standardize time and time zones between cities. Many devices can be used to mark passage of time without respect to reference time (time of day, hours, minutes, etc.) and can be useful for measuring duration or intervals. Examples of such duration timers are candle clocks, incense clocks and the hourglass. 
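The harmonic-oscillator principle above can be sketched numerically. The 32768 Hz figure below is the standard frequency of a quartz watch crystal, chosen because it is 2^15, so a chain of fifteen divide-by-two stages yields exactly one pulse per second; the helper function is illustrative, not any particular clock's design.

```python
# Every modern clock keeps time by counting cycles of a resonator.
# A standard quartz watch crystal runs at 32768 Hz = 2**15, so 15
# binary divider stages reduce it to exactly one pulse per second.

CRYSTAL_HZ = 32768  # 2**15

def seconds_elapsed(cycles, frequency_hz=CRYSTAL_HZ):
    """Convert a raw oscillation count into elapsed seconds."""
    return cycles / frequency_hz

print(seconds_elapsed(CRYSTAL_HZ))       # 1.0 second
print(seconds_elapsed(CRYSTAL_HZ * 60))  # 60.0 seconds (one minute)
```

The same counting logic applies whether the resonator is a pendulum at 0.5 Hz or a caesium atom at over nine billion cycles per second; only the frequency constant changes.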
Both the candle clock and the incense clock work on the same principle, wherein the consumption of resources is more or less constant, allowing reasonably precise and repeatable estimates of the passage of time. In the hourglass, fine sand pouring through a tiny hole at a constant rate indicates an arbitrary, predetermined passage of time. The resource is not consumed but re-used. Water clocks, along with the sundials, are possibly the oldest time-measuring instruments, with the only exception being the day-counting tally stick. Given their great antiquity, where and when they first existed is not known and perhaps unknowable. The bowl-shaped outflow is the simplest form of a water clock and is known to have existed in Babylon and in Egypt around the 16th century BC. Other regions of the world, including India and China, also have early evidence of water clocks, but the earliest dates are less certain. Some authors, however, write about water clocks appearing as early as 4000 BC in these regions of the world. The Greek astronomer Andronicus of Cyrrhus supervised the construction of the Tower of the Winds in Athens in the 1st century BC. The Greek and Roman civilizations advanced water clock design with improved accuracy. These advances were passed on through Byzantine and Islamic times, eventually making their way back to Europe. Independently, the Chinese developed their own advanced water clocks (水鐘) in 725 AD, passing their ideas on to Korea and Japan. Some water clock designs were developed independently, and some knowledge was transferred through the spread of trade. Pre-modern societies did not have the same precise timekeeping requirements that exist in modern industrial societies, where every hour of work or rest is monitored, and work may start or finish at any time regardless of external conditions. Instead, water clocks in ancient societies were used mainly for astrological reasons. These early water clocks were calibrated with a sundial.
While never reaching the level of accuracy of a modern timepiece, the water clock was the most accurate and commonly used timekeeping device for millennia, until it was replaced by the more accurate pendulum clock in 17th-century Europe. Islamic civilization is credited with further advancing the accuracy of clocks through elaborate engineering. In 797 (or possibly 801), the Abbasid caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas together with a "particularly elaborate example" of a water clock. Pope Sylvester II introduced clocks to northern and western Europe around 1000 AD. The first geared clock was invented in the 11th century by the Arab engineer Ibn Khalaf al-Muradi in Islamic Iberia; it was a water clock that employed a complex gear train mechanism, including both segmental and epicyclic gearing, capable of transmitting high torque. The clock was unrivalled in its use of sophisticated complex gearing until the mechanical clocks of the mid-14th century. Al-Muradi's clock also employed mercury in its hydraulic linkages, which could drive mechanical automata. Al-Muradi's work was known to scholars working under Alfonso X of Castile, hence the mechanism may have played a role in the development of the European mechanical clocks. Other monumental water clocks constructed by medieval Muslim engineers also employed complex gear trains and arrays of automata. Arab engineers at the time also developed a liquid-driven escapement mechanism, which they employed in some of their water clocks. Heavy floats were used as weights, and a constant-head system was used as an escapement mechanism, present in the hydraulic controls they used to make the heavy floats descend at a slow and steady rate. A water-powered cogwheel clock was created in China by Yi Xing and Liang Lingzan.
This is not considered an escapement-mechanism clock, as it was unidirectional. The Song dynasty polymath Su Song (1020–1101) incorporated it into his monumental astronomical clock tower of Kaifeng in 1088. His astronomical clock and rotating armillary sphere still relied on the use of either flowing water during the spring, summer and autumn seasons or liquid mercury during the freezing temperatures of winter (i.e. hydraulics). A mercury clock, described in the "Libros del saber", a Spanish work from 1277 consisting of translations and paraphrases of Arabic works, is sometimes quoted as evidence for Muslim knowledge of a mechanical clock. A mercury-powered cogwheel clock was created by Ibn Khalaf al-Muradi. In the 13th century, Al-Jazari, an engineer from Mesopotamia (lived 1136–1206) who worked for the Artuqid king of Diyar-Bakr, Nasir al-Din, made numerous clocks of all shapes and sizes. A book on his work described 50 mechanical devices in 6 categories, including water clocks. The most reputed clocks included the Elephant, Scribe and Castle clocks, all of which have been successfully reconstructed. As well as telling the time, these grand clocks were symbols of the status, grandeur and wealth of the Urtuq State. The word "horologia" (from the Greek ὥρα, hour, and λέγειν, to tell) was used to describe early mechanical clocks, but the use of this word (still used in several Romance languages) for all timekeepers conceals the true nature of the mechanisms. For example, there is a record that in 1176 Sens Cathedral installed an 'horologe', but the mechanism used is unknown. According to Jocelin of Brakelond, in 1198, during a fire at the abbey of St Edmundsbury (now Bury St Edmunds), the monks 'ran to the clock' to fetch water, indicating that their water clock had a reservoir large enough to help extinguish the occasional fire.
The word "clock" (via Medieval Latin "clocca" from Old Irish "clocc", both meaning 'bell'), which gradually supersedes "horologe", suggests that it was the sound of bells which also characterized the prototype mechanical clocks that appeared during the 13th century in Europe. In Europe, between 1280 and 1320, there was an increase in the number of references to clocks and horologes in church records, and this probably indicates that a new type of clock mechanism had been devised. Existing clock mechanisms that used water power were being adapted to take their driving power from falling weights. This power was controlled by some form of oscillating mechanism, probably derived from existing bell-ringing or alarm devices. This controlled release of power—the escapement—marks the beginning of the true mechanical clock, which differed from the previously mentioned cogwheel clocks. Verge escapement mechanism derived in the surge of true mechanical clocks, which didn't need any kind of fluid power, like water or mercury, to work. These mechanical clocks were intended for two main purposes: for signalling and notification (e.g. the timing of services and public events), and for modeling the solar system. The former purpose is administrative, the latter arises naturally given the scholarly interests in astronomy, science, astrology, and how these subjects integrated with the religious philosophy of the time. The astrolabe was used both by astronomers and astrologers, and it was natural to apply a clockwork drive to the rotating plate to produce a working model of the solar system. Simple clocks intended mainly for notification were installed in towers, and did not always require faces or hands. They would have announced the canonical hours or intervals between set times of prayer. Canonical hours varied in length as the times of sunrise and sunset shifted. 
The more sophisticated astronomical clocks would have had moving dials or hands, and would have shown the time in various time systems, including Italian hours, canonical hours, and time as measured by the astronomers of the day. Both styles of clock started acquiring extravagant features such as automata. In 1283, a large clock was installed at Dunstable Priory; its location above the rood screen suggests that it was not a water clock. In 1292, Canterbury Cathedral installed a 'great horloge'. Over the next 30 years there are mentions of clocks at a number of ecclesiastical institutions in England, Italy, and France. In 1322, a new clock was installed in Norwich, an expensive replacement for an earlier clock installed in 1273. This had a large (2 metre) astronomical dial with automata and bells. The costs of the installation included the full-time employment of two clockkeepers for two years. Besides the Chinese astronomical clock of Su Song in 1088 mentioned above, contemporary Muslim astronomers also constructed a variety of highly accurate astronomical clocks for use in their mosques and observatories, such as the water-powered astronomical clock by Al-Jazari in 1206, and the astrolabic clock by Ibn al-Shatir in the early 14th century. The most sophisticated timekeeping astrolabes were the geared astrolabe mechanisms designed by Abū Rayhān Bīrūnī in the 11th century and by Muhammad ibn Abi Bakr in the 13th century. These devices functioned as timekeeping devices and also as calendars. A sophisticated water-powered astronomical clock was built by Al-Jazari in 1206. This castle clock was a large, complex device that had multiple functions alongside timekeeping. It included a display of the zodiac and the solar and lunar paths, and a pointer in the shape of the crescent moon which travelled across the top of a gateway, moved by a hidden cart and causing doors to open, each revealing a mannequin, every hour.
It was possible to reset the length of day and night in order to account for the changing lengths of day and night throughout the year. This clock also featured a number of automata including falcons and musicians who automatically played music when moved by levers operated by a hidden camshaft attached to a water wheel. In Europe, there were the clocks constructed by Richard of Wallingford in St Albans by 1336, and by Giovanni de Dondi in Padua from 1348 to 1364. They no longer exist, but detailed descriptions of their design and construction survive, and modern reproductions have been made. They illustrate how quickly the theory of the mechanical clock had been translated into practical constructions, and also that one of the many impulses to their development had been the desire of astronomers to investigate celestial phenomena. Wallingford's clock had a large astrolabe-type dial, showing the sun, the moon's age, phase, and node, a star map, and possibly the planets. In addition, it had a wheel of fortune and an indicator of the state of the tide at London Bridge. Bells rang every hour, the number of strokes indicating the time. Dondi's clock was a seven-sided construction, 1 metre high, with dials showing the time of day, including minutes, the motions of all the known planets, an automatic calendar of fixed and movable feasts, and an eclipse prediction hand rotating once every 18 years. It is not known how accurate or reliable these clocks would have been. They were probably adjusted manually every day to compensate for errors caused by wear and imprecise manufacture. Water clocks are sometimes still used today, and can be examined in places such as ancient castles and museums. The Salisbury Cathedral clock, built in 1386, is considered to be the world's oldest surviving mechanical clock that strikes the hours. Clockmakers developed their art in many ways. Building smaller clocks was a technical challenge, as was improving accuracy and reliability. 
Clocks could be impressive showpieces to demonstrate skilled craftsmanship, or less expensive, mass-produced items for domestic use. The escapement in particular was an important factor affecting the clock's accuracy, so many different mechanisms were tried. Spring-driven clocks appeared during the 15th century, although they are often erroneously credited to Nuremberg watchmaker Peter Henlein (or Henle, or Hele) around 1511. The earliest existing spring-driven clock is the chamber clock given to Philip the Good, Duke of Burgundy, around 1430, now in the Germanisches Nationalmuseum. Spring power presented clockmakers with a new problem: how to keep the clock movement running at a constant rate as the spring ran down. This resulted in the invention of the "stackfreed" and the fusee in the 15th century, and many other innovations, down to the invention of the modern "going barrel" in 1760. Early clock dials did not indicate minutes and seconds. A clock with a dial indicating minutes was illustrated in a 1475 manuscript by Paulus Almanus, and some 15th-century clocks in Germany indicated minutes and seconds. An early record of a seconds hand on a clock dates back to about 1560, on a clock now in the Fremersdorf collection. During the 15th and 16th centuries, clockmaking flourished, particularly in the metalworking towns of Nuremberg and Augsburg, and in Blois, France. Some of the more basic table clocks have only one time-keeping hand, with the dial between the hour markers divided into four equal parts, making the clocks readable to the nearest 15 minutes. Other clocks were exhibitions of craftsmanship and skill, incorporating astronomical indicators and musical movements. The cross-beat escapement was invented in 1584 by Jost Bürgi, who also developed the remontoire. Bürgi's clocks were a great improvement in accuracy, as they were correct to within a minute a day.
These clocks helped the 16th-century astronomer Tycho Brahe to observe astronomical events with much greater precision than before. The next development in accuracy occurred after 1656 with the invention of the pendulum clock. Galileo had the idea to use a swinging bob to regulate the motion of a time-telling device earlier in the 17th century. Christiaan Huygens, however, is usually credited as the inventor. He determined the mathematical formula that related pendulum length to time (about 99.4 cm or 39.1 inches for a one-second movement) and had the first pendulum-driven clock made. The first model clock was built in 1657 in The Hague, but it was in England that the idea was taken up. The longcase clock (also known as the "grandfather clock") was created to house the pendulum and works by the English clockmaker William Clement in 1670 or 1671. It was also at this time that clock cases began to be made of wood and clock faces to utilize enamel as well as hand-painted ceramics. In 1670, William Clement created the anchor escapement, an improvement over Huygens' crown escapement. Clement also introduced the pendulum suspension spring in 1671. The concentric minute hand was added to the clock by Daniel Quare, a London clockmaker, and others, and the second hand was first introduced. In 1675, Huygens and Robert Hooke invented the spiral balance spring, or the hairspring, designed to control the oscillating speed of the balance wheel. This crucial advance finally made accurate pocket watches possible. The great English clockmaker Thomas Tompion was one of the first to use this mechanism successfully in his pocket watches, and he adopted the minute hand which, after a variety of designs were trialled, eventually stabilised into the modern-day configuration. The rack and snail striking mechanism for striking clocks was introduced during the 17th century and had distinct advantages over the 'countwheel' (or 'locking plate') mechanism.
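Huygens' figure for the one-second movement follows directly from the simple-pendulum relation T = 2π√(L/g). A quick check in Python (a minimal sketch; g = 9.81 m/s² is an assumed standard value, and a "seconds pendulum" beats once per second, i.e. has a full period of two seconds):

```python
import math

def pendulum_length(period_s, g=9.81):
    """Length (m) of a simple pendulum with the given period, from T = 2*pi*sqrt(L/g)."""
    return g * (period_s / (2 * math.pi)) ** 2

# A "seconds pendulum" beats once per second, i.e. a full period of 2 s.
length_m = pendulum_length(2.0)
print(f"{length_m * 100:.1f} cm")  # ~99.4 cm, matching the figure quoted above
```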
During the 20th century there was a common misconception that Edward Barlow invented "rack and snail" striking. In fact, his invention was connected with a repeating mechanism employing the rack and snail. The repeating clock, which chimes the number of hours (or even minutes) on demand, was invented by either Quare or Barlow in 1676. George Graham invented the deadbeat escapement for clocks in 1720. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The position of a ship at sea could be determined with reasonable accuracy if a navigator could refer to a clock that lost or gained less than about 10 seconds per day. This clock could not contain a pendulum, which would be virtually useless on a rocking ship. In 1714, the British government offered large financial rewards, to the value of 20,000 pounds, for anyone who could determine longitude accurately. John Harrison, who dedicated his life to improving the accuracy of his clocks, later received considerable sums under the Longitude Act. In 1735, Harrison built his first chronometer, which he steadily improved on over the next thirty years before submitting it for examination. The clock had many innovations, including the use of bearings to reduce friction, weighted balances to compensate for the ship's pitch and roll at sea, and the use of two different metals to reduce the problem of expansion from heat. The chronometer was tested in 1761 by Harrison's son, and by the end of 10 weeks the clock was in error by less than 5 seconds. The British had predominated in watch manufacture for much of the 17th and 18th centuries, but maintained a system of production that was geared towards high quality products for the elite.
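The 10-seconds-per-day requirement can be put in perspective: the Earth turns through its full circumference in one day, so each second of clock error corresponds to a fixed distance of longitude at the equator. A rough calculation (the equatorial circumference of about 40,075 km is an assumed round figure, not from the text):

```python
EARTH_CIRCUMFERENCE_KM = 40_075   # equatorial circumference, assumed round figure
SECONDS_PER_DAY = 86_400

def longitude_error_km(clock_error_s):
    """Equatorial position error implied by a given clock error."""
    return EARTH_CIRCUMFERENCE_KM * clock_error_s / SECONDS_PER_DAY

print(f"{longitude_error_km(10):.1f} km for 10 s of clock error")  # ~4.6 km
print(f"{longitude_error_km(5):.1f} km for Harrison's 5 s")        # ~2.3 km
```

At higher latitudes the distance per second of error shrinks with the cosine of the latitude, so these figures are a worst case.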
Although there was an attempt to modernise clock manufacture with mass production techniques and the application of duplicating tools and machinery by the British Watch Company in 1843, it was in the United States that this system took off. In 1816, Eli Terry and some other Connecticut clockmakers developed a way of mass-producing clocks by using interchangeable parts. Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that also used interchangeable parts, and by 1861 was running a successful enterprise incorporated as the Waltham Watch Company. In 1815, Francis Ronalds published the first electric clock powered by dry pile batteries. Alexander Bain, a Scottish clockmaker, patented the electric clock in 1840. The electric clock's mainspring is wound either with an electric motor or with an electromagnet and armature. In 1841, Bain first patented the electromagnetic pendulum. By the end of the nineteenth century, the advent of the dry cell battery made it feasible to use electric power in clocks. Spring or weight driven clocks that use electricity, either alternating current (AC) or direct current (DC), to rewind the spring or raise the weight of a mechanical clock would be classified as electromechanical clocks. This classification would also apply to clocks that employ an electrical impulse to propel the pendulum. In electromechanical clocks the electricity serves no timekeeping function. These types of clocks were made as individual timepieces, but were more commonly used in synchronized time installations in schools, businesses, factories, railroads and government facilities, as a master clock with slave clocks. Electric clocks that are powered from the AC supply often use synchronous motors. The supply current alternates with a frequency of 50 hertz in many countries, and 60 hertz in others. The rotor of the motor rotates at a speed that is related to the alternation frequency.
Appropriate gearing converts this rotation speed to the correct speeds for the hands of the analog clock. The development of electronics in the 20th century led to clocks with no clockwork parts at all. Time in these cases is measured in several ways, such as by the alternation of the AC supply, the vibration of a tuning fork, the behaviour of quartz crystals, or the quantum vibrations of atoms. Electronic circuits divide these high-frequency oscillations down to slower ones that drive the time display. Even mechanical clocks have since come to be largely powered by batteries, removing the need for winding. The piezoelectric properties of crystalline quartz were discovered by Jacques and Pierre Curie in 1880. The first crystal oscillator was invented in 1917 by Alexander M. Nicholson, after which the first quartz crystal oscillator was built by Walter G. Cady in 1921. In 1927 the first quartz clock was built by Warren Marrison and J.W. Horton at Bell Telephone Laboratories. The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings—the bulky and delicate counting electronics, built with vacuum tubes, limited their practical use elsewhere. The National Bureau of Standards (now NIST) based the time standard of the United States on quartz clocks from late 1929 until the 1960s, when it changed to atomic clocks. In 1969, Seiko produced the world's first quartz wristwatch, the Astron. Their inherent accuracy and low cost of production resulted in the subsequent proliferation of quartz clocks and watches. Currently, atomic clocks are the most accurate clocks in existence. They are considerably more accurate than quartz clocks as they can be accurate to within a few seconds over trillions of years. Atomic clocks were first theorized by Lord Kelvin in 1879. In the 1930s, the development of magnetic resonance created a practical method for doing this. A prototype ammonia maser device was built in 1949 at the U.S.
National Bureau of Standards (NBS, now NIST). Although it was less accurate than existing quartz clocks, it served to demonstrate the concept. The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale "ephemeris time" (ET). As of 2013, the most stable atomic clocks are ytterbium clocks, which are stable to within less than two parts in 1 quintillion (2 × 10⁻¹⁸). The invention of the mechanical clock in the 13th century initiated a change in timekeeping methods from continuous processes, such as the motion of the gnomon's shadow on a sundial or the flow of liquid in a water clock, to periodic oscillatory processes, such as the swing of a pendulum or the vibration of a quartz crystal, which had the potential for more accuracy. All modern clocks use oscillation. Although the mechanisms they use vary, all oscillating clocks, mechanical, digital and atomic, work similarly and can be divided into analogous parts. They consist of an object that repeats the same motion over and over again, an "oscillator", with a precisely constant time interval between each repetition, or 'beat'. Attached to the oscillator is a "controller" device, which sustains the oscillator's motion by replacing the energy it loses to friction, and converts its oscillations into a series of pulses. The pulses are then counted by some type of "counter", and the number of counts is converted into convenient units, usually seconds, minutes, hours, etc. Finally, some kind of "indicator" displays the result in human-readable form. The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates repetitively at a precisely constant frequency.
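The counting stage described above, in which pulses are accumulated into seconds, minutes and hours, can be sketched as a toy simulation. This is illustrative only: the class and its method names are invented here, and a 1 Hz pulse train is assumed.

```python
class PulseCounter:
    """Toy counter stage: accumulates 1 Hz oscillator pulses into h:m:s."""

    def __init__(self):
        self.seconds = 0

    def tick(self):
        """Called once per oscillator beat (assumed: one beat per second)."""
        self.seconds += 1

    def indicate(self):
        """The 'indicator' stage: render the count in human-readable form."""
        hours, rem = divmod(self.seconds, 3600)
        minutes, secs = divmod(rem, 60)
        return f"{hours:02d}:{minutes:02d}:{secs:02d}"

clock = PulseCounter()
for _ in range(3661):          # simulate 3661 one-second beats
    clock.tick()
print(clock.indicate())        # 01:01:01
```

A real clock differs only in where the pulses come from (escapement, divider circuit, atomic resonance) and how the count is shown.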
The advantage of a harmonic oscillator over other forms of oscillator is that it employs resonance to vibrate at a precise natural resonant frequency or "beat" dependent only on its physical characteristics, and resists vibrating at other rates. The possible precision achievable by a harmonic oscillator is measured by a parameter called its Q, or quality factor, which increases (other things being equal) with its resonant frequency. This is why there has been a long-term trend toward higher-frequency oscillators in clocks. Balance wheels and pendulums always include a means of adjusting the rate of the timepiece. Quartz timepieces sometimes include a rate screw that adjusts a capacitor for that purpose. Atomic clocks are primary standards, and their rate cannot be adjusted. Some clocks rely for their accuracy on an external oscillator; that is, they are automatically synchronized to a more accurate clock. The controller has the dual function of keeping the oscillator running by giving it 'pushes' to replace the energy lost to friction, and of converting its vibrations into a series of pulses that serve to measure the time. In mechanical clocks, the low Q of the balance wheel or pendulum oscillator made them very sensitive to the disturbing effect of the impulses of the escapement, so the escapement had a great effect on the accuracy of the clock, and many escapement designs were tried. The higher Q of resonators in electronic clocks makes them relatively insensitive to the disturbing effects of the drive power, so the driving oscillator circuit is a much less critical component. The counter counts the pulses and adds them up to get traditional time units of seconds, minutes, hours, etc. It usually has a provision for "setting" the clock by manually entering the correct time into the counter. The indicator displays the count of seconds, minutes, hours, etc. in a human-readable form. Clocks can be classified by the type of time display, as well as by the method of timekeeping.
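In electronic clocks, reducing the resonator's high frequency to one pulse per second is typically done with a chain of divide-by-two stages. A common watch crystal runs at 32,768 Hz, i.e. 2 to the power 15; that figure is a typical value assumed here, not stated in the text. A minimal sketch:

```python
crystal_hz = 32_768            # common watch-crystal frequency, 2**15 (assumed typical)

freq = crystal_hz
stages = 0
while freq > 1:                # each flip-flop stage halves the frequency
    freq //= 2
    stages += 1

print(stages, "divide-by-two stages give a", freq, "Hz tick")  # 15 stages -> 1 Hz
```

The crystal frequency is chosen as an exact power of two precisely so that a simple binary divider chain lands on 1 Hz.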
Analog clocks usually use a clock face which indicates time using rotating pointers called "hands" on a fixed numbered dial or dials. The standard clock face, known universally throughout the world, has a short "hour hand" which indicates the hour on a circular dial of 12 hours, making two revolutions per day, and a longer "minute hand" which indicates the minutes in the current hour on the same dial, which is also divided into 60 minutes. It may also have a "second hand" which indicates the seconds in the current minute. The only other widely used clock face today is the 24-hour analog dial, because of the use of 24-hour time in military organizations and timetables. Before the modern clock face was standardized during the Industrial Revolution, many other face designs were used throughout the years, including dials divided into 6, 8, 10, and 24 hours. During the French Revolution the French government tried to introduce a 10-hour clock, as part of its decimal-based metric system of measurement, but it did not catch on. An Italian 6-hour clock was developed in the 18th century, presumably to save power (a clock or watch striking 24 times uses more power). Another type of analog clock is the sundial, which tracks the sun continuously, registering the time by the shadow position of its gnomon. Because the sun does not adjust to daylight saving time, users must add an hour during that time. Corrections must also be made for the equation of time, and for the difference between the longitude of the sundial and that of the central meridian of the time zone that is being used (i.e. 15 degrees east of the prime meridian for each hour that the time zone is ahead of GMT). Sundials use some or all of the 24-hour analog dial. There also exist clocks which use a digital display despite having an analog mechanism—these are commonly referred to as flip clocks. Alternative systems have been proposed.
For example, the "Twelv" clock indicates the current hour using one of twelve colors, and indicates the minute by showing a proportion of a circular disk, similar to a moon phase. Digital clocks display a numeric representation of time. Two numeric display formats are commonly used on digital clocks: Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays; many other display technologies are used as well (cathode ray tubes, nixie tubes, etc.). After a reset, battery change or power failure, these clocks without a backup battery or capacitor either start counting from 12:00, or stay at 12:00, often with blinking digits indicating that the time needs to be set. Some newer clocks will reset themselves based on radio or Internet time servers that are tuned to national atomic clocks. Since the advent of digital clocks in the 1960s, the use of analog clocks has declined significantly. Some clocks, called 'flip clocks', have digital displays that work mechanically. The digits are painted on sheets of material which are mounted like the pages of a book. Once a minute, a page is turned over to reveal the next digit. These displays are usually easier to read in brightly lit conditions than LCDs or LEDs. Also, they do not go back to 12:00 after a power interruption. Flip clocks generally do not have electronic mechanisms. Usually, they are driven by AC-synchronous motors. Clocks with analog quadrants, with a digital component, usually minutes and hours displayed analogously and seconds displayed in digital mode. For convenience, distance, telephony or blindness, auditory clocks present the time as sounds. The sound is either spoken natural language, (e.g. "The time is twelve thirty-five"), or as auditory codes (e.g. number of sequential bell rings on the hour represents the number of the hour like the bell, Big Ben). Most telecommunication companies also provide a speaking clock service as well. 
Word clocks are clocks that display the time visually using sentences, e.g. "It's about three o'clock." These clocks can be implemented in hardware or software. Some clocks, usually digital ones, include an optical projector that shines a magnified image of the time display onto a screen or onto a surface such as an indoor ceiling or wall. The digits are large enough to be easily read, without using glasses, by persons with moderately imperfect vision, so the clocks are convenient for use in bedrooms. Usually, the timekeeping circuitry has a battery as a backup source for an uninterrupted power supply to keep the clock on time, while the projection light only works when the unit is connected to an AC supply. Completely battery-powered portable versions resembling flashlights are also available. Auditory and projection clocks can be used by people who are blind or have limited vision. There are also clocks for the blind that have displays that can be read by using the sense of touch. Some of these are similar to normal analog displays, but are constructed so the hands can be felt without damaging them. Another type is essentially digital, and uses devices that employ a code such as Braille to show the digits so that they can be felt with the fingertips. Some clocks have several displays driven by a single mechanism, and some others have several completely separate mechanisms in a single case. Clocks in public places often have several faces visible from different directions, so that the clock can be read from anywhere in the vicinity; all the faces show the same time. Other clocks show the current time in several time zones. Watches that are intended to be carried by travellers often have two displays, one for the local time and the other for the time at home, which is useful for making pre-arranged phone calls. Some equation clocks have two displays, one showing mean time and the other solar time, as would be shown by a sundial.
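The difference between sundial (solar) time and zone clock time mentioned earlier is systematic: the sun crosses 15 degrees of longitude per hour, i.e. 4 minutes of time per degree. A sketch under assumed conventions (the function, its name, and the sample longitude are illustrative; the equation of time, taken here as apparent minus mean solar time, must come from tables for the date):

```python
def sundial_to_clock_minutes(longitude_deg_east, zone_offset_hours,
                             equation_of_time_min=0.0):
    """Minutes to ADD to sundial (apparent solar) time to get zone clock time.

    The zone's central meridian lies 15 degrees east of Greenwich per hour
    of offset; each degree of longitude difference is 4 minutes of time.
    equation_of_time_min is apparent minus mean solar time (from tables).
    """
    central_meridian = 15.0 * zone_offset_hours
    longitude_correction = 4.0 * (central_meridian - longitude_deg_east)
    return longitude_correction - equation_of_time_min

# A sundial at 12.5 degrees E in the UTC+1 zone (central meridian 15 degrees E):
print(sundial_to_clock_minutes(12.5, 1.0))  # 10.0 -> the sundial reads 10 min behind
```

Sign conventions for the equation of time vary between references, so a practical implementation should state which one its tables use.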
Some clocks have both analog and digital displays. Clocks with Braille displays usually also have conventional digits so they can be read by sighted people. Clocks are in homes, offices and many other places; smaller ones (watches) are carried on the wrist or in a pocket; larger ones are in public places, e.g. a railway station or church. A small clock is often shown in a corner of computer displays, mobile phones and many MP3 players. The primary purpose of a clock is to "display" the time. Clocks may also have the facility to make a loud alert signal at a specified time, typically to waken a sleeper at a preset time; these are referred to as "alarm clocks". The alarm may start at a low volume and become louder, or have the facility to be switched off for a few minutes then resume. Alarm clocks with visible indicators are sometimes used to indicate to children too young to read the time that the time for sleep has finished; they are sometimes called "training clocks". A clock mechanism may be used to "control" a device according to time, e.g. a central heating system, a VCR, or a time bomb (see: digital counter). Such mechanisms are usually called timers. Clock mechanisms are also used to drive devices such as solar trackers and astronomical telescopes, which have to turn at accurately controlled speeds to counteract the rotation of the Earth. Most digital computers depend on an internal signal at constant frequency to synchronize processing; this is referred to as a clock signal. (A few research projects are developing CPUs based on asynchronous circuits.) Some equipment, including computers, also maintains time and date for use as required; this is referred to as a time-of-day clock, and is distinct from the system clock signal, although possibly based on counting its cycles. In Chinese culture, giving a clock (送鐘, "sòng zhōng") is often taboo, especially to the elderly, as the term for this act is a homophone of the term for the act of attending another's funeral (送終, "sòng zhōng").
A UK government minister, Susan Kramer, gave a watch to Taipei mayor Ko Wen-je, unaware of the taboo, which resulted in some professional embarrassment and a subsequent apology. This homonymic pair works in both Mandarin and Cantonese, although in most parts of China only clocks and large bells, and not watches, are called "zhong", and watches are commonly given as gifts in China. However, should such a gift be given, the "unluckiness" of the gift can be countered by having the recipient make a small monetary payment, so that the recipient is buying the clock and thereby counteracting the "give" sense of the expression. For some scientific work, timing of the utmost accuracy is essential. It is also necessary to have a standard of the maximum accuracy against which working clocks can be calibrated. An ideal clock would give the time to unlimited accuracy, but this is not realisable. Many physical processes, in particular including some transitions between atomic energy levels, occur at exceedingly stable frequency; counting cycles of such a process can give a very accurate and consistent time. Clocks which work this way are usually called atomic clocks. Such clocks are typically large, very expensive, require a controlled environment, and are far more accurate than required for most purposes; they are typically used in a standards laboratory. Until advances in the late twentieth century, navigation depended on the ability to measure latitude and longitude. Latitude can be determined through celestial navigation; the measurement of longitude requires accurate knowledge of time. This need was a major motivation for the development of accurate mechanical clocks. John Harrison created the first highly accurate marine chronometer in the mid-18th century. The Noon gun in Cape Town still fires an accurate signal to allow ships to check their chronometers. 
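The longitude problem described above reduces to simple arithmetic: the Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour, so comparing the moment of local solar noon against a chronometer keeping a reference (e.g. Greenwich) time yields longitude. The observation below is a hypothetical worked example.

```python
# Worked example: longitude from the time of local solar noon.
# The 14:20 chronometer reading is hypothetical.

DEGREES_PER_HOUR = 360 / 24   # the Earth turns 15 degrees per hour

def longitude_from_noon(chronometer_hours_at_local_noon):
    """Degrees west (positive) of the reference meridian."""
    return (chronometer_hours_at_local_noon - 12.0) * DEGREES_PER_HOUR

# Local noon is observed when the chronometer reads 14:20 reference time,
# so the ship is 2 h 20 min "behind" the reference meridian:
lon = longitude_from_noon(14 + 20 / 60)
print(round(lon, 4))   # 35.0 degrees west
```

A chronometer error of just four seconds corresponds to a full arcminute of longitude, which is why Harrison's accuracy mattered so much.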
Many buildings near major ports used to have (some still do) a large ball mounted on a tower or mast arranged to drop at a pre-determined time, for the same purpose. While satellite navigation systems such as the Global Positioning System (GPS) require unprecedentedly accurate knowledge of time, this is supplied by equipment on the satellites; vehicles no longer need timekeeping equipment.
https://en.wikipedia.org/wiki?curid=6449
Charles Proteus Steinmetz Charles Proteus Steinmetz (born Karl August Rudolph Steinmetz, April 9, 1865 – October 26, 1923) was a German-born American mathematician and electrical engineer and professor at Union College. He fostered the development of alternating current that made possible the expansion of the electric power industry in the United States, formulating mathematical theories for engineers. He made ground-breaking discoveries in the understanding of hysteresis that enabled engineers to design better electromagnetic apparatus, especially electric motors for use in industry. At the time of his death, Steinmetz held over 200 patents. A genius in both mathematics and electronics, he did work that earned him the nicknames "Forger of Thunderbolts" and "The Wizard of Schenectady". Steinmetz's equation, Steinmetz solids, Steinmetz curves, and Steinmetz equivalent circuit theory are all named after him, as are numerous honors and scholarships, including the "IEEE Charles Proteus Steinmetz Award", one of the highest technical recognitions given by the Institute of Electrical and Electronics Engineers professional society. Steinmetz was born Karl August Rudolph Steinmetz on April 9, 1865, in Breslau, Province of Silesia, Prussia (now Wrocław, Poland), the son of Caroline (Neubert) and Karl Heinrich Steinmetz. He was baptized a Lutheran into the Evangelical Church of Prussia. Steinmetz, who stood only four feet tall as an adult, suffered from dwarfism, hunchback, and hip dysplasia, as did his father and grandfather. Steinmetz attended Johannes Gymnasium and astonished his teachers with his proficiency in mathematics and physics. Following the Gymnasium, Steinmetz went on to the University of Breslau to begin work on his undergraduate degree in 1883. 
He was on the verge of finishing his doctorate in 1888 when he came under investigation by the German police for activities on behalf of a socialist university group and articles he had written for a local socialist newspaper. As socialist meetings and press had been banned in Germany, Steinmetz fled to Zürich in 1888 to escape possible arrest. Cornell University Professor Ronald R. Kline, author of "Steinmetz: Engineer and Socialist", contended that other factors were more directly involved in Steinmetz's decision to leave his homeland such as being in arrears with his tuition at the University and life at home with his father, stepmother and their daughters being tension-filled. Faced with an expiring visa, he emigrated to the United States in 1889. He changed his first name to "Charles" in order to sound more American, and chose the middle name "Proteus", a wise hunchbacked character from the "Odyssey" who knew many secrets, after a childhood epithet given by classmates Steinmetz felt suited him. Despite his earlier efforts and interest in socialism, by 1922 Steinmetz concluded that socialism would never work in the United States, because the country lacked a "powerful, centralized government of competent men, remaining continuously in office", and because "only a small percentage of Americans accept this viewpoint today". A member of the original Technical Alliance, which also included Thorstein Veblen and Leland Olds, Steinmetz had great faith in the ability of machines to eliminate human toil and create abundance for all. He put it this way: "Some day we make the good things of life for everybody". Steinmetz is known for his contribution in three major fields of alternating current (AC) systems theory: hysteresis, steady-state analysis, and transients. Shortly after arriving in the United States, Steinmetz went to work for Rudolf Eickemeyer in Yonkers, New York, and published in the field of magnetic hysteresis, earning worldwide professional recognition. 
Eickemeyer's firm developed transformers for use in the transmission of electrical power among many other mechanical and electrical devices. In 1893 Eickemeyer's company, along with all of its patents and designs, was bought by the newly formed General Electric Company, where Steinmetz quickly became known as the wizard of GE's engineering community. Steinmetz's work revolutionized AC circuit theory and analysis, which had been carried out using complicated, time-consuming calculus-based methods. In the groundbreaking paper "Complex Quantities and Their Use in Electrical Engineering", presented at a July 1893 meeting of the American Institute of Electrical Engineers (AIEE), Steinmetz simplified these complicated methods to "a simple problem of algebra". He systematized the use of complex number phasor representation in electrical engineering education texts, whereby the lower-case letter "j" is used to designate the 90-degree rotation operator in AC system analysis. His seminal books and many other AIEE papers "taught a whole generation of engineers how to deal with AC phenomena". Steinmetz also greatly advanced the understanding of lightning. His systematic experiments resulted in the first laboratory-created "man-made lightning", earning him the nickname the "Forger of Thunderbolts". These were conducted in a football-field-sized laboratory at General Electric, using 120,000-volt generators. He also erected a lightning tower to attract natural lightning in order to study its patterns and effects, which resulted in several theories. Steinmetz acted in numerous professional capacities. He was granted an honorary degree from Harvard University in 1901 and a doctorate from Union College in 1903. Steinmetz wrote 13 books and 60 articles, not exclusively about engineering. He was a member and adviser to the fraternity Phi Gamma Delta at Union College, whose chapter house there was one of the first ever electrified residences. 
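The phasor method described above survives in modern dress: Python's built-in complex numbers play the role of Steinmetz's j operator (spelled `1j` here), reducing an AC circuit to algebra. The component values below are illustrative assumptions, not figures from Steinmetz's paper.

```python
# Phasor analysis of a series R-L circuit using complex arithmetic,
# in the spirit of Steinmetz's method. Component values are made up.
import cmath
import math

f = 60.0                              # supply frequency, Hz
R, L = 8.0, 0.02                      # resistance (ohms), inductance (H)
V = 120.0 + 0j                        # voltage phasor, taken as reference

Z = R + 1j * 2 * math.pi * f * L      # series impedance: R + j*omega*L
I = V / Z                             # Ohm's law in phasor form

magnitude = abs(I)                            # current magnitude, amperes
phase_deg = math.degrees(cmath.phase(I))      # negative: current lags voltage
print(round(magnitude, 2), round(phase_deg, 1))
```

The same calculation done with instantaneous sinusoids requires solving a differential equation; this reduction to "a simple problem of algebra" is exactly what the 1893 paper offered.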
While serving as president of the Schenectady Board of Education Steinmetz introduced numerous progressive reforms, including extended school hours, school meals, school nurses, special classes for the children of immigrants, and the distribution of free textbooks. In spite of his love for children and family life, Steinmetz remained unmarried, to prevent the spinal deformity afflicting himself, his father, and grandfather from being passed on to any offspring. When Joseph LeRoy Hayden, a loyal and hardworking lab assistant, announced that he would marry and look for his own living quarters, Steinmetz made the unusual proposal of opening his large home, complete with research lab, greenhouse, and office to the Haydens and their prospective family. Hayden favored the idea, but his future wife was very wary of the unorthodox setup. She finally agreed after Steinmetz's assurance that she could run the house as she saw fit. After an uneasy start, the arrangement worked well for all parties, especially after three Hayden children were born. Steinmetz legally adopted Joseph Hayden as his son, becoming grandfather to the youngsters, entertaining them with fantastic stories and spectacular scientific demonstrations. The unusual but harmonious living arrangements lasted for the rest of Steinmetz's life. Steinmetz founded America's first glider club, but none of its prototypes "could be dignified with the term 'flight'". Steinmetz was a lifelong agnostic. He died on October 26, 1923, and was buried in Vale Cemetery in Schenectady. The "Forger of Thunderbolts" and "Wizard of Schenectady" earned wide recognition among the scientific community and numerous awards and honors both during his life and posthumously. "Steinmetz's equation", derived from his experiments, defines the approximate heat energy due to magnetic hysteresis released, per cycle per unit volume of magnetic material. 
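Steinmetz's equation described above can be sketched numerically: the hysteresis energy lost per cycle, per unit volume, scales as the peak magnetic flux density raised to a material exponent, for which Steinmetz found roughly 1.6. The coefficient `eta` and the figures below are illustrative assumptions, not measured data.

```python
# Sketch of Steinmetz's hysteresis-loss law: W = eta * B**1.6 per cycle
# per unit volume. The coefficient eta here is arbitrary.

def loss_per_cycle(B_peak, eta, exponent=1.6):
    """Hysteresis energy per cycle per unit volume (J/m^3)."""
    return eta * B_peak ** exponent

def loss_power(B_peak, freq_hz, eta, exponent=1.6):
    """Power dissipated per unit volume: per-cycle loss times frequency."""
    return loss_per_cycle(B_peak, eta, exponent) * freq_hz

# Doubling the flux density multiplies the loss by 2**1.6, not by 2:
ratio = loss_per_cycle(2.0, eta=100.0) / loss_per_cycle(1.0, eta=100.0)
print(round(ratio, 3))   # 3.031
```

The non-integer exponent is the point: it was fitted to experiment, not derived from first principles, which is why the relation was so useful to motor designers.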
A Steinmetz solid is the solid body generated by the intersection of two or three cylinders of equal radius at right angles. Steinmetz equivalent circuit theory is still widely used for the design and testing of induction motors. One of the highest technical recognitions given by the Institute of Electrical and Electronics Engineers, the "IEEE Charles Proteus Steinmetz Award", is given for major contributions to standardization within the field of electrical and electronics engineering. Other awards include the Certificate of Merit of Franklin Institute, 1908; the Elliott Cresson Medal, 1913; and the Cedergren Medal, 1914. The "Charles P. Steinmetz Memorial Lecture" series was begun in his honor in 1925, sponsored by the Schenectady branch of the IEEE. Through 2017 seventy-three gatherings have taken place, held almost exclusively at Union College, featuring notable figures such as Nobel laureate experimental physicist Robert A. Millikan, helicopter inventor Igor Sikorsky, nuclear submarine pioneer Admiral Hyman G. Rickover (1963), Nobel-winning semiconductor inventor William Shockley, and Internet 'founding father' Leonard Kleinrock. The "Charles P. Steinmetz Scholarship" is awarded annually by the college, underwritten since its inception in 1923 by the General Electric Company. The "Charles P. Steinmetz Memorial Scholarship" was established at Union by Marjorie Hayden, daughter of Joseph and Corrine Hayden, and is awarded to students majoring in engineering or physics. Steinmetz's connection to Union is further celebrated with the annual Steinmetz Symposium, a day-long event in which Union undergraduates give presentations on research they have done. Steinmetz Hall, which houses the Union College computer center, is named after him. Steinmetz was portrayed in 1959 by the actor Rod Steiger in the CBS television anthology series, "The Joseph Cotten Show". The episode focused on his socialist activities in Germany. 
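For reference, the Steinmetz solid mentioned above has a simple closed form: two cylinders of radius r meeting at right angles intersect in a volume of 16r³/3. The Monte Carlo cross-check below is an illustrative sketch.

```python
# The bicylinder (two-cylinder) Steinmetz solid: V = 16*r**3/3,
# verified here by a simple Monte Carlo estimate over the bounding cube.
import random

def bicylinder_volume(r):
    return 16 * r ** 3 / 3

def bicylinder_monte_carlo(r=1.0, samples=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = rng.uniform(-r, r)
        y = rng.uniform(-r, r)
        z = rng.uniform(-r, r)
        # point must lie inside both the x-axis and y-axis cylinders
        if y * y + z * z <= r * r and x * x + z * z <= r * r:
            hits += 1
    return hits / samples * (2 * r) ** 3   # hit fraction times cube volume

print(round(bicylinder_volume(1.0), 4))    # 5.3333
```

Notably, the exact volume involves no pi at all, one reason the solid remains a standard calculus example.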
A Chicago public high school, Steinmetz College Prep, is named for him. A public park in north Schenectady, New York was named for him in 1931. Steinmetz is featured in John Dos Passos' "U.S.A." trilogy in one of the biographies. He also serves as a major character in Starling Lawrence's "The Lightning Keeper". Steinmetz is a major character in the novel "Electric City" by Elizabeth Rosner. Moe refers to Curly as a "Steinmetz" in the 1944 Three Stooges short "Busy Buddies". At the time of his death, Steinmetz held over 200 patents.
https://en.wikipedia.org/wiki?curid=6451
Charles Martel Charles Martel (c. 688 – 22 October 741) was a Frankish statesman and military leader who, as Duke and Prince of the Franks and Mayor of the Palace, was the "de facto" ruler of Francia from 718 until his death. He was a son of the Frankish statesman Pepin of Herstal and Pepin's mistress, a noblewoman named Alpaida. Charles successfully asserted his claims to power as successor to his father as the power behind the throne in Frankish politics. Continuing and building on his father's work, he restored centralized government in Francia and began the series of military campaigns that re-established the Franks as the undisputed masters of all Gaul. According to a near-contemporary source, the "Liber Historiae Francorum", Charles was "a warrior who was uncommonly [...] effective in battle". Martel defeated an Arab invasion of Aquitaine at the Battle of Tours. Alongside his military endeavours, Charles has been traditionally credited with a seminal role in the development of the Frankish system of feudalism. At the end of his reign, Charles divided Francia between his sons, Carloman and Pepin. The latter became the first king of the Carolingian dynasty. Charles' grandson, Charlemagne, extended the Frankish realms, and became the first emperor in the West since the fall of Rome. Charles, nicknamed "Martel", or "Charles the Hammer", in later chronicles, was the son of Pepin of Herstal and his second wife Alpaida. He had a brother named Childebrand, who later became the Frankish "dux" (that is, "duke") of Burgundy. In older historiography, it was common to describe Charles as "illegitimate". But the dividing line between wives and concubines was not clear-cut in eighth-century Francia, and it is likely that the accusation of "illegitimacy" derives from the desire of Pepin's first wife Plectrude to see her progeny as heirs to Pepin's power. 
After the reign of Dagobert I (629–639) the Merovingians effectively ceded power to the Pippinid Mayors of the Palace, who ruled the Frankish realm of Austrasia in all but name. They controlled the royal treasury, dispensed patronage, and granted land and privileges in the name of the figurehead king. Charles' father, Pepin of Herstal, was able to unite the Frankish realm by conquering Neustria and Burgundy. He was the first to call himself Duke and Prince of the Franks, a title later taken up by Charles. In December 714, Pepin of Herstal died. Prior to his death, he had, at his wife Plectrude's urging, designated Theudoald, his grandson by their late son Grimoald, his heir in the entire realm. This was immediately opposed by the nobles because Theudoald was a child of only eight years of age. To prevent Charles using this unrest to his own advantage, Plectrude had him imprisoned in Cologne, the city which was intended to be her capital. This prevented an uprising on his behalf in Austrasia, but not in Neustria. Pepin's death occasioned open conflict between his heirs and the Neustrian nobles who sought political independence from Austrasian control. In 715, Dagobert III named Ragenfrid mayor of their palace, effectively declaring political independence. On 26 September 715, Ragenfrid's Neustrians met the young Theudoald's forces at the Battle of Compiegne. Theudoald was defeated and fled back to Cologne. Before the end of the year, Charles Martel had escaped from prison and been acclaimed mayor by the nobles of Austrasia. That same year, Dagobert III died and the Neustrians proclaimed Chilperic II, the cloistered son of Childeric II, as king. In 716, Chilperic and Ragenfrid together led an army into Austrasia intent on seizing the Pippinid wealth at Cologne. The Neustrians allied with another invading force under Redbad, King of the Frisians, and met Charles in battle near Cologne, which was still held by Plectrude. 
Charles had little time to gather men or prepare, and the result was the only defeat of his career. The Frisians held off Charles, while the king and his mayor besieged Plectrude at Cologne, where she bought them off with a substantial portion of Pepin's treasure. Then they withdrew. Charles retreated to the hills of the Eifel to gather men and train them. Having made the proper preparations, in April 716, he fell upon the triumphant army near Malmedy as it was returning to its own province. In the ensuing Battle of Amblève, Martel attacked as the enemy rested at midday. According to one source, he split his forces into several groups which fell upon them from many sides. Another suggests that while this was his intention, he then decided, given the enemy's unpreparedness, that this was not necessary. In any event, the suddenness of the assault led them to believe they were facing a much larger host. Many of the enemy fled and Martel's troops gathered the spoils of the camp. Martel's reputation increased considerably as a result, and he attracted more followers. This battle is often considered by historians as the turning point in Charles's struggle. Richard Gerberding points out that up to this time, much of Martel's support was probably from his mother's kindred in the lands around Liège. After Amblève, he seems to have won the backing of the influential Willibrord, founder of the Abbey of Echternach. The abbey had been built on land donated by Plectrude's mother, Irmina of Oeren, but most of Willibrord's missionary work had been carried out in Frisia. In joining Chilperic and Ragenfrid, Radbod of Frisia sacked Utrecht, burning churches and killing many missionaries. Willibrord and his monks were forced to flee to Echternach. Gerberding suggests that Willibrord had decided that the chances of preserving his life's work were better with a successful field commander like Martel than with Plectrude in Cologne. Willibrord subsequently baptized Martel's son Pepin. 
Gerberding suggests a likely date of Easter 716. Martel also received support from Bishop Pepo of Verdun. Charles took time to rally more men and prepare. By the following spring, Charles had attracted enough support to invade Neustria. Charles sent an envoy who proposed a cessation of hostilities if Chilperic would recognize his rights as mayor of the palace in Austrasia. The refusal was not unexpected but served to impress upon Martel's forces the unreasonableness of the Neustrians. They met near Cambrai at the Battle of Vincy on 21 March 717. The victorious Martel pursued the fleeing king and mayor to Paris, but as he was not yet prepared to hold the city, he turned back to deal with Plectrude and Cologne. He took the city and dispersed her adherents. Plectrude was allowed to retire to a convent; Theudoald lived to 741 under his uncle's protection, a kindness unusual for those times, when mercy to a former gaoler, or a potential rival, was rare. Upon this success, Charles proclaimed Chlothar IV king of Austrasia in opposition to Chilperic and deposed Rigobert, archbishop of Reims, replacing him with Milo, a lifelong supporter. In 718, Chilperic responded to Charles' new ascendancy by making an alliance with Odo the Great (or Eudes, as he is sometimes known), the duke of Aquitaine, who had become independent during the civil war in 715, but was again defeated, at the Battle of Soissons, by Charles. Chilperic fled with his ducal ally to the land south of the Loire and Ragenfrid fled to Angers. Soon Chlotar IV died and Odo surrendered King Chilperic in exchange for Charles recognizing his dukedom. Charles recognized Chilperic as king of the Franks in return for legitimate royal affirmation of his own mayoralty over all the kingdoms. Between 718 and 732, Charles secured his power through a series of victories. Having unified the Franks under his banner, Charles was determined to punish the Saxons who had invaded Austrasia. 
Therefore, late in 718, he laid waste their country to the banks of the Weser, the Lippe, and the Ruhr. He defeated them in the Teutoburg Forest and thus secured the Frankish border in the name of King Chlotaire. When the Frisian leader Radbod died in 719, Charles seized West Frisia without any great resistance on the part of the Frisians, who had been subjected to the Franks but had rebelled upon the death of Pippin. When Chilperic II died the following year (720), Charles appointed as his successor the son of Dagobert III, Theuderic IV, who was still a minor, and who occupied the throne from 720 to 737. Charles was now appointing the kings whom he supposedly served, "rois fainéants" who were mere figureheads; by the end of his reign, he did not appoint one at all. At this time, Charles again marched against the Saxons. Then the Neustrians rebelled under Ragenfrid, who had left the county of Anjou. They were easily defeated (724), but Ragenfrid gave up his sons as hostages in return for keeping his county. This ended the civil wars of Charles' reign. The next six years were devoted in their entirety to assuring Frankish authority over the neighboring political groups. Between 720 and 723, Charles was fighting in Bavaria, where the Agilolfing dukes had gradually evolved into independent rulers, recently in alliance with Liutprand the Lombard. He forced the Alemanni to accompany him, and Duke Hugbert submitted to Frankish suzerainty. In 725 he brought back the Agilolfing Princess Swanachild as a second wife. In 725 and 728, he again entered Bavaria, but in 730, he marched against Lantfrid, Duke of Alemannia, who had also become independent, and killed him in battle. He forced the Alemanni to capitulate to Frankish suzerainty and did not appoint a successor to Lantfrid. Thus, southern Germany once more became part of the Frankish kingdom, as had northern Germany during the first years of the reign. 
In 731, after defeating the Saxons, Charles turned his attention to the rival southern realm of Aquitaine, and crossed the Loire, breaking the treaty with Duke Odo. The Franks ransacked Aquitaine twice, and captured Bourges, although Odo retook it. The "Continuations of Fredegar" allege that Odo called on assistance from the recently established emirate of al-Andalus, but there had been Arab raids into Aquitaine from the 720s onwards: indeed, in 721 the Chronicle of 754 records a victory of Odo at the Battle of Toulouse, while the "Liber Pontificalis" records that Odo had killed 375,000 Saracens. It is more likely that this invasion or raid took place in revenge for Odo's support for a rebel Berber leader named Munnuza. Whatever the precise circumstances, it is clear that an army under the leadership of Abd al-Rahman al-Ghafiqi headed north, and after some minor engagements marched on the wealthy city of Tours. According to British medieval historian Paul Fouracre, "Their campaign should perhaps be interpreted as a long-distance raid rather than the beginning of a war". They were however defeated by the army of Charles at a location between Tours and Poitiers, in a victory described by the "Continuations of Fredegar". News of this battle spread, and may be recorded in Bede's "Ecclesiastical History" (Book V, ch. 23). However, it is not given prominence in Arabic sources from the period. Despite his victory, Charles did not gain full control of Aquitaine, and Odo remained duke until his death in 735. Between his victory of 732 and 735, Charles reorganized the kingdom of Burgundy, replacing the counts and dukes with his loyal supporters, thus strengthening his hold on power. He was forced, by the ventures of Bubo, Duke of the Frisians, to invade independent-minded Frisia again in 734. In that year, he slew the duke at the Battle of the Boarn. 
Charles ordered the Frisian pagan shrines destroyed, and so wholly subjugated the populace that the region was peaceful for twenty years after. In 735, Duke Odo of Aquitaine died. Though Charles wished to rule the duchy directly and went there to elicit the submission of the Aquitainians, the aristocracy proclaimed Odo's son, Hunald I of Aquitaine, as duke, and Charles and Hunald eventually recognised each other's position. In 737, at the tail end of his campaigning in Provence and Septimania, the Merovingian king, Theuderic IV, died. Charles, titling himself "maior domus" and "princeps et dux Francorum", did not appoint a new king and nobody acclaimed one. The throne lay vacant until Charles' death. The interregnum, the final four years of Charles' life, was more peaceful than most of it had been but in 738, he compelled the Saxons of Westphalia to submit and pay tribute, and in 739 he checked an uprising in Provence, the rebels being under the leadership of Maurontus. Charles used the relative peace to set about integrating the outlying realms of his empire into the Frankish church. He erected four dioceses in Bavaria (Salzburg, Regensburg, Freising, and Passau) and gave them Boniface as archbishop and metropolitan over all Germany east of the Rhine, with his seat at Mainz. Boniface had been under his protection from 723 on; indeed the saint himself explained to his old friend, Daniel of Winchester, that without it he could neither administer his church, defend his clergy, nor prevent idolatry. In 739, Pope Gregory III begged Charles for his aid against Liutprand, but Charles was loath to fight his onetime ally and ignored the plea. Nonetheless, the pope's request for Frankish protection showed how far Charles had come from the days he was tottering on excommunication, and set the stage for his son and grandson to assert themselves in the peninsula. 
Charles Martel died on 22 October 741, at Quierzy-sur-Oise in what is today the Aisne "département" in the Picardy region of France. He was buried at Saint Denis Basilica in Paris. His territories had been divided among his adult sons a year earlier: to Carloman he gave Austrasia, Alemannia, and Thuringia, and to Pippin the Younger Neustria, Burgundy, Provence, and Metz and Trier in the "Mosel duchy"; Grifo was given several lands throughout the kingdom, but at a later date, just before Charles died. At the beginning of Charles Martel's career, he had many internal opponents and felt the need to appoint his own kingly claimant, Chlotar IV. By his end, however, the dynamics of rulership in Francia had changed, and no hallowed Merovingian ruler was required. Charles divided his realm between his sons without opposition (though he ignored his young son Bernard). For many historians, Charles Martel laid the foundations for his son Pepin's rise to the Frankish throne in 751, and his grandson Charlemagne's imperial acclamation in 800. However, for Paul Fouracre, while Charles was "the most effective military leader in Francia", his career "finished on a note of unfinished business". Some historical sources say that Charles Martel formed the first regular order of knights in France. They hold that among the spoils Charles Martel's forces captured after the Battle of Tours were many genets (raised for their fur) and several of their pelts. These were presented to him and found favor in his eyes due to their soft fine fur and pleasant smell (the fur was valued by aristocrats to serve as inner lining for garments). As marks of his favor, Charles Martel distributed some of the genets to leaders among his army. Soon after, to commemorate the great victory, he began the first Order of Knighthood in France - called the Order of the Genet. The order was limited to fifteen knights at a time. 
Charles Martel served as its Chief, and that office was handed down to heirs in his bloodline. This order of knights continued for a little over two centuries, when it was replaced by Robert II of France's new order, the Knights of Our Lady of the Star (named in honor of his devotion to the Virgin Mary). Some historians suggest that the story of the captured genets is a fabrication and that the order was named after small Arabian horses, while others challenge the historical existence of the order altogether. Charles Martel married twice, his first wife being Rotrude of Treves, daughter either of Lambert II, Count of Hesbaye, or of Leudwinus, Count of Treves. They had several children. Most of the children married and had issue. Hiltrud married Odilo I (a Duke of Bavaria). Landrade was once believed to have married a Sigrand (Count of Hesbania) but Sigrand's wife was more likely the sister of Rotrude. Auda married Thierry IV (a Count of Autun and Toulouse). Charles also married a second time, to Swanhild, and they had a child, Grifo. Finally, Charles Martel also had a known mistress, Ruodhaid, with whom he had the children Bernard, Hieronymus, and Remigius. Remigius became an archbishop of Rouen. For early medieval authors, Charles Martel was famous for his military victories. Paul the Deacon, for instance, attributed a victory against the Saracens actually won by Odo of Aquitaine to Charles. However, alongside this there soon developed a darker reputation, for his alleged abuse of church property. A ninth-century text, the "Visio Eucherii", possibly written by Hincmar of Reims, portrayed Martel as suffering in hell for this reason. According to British medieval historian Paul Fouracre, this was "the single most important text in the construction of Charles Martel's reputation as a seculariser or despoiler of church lands". 
By the eighteenth century, historians such as Edward Gibbon had begun to portray the Frankish leader as the saviour of Christian Europe from a full-scale Islamic invasion. In Gibbon's "The Decline and Fall of the Roman Empire" he wonders whether, without Charles' victory, "Perhaps the interpretation of the Koran would now be taught in the schools of Oxford". In the nineteenth century, the German historian Heinrich Brunner argued that Charles had confiscated church lands in order to fund military reforms that allowed him to defeat the Arab conquests, in this way brilliantly combining two traditions about the ruler. But Fouracre has argued that "...there is not enough evidence to show that there was a decisive change either in the way in which the Franks fought, or in the way in which they organised the resources needed to support their warriors." Many twentieth-century European historians continued to develop Gibbon's perspective, among them the French medievalist Christian Pfister, writing in 1911, and William E. Watson, who wrote of the battle's importance in Frankish and world history in 1993. Other recent historians, however, argue that the importance of the battle is dramatically overstated, both for European history in general and for Charles Martel's reign in particular; this view is typified by Alessandro Barbero, writing in 2004, and by Tomaž Mastnak, writing in 2002. More recently, the memory of Charles Martel has been appropriated by far-right and white nationalist groups, such as the 'Charles Martel Group' in France, and by the Australian-born Brenton Harrison Tarrant, the alleged perpetrator of the Christchurch mosque shootings at the Al Noor Mosque and Linwood Islamic Centre in Christchurch, New Zealand, in 2019.
https://en.wikipedia.org/wiki?curid=6452
Charles Edward Jones Colonel Charles Edward ("Chuck") Jones (November 8, 1952 – September 11, 2001) was a United States Air Force officer, a computer programmer, and an astronaut in the USAF Manned Spaceflight Engineer Program. Jones was born November 8, 1952, in Clinton, Indiana. He graduated from Wichita East High School in 1970, earned a Bachelor of Science degree in Astronautical Engineering from the United States Air Force Academy in 1974, and received a Master of Science degree in Astronautics from MIT in 1980. He entered the USAF Manned Spaceflight Engineer program in 1982, and was scheduled to fly on mission STS-71-B in December 1986, but the mission was cancelled after the "Challenger" disaster in January 1986. He left the Manned Spaceflight Engineer program in 1987. He later worked for the Defense Intelligence Agency at Bolling AFB in Washington, D.C., and was Systems Program Director for Intelligence and Information Systems at Hanscom AFB, Massachusetts. He was killed at the age of 48 in the attacks of September 11, 2001, aboard American Airlines Flight 11. He had been living as a retired U.S. Air Force Colonel in Bedford, Massachusetts, at the time of his death. He was survived by his wife Jeanette. At the National 9/11 Memorial, Jones is memorialized at the North Pool, on Panel N-74.
https://en.wikipedia.org/wiki?curid=6456
Ceramic A ceramic (κεραμικός – "keramikós", "potter's", from κέραμος – "kéramos", "potter's clay") is a solid material comprising an inorganic compound of metal or metalloid and non-metal with ionic or covalent bonds. Common examples are earthenware, porcelain, and brick. The crystallinity of ceramic materials ranges from highly oriented to semi-crystalline, vitrified, and often completely amorphous (e.g., glasses). Most often, fired ceramics are either vitrified or semi-vitrified, as is the case with earthenware, stoneware, and porcelain. Varying crystallinity and electron composition in the ionic and covalent bonds cause most ceramic materials to be good thermal and electrical insulators (extensively researched in ceramic engineering). With such a large range of possible options for the composition/structure of a ceramic (e.g. nearly all of the elements, nearly all types of bonding, and all levels of crystallinity), the breadth of the subject is vast, and identifiable attributes (e.g. hardness, toughness, electrical conductivity) are difficult to specify for the group as a whole. General properties such as high melting temperature, high hardness, poor conductivity, high moduli of elasticity, chemical resistance and low ductility are the norm, with known exceptions to each of these rules (e.g. piezoelectric ceramics, glass transition temperature, superconductive ceramics). Many composites, such as fiberglass and carbon fiber, while containing ceramic materials, are not considered to be part of the ceramic family. The earliest ceramics made by humans were pottery objects (i.e. "pots" or "vessels") or figurines made from clay, either by itself or mixed with other materials like silica, hardened and sintered in fire. Later ceramics were glazed and fired to create smooth, colored surfaces, decreasing porosity through the use of glassy, amorphous ceramic coatings on top of the crystalline ceramic substrates. 
Ceramics now include domestic, industrial and building products, as well as a wide range of ceramic art. In the 20th century, new ceramic materials were developed for use in advanced ceramic engineering, such as in semiconductors. The word "ceramic" comes from the Greek word κεραμικός ("keramikós"), "of pottery" or "for pottery", from κέραμος ("kéramos"), "potter's clay, tile, pottery". The earliest known mention of the root "ceram-" is the Mycenaean Greek "ke-ra-me-we", "workers of ceramics", written in Linear B syllabic script. The word "ceramic" may be used as an adjective to describe a material, product or process, or it may be used as a noun, either singular or, more commonly, as the plural noun "ceramics". A ceramic material is an inorganic, non-metallic, often crystalline oxide, nitride or carbide material. Some elements, such as carbon or silicon, may be considered ceramics. Ceramic materials are brittle, hard, strong in compression, and weak in shearing and tension. They withstand chemical erosion that occurs in other materials subjected to acidic or caustic environments. Ceramics generally can withstand very high temperatures, ranging from 1,000 °C to 1,600 °C (1,800 °F to 3,000 °F). Glass is often not considered a ceramic because of its amorphous (noncrystalline) character. However, glassmaking involves several steps of the ceramic process, and its mechanical properties are similar to those of ceramic materials. Traditional ceramic raw materials include clay minerals such as kaolinite, whereas more recent materials include aluminium oxide, more commonly known as alumina. Modern ceramic materials, which are classified as advanced ceramics, include silicon carbide and tungsten carbide. Both are valued for their abrasion resistance and hence find use in applications such as the wear plates of crushing equipment in mining operations. Advanced ceramics are also used in medicine and in the electrical and electronics industries, as well as in body armor. Crystalline ceramic materials are not amenable to a great range of processing. 
Methods for dealing with them tend to fall into one of two categories: either making the ceramic in the desired shape by reaction "in situ", or "forming" powders into the desired shape and then sintering them to form a solid body. Ceramic forming techniques include shaping by hand (sometimes including a rotation process called "throwing"), slip casting, tape casting (used for making very thin ceramic capacitors), injection molding, dry pressing, and other variations. Noncrystalline ceramics, being glasses, tend to be formed from melts. The glass is shaped when either fully molten, by casting, or when in a state of toffee-like viscosity, by methods such as blowing into a mold. If later heat treatments cause this glass to become partly crystalline, the resulting material is known as a glass-ceramic, widely used as cooktops and also as a glass composite material for nuclear waste disposal. Human beings appear to have been making their own ceramics for at least 26,000 years, subjecting clay and silica to intense heat to fuse and form ceramic materials. The earliest found so far were in southern central Europe, and were sculpted figures, not dishes. The earliest known pottery was made by mixing animal products with clay and baking the mixture in kilns at up to 800 °C. While actual pottery fragments have been found up to 19,000 years old, it was not until about ten thousand years later that regular pottery became common. An early people that spread across much of Europe is named after its use of pottery, the Corded Ware culture. These early Indo-European peoples decorated their pottery by wrapping it with rope while still wet. When the ceramics were fired, the rope burned off but left a decorative pattern of complex grooves in the surface. The invention of the wheel eventually led to the production of smoother, more even pottery using the wheel-forming technique on the potter's wheel. Early ceramics were porous, absorbing water easily. 
Pottery became useful for more purposes with the discovery of glazing techniques: coating it with silica, bone ash, or other materials that could melt and reform into a glassy surface made a vessel less pervious to water. Ceramic artifacts have an important role in archaeology for understanding the culture, technology and behavior of peoples of the past. They are among the most common artifacts to be found at an archaeological site, generally in the form of small fragments of broken pottery called sherds. Processing of collected sherds follows two main types of analysis: technical and traditional. Traditional analysis involves sorting ceramic artifacts, sherds and larger fragments into specific types based on style, composition, manufacturing and morphology. By creating these typologies it is possible to distinguish between different cultural styles, the purpose of the ceramic and the technological state of the people, among other conclusions. In addition, by looking at stylistic changes of ceramics over time, it is possible to separate (seriate) the ceramics into distinct diagnostic groups (assemblages). A comparison of ceramic artifacts with known dated assemblages allows for a chronological assignment of these pieces. The technical approach to ceramic analysis involves a finer examination of the composition of ceramic artifacts and sherds to determine the source of the material and, through this, the possible manufacturing site. Key criteria are the composition of the clay and the temper used in the manufacture of the article under study: temper is a material added to the clay during the initial production stage, and it is used to aid the subsequent drying process. Types of temper include shell pieces, granite fragments and ground sherd pieces called 'grog'. Temper is usually identified by microscopic examination of the temper material. 
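As a minimal illustration of the seriation step described above (the sherd records, site names, and type labels here are all invented), the per-assemblage type frequencies that feed a frequency seriation can be tallied like this:

```python
from collections import Counter

# Hypothetical sherd records: (assemblage_id, ceramic_type).
# Both the site names and the type labels are invented for illustration.
sherds = [
    ("pit_A", "corded"), ("pit_A", "corded"), ("pit_A", "burnished"),
    ("pit_B", "corded"), ("pit_B", "glazed"), ("pit_B", "glazed"),
    ("pit_C", "glazed"), ("pit_C", "glazed"), ("pit_C", "glazed"),
]

def type_frequencies(records):
    """Tally each ceramic type per assemblage, as a fraction of that assemblage."""
    totals, counts = Counter(), Counter()
    for assemblage, ctype in records:
        totals[assemblage] += 1
        counts[(assemblage, ctype)] += 1
    return {(a, t): counts[(a, t)] / totals[a] for (a, t) in counts}

freqs = type_frequencies(sherds)
print(freqs[("pit_A", "corded")])  # 2 of 3 sherds in pit_A are corded
```

A full seriation would then order the assemblages so that each type's frequency rises and falls smoothly over time, the classic "battleship curve" pattern.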
Clay identification is determined by a process of refiring the ceramic and assigning a color to it using Munsell Soil Color notation. By estimating both the clay and temper compositions, and locating a region where both are known to occur, an assignment of the material source can be made. From the source assignment of the artifact, further investigations can be made into the site of manufacture. The physical properties of any ceramic substance are a direct result of its crystalline structure and chemical composition. Solid-state chemistry reveals the fundamental connection between microstructure and properties, such as localized density variations, grain size distribution, type of porosity and second-phase content, which can all be correlated with ceramic properties such as mechanical strength σ by the Hall–Petch equation, hardness, toughness, dielectric constant, and the optical properties exhibited by transparent materials. Ceramography is the art and science of preparation, examination and evaluation of ceramic microstructures. Evaluation and characterization of ceramic microstructures is often implemented on spatial scales similar to those used in the emerging field of nanotechnology: from tens of ångströms (Å) to tens of micrometers (µm). This is typically somewhere between the minimum wavelength of visible light and the resolution limit of the naked eye. The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks, structural defects and hardness microindentations. Most bulk mechanical, optical, thermal, electrical and magnetic properties are significantly affected by the observed microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the cleaved and polished microstructure. 
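The Hall–Petch equation invoked above relates strength to grain size; in its usual form it reads

```latex
\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}
```

where \(\sigma_y\) is the yield strength, \(\sigma_0\) the friction stress resisting dislocation motion, \(k_y\) a material-specific strengthening coefficient, and \(d\) the average grain diameter. Finer grains mean more grain-boundary area to impede dislocation motion, hence higher strength.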
Physical properties which constitute the field of materials science and engineering include the following: Mechanical properties are important in structural and building materials as well as textile fabrics. In modern materials science, fracture mechanics is an important tool in improving the mechanical performance of materials and components. It applies the physics of stress and strain, in particular the theories of elasticity and plasticity, to the microscopic crystallographic defects found in real materials in order to predict the macroscopic mechanical failure of bodies. Fractography is widely used with fracture mechanics to understand the causes of failures and also to verify theoretical failure predictions against real-life failures. Ceramic materials are usually ionic or covalently bonded materials, and can be crystalline or amorphous. A material held together by either type of bond will tend to fracture before any plastic deformation takes place, which results in poor toughness in these materials. Additionally, because these materials tend to be porous, the pores and other microscopic imperfections act as stress concentrators, decreasing the toughness further and reducing the tensile strength. These combine to give catastrophic failures, as opposed to the more ductile failure modes of metals. These materials do show plastic deformation. However, because of the rigid structure of the crystalline materials, there are very few available slip systems for dislocations to move, and so they deform very slowly. With the non-crystalline (glassy) materials, viscous flow is the dominant source of plastic deformation, and it is also very slow. It is therefore neglected in many applications of ceramic materials. To overcome the brittle behaviour, ceramic material development has introduced the class of ceramic matrix composite materials, in which ceramic fibers are embedded and, with specific coatings, form fiber bridges across any crack. 
This mechanism substantially increases the fracture toughness of such ceramics. Ceramic disc brakes are an example of using a ceramic matrix composite material manufactured with a specific process. If a ceramic is to be subjected to substantial mechanical loading, it can undergo a process called ice-templating, which allows some control of the microstructure of the ceramic product and therefore some control of the mechanical properties. Ceramic engineers use this technique to tune the mechanical properties to their desired application. Specifically, strength is increased when this technique is employed. Ice templating allows the creation of macroscopic pores in a unidirectional arrangement. The applications of this oxide strengthening technique are important for solid oxide fuel cells and water filtration devices. To process a sample through ice templating, an aqueous colloidal suspension is prepared containing the ceramic powder, for example yttria-stabilized zirconia (YSZ), evenly dispersed throughout the colloid. The solution is then cooled from the bottom to the top on a platform that allows for unidirectional cooling. This forces ice crystals to grow in compliance with the unidirectional cooling, and these ice crystals force the dispersed YSZ particles to the solidification front of the solid–liquid interphase boundary, resulting in pure ice crystals lined up unidirectionally alongside concentrated pockets of colloidal particles. The sample is then heated while the pressure is reduced enough to force the ice crystals to sublimate, and the YSZ pockets begin to anneal together to form macroscopically aligned ceramic microstructures. The sample is then further sintered to complete the evaporation of the residual water and the final consolidation of the ceramic microstructure. During ice-templating, a few variables can be controlled to influence the pore size and morphology of the microstructure. 
These important variables are the initial solids loading of the colloid, the cooling rate, the sintering temperature and duration, and the use of certain additives which can influence the micro-structural morphology during the process. A good understanding of these parameters is essential to understanding the relationships between processing, microstructure, and mechanical properties of anisotropically porous materials. Some ceramics are semiconductors. Most of these are transition metal oxides that are II-VI semiconductors, such as zinc oxide. While there are prospects of mass-producing blue LEDs from zinc oxide, ceramicists are most interested in the electrical properties that show grain boundary effects. One of the most widely used of these is the varistor. These are devices that exhibit the property that resistance drops sharply at a certain threshold voltage. Once the voltage across the device reaches the threshold, there is a breakdown of the electrical structure in the vicinity of the grain boundaries, which results in its electrical resistance dropping from several megohms down to a few hundred ohms. The major advantage of these is that they can dissipate a lot of energy, and they self-reset – after the voltage across the device drops below the threshold, its resistance returns to being high. This makes them ideal for surge-protection applications; as there is control over the threshold voltage and energy tolerance, they find use in all sorts of applications. The best demonstration of their ability can be found in electrical substations, where they are employed to protect the infrastructure from lightning strikes. They have rapid response, are low maintenance, and do not appreciably degrade from use, making them virtually ideal devices for this application. Semiconducting ceramics are also employed as gas sensors. When various gases are passed over a polycrystalline ceramic, its electrical resistance changes. 
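The varistor's threshold behaviour described earlier can be caricatured as a two-state resistance model; the threshold voltage and resistance values below are illustrative only, not taken from any datasheet:

```python
def varistor_resistance(voltage, threshold=390.0,
                        r_high=20e6, r_low=250.0):
    """Idealized varistor: very high resistance below the threshold
    voltage, collapsing to a few hundred ohms above it (all values
    here are illustrative)."""
    return r_low if abs(voltage) >= threshold else r_high

# Below threshold the device is effectively an open circuit:
print(varistor_resistance(230.0))   # 20000000.0
# Above it, the device shunts the surge:
print(varistor_resistance(1500.0))  # 250.0
```

A real metal-oxide varistor's transition is steep but continuous rather than a hard step; the two-state model only captures the protective idea of the grain-boundary breakdown described above.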
With tuning to the possible gas mixtures, very inexpensive devices can be produced. At sufficiently low temperatures, some ceramics exhibit superconductivity; because their critical temperatures are high compared with those of metallic superconductors, they are known as high-temperature superconductors. The reason for this behavior is not fully understood, but there are two major families of superconducting ceramics. Piezoelectricity, a link between electrical and mechanical response, is exhibited by a large number of ceramic materials, including the quartz used to measure time in watches and other electronics. Such devices use both properties of piezoelectrics, using electricity to produce a mechanical motion (powering the device) and then using this mechanical motion to produce electricity (generating a signal). The unit of time measured is the natural interval required for electricity to be converted into mechanical energy and back again. The piezoelectric effect is generally stronger in materials that also exhibit pyroelectricity, and all pyroelectric materials are also piezoelectric. These materials can be used to inter-convert between thermal, mechanical, or electrical energy; for instance, after synthesis in a furnace, a pyroelectric crystal allowed to cool under no applied stress generally builds up a static charge of thousands of volts. Such materials are used in motion sensors, where the tiny rise in temperature from a warm body entering the room is enough to produce a measurable voltage in the crystal. In turn, pyroelectricity is seen most strongly in materials which also display the ferroelectric effect, in which a stable electric dipole can be oriented or reversed by applying an electrostatic field. Pyroelectricity is also a necessary consequence of ferroelectricity. This can be used to store information in ferroelectric capacitors, elements of ferroelectric RAM. The most common such materials are lead zirconate titanate and barium titanate. 
Aside from the uses mentioned above, their strong piezoelectric response is exploited in the design of high-frequency loudspeakers, transducers for sonar, and actuators for atomic force and scanning tunneling microscopes. Increases in temperature can cause grain boundaries to suddenly become insulating in some semiconducting ceramic materials, mostly mixtures of heavy metal titanates. The critical transition temperature can be adjusted over a wide range by variations in chemistry. In such materials, current will pass through the material until joule heating brings it to the transition temperature, at which point the circuit will be broken and current flow will cease. Such ceramics are used as self-controlled heating elements in, for example, the rear-window defrost circuits of automobiles. At the transition temperature, the material's dielectric response becomes theoretically infinite. While a lack of temperature control would rule out any practical use of the material near its critical temperature, the dielectric effect remains exceptionally strong even at much higher temperatures. Titanates with critical temperatures far below room temperature have become synonymous with "ceramic" in the context of ceramic capacitors for just this reason. Optically transparent materials focus on the response of a material to incoming lightwaves of a range of wavelengths. Frequency selective optical filters can be utilized to alter or enhance the brightness and contrast of a digital image. Guided lightwave transmission via frequency selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation, though low powered, is virtually lossless. 
Optical waveguides are used as components in integrated optical circuits (e.g. light-emitting diodes, LEDs) or as the transmission medium in local and long-haul optical communication systems. Also of value to the emerging materials scientist is the sensitivity of materials to radiation in the thermal infrared (IR) portion of the electromagnetic spectrum. This heat-seeking ability is responsible for such diverse optical phenomena as night vision and IR luminescence. Thus, there is an increasing need in the military sector for high-strength, robust materials which have the capability to transmit light (electromagnetic waves) in the visible (0.4–0.7 micrometers) and mid-infrared (1–5 micrometers) regions of the spectrum. These materials are needed for applications requiring transparent armor, including next-generation high-speed missiles and pods, as well as protection against improvised explosive devices (IEDs). In the 1960s, scientists at General Electric (GE) discovered that under the right manufacturing conditions, some ceramics, especially aluminium oxide (alumina), could be made translucent. These translucent materials were transparent enough to be used for containing the electrical plasma generated in high-pressure sodium street lamps. During the past two decades, additional types of transparent ceramics have been developed for applications such as nose cones for heat-seeking missiles, windows for fighter aircraft, and scintillation counters for computed tomography scanners. In the early 1970s, Thomas Soules pioneered computer modeling of light transmission through translucent ceramic alumina. His model showed that microscopic pores in ceramic, mainly trapped at the junctions of microcrystalline grains, caused light to scatter and prevented true transparency. The volume fraction of these microscopic pores had to be less than 1% for high-quality optical transmission. This is basically a particle size effect. 
Opacity results from the incoherent scattering of light at surfaces and interfaces. In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent. In the formation of polycrystalline materials (metals and ceramics) the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus a reduction of the original particle size below the wavelength of visible light (~ 0.5 micrometers for shortwave violet) eliminates any light scattering, resulting in a transparent material. Recently, Japanese scientists have developed techniques to produce ceramic parts that rival the transparency of traditional crystals (grown from a single seed) and exceed the fracture toughness of a single crystal. In particular, scientists at the Japanese firm Konoshima Ltd., a producer of ceramic construction materials and industrial chemicals, have been looking for markets for their transparent ceramics. Livermore researchers realized that these ceramics might greatly benefit high-powered lasers used in the National Ignition Facility (NIF) Programs Directorate. In particular, a Livermore research team began to acquire advanced transparent ceramics from Konoshima to determine if they could meet the optical requirements needed for Livermore's Solid-State Heat Capacity Laser (SSHCL). Livermore researchers have also been testing applications of these materials for applications such as advanced drivers for laser-driven fusion power plants. A composite material of ceramic and metal is known as cermet. 
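The two conditions discussed above, residual porosity under about 1% by volume and grain size below the shortest visible wavelength, amount to a simple rule of thumb, sketched here with illustrative numbers:

```python
def is_optically_transparent(grain_size_um, pore_fraction,
                             wavelength_um=0.5):
    """Rule of thumb from the discussion above: scattering becomes
    negligible when grains (and grain boundaries) are smaller than the
    shortest visible wavelength (~0.5 um for shortwave violet) and
    residual porosity stays under about 1% by volume."""
    return grain_size_um < wavelength_um and pore_fraction < 0.01

print(is_optically_transparent(0.3, 0.005))  # fine-grained and dense: True
print(is_optically_transparent(2.0, 0.005))  # coarse grains scatter light: False
```

A real transmission model (such as Soules's) computes how much light the pores and boundaries scatter; this sketch only encodes the two threshold criteria stated in the text.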
Other ceramic materials, generally requiring greater purity in their make-up than those above, include forms of several chemical compounds. For convenience, ceramic products are usually divided into four main types. Frequently, the raw materials of modern ceramics do not include clays. Ceramics can also be classified into three distinct material categories, each of which can be developed into unique material properties because ceramics tend to be crystalline.
https://en.wikipedia.org/wiki?curid=6458
Wuxing (Chinese philosophy) The wuxing (五行), also known as the Five Elements, Five Agents, Five Movements, Five Phases, Five Planets, Five Processes, Five Stages, Five Steps, or Five Ways, is the short form of "wǔ zhǒng liúxíng zhī qì", "the five types of chi dominating at different times". It is a fivefold conceptual scheme that many traditional Chinese fields used to explain a wide array of phenomena, from cosmic cycles to the interaction between internal organs, and from the succession of political regimes to the properties of medicinal drugs. The "Five Phases" are Fire (火 "huǒ"), Water (水 "shuǐ"), Wood (木 "mù"), Metal (金 "jīn"), and Earth (土 "tǔ"). This order of presentation is known as the "days of the week" sequence. In the order of "mutual generation" (相生 "xiāngshēng"), they are Wood, Fire, Earth, Metal, and Water. In the order of "mutual overcoming" (相克 "xiāngkè"), they are Wood, Earth, Water, Fire, and Metal. The system of five phases was used for describing interactions and relationships between phenomena. After it came to maturity in the second or first century BCE during the Han dynasty, this device was employed in many fields of early Chinese thought, including seemingly disparate fields such as Yi jing divination, feng shui, astrology, traditional Chinese medicine, music, military strategy, and martial arts. "Xíng" (行) of "wǔxíng" (五行) means moving; a planet is called a 'moving star' (行星 "xíngxīng") in Chinese. Wǔxíng originally refers to the five major planets (Jupiter, Saturn, Mercury, Mars, Venus) that create five dimensions of earth life. "Wǔxíng" is also widely translated as "Five Elements", and this is used extensively by many, including practitioners of Five Element acupuncture. This translation arose by false analogy with the Western system of the four elements. 
Whereas the classical Greek elements were concerned with substances or natural qualities, the Chinese "xíng" are "primarily concerned with process and change," hence the common translation as "phases" or "agents". By the same token, "Mù" (木) is thought of as "Tree" rather than "Wood". The word "element" is thus used within the context of Chinese medicine with a different meaning to its usual meaning. It should be recognized that the word "phase", although commonly preferred, is not perfect. "Phase" is a better translation for the five "seasons" (五運 "wǔyùn") mentioned below, and so "agents" or "processes" might be preferred for the primary term "xíng". Manfred Porkert attempts to resolve this by using "Evolutive Phase" for "wǔxíng" and "Circuit Phase" for "wǔyùn", but these terms are unwieldy. Some of the Mawangdui Silk Texts (no later than 168 BC) also present the "wǔxíng" as "five virtues" or types of activities. Within Chinese medicine texts the "wǔxíng" are also referred to as "wǔyùn" (五運) or a combination of the two characters ("wǔxíngyùn"); these emphasise the correspondence of five elements to five 'seasons' (four seasons plus one). Another tradition refers to the "wǔxíng" as "wǔdé" (五德), the Five Virtues. The five phases are around 72 days each and are usually used to describe states in nature. The doctrine of five phases describes two cycles, a generating or creation (生 "shēng") cycle, also known as "mother-son", and an overcoming or destruction (克 "kè") cycle, also known as "grandfather-grandson", of interactions between the phases. Within Chinese medicine the effects of these two main relations are further elaborated: Common verbs for the "shēng" cycle include "generate", "create" or "strengthen", as well as "grow" or "promote". The phase interactions in the "shēng" cycle are: A deficient "shēng" cycle is called the "xiè" cycle and is the reverse of the "shēng" cycle. Common verbs for the "xiè" cycle include "weaken", "drain", "diminish" or "exhaust". 
The phase interactions in the "xiè" cycle are: Common verbs for the "kè" cycle include "controls", "restrains" and "fathers", as well as "overcome" or "regulate". The phase interactions in the "kè" cycle are: An excessive "kè" cycle is called the "chéng" cycle. Common verbs for the "chéng" cycle include "restrict", "overwhelm", "dominate" or "destroy". The phase interactions in the "chéng" cycle are: A deficient "kè" cycle is called the "wǔ" cycle and is the reverse of the "kè" cycle. Common verbs for the "wǔ" cycle can include "insult" or "harm". The phase interactions in the "wǔ" cycle are: According to wuxing theory, the structure of the cosmos mirrors the five phases. Each phase has a complex series of associations with different aspects of nature, as can be seen in the following table. In the ancient Chinese form of geomancy, known as Feng Shui, practitioners all based their art and system on the five phases (wuxing). All of these phases are represented within the trigrams. Associated with these phases are colors, seasons and shapes; all of which are interacting with each other. Based on a particular directional energy flow from one phase to the next, the interaction can be expansive, destructive, or exhaustive. A proper knowledge of each aspect of energy flow will enable the Feng Shui practitioner to apply certain cures or rearrangement of energy in a way they believe to be beneficial for the receiver of the Feng Shui Treatment. According to the Warring States period political philosopher Zou Yan (c. 305–240 BCE), each of the five elements possesses a personified "virtue" ("de" ), which indicates the foreordained destiny ("yun" ) of a dynasty; accordingly, the cyclic succession of the elements also indicates dynastic transitions. 
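The two orderings given earlier can be encoded directly; this is a minimal sketch following the generation and overcoming sequences stated in the text (the function names are invented for illustration):

```python
# The five phases in "mutual generation" (sheng) order:
# each phase generates the next, cyclically.
SHENG = ["Wood", "Fire", "Earth", "Metal", "Water"]
# "Mutual overcoming" (ke) order: each phase overcomes the next.
KE = ["Wood", "Earth", "Water", "Fire", "Metal"]

def generates(phase):
    """Phase that `phase` generates in the sheng ("mother-son") cycle."""
    return SHENG[(SHENG.index(phase) + 1) % 5]

def overcomes(phase):
    """Phase that `phase` overcomes in the ke ("grandfather-grandson") cycle."""
    return KE[(KE.index(phase) + 1) % 5]

print(generates("Wood"))   # Fire
print(overcomes("Water"))  # Fire
```

Walking either list in reverse reproduces the derived cycles as well: the xiè cycle is the shēng order reversed, and the wǔ cycle is the kè order reversed.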
Zou Yan claims that the Mandate of Heaven sanctions the legitimacy of a dynasty by sending self-manifesting auspicious signs in the ritual color (yellow, blue, white, red, and black) that matches the element of the new dynasty (Earth, Wood, Metal, Fire, and Water). From the Qin dynasty onward, most Chinese dynasties invoked the theory of the Five Elements to legitimize their reign. The interdependence of zang-fu networks in the body was said to be a circle of five things, and so mapped by the Chinese doctors onto the five phases. In order to explain the integrity and complexity of the human body, Chinese medical scientists used the Five Elements theory to classify the human body's organs, physiological activities, and pathological reactions. In Ziwei, "neiyin" () or the method of divination is the further classification of the Five Elements into 60 "ming" (), or life orders, based on the ganzhi. Similar to the astrology zodiac, the ming is used by fortune-tellers to analyse a person's personality and future fate. The "Yuèlìng" chapter () of the "Lǐjì" () and the "Huáinánzǐ" () make the following correlations: T'ai chi ch'uan uses the five elements to designate different directions, positions or footwork patterns. Either forward, backward, left, right and centre, or three steps forward (attack) and two steps back (retreat). The Five Steps ( wǔ bù): Xingyiquan uses the five elements metaphorically to represent five different states of combat. There are spring, summer, fall, and winter teas. The perennial tea ceremony includes four tea settings () and a tea master (). Each tea setting is arranged and stands for the four directions (North, South, East, and West). A vase of the seasons' flowers is put on the tea table. The tea settings are:
https://en.wikipedia.org/wiki?curid=6459
Church of Christ, Scientist The Church of Christ, Scientist was founded in 1879 in Boston, Massachusetts, by Mary Baker Eddy, author of "Science and Health with Key to the Scriptures" and founder of Christian Science. The church was founded "to commemorate the word and works of [Christ Jesus]" and "reinstate primitive Christianity and its lost element of healing". Sunday services are held throughout the year, and weekly testimony meetings are held on Wednesday evenings, where, following brief readings from the Bible and the Christian Science textbook, those in attendance are invited to give testimonies of healing brought about through Christian Science prayer. In the early decades of the 20th century, Christian Science churches sprang up in communities around the world, though in the last several decades of that century there was a marked decline in membership, except in Africa, where there has been growth. Headquartered in Boston, the church does not officially report membership, and estimates of worldwide membership range from under 100,000 to about 400,000. The church was incorporated by Mary Baker Eddy in 1879 following a claimed personal healing in 1866, which she said resulted from reading the Bible. The Bible and Eddy's textbook on Christian healing, "Science and Health with Key to the Scriptures", are together the church's key doctrinal sources and have been ordained as the church's "dual impersonal pastor". The First Church of Christ, Scientist, is widely known for its publications, especially "The Christian Science Monitor", a weekly newspaper published internationally in print and online. The seal of Christian Science is a cross and crown with the words "Heal the sick, raise the dead, cleanse the lepers, cast out demons," and is a registered trademark of the church. Christian Scientists believe that prayer is effective. 
The Church has collected over 50,000 testimonies of incidents that it considers to be healings through Christian Science treatment alone. While most of these testimonies represent ailments neither diagnosed nor treated by medical professionals, the Church requires three other people to vouch for any testimony published in any of its official organs, including the "Christian Science Journal", "Christian Science Sentinel", and "Herald of Christian Science"; verifiers say that they witnessed the healing or know the testifier well enough to vouch for them. Christian Scientists may take an intensive two-week "Primary" class from an authorized Christian Science teacher. Those who wish to become "Journal-listed" (accredited) practitioners, devoting themselves full-time to the practice of healing, must first have Primary class instruction. When they have what the church regards as a record of healing, they may submit their names for publication in the directory of practitioners and teachers in the "Christian Science Journal." A practitioner who has been listed for at least three years may apply for "Normal" class instruction, given once every three years. Those who receive a certificate are authorized to teach. Both Primary and Normal classes are based on the Bible and the writings of Mary Baker Eddy. The Primary class focuses on the chapter "Recapitulation" in "Science and Health with Key to the Scriptures". This chapter uses the Socratic method of teaching and contains the "Scientific Statement of Being". The "Normal" class focuses on the platform of Christian Science, contained on pages 330–340 of "Science and Health." The First Church of Christ, Scientist is the legal title of the Mother Church and administrative headquarters of the Christian Science Church. The complex is located in a plaza alongside Huntington Avenue in the Back Bay neighborhood of Boston, Massachusetts. 
The church itself was built in 1894, and an annex larger in footprint than the original structure was added in 1906. It boasts one of the world's largest pipe organs, built by the Aeolian-Skinner Company of Boston. The Mary Baker Eddy Library for the Betterment of Humanity is housed in an 11-story structure originally built for The Christian Science Publishing Society between 1932 and 1934, and the present plaza was constructed in the late 1960s and early 1970s to include a 28-story administration building, a colonnade, and a reflecting pool with fountain, designed by Araldo Cossutta of I. M. Pei and Partners (now Pei Cobb Freed). Branch churches of The Mother Church may take titles such as "First Church of Christ, Scientist" or "Second Church of Christ, Scientist", but the article "The" must not be used, presumably to concede the primacy of the Boston Mother Church. An international newspaper, the "Christian Science Monitor", founded by Eddy in 1908 and winner of seven Pulitzer prizes, is published by the church through the Christian Science Publishing Society. Branch Christian Science churches and Christian Science societies are subordinate to the Mother Church, but are self-governed. They have their own by-laws, bank accounts, assets and officers, but in order to be recognised must abide by the by-laws in the "Manual of The Mother Church". Church services are regulated by the "Manual," the set of by-laws written by Eddy, that establishes the church organization and explains the duties and responsibilities of members, officers, practitioners, teachers and nurses; and establishes rules for discipline and other aspects of church business. The Christian Science Board of Directors is a five-person executive entity created by Mary Baker Eddy to conduct the business of the Christian Science Church under the terms defined in the by-laws of the "Church Manual". Its functions and restrictions are defined by the "Manual." 
The Board (occasionally CSBD or the BoD for short) also includes functions defined by a Deed of Trust written by Eddy (one of several, in fact) under which it consisted of four persons, though she later expanded the Board to five persons, thus in effect leaving one of its members out of Deed functions. This later bore on a dispute during the 1920s, known as the Great Litigation in CS circles, pivoting on whether the CSBD could remove trustees of the Christian Science Publishing Society or whether the CSPS trustees were established independently. While Eddy's Manual established limited executive functions under the rule of law in place of a traditional hierarchy, the controversial 1991 publication of a book by Bliss Knapp led the then Board of Directors to make the unusual affidavit, during a suit over Knapp's estate, that neither acts by it violating the "Manual," nor acts refraining from required action, constituted violations of the "Manual". A traditionally minded minority held that the Board's act in publishing Knapp's book constituted a fundamental violation of several by-laws and its legal trust, automatically mandating the offending Board members' resignations under Article I, Section 9. Another minority believed that Eddy intended various requirements for her consent (in their view, "estoppels") to effect the church's dissolution on her death, since they could no longer be followed literally. Ironically, one of the stronger arguments against this position came from an individual highly respected in that theological quarter, Bliss Knapp, who claimed that Eddy understood through her lawyer that these consent clauses would not hinder normal operation after her decease. Churches worldwide hold a one-hour service each Sunday, consisting of hymns, prayer, and currently, readings from the "King James Version" (KJV) of the Bible (although there is no requirement that this version of the Bible be used) and "Science and Health with Key to the Scriptures". 
These readings are the weekly Lesson-Sermon, which is read aloud at all Sunday services in all Christian Science churches worldwide, and is studied by individuals at home throughout the preceding week. The Lesson, as it is informally called, is compiled by a committee at The Mother Church, and is usually made up of six sections, each of which consists of passages from the Bible (read by the Second Reader) and passages from "Science and Health" (read by the First Reader). Eddy selected 26 subjects for the Lesson-Sermon. These Lessons run in continuous rotation in the order she established, hence each subject is studied twice a year. In years in which there are 53 Sundays, the topic "Christ Jesus" occurs a third time, in December. In addition, there is a special, shortened Lesson-Sermon for Thanksgiving Day. Branch churches outside the United States may schedule their Thanksgiving service when convenient for them, most choosing a day in October or November; the Thanksgiving Day proclamation by the United States president may be omitted. Because there are no clergy in the church, branch church Sunday services are conducted by two Readers: the First Reader, who reads passages from Science and Health, and the Second Reader, who reads passages from the Bible. First Readers determine the beginning "scriptural selection", hymns to be sung on Sundays, and the benediction. The vast majority of the service is the reading of the weekly Bible lesson supplied by Boston, and the order of service is set out by the Manual. To be elected First Reader in one's branch church is one of the highest and most important positions the lay Christian Scientist may aspire to. Churches also hold a one-hour Wednesday evening testimony meeting, with similar readings, after which, those in attendance are invited to share accounts of healing through prayer. At these services, the First Reader reads passages from the Bible and Science and Health. 
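The rotation arithmetic described above can be sketched in a few lines of Python. The 26 subject names are not reproduced here, so numbered placeholders stand in for Eddy's subjects (an illustrative assumption, not church data); the sketch simply shows why a continuous 26-subject rotation yields exactly two occurrences of each subject in an ordinary 52-Sunday year.

```python
from collections import Counter

SUBJECT_COUNT = 26  # Eddy selected 26 Lesson-Sermon subjects

def subject_for_sunday(week_index):
    """Continuous rotation: the subject index simply cycles in fixed order."""
    return week_index % SUBJECT_COUNT

# In an ordinary 52-Sunday year the 26-subject cycle completes exactly twice,
# so every subject is studied twice a year.
counts = Counter(subject_for_sunday(w) for w in range(52))
assert all(n == 2 for n in counts.values())
```

A 53-Sunday year breaks this symmetry, which is why, per the passage above, one topic ("Christ Jesus") is assigned a third occurrence in December rather than letting the rotation drift.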
In a departure from more than 120 years of denominational practice, English-language churches may now choose alternate Bible translations at these services (e.g. Phillips). Branch churches also sponsor annual public talks (called lectures) given by speakers selected annually by the Board of Lectureship in Boston. Beginning in the mid-1980s, church executives undertook a controversial and ambitious foray into electronic broadcast media. The first significant effort was to create a weekly half-hour syndicated television program, The Christian Science Monitor Reports. "Monitor Reports" was anchored in its first season by newspaper veteran Rob Nelson. He was replaced in the second season by the "Christian Science Monitor"'s former Moscow correspondent, David Willis. The program was usually broadcast by independent stations, often at odd hours. In 1988, Monitor Reports was supplanted by a nightly half-hour news show, World Monitor, which was broadcast by the Discovery Channel. The program was anchored by veteran journalist John Hart. The Church then purchased a Boston cable television station for elaborate in-house programming production. In parallel, the church purchased a shortwave radio station and syndicated radio production to National Public Radio. However, revenues fell far short of optimistic predictions by church managers, who had ignored early warnings by members and media experts. In October 1991, after a series of conflicts over the boundaries between Christian Science teachings and his journalistic independence, John Hart resigned. The Monitor Channel went off the air in June 1992. Most of the other operations closed in well under a decade. Public accounts in both the mainstream and trade media reported that the church lost approximately $250 million on these ventures. The hundreds of millions lost on broadcasting brought the church to the brink of bankruptcy. 
However, with the 1991 publication of "The Destiny of The Mother Church" by the late Bliss Knapp, the church secured a $90 million bequest from the Knapp trust. The trust dictated that the book be published as "Authorized Literature," with neither modification nor comment. Historically, the church had censured Knapp for deviating at several points from Eddy's teaching, and had refused to publish the work. The church's archivist, fired in anticipation of the book's publication, wrote to branch churches to inform them of the book's history. Many Christian Scientists thought the book violated the church's by-laws, and the editors of the church's religious periodicals and several other church employees resigned in protest. Alternate beneficiaries subsequently sued to contest the church's claim it had complied fully with the will's terms, and the church ultimately received only half of the original sum. The fallout of the broadcasting debacle also sparked a minor revolt among some prominent church members. In late 1993, a group of Christian Scientists filed suit against the Board of Directors, alleging a willful disregard for the Manual of the Mother Church in its financial dealings. The suit was thrown out by the Supreme Judicial Court of Massachusetts in 1997, but a lingering discontent with the church's financial matters persists to this day. In spite of its early meteoric rise, church membership has declined over the past eight decades, according to the church's former treasurer, J. Edward Odegaard. Though the Church is prohibited by the Manual from publishing membership figures, the number of branch churches in the United States has fallen steadily since World War II. In 2009, for the first time in church history, more new members came from Africa than the United States. In 2005, the "Boston Globe" reported that the church was considering consolidating Boston operations into fewer buildings and leasing out space in buildings it owned. Church official Philip G. 
Davis noted that the administration and Colonnade buildings had not been fully used for many years and that vacancy increased after staff reductions in 2004. The church posted an $8 million financial loss in fiscal 2003, and in 2004 cut 125 jobs, a quarter of the staff, at the "Christian Science Monitor". Conversely, Davis noted that "the financial situation right now is excellent" and stated that the church was not facing financial problems.
https://en.wikipedia.org/wiki?curid=6462
Connecticut Connecticut is the southernmost state in the New England region of the northeastern United States. As of the 2010 Census, it has the highest per-capita income, Human Development Index (0.962), and median household income in the United States. It is bordered by Rhode Island to the east, Massachusetts to the north, New York to the west, and Long Island Sound to the south. Its capital is Hartford and its most populous city is Bridgeport. According to most sources, it is part of New England, although large portions of it are often grouped with New York and New Jersey as the tri-state area instead. The state is named for the Connecticut River, which approximately bisects the state. The word "Connecticut" is derived from various anglicized spellings of a Mohegan-Pequot word for "long tidal river". Connecticut's first European settlers were Dutchmen who established a small, short-lived settlement called Fort Hoop in Hartford at the confluence of the Park and Connecticut Rivers. Half of Connecticut was initially part of the Dutch colony New Netherland, which included much of the land between the Connecticut and Delaware Rivers, although the first major settlements were established in the 1630s by the English. Thomas Hooker led a band of followers from the Massachusetts Bay Colony and founded the Connecticut Colony; other settlers from Massachusetts founded the Saybrook Colony and the New Haven Colony. The Connecticut and New Haven colonies established founding documents, the Fundamental Orders and the Fundamental Agreement, considered among the first constitutions in America. In 1662, the three colonies were merged under a royal charter, making Connecticut a crown colony. This was one of the Thirteen Colonies which rejected British rule in the American Revolution. Connecticut is the third smallest state by area, the 29th most populous, and the fourth most densely populated of the fifty states. 
It is known as the "Constitution State", the "Nutmeg State", the "Provisions State", and the "Land of Steady Habits". It was influential in the development of the federal government of the United States (see Connecticut Compromise). The Connecticut River, Thames River, and ports along Long Island Sound have given Connecticut a strong maritime tradition which continues today. The state also has a long history of hosting the financial services industry, including insurance companies in Hartford and hedge funds in Fairfield County. Connecticut is bordered on the south by Long Island Sound, on the west by New York, on the north by Massachusetts, and on the east by Rhode Island. The state capital and fourth largest city is Hartford, and other major cities and towns (by population) include Bridgeport, New Haven, Stamford, Waterbury, Norwalk, Danbury, New Britain, Greenwich, and Bristol. Connecticut is slightly larger than the country of Montenegro. There are 169 incorporated towns in Connecticut. The highest peak in Connecticut is Bear Mountain in Salisbury in the northwest corner of the state. The highest point is just east of where Connecticut, Massachusetts, and New York meet (42°3′ N, 73°29′ W), on the southern slope of Mount Frissell, whose peak lies nearby in Massachusetts. At the opposite extreme, many of the coastal towns have areas that are less than 20 feet (6 m) above sea level. Connecticut has a long maritime history and a reputation based on that history—yet the state has no direct oceanfront (technically speaking). The coast of Connecticut sits on Long Island Sound, which is an estuary. The state's access to the open Atlantic Ocean is both to the west (toward New York City) and to the east (toward the "race" near Rhode Island). This situation provides many safe harbors from ocean storms, and many transatlantic ships seek anchor inside Long Island Sound when tropical cyclones pass off the upper East Coast. 
The Connecticut River cuts through the center of the state, flowing into Long Island Sound. The most populous metropolitan region centered within the state lies in the Connecticut River Valley. Despite Connecticut's relatively small size, it features wide regional variations in its landscape; for example, in the northwestern Litchfield Hills, it features rolling mountains and horse farms, whereas in areas to the east of New Haven along the coast, the landscape features coastal marshes, beaches, and large scale maritime activities. Connecticut's rural areas and small towns in the northeast and northwest corners of the state contrast sharply with its industrial cities such as Stamford, Bridgeport, and New Haven, located along the coastal highways from the New York border to New London, then northward up the Connecticut River to Hartford. Many towns in northeastern and northwestern Connecticut center around a green, such as the Litchfield Green, Lebanon Green (the largest in New England), Milford Green (second largest in New England) and Wethersfield Green (the oldest in the state). Near the green typically stand historical visual symbols of New England towns, such as a white church, a colonial meeting house, a colonial tavern or inn, several colonial houses, and so on, establishing a scenic historical appearance maintained for both historic preservation and tourism. Many of the areas in southern and coastal Connecticut have been built up and rebuilt over the years, and look less visually like traditional New England. The northern boundary of the state with Massachusetts is marked by the Southwick Jog or Granby Notch, an approximately square detour into Connecticut. The origin of this anomaly is clearly established in a long line of disputes and temporary agreements which were finally concluded in 1804, when southern Southwick's residents sought to leave Massachusetts, and the town was split in half. 
The southwestern border of Connecticut where it abuts New York State is marked by a panhandle in Fairfield County, containing the towns of Greenwich, Stamford, New Canaan, Darien, and parts of Norwalk and Wilton. This irregularity in the boundary is the result of territorial disputes in the late 17th century, culminating with New York giving up its claim to the area, whose residents considered themselves part of Connecticut, in exchange for an equivalent area extending northwards from Ridgefield to the Massachusetts border, as well as undisputed claim to Rye, New York. Areas maintained by the National Park Service include Appalachian National Scenic Trail, Quinebaug and Shetucket Rivers Valley National Heritage Corridor, and Weir Farm National Historic Site. Connecticut lies at the rough transition zone between the southern end of the humid continental climate, and the northern portion of the humid subtropical climate. Northern Connecticut generally experiences a climate with cold winters with moderate snowfall and hot, humid summers. Far southern and coastal Connecticut has a climate with cool winters with a mix of rain and infrequent snow, and the long hot and humid summers typical of the middle and lower East Coast. Connecticut sees a fairly even precipitation pattern with rainfall/snowfall spread throughout the 12 months. Connecticut averages 56% of possible sunshine (higher than the U.S. national average), averaging 2,400 hours of sunshine annually. Early spring (April) can range from slightly cool (40s to low 50s F) to warm (65 to 70 F), while mid and late spring (late April/May) is warm. By late May, the building Bermuda High creates a southerly flow of warm and humid tropical air, bringing hot weather conditions throughout the state, with average highs in New London of and in Windsor Locks at the peak of summer in late July. On occasion, heat waves with highs from 90 to occur across Connecticut. 
Although summers are sunny in Connecticut, quick-moving summer thunderstorms can bring brief downpours with thunder and lightning. Occasionally these thunderstorms can be severe, and the state usually averages one tornado per year. During hurricane season, the remains of tropical cyclones occasionally affect the region, though a direct hit is rare. Weather commonly associated with the fall season typically begins in October and lasts to the first days of December. Daily high temperatures in October and November range from the 50s to 60s (Fahrenheit) with nights in the 40s and upper 30s. Colorful foliage begins across northern parts of the state in early October and moves south and east, reaching southeast Connecticut by early November. Far southern and coastal areas, however, have more oak and hickory trees (and fewer maples) and are often less colorful than areas to the north. By December daytime highs are in the 40s °F for much of the state, and average overnight lows are below freezing. Winters (December through mid-March) are generally cold from south to north in Connecticut. The coldest month (January) has average high temperatures ranging from in the coastal lowlands to in the inland and northern portions of the state. The average yearly snowfall ranges from about in the higher elevations of the northern portion of the state to only along the southeast coast of Connecticut (Branford to Groton). Generally, any locale north or west of Interstate 84 receives the most snow, both during a storm and throughout the season. Most of Connecticut has less than 60 days of snow cover. Snow usually falls from late November to late March in the northern part of the state, and from early December to mid-March in the southern and coastal parts of the state. Connecticut's record high temperature is which occurred in Danbury on July 15, 1995; the record low is which occurred in the Northwest Hills at Falls Village on February 16, 1943, and Coventry on January 22, 1961. 
Forests consist of a mix of Northeastern coastal forests of Oak in southern areas of the state, to the upland New England-Acadian forests in the northwestern parts of the state. Mountain Laurel (Kalmia latifolia) is the state flower and is native to low ridges in several parts of Connecticut. Rosebay Rhododendron (Rhododendron maximum) is also native to eastern uplands of Connecticut and Pachaug State Forest is home to the Rhododendron Sanctuary Trail. Atlantic white cedar (Chamaecyparis thyoides), is found in wetlands in the southern parts of the state. Connecticut has one native cactus (Opuntia humifusa), found in sandy coastal areas and low hillsides. Several types of beach grasses and wildflowers are also native to Connecticut. Connecticut spans USDA Plant Hardiness Zones 5b to 7a. Coastal Connecticut is the broad transition zone where more southern and subtropical plants are cultivated. In some coastal communities, Magnolia grandiflora (southern magnolia), Crape Myrtles, scrub palms (Sabal minor), and other broadleaved evergreens are cultivated in small numbers. The name Connecticut is derived from the Mohegan-Pequot word that has been translated as "long tidal river" and "upon the long river", referring to the Connecticut River. The Connecticut region was inhabited by multiple Indian tribes before European settlement and colonization, including the Mohegans, the Pequots, and the Paugusetts. The first European explorer in Connecticut was Dutchman Adriaen Block, who explored the region in 1614. Dutch fur traders then sailed up the Connecticut River, which they called Versche Rivier ("Fresh River"), and built a fort at Dutch Point in Hartford that they named "House of Hope" (). The Connecticut Colony was originally a number of separate, smaller settlements at Windsor, Wethersfield, Saybrook, Hartford, and New Haven. The first English settlers came in 1633 and settled at Windsor, and then at Wethersfield the following year. 
John Winthrop the Younger of Massachusetts received a commission to create Saybrook Colony at the mouth of the Connecticut River in 1635. The main body of settlers came in one large group in 1636. They were Puritans from Massachusetts Bay Colony led by Thomas Hooker, who established the Connecticut Colony at Hartford. The Quinnipiack Colony was established by John Davenport, Theophilus Eaton, and others at New Haven in March 1638. The New Haven Colony had its own constitution called "The Fundamental Agreement of the New Haven Colony", signed on June 4, 1639. The settlements were established without official sanction of the English Crown, and each was an independent political entity. In 1662, Winthrop traveled to England and obtained a charter from Charles II which united the settlements of Connecticut. Historically important colonial settlements included Windsor (1633), Wethersfield (1634), Saybrook (1635), Hartford (1636), New Haven (1638), Fairfield (1639), Guilford (1639), Milford (1639), Stratford (1639), Farmington (1640), Stamford (1641), and New London (1646). The Pequot War marked the first major clash between colonists and Indians in New England. The Pequots reacted with increasing aggression to Colonial settlements in their territory—while simultaneously taking lands from the Narragansett and Mohegan tribes. Settlers responded to a murder in 1636 with a raid on a Pequot village on Block Island; the Pequots laid siege to Saybrook Colony's garrison that autumn, then raided Wethersfield in the spring of 1637. Colonists declared war on the Pequots, organized a band of militia and allies from the Mohegan and Narragansett tribes, and attacked a Pequot village on the Mystic River, with death toll estimates ranging between 300 and 700 Pequots. After suffering another major loss at a battle in Fairfield, the Pequots asked for a truce and peace terms. The western boundaries of Connecticut have been subject to change over time. 
The Hartford Treaty with the Dutch was signed on September 19, 1650, but it was never ratified by the British. According to it, the western boundary of Connecticut ran north from Greenwich Bay for a distance of , "provided the said line come not within 10 miles of Hudson River". This agreement was observed by both sides until war erupted between England and the Netherlands in 1652. Conflict continued concerning colonial limits until the Duke of York captured New Netherland in 1664. On the other hand, Connecticut's original Charter in 1662 granted it all the land to the "South Sea"—that is, to the Pacific Ocean. Most Colonial royal grants were for long east-west strips. Connecticut took its grant seriously and established a ninth county between the Susquehanna River and Delaware River named Westmoreland County. This resulted in the brief Pennamite Wars with Pennsylvania. Yale College was established in 1701, providing Connecticut with an important institution to educate clergy and civil leaders. The Congregational church dominated religious life in the colony and, by extension, town affairs in many parts. With more than 600 miles of coastline, including that along its navigable rivers, Connecticut during the colonial years developed the antecedents of a maritime tradition that would later produce booms in shipbuilding, marine transport, naval support, seafood production, and leisure boating. Historical records list the Tryall as the first vessel built in Connecticut Colony, in 1649 at a site on the Connecticut River in present-day Wethersfield. In the two decades leading up to 1776 and the American Revolution, Connecticut boatyards launched about 100 sloops, schooners and brigs according to a database of U.S. customs records maintained online by the Mystic Seaport Museum, the largest being the 180-ton Patient Mary launched in New Haven in 1763. Connecticut's first lighthouse was constructed in 1760 at the mouth of the Thames River with the New London Harbor Lighthouse. 
Connecticut designated four delegates to the Second Continental Congress who signed the Declaration of Independence: Samuel Huntington, Roger Sherman, William Williams, and Oliver Wolcott. Connecticut's legislature authorized the outfitting of six new regiments in 1775, in the wake of the clashes between British regulars and Massachusetts militia at Lexington and Concord. There were some 1,200 Connecticut troops on hand at the Battle of Bunker Hill in June 1775. In 1775, David Bushnell invented the Turtle, which the following year launched the first submarine attack in history, unsuccessfully against a British warship at anchor in New York Harbor. In 1777, the British got word of Continental Army supplies in Danbury, and they landed an expeditionary force of some 2,000 troops in Westport. This force then marched to Danbury and destroyed homes and much of the depot. Continental Army troops and militia led by General David Wooster and General Benedict Arnold engaged them on their return march at Ridgefield. For the winter of 1778–79, General George Washington decided to split the Continental Army into three divisions encircling New York City, where British General Sir Henry Clinton had taken up winter quarters. Major General Israel Putnam chose Redding as the winter encampment quarters for some 3,000 regulars and militia under his command. The Redding encampment allowed Putnam's soldiers to guard the replenished supply depot in Danbury and to support any operations along Long Island Sound and the Hudson River Valley. Some of the men were veterans of the winter encampment at Valley Forge, Pennsylvania the previous winter. Soldiers at the Redding camp endured supply shortages, cold temperatures, and significant snow, with some historians dubbing the encampment "Connecticut's Valley Forge". 
The state was also the launching site for a number of raids against Long Island orchestrated by Samuel Holden Parsons and Benjamin Tallmadge, and provided men and material for the war effort, especially to Washington's army outside New York City. General William Tryon raided the Connecticut coast in July 1779, focusing on New Haven, Norwalk, and Fairfield. New London and Groton Heights were raided in September 1781 by Benedict Arnold, who had turned traitor to the British. At the outset of the American Revolution, the Continental Congress assigned Nathaniel Shaw Jr. of New London as its naval agent in charge of recruiting privateers to seize British vessels as opportunities arose, with nearly 50 operating out of the Thames River, which eventually drew a reprisal from the British force led by Arnold. Connecticut ratified the U.S. Constitution on January 9, 1788, becoming the fifth state. The state prospered during the era following the American Revolution, as mills and textile factories were built and seaports flourished from trade and fisheries. After Congress established in 1790 the predecessor to the U.S. Revenue Cutter Service that would evolve into the U.S. Coast Guard, President Washington assigned Jonathan Maltbie as one of seven masters to enforce customs regulations, with Maltbie monitoring the southern New England coast with a 48-foot cutter sloop named Argus. In 1786, Connecticut ceded territory to the U.S. government that became part of the Northwest Territory. The state retained land extending across the northern part of present-day Ohio called the Connecticut Western Reserve. The Western Reserve section was settled largely by people from Connecticut, and they brought Connecticut place names to Ohio. Connecticut made agreements with Pennsylvania and New York which extinguished the land claims within those states' boundaries and created the Connecticut Panhandle. 
The state then ceded the Western Reserve in 1800 to the federal government, which brought it to its present boundaries (other than minor adjustments with Massachusetts). For the first time, in 1800, Connecticut shipwrights launched more than 100 vessels in a single year. Over the following decade, up to the renewed hostilities with Britain that sparked the War of 1812, Connecticut boatyards constructed close to 1,000 vessels, the most productive stretch of any decade in the 19th century. During the war, the British launched raids in Stonington and Essex and blockaded vessels in the Thames River. Derby native Isaac Hull became Connecticut's best-known naval figure to win renown during the conflict, as captain of the USS Constitution. The British blockade during the War of 1812 hurt exports and bolstered the influence of Federalists who opposed the war. The cessation of imports from Britain stimulated the construction of factories to manufacture textiles and machinery. Connecticut came to be recognized as a major center for manufacturing, due in part to the inventions of Eli Whitney and other early innovators of the Industrial Revolution. The war led to the development of fast clippers that helped extend the reach of New England merchants to the Pacific and Indian oceans. The first half of the 19th century also saw a rapid rise in whaling, with New London emerging as one of the New England industry's three biggest home ports after Nantucket and New Bedford. The state was known for its political conservatism, typified by its Federalist party and the Yale College of Timothy Dwight. The foremost intellectuals were Dwight and Noah Webster, who compiled his great dictionary in New Haven. Religious tensions polarized the state, as the Congregational Church struggled to maintain traditional viewpoints, in alliance with the Federalists. 
The failure of the Hartford Convention in 1814 hurt the Federalist cause, with the Democratic-Republican Party gaining control in 1817. Connecticut had been governed under the "Fundamental Orders" since 1639, but the state adopted a new constitution in 1818. Connecticut manufacturers played a major role in supplying the Union forces with weapons and supplies during the Civil War. The state furnished 55,000 men, formed into thirty full regiments of infantry, including two in the U.S. Colored Troops, with several Connecticut men becoming generals. The Navy attracted 250 officers and 2,100 men, and Glastonbury native Gideon Welles was Secretary of the Navy. James H. Ward of Hartford was the first U.S. Naval officer killed in the Civil War. Connecticut casualties included 2,088 killed in combat, 2,801 dying from disease, and 689 dying in Confederate prison camps. A surge of national unity in 1861 brought thousands flocking to the colors from every town and city. However, as the war became a crusade to end slavery, many Democrats (especially Irish Catholics) pulled back. The Democrats took a pro-slavery position and included many Copperheads willing to let the South secede. The intensely fought 1863 election for governor was narrowly won by the Republicans. Connecticut's extensive industry, dense population, flat terrain, and wealth encouraged the construction of railroads starting in 1839, and track mileage in operation grew rapidly between 1840 and 1860. The New York, New Haven and Hartford Railroad, called the "New Haven" or "The Consolidated", became the dominant Connecticut railroad company after 1872. J. P. Morgan began financing the major New England railroads in the 1890s, dividing territory so that they would not compete. The New Haven purchased 50 smaller companies, including steamship lines, and built a network of light rails (electrified trolleys) that provided inter-urban transportation for all of southern New England.
By 1912, the New Haven operated an extensive network with 120,000 employees. As steam-powered passenger ships proliferated after the Civil War, Noank produced the two largest built in Connecticut during the 19th century: the 332-foot wooden steam paddle wheeler Rhode Island, launched in 1882, and the 345-foot paddle wheeler Connecticut seven years later. Connecticut shipyards launched more than 165 steam-powered vessels in the 19th century. In 1875, the first telephone exchange in the world was established in New Haven. When World War I broke out in 1914, Connecticut became a major supplier of weaponry to the U.S. military; by 1918, 80% of the state's industries were producing goods for the war effort. Remington Arms in Bridgeport produced half the small-arms cartridges used by the U.S. Army, with other major suppliers including Winchester in New Haven and Colt in Hartford. Connecticut was also an important U.S. Navy supplier, with Electric Boat receiving orders for 85 submarines, Lake Torpedo Boat building more than 20 subs, and the Groton Iron Works building freighters. On June 21, 1916, the Navy made Groton the site for its East Coast submarine base and school. The state enthusiastically supported the American war effort in 1917 and 1918 with large purchases of war bonds, a further expansion of industry, and an emphasis on increasing food production on the farms. Thousands of state, local, and volunteer groups mobilized for the war effort and were coordinated by the Connecticut State Council of Defense. Manufacturers wrestled with manpower shortages; Waterbury's American Brass and Manufacturing Company was running at half capacity, so the federal government agreed to furlough soldiers to work there. In 1919, J. Henry Roraback started the Connecticut Light & Power Co., which became the state's dominant electric utility.
In 1925, Frederick Rentschler spurred the creation of Pratt & Whitney in Hartford to develop engines for aircraft; the company became an important military supplier in World War II and one of the three major manufacturers of jet engines in the world. On September 21, 1938, the most destructive storm in New England history struck eastern Connecticut, killing hundreds of people. The eye of the "Long Island Express" passed just west of New Haven, and the storm devastated the Connecticut shoreline between Old Saybrook and Stonington with the full force of wind and waves, even though the coast had partial protection from Long Island. The hurricane caused extensive damage to infrastructure, homes, and businesses. In New London, a 500-foot (150 m) sailing ship was driven into a warehouse complex, causing a major fire. Heavy rainfall caused the Connecticut River to flood downtown Hartford and East Hartford. An estimated 50,000 trees fell onto roadways. The advent of lend-lease in support of Britain helped lift Connecticut from the Great Depression, as the state became a major production center for weaponry and supplies used in World War II. Connecticut manufactured 4.1% of total U.S. military armaments produced during the war, ranking ninth among the 48 states, with major factories including Colt for firearms, Pratt & Whitney for aircraft engines, Chance Vought for fighter planes, Hamilton Standard for propellers, and Electric Boat for submarines and PT boats. In Bridgeport, General Electric produced a significant new weapon to combat tanks: the bazooka. On May 13, 1940, Igor Sikorsky made an untethered flight of the first practical helicopter. The helicopter saw limited use in World War II, but future military production made Sikorsky Aircraft's Stratford plant Connecticut's largest single manufacturing site by the start of the 21st century.
Connecticut lost some wartime factories following the end of hostilities, but the state shared in a general post-war expansion that included the construction of highways, bringing middle-class growth to suburban areas. Prescott Bush represented Connecticut in the U.S. Senate from 1952 to 1963; his son George H. W. Bush and grandson George W. Bush both became presidents of the United States. In 1965, Connecticut ratified its current constitution, replacing the document that had served since 1818. In 1968, commercial operation began for the Connecticut Yankee Nuclear Power Plant in East Haddam; in 1970, the Millstone Nuclear Power Station began operations in Waterford. In 1974, Connecticut elected Democratic Governor Ella T. Grasso, the first woman elected governor of a U.S. state without her husband having previously held the office. Connecticut's dependence on the defense industry posed an economic challenge at the end of the Cold War. The resulting budget crisis helped elect Lowell Weicker as governor on a third-party ticket in 1990. Weicker's remedy was a state income tax, which proved effective in balancing the budget but only for the short term; he did not run for a second term, in part because of this politically unpopular move. In 1992, initial construction was completed on Foxwoods Casino at the Mashantucket Pequot reservation in eastern Connecticut, which became the largest casino in the Western Hemisphere. Mohegan Sun followed four years later. In 2000, presidential candidate Al Gore chose Senator Joe Lieberman as his running mate, the first time a major party presidential ticket included someone of the Jewish faith. Gore and Lieberman fell five votes short of George W. Bush and Dick Cheney in the Electoral College. In the terrorist attacks of September 11, 2001, 65 state residents were killed, mostly Fairfield County residents working in the World Trade Center. In 2004, Republican Governor John G.
Rowland resigned during a corruption investigation, later pleading guilty to federal charges. Connecticut was hit by three major storms in just over 14 months in 2011 and 2012, all three causing extensive property damage and electric outages. Hurricane Irene struck Connecticut on August 28, and damage totaled $235 million. Two months later, the "Halloween nor'easter" dropped heavy snow onto trees, snapping branches and trunks that brought down power lines; some areas were without electricity for 11 days. Hurricane Sandy had tropical storm-force winds when it reached Connecticut on October 29, 2012. Sandy's winds drove storm surges into streets and cut power to 98% of homes and businesses, causing more than $360 million in damage. On December 14, 2012, Adam Lanza shot and killed 26 people at Sandy Hook Elementary School in Newtown, and then killed himself. The massacre spurred renewed efforts by activists for tighter national laws on gun ownership. In the summer and fall of 2016, Connecticut experienced a drought across much of the state, prompting some water-use bans. At one point, 45% of the state was listed in Severe Drought by the U.S. Drought Monitor, including almost all of Hartford and Litchfield counties; the rest of the state was in Moderate or Severe Drought, including Middlesex, Fairfield, New London, New Haven, Windham, and Tolland counties. The drought affected the state's agricultural economy. The United States Census Bureau estimated Connecticut's population at 3,565,287 on July 1, 2019, a decrease of 7,378 (0.25%) from the prior year and of 8,810 (0.25%) since the 2010 United States Census. This reflects a natural increase since the last census of 67,427 (222,222 births minus 154,795 deaths) and an increase due to net migration of 41,718 people into the state.
Immigration from outside the United States resulted in a net increase of 75,991 people, and migration within the country produced a net loss of 34,273. Based on the 2005 estimates, Connecticut moved from the 29th most populous state to the 30th. 2018 estimates put Connecticut's population at 3,572,665, with 6.6% of the population under 5 years old, 24.7% under 18, and 13.8% aged 65 or older. Females made up approximately 51.6% of the population and males 48.4%. In 1790, 97% of the population in Connecticut was classified as "rural". The first census in which less than half the population was classified as rural was that of 1890; in the 2000 census, only 12.3% was considered rural. Most of western and southern Connecticut (particularly the Gold Coast) is strongly associated with New York City; this area is the most affluent and populous region of the state and has high property costs and high incomes. The center of population of Connecticut is located in the town of Cheshire. In the 2010 United States Census, Hispanics and Latinos of any race made up 13.4% of the population. The state's most populous ethnic group is non-Hispanic white, though this share has declined from 98% in 1940 to 71% in 2010. As of 2004, 11.4% of the population (400,000) was foreign-born. In 1870, native-born Americans had accounted for 75% of the state's population, but that had dropped to 35% by 1918. As of 2000, 81.69% of Connecticut residents age 5 and older spoke English at home and 8.42% spoke Spanish, followed by Italian at 1.59%, French at 1.31%, and Polish at 1.20%. 46.1% of Connecticut's population younger than age 1 were minorities.
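The population components cited above are internally consistent, as a quick arithmetic check shows (all figures taken from the text):

```python
# Consistency check of the population components cited above.
births, deaths = 222_222, 154_795
natural_increase = births - deaths  # text states 67,427

intl_migration = 75_991       # net gain from outside the U.S.
domestic_migration = -34_273  # net loss to other states
net_migration = intl_migration + domestic_migration  # text states 41,718

print(natural_increase, net_migration)  # → 67427 41718
```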
A Pew survey of Connecticut residents' religious self-identification showed the following distribution of affiliations: Protestant 35%, Roman Catholic 33%, non-religious 28%, Jewish 3%, and 1% each for Mormonism, Orthodox Christianity, Jehovah's Witnesses, Hinduism, Buddhism, and Islam. Jewish congregations had 108,280 members (3.2%) in 2000. The Jewish population is concentrated in the towns near Long Island Sound between Greenwich and New Haven, in Greater New Haven, and in Greater Hartford, especially the suburb of West Hartford. According to the Association of Religion Data Archives, the largest Christian denominations by number of adherents in 2010 were the Catholic Church, with 1,252,936; the United Church of Christ, with 96,506; and non-denominational Evangelical Protestants, with 72,863. Recent immigration has brought other non-Christian religions to the state, but the numbers of adherents of other religions are still low. Connecticut is also home to New England's largest Protestant church, The First Cathedral in Bloomfield, in Hartford County. Hartford is the seat of the Roman Catholic Archdiocese of Hartford, whose ecclesiastical province includes the Diocese of Bridgeport and the Diocese of Norwich. Connecticut's economic output in 2019 as measured by gross domestic product was $289 billion, up from $277.9 billion in 2018. Connecticut's per capita personal income in 2019 was estimated at $79,087, the highest of any state. There is, however, great disparity in incomes throughout the state; after New York, Connecticut had the second-largest gap nationwide between the average incomes of the top 1% and of the bottom 99%. According to a 2018 study by Phoenix Marketing International, Connecticut had the third-largest number of millionaires per capita in the United States, with a ratio of 7.75%. New Canaan is the wealthiest town in Connecticut, with a per capita income of $85,459.
Hartford is the poorest municipality in Connecticut, with a per capita income of $13,428 in 2000. As of December 2019, Connecticut's seasonally adjusted unemployment rate was 3.8%, against U.S. unemployment of 3.5% that month. In records dating back to 1982, Connecticut recorded its lowest unemployment between August and October 2000, at 2.2%; the highest rate during that period occurred in November and December 2010 at 9.3%, though economists expected record layoffs in the spring of 2020 as businesses closed during the coronavirus pandemic. Tax is collected by the Connecticut Department of Revenue Services and by local municipalities. As of 2012, Connecticut residents had the second-highest rate in the nation of combined state and local taxes after New York, at 12.6% of income compared to the national average of 9.9%, as reported by the Tax Foundation. Before 1991, Connecticut had an investment-only income tax system: income from employment was untaxed, while income from investments was taxed at 13%, the highest rate in the U.S., with no deductions allowed for costs of producing the investment income, such as interest on borrowing. In 1991, under Governor Lowell P. Weicker Jr., an independent, the system was changed to one in which the taxes on employment income and investment income were equalized at a maximum rate of 4%. The new tax policy drew investment firms to Connecticut; Fairfield County became home to the headquarters of 16 of the 200 largest hedge funds in the world. Connecticut's individual income tax is divided into seven brackets: 3% (on income up to $10,000); 5% ($10,000–$50,000); 5.5% ($50,000–$100,000); 6% ($100,000–$200,000); 6.5% ($200,000–$250,000); 6.9% ($250,000–$500,000); and 6.99% above $500,000, with additional amounts owed depending on the bracket. All wages of Connecticut residents are subject to the state's income tax, even if earned outside the state.
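A marginal bracket system like the one described taxes each slice of income at its own rate. The sketch below applies the seven rates listed in the text; it is illustrative only and omits the "additional amounts owed depending on the bracket" (Connecticut's actual calculation includes recapture provisions and credits):

```python
# Simplified marginal income-tax sketch using the seven brackets listed
# in the text. Illustrative only: real Connecticut returns involve
# additional bracket-dependent amounts not modeled here.
BRACKETS = [
    (10_000, 0.03),
    (50_000, 0.05),
    (100_000, 0.055),
    (200_000, 0.06),
    (250_000, 0.065),
    (500_000, 0.069),
    (float("inf"), 0.0699),
]

def ct_income_tax(income: float) -> float:
    """Apply each rate only to the slice of income inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return round(tax, 2)

# $75,000 of income: 3% of the first $10k, 5% of the next $40k,
# 5.5% of the remaining $25k.
print(ct_income_tax(75_000))  # → 3675.0
```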
However, in those cases, Connecticut income tax must be withheld only to the extent the Connecticut tax exceeds the amount withheld by the other jurisdiction. Since New York has higher income tax rates than Connecticut, this effectively means that Connecticut residents who work in New York have no Connecticut income tax withheld. Connecticut permits a credit for taxes paid to other jurisdictions, but since residents who work in other states are still subject to Connecticut income taxation, they may owe taxes if the jurisdictional credit does not fully offset the Connecticut tax amount. Connecticut levies a 6.35% state sales tax on the retail sale, lease, or rental of most goods. Certain items, and services in general, are not subject to sales and use taxes unless specifically enumerated as taxable by statute. A provision excluding clothing under $50 from sales tax was repealed. There are no additional sales taxes imposed by local jurisdictions. In 2001, Connecticut instituted what became an annual one-week sales tax "holiday" each August, during which retailers do not have to remit sales tax on certain items and quantities of clothing that have varied from year to year. State law authorizes municipalities to tax property, including real estate, vehicles, and other personal property, with state statute providing varying exemptions, credits, and abatements. All assessments are at 70% of fair market value. The maximum property tax credit is $200 per return, and any excess may not be refunded or carried forward. According to the Tax Foundation, on a per capita basis in the 2017 fiscal year, Connecticut residents paid the third-highest average property taxes in the nation, after New Hampshire and New Jersey. Gasoline taxes and fees in Connecticut were 40.13 cents per gallon, 11th highest in the United States, where the nationwide average was 36.13 cents a gallon excluding federal taxes.
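The withholding rule described above reduces to a simple formula: Connecticut withholds only the excess of the Connecticut liability over what the other jurisdiction already withheld. A minimal sketch, with hypothetical dollar amounts:

```python
# Sketch of the jurisdictional-credit withholding rule described in the
# text: Connecticut withholding applies only to the extent the Connecticut
# tax exceeds the amount withheld by the other jurisdiction.
# All dollar figures below are hypothetical.
def ct_withholding(ct_tax_due: float, other_state_withheld: float) -> float:
    return max(0.0, ct_tax_due - other_state_withheld)

# Resident working in New York, where withholding exceeds the CT liability:
print(ct_withholding(4_000.0, 5_500.0))  # → 0.0 (no CT withholding)

# Resident working in a state whose withholding is below the CT liability:
print(ct_withholding(4_000.0, 2_500.0))  # → 1500.0
```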
Diesel taxes and fees as of January 2020 in Connecticut were 46.50 cents per gallon, ninth highest nationally, with the U.S. average at 37.91 cents. In 2019, sales of single-family homes in Connecticut totaled 33,146 units, a 2.1 percent decline from the 2018 transaction total. The median home sold in 2019 recorded a transaction amount of $260,000, up 0.4 percent from 2018. Connecticut had the seventh-highest rate of home foreclosure activity in the country in 2019, at 0.53 percent of the total housing stock. Finance, insurance, and real estate was Connecticut's largest industry in 2018 as ranked by gross domestic product, generating $75.7 billion in GDP that year. Major financial industry employers include The Hartford, Travelers, Cigna, the Aetna subsidiary of CVS Health, Mass Mutual, People's United Financial, Bank of America, Realogy, Bridgewater Associates, GE Capital, William Raveis Real Estate, and Berkshire Hathaway through reinsurance and residential real estate subsidiaries. The combined educational, health, and social services sector was the largest single industry as ranked by employment, with a combined workforce of 342,600 people at the end of 2019; it ranked fourth in GDP the year before, at $28.3 billion. The broad business and professional services sector had the second-highest GDP total in Connecticut in 2018, at an estimated $33.7 billion. Manufacturing was the third-biggest industry in 2018, with GDP of $30.8 billion, dominated by Raytheon Technologies, formed in the March 2020 merger of Hartford-based United Technologies and Waltham, Massachusetts-based Raytheon Co. As of the merger, Raytheon Technologies employed about 19,000 people in Connecticut through subsidiaries Pratt & Whitney and Collins Aerospace. Lockheed Martin subsidiary Sikorsky Aircraft operates Connecticut's single largest manufacturing plant in Stratford, where it makes helicopters.
Other major manufacturers include the Electric Boat division of General Dynamics, which makes submarines in Groton; Boehringer Ingelheim, a pharmaceuticals manufacturer with its U.S. headquarters in Ridgefield; and ASML, which in Wilton makes precision lithography machines used to create circuitry on semiconductors and flat-screen displays. Connecticut historically was a center of gun manufacturing, and four gun-manufacturing firms continued to operate in the state, employing 2,000 people: Colt, Stag, Ruger, and Mossberg. Marlin, owned by Remington, closed in April 2011. Other large components of the Connecticut economy in 2018 included wholesale trade ($18.1 billion in GDP); information services ($13.8 billion); retail ($13.7 billion); arts, entertainment, and food services ($9.1 billion); and construction ($8.3 billion). Tourists spent $9.3 billion in Connecticut in 2017, according to estimates in a series of studies commissioned by the state. Foxwoods Resort Casino and Mohegan Sun are the two biggest tourist draws and number among the state's largest employers; both are located on Indian reservations in the eastern part of Connecticut. Connecticut's agricultural production totaled $580 million in 2017, with just over half of that revenue from nursery stock production. Milk production totaled $81 million that year, with other major product categories including eggs, vegetables and fruit, tobacco, and shellfish. The Interstate highways in the state are Interstate 95 (I-95), traveling southwest to northeast along the coast; I-84, traveling southwest to northeast in the center of the state; I-91, traveling north to south in the center of the state; and I-395, traveling north to south near the eastern border of the state.
The other major highways in Connecticut are the Merritt Parkway and Wilbur Cross Parkway, which together form Connecticut Route 15 (Route 15), traveling from the Hutchinson River Parkway in New York parallel to I-95 before turning north of New Haven and traveling parallel to I-91, finally becoming a surface road in Berlin. I-95 and Route 15 were originally toll roads; they relied on a system of toll plazas at which all traffic stopped and paid fixed tolls. A series of major crashes at these plazas eventually contributed to the decision to remove the tolls in 1988. Other major arteries in the state include U.S. Route 7 (US 7) in the west, traveling parallel to the New York state line; Route 8, farther east near the industrial city of Waterbury, traveling north–south along the Naugatuck River Valley nearly parallel with US 7; and Route 9 in the east. Between New Haven and New York City, I-95 is one of the most congested highways in the United States. Although I-95 has been widened in several spots, some areas are only three lanes, straining traffic capacity and resulting in frequent and lengthy rush-hour delays. The congestion frequently spills over to clog the parallel Merritt Parkway and even US 1. The state has encouraged traffic-reduction schemes, including rail use and ride-sharing. Connecticut also has a very active bicycling community, with one of the highest rates of bicycle ownership and use in the United States, particularly in New Haven. According to the U.S. Census 2006 American Community Survey, New Haven has the highest percentage of commuters who bicycle to work of any major metropolitan center on the East Coast. Rail is a popular travel mode between New Haven and New York City's Grand Central Terminal.
Southwestern Connecticut is served by the Metro-North Railroad's New Haven Line, operated by the Metropolitan Transportation Authority and providing commuter service to New York City and New Haven, with branches serving New Canaan, Danbury, and Waterbury. Connecticut lies along Amtrak's Northeast Corridor, which features frequent Northeast Regional and Acela Express service from New Haven south to New York City, Philadelphia, Baltimore, Washington, D.C., and Norfolk, Virginia. Coastal cities and towns between New Haven and New London are also served by the Shore Line East commuter line. Several new stations were completed along the Connecticut shoreline recently, and a commuter rail service called the Hartford Line, between New Haven and Springfield on Amtrak's New Haven–Springfield Line, began operating in June 2018. A proposed commuter rail service, the Central Corridor Rail Line, would connect New London with Norwich, Willimantic, Storrs, and Stafford Springs, with service continuing into Massachusetts and on to Brattleboro, Vermont. Amtrak also operates a shuttle service (CTRail) between New Haven and Springfield, Massachusetts, serving Wallingford, Meriden, Berlin, Hartford, and Windsor Locks, and the Vermonter runs from Washington to St. Albans, Vermont, via the same line. Statewide bus service is supplied by Connecticut Transit, owned by the Connecticut Department of Transportation, with smaller municipal authorities providing local service. Bus networks are an important part of the transportation system in Connecticut, especially in urban areas like Hartford, Stamford, Norwalk, Bridgeport, and New Haven. Connecticut Transit also operates CTfastrak, a bus rapid transit service between New Britain and Hartford that opened to the public on March 28, 2015. Bradley International Airport is located in Windsor Locks, north of Hartford.
Many residents of central and southern Connecticut also make heavy use of JFK International and Newark Liberty International airports, especially for international travel. Smaller regional air service is provided at Tweed New Haven Regional Airport. Larger civil airports include Danbury Municipal Airport and Waterbury-Oxford Airport in western Connecticut, Hartford–Brainard Airport in central Connecticut, and Groton-New London Airport in eastern Connecticut. Sikorsky Memorial Airport is located in Stratford and mostly serves cargo, helicopter, and private aviation. The Bridgeport & Port Jefferson Ferry travels between Bridgeport, Connecticut, and Port Jefferson, New York, crossing Long Island Sound. Ferry service also operates out of New London to Orient, New York; Fishers Island, New York; and Block Island, Rhode Island, which are popular tourist destinations. Small local services operate the Rocky Hill–Glastonbury Ferry and the Chester–Hadlyme Ferry, which cross the Connecticut River. Hartford has been the sole capital of Connecticut since 1875; before then, New Haven and Hartford alternated as capitals. Connecticut is known as the "Constitution State". The origin of this nickname is uncertain, but it likely comes from Connecticut's pivotal role in the federal constitutional convention of 1787, during which Roger Sherman and Oliver Ellsworth helped to orchestrate what became known as the Connecticut Compromise, or the Great Compromise. This plan combined the Virginia Plan and the New Jersey Plan to form a bicameral legislature, a form copied by almost every state constitution since the adoption of the federal constitution. Variations of the bicameral legislature had been proposed by Virginia and New Jersey, but Connecticut's plan was the one in effect until the early 20th century, when Senators ceased to be selected by their state legislatures and were instead directly elected. Otherwise, it is still the design of Congress.
The nickname also might refer to the Fundamental Orders of 1638–39, which represent the framework for the first formal Connecticut state government written by a representative body in Connecticut. The State of Connecticut government has operated under four separate documents in the course of the state's constitutional history. After the Fundamental Orders, Connecticut was granted governmental authority by King Charles II of England through the Connecticut Charter of 1662. Separate branches of government did not exist during this period, and the General Assembly acted as the supreme authority. A constitution similar to the modern U.S. Constitution was not adopted in Connecticut until 1818. Finally, the current state constitution was implemented in 1965; it absorbed a majority of its 1818 predecessor but incorporated a handful of important modifications. The governor heads the executive branch. Ned Lamont is the Governor and Susan Bysiewicz the Lieutenant Governor; both are Democrats. From 1639 until the adoption of the 1818 constitution, the governor presided over the General Assembly. In 1974, Ella Grasso was elected governor of Connecticut, the first time in United States history that a woman became a governor without her husband having been governor first. There are several executive departments: Administrative Services, Agriculture, Banking, Children and Families, Consumer Protection, Correction, Economic and Community Development, Developmental Services, Construction Services, Education, Emergency Management and Public Protection, Energy & Environmental Protection, Higher Education, Insurance, Labor, Mental Health and Addiction Services, Military, Motor Vehicles, Public Health, Public Utility Regulatory Authority, Public Works, Revenue Services, Social Services, Transportation, and Veterans Affairs. In addition to these departments, there are other independent bureaus, offices, and commissions.
In addition to the Governor and Lieutenant Governor, four other executive officers named in the state constitution are elected directly by voters: Secretary of the State, Treasurer, Comptroller, and Attorney General. All executive officers are elected to four-year terms. The legislature is the General Assembly, a bicameral body consisting of an upper house, the State Senate (36 senators), and a lower house, the House of Representatives (151 representatives). Bills must pass each house in order to become law. The governor can veto a bill, but the veto can be overridden by a two-thirds majority in each house. Per Article XV of the state constitution, Senators and Representatives must be at least 18 years of age and are elected to two-year terms in November of even-numbered years. There must always be between 30 and 50 senators and between 125 and 225 representatives. The Lieutenant Governor presides over the Senate, except when absent from the chamber, when the President pro tempore presides. The Speaker of the House presides over the House; Joe Aresimowicz is the Speaker. Connecticut's United States Senators are Richard Blumenthal (Democrat) and Chris Murphy (Democrat). Connecticut has five representatives in the U.S. House, all of whom are Democrats. Locally elected representatives also develop local ordinances to govern cities and towns; these often include noise-control and zoning guidelines, though the State of Connecticut also provides statewide noise-control ordinances. The highest court of Connecticut's judicial branch is the Connecticut Supreme Court, headed by the Chief Justice of Connecticut. The Supreme Court is responsible for deciding on the constitutionality of laws and of cases as they relate to the law.
Its proceedings are similar to those of the United States Supreme Court: no testimony is given by witnesses, and the lawyers for the two sides each present oral arguments of no longer than thirty minutes. Following a proceeding, the court may take several months to arrive at a judgment. The Chief Justice is Richard A. Robinson. In 1818, the court became a separate entity, independent of the legislative and executive branches. The Appellate Court is a lesser statewide court, and the Superior Courts are lower courts that resemble the county courts of other states. The State of Connecticut also offers access to arrest-warrant enforcement statistics through the Office of Policy and Management. Connecticut does not have county government, unlike all other states except Rhode Island. Connecticut county governments were mostly eliminated in 1960, with the exception of sheriffs elected in each county. In 2000, the county sheriff system was abolished and replaced with the state marshal system, which has districts that follow the old county territories. The judicial system is divided into judicial districts at the trial-court level which largely follow the old county lines. The eight counties are still widely used for purely geographical and statistical purposes, such as weather reports and census reporting. Connecticut shares with the rest of New England a governmental institution called the New England town. The state is divided into 169 towns which serve as the fundamental political jurisdictions. There are also 21 cities, most of which simply follow the boundaries of their namesake towns and have a merged city-town government. There are two exceptions: the City of Groton, which is a subsection of the Town of Groton, and the City of Winsted in the Town of Winchester. There are also nine incorporated boroughs which may provide additional services to a section of town. Naugatuck is a consolidated town and borough.
The state is also divided into 15 planning regions defined by the state Office of Policy and Management, with the exception of the Town of Stafford in Tolland County. The Intergovernmental Policy Division of this Office coordinates regional planning with the administrative bodies of these regions. Each region has an administrative body known as a regional council of governments, a regional council of elected officials, or a regional planning agency. The regions are established for the purposes of "coordination of regional and state planning activities; redesignation of logical planning regions and promotion of the continuation of regional planning organizations within the state; and provision for technical aid and the administration of financial assistance to regional planning organizations". Connecticut residents who register to vote may declare an affiliation with a political party, may become unaffiliated at will, and may change affiliations subject to certain waiting periods. About 60% of registered voters are enrolled in a party (just over 1% of the total across 28 minor parties), and the ratio among unaffiliated voters and the two major parties is about eight unaffiliated voters for every seven in the Democratic Party of Connecticut and every four in the Connecticut Republican Party. Many Connecticut towns and cities show a marked preference for moderate candidates of either party. In April 2012, both houses of the Connecticut state legislature passed a bill (20 to 16 and 86 to 62) abolishing capital punishment for all future crimes, though the 11 inmates on death row at the time could still be executed. In July 2009, the Connecticut legislature overrode a veto by Governor M. Jodi Rell to pass SustiNet, the first significant public-option health care reform legislation in the nation. Connecticut ranked third in the nation for educational performance, according to Education Week's Quality Counts 2018 report. 
It earned an overall score of 83.5 out of 100 points; the national average was 75.2. Connecticut posted a B-plus in the Chance-for-Success category, ranking fourth on factors that contribute to a person's success both within and outside the K–12 education system. It also received a mark of B-plus and finished fourth for School Finance, and ranked 12th with a grade of C on the K–12 Achievement Index. The Connecticut State Board of Education manages the public school system for children in grades K–12. Board of Education members are appointed by the Governor of Connecticut. Statistics for each school are made available to the public through an online database system called "CEDAR". The CEDAR database also provides statistics for "ACES" or "RESC" schools for children with behavioral disorders. Connecticut was home to the nation's first law school, Litchfield Law School, which operated from 1773 to 1833 in Litchfield. Hartford Public High School (1638) is the third-oldest secondary school in the nation, after the Collegiate School (1628) in Manhattan and the Boston Latin School (1635). The state also has many noted private day schools, and its boarding schools draw students from around the world. There are two Connecticut teams in the American Hockey League. The Bridgeport Sound Tigers are a farm team for the New York Islanders and compete at the Webster Bank Arena in Bridgeport. The Hartford Wolf Pack is the affiliate of the New York Rangers; they play in the XL Center in Hartford. The Hartford Yard Goats of the Eastern League are a Double-A affiliate of the Colorado Rockies, and the Norwich Sea Unicorns play in the New York–Penn League as a Class A affiliate of the Detroit Tigers. The New Britain Bees play in the Atlantic League of Professional Baseball. The Connecticut Sun of the WNBA currently play at the Mohegan Sun Arena in Uncasville. 
In soccer, Hartford Athletic began play in the USL Championship in 2019, serving as the reserve team for the New England Revolution of Major League Soccer. The state hosts several major sporting events. Since 1952, a PGA Tour golf tournament has been played in the Hartford area; originally called the "Insurance City Open" and later the "Greater Hartford Open", it is now known as the Travelers Championship. The Connecticut Open tennis tournament is held annually at the Cullman-Heyman Tennis Center at Yale University in New Haven. Lime Rock Park in Salisbury is a road racing course, home to International Motor Sports Association, SCCA, United States Auto Club, and K&N Pro Series East races. Thompson International Speedway, Stafford Motor Speedway, and Waterford Speedbowl are oval tracks holding weekly races for NASCAR Modifieds and other classes, including the NASCAR Whelen Modified Tour. The state also hosts several major mixed martial arts events for Bellator MMA and the Ultimate Fighting Championship. The Hartford Whalers of the National Hockey League played in Hartford from 1975 to 1997 at the Hartford Civic Center. They departed for Raleigh, North Carolina, after disputes with the state over the construction of a new arena, and are now known as the Carolina Hurricanes. In 1926, Hartford had a franchise in the National Football League known as the Hartford Blues. Earlier, the Hartford Dark Blues baseball club had joined the National League for its inaugural 1876 season, making them the state's only Major League Baseball franchise, before moving to Brooklyn, New York, and disbanding one season later. From 2000 until 2006, the city was home to the Hartford FoxForce of World TeamTennis. The Connecticut Huskies are the team of the University of Connecticut (UConn); they play NCAA Division I sports. Both the men's basketball and women's basketball teams have won multiple national championships. 
In 2004, UConn became the first school in NCAA Division I history to have its men's and women's basketball programs win the national title in the same year; they repeated the feat in 2014 and remain the only Division I school to win both titles in the same year. The UConn women's basketball team holds the record for the longest consecutive winning streak in NCAA college basketball at 111 games, a streak that ended in 2017. The UConn Huskies football team has played in the Football Bowl Subdivision since 2002 and has played in four bowl games. New Haven biennially hosts "The Game" between the Yale Bulldogs and the Harvard Crimson, the country's second-oldest college football rivalry. Yale alumnus Walter Camp is deemed the "Father of American Football"; he helped develop modern football while living in New Haven. Other Connecticut universities which field Division I sports teams are Quinnipiac University, Fairfield University, Central Connecticut State University, Sacred Heart University, and the University of Hartford. The Constitution State Rivalry is an in-state college football rivalry between Sacred Heart University and Central Connecticut State University. Both teams compete at the NCAA Division I Football Championship Subdivision level in the Northeast Conference. Since 1998, the game has been played annually, with the location of the matchup determined on a yearly basis. The name "Connecticut" originated with the Mohegan word "quonehtacut", meaning "place of long tidal river". Connecticut's official nickname is "The Constitution State", adopted in 1959 and based on its colonial constitution of 1638–1639, which was the first in America and, arguably, the world. Connecticut is also unofficially known as "The Nutmeg State", a nickname whose origin is unknown. It may have come from its sailors returning from voyages with nutmeg, which was a very valuable spice in the 18th and 19th centuries. 
It may have originated in the early machined sheet-tin nutmeg grinders sold by early Connecticut peddlers. It is also facetiously said to come from Yankee peddlers from Connecticut who would sell small carved knobs of wood shaped to look like nutmeg to unsuspecting customers. George Washington gave Connecticut the title of "The Provisions State" because of the material aid that the state rendered to the American Revolutionary War effort. Connecticut is also known as "The Land of Steady Habits". According to "Webster's New International Dictionary" (1993), a person who is a native or resident of Connecticut is a "Connecticuter". There are numerous other terms coined in print but not in use, such as "Connecticotian" (Cotton Mather in 1702) and "Connecticutensian" (Samuel Peters in 1781). Linguist Allen Walker Read suggests the more playful term "connecticutie". "Nutmegger" is sometimes used, as is "Yankee". The official state song is "Yankee Doodle". The traditional abbreviation of the state's name is "Conn."; the official postal abbreviation is CT. Commemorative stamps issued by the United States Postal Service with Connecticut themes include Nathan Hale, Eugene O'Neill, Josiah Willard Gibbs, Noah Webster, Eli Whitney, the whaling ship the "Charles W. Morgan", which is docked at Mystic Seaport, and a decoy of a broadbill duck.
https://en.wikipedia.org/wiki?curid=6466
Country Liberal Party The Country Liberal Party (CLP), officially the Country Liberals (Northern Territory), is a liberal conservative political party in Australia founded in 1974. It operates primarily in the Northern Territory; however, because Christmas Island and the Cocos (Keeling) Islands form part of the Division of Lingiari, residents of those territories also vote for the Country Liberal Party. The CLP first fielded candidates at the 1975 federal election, winning one seat in the Senate and the Territory's seat in the House of Representatives. Since 1979, the CLP has been formally affiliated with both the federal Liberal Party of Australia and the National Party of Australia (previously the Country Party and National Country Party). The Liberal Party, National Party, Liberal National Party of Queensland, and CLP form the Coalition of Australian centre-right parties, with the CLP alone contesting seats for the Coalition in the Northern Territory. The CLP has full voting rights within the National Party and observer status with the Liberal Party. Currently, the CLP has one representative in federal parliament, Senator Sam McMahon. The CLP dominated the Northern Territory Legislative Assembly from its establishment in 1974 until the 2001 general election, when the CLP lost government, winning only 10 of the 25 seats; it was reduced further to four parliamentary members at the 2005 election. At the 2008 election it increased its numbers, winning 11 seats. The CLP returned to office following the 2012 election, winning 16 of 25 seats, and leader Terry Mills became Chief Minister of the Northern Territory. Less than a year later, Mills was replaced as Chief Minister and CLP leader by Adam Giles at the 2013 CLP leadership ballot on 13 March. Giles was the first indigenous Australian to lead a state or territory government in Australia. Giles was defeated at the 2015 CLP leadership ballot but managed to survive in the aftermath. 
Multiple defections saw the CLP reduced to minority government a few months later. At the 27 August 2016 Territory election, the CLP was resoundingly defeated, winning just two of 25 seats. Gary Higgins became CLP leader and opposition leader on 2 September, with Lia Finocchiaro as his deputy. On 20 January 2020, Higgins stood down as party leader and announced his retirement at the next election. Finocchiaro became CLP leader and leader of the opposition on 1 February 2020. Country Party members in the Territory first contested the 1919 federal election, with the newly established federal Country Party contesting the 1922 federal election. The 1922 election saw the Nationalist Party of Australia, the main non-Labor party opposing the Australian Labor Party, deprived of its majority and required to form a coalition in order to command a majority on the floor of parliament. The price of the Country Party's support was the resignation of the Nationalist (ex-Labor) Prime Minister, Billy Hughes, who was replaced by Stanley Bruce. In 1922, the federal Division of Northern Territory was created, with one non-voting member in the House of Representatives. Harold George Nelson was the inaugural member, serving between 16 December 1922 and 15 September 1934. He was elected as an Independent but later joined the Australian Labor Party (ALP). Between 15 September 1934 and 10 December 1949, the Division of Northern Territory was held by Adair Blain, an independent member. Between 10 December 1949 and 31 October 1966, the Division was held by Jock Nelson, a member of the ALP. The Territory seat was won by the Country Party's Sam Calder at the 1966 federal election; he held the seat from 26 November 1966 to 19 September 1980. In 1966, the Country Party was established in the Northern Territory, while the Liberal Party was only a small party there. In recognition of this, the local Liberals supported the Country Party's Calder for the sole NT seat from 1969 to 1972. 
An alliance had formed, primarily against the conservatives' main opponent, the ALP. After the gradual extension of limited voting rights, in 1968 the federal Coalition government gave the Member for Northern Territory full voting rights. After the 1974 federal election and the subsequent Joint Sitting of parliament, legislation was passed to give the Australian Capital Territory and the Northern Territory representation in the Australian Senate, with two senators being elected. The Whitlam Government passed legislation in 1974 to establish a fully elected unicameral Northern Territory Legislative Assembly to replace the previous partly elected Northern Territory Legislative Council, which had been in existence since 1947. The term of the Legislative Assembly was four years. Initially, the Legislative Assembly consisted of 19 members, which was increased in 1982 to 25 members, the present number. The Northern Territory was granted self-government in 1978. Following the creation of the Legislative Assembly in 1974, the Territory's branches of the Country and Liberal parties merged to form the "Country Liberal Party" (CLP) to field candidates at the 1974 general election for the Legislative Assembly, going on to win 17 out of 19 seats. Calder was largely responsible for the push to unite the non-Labor forces in the Territory. The CLP fielded candidates at the 1975 federal election, winning one seat each in the Senate and in the House of Representatives. Since 1979, the CLP has been formally affiliated with both the federal National (previously the Country Party and National Country Party) and Liberal parties. The CLP contests seats for the Coalition in the Northern Territory rather than the Liberal or National parties. The CLP has full voting rights within the National Party, and observer status with the Liberal Party. The CLP governed the Northern Territory from 1974 until the 2001 election. During this time, it never faced more than nine opposition members. 
Indeed, the CLP's dominance was so absolute that its internal politics were seen as a bigger threat than any opposition party. This was especially pronounced in the mid-1980s, when a series of party-room coups resulted in the Territory having three Chief Ministers in four years. At the 2001 election the Australian Labor Party won government by one seat, ending 27 years of CLP government. The loss marked a major turning point in Northern Territory politics, a result which was exacerbated when, at the 2005 election, the ALP won the second-largest majority government in the history of the Territory, reducing the once-dominant party to just four members in the Legislative Assembly. This result was only outdone by the 1974 election, in which the CLP faced only two independents as opposition. The CLP even lost two seats in Palmerston, an area where the ALP had never come close to winning any seats before. In the 2001 federal election, the CLP won the newly formed seat of Solomon, based on Darwin/Palmerston, in the House of Representatives. In the 2004 federal election, the CLP held one seat in the House of Representatives and one seat in the Senate. The CLP lost its federal lower house seat in the 2007 federal election, but regained it when Palmerston deputy mayor Natasha Griggs won Solomon back for the CLP; she sat with the Liberals in the House. The 2008 election saw the CLP recover from the severe loss it suffered three years earlier, increasing its representation from four to 11 members. The 2011 decision of ALP-turned-independent member Alison Anderson to join the CLP increased the party's representation to 12 in the Assembly, leaving the incumbent Henderson Government to govern in minority with the support of independent MP Gerry Wood. Historically, the CLP has been particularly dominant in the Territory's two major cities, Darwin/Palmerston and Alice Springs. 
However, in recent years the ALP has pulled even with the CLP in the Darwin area; indeed, its 2001 victory was fueled by an unexpected swing in Darwin. The CLP under the leadership of Terry Mills returned to power in the 2012 election with 16 of 25 seats, defeating the incumbent Labor Government led by Paul Henderson. In the lead-up to the Territory election, CLP Senator Nigel Scullion sharply criticised the Federal Labor Government for its suspension of the live cattle trade to Indonesia, an economic mainstay of the territory. The election victory ended 11 years of ALP rule in the Northern Territory. The victory was also notable for the support it achieved from indigenous people in pastoral and remote electorates. Large swings were achieved in remote Territory electorates (where the indigenous population comprised around two-thirds of voters) and a total of five Aboriginal CLP candidates won election to the Assembly. Among the indigenous candidates elected were high-profile Aboriginal activist Bess Price and former ALP member Alison Anderson. Anderson was appointed Minister for Indigenous Advancement. In a nationally reported speech in November 2012, Anderson condemned welfare dependency and a culture of entitlement in her first ministerial statement on the status of Aboriginal communities in the Territory, and said the CLP would focus on improving education and on helping create real jobs for indigenous people. Adam Giles replaced Mills as Chief Minister of the Northern Territory and party leader at the 2013 CLP leadership ballot on 13 March, while Mills was on a trade mission in Japan. Giles was sworn in as Chief Minister on 14 March, becoming the first indigenous head of government of an Australian state or territory. When the CLP introduced mandatory alcohol rehabilitation for recidivist problem drinkers to replace a banned drinker register, Giles dismissed critics of the policy as "lefty welfare-orientated people". 
Willem Westra van Holthe challenged Giles at the 2015 CLP leadership ballot on 2 February and was elected leader by the party room in a late-night vote conducted by phone. However, Giles refused to resign as Chief Minister following the vote. On 3 February, "ABC News" reported that officials were preparing an instrument for Giles' removal by the Administrator. The swearing-in of Westra van Holthe, which had been scheduled for 11:00 local time (01:30 UTC), was delayed. After a meeting of the parliamentary wing of the CLP, Giles announced that he would remain as party leader and Chief Minister, and that Westra van Holthe would be his deputy. Just one opinion poll has been released since the 2012 election – conducted by ReachTEL and commissioned by "The Australian", which surveyed 1,036 residents via robocall on the afternoon of Sunday 1 March 2015 across all 18 electorates in Darwin, Palmerston, and Alice Springs – and it indicated a landslide 17.6% two-party swing against the incumbent CLP government since the last election. After four defections during the parliamentary term, the CLP was reduced to minority government by July 2015. Giles raised the possibility of an early election on 20 July, stating that he would "love" to call a snap poll, but that it was "pretty much impossible to do". Crossbenchers dismissed the notion of voting against a confidence motion to bring down the government. Territory government legislation passed in February 2016 changed the voting method for single-member electorates from full-preferential voting to optional preferential voting ahead of the 2016 Territory election held on 27 August. Federally, a MediaReach seat-level opinion poll of 513 voters in the seat of Solomon, conducted 22–23 June ahead of the 2016 federal election held on 2 July, surprisingly found Labor candidate Luke Gosling heavily leading two-term CLP incumbent Natasha Griggs 61–39 on the two-party vote, a large 12.4 percent swing. 
The CLP lost Solomon to Labor at the election, with Gosling defeating Griggs 56–44 on the two-party vote from a 7.4 percent swing. At the 27 August Territory election, the CLP was swept from power in a massive Labor landslide, suffering easily the worst defeat of a sitting government in Territory history. The party not only lost all of the bush seats it had picked up in 2012, but was all but shut out of Darwin/Palmerston, winning only one seat there. All told, the CLP won only two seats, easily its worst showing in an election. Giles himself lost his own seat, becoming the second Majority Leader/Chief Minister to lose his own seat. Even before Giles' defeat was confirmed, second-term MP Gary Higgins, the only surviving member of the Giles cabinet, was named the party's new leader, with Lia Finocchiaro as his deputy. On 20 January 2020, Higgins announced his resignation as party leader and his retirement at the next election. Finocchiaro succeeded him as CLP leader and leader of the opposition on 1 February 2020. The CLP stands for office in the Northern Territory Legislative Assembly and the Federal Parliament of Australia and primarily concerns itself with representing Territory interests. It is a regionally based party that has parliamentary representation at both the federal and Territory levels. It brands itself as a party with strong roots in the Territory. The CLP competes against the Australian Labor Party (Northern Territory Branch), the local branch of Australia's social-democratic party. It is closely affiliated with, but independent from, the Liberal Party of Australia (a mainly urban, pro-private-enterprise party with a mainly liberal membership) and the National Party of Australia (a conservative-liberal agrarian and regional-interests party). The party promotes traditional Liberal Party values such as individualism and private enterprise, and what it describes as "progressive" political policy such as full statehood for the Northern Territory. 
The Annual Conference of the Country Liberal Party, attended by branch delegates and members of the party's Central Council, decides matters relating to the party's platform and philosophy. The Central Council administers the party and makes decisions on pre-selections. It is composed of the party's office bearers, its leaders in the Northern Territory Legislative Assembly, its members in the Federal Parliament, and representatives of each of the party's branches. The CLP president has full voting rights with the National Party and observer status with the Liberal Party. Both the Liberals and Nationals receive Country Liberal delegations at their conventions. After federal elections, the CLP directs its federal members and senators as to which of the two other parties they should sit with in the parliamentary chamber. In practice, CLP House members usually sit with the Liberals, while CLP Senators sit with the Nationals.
https://en.wikipedia.org/wiki?curid=6468
Canon law Canon law (from Greek "kanon", a 'straight measuring rod, ruler') is a set of ordinances and regulations made by ecclesiastical authority (Church leadership) for the government of a Christian organization or church and its members. It is the internal ecclesiastical law, or operational policy, governing the Catholic Church (both the Latin Church and the Eastern Catholic Churches), the Eastern Orthodox and Oriental Orthodox churches, and the individual national churches within the Anglican Communion. The way that such church law is legislated, interpreted, and at times adjudicated varies widely among these three bodies of churches. In all three traditions, a canon was originally a rule adopted by a church council; these canons formed the foundation of canon law. Greek "kanon" / κανών, Arabic "qaanoon" / قانون, and Hebrew "kaneh" / קנה mean "straight"; a rule, code, standard, or measure; the root meaning in all these languages is "reed" ("cf." the Romance-language ancestors of the English word "cane"). The "Apostolic Canons" or "Ecclesiastical Canons of the Same Holy Apostles" is a collection of ancient ecclesiastical decrees (eighty-five in the Eastern Church, fifty in the Western Church) concerning the government and discipline of the Early Christian Church, incorporated with the Apostolic Constitutions, which are part of the Ante-Nicene Fathers. In the fourth century, the First Council of Nicaea (325) called the disciplinary measures of the Church canons: the term canon, κανών, means a rule in Greek. There is a very early distinction between the rules enacted by the Church and the legislative measures taken by the State, called "leges", Latin for laws. In the Catholic Church, canon law is the system of laws and legal principles made and enforced by the Church's hierarchical authorities to regulate its external organization and government and to order and direct the activities of Catholics toward the mission of the Church. 
In the Latin Church, positive ecclesiastical laws, based directly or indirectly upon immutable divine law or natural law, derive formal authority in the case of universal laws from the supreme legislator (i.e., the Supreme Pontiff), who possesses the totality of legislative, executive, and judicial power in his person, while particular laws derive formal authority from a legislator inferior to the supreme legislator. The actual subject material of the canons is not just doctrinal or moral in nature, but all-encompassing of the human condition. The Catholic Church also includes five main rites (groups) of churches which are in full union with the Holy See and the Latin Church; all of these church groups are in full communion with the Supreme Pontiff and are subject to the "Code of Canons of the Eastern Churches". The Catholic Church has what is claimed to be the oldest continuously functioning internal legal system in Western Europe, arising much later than Roman law but predating the evolution of modern European civil law traditions. What began with rules ("canons") adopted by the Apostles at the Council of Jerusalem in the first century has developed into a highly complex legal system encapsulating not just norms of the New Testament, but also elements of the Hebrew (Old Testament), Roman, Visigothic, Saxon, and Celtic legal traditions. The history of Latin canon law can be divided into four periods: the "jus antiquum", the "jus novum", the "jus novissimum", and the "Code of Canon Law". In relation to the Code, history can be divided into the "jus vetus" (all law before the Code) and the "jus novum" (the law of the Code, or "jus codicis"). The canon law of the Eastern Catholic Churches, which had developed some different disciplines and practices, underwent its own process of codification, resulting in the Code of Canons of the Eastern Churches promulgated in 1990 by Pope John Paul II. 
Roman canon law is a fully developed legal system, with all the necessary elements: courts, lawyers, judges, a fully articulated legal code, principles of legal interpretation, and coercive penalties, though it lacks civilly binding force in most secular jurisdictions. One example of its former reach into secular systems was the benefit of clergy in English law, and in systems, such as that of the U.S., derived from it: being in holy orders, or fraudulently claiming to be, meant that criminals could opt to be tried by ecclesiastical rather than secular courts. The ecclesiastical courts were generally more lenient. Under the Tudors, the scope of clerical benefit was steadily reduced by Henry VII, Henry VIII, and Elizabeth I. The Vatican disputed secular authority over priests' criminal offences, and this in turn contributed to the English Reformation. The benefit of clergy was systematically removed from English legal systems over the next 200 years, although it was still invoked in South Carolina as late as 1827. The structure of the fully developed Roman law was itself a contribution to canon law. The academic degrees in canon law are the J.C.B. ("Juris Canonici Baccalaureatus", Bachelor of Canon Law, normally taken as a graduate degree), J.C.L. ("Juris Canonici Licentiatus", Licentiate of Canon Law), and J.C.D. ("Juris Canonici Doctor", Doctor of Canon Law). Because of its specialized nature, advanced degrees in civil law or theology are normal prerequisites for the study of canon law. Much of the legislative style was adapted from the Roman law Code of Justinian. As a result, Roman ecclesiastical courts tend to follow the Roman law style of continental Europe with some variation, featuring collegiate panels of judges and an investigative form of proceeding, called "inquisitorial", from the Latin "inquirere", to enquire. 
This is in contrast to the adversarial form of proceeding found in the common law systems of English and U.S. law, which feature such things as juries and single judges. The institutions and practices of canon law paralleled the legal development of much of Europe, and consequently both modern civil law and common law bear the influences of canon law. Edson Luiz Sampel, a Brazilian expert in canon law, says that canon law is contained in the genesis of various institutes of civil law, such as the law of continental Europe and Latin American countries; Sampel explains that canon law has significant influence in contemporary society. Canonical jurisprudential theory generally follows the principles of Aristotelian-Thomistic legal philosophy. While the term "law" is never explicitly defined in the Code, the Catechism of the Catholic Church cites Aquinas in defining law as "...an ordinance of reason for the common good, promulgated by the one who is in charge of the community" and reformulates it as "...a rule of conduct enacted by competent authority for the sake of the common good." The law of the Eastern-rite Churches in full communion with the Roman papacy was in much the same state as that of the Latin or Western Church before 1917; much more diversity in legislation existed among the various Eastern Catholic Churches. Each had its own special law, in which custom still played an important part. One major difference in Eastern Europe, however, specifically in the Orthodox Christian churches, concerned divorce. Divorce slowly came to be allowed in specific instances, with adultery, abuse, abandonment, impotence, and barrenness being the primary justifications. Eventually, the church began to allow remarriage (for both spouses) after divorce. In 1929, Pius XI informed the Eastern Churches of his intention to work out a Code for the whole of the Eastern Church. 
The publication of these Codes for the Eastern Churches regarding the law of persons took place between 1949 and 1958, but the project was finalized nearly 30 years later. The first Code of Canon Law (1917) was almost exclusively for the Latin Church, with extremely limited application to the Eastern Churches. After the Second Vatican Council (1962–1965), another edition was published specifically for the Roman Rite in 1983. Most recently, in 1990, the Vatican promulgated the Code of Canons of the Eastern Churches, which became the first code of Eastern Catholic canon law. The Eastern Orthodox Church, principally through the work of 18th-century Athonite monastic scholar Nicodemus the Hagiorite, has compiled canons and commentaries upon them in a work known as the "Pēdálion" (Greek: Πηδάλιον, "Rudder"), so named because it is meant to "steer" the Church in her discipline. The dogmatic determinations of the Councils are to be applied rigorously, since they are considered to be essential for the Church's unity and the faithful preservation of the Gospel. In the Church of England, the ecclesiastical courts that formerly decided many matters such as disputes relating to marriage, divorce, wills, and defamation still have jurisdiction over certain church-related matters (e.g., discipline of clergy, alteration of church property, and issues related to churchyards). Their separate status dates back to the 12th century, when the Normans split them off from the mixed secular/religious county and local courts used by the Saxons. In contrast to the other courts of England, the law used in ecclesiastical matters is at least partially a civil law system, not common law, although heavily governed by parliamentary statutes. Since the Reformation, ecclesiastical courts in England have been royal courts. 
The teaching of canon law at the Universities of Oxford and Cambridge was abrogated by Henry VIII; thereafter practitioners in the ecclesiastical courts were trained in civil law, receiving a Doctor of Civil Law (D.C.L.) degree from Oxford, or a Doctor of Laws (LL.D.) degree from Cambridge. Such lawyers (called "doctors" and "civilians") were centered at "Doctors' Commons", a few streets south of St Paul's Cathedral in London, where they monopolized probate, matrimonial, and admiralty cases until their jurisdiction was removed to the common law courts in the mid-19th century. Other churches in the Anglican Communion around the world (e.g., the Episcopal Church in the United States, and the Anglican Church of Canada) still function under their own private systems of canon law. In 2002 a Legal Advisors Consultation meeting at Canterbury concluded: (1) There are principles of canon law common to the churches within the Anglican Communion; (2) Their existence can be factually established; (3) Each province or church contributes through its own legal system to the principles of canon law common within the Communion; (4) These principles have strong persuasive authority and are fundamental to the self-understanding of each of the member churches; (5) These principles have a living force, and contain within themselves the possibility for further development; and (6) The existence of the principles both demonstrates and promotes unity in the Communion. In Presbyterian and Reformed churches, canon law is known as "practice and procedure" or "church order", and includes the church's laws respecting its government, discipline, legal practice and worship. Roman canon law had been criticized by the Presbyterians as early as 1572 in the Admonition to Parliament. The protest centered on the standard defense that canon law could be retained so long as it did not contradict the civil law. 
According to Polly Ha, the Reformed Church Government refuted this, claiming that the bishops had been enforcing canon law for 1500 years. The Book of Concord is the historic doctrinal statement of the Lutheran Church, consisting of ten credal documents recognized as authoritative in Lutheranism since the 16th century. However, the Book of Concord is a confessional document (stating orthodox belief) rather than a book of ecclesiastical rules or discipline, like canon law. Each Lutheran national church establishes its own system of church order and discipline, though these are referred to as "canons." The Book of Discipline contains the laws, rules, policies and guidelines for The United Methodist Church. Its most recent edition was published in 2016.
https://en.wikipedia.org/wiki?curid=6469
Columbanus Columbanus (540 – 21 November 615), also known as St. Columban, was an Irish missionary notable for founding a number of monasteries from around 590 in the Frankish and Lombard kingdoms, most notably Luxeuil Abbey in present-day France and Bobbio Abbey in present-day Italy. He is remembered as a key figure in the Hiberno-Scottish mission, or Irish missionary activity in early medieval Europe. In recent years, however, as Columbanus' deeds and legacy have been re-examined by historians, the traditional narrative of his career has been challenged, and doubts have been raised about his actual involvement in missionary work and about the extent to which he was driven by purely religious motives or also by a desire to play an active part in secular and church politics in Francia. Columbanus taught an Irish monastic rule and penitential practices for those repenting of sins, which emphasised private confession to a priest, followed by penances levied by the priest in reparation for the sins. Columbanus is one of the earliest identifiable Hiberno-Latin writers. Most of what we know about Columbanus is based on his own works (as far as they have been preserved) and Jonas of Susa's "Vita Columbani" ("Life of Columbanus"), which was written between 639 and 641. Jonas entered Bobbio after Columbanus' death but relied on the reports of monks who had known Columbanus. A description of the miracles of Columbanus written by an anonymous monk of Bobbio is of a much later date. In the second volume of his "Acta Sanctorum O.S.B.", Mabillon gives the life in full, together with an appendix on the miracles of the saint, written by an anonymous member of the Bobbio community. Columbanus (the Latinised form of "Columbán", meaning "the white dove") was born in the Kingdom of Meath, now part of Leinster, in Ireland in 540, the year Saint Benedict died at Monte Cassino. 
Prior to his birth, his mother was said to have had visions of bearing a child who, in the judgment of those interpreting the visions, would become a "remarkable genius". Columbanus was well-educated in the areas of grammar, rhetoric, geometry, and the Holy Scriptures. Columbanus left home to study under Sinell, Abbot of Cleenish in Lough Erne. Under Sinell's instruction, Columbanus composed a commentary on the Psalms. He then moved to Bangor Abbey on the coast of Down, where Saint Comgall was serving as the abbot. He stayed at Bangor until his fortieth year, when he received Comgall's permission to travel to the continent. Columbanus gathered twelve companions for his journey—Saint Attala, Columbanus the Younger, Cummain, Domgal (Deicolus), Eogain, Eunan, Saint Gall, Gurgano, Libran, Lua, Sigisbert, and Waldoleno—and together they set sail for the continent. After a brief stop in Britain, most likely on the Scottish coast, they crossed the channel and landed in Brittany in 585. At Saint-Malo in Brittany, there is a granite cross bearing the saint's name to which people once came to pray for rain in times of drought. The nearby village of Saint-Coulomb commemorates him in name. Columbanus and his companions were received with favour by King Gontram of Burgundy, and soon they made their way to Annegray, where they founded a monastery in an abandoned Roman fortress. Despite its remote location in the Vosges Mountains, the community became a popular pilgrimage site that attracted so many monastic vocations that two new monasteries had to be formed to accommodate them. In 590, Columbanus obtained from King Gontram the Gallo-Roman castle called "Luxovium" in present-day Luxeuil-les-Bains, some eight miles from Annegray. The castle, soon transformed into a monastery, was located in a wild region, thickly covered with pine forests and brushwood. Columbanus erected a third monastery called "Ad-fontanas" at present-day Fontaine-lès-Luxeuil, named for its numerous springs. 
These monastic communities remained under Columbanus' authority, and their rules of life reflected the Irish tradition in which he had been formed. As these communities expanded and drew more pilgrims, Columbanus sought greater solitude, spending periods of time in a hermitage and communicating with the monks through an intermediary. Often he would withdraw to a cave seven miles away, with a single companion who acted as messenger between himself and his companions. During his twenty years in Gaul (in present-day France), Columbanus became involved in a dispute with the Frankish bishops who may have feared his growing influence. During the first half of the sixth century, the councils of Gaul had given to bishops absolute authority over religious communities. As heirs to the Irish monastic tradition, Columbanus and his monks used the Irish Easter calculation, a version of Bishop Augustalis's 84-year "computus" for determining the date of Easter (Quartodecimanism), whereas the Franks had adopted the Victorian cycle of 532 years. The bishops objected to the newcomers' continued observance of their own dating, which — among other issues — caused the end of Lent to differ. They also complained about the distinct Irish tonsure. In 602, the bishops assembled to judge Columbanus, but he did not appear before them as requested. Instead, he sent a letter to the prelates — a strange mixture of freedom, reverence, and charity — admonishing them to hold synods more frequently, and advising them to pay more attention to matters of equal importance to that of the date of Easter. In defence of his following his traditional paschal cycle, he wrote: When the bishops refused to abandon the matter, Columbanus, following St Patrick's canon, appealed directly to Pope Gregory I. 
In the third and only surviving letter, he asks "the holy Pope, his Father" to provide "the strong support of his authority" and to render a "verdict of his favour", apologising for "presuming to argue as it were, with him who sits in the chair of Peter, Apostle and Bearer of the Keys". None of the letters were answered, most likely due to the pope's death in 604. Columbanus then sent a letter to Gregory's successor, Pope Boniface IV, asking him to confirm the tradition of his elders — if it is not contrary to the Faith — so that he and his monks could follow the rites of their ancestors. Before Boniface responded, Columbanus had moved outside the jurisdiction of the Frankish bishops. As the Easter controversy appears to have ended around that time, Columbanus may have stopped celebrating the Irish date of Easter after moving to Italy. Columbanus was also involved in a dispute with members of the Frankish royal family. Upon the death of King Gontram of Burgundy, the succession passed to his nephew, Childebert II, the son of his brother Sigebert and Sigebert's wife Brunhilda of Austrasia. When Childebert II died, he left two sons: Theuderic II, who inherited the Kingdom of Burgundy, and Theudebert II, who inherited the Kingdom of Austrasia. As both were minors, Brunhilda, their grandmother, declared herself their guardian and controlled the governments of the two kingdoms. Theuderic II venerated Columbanus and often visited him, but the saint admonished and rebuked him for his behaviour. When Theuderic began living with a mistress, the saint objected, earning the displeasure of Brunhilda, who thought a royal marriage would threaten her own power. Columbanus did not spare the demoralised court, and Brunhilda became his bitterest foe. Angered by Columbanus' stance, Brunhilda stirred up the bishops and nobles to find fault with his monastic rules. 
When Theuderic II finally confronted Columbanus at Luxeuil, ordering him to conform to the country's conventions, the saint refused and was then taken prisoner to Besançon. Columbanus managed to escape his captors and returned to his monastery at Luxeuil. When the king and his grandmother found out, they sent soldiers to drive him back to Ireland by force, separating him from his monks by insisting that only those from Ireland could accompany him into exile. Columbanus was taken to Nevers, then travelled by boat down the Loire river to the coast. At Tours he visited the tomb of Saint Martin, and sent a message to Theuderic II indicating that within three years he and his children would perish. When he arrived at Nantes, he wrote a letter before embarkation to his fellow monks at Luxeuil monastery. The letter urged his brethren to obey Attala, who stayed behind as abbot of the monastic community. Soon after the ship set sail from Nantes, a severe storm drove the vessel back ashore. Convinced that his holy passenger caused the tempest, the captain refused further attempts to transport the monk. Columbanus made his way across Gaul to visit King Chlothar II of Neustria at Soissons, where he was gladly received. Despite the king's offers to stay in his kingdom, Columbanus left Neustria in 611 for the court of King Theudebert II of Austrasia in the northeastern part of the Kingdom of the Merovingian Franks. Columbanus travelled to Metz, where he received an honourable welcome, and then proceeded to Mainz, from where he sailed up the Rhine to the lands of the Suebi and Alemanni in the northern Alps, intending to preach the Gospel to these people. He followed the Rhine river and its tributaries, the Aar and the Limmat, and then on to Lake Zurich. Columbanus chose the village of Tuggen as his initial community, but the work was not successful. He continued north-east by way of Arbon to Bregenz on Lake Constance. 
Here the saint found an oratory dedicated to Saint Aurelia containing three brass images of the local tutelary deities. Columbanus commanded Gallus, who knew the local language, to preach to the inhabitants, and many were converted. The three brass images were destroyed, and Columbanus blessed the little church, placing the relics of Saint Aurelia beneath the altar. A monastery was erected, Mehrerau Abbey, and the brethren observed their regular life. Columbanus stayed in Bregenz for about one year. Following an uprising against the community, possibly related to that region being taken over by the saint's old enemy King Theuderic II, Columbanus resolved to cross the Alps into Italy. Gallus remained in this area and died there in 646. About seventy years later, the Monastery of Saint Gall was founded at the site of Gallus' cell; the monastery in turn gave rise, some three hundred years after that, to the city of St. Gallen. Columbanus arrived in Milan in 612 and was warmly greeted by King Agilulf and Queen Theodelinda of the Lombards. He immediately began refuting the teachings of Arianism, which had enjoyed a degree of acceptance in Italy. He wrote a treatise against Arianism, which has since been lost. Queen Theodelinda, the devout daughter of Duke Garibald I of Bavaria, played an important role in restoring Nicene Christianity to a position of primacy against Arianism, and was largely responsible for the king's conversion to Christianity. At the king's request, Columbanus wrote a letter to Pope Boniface IV on the controversy over the "Three Chapters"—writings by Syrian bishops suspected of Nestorianism, which had been condemned in the fifth century as heresy. Pope Gregory I had tolerated in Lombardy those persons who defended the "Three Chapters", among them King Agilulf. Columbanus agreed to take up the issue on behalf of the king. The letter begins with an apology that a "foolish Scot ("Scottus", Irishman)" would be writing for a Lombard king. 
After acquainting the pope with the imputations brought against him, he entreats the pontiff to prove his orthodoxy and assemble a council. He writes that his freedom of speech is consistent with the custom of his country. Some of the language used in the letter might now be regarded as disrespectful, but in that time, faith and austerity could be more indulgent. At the same time, the letter expresses the most affectionate and impassioned devotion to the Holy See. If Columbanus' zeal for orthodoxy caused him to overstep the limits of discretion, his real attitude towards Rome is sufficiently clear: he calls the pope "his Lord and Father in Christ", the "Chosen Watchman", and the "First Pastor, set higher than all mortals". King Agilulf gave Columbanus a tract of land called Bobbio between Milan and Genoa near the Trebbia river, situated in a defile of the Apennine Mountains, to be used as a base for the conversion of the Lombard people. The area contained a ruined church and wastelands known as "Ebovium", which had formed part of the lands of the papacy prior to the Lombard invasion. Columbanus wanted this secluded place, for, while enthusiastic in the instruction of the Lombards, he preferred solitude for his monks and himself. Next to the little church, which was dedicated to Saint Peter, Columbanus erected a monastery in 614. Bobbio Abbey at its foundation followed the Rule of Saint Columbanus, based on the monastic practices of Celtic Christianity. For centuries it remained the stronghold of orthodoxy in northern Italy. During the last year of his life, Columbanus received messages from King Chlothar II, inviting the saint to return to Burgundy, now that his enemies were dead. Columbanus did not return, but requested that the king should always protect his monks at Luxeuil Abbey. He prepared for death by retiring to his cave on the mountainside overlooking the Trebbia river, where, according to a tradition, he had dedicated an oratory to Our Lady. 
Columbanus died at Bobbio on 21 November 615. The Rule of Saint Columbanus embodied the customs of Bangor Abbey and other Irish monasteries. Much shorter than the Rule of Saint Benedict, the Rule of Saint Columbanus consists of ten chapters, on the subjects of obedience, silence, food, poverty, humility, chastity, choir offices, discretion, mortification, and perfection. In the first chapter, Columbanus introduces the great principle of his Rule: obedience, absolute and unreserved. The words of seniors should always be obeyed, just as "Christ obeyed the Father up to death for us." One manifestation of this obedience was constant hard labour designed to subdue the flesh, exercise the will in daily self-denial, and set an example of industry in cultivation of the soil. The least deviation from the Rule entailed corporal punishment, or a severe form of fasting. In the second chapter, Columbanus instructs that the rule of silence be "carefully observed", since it is written: "But the nurture of righteousness is silence and peace". He also warns, "Justly will they be damned who would not say just things when they could, but preferred to say with garrulous loquacity what is evil ..." In the third chapter, Columbanus instructs, "Let the monks' food be poor and taken in the evening, such as to avoid repletion, and their drink such as to avoid intoxication, so that it may both maintain life and not harm ..." Columbanus continues: In the fourth chapter, Columbanus presents the virtue of poverty and of overcoming greed, and that monks should be satisfied with "small possessions of utter need, knowing that greed is a leprosy for monks". Columbanus also instructs that "nakedness and disdain of riches are the first perfection of monks, but the second is the purging of vices, the third the most perfect and perpetual love of God and unceasing affection for things divine, which follows on the forgetfulness of earthly things. 
Since this is so, we have need of few things, according to the word of the Lord, or even of one." In the fifth chapter, Columbanus warns against vanity, reminding the monks of Jesus' warning in Luke 16:15: "You are the ones who justify yourselves in the eyes of others, but God knows your hearts. What people value highly is detestable in God's sight." In the sixth chapter, Columbanus instructs that "a monk's chastity is indeed judged in his thoughts" and warns, "What profit is it if he be virgin in body, if he be not virgin in mind? For God, being Spirit." In the seventh chapter, Columbanus instituted a service of perpetual prayer, known as "laus perennis", by which choir succeeded choir, both day and night. In the eighth chapter, Columbanus stresses the importance of discretion in the lives of monks to avoid "the downfall of some, who beginning without discretion and passing their time without a sobering knowledge, have been unable to complete a praiseworthy life." Monks are instructed to pray to God to "illumine this way, surrounded on every side by the world's thickest darkness". Columbanus continues: In the ninth chapter, Columbanus presents mortification as an essential element in the lives of monks, who are instructed, "Do nothing without counsel." Monks are warned to "beware of a proud independence, and learn true lowliness as they obey without murmuring and hesitation." According to the Rule, there are three components to mortification: "not to disagree in mind, not to speak as one pleases with the tongue, not to go anywhere with complete freedom." This mirrors the words of Jesus, "For I have come down from heaven not to do my will but to do the will of him who sent me." (John 6:38) In the tenth and final chapter, Columbanus regulates forms of penance (often corporal) for offences, and it is here that the Rule of Saint Columbanus differs significantly from that of Saint Benedict. 
The Communal Rule of Columbanus required monks to fast every day until "None", or 3 p.m.; this was later relaxed so that the fast was kept only on designated days. Columbanus' Rule regarding diet was very strict. Monks were to eat a limited diet of beans, vegetables, flour mixed with water, and a small loaf of bread, taken in the evening. The habit of the monks consisted of a tunic of undyed wool, over which was worn the cuculla, or cowl, of the same material. A great deal of time was devoted to various kinds of manual labour, not unlike the life in monasteries of other rules. The Rule of Saint Columbanus was approved by the Fourth Council of Mâcon in 627, but it was superseded at the close of the century by the Rule of Saint Benedict. For several centuries in some of the greater monasteries the two rules were observed conjointly. Columbanus did not lead a perfect life. According to Jonas and other sources, he could be impetuous and even headstrong, for by nature he was eager, passionate, and dauntless. These qualities were both the source of his power and the cause of his mistakes. His virtues, however, were quite remarkable. Like many saints, he had a great love for God's creatures. Stories claim that as he walked in the woods, it was not uncommon for birds to land on his shoulders to be caressed, or for squirrels to run down from the trees and nestle in the folds of his cowl. Although a strong defender of Irish traditions, he never wavered in showing deep respect for the Holy See as the supreme authority. His influence in Europe was due to the conversions he effected and to the rule that he composed. It may be that the example and success of Saint Columba in Caledonia inspired him to similar exertions. The life of Columbanus stands as the prototype of missionary activity in Europe, followed by such men as Saint Kilian, Vergilius of Salzburg, Donatus of Fiesole, Wilfrid, Willibrord, Suitbert of Kaiserwerdt, Saint Boniface, and Ursicinus of Saint-Ursanne. 
Among the principal miracles attributed to his intercession, Jonas relates one that occurred during Columbanus' time in Bregenz, when that region was experiencing a period of severe famine. One historian states that Columbanus had a "very strong sense of Irish identity ... He's the first person to write about Irish identity, he's the first Irish person that we have a body of literary work from, so even on that point of view he's very important in terms of Irish identity." In 1950 a congress celebrating the 1400th anniversary of his birth took place in Luxeuil, France. It was attended by Robert Schuman, Sean MacBride, the future Pope John XXIII, and John A. Costello, who said: "All statesmen of today might well turn their thoughts to St Columban and his teaching. History records that it was by men like him that civilisation was saved in the 6th century." Columbanus is also remembered as the first Irish person to be the subject of a biography: an Italian monk named Jonas of Bobbio wrote a biography of him some 20 years after Columbanus' death. His use of the phrase "totius Europae" ("all of Europe"), in a letter written around 600 AD to Pope Gregory the Great, is the first known use of the expression. In France, the ruins of Columbanus' first monastery at Annegray are legally protected through the efforts of the Association Internationale des Amis de St Columban, which purchased the site in 1959. The association also owns and protects the site containing the cave, which acted as Columbanus' cell, and the holy well, which he created nearby. At Luxeuil-les-Bains, the Basilica of Saint Peter stands on the site of Columbanus' first church. A statue near the entrance, unveiled in 1947, shows him denouncing the immoral life of King Theuderic II. Formerly an abbey church, the basilica contains old monastic buildings, which have been used as a minor seminary since the nineteenth century. It is dedicated to Columbanus and houses a bronze statue of him in its courtyard. 
In Lombardy, San Colombano al Lambro in Milan, San Colombano Belmonte in Turin, and San Colombano Certénoli in Genoa all take their names from the saint. The last monastery erected by Columbanus, at Bobbio, remained for centuries the stronghold of orthodoxy in northern Italy. If Bobbio Abbey in Italy became a citadel of faith and learning, Luxeuil Abbey in France became the "nursery of saints and apostles". The monastery produced sixty-three apostles who carried his rule, together with the Gospel, into France, Germany, Switzerland, and Italy. These disciples of Columbanus are credited with founding over one hundred different monasteries. The canton and town still bearing the name of St. Gallen testify to how well one of his disciples succeeded. The Missionary Society of Saint Columban, founded in 1916, and the Missionary Sisters of St. Columban, founded in 1924, are both dedicated to Columbanus. The remains of Columbanus are preserved in the crypt at Bobbio Abbey. Many miracles have been credited to his intercession. In 1482, the relics were placed in a new shrine and laid beneath the altar of the crypt. The sacristy at Bobbio possesses a portion of the skull of the saint, his knife, wooden cup, bell, and an ancient water vessel, formerly containing sacred relics and said to have been given to him by Pope Gregory I. According to some authorities, twelve teeth of the saint were taken from the tomb in the fifteenth century and kept in the treasury, but these have since disappeared. Columbanus is named in the Roman Martyrology on 23 November, which is his feast day in Ireland. His feast is observed by the Benedictines on 21 November. Columbanus is the patron saint of motorcyclists. In art, Columbanus is represented as a bearded monk wearing the cowl, holding in his hand a book in an Irish satchel, and standing in the midst of wolves. Sometimes he is depicted in the attitude of taming a bear, or with sun-beams over his head.
https://en.wikipedia.org/wiki?curid=6501
Concord, New Hampshire Concord is the capital city of the U.S. state of New Hampshire and the county seat of Merrimack County. As of the 2010 census, its population was 42,695, and in 2019 the population was an estimated 43,627. The village of Penacook, where Concord was initially settled, lies at the northern boundary of the city limits. The city is home to the University of New Hampshire School of Law, New Hampshire's only law school; St. Paul's School, a private preparatory school; NHTI, a two-year community college; the New Hampshire Police Academy; and the New Hampshire Fire Academy. It is the resting place of Franklin Pierce, 14th President of the United States. The area that would become Concord was originally settled thousands of years ago by Abenaki Native Americans called the Pennacook. The tribe fished for migrating salmon, sturgeon, and alewives with nets strung across the rapids of the Merrimack River. The stream was also the transportation route for their birch bark canoes, which could travel from Lake Winnipesaukee to the Atlantic Ocean. The broad sweep of the Merrimack River valley floodplain provided good soil for farming beans, gourds, pumpkins, melons and maize. The area was first settled in 1659 as "Penacook". On January 17, 1725, the Province of Massachusetts Bay, which then claimed territories west of the Merrimack River, granted the Concord area as the Plantation of Penacook. It was settled between 1725 and 1727 by Captain Ebenezer Eastman and others from Haverhill, Massachusetts. On February 9, 1734, the town was incorporated as "Rumford", from which Sir Benjamin Thompson, Count Rumford, would take his title. It was renamed "Concord" in 1765 by Governor Benning Wentworth following a bitter boundary dispute between Rumford and the town of Bow; the city name was meant to reflect the new concord, or harmony, between the disputant towns. Citizens displaced by the resulting border adjustment were given land elsewhere as compensation. 
In 1779, New Pennacook Plantation was granted to Timothy Walker Jr. and his associates at what would be incorporated in 1800 as Rumford, Maine, the site of Pennacook Falls. Concord grew in prominence throughout the 18th century, and some of the earliest houses from this period survive at the northern end of Main Street. In the years following the Revolution, Concord's central geographical location made it a logical choice for the state capital, particularly after Samuel Blodget in 1807 opened a canal and lock system to allow vessels passage around the Amoskeag Falls downriver, connecting Concord with Boston by way of the Middlesex Canal. In 1808, Concord was named the official seat of state government. The 1819 State House is the oldest capitol in the nation in which the state's legislative branches meet in their original chambers. The city would become noted for furniture-making and granite quarrying. In 1828, Lewis Downing joined J. Stephens Abbot to form Abbot and Downing. Their most famous product was their Concord coach, widely used in the development of the American West. In the 19th century, Concord became a hub for the railroad industry, with Penacook a textile manufacturing center using water power from the Contoocook River. Today, the city is a center for health care and several insurance companies. Concord is located at (43.2070, −71.5371). According to the United States Census Bureau, the city has a total area of . of it is land and of it is water, comprising 4.79% of the city. Concord is drained by the Merrimack River. Penacook Lake is in the west. The highest point in Concord is above sea level on Oak Hill, just west of the hill's summit in neighboring Loudon. Concord lies fully within the Merrimack River watershed, and is centered on the river, which runs from northwest to southeast through the city. 
Downtown is located on a low terrace to the west of the river, with residential neighborhoods climbing hills to the west and extending southwards towards the town of Bow. To the east of the Merrimack, atop a bluff, is a flat, sandy plain known as Concord Heights, which has seen most of the city's commercial development since 1960. The eastern boundary of Concord (with the town of Pembroke) is formed by the Soucook River, a tributary of the Merrimack. The Turkey River winds through the southwestern quarter of the city, passing through the campus of St. Paul's School before entering the Merrimack River in Bow. In the northern part of the city, the Contoocook River enters the Merrimack at the village of Penacook. The city is north of Manchester, New Hampshire's largest city, and north of Boston. The city of Concord is split into distinct villages with which residents identify within the boundaries of the city itself. These five villages are Penacook, the Concord Heights, East Concord, West Concord, and the downtown neighborhoods referred to as the North End and the South End. Concord, as with much of New England, is within the humid continental climate zone (Köppen "Dfb"), with long, cold, snowy winters, very warm (and at times humid) summers, and relatively brief autumns and springs. In winter, successive storms deliver light to moderate snowfall amounts, contributing to the relatively reliable snow cover. In addition, lows reach at least on an average of 15 nights per year, and the city straddles the border between USDA Hardiness Zone 5b and 6a. However, thaws are frequent, with one to three days per month with + highs from December to February. Summer can bring stretches of humid conditions as well as thunderstorms, and there is an average of 12 days of + highs annually. The window for freezing temperatures on average begins on September 27 and expires on May 14. The monthly daily average temperatures range from in January to in July. 
Temperature extremes have ranged from in February 1943 to in July 1966. As of the census of 2010, there were 42,695 people, 17,592 households, and 10,052 families residing in the city. The population density was 632.5 people per square mile (244.2/km²). There were 18,852 housing units at an average density of 293.2 per square mile (113.2/km²). The racial makeup of the city was 91.8% White, 2.2% Black or African American, 0.3% Native American, 3.4% Asian, 0.0% Pacific Islander, 0.4% from some other race, and 1.8% from two or more races. 2.1% of the population were Hispanic or Latino of any race. There were 17,592 households out of which 28.7% had children under the age of 18 living with them, 41.3% were headed by married couples living together, 11.6% had a female householder with no husband present, and 42.9% were non-families. 33.6% of all households were made up of individuals, and 12.0% were someone living alone who was 65 years of age or older. The average household size was 2.26, and the average family size was 2.90. In the city, the population was spread out with 20.7% under the age of 18, 9.3% from 18 to 24, 28.0% from 25 to 44, 28.2% from 45 to 64, and 13.8% who were 65 years of age or older. The median age was 39.4 years. For every 100 females, there were 98.5 males. For every 100 females age 18 and over, there were 96.9 males. For the period 2009–11, the estimated median annual income for a household in the city was $52,695, and the median income for a family was $73,457. Male full-time workers had a median income of $49,228 versus $38,782 for females. The per capita income for the city was $29,296. About 5.5% of families and 10.1% of the population were below the poverty line, including 8.4% of those under age 18 and 5.5% of those age 65 or over. 
In 2019, according to Concord's 2019 Comprehensive Annual Financial Report, the top employers in the city were: Interstate 89 and Interstate 93 are the two main interstate highways serving Concord, and join just south of the city limits. Interstate 89 links Concord with Lebanon and the state of Vermont to the northwest, while Interstate 93 connects the city to Plymouth, Littleton, and the White Mountains to the north and Manchester and Boston to the south. Interstate 393 is a spur highway leading east from Concord and merging with U.S. Route 4 as a direct route to New Hampshire's Seacoast region. North-south U.S. Route 3 serves as Concord's Main Street, while U.S. Route 202 and New Hampshire Route 9 cross the city from east to west. State routes 13 and 132 also serve the city: Route 13 leads southwest out of Concord towards Goffstown and Milford, while Route 132 travels north parallel to Interstate 93. New Hampshire Route 106 passes through the easternmost part of Concord, crossing I-393 and NH 9 before crossing the Soucook River south into the town of Pembroke. To the north, NH 106 leads to Loudon, Belmont, and Laconia. Local bus service is provided by Concord Area Transit (CAT), with three routes through the city. Regional bus service provided by Concord Coach Lines and Greyhound Lines is available from the Concord Transportation Center at 30 Stickney Avenue next to Exit 14 on Interstate 93, with service south to Boston and points in between, as well as north to Littleton and northeast to Berlin. There is no passenger rail service to Concord. General aviation services are available through Concord Municipal Airport, located east of downtown. There is no commercial air service within the city limits; the nearest such airport is Manchester–Boston Regional Airport, located to the south. Concord is governed via the council-manager system. 
The city council consists of 14 members, ten of which are elected from single-member wards, while the other four are elected at large. The mayor is elected directly every two years. The current mayor is Jim Bouley. According to the Concord city charter, the mayor chairs the council (composed of 15 members, including the mayor). However, the mayor has very few formal powers over the day-to-day management of the city. The actual operations of the city are overseen by the city manager, currently Thomas J. Aspell, Jr. The current police chief is Bradley S. Osgood. In the New Hampshire Senate, Concord is in the 15th District, represented by Democrat Dan Feltes. On the New Hampshire Executive Council, Concord is in the 2nd District, represented by Democrat Andru Volinsky. In the United States House of Representatives, Concord is in New Hampshire's 2nd congressional district, represented by Democrat Ann McLane Kuster. New Hampshire Department of Corrections operates the New Hampshire State Prison for Men and New Hampshire State Prison for Women in Concord. Newspapers Radio The city is otherwise served by . New Hampshire Public Radio is headquartered in Concord. Television The New Hampshire State House, designed by architect Stuart Park and constructed between 1815 and 1818, is the oldest state house in which the legislature meets in its original chambers. The building was remodeled in 1866, and the third story and west wing were added in 1910. Across from the State House is the Eagle Hotel on Main Street, which has been a downtown landmark since its opening in 1827. U.S. Presidents Ulysses S. Grant, Rutherford Hayes, and Benjamin Harrison all dined there, and Franklin Pierce spent the night before departing for his inauguration. Other well-known guests included Jefferson Davis, Charles Lindbergh, Eleanor Roosevelt, Richard M. Nixon (who carried New Hampshire in all three of his presidential bids), and Thomas E. Dewey. The hotel closed in 1961. 
South from the Eagle Hotel on Main Street is Phenix Hall, which replaced "Old" Phenix Hall, which burned in 1893. Both the old and new buildings featured multi-purpose auditoriums used for political speeches, theater productions, and fairs. Abraham Lincoln spoke at the old hall in 1860; Theodore Roosevelt, at the new hall in 1912. North on Main Street is the Walker-Woodman House, also known as the Reverend Timothy Walker House, the oldest standing two-story house in Concord. It was built for the Reverend Timothy Walker between 1733 and 1735. On the north end of Main Street is the Pierce Manse, in which President Franklin Pierce lived in Concord before and following his presidency. The mid-1830s Greek Revival house was moved from Montgomery Street to North Main Street in 1971 to prevent its demolition. Beaver Meadow Golf Course, located in the northern part of Concord, is one of the oldest golf courses in New England. Besides this golf course, other important sporting venues in Concord include Everett Arena and Memorial Field. The SNOB (Somewhat North Of Boston) Film Festival, started in the fall of 2002, brings independent films and filmmakers to Concord and has provided an outlet for local filmmakers to display their films. SNOB Film Festival was a catalyst for the building of Red River Theatres, a locally owned, nonprofit, independent cinema in 2007. The SNOB Film Festival is one of the many arts organizations in the city. Other sites of interest include the Capitol Center for the Arts, the New Hampshire Historical Society, which has two facilities in Concord, and the McAuliffe-Shepard Discovery Center, a science museum named after Christa McAuliffe, the Concord teacher who died during the Space Shuttle Challenger disaster in 1986, and Alan Shepard, the Derry-born astronaut who was the second person and first American in space as well as the fifth and oldest person to walk on the Moon. 
Concord's public schools are within the Concord School District, except for schools in the Penacook area of the city, which are within the Merrimack Valley School District, a district which also includes several towns north of Concord. The only public high school in the Concord School District is Concord High School, which has about 2,000 students. The only public middle school in the Concord School District is Rundlett Middle School, which has roughly 1,500 students. Concord School District's elementary schools underwent a major re-configuration in 2012, with three newly constructed schools opening and replacing six previous schools. Kimball School and Walker School were replaced by Christa McAuliffe School on the Kimball School site, Conant School (and Rumford School, which closed a year earlier) were replaced by Abbot-Downing School at the Conant site, and Eastman and Dame schools were replaced by Mill Brook School, serving kindergarten through grade two, located next to Broken Ground Elementary School, serving grades three to five. Beaver Meadow School, the remaining elementary school, was unaffected by the changes. Concord schools in the Merrimack Valley School District include Merrimack Valley High School and Merrimack Valley Middle School, which are adjacent to each other and to Rolfe Park in Penacook village, and Penacook Elementary School, just south of the village. Concord has two parochial schools, Bishop Brady High School and Saint John Regional School. Other area private schools include Concord Christian Academy, Parker Academy, Trinity Christian School, Shaker Road School, and St. Paul's School. Concord is also home to NHTI, Concord's Community College, Granite State College, the University of New Hampshire School of Law, and the Franklin Pierce University Doctorate of Physical Therapy program.
https://en.wikipedia.org/wiki?curid=6503
Chlorophyceae The Chlorophyceae are one of the classes of green algae, distinguished mainly on the basis of ultrastructural morphology. For example, the chlorophycean CW clade and chlorophycean DO clade are defined by the arrangement of their flagella. Members of the CW clade have flagella that are displaced in a "clockwise" (CW, 1–7 o'clock) direction, e.g. Chlamydomonadales. Members of the DO clade have flagella that are "directly opposed" (DO, 12–6 o'clock), e.g. Sphaeropleales. They are usually green due to the dominance of the pigments chlorophyll a and chlorophyll b. The chloroplast may be discoid, plate-like, reticulate, cup-shaped, spiral or ribbon-shaped in different species. Most of the members have one or more storage bodies called pyrenoids located in the chloroplast. Pyrenoids contain protein besides starch. Some algae may store food in the form of oil droplets. Green algae usually have a rigid cell wall made up of an inner layer of cellulose and an outer layer of pectose. Vegetative reproduction usually takes place by fragmentation. Asexual reproduction occurs by flagellated zoospores, as well as by haplospores and by perennation (akinetes and the palmella stage); asexual reproduction by mitospores is absent in "Spirogyra". Sexual reproduction shows considerable variation in the type and formation of sex cells, and it may be isogamous, e.g. "Chlamydomonas, Ulothrix, Spirogyra", anisogamous, e.g. "Chlamydomonas, Eudorina", or oogamous, e.g. "Chlamydomonas, Volvox". "Chlamydomonas" has all three types of sexual reproduction. They share many similarities with the higher plants, including the presence of asymmetrical flagellated cells, the breakdown of the nuclear envelope at mitosis, and the presence of phytochromes, flavonoids, and the chemical precursors to the cuticle. In "Chlorella", the sole method of reproduction is asexual and autosporic: the contents of the cell divide into two, four, eight, or sometimes more daughter protoplasts. Each daughter protoplast rounds off to form a non-motile spore. 
These autospores (spores having the same distinctive shape as the parent cell) are liberated by the rupture of the parent cell wall. On release, each autospore grows to become a new individual. The presence of sulphur in the culture medium is considered essential for cell division; division takes place even in the dark with sulphur alone as the source material, but under light conditions nitrogen is also required. Pearsall and Loose (1937) reported the occurrence of motile cells in "Chlorella". Bendix (1964) also observed that "Chlorella" produces motile cells which might be gametes. These observations have an important bearing on the concept of the life cycle of "Chlorella," which at present is considered to be strictly asexual in character. Asexual reproduction in "Chlorella ellipsoides" has been studied in detail, and the following four phases have been observed during the asexual reproduction. (i) Growth phase- During this phase the cells grow in size by utilizing the photosynthetic products. (ii) Ripening phase- In this phase the cells mature and prepare themselves for division. (iii) Post-ripening phase- During this phase, each mature cell divides twice, either in dark or in light; the cells formed in the dark grow again in size on transfer to light. (iv) Division phase- During this phase the parent cell wall ruptures and the unicells are released. The following orders are typically recognised. In older classifications, the term Chlorophyceae is sometimes used to apply to all the green algae except the Charales, and the internal division is considerably different. The orders of the Chlorophyceae are those listed in Hoek, Mann and Jahns (1995).
https://en.wikipedia.org/wiki?curid=6505
Computational complexity In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to time and memory requirements. As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function n → f(n), where n is the size of the input and f(n) is either the worst-case complexity (the maximum of the amount of resources that are needed over all inputs of size n) or the average-case complexity (the average of the amount of resources over all inputs of size n). Time complexity is generally expressed as the number of required elementary operations on an input of size n, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer. Space complexity is generally expressed as the amount of memory required by an algorithm on an input of size n. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity. The usual units of time (seconds, minutes etc.) are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology. For instance, a computer today can execute an algorithm significantly faster than a computer from the 1960s; however, this is not an intrinsic feature of the algorithm but rather a consequence of technological advances in computer hardware. 
Complexity theory seeks to quantify the intrinsic time requirements of algorithms, that is, the basic time constraints an algorithm would place on "any" computer. This is achieved by counting the number of "elementary operations" that are executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called "steps". Another important resource is the size of computer memory that is needed for running algorithms. The number of arithmetic operations is another resource that is commonly used. In this case, one talks of arithmetic complexity. If one knows an upper bound on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the product of the arithmetic complexity by a constant factor. For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the determinant of an n×n integer matrix is O(n^3) for the usual algorithms (Gaussian elimination). The bit complexity of the same algorithms is exponential in n, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with multi-modular arithmetic, the bit complexity may be reduced dramatically. In sorting and searching, the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized. It is impossible to count the number of steps of an algorithm on all possible inputs. 
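The gap between arithmetic complexity and bit complexity can be seen concretely. The sketch below (illustrative code, not from the source; the helper name is invented) runs Gaussian elimination over exact rationals on small random integer matrices with single-digit entries and records the largest numerator or denominator bit length that appears; the entries grow well beyond the 4 bits of the inputs as n increases, even though the number of arithmetic operations is only O(n^3).

```python
from fractions import Fraction
import random

def elimination_max_bits(a):
    """Gaussian elimination over exact rationals; returns the largest
    bit length of any numerator or denominator that appears."""
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    max_bits = 0
    for k in range(n):
        # partial pivoting: pick the first row with a nonzero pivot
        piv = next((i for i in range(k, n) if a[i][k] != 0), None)
        if piv is None:
            continue
        a[k], a[piv] = a[piv], a[k]
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
                max_bits = max(max_bits,
                               a[i][j].numerator.bit_length(),
                               a[i][j].denominator.bit_length())
    return max_bits

random.seed(0)
for n in (4, 8, 16):
    m = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
    print(n, elimination_max_bits(m))
```

Because `Fraction` reduces by gcd after every operation, the growth shown here is milder than the worst case described in the text, but the trend is already visible on 16×16 inputs.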
As the complexity generally increases with the size of the input, the complexity is typically expressed as a function of the size n (in bits) of the input, and therefore the complexity is a function of n. However, the complexity of an algorithm may vary dramatically for different inputs of the same size. Therefore, several complexity functions are commonly used. The worst-case complexity is the maximum of the complexity over all inputs of size n, and the average-case complexity is the average of the complexity over all inputs of size n (this makes sense, as the number of possible inputs of a given size is finite). Generally, when "complexity" is used without being further specified, it is the worst-case time complexity that is considered. It is generally difficult to compute precisely the worst-case and the average-case complexity. In addition, these exact values are of little practical use, as any change of computer or of model of computation would change the complexity somewhat. Moreover, the resource use is not critical for small values of n, so for small n, ease of implementation is generally more important than good complexity. For these reasons, one generally focuses on the behavior of the complexity for large n, that is, on its asymptotic behavior as n tends to infinity. Therefore, the complexity is generally expressed by using big O notation. 
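The worst-case and average-case definitions can be made concrete with a toy example. The sketch below (illustrative code; the function names are invented) exhaustively enumerates every 0/1 input of size n, counts the comparisons made by a linear search for 0, and reports the maximum (worst case) and the mean (average case):

```python
from itertools import product

def linear_search_steps(seq, target):
    """Count the comparisons a linear search performs on seq."""
    steps = 0
    for x in seq:
        steps += 1
        if x == target:
            break
    return steps

def worst_and_average(n):
    """Worst-case and average-case comparison counts over all 0/1
    inputs of size n (the input space is finite, so the average
    is well defined)."""
    costs = [linear_search_steps(seq, 0) for seq in product((0, 1), repeat=n)]
    return max(costs), sum(costs) / len(costs)

print(worst_and_average(4))   # (4, 1.875)
```

The worst case is always n (the target may be absent), while the average settles near a constant for this particular search, which illustrates why the two measures are reported separately.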
For example, the usual algorithm for integer multiplication has a complexity of O(d^2); this means that there is a constant c_u such that the multiplication of two integers of at most d digits may be done in a time less than c_u·d^2. This bound is "sharp" in the sense that the worst-case complexity and the average-case complexity are Ω(d^2), which means that there is a constant c_l such that these complexities are larger than c_l·d^2. The radix does not appear in these complexities, as changing the radix changes only the constants c_u and c_l. The evaluation of the complexity relies on the choice of a model of computation, which consists in defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally assumed to be a multitape Turing machine. A deterministic model of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines. The model of random access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence. In a non-deterministic model of computation, such as non-deterministic Turing machines, some choices may be made at some steps of the computation. In complexity theory, one considers all possible choices simultaneously, and the non-deterministic time complexity is the time needed when the best choices are always made. 
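Counting elementary operations under such a model can be illustrated with the grade-school multiplication algorithm. The sketch below (the helper name is invented for this example) treats each digit-by-digit product as one elementary step; for two d-digit operands it performs exactly d² of them, independent of what the digits are:

```python
def schoolbook_multiply(x, y):
    """Multiply two non-negative integers with the grade-school
    algorithm, counting digit-by-digit products as elementary steps."""
    xd = [int(c) for c in str(x)[::-1]]   # least significant digit first
    yd = [int(c) for c in str(y)[::-1]]
    result = [0] * (len(xd) + len(yd))
    ops = 0
    for i, a in enumerate(xd):
        for j, b in enumerate(yd):
            ops += 1                      # one elementary digit operation
            result[i + j] += a * b
    # normalize: propagate carries left to right
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    digits = ''.join(map(str, reversed(result))).lstrip('0') or '0'
    return int(digits), ops

for d in (2, 4, 8):
    x = int('9' * d)
    value, ops = schoolbook_multiply(x, x)
    assert value == x * x
    print(d, ops)   # ops == d * d
```

Switching the radix from 10 to, say, 2 or 16 changes how many "digits" a number has and therefore the constant, but the step count remains quadratic in the digit length.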
In other words, one considers that the computation is done simultaneously on as many (identical) processors as needed, and the non-deterministic computation time is the time spent by the first processor that finishes the computation. This parallelism is partly amenable to quantum computing via superposed entangled states in running specific quantum algorithms, such as Shor's factorization, which so far has been applied only to small integers (e.g. 21 = 3 × 7). Even when such a computation model is not realistic yet, it has theoretical importance, mostly related to the P = NP problem, which questions the identity of the complexity classes formed by taking "polynomial time" and "non-deterministic polynomial time" as least upper bounds. Simulating an NP-algorithm on a deterministic computer usually takes "exponential time". A problem is in the complexity class NP if it may be solved in polynomial time on a non-deterministic machine. A problem is NP-complete if, roughly speaking, it is in NP and is not easier than any other NP problem. Many combinatorial problems, such as the Knapsack problem, the travelling salesman problem, and the Boolean satisfiability problem, are NP-complete. For all these problems, the best known algorithm has exponential complexity. If any one of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could also be solved in polynomial time, and one would have P = NP. It is generally conjectured that P ≠ NP, with the practical implication that the worst cases of NP problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades!) for interesting lengths of input. Parallel and distributed computing consist of splitting computation over several processors, which work simultaneously. The difference between the various models lies mainly in the way information is transmitted between processors. 
Typically, in parallel computing the data transmission between processors is very fast, while in distributed computing, the data transmission is done through a network and is therefore much slower. The time needed for a computation on n processors is at least the quotient by n of the time needed by a single processor. In fact this theoretically optimal bound can never be reached, because some subtasks cannot be parallelized, and some processors may have to wait for a result from another processor. The main complexity problem is thus to design algorithms such that the product of the computation time by the number of processors is as close as possible to the time needed for the same computation on a single processor. A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower time complexity using a quantum computer rather than a classical computer. This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer. Quantum complexity theory has been developed to study the complexity classes of problems solved using quantum computers. It is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers. The complexity of a problem is the infimum of the complexities of the algorithms that may solve the problem, including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem. It follows that every complexity that is expressed with big O notation is a complexity of the algorithm as well as of the corresponding problem. 
On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds. For solving most problems, it is required to read all input data, which, normally, needs a time proportional to the size of the data. Thus, such problems have a complexity that is at least linear, that is, using big omega notation, a complexity Ω(n). The solution of some problems, typically in computer algebra and computational algebraic geometry, may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. For example, a system of n polynomial equations of degree d in n indeterminates may have up to d^n complex solutions, if the number of solutions is finite (this is Bézout's theorem). As these solutions must be written down, the complexity of this problem is Ω(d^n). For this problem, an algorithm of complexity d^O(n) is known, which may thus be considered as asymptotically quasi-optimal. A nonlinear lower bound of Ω(n log n) is known for the number of comparisons needed for a sorting algorithm. Thus the best sorting algorithms are optimal, as their complexity is O(n log n). This lower bound results from the fact that there are n! ways of ordering n objects. As each comparison splits this set of orders into two parts, the number N of comparisons that are needed for distinguishing all orders must verify 2^N ≥ n!, which implies N = Ω(n log n), by Stirling's formula. A standard method for getting lower bounds of complexity consists of "reducing" a problem to another problem. More precisely, suppose that one may encode a problem A of size n into a subproblem of size f(n) of a problem B, and that the complexity of A is Ω(g(n)). Without loss of generality, one may suppose that the function f increases with n and has an inverse function h. 
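The information-theoretic argument for sorting can be checked numerically for a small n. The sketch below (illustrative code; the comparison-counting merge sort is invented for this example) computes the lower bound ⌈log2(n!)⌉ and the actual worst-case number of comparisons merge sort performs over all inputs of size 8:

```python
import math
from itertools import permutations

def merge_sort_comparisons(seq):
    """Merge sort that counts element comparisons."""
    count = 0
    def sort(s):
        nonlocal count
        if len(s) <= 1:
            return s
        mid = len(s) // 2
        left, right = sort(s[:mid]), sort(s[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count += 1                      # one comparison
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged += left[i:] + right[j:]
        return merged
    sort(list(seq))
    return count

n = 8
lower_bound = math.ceil(math.log2(math.factorial(n)))   # ceil(log2(n!))
worst = max(merge_sort_comparisons(p) for p in permutations(range(n)))
print(lower_bound, worst)   # prints 16 17
```

For n = 8 the bound says at least 16 comparisons are unavoidable, and merge sort's worst case is 17, within one comparison of optimal at this size.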
Then the complexity of the problem B is Ω(g(h(n))). This is the method that is used to prove that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is Ω(n^k) for every positive integer k. Evaluating the complexity of an algorithm is an important part of algorithm design, as this gives useful information on the performance that may be expected. It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers. This is wrong because this power increase allows working with large input data (big data). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the bibliography of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require O(n^2) comparisons would have to do a trillion comparisons, which would need more than a day at the speed of 10 million comparisons per second. On the other hand, quicksort and merge sort require only O(n log n) comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For n = 1,000,000, this gives approximately 30,000,000 comparisons, which would take only 3 seconds at 10 million comparisons per second. Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. This may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing the effort for improving the efficiency of an implementation on these steps.
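The arithmetic behind this comparison is easy to reproduce. In the sketch below, the 10-million-comparisons-per-second rate comes from the example in the text; the helper name and the exact figures printed are illustrative:

```python
import math

RATE = 10_000_000   # comparisons per second, as assumed in the text

def est_seconds(comparisons):
    """Estimated running time at the assumed comparison rate."""
    return comparisons / RATE

for n in (300, 1_000_000):
    quadratic = n * n                 # elementary sorts: O(n^2) comparisons
    linearithmic = n * math.log2(n)   # merge sort: O(n log n) comparisons
    print(f"n={n}: n^2 -> {est_seconds(quadratic):.4g} s, "
          f"n log n -> {est_seconds(linearithmic):.4g} s")
```

For n = 1,000,000 the quadratic estimate is 10^5 seconds, more than a day, while n·log2(n) gives about 2 seconds; the 30,000,000-comparison figure quoted for real sorting algorithms simply includes a modest constant factor.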
https://en.wikipedia.org/wiki?curid=6511
Coercion Coercion () is the practice of forcing another party to act in an involuntary manner by use of threats or force. It involves a set of various types of forceful actions that violate the free will of an individual to induce a desired response, for example: a bully demanding lunch money from a student, with the threat that the student will otherwise be beaten. These actions may include extortion, blackmail, torture, threats to induce favors, or even sexual assault. In law, coercion is codified as a duress crime. Such actions are used as leverage, to force the victim to act in a way contrary to their own interests. Coercion may involve the actual infliction of physical pain/injury or psychological harm in order to enhance the credibility of a threat. The threat of further harm may lead to the cooperation or obedience of the person being coerced. The purpose of coercion is to substitute one's own aims for those of the victim. For this reason, many social philosophers have considered coercion the polar opposite of freedom. Various forms of coercion are distinguished: first on the basis of the "kind of injury" threatened, second according to its "aims" and "scope", and finally according to its "effects", from which its legal, social, and ethical implications mostly depend. Physical coercion is the most commonly considered form of coercion, where the content of the conditional threat is the use of force against a victim, their relatives or property. An often used example is "putting a gun to someone's head" ("at gunpoint") or putting a "knife under the throat" ("at knifepoint" or cut-throat) to compel action, lest the victim be killed or injured. These are so common that they are also used as metaphors for other forms of coercion. Armed forces in many countries use firing squads to maintain discipline and intimidate the masses, or opposition, into submission or silent compliance. 
However, there also are nonphysical forms of coercion, where the threatened injury does not immediately imply the use of force. Byman and Waxman (2000) define coercion as "the use of threatened force, including the limited use of actual force to back up the threat, to induce an adversary to behave differently than it otherwise would." Coercion does not in many cases amount to destruction of property or life since compliance is the goal. In psychological coercion, the threatened injury regards the victim's relationships with other people. The most obvious example is "blackmail", where the threat consists of the dissemination of damaging information. However, many other types are possible e.g. "emotional blackmail", which typically involves threats of rejection from or disapproval by a peer-group, or creating feelings of guilt/obligation via a display of anger or hurt by someone whom the victim loves or respects. Another example is coercive persuasion. Psychological coercion – along with the other varieties – was extensively and systematically used by the government of the People's Republic of China during the "Thought Reform" campaign of 1951–1952. The process – carried out partly at "revolutionary universities" and partly within prisons – was investigated and reported upon by Robert Jay Lifton, then Research Professor of Psychiatry at Yale University: see Lifton (1961). The techniques used by the Chinese authorities included a technique derived from standard group psychotherapy, which was aimed at forcing the victims (who were generally intellectuals) to produce detailed and sincere ideological "confessions". For instance, a professor of formal logic called Chin Yueh-lin – who was then regarded as China's leading authority on his subject – was induced to write: "The new philosophy [of Marxism-Leninism], being scientific, is the supreme truth" [Lifton (1961) p. 545].
https://en.wikipedia.org/wiki?curid=6512
Client–server model Client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs, which share their resources with clients. A client does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client-server model are email, network printing, and the World Wide Web. The "client-server" characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services. Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a "service". Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called "inter-server" or "server-to-server" communication. 
In general, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the well-known application protocol, i.e. the content and the formatting of the data for the requested service. Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All client-server protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange. A server may receive requests from many distinct clients in a short period of time. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates. Encryption should be applied if sensitive information is to be communicated between the client and the server. 
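A minimal request-response exchange can be sketched with TCP sockets from the Python standard library; the port number and the trivial "uppercase" protocol below are invented for illustration:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50517   # arbitrary local port for this sketch
ready = threading.Event()

def serve_once():
    """Server role: listen, await one incoming request, apply the
    service (here: uppercasing the payload), and send the response."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # the server now awaits requests
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(request.upper())

def client_request(payload):
    """Client role: initiate the session, send a request, await the reply."""
    with socket.create_connection((HOST, PORT)) as c:
        c.sendall(payload)
        return c.recv(1024)

server = threading.Thread(target=serve_once)
server.start()
ready.wait()                             # don't connect before the server listens
response = client_request(b"hello, server")
server.join()
print(response)                          # b'HELLO, SERVER'
```

The client initiates and the server awaits, exactly as described above; a real application protocol (HTTP, SMTP) changes only what the two sides say, not the structure of the exchange.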
When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials may be stored in a database, and the web server accesses the database server as a client. An application server interprets the returned data by applying the bank's business logic, and provides the output to the web server. Finally, the web server returns the result to the client web browser for display. In each step of this sequence of client-server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer. This example illustrates a design pattern applicable to the client–server model: separation of concerns. An early form of client-server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output. While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms "server-host" (or "serving host") and "user-host" (or "using-host"), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s. One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client-server transaction. 
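The tiers in the banking example above can be sketched as plain functions, with each tier acting as a client of the tier below it. All names, data, and the "business logic" are invented for the sketch; in practice each tier would typically run on its own host:

```python
# Database tier: account balances in cents (toy data).
ACCOUNTS = {"alice": 12550}  # i.e. 125.50

def db_query(user):
    """Database server: answers queries from the application tier."""
    return ACCOUNTS.get(user)

def app_logic(user):
    """Application server: applies business rules to the raw data."""
    cents = db_query(user)
    if cents is None:
        return {"error": "no such account"}
    return {"user": user, "balance": f"{cents / 100:.2f}"}

def web_server(request):
    """Web server: turns an incoming browser request into a response."""
    return app_logic(request["user"])

# The browser (client) initiates the request; each request-response
# pair below mirrors one step of the sequence described in the text.
response = web_server({"user": "alice"})
print(response["balance"])  # prints "125.50"
```

Keeping storage, business rules, and presentation in separate functions is the separation of concerns the example illustrates.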
Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (the predecessor of the Internet). "Client-host" and "server-host" have subtly different meanings than "client" and "server". A host is any computer connected to a network. Whereas the words "server" and "client" may refer either to a computer or to a computer program, "server-host" and "user-host" always refer to computers. The host is a versatile, multifunction computer; "clients" and "servers" are just programs that run on a host. In the client-server model, a server is more likely to be devoted to the task of serving. An early use of the word "client" occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client). (By 1992, the word "server" had entered into general parlance.) The client–server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large amount of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. Such a client relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a fat client, such as a personal computer, has many resources, and does not rely on a server for essential functions. 
As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to fat clients. This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s. In addition to the client–server model, distributed computing applications often use the peer-to-peer (P2P) application architecture. In the client–server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine. In a peer-to-peer network, two or more computers ("peers") pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent, nodes in a non-hierarchical network. Unlike clients in a client–server or client–queue–client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer them. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests. 
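A toy illustration of the peer-to-peer ideas above (equipotent peers, lookup without a central server, and rerouting around an unavailable peer); the classes and data are invented for the sketch:

```python
import random

class Peer:
    """An equipotent node: every peer both shares and requests resources."""
    def __init__(self, name, resources):
        self.name = name
        self.resources = resources   # what this peer shares with the swarm
        self.online = True

def fetch(peers, key):
    """Ask any online peer that holds `key`; there is no central server,
    so the request is simply rerouted to whichever peers remain available."""
    candidates = [p for p in peers if p.online and key in p.resources]
    if not candidates:
        return None
    return random.choice(candidates).resources[key]  # crude load spreading

swarm = [
    Peer("a", {"song.mp3": b"bytes-1"}),
    Peer("b", {"song.mp3": b"bytes-1", "doc.txt": b"bytes-2"}),
    Peer("c", {"doc.txt": b"bytes-2"}),
]

swarm[0].online = False             # peer "a" drops out...
result = fetch(swarm, "song.mp3")   # ...but "b" still offers the file
print(result)  # prints b'bytes-1'
```

The redundant copy held by peer "b" is what keeps the resource available after "a" goes offline, mirroring the redundancy described in the text.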
Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.
https://en.wikipedia.org/wiki?curid=6513
County Dublin County Dublin ("Contae Átha Cliath") is one of the thirty-two traditional counties of Ireland. Prior to 1994, it was also an administrative county covering the whole county outside the city of Dublin. In 1994, as part of a reorganisation of local government within Dublin, the boundaries of Dublin City were redrawn, Dublin County Council was abolished and three new administrative county councils were established: Dún Laoghaire–Rathdown, Fingal and South Dublin. While it is no longer used as an administrative division for local government, it retains a strong identity in popular culture. It is in the province of Leinster, and is named after the city of Dublin, the capital city of Ireland. County Dublin was one of the first parts of Ireland to be shired by John, King of England, following the Norman invasion of Ireland. According to the 2016 census, the total population of County Dublin was 1,345,402, approximately 27% of the Republic of Ireland's population. The county is a NUTS 3 region, and is part of the NUTS 2 region of Eastern and Midland. There are four local authorities whose remit collectively encompasses the geographic area of the county and city of Dublin. These are Dublin City Council, South Dublin County Council, Dún Laoghaire–Rathdown County Council and Fingal County Council. Prior to the enactment of the Local Government (Dublin) Act 1993, the county was a unified whole even though it was administered by two local authorities – Dublin County Council and Dublin Corporation. Since the enactment of the Local Government Act 2001 in particular, the geographic area of the county has been divided between three entities at the level of "county" and a further entity at the level of "city". They rank equally as first-level local administrative units of the NUTS 3 Dublin Region for Eurostat purposes. There are 34 LAU 1 entities in the Republic of Ireland. 
Each local authority is responsible for certain local services such as sanitation, planning and development, libraries, the collection of motor taxation, local roads and social housing. Dublin County Council (which did not include the county borough of Dublin) was abolished in 1994 and the area divided among the administrative counties of Dún Laoghaire–Rathdown, Fingal and South Dublin, each with its own county seat. To these areas may be added the area of Dublin city; collectively they comprise the Dublin Region and come under the remit of the Dublin Regional Authority. The area lost its administrative county status in 1994, with Section 9 Part 1(a) of the "Local Government (Dublin) Act, 1993" stating that "the county shall cease to exist." In discussing the legislation to dissolve Dublin County Council, Avril Doyle TD said, "The Bill before us today effectively abolishes County Dublin, and as one born and bred in these parts of Ireland I find it rather strange that we in this House are abolishing County Dublin. I am not sure whether Dubliners realise that that is what we are about today, but in effect that is the case." The county is part of the Dublin constituency for the purposes of European elections. For elections to Dáil Éireann, the area of the county is currently (2016) divided into eleven constituencies: Dublin Bay North, Dublin Bay South, Dublin Central, Dublin Fingal, Dublin Mid-West, Dublin North-West, Dublin Rathdown, Dublin South-Central, Dublin South-West, Dublin West, and Dún Laoghaire. Together they return 44 deputies (TDs) to the Dáil. Despite the legal status of the Dublin Region, the term "County Dublin" is still in common usage. Many organisations and sporting teams continue to organise on a "County Dublin" or "Dublin Region" basis. 
The area formerly known as "County Dublin" is now defined in legislation solely as the "Dublin Region" under the "Local Government Act, 1991 (Regional Authorities) (Establishment) Order, 1993", and this is the terminology officially used by the four Dublin administrative councils in press releases concerning the former county area. The term "Greater Dublin Area", which might consist of some or all of the Dublin Region along with the counties of Kildare, Meath and Wicklow, has no legal standing. The Dublin Region is a NUTS Level III region of Ireland. The region is one of eight regions of the Republic of Ireland for the purposes of Eurostat statistics. Its NUTS code is IE061. It is co-extensive with the old county. The regional capital is Dublin City, which is also the national capital. The latest Ordnance Survey Ireland "Discovery Series" (Third Edition 2005) 1:50,000 map of the Dublin Region, Sheet 50, shows the boundaries of the city and three surrounding counties of the region. Extremities of the Dublin Region, in the north and south of the region, appear in other sheets of the series, 43 and 56 respectively. Local radio stations include 98FM, FM104, 103.2 Dublin City FM, Q102, SPIN 1038, Sunshine 106.8, TXFM, Raidió Na Life and Radio Nova. Local newspapers include "The Echo", "Northside People", "Southside People" and the "Liffey Champion". Most of the area can receive the five main UK television channels as well as the main Irish channels, along with Sky TV and Virgin Media Ireland cable television. The economy of County Dublin was identified as being the powerhouse behind the Celtic Tiger, a period of strong economic growth of the state. This resulted in the economy of the county expanding by almost 100% between the early 1990s and 2007. This growth resulted from incoming high-value industries, such as financial services and software manufacturing, as well as low-skilled retail and domestic services, which caused a shift away from older manufacturing industry. 
This change saw high unemployment in the 1980s and early 1990s, which resulted in damage to the capital's social structure. According to CSO figures, the region had a GDP of €87.238 bn and a GDP per capita of €68,208 in 2014 (the second highest was Cork at €50,544 per capita). Separately, Eurostat figures for 2012 suggested the region then had a GDP of €72.384 bn and a GDP per capita of €57,200 – the highest on the island of Ireland (the second highest being Cork with €48,500). As of early 2017, the unemployment rate for the Dublin region was estimated at 6%. County Dublin is the main transport node of Ireland, and contains one international airport, Dublin Airport. It is also served by two main seaports, Dún Laoghaire port and Dublin Port, which is located just outside the city centre. The two main train stations are Dublin Heuston and Dublin Connolly, both of which serve intercity trains. According to the 2006 census, County Dublin had a population of 1,187,176, which constitutes 30% of the national population. This was an increase of 9.5% on 2002 figures. Its population density was 1,218/km². The population of Dublin City was 506,211. The median age of the population of the county in the 2006 census was 35.6 years, with 62% of people aged between 20 and 64 years old. Net migration to the county between 2002 and 2006 was 48,000, with a natural increase of 33,000 people. There are 10,469 Irish speakers in County Dublin attending the 31 Gaelscoileanna (Irish language primary schools) and eight Gaelcholáistí (Irish language secondary schools). There may be up to another 10,000 Irish speakers from the Gaeltacht living and working in Dublin. A list of the largest urban areas (those with over 1,000 inhabitants) in County Dublin. Administrative county seats are shown in bold.
https://en.wikipedia.org/wiki?curid=6514
Cosmological argument A cosmological argument, in natural theology and natural philosophy (not cosmology), is an argument in which the existence of God is inferred from alleged facts concerning causation, explanation, change, motion, contingency, dependency, or finitude with respect to the universe or some totality of objects. It is traditionally known as an argument from universal causation, an argument from first cause, or the causal argument. Whichever term is employed, there are three basic variants of the argument, each with subtle yet important distinctions: the arguments from "in causa" (causality), "in esse" (essentiality), and "in fieri" (becoming). The basic premise of all of these is the concept of causality. The conclusion of these arguments is that a first cause exists (for whichever group of things it is being argued must have a cause or explanation), which is subsequently deemed to be God. The history of this argument goes back to Aristotle or earlier; it was developed in Neoplatonism and early Christianity, and later in medieval Islamic theology during the 9th to 12th centuries, before being re-introduced to medieval Christian theology in the 13th century by Thomas Aquinas. The cosmological argument is closely related to the principle of sufficient reason as addressed by Gottfried Leibniz and Samuel Clarke, itself a modern exposition of the claim that "nothing comes from nothing" attributed to Parmenides. Contemporary defenders of cosmological arguments include William Lane Craig, Robert Koons, Alexander Pruss, and William L. Rowe. Plato (c. 427–347 BC) and Aristotle (c. 384–322 BC) both posited first cause arguments, though each had certain notable caveats. In "The Laws" (Book X), Plato posited that all movement in the world and the Cosmos was "imparted motion". This required a "self-originated motion" to set it in motion and to maintain it. In "Timaeus", Plato posited a "demiurge" of supreme wisdom and intelligence as the creator of the Cosmos. 
Aristotle argued "against" the idea of a first cause, often confused with the idea of a "prime mover" or "unmoved mover" ("primus motor") in his "Physics" and "Metaphysics". Aristotle argued in "favor" of the idea of several unmoved movers, one powering each celestial sphere, which he believed lived beyond the sphere of the fixed stars, and explained why motion in the universe (which he believed was eternal) had continued for an infinite period of time. Aristotle argued that the atomists' assertion of a non-eternal universe would require a first uncaused cause – in his terminology, an efficient first cause – an idea he considered a nonsensical flaw in the reasoning of the atomists. Like Plato, Aristotle believed in an eternal cosmos with no beginning and no end (which in turn follows Parmenides' famous statement that "nothing comes from nothing"). In what he called "first philosophy" or metaphysics, Aristotle "did" intend a theological correspondence between the prime mover and deity (presumably Zeus); functionally, however, he provided an explanation for the apparent motion of the "fixed stars" (now understood as the daily rotation of the Earth). According to his theses, immaterial unmoved movers are eternal unchangeable beings that constantly think about thinking, but being immaterial, they are incapable of interacting with the cosmos and have no knowledge of what transpires therein. From an "aspiration or desire", the celestial spheres "imitate" that purely intellectual activity as best they can, by uniform circular motion. The unmoved movers "inspiring" the planetary spheres are no different in kind from the prime mover; they merely suffer a dependency of relation to the prime mover. Correspondingly, the motions of the planets are subordinate to the motion inspired by the prime mover in the sphere of fixed stars. 
Aristotle's natural theology admitted no creation or capriciousness from the immortal pantheon, but maintained a defense against dangerous charges of impiety. Plotinus, a third-century Platonist, taught that the One transcendent absolute caused the universe to exist simply as a consequence of its existence ("creatio ex deo"). His disciple Proclus stated "The One is God". Centuries later, the Islamic philosopher Avicenna (c. 980–1037) inquired into the question of being, in which he distinguished between essence ("Mahiat") and existence ("Wujud"). He argued that the fact of existence could not be inferred from or accounted for by the essence of existing things, and that form and matter by themselves could not originate and interact with the movement of the Universe or the progressive actualization of existing things. Thus, he reasoned that existence must be due to an agent cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must coexist with its effect and be an existing thing. Steven Duncan writes that it "was first formulated by a Greek-speaking Syriac Christian neo-Platonist, John Philoponus, who claims to find a contradiction between the Greek pagan insistence on the eternity of the world and the Aristotelian rejection of the existence of any actual infinite". Referring to the argument as the "'Kalam' cosmological argument", Duncan asserts that it "received its fullest articulation at the hands of [medieval] Muslim and Jewish exponents of "Kalam"" ("the use of reason by believers to justify the basic metaphysical presuppositions of the faith"). Thomas Aquinas (c. 1225–1274) adapted and enhanced the argument he found in his reading of Aristotle and Avicenna to form one of the most influential versions of the cosmological argument. 
His conception of the First Cause was the idea that the Universe must be caused by something that is itself uncaused, which he claimed is that which we call God. Importantly, Aquinas' Five Ways, given in the second question of his Summa Theologica, are not the entirety of Aquinas' demonstration that the Christian God exists. The Five Ways form only the beginning of Aquinas' Treatise on the Divine Nature. In the scholastic era, Aquinas formulated the "argument from contingency", following Aristotle in claiming that there must be something to explain why the Universe exists. Since the Universe could, under different circumstances, conceivably "not" exist (contingency), its existence must have a cause – not merely another contingent thing, but something that exists by necessity (something that "must" exist in order for anything else to exist). In other words, even if the Universe has always existed, it still owes its existence to an uncaused cause; as Aquinas further said: "... and this we understand to be God." Aquinas's argument from contingency allows for the possibility of a Universe that has no beginning in time. It is a form of argument from universal causation. Aquinas observed that, in nature, there were things with contingent existences. Since it is possible for such things not to exist, there must be some time at which these things did not in fact exist. Thus, according to Aquinas, there must have been a time when nothing existed. If this is so, there would exist nothing that could bring anything into existence. Contingent beings, therefore, are insufficient to account for the existence of contingent beings: there must exist a "necessary" being whose non-existence is an impossibility, and from which the existence of all contingent beings is derived. The German philosopher Gottfried Leibniz made a similar argument with his principle of sufficient reason in 1714. 
"There can be found no fact that is true or existent, or any true proposition," he wrote, "without there being a sufficient reason for its being so and not otherwise, although we cannot know these reasons in most cases." He formulated the cosmological argument succinctly: "Why is there something rather than nothing? The sufficient reason ... is found in a substance which ... is a necessary being bearing the reason for its existence within itself." Leibniz's argument from contingency is one of the most popular cosmological arguments in philosophy of religion. It attempts to prove the existence of a necessary being and infer that this being is God. Alexander Pruss formulates the argument with numbered premises. Premise 1 is a form of the principle of sufficient reason stating that all contingently true propositions are explained. This is one of the several variants of the PSR which differ in strength, scope, and modal implications. Premise 2 refers to what is known as the Big Conjunctive Contingent Fact (abbreviated BCCF) in philosophy of religion. The BCCF is generally taken to be the totality of all contingent beings or the logical conjunction of all contingent facts. The approach of the argument is that since a contingent fact cannot explain the BCCF, a fact involving a necessary object must be its explanation. Statement 5, which is either seen as a premise or a conclusion, infers that the necessary being which explains the totality of contingent facts is God. In academic literature, several philosophers of religion such as Joshua Rasmussen and T. Ryan Byerly have argued for the inference from (4) to (5). Modern Islamic theologians have revived this argument in books (e.g. "Kalam Cosmological Arguments" and "The Divine Reality"). Attributes such as oneness, indivisibility and an eternal nature of the necessary existence have been inferred from the contingency argument by philosophers to further support the Islamic conception of God. 
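The numbered premises discussed above (the PSR as premise 1, the BCCF as premise 2, and the identification of the necessary being with God as statement 5) are commonly reconstructed along the following lines; this is a paraphrase for orientation, not Pruss's exact wording:

```latex
\begin{enumerate}
  \item Every contingent fact has an explanation. \quad (a version of the PSR)
  \item There is a contingent fact that includes all other contingent facts
        (the Big Conjunctive Contingent Fact, BCCF).
  \item Therefore, the BCCF has an explanation. \quad (from 1 and 2)
  \item This explanation must involve a necessary being, since no contingent
        fact can explain the BCCF. \quad (from 3)
  \item This necessary being is God. \quad (from 4)
\end{enumerate}
```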
The difference between the arguments from causation "in fieri" and "in esse" is a fairly important one. "In fieri" is generally translated as "becoming", while "in esse" is generally translated as "in essence". "In fieri", the process of becoming, is similar to building a house. Once it is built, the builder walks away, and it stands of its own accord; compare the watchmaker analogy. (It may require occasional maintenance, but that is beyond the scope of the first cause argument.) "In esse" (essence) is more akin to the light from a candle or the liquid in a vessel. George Hayward Joyce, SJ, explained that, "where the light of the candle is dependent on the candle's continued existence, not only does a candle produce light in a room in the first instance, but its continued presence is necessary if the illumination is to continue. If it is removed, the light ceases. Again, a liquid receives its shape from the vessel in which it is contained; but were the pressure of the containing sides withdrawn, it would not retain its form for an instant." This form of the argument is far more difficult to separate from a purely first cause argument than is the example of the house's maintenance above, because here the First Cause is insufficient without the candle's or vessel's continued existence. Thus, Leibniz's argument is "in fieri", while Aquinas' argument is both "in fieri" and "in esse". This distinction is an excellent example of the difference between a deistic view (Leibniz) and a theistic view (Aquinas). As a general trend, the modern slants on the cosmological argument, including the Kalam cosmological argument, tend to lean very strongly towards an "in fieri" argument. The philosopher Robert Koons has stated a new variant of the cosmological argument. 
He says that to deny causation is to deny all empirical ideas – for example, if we know our own hand, we know it because of a chain of causes including light being reflected into our eyes, stimulating the retina and sending a message through the optic nerve to the brain. He summarised the purpose of the argument as "that if you don't buy into theistic metaphysics, you're undermining empirical science. The two grew up together historically and are culturally and philosophically inter-dependent ... If you say I just don't buy this causality principle – that's going to be a big big problem for empirical science." This "in fieri" version of the argument therefore does not intend to prove God, but only to disprove objections involving science, and the idea that contemporary knowledge disproves the cosmological argument. William Lane Craig gives a general form of this argument. Craig explains that, by the nature of the event (the Universe coming into existence), attributes unique to (the concept of) God must also be attributed to the cause of this event, including but not limited to: enormous power (if not omnipotence), being the creator of the Heavens and the Earth (as God is according to the Christian understanding of God), being eternal and being absolutely self-sufficient. Since these attributes are unique to God, anything with these attributes must be God. Something does have these attributes: the cause; hence, the cause is God, and the cause exists; hence, God exists. Craig defends the second premise, that the Universe had a beginning, starting with Al-Ghazali's proof that an actual infinite is impossible. If the universe never had a beginning, then there would be an actual infinite: an infinite number of cause and effect events. Hence, the Universe had a beginning. Duns Scotus, the influential Medieval Christian theologian, created a metaphysical argument for the existence of God. 
Though it was inspired by Aquinas' argument from motion, he, like other philosophers and theologians, believed that his statement for God's existence could be considered separate from Aquinas'. His explanation for God's existence is long, and can only be summarised here. Scotus deals immediately with two objections he can see: first, that there cannot be a first, and second, that the argument falls apart when its first step is questioned. He states that infinite regress is impossible, because it provokes unanswerable questions, like, in modern English, "What is infinity minus infinity?" The second he states can be answered if the question is rephrased using modal logic, meaning that the first statement is instead "It is possible that something can be produced." One objection to the argument is that it leaves open the question of why the First Cause is unique in that it does not require any causes. Proponents argue that the First Cause is exempt from having a cause, while opponents argue that this is special pleading or otherwise untrue. Critics often press that arguing for the First Cause's exemption raises the question of why the First Cause is indeed exempt, whereas defenders maintain that this question has been answered by the various arguments, emphasizing that none of its major forms rests on the premise that everything has a cause. William Lane Craig, who famously uses the Kalam cosmological argument, argues that the infinite is impossible, whichever perspective the viewer takes, and so there must always have been one unmoved thing to begin the universe. He uses Hilbert's paradox of the Grand Hotel and the question "What is infinity minus infinity?" to illustrate the idea that the infinite is metaphysically, mathematically, and even conceptually, impossible. 
Other reasons include the fact that it is impossible to count down from infinity, and that, had the universe existed for an infinite amount of time, every possible event, including the final end of the universe, would already have occurred. He therefore states his argument in three points: firstly, everything that begins to exist has a cause of its existence; secondly, the universe began to exist; thirdly, therefore, the universe has a cause of its existence. A response to this argument would be that the cause of the universe's existence (God) would need a cause for its existence; this, in turn, could be answered as being logically inconsistent with the evidence already presented: even if God did have a cause, there would still necessarily be a cause which began everything, owing to the impossibility of the infinite stated by Craig. Secondly, it is argued that the premise of causality has been arrived at via "a posteriori" (inductive) reasoning, which is dependent on experience. David Hume highlighted this problem of induction and argued that causal relations were not true "a priori". However, whether inductive or deductive reasoning is more valuable remains a matter of debate, with the general conclusion being that neither is demonstrably superior. Opponents of the argument tend to argue that it is unwise to draw conclusions from an extrapolation of causality beyond experience. Andrew Loke replies that, according to the Kalam Cosmological Argument, only things which begin to exist require a cause. On the other hand, something that is without beginning has always existed and therefore does not require a cause. The Cosmological Argument posits that there cannot be an actual infinite regress of causes, therefore there must be an uncaused First Cause that is beginningless and does not require a cause. 
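Craig's three points, as restated in the passage above, form a standard syllogism:

```latex
\begin{enumerate}
  \item Everything that begins to exist has a cause of its existence.
  \item The universe began to exist.
  \item Therefore, the universe has a cause of its existence.
        \quad (from 1 and 2)
\end{enumerate}
```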
The basic cosmological argument merely establishes that a First Cause exists, not that it has the attributes of a theistic god, such as omniscience, omnipotence, and omnibenevolence. This is why the argument is often expanded to show that at least some of these attributes are necessarily true, for instance in the modern Kalam argument given above. A causal loop is a form of predestination paradox arising where traveling backwards in time is deemed a possibility. A sufficiently powerful entity in such a world would have the capacity to travel backwards in time to a point before its own existence, and to then create itself, thereby initiating everything which follows from it. The usual reason given to refute the possibility of a causal loop is that it requires the loop as a whole to be its own cause. Richard Hanley argues that causal loops are not logically, physically, or epistemically impossible: "[In timed systems,] the only possibly objectionable feature that all causal loops share is that coincidence is required to explain them." However, Andrew Loke argues that a causal loop of the type that is supposed to avoid a First Cause suffers from the problem of vicious circularity and thus would not work. David Hume and later Paul Edwards have invoked a similar principle in their criticisms of the cosmological argument, which Alexander Pruss has called the Hume-Edwards principle. Nevertheless, David White argues that the notion of an infinite causal regress providing a proper explanation is fallacious. Furthermore, in Hume's Dialogues Concerning Natural Religion, the character Demea states that even if the succession of causes is infinite, the whole chain still requires a cause. To explain this, suppose there exists a causal chain of infinite contingent beings. If one asks the question, "Why are there any contingent beings at all?", it does not help to be told that "There are contingent beings because other contingent beings caused them." 
That answer would just presuppose additional contingent beings. An adequate explanation of why some contingent beings exist would invoke a different sort of being, a necessary being that is "not" contingent. A response might suppose that each individual being is contingent but the infinite chain as a whole is not, or that the whole infinite causal chain is its own cause. Severinsen argues that there is an "infinite" and complex causal structure. White tried to introduce an argument "without appeal to the principle of sufficient reason and without denying the possibility of an infinite causal regress". A number of other arguments have been offered to demonstrate that an actual infinite regress cannot exist, viz. the argument for the impossibility of concrete actual infinities, the argument for the impossibility of traversing an actual infinite, the argument from the lack of capacity to begin to exist, and various arguments from paradoxes. Some cosmologists and physicists argue that a challenge to the cosmological argument is the nature of time: "One finds that time just disappears from the Wheeler–DeWitt equation" (Carlo Rovelli). The Big Bang theory holds that the Big Bang is the point at which all dimensions came into existence, the start of both space and time. On this view, the question "What was there before the Universe?" makes no sense; the concept of "before" becomes meaningless when considering a situation without time. This has been put forward by J. Richard Gott III, James E. Gunn, David N. Schramm, and Beatrice Tinsley, who said that asking what occurred before the Big Bang is like asking what is north of the North Pole. However, some cosmologists and physicists do attempt to investigate causes for the Big Bang, using such scenarios as the collision of membranes. Philosopher Edward Feser states that classical philosophers' arguments for the existence of God do not care about the Big Bang or whether the universe had a beginning.
The question is not about what got things started or how long they have been going, but rather what keeps them going. There is also a Big Bang argument, a variation of the cosmological argument that uses the Big Bang theory to support the premise that the universe had a beginning.
https://en.wikipedia.org/wiki?curid=6516
Clutch A clutch is a mechanical device that engages and disengages power transmission, especially between a driving shaft and a driven shaft. In the simplest application, clutches connect and disconnect two rotating shafts (drive shafts or line shafts). In these devices, one shaft is typically attached to an engine or other power unit (the driving member) while the other shaft (the driven member) provides output power for work. While typically the motions involved are rotary, linear clutches are also possible. In a torque-controlled drill, for instance, one shaft is driven by a motor and the other drives a drill chuck. The clutch connects the two shafts so they may be locked together and spin at the same speed (engaged), locked together but spinning at different speeds (slipping), or unlocked and spinning at different speeds (disengaged). The vast majority of clutches ultimately rely on frictional forces for their operation. The purpose of friction clutches is to connect a moving member to another that is moving at a different speed or stationary, often to synchronize the speeds, and/or to transmit power. Usually, as little slippage (difference in speeds) as possible between the two members is desired. Various materials have been used for the disc-friction facings, including asbestos in the past. Modern clutches typically use a compound organic resin with copper wire facing or a ceramic material. Ceramic materials are typically used in heavy applications such as racing or heavy-duty hauling, though the harder ceramic materials increase flywheel and pressure plate wear. In the case of "wet" clutches, composite paper materials are very common. Since these "wet" clutches typically use an oil bath or flow-through cooling method for keeping the disc pack lubricated and cooled, very little wear is seen when using composite paper materials. Friction-disc clutches generally are classified as "push type" or "pull type" depending on the location of the pressure plate fulcrum points.
In a pull-type clutch, the action of pressing the pedal pulls the release bearing, pulling on the diaphragm spring and disengaging the vehicle drive. The opposite is true with a push type: the release bearing is pushed into the clutch, disengaging the vehicle drive. In this instance, the release bearing can be known as a thrust bearing (as per the image above). A clutch damper is a device that softens the response of the clutch engagement/disengagement. In automotive applications, this is often provided by a mechanism in the clutch disc centres. In addition to the damped disc centres, which reduce driveline vibration, pre-dampers may be used to reduce gear rattle at idle by changing the natural frequency of the disc. These weaker springs are compressed solely by the radial vibrations of an idling engine. They are fully compressed and no longer in use once the main damper springs take up drive. Mercedes truck examples: a clamp load of 33 kN is normal for a single-plate 430. The 400 Twin application offers a clamp load of a mere 23 kN. Burst speeds are typically around 5,000 rpm, with the weakest point being the facing rivet. Modern clutch development focuses on the simplification of the overall assembly and/or manufacturing method. For example, drive straps are now commonly employed to transfer torque as well as lift the pressure plate upon disengagement of vehicle drive. With regard to the manufacture of diaphragm springs, heat treatment is crucial. Laser welding is becoming more common as a method of attaching the drive plate to the disc ring, with the laser typically rated between 2 and 3 kW and a feed rate of 1 m/minute. A multiplate clutch has several driving members interleaved or "stacked" with several driven members. It is used in racing cars including Formula 1, IndyCar, World Rally and even most club racing.
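A clamp load translates into torque capacity through the standard uniform-wear friction-clutch relation, T = n · μ · F · r_mean, where n is the number of friction surfaces, μ the friction coefficient, F the clamp load, and r_mean the mean facing radius. The sketch below applies this to the 33 kN clamp load quoted above; the friction coefficient and the facing radii are illustrative assumptions, not manufacturer data.

```python
# Approximate friction-clutch torque capacity, T = n * mu * F * r_mean.
# The 33 kN clamp load comes from the text; mu and the radii are assumed.

def clutch_torque(n_surfaces: int, mu: float, clamp_n: float,
                  r_outer_m: float, r_inner_m: float) -> float:
    """Torque capacity in N*m under the uniform-wear assumption."""
    r_mean = (r_outer_m + r_inner_m) / 2
    return n_surfaces * mu * clamp_n * r_mean

# Single-plate clutch: two friction surfaces (both sides of the disc).
t = clutch_torque(n_surfaces=2, mu=0.3, clamp_n=33_000,
                  r_outer_m=0.215, r_inner_m=0.125)
print(round(t))  # roughly 3.4 kN*m with these assumed values
```

The same relation shows why multiplate designs work: doubling the number of friction surfaces doubles the capacity without enlarging the disc.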
Multiplate clutches see much use in drag racing, which requires the best acceleration possible, and is notorious for the abuse the clutch is subjected to. Thus, they can be found in motorcycles, in automatic transmissions and in some diesel locomotives with mechanical transmissions. They are also used in some electronically controlled all-wheel drive systems as well as in some transfer cases. They can also be found in some heavy machinery such as tanks and AFVs (T-54) and earthmoving equipment (front-end loaders, bulldozers), as well as components in certain types of limited slip differentials. The benefit in the case of motorsports is that it is possible to achieve the same total friction force with a much smaller overall diameter (or conversely, a much greater friction force for the same diameter, important in cases where a vehicle is modified with greater power, yet the maximum physical size of the clutch unit is constrained by the clutch housing). In motorsports vehicles that run at high engine/drivetrain speeds, the smaller diameter reduces rotational inertia, making the drivetrain components accelerate more rapidly, as well as reducing the velocity of the outer areas of the clutch unit, which could become highly stressed and fail at the extremely high drivetrain rotational rates achieved in sports such as Formula 1 or drag racing. In the case of heavy equipment, which often deals with very high torque forces and drivetrain loads, a single-plate clutch of the necessary strength would be too large to easily package as a component of the driveline. A different variation on the multiplate clutch is found in the fastest classes of drag racing, in highly specialized, purpose-built cars such as Top Fuel dragsters or Funny Cars. These cars are so powerful that to attempt a start with a simple clutch would result in complete loss of traction.
To avoid this problem, Top Fuel cars actually use a single, fixed gear ratio, and a "series" of clutches that are engaged one at a time, rather than in unison, progressively allowing more power to the wheels. A single one of these clutch plates (as designed) cannot hold more than a fraction of the power of the engine, so the driver starts with only the first clutch engaged. This clutch is overwhelmed by the power of the engine, allowing only a fraction of the power to the wheels, much like "slipping the clutch" in a slower car, but working without requiring concentration from the driver. As speed builds, the driver pulls a lever, which engages a second clutch, sending a bit more of the engine power to the wheels, and so on. This continues through several clutches until the car has reached a speed where the last clutch can be engaged. With all clutches engaged, the engine is now sending all of its power to the rear wheels. This is far more predictable and repeatable than the driver manually slipping the clutch himself and then shifting through the gears, given the extreme violence of the run and the speed at which it all unfolds. Another benefit is that there is no need to break the power flow in order to swap gears (a conventional manual cannot transmit power while between gears, which matters because hundredths of a second are significant in Top Fuel races). A traditional multiplate clutch would be more prone to overheating and failure, as all the plates must be subjected to heat and friction together until the clutch is fully engaged, while a Top Fuel car keeps its last clutches in "reserve" until the car's speed allows full engagement. It is relatively easy to design the last stages to be much more powerful than the first, in order to ensure they can absorb the power of the engine even if the first clutches burn out or overheat from the extreme friction.
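The staged engagement described above can be sketched as a toy model: each stage adds torque capacity, and the torque reaching the wheels is capped by the combined capacity of the stages engaged so far, up to full engine output. All figures here are hypothetical placeholders, not real dragster data.

```python
# Toy model of staged (Top Fuel-style) clutch engagement. Torque to the wheels
# is limited by the total capacity of the engaged stages until full lock-up.

ENGINE_TORQUE = 10_000  # N*m, hypothetical engine output
STAGE_CAPACITIES = [2_000, 2_000, 3_000, 4_000]  # N*m; later stages stronger

def torque_to_wheels(stages_engaged: int) -> int:
    """Torque transmitted with the first `stages_engaged` stages locked."""
    capacity = sum(STAGE_CAPACITIES[:stages_engaged])
    return min(ENGINE_TORQUE, capacity)

for n in range(1, len(STAGE_CAPACITIES) + 1):
    print(f"{n} stage(s) engaged: {torque_to_wheels(n)} N*m to the wheels")
```

With all four stages engaged, total capacity (11,000 N·m in this sketch) exceeds engine torque, so the pack transmits full power without slipping.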
A "wet clutch" is immersed in a cooling lubricating fluid that also keeps surfaces clean and provides smoother performance and longer life. Wet clutches, however, tend to lose some energy to the liquid. Since the surfaces of a wet clutch can be slippery (as with a motorcycle clutch bathed in engine oil), stacking multiple clutch discs can compensate for the lower coefficient of friction and so eliminate slippage under power when fully engaged. The Hele-Shaw clutch was a wet clutch that relied entirely on viscous effects, rather than on friction. A "dry clutch", as the name implies, is not bathed in liquid and uses friction to engage. A centrifugal clutch is used in some vehicles (e.g., mopeds) and also in other applications where the speed of the engine defines the state of the clutch, for example, in a chainsaw. This clutch system employs centrifugal force to automatically engage the clutch when the engine rpm rises above a threshold and to automatically disengage the clutch when the engine rpm falls low enough. See Saxomat and Variomatic. As the name implies, a cone clutch has conical friction surfaces. The cone's taper means that a given amount of movement of the actuator makes the surfaces approach (or recede) much more slowly than in a disc clutch. In addition, a given amount of actuating force creates more pressure on the mating surfaces. The best known example of a cone clutch is a synchronizer ring in a manual transmission. The synchronizer ring is responsible for "synchronizing" the speeds of the shift hub and the gear wheel to ensure a smooth gear change. A torque limiter, also known as a slip clutch or "safety clutch", allows a rotating shaft to slip when higher than normal resistance is encountered on a machine. An example of a safety clutch is the one mounted on the driving shaft of a large grass mower.
The clutch yields if the blades hit a rock, stump, or other immobile object, thus avoiding a potentially damaging torque transfer to the engine, possibly twisting or fracturing the crankshaft. Motor-driven mechanical calculators had these between the drive motor and gear train, to limit damage when the mechanism jammed, as motors used in such calculators had high stall torque and were capable of causing damage to the mechanism if torque was not limited. Carefully designed clutches, such as those in controlled-torque screwdrivers, slip when the set limit is reached while continuing to transmit the maximum permitted torque. Some clutches are designed not to slip: to avoid catastrophic damage, they are operated only fully engaged or fully disengaged. An example of this is the dog clutch, most commonly used in non-synchromesh transmissions. There are multiple designs of vehicle clutch, but most are based on one or more friction discs pressed tightly together or against a flywheel using springs. The friction material varies in composition depending on many considerations such as whether the clutch is "dry" or "wet". Friction discs once contained asbestos, but this material has been largely discontinued. Clutches found in heavy duty applications such as trucks and competition cars use ceramic plates that have a greatly increased friction coefficient. However, these have a "grabby" action generally considered unsuitable for passenger cars. The spring pressure is released when the clutch pedal is depressed, thus either pushing or pulling the diaphragm of the pressure plate, depending on type. Raising the engine speed too high while engaging the clutch causes excessive clutch plate wear. Engaging the clutch abruptly when the engine is turning at high speed causes a harsh, jerky start. This kind of start is necessary and desirable in drag racing and other competitions, where speed is more important than comfort.
In a modern car with a manual transmission the clutch is operated by the left-most pedal using a hydraulic or cable connection from the pedal to the clutch mechanism. On older cars the clutch might be operated by a mechanical linkage. Even though the clutch may physically be located very close to the pedal, such remote means of actuation are necessary to eliminate the effect of vibrations and slight engine movement, engine mountings being flexible by design. With a rigid mechanical linkage, smooth engagement would be near-impossible because engine movement inevitably occurs as the drive is "taken up." The default state of the clutch is "engaged": the connection between engine and gearbox is always "on" unless the driver presses the pedal and disengages it. If the engine is running with the clutch engaged and the transmission in neutral, the engine spins the input shaft of the transmission but power is not transmitted to the wheels. The clutch is located between the engine and the gearbox, as disengaging it is usually required to change gear. Although the gearbox does not stop rotating during a gear change, there is no torque transmitted through it, thus less friction between gears and their engagement dogs. The output shaft of the gearbox is permanently connected to the final drive and then the wheels, and so both always rotate together, at a fixed speed ratio. With the clutch disengaged, the gearbox input shaft is free to change its speed as the internal ratio is changed. Any resulting difference in speed between the engine and gearbox is evened out as the clutch slips slightly during re-engagement. Clutches in typical cars are mounted directly to the face of the engine's flywheel, as this already provides a convenient large diameter steel disk that can act as one driving plate of the clutch. Some racing clutches use small multi-plate disk packs that are not part of the flywheel.
Both clutch and flywheel are enclosed in a conical bellhousing, which (in a rear-wheel drive car) usually forms the main mounting for the gearbox. A few cars, notably the Alfa Romeo Alfetta and 75, Porsche 924, and Chevrolet Corvette (since 1997), sought a more even weight distribution between front and back by placing the weight of the transmission at the rear of the car, combined with the rear axle to form a transaxle. The clutch was mounted with the transaxle and so the propeller shaft rotated continuously with the engine, even when in neutral gear or declutched. Motorcycles typically employ a wet clutch with the clutch riding in the same oil as the transmission. These clutches are usually made up of a stack of alternating friction plates and steel plates. The friction plates have lugs on their outer diameters that lock them to a basket that is turned by the crankshaft. The steel plates have lugs on their inner diameters that lock them to the transmission input shaft. A set of coil springs or a diaphragm spring plate force the plates together when the clutch is engaged. On motorcycles the clutch is operated by a hand lever on the left handlebar. No pressure on the lever means that the clutch plates are engaged (driving), while pulling the lever back towards the rider disengages the clutch plates through cable or hydraulic actuation, allowing the rider to shift gears or coast. Racing motorcycles often use slipper clutches to eliminate the effects of engine braking, which, being applied only to the rear wheel, can cause instability. Cars use clutches in places other than the drive train. For example, a belt-driven engine cooling fan may have a heat-activated clutch. The driving and driven members are separated by a silicone-based fluid and a valve controlled by a bimetallic spring. When the temperature is low, the spring winds and closes the valve, which lets the fan spin at about 20% to 30% of the shaft speed. 
As the temperature of the spring rises, it unwinds and opens the valve, allowing fluid past the valve and making the fan spin at about 60% to 90% of shaft speed. Other clutches, such as those for an air conditioning compressor, engage electromagnetically, using magnetic force to couple the driving member to the driven member. Single-revolution clutches were developed in the 19th century to power machinery such as shears or presses where a single pull of the operating lever or (later) press of a button would trip the mechanism, engaging the clutch between the power source and the machine's crankshaft for exactly one revolution before disengaging the clutch. When the clutch is disengaged, the driven member is stationary. Early designs were typically dog clutches with a cam on the driven member used to disengage the dogs at the appropriate point. Greatly simplified single-revolution clutches were developed in the 20th century, requiring much smaller operating forces and, in some variations, allowing for a fixed fraction of a revolution per operation. Fast action friction clutches replaced dog clutches in some applications, eliminating the problem of impact loading on the dogs every time the clutch engaged. In addition to their use in heavy manufacturing equipment, single-revolution clutches were applied to numerous small machines. In tabulating machines, for example, pressing the operate key would trip a single revolution clutch to process the most recently entered number. In typesetting machines, pressing any key selected a particular character and also engaged a single rotation clutch to cycle the mechanism to typeset that character. Similarly, in teleprinters, the receipt of each character tripped a single-revolution clutch to operate one cycle of the print mechanism. In 1928, Frederick G. Creed developed a single-turn spring clutch (see above) that was particularly well suited to the repetitive start-stop action required in teleprinters.
In 1942, two employees of Pitney Bowes Postage Meter Company developed an improved single turn spring clutch. In these clutches, a coil spring is wrapped around the driven shaft and held in an expanded configuration by the trip lever. When tripped, the spring rapidly contracts around the power shaft engaging the clutch. At the end of one revolution, if the trip lever has been reset, it catches the end of the spring (or a pawl attached to it) and the angular momentum of the driven member releases the tension on the spring. These clutches have long operating lives—many have performed tens and perhaps hundreds of millions of cycles without need of maintenance other than occasional lubrication. A later design, built on the same principles, superseded wrap-spring single-revolution clutches in page printers such as teleprinters, including the Teletype Model 28 and its successors; IBM Selectric typewriters also used them. These clutches are typically disc-shaped assemblies mounted on the driven shaft. Inside the hollow disc-shaped drive drum are two or three freely floating pawls arranged so that when the clutch is tripped, the pawls spring outward much like the shoes in a drum brake. When engaged, the load torque on each pawl transfers to the others to keep them engaged. These clutches do not slip once locked up, and they engage very quickly, on the order of milliseconds. A trip projection extends out from the assembly. When the trip lever engages this projection, the clutch is disengaged. When the trip lever releases this projection, internal springs and friction engage the clutch. The clutch then rotates one or more turns, stopping when the trip lever again engages the trip projection. These mechanisms were found in some types of synchronous-motor-driven electric clocks. Many different types of synchronous clock motors were used, including the pre-World War II Hammond manual-start clocks.
Some types of self-starting synchronous motors always started when power was applied, but in detail, their behaviour was chaotic and they were equally likely to start rotating in the wrong direction. Coupled to the rotor by one (or possibly two) stages of reduction gearing was a wrap-spring clutch-brake. The spring did not rotate. One end was fixed; the other was free. It rode freely but closely on the rotating member, part of the clock's gear train. The clutch-brake locked up when rotated backwards, but also had some spring action. The inertia of the rotor going backwards engaged the clutch and wound the spring. As it unwound, it restarted the motor in the correct direction. Some designs had no explicit spring as such, but were simply compliant mechanisms. The mechanism was lubricated and wear did not present a problem. A lock-up clutch is used in some automatic transmissions for motor vehicles. Above a certain speed (usually 60 km/h) it locks the torque converter to minimise power loss and improve fuel efficiency.
https://en.wikipedia.org/wiki?curid=6517
Cow tipping Cow tipping is the purported activity of sneaking up on an unsuspecting or sleeping upright cow and pushing it over for entertainment. The practice of cow tipping is generally considered an urban legend, and stories of such feats are viewed as tall tales. The implication that rural citizens seek such entertainment due to lack of alternatives is viewed as a stereotype. The concept of cow tipping apparently developed in the 1970s, though tales of animals that cannot rise if they fall have historical antecedents dating to the Roman Empire. Cows routinely lie down and can easily regain their footing unless sick or injured. Scientific studies have been conducted to determine if cow tipping is theoretically possible, with varying conclusions. All agree that cows are large animals that are difficult to surprise and will generally resist attempts to be tipped. Estimates suggest a force of between roughly 3,000 and 4,000 newtons would be needed, and that at least four and possibly as many as fourteen people would be required to achieve this. In real-life situations where cattle have to be laid on the ground, or "cast", such as for branding, hoof care or veterinary treatment, either rope restraints are required or specialized mechanical equipment is used that confines the cow and then tips it over. On rare occasions, cattle can lie down or fall down in proximity to a ditch or hill that restricts their normal ability to rise without help. Cow tipping has many references in popular culture and is also used as a figure of speech. The urban legend of cow tipping relies upon the presumption that cattle are slow-moving, dim-witted, and weak-legged, thus easily pushed over without much force. Some versions suggest that because cows sleep standing up, it is possible to approach them and push them over without the animals reacting. However, cows only sleep lightly while standing up, and they are easily awakened. They lie down to sleep deeply.
Furthermore, numerous sources have questioned the practice's feasibility, since most cows weigh over half a ton and easily resist any lesser force. A 2005 study led by Margo Lillie, a zoologist at the University of British Columbia, and her student Tracy Boechler, concluded that tipping a cow would require a force of nearly 3,000 newtons and is therefore impossible for a single person to accomplish. Her calculations found that it would require more than four people to apply enough force to push over a cow, based on an estimate of the force a single person could exert. However, since a cow can brace itself, Lillie and Boechler suggested that five or six people would most likely be needed. Further, cattle are well aware of their surroundings and are very difficult to surprise, due to excellent senses of both smell and hearing. Lillie and Boechler's analysis found that if a cow did not move, the principles of static physics suggest that two people might be able to tip a cow if its centre of mass were pushed over its hooves before the cow could react. However, cows are not rigid or unresponsive, and the faster humans have to move, the less force they can exert. Thus Lillie and Boechler concluded that it is unlikely that cows can actually be tipped over in this way. Lillie stated, "It just makes the physics of it all, in my opinion, impossible." Although he agrees that it would take a force of about 3,000 newtons to push over a standing cow, biologist Steven Vogel thinks that the study by Lillie and Boechler overestimates the pushing ability of an individual human. Using data from Cotterell and Kamminga, who estimated that humans exert a pushing force of 280 newtons, Vogel suggests that someone applying force at the requisite height to topple a cow might generate a maximum push of no more than 300 newtons. By this calculation, at least 10 people would be needed to tip over a non-reacting cow.
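Vogel's head-count arithmetic follows directly from dividing the required force by the per-person push and rounding up. The figures below (about 3,000 N for a standing cow, about 4,000 N for a braced one, and a ~300 N maximum push per person) are the estimates quoted in the studies discussed above.

```python
# Arithmetic behind the "how many people to tip a cow" estimates.
import math

def pushers_needed(required_force_n: float, push_per_person_n: float) -> int:
    """People needed: required force divided by per-person push, rounded up."""
    return math.ceil(required_force_n / push_per_person_n)

print(pushers_needed(3000, 300))  # standing, non-reacting cow: 10 people
print(pushers_needed(4000, 300))  # cow that braces (widens its stance): 14
```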
However, this combined force requirement, he says, might not be the greatest impediment to such a prank. Standing cows are not asleep and like other animals have ever-vigilant reflexes. "If the cow does no more than modestly widen its stance without an overall shift of its center of gravity", he says, "about 4,000 newtons or 14 pushers would be needed—quite a challenge to deploy without angering the cow." The belief that certain animals cannot rise if pushed over has historical antecedents, though cattle have never been so classified. Julius Caesar recorded a belief that a European elk had no knee joints and could not get up if it fell. Pliny said the same about the hind legs of an animal he called the achlis, which Pliny's 19th-century translators Bostock and Riley said was merely another name for the elk. They also noted that Pliny's belief about the jointless back legs of the achlis (elk) was false. In 1255, Louis IX of France gave an elephant to Henry III of England for his menagerie in the Tower of London. A drawing by the historian Matthew Paris for his "Chronica Majora" can be seen in his bestiary at Parker Library of Corpus Christi College, Cambridge. An accompanying text cites elephant lore suggesting that elephants did not have knees and were unable to get up if they fell. Journalist Jake Steelhammer believes the American urban myth of cow tipping originated in the 1970s. It "stampeded into the '80s", he says, "when movies like "Tommy Boy" and "Heathers" featured cow tipping expeditions." Stories about cow tipping tend to be second-hand, he says, told by someone who does not claim to have tipped a cow but who knows someone else who says he or she did. Cattle may need to be deliberately thrown or tipped over for certain types of husbandry practices and medical treatment. When done for medical purposes, this is often called "casting", and when performed without mechanical assistance requires the attachment of a length of rope around the body and legs of the animal.
After the rope is secured by non-slip bowline knots, it is pulled to the rear until the animal is off-balance. Once the cow is forced to lie down in sternal recumbency (on its chest), it can be rolled onto its side and its legs tied to prevent kicking. A calf table or calf cradle, also called a "tipping table" or a "throw down", is a relatively modern invention designed to be used on calves that are being branded. A calf is run into a chute, confined, and then tipped by the equipment onto its side for easier branding and castration. Hydraulic tilt tables for adult cattle have existed since the 1970s and are designed to lift and tip cattle onto their sides to enable veterinary care, particularly of the animals' genitalia, and for hoof maintenance. (Unlike horses, cows generally do not cooperate with a farrier when standing.) A Canadian veterinarian explained, "Using the table is much safer and easier than trying to get underneath to examine the animal", and noted that cows tipped over on a padded table usually stop struggling and become calm fairly quickly. One design, developed at the Western College of Veterinary Medicine in Saskatoon, Saskatchewan, included "cow comfort" as a unique aspect of care using this type of apparatus. Cows may tip themselves inadvertently. Due to their bulk and relatively short legs, cattle cannot roll over. Those that lie down and roll to their sides with their feet pointing uphill may become stuck and unable to rise without assistance, with potentially fatal results. In such cases, two humans can roll or flip a cow onto its other side, so that its feet are aimed downhill, thus allowing it to rise on its own. In one documented case of "real-life cow tipping", a pregnant cow rolled into a gully in New Hampshire and became trapped in an inverted state until rescued by volunteer fire fighters. The owner of the cow commented that he had seen this happen "once or twice" before. 
Trauma or illness may also result in a cow unable to rise to its feet. Such animals are sometimes called "downers." Sometimes this occurs as a result of muscle and nerve damage from calving or a disease such as mastitis. Leg injuries, muscle tears, or a massive infection of some sort may also be causes. Downer cows are encouraged to get to their feet and have a much greater chance of recovery if they do. If unable to rise, some have survived—with medical care—as long as 14 days and were ultimately able to get back on their feet. Appropriate medical treatment for a downer cow to prevent further injury includes rolling from one side to the other every three hours, careful and frequent feeding of small amounts of fodder, and access to clean water. Dead animals may appear to have been tipped over. But this is actually the process of rigor mortis, which stiffens the muscles of the carcass, beginning six to eight hours after death and lasting for one to two days. It is particularly noticeable in the limbs, which stick out straight. Post-mortem bloat also occurs because of gas formation inside the body. The process may result in cattle carcasses that wind up on their back with all four feet in the air. Assorted individuals have claimed to have performed cow tipping, often while under the influence of alcohol. These claims, to date, cannot be reliably verified, with Jake Swearingen of "Modern Farmer" noting in 2013 that YouTube, a popular source of videos of challenges and stunts, "fails to deliver one single actual cow-tipping video". Pranksters have sometimes pushed over artificial cows. Along Chicago's Michigan Avenue in 1999, two "apparently drunk" men felled six fiberglass cows that were part of a Cows on Parade public art exhibit. Four other vandals removed a "Wow cow" sculpture from its lifeguard chair at Oak Street Beach and abandoned it in a pedestrian underpass. 
A year later, New York City anchored its CowParade art cows, including "A Streetcow Named Desire", to concrete bases "to prevent the udder disrespect of cow-tippers and thieves." Cow tipping has been featured in films from the 1980s and later, such as "Heathers" (1988), "Tommy Boy" (1995), "Barnyard" (2006), and "I Love You, Beth Cooper" (2009). It was also used in the title of a 1992 documentary film by Randy Redroad, "Cow Tipping–The Militant Indian Waiter". The film "Cars" (2006) features a vehicular variant called tractor-tipping. In The Little Willies song "Lou Reed" from their 2006 eponymous debut album, Norah Jones sings about a fictional event during which musician Lou Reed tips cows in Texas. In another medium, the television show "The Big Bang Theory" uses cow-tipping lore as an element to establish the nature of a rural character, Penny. The term "cow tipping" is sometimes used as a figure of speech for pushing over something big. In "A Giant Cow-Tipping by Savages", author John Weir Close uses the term to describe contemporary mergers and acquisitions. "Tipping sacred cows" has been used as a deliberate mixed metaphor in titles of books on Christian ministry and business management.
https://en.wikipedia.org/wiki?curid=6520
Couplet A couplet is a pair of successive lines of metre in poetry. A couplet usually consists of two successive lines that rhyme and have the same metre. A couplet may be formal (closed) or run-on (open). In a formal (or closed) couplet, each of the two lines is end-stopped, implying that there is a grammatical pause at the end of a line of verse. In a run-on (or open) couplet, the meaning of the first line continues to the second. The word "couplet" comes from the French word meaning "two pieces of iron riveted or hinged together." The term "couplet" was first used to describe successive lines of verse in Sir P. Sidney's "Arcadia" in 1590: "In singing some short coplets, whereto the one halfe beginning, the other halfe should answere." While couplets traditionally rhyme, not all do. Poems may use white space to mark out couplets if they do not rhyme. Couplets in iambic pentameter are called "heroic couplets". John Dryden in the 17th century and Alexander Pope in the 18th century were both well known for their writing in heroic couplets. The poetic epigram is also in the couplet form. Couplets can also appear as part of more complex rhyme schemes, such as sonnets. Rhyming couplets are one of the simplest rhyme schemes in poetry. Because the rhyme comes so quickly, it tends to call attention to itself. Good rhyming couplets tend to "explode" as both the rhyme and the idea come to a quick close in two lines. Here are some examples of rhyming couplets where the sense as well as the sound "rhymes": On the other hand, because rhyming couplets have such a predictable rhyme scheme, they can feel artificial and plodding. Here is a Pope parody of the predictable rhymes of his era: Rhyming couplets are often used in Middle English poetry, as seen in Chaucer's "The Canterbury Tales". This work of literature is written almost entirely in rhyming couplets. Similarly, Shakespearean sonnets often employ rhyming couplets at the end to emphasize the theme. 
Take one of Shakespeare's most famous sonnets, Sonnet 18, for example (the rhyming couplet is shown in italics): Chinese couplets or "contrapuntal couplets" may be seen on doorways in Chinese communities worldwide. Couplets displayed as part of the Chinese New Year festival, on the first morning of the New Year, are called "chunlian" (春联). These are usually purchased at a market a few days before and glued to the doorframe. The text of the couplets is often traditional and contains hopes for prosperity. Other chunlian reflect more recent concerns. For example, the CCTV New Year's Gala usually promotes couplets reflecting current political themes in mainland China. Some Chinese couplets may consist of two lines of four characters each. Couplets are read from top to bottom, with the first line starting from the right. Tamil literature contains some of the best known examples of ancient couplet poetry. The Tamil language has a rich and refined grammar for couplet poetry, and distichs in Tamil poetry follow the venpa metre. The most famous example of Tamil couplet poetry is the ancient Tamil moral text of the Tirukkural, which contains a total of 1330 couplets written in the kural venpa metre, from which the title of the work was derived centuries later. Each Kural couplet is made of exactly 7 words—4 in the first line and 3 in the second. The first word may rhyme with the fourth or the fifth word. Below is an example of a couplet: The American poet J. V. Cunningham was noted for many distichs included in the various forms of epigrams included in his poetry collections, as exemplified here: Deep summer, and time passes. Sorrow wastes / To a new sorrow. While Time heals time hastes
https://en.wikipedia.org/wiki?curid=6530
Charlotte Brontë Charlotte Brontë (21 April 1816 – 31 March 1855) was an English novelist and poet, the eldest of the three Brontë sisters who survived into adulthood and whose novels became classics of English literature. She enrolled at the school at Roe Head in January 1831, aged 14. She left the following year to teach her sisters, Emily and Anne, at home, returning in 1835 as a teacher. In 1839 she took a position as governess for the Sidgwick family but left after a few months to return to Haworth, where the sisters opened a school but failed to attract pupils. Instead, they turned to writing, and they each first published in 1846 under the pseudonyms of Currer, Ellis and Acton Bell. While her first novel, "The Professor", was rejected by publishers, her second novel, "Jane Eyre", was published in 1847. The sisters admitted to their Bell pseudonyms in 1848, and by the following year were celebrated in London literary circles. Brontë was the last of her siblings to die. She became pregnant shortly after her marriage in June 1854 but died on 31 March 1855, almost certainly from hyperemesis gravidarum, a complication of pregnancy which causes excessive nausea and vomiting. Charlotte Brontë was born on 21 April 1816 in Market Street, Thornton, west of Bradford in the West Riding of Yorkshire, the third of the six children of Maria (née Branwell) and Patrick Brontë (formerly surnamed Brunty), an Irish Anglican clergyman. In 1820 her family moved a few miles to the village of Haworth, where her father had been appointed perpetual curate of St Michael and All Angels Church. Maria died of cancer on 15 September 1821, leaving five daughters, Maria, Elizabeth, Charlotte, Emily and Anne, and a son, Branwell, to be taken care of by her sister, Elizabeth Branwell. In August 1824, Patrick sent Charlotte, Emily, Maria and Elizabeth to the Clergy Daughters' School at Cowan Bridge in Lancashire. 
Charlotte maintained that the school's poor conditions permanently affected her health and physical development, and hastened the deaths of Maria (born 1814) and Elizabeth (born 1815), who both died of tuberculosis in June 1825. After the deaths of his older daughters, Patrick removed Charlotte and Emily from the school. Charlotte used the school as the basis for Lowood School in "Jane Eyre". At home in Haworth Parsonage, Brontë acted as "the motherly friend and guardian of her younger sisters". Brontë wrote her first known poem at the age of 13 in 1829, and was to go on to write more than 200 poems in the course of her life. Many of her poems were "published" in their homemade magazine "Branwell's Blackwood's Magazine", and concerned the fictional Glass Town Confederacy. She and her surviving siblings – Branwell, Emily and Anne – created their own fictional worlds, and began chronicling the lives and struggles of the inhabitants of their imaginary kingdoms. Charlotte and Branwell wrote Byronic stories about their jointly imagined country, Angria, and Emily and Anne wrote articles and poems about Gondal. The sagas they created were episodic and elaborate, and they exist in incomplete manuscripts, some of which have been published as juvenilia. These sagas provided the siblings with an obsessive interest during childhood and early adolescence, which prepared them for literary vocations in adulthood. Between 1831 and 1832, Brontë continued her education at Roe Head in Mirfield, where she met her lifelong friends and correspondents Ellen Nussey and Mary Taylor. In 1833 she wrote a novella, "The Green Dwarf", using the name Wellesley. Around 1833, her stories shifted from tales of the supernatural to more realistic stories. She returned to Roe Head as a teacher from 1835 to 1838. Unhappy and lonely as a teacher at Roe Head, Brontë poured her sorrows into poetry, writing a series of melancholic poems. 
In "We wove a Web in Childhood" written in December 1835, Brontë drew a sharp contrast between her miserable life as a teacher and the vivid imaginary worlds she and her siblings had created. In another poem "Morning was its freshness still" written at the same time, Brontë wrote "Tis bitter sometimes to recall/Illusions once deemed fair". Many of her poems concerned the imaginary world of Angria, often concerning Byronic heroes, and in December 1836 she wrote to the Poet Laureate Robert Southey asking him for encouragement of her career as a poet. Southey replied, famously, that "Literature cannot be the business of a woman's life, and it ought not to be. The more she is engaged in her proper duties, the less leisure will she have for it even as an accomplishment and a recreation." This advice she respected but did not heed. In 1839 she took up the first of many positions as governess to families in Yorkshire, a career she pursued until 1841. In particular, from May to July 1839 she was employed by the Sidgwick family at their summer residence, Stone Gappe, in Lothersdale, where one of her charges was John Benson Sidgwick (1835–1927), an unruly child who on one occasion threw a Bible at Charlotte, an incident that may have been the inspiration for a part of the opening chapter of "Jane Eyre" in which John Reed throws a book at the young Jane. Brontë did not enjoy her work as a governess, noting her employers treated her almost as a slave, constantly humiliating her. Brontë was of slight build and was less than five feet tall. In 1842 Charlotte and Emily travelled to Brussels to enrol at the boarding school run by Constantin Héger (1809–1896) and his wife Claire Zoé Parent Héger (1804–1887). During her time in Brussels, Brontë, who favoured the Protestant ideal of an individual in direct contact with God, objected to the stern Catholicism of Madame Héger, which she considered a tyrannical religion that enforced conformity and submission to the Pope. 
In return for board and tuition, Charlotte taught English and Emily taught music. Their time at the school was cut short when their aunt Elizabeth Branwell, who had joined the family in Haworth to look after the children after their mother's death, died of internal obstruction in October 1842. Charlotte returned alone to Brussels in January 1843 to take up a teaching post at the school. Her second stay was not happy: she was homesick and deeply attached to Constantin Héger. She returned to Haworth in January 1844 and used the time spent in Brussels as the inspiration for some of the events in "The Professor" and "Villette". After returning to Haworth, Charlotte and her sisters made plans to open their own boarding school in the family home. It was advertised as "The Misses Brontë's Establishment for the Board and Education of a limited number of Young Ladies", and inquiries were made of prospective pupils and sources of funding. But no pupils were attracted, and in October 1844 the project was abandoned. In May 1846 Charlotte, Emily, and Anne self-financed the publication of a joint collection of poems under their assumed names Currer, Ellis and Acton Bell. The pseudonyms veiled the sisters' sex while preserving their initials; thus Charlotte was Currer Bell. "Bell" was the middle name of Haworth's curate, Arthur Bell Nicholls, whom Charlotte later married, and "Currer" was the surname of Frances Mary Richardson Currer, who had funded their school (and possibly their father). Of the decision to use "noms de plume", Charlotte wrote: Although only two copies of the collection of poems were sold, the sisters continued writing for publication and began their first novels, continuing to use their "noms de plume" when sending manuscripts to potential publishers. Brontë's first manuscript, "The Professor", did not secure a publisher, although she was heartened by an encouraging response from Smith, Elder & Co. 
of Cornhill, who expressed an interest in any longer works Currer Bell might wish to send. Brontë responded by finishing and sending a second manuscript in August 1847. Six weeks later, "Jane Eyre" was published. It tells the story of a plain governess, Jane, who, after difficulties in her early life, falls in love with her employer, Mr Rochester. They marry, but only after Rochester's insane first wife, of whom Jane initially has no knowledge, dies in a dramatic house fire. The book's style was innovative, combining naturalism with gothic melodrama, and broke new ground in being written from an intensely evoked first-person female perspective. Brontë believed art was most convincing when based on personal experience; in "Jane Eyre" she transformed the experience into a novel with universal appeal. "Jane Eyre" had immediate commercial success and initially received favourable reviews. G. H. Lewes wrote that it was "an utterance from the depths of a struggling, suffering, much-enduring spirit", and declared that it consisted of ""suspiria de profundis"!" (sighs from the depths). Speculation about the identity and gender of the mysterious Currer Bell heightened with the publication of "Wuthering Heights" by Ellis Bell (Emily) and "Agnes Grey" by Acton Bell (Anne). Accompanying the speculation was a change in the critical reaction to Brontë's work, as accusations were made that the writing was "coarse", a judgement more readily made once it was suspected that Currer Bell was a woman. However, sales of "Jane Eyre" continued to be strong and may even have increased as a result of the novel developing a reputation as an "improper" book. A talented amateur artist, Brontë personally did the drawings for the second edition of "Jane Eyre" and in the summer of 1834 two of her paintings were shown at an exhibition by the Royal Northern Society for the Encouragement of the Fine Arts in Leeds. In 1848 Brontë began work on the manuscript of her second novel, "Shirley". 
It was only partially completed when the Brontë family suffered the deaths of three of its members within eight months. In September 1848 Branwell died of chronic bronchitis and marasmus, exacerbated by heavy drinking, although Brontë believed that his death was due to tuberculosis. Branwell may have had a laudanum addiction. Emily became seriously ill shortly after his funeral and died of pulmonary tuberculosis in December 1848. Anne died of the same disease in May 1849. Brontë was unable to write at this time. After Anne's death Brontë resumed writing as a way of dealing with her grief, and "Shirley", which deals with themes of industrial unrest and the role of women in society, was published in October 1849. Unlike "Jane Eyre", which is written in the first person, "Shirley" is written in the third person and lacks the emotional immediacy of her first novel, and reviewers found it less shocking. Brontë, as her late sister's heir, suppressed the republication of Anne's second novel, "The Tenant of Wildfell Hall", an action which had a deleterious effect on Anne's popularity as a novelist and has remained controversial among the sisters' biographers ever since. In view of the success of her novels, particularly "Jane Eyre", Brontë was persuaded by her publisher to make occasional visits to London, where she revealed her true identity and began to move in more exalted social circles, becoming friends with Harriet Martineau and Elizabeth Gaskell, and acquainted with William Makepeace Thackeray and G.H. Lewes. She never left Haworth for more than a few weeks at a time, as she did not want to leave her ageing father. Thackeray's daughter, writer Anne Isabella Thackeray Ritchie, recalled a visit to her father by Brontë: Brontë's friendship with Elizabeth Gaskell, while not particularly close, was significant in that Gaskell wrote the first biography of Brontë after her death in 1855. 
Brontë's third novel, the last published in her lifetime, was "Villette", which appeared in 1853. Its main themes include isolation, how such a condition can be borne, and the internal conflict brought about by social repression of individual desire. Its main character, Lucy Snowe, travels abroad to teach in a boarding school in the fictional town of Villette, where she encounters a culture and religion different from her own and falls in love with a man (Paul Emanuel) whom she cannot marry. Her experiences result in a breakdown but eventually, she achieves independence and fulfilment through running her own school. A substantial amount of the novel's dialogue is in the French language. "Villette" marked Brontë's return to writing from a first-person perspective (that of Lucy Snowe), the technique she had used in "Jane Eyre". Another similarity to "Jane Eyre" lies in the use of aspects of her own life as inspiration for fictional events, in particular her reworking of the time she spent at the "pensionnat" in Brussels. "Villette" was acknowledged by critics of the day as a potent and sophisticated piece of writing although it was criticised for "coarseness" and for not being suitably "feminine" in its portrayal of Lucy's desires. Before the publication of "Villette", Brontë received an expected proposal of marriage from Arthur Bell Nicholls, her father's curate, who had long been in love with her. She initially turned down his proposal and her father objected to the union at least partly because of Nicholls's poor financial status. Elizabeth Gaskell, who believed that marriage provided "clear and defined duties" that were beneficial for a woman, encouraged Brontë to consider the positive aspects of such a union and tried to use her contacts to engineer an improvement in Nicholls's finances. Brontë meanwhile was increasingly attracted to Nicholls and by January 1854 she had accepted his proposal. They gained the approval of her father by April and married in June. 
Her father Patrick had intended to give Charlotte away, but at the last minute decided he could not, and Charlotte had to make her way to the church without him. The married couple took their honeymoon in Banagher, County Offaly, Ireland. By all accounts, her marriage was a success and Brontë found herself very happy in a way that was new to her. Brontë became pregnant soon after her wedding, but her health declined rapidly and, according to Gaskell, she was attacked by "sensations of perpetual nausea and ever-recurring faintness". She died, with her unborn child, on 31 March 1855, three weeks before her 39th birthday. Her death certificate gives the cause of death as tuberculosis, but biographers including Claire Harman and others suggest that she died from dehydration and malnourishment due to vomiting caused by severe morning sickness or hyperemesis gravidarum. Brontë was buried in the family vault in the Church of St Michael and All Angels at Haworth. "The Professor", the first novel Brontë had written, was published posthumously in 1857. The fragment of a new novel she had been writing in her last years has been twice completed by recent authors, the more famous version being "Emma Brown: A Novel from the Unfinished Manuscript by Charlotte Brontë" by Clare Boylan in 2003. Most of her writings about the imaginary country Angria have also been published since her death. In 2018, "The New York Times" published a belated obituary for her. The daughter of an Irish Anglican clergyman, Brontë was herself an Anglican. In a letter to her publisher, she claims to "love the Church of England. Her Ministers indeed, I do not regard as infallible personages, I have seen too much of them for that – but to the Establishment, with all her faults – the profane Athanasian Creed excluded – I am sincerely attached." In a letter to Ellen Nussey she wrote: Elizabeth Gaskell's biography "The Life of Charlotte Brontë" was published in 1857. 
It was an important step for a leading female novelist to write a biography of another, and Gaskell's approach was unusual in that, rather than analysing her subject's achievements, she concentrated on private details of Brontë's life, emphasising those aspects that countered the accusations of "coarseness" that had been levelled at her writing. The biography is frank in places, but omits details of Brontë's love for Héger, a married man, as being too much of an affront to contemporary morals and a likely source of distress to Brontë's father, her widower, and her friends. Mrs Gaskell also provided doubtful and inaccurate information about Patrick Brontë, claiming that he did not allow his children to eat meat. This is refuted by one of Emily Brontë's diary papers, in which she describes preparing meat and potatoes for dinner at the parsonage. It has been argued that Gaskell's approach transferred the focus of attention away from the 'difficult' novels, not just Brontë's, but all the sisters', and began a process of sanctification of their private lives. On 29 July 1913 "The Times" of London printed four letters Brontë had written to Constantin Héger after leaving Brussels in 1844. Written in French except for one postscript in English, the letters broke the prevailing image of Brontë as an angelic martyr to Christian and female duties that had been constructed by many biographers, beginning with Gaskell. The letters, which formed part of a larger and somewhat one-sided correspondence in which Héger frequently appears not to have replied, reveal that she had been in love with a married man, although they are complex and have been interpreted in numerous ways, including as an example of literary self-dramatisation and an expression of gratitude from a former pupil. In 1980 a commemorative plaque was unveiled at the Centre for Fine Arts, Brussels (BOZAR), on the site of Madame Héger's school, in honour of Charlotte and Emily. In May 2017 the plaque was cleaned. 
"The Green Dwarf, A Tale of the Perfect Tense" was written in 1833 under the pseudonym Lord Charles Albert Florian Wellesley. It shows the influence of Walter Scott, and Brontë's modifications to her earlier gothic style have led Christine Alexander to comment that, in the work, "it is clear that Brontë was becoming tired of the gothic mode "per se"".
https://en.wikipedia.org/wiki?curid=6532
Charles Williams (British writer) Charles Walter Stansby Williams (20 September 1886 – 15 May 1945) was a British poet, novelist, playwright, theologian, literary critic, and member of the Inklings. Williams was born in London in 1886, the only son of (Richard) Walter Stansby Williams (1848–1929), a journalist and foreign business correspondent for an importing firm, writing in French and German, who was a 'regular and valued' contributor of verse, stories and articles to many popular magazines, and his wife Mary (née Wall, the sister of the ecclesiologist and historian J. Charles Wall), a former milliner, of Islington. He had one sister, Edith, born in 1889. The Williams family lived in 'shabby-genteel' circumstances, owing to Walter's increasing blindness and the decline of the firm by which he was employed, in Holloway. In 1894 the family moved to St Albans in Hertfordshire, where Williams lived until his marriage in 1917. Educated at St Albans School, Williams was awarded a scholarship to University College London, but he left school in 1904 without attempting to gain a degree due to an inability to pay tuition fees. Williams began work in 1904 in a Methodist bookroom. He was hired by the Oxford University Press (OUP) as a proofreading assistant in 1908 and quickly climbed to the position of editor. He continued to work at the OUP in various positions of increasing responsibility until his death in 1945. One of his greatest editorial achievements was the publication of the first major English-language edition of the works of Søren Kierkegaard. Although chiefly remembered as a novelist, Williams also published poetry, works of literary criticism, theology, drama, history, biography, and a voluminous number of book reviews. Some of his best known novels are "War in Heaven" (1930), "Descent into Hell" (1937), and "All Hallows' Eve" (1945). T. S. 
Eliot, who wrote an introduction for the last of these, described Williams's novels as "supernatural thrillers" because they explore the sacramental intersection of the physical with the spiritual while also examining the ways in which power, even spiritual power, can corrupt as well as sanctify. All of Williams's fantasies, unlike those of J. R. R. Tolkien and most of those of C. S. Lewis, are set in the contemporary world. Williams has been described by Colin Manlove as one of the three main writers of "Christian fantasy" in the twentieth century (the other two being C.S. Lewis and T. F. Powys). More recent writers of fantasy novels with contemporary settings, notably Tim Powers, cite Williams as a model and inspiration. W. H. Auden, one of Williams's greatest admirers, reportedly re-read Williams's extraordinary and highly unconventional history of the church, "The Descent of the Dove" (1939), every year. Williams's study of Dante entitled "The Figure of Beatrice" (1944) was very highly regarded at its time of publication and continues to be consulted by Dante scholars today. His work inspired Dorothy L. Sayers to undertake her translation of "The Divine Comedy". Williams, however, regarded his most important work to be his extremely dense and complex Arthurian poetry, of which two books were published, "Taliessin through Logres" (1938) and "The Region of the Summer Stars" (1944), and more remained unfinished at his death. Some of Williams's essays were collected and published posthumously in "Image of the City and Other Essays" (1958), edited by Anne Ridler. Williams gathered many followers and disciples during his lifetime. He was, for a period, a member of the Salvator Mundi Temple of the Fellowship of the Rosy Cross. He met fellow Anglican Evelyn Underhill (who was affiliated with a similar group, the Order of the Golden Dawn) in 1937 and was later to write the introduction to her published "Letters" in 1943. 
When World War II broke out in 1939, Oxford University Press moved its offices from London to Oxford. Williams was reluctant to leave his beloved city, and his wife Florence refused to go. From the nearly 700 letters he wrote his wife during the war years a generous selection has been published; "primarily… love letters," the editor calls them. But the move to Oxford did allow him to participate regularly in Lewis's literary society known as the Inklings. In this setting Williams was able to read (and improve) his final published novel, "All Hallows' Eve", as well as to hear J. R. R. Tolkien read aloud to the group some of his early drafts of "The Lord of the Rings". In addition to meeting in Lewis's rooms at Oxford, they also regularly met at The Eagle and Child pub in Oxford (better known by its nickname "The Bird and Baby"). During this time Williams also gave lectures at Oxford on John Milton, William Wordsworth, and other authors, and received an honorary M.A. degree. Williams is buried in Holywell Cemetery in Oxford: his headstone bears the word "poet", followed by the words "Under the Mercy", a phrase often used by Williams himself. In 1917 Williams married his first sweetheart, Florence Conway, following a long courtship during which he presented her with a sonnet sequence that would later become his first published book of poetry, "The Silver Stair". Their son Michael was born in 1922. Williams was an unswerving and devoted member of the Church of England, reputedly with a tolerance of the scepticism of others and a firm belief in the necessity of a "doubting Thomas" in any apostolic body. Although Williams attracted the attention and admiration of some of the most notable writers of his day, including T. S. Eliot and W. H. Auden, his greatest admirer was probably C. S. Lewis, whose novel "That Hideous Strength" (1945) has been regarded as partially inspired by his acquaintance with both the man and his novels and poems. 
Williams came to know Lewis after reading Lewis's then-recently published study "The Allegory of Love"; he was so impressed he jotted down a letter of congratulation and dropped it in the mail. Coincidentally, Lewis had just finished reading Williams's novel "The Place of the Lion" and had written a similar note of congratulation. The letters crossed in the mail and led to an enduring and fruitful friendship. Williams developed the concept of co-inherence and gave rare consideration to the theology of romantic love. Falling in love for Williams was a form of mystical envisioning in which one saw the beloved as he or she was seen through the eyes of God. Co-inherence was a term used in Patristic theology to describe the relationship between the human and divine natures of Jesus Christ and the relationship between the persons of the blessed Trinity. Williams extended the term to include the ideal relationship between the individual parts of God's creation, including human beings. It is our mutual indwelling: Christ in us and we in Christ, interdependent. It is also the web of interrelationships, social and economic and ecological, by which the social fabric and the natural world function. But especially for Williams, co-inherence is a way of talking about the Body of Christ and the communion of saints. For Williams, salvation was not a solitary affair: "The thread of the love of God was strong enough to save you and all the others, but not strong enough to save you alone." He proposed an order, the Companions of the Co-inherence, who would practice substitution and exchange, living in love-in-God, truly bearing one another's burdens, being willing to sacrifice and to forgive, living from and for one another in Christ. According to Gunnar Urang, co-inherence is the focus of all Williams's novels.
https://en.wikipedia.org/wiki?curid=6533
Celery Celery ("Apium graveolens") is a marshland plant in the family Apiaceae that has been cultivated as a vegetable since antiquity. Celery has a long fibrous stalk tapering into leaves. Depending on location and cultivar, either its stalks, leaves or hypocotyl are eaten and used in cooking. Celery seed is also used as a spice, and its extracts have been used in herbal medicine. Celery leaves are pinnate to bipinnate with rhombic leaflets. The flowers are creamy-white and are produced in dense compound umbels. The seeds are broad ovoid to globose. Modern cultivars have been selected for solid petioles (leaf stalks). A celery stalk readily separates into "strings", which are bundles of angular collenchyma cells exterior to the vascular bundles. Wild celery, "Apium graveolens" var. "graveolens", occurs around the globe. The first cultivation is thought to have happened in the Mediterranean region, where the natural habitats were salty and wet, or marshy soils near the coast where celery grew in agropyro-rumicion plant communities. North of the Alps, wild celery is found only in the foothill zone on soils with some salt content. It prefers moist or wet, nutrient-rich, muddy soils. It cannot be found in Austria and is increasingly rare in Germany. First attested in English in 1664, the word "celery" derives from the French "céleri", in turn from Italian "seleri", the plural of "selero", which comes from Late Latin "selinon", the latinisation of the Greek "σέλινον" ("selinon"), "celery". The earliest attested form of the word is the Mycenaean Greek "se-ri-no", written in Linear B syllabic script. Celery was described by Carl Linnaeus in Volume One of his "Species Plantarum" in 1753. 
The plants are raised from seed, sown either in a hot bed or in the open garden according to the season of the year, and, after one or two thinnings and transplantings, they are, on attaining a height of , planted out in deep trenches for convenience of blanching, which is effected by earthing up to exclude light from the stems. Celery was first grown as a winter and early spring vegetable. It was considered a cleansing tonic to counter the deficiencies of a winter diet based on salted meats without fresh vegetables. By the 19th century, the season for celery in England had been extended to last from the beginning of September to late in April. In North America, commercial production of celery is dominated by the cultivar called 'Pascal' celery. Gardeners can grow a range of cultivars, many of which differ from the wild species mainly in having stouter leaf stems. They are ranged under two classes, white and red. The stalks grow in tight, straight, parallel bunches and are typically marketed fresh that way, without roots and with only a little green leaf remaining. The stalks can be eaten raw, used as an ingredient in salads, or used as a flavoring in soups, stews, and pot roasts. In Europe, another popular variety is celeriac (also known as "celery root"), "Apium graveolens" var. "rapaceum", grown because its hypocotyl forms a large bulb, white on the inside. The bulb can be kept for months in winter and mostly serves as a main ingredient in soup. It can also be shredded and used in salads. The leaves are used as seasoning; the small, fibrous stalks find only marginal use. Leaf celery (Chinese celery, "Apium graveolens" var. "secalinum") is a cultivar from East Asia that grows in marshlands. Leaf celery is most likely the oldest cultivated form of celery. It has characteristically thin-skinned stalks and a stronger taste and smell than other cultivars. It is used as a flavoring in soups and sometimes pickled as a side dish.
The wild form of celery is known as "smallage". It has a furrowed stalk with wedge-shaped leaves, the whole plant having a coarse, earthy taste and a distinctive smell. The stalks are not usually eaten (except in soups or stews in French cuisine), but the leaves may be used in salads, and its seeds are those sold as a spice. With cultivation and blanching, the stalks lose their acidic qualities and assume the mild, sweetish, aromatic taste particular to celery as a salad plant. Because wild celery is rarely eaten, yet is susceptible to the same diseases as more widely grown cultivars, it is often removed from fields to help prevent transmission of viruses like celery mosaic virus. Harvesting occurs when the average size of celery in a field is marketable; because crop growth is extremely uniform, fields are harvested only once. The petioles and leaves are removed and harvested; celery is packed by size and quality (determined by color, shape, straightness and thickness of petiole, stalk and midrib length, and absence of disease, cracks, splits, insect damage and rot). During commercial harvesting, celery is packaged into cartons which contain between 36 and 48 stalks and weigh up to . Under optimal conditions, celery can be stored for up to seven weeks from . Inner stalks may continue growing if kept at temperatures above . Shelf life can be extended by packaging celery in anti-fogging, micro-perforated shrink wrap. Freshly cut petioles of celery are prone to decay, which can be prevented or reduced through the use of sharp blades during processing, gentle handling, and proper sanitation. Celery stalk may be preserved through pickling by first removing the leaves, then boiling the stalks in water before finally adding vinegar, salt, and vegetable oil. In the past, restaurants stored celery in a container of water with powdered vegetable preservative, but it was found that the sulfites in the preservative caused allergic reactions in some people. In 1986, the U.S.
Food and Drug Administration banned the use of sulfites on fruits and vegetables intended to be eaten raw. Celery is eaten around the world as a vegetable. In North America the crisp petiole (leaf stalk) is used. In Europe the hypocotyl is used as a root vegetable. The leaves are strongly flavored and are used less often, either as a flavoring in soups and stews or as a dried herb. Celery, onions, and bell peppers are the "holy trinity" of Louisiana Creole and Cajun cuisine. Celery, onions, and carrots make up the French mirepoix, often used as a base for sauces and soups. Celery is a staple in many soups, such as chicken noodle soup. Phthalides occur naturally in celery. Celery leaves are frequently used in cooking to add a mild spicy flavor to foods, similar to, but milder than, black pepper. Dried celery leaves are suitable for sprinkling as a seasoning on baked, fried or roasted fish and meats, and as part of a blend of fresh seasonings for soups and stews. They may also be eaten raw, mixed into a salad or used as a garnish. In temperate countries, celery is also grown for its seeds. These "seeds", actually very small fruits, yield a valuable essential oil that is used in the perfume industry. The oil contains the chemical compound apiole. Celery seeds can be used as a flavoring or spice, either as whole seeds or ground. The seeds can be ground and mixed with salt to produce celery salt. Celery salt can also be made from an extract of the roots or from dried leaves. Celery salt is used as a seasoning, in cocktails (notably to enhance the flavor of Bloody Mary cocktails), on the Chicago-style hot dog, and in Old Bay Seasoning. Celery seeds have been used widely in Eastern herbal traditions such as Ayurveda. Around AD 30, Aulus Cornelius Celsus wrote that celery seeds could relieve pain.
In 2019, a trend in drinking celery juice was reported in the United States, based on "detoxification" claims from the blogger Anthony William, the self-styled "Medical Medium", who claims to receive advanced health information from a spirit. The health claims have no scientific basis, but the trend caused a sizable spike in celery prices. Celery is used in weight loss diets, where it provides low-calorie dietary fiber bulk. Celery is often incorrectly thought to be a "negative-calorie food", whose digestion burns more calories than the body obtains from it. In fact, eating celery provides positive net calories, with digestion consuming only a small proportion of the calories taken in. Celery is among a small group of foods (headed by peanuts) that appear to provoke the most severe allergic reactions; for people with celery allergy, exposure can cause potentially fatal anaphylactic shock. The allergen does not appear to be destroyed at cooking temperatures. Celery root, commonly eaten as celeriac or put into drinks, is known to contain more allergen than the stalk, and the seeds contain the highest levels of allergen. Exercise-induced anaphylaxis may also be exacerbated by celery. An allergic reaction may likewise be triggered by eating foods that have been processed with machines that have previously processed celery, making such foods difficult to avoid. Whereas peanut allergy is most prevalent in the US, celery allergy is most prevalent in Central Europe. In the European Union, foods that contain or may contain celery, even in trace amounts, must be clearly marked as such. Polyynes can be found in Apiaceae vegetables like celery, and their extracts show cytotoxic activities. Celery contains phenolic acid, which is an antioxidant. Apiin and apigenin can be extracted from celery and parsley. Lunularin is a dihydrostilbenoid found in common celery. The main chemicals responsible for the aroma and taste of celery are butylphthalide and sedanolide.
Daniel Zohary and Maria Hopf note that celery leaves and inflorescences were part of the garlands found in the tomb of pharaoh Tutankhamun (died 1323 BC), and celery mericarps dated to the seventh century BC were recovered in the Heraion of Samos. However, they note "since "A. graveolens" grows wild in these areas, it is hard to decide whether these remains represent wild or cultivated forms." Only by classical times is it certain that celery was cultivated. M. Fragiska mentions an archeological find of celery dating to the 9th century BC, at Kastanas; however, the literary evidence for ancient Greece is far more abundant. In Homer's "Iliad", the horses of the Myrmidons graze on wild celery that grows in the marshes of Troy, and in the "Odyssey", there is mention of the meadows of violet and wild celery surrounding the cave of Calypso. In the "Capitulary" of Charlemagne, compiled ca. 800, "apium" appears, as does "olisatum", or alexanders, among the medicinal herbs and vegetables the Frankish emperor desired to see grown. At some later point in medieval Europe celery displaced alexanders. The name "celery" retraces the plant's route of successive adoption in European cooking, as the English "celery" (1664) is derived from the French "céleri", coming from the Lombard term "seleri", from the Latin "selinon", borrowed from Greek. Celery's late arrival in the English kitchen is an end-product of the long tradition of seed selection needed to reduce the sap's bitterness and increase its sugars. By 1699, John Evelyn could recommend it in his "Acetaria. A Discourse of Sallets": "Sellery, apium Italicum, (and of the Petroseline Family) was formerly a stranger with us (nor very long since in Italy) is an hot and more generous sort of Macedonian Persley or Smallage... and for its high and grateful Taste is ever plac'd in the middle of the Grand Sallet, at our Great Men's tables, and Praetors feasts, as the Grace of the whole Board".
Celery makes a minor appearance in colonial American gardens; its culinary limitations are reflected in the observation by the author of "A Treatise on Gardening, by a Citizen of Virginia" that it is "one of the species of parsley." Its first extended treatment in print was in Bernard M'Mahon's "American Gardener's Calendar" (1806). After the mid-19th century, continued selection for refined crisp texture and taste brought celery to American tables, where it was served in celery vases to be salted and eaten raw. Celery was so popular in the United States in the 1800s and early 1900s that the New York Public Library's historical menu archive shows it was the third most popular dish on New York City menus during that time, behind only coffee and tea. In those days celery cost more than caviar, as it was difficult to cultivate. There were also many varieties of celery then that are no longer around because they were difficult to grow and did not ship well. A chthonian symbol among the ancient Greeks, celery was said to have sprouted from the blood of Kadmilos, father of the Cabeiri, chthonian divinities celebrated in Samothrace, Lemnos, and Thebes. The spicy odor and dark leaf color encouraged this association with the cult of death. In classical Greece, celery leaves were used as garlands for the dead, and the wreaths of the winners at the Isthmian Games were first made of celery before being replaced by crowns made of pine. According to Pliny the Elder, in Achaea the garland worn by the winners of the sacred Nemean Games was also made of celery. The Ancient Greek colony of Selinous, on Sicily, was named after the wild parsley that grew abundantly there; Selinountian coins depicted a parsley leaf as the symbol of the city.
https://en.wikipedia.org/wiki?curid=6535
Dodo (Alice's Adventures in Wonderland) The Dodo is a fictional character appearing in Chapters 2 and 3 of the 1865 book "Alice's Adventures in Wonderland" by Lewis Carroll (Charles Lutwidge Dodgson). The Dodo is a caricature of the author. A popular but unsubstantiated belief is that Dodgson chose this particular animal to represent himself because of his stammer, which would cause him to accidentally introduce himself as "Do-do-dodgson". Historically, the dodo was a flightless bird that lived on the island of Mauritius, east of Madagascar in the Indian Ocean. It became extinct in the mid-17th century during the colonisation of the island by the Dutch. In this passage Lewis Carroll incorporated references to the original boating expedition of 4 July 1862, during which Alice's adventures were first told, with Alice as herself and the others represented by birds: the Lory was Lorina Liddell, the Eaglet was Edith Liddell, the Dodo was Dodgson, and the Duck was Rev. Robinson Duckworth. In order to get dry after a swim, the Dodo proposes that everyone run a Caucus race, in which the participants run in patterns of any shape, starting and leaving off whenever they like, so that everyone wins. At the end of the race, Alice distributes comfits from her pocket to all as prizes. However, this leaves no prize for herself. The Dodo inquires what else she has in her pocket. As she has only a thimble, the Dodo requests it from her and then awards it to Alice as her prize. The Caucus Race, as depicted by Carroll, is a satire on the political caucus system, mocking its lack of clarity and decisiveness. In the Disney film, the Dodo plays a much greater role in the story. He is merged with the character of Pat the Gardener, which leads to him sometimes being nicknamed Pat the Dodo, though this name is never mentioned in the film. The Dodo is also the leader of the caucus race. He has the appearance and personality of a sea captain. The Dodo is voiced by Bill Thompson and animated by Milt Kahl.
Dodo is first seen as Alice is floating on the sea in a bottle. Dodo is seen singing, but when Alice asks him for help, he does not notice her. On shore, Dodo is seen on a rock, organizing a caucus race. This race involves running around until one gets dry, but the attempts are hampered by incoming waves. Dodo is later summoned by the White Rabbit, when the Rabbit believes a monster, actually Alice having magically grown to a giant size, is inside his home. Dodo brings Bill the Lizard and attempts to get him to go down the chimney. Bill refuses at first, but Dodo is able to convince him otherwise. However, the soot causes Alice to sneeze, sending Bill high up into the sky. Dodo then decides to burn the house down, much to the chagrin of the White Rabbit, and begins gathering wood, such as the furniture, for this purpose. However, Alice is soon able to return to a smaller size and exit the house. The White Rabbit soon leaves, while Dodo asks for matches, not realizing that the situation has been resolved. He then asks Alice for a match, but when she doesn't have any, Dodo complains about the lack of cooperation and uses his pipe to light the fire. The Dodo later appears briefly at the end of the film, conducting another caucus race. In Tim Burton's adaptation of "Alice in Wonderland", the Dodo's appearance stays close to John Tenniel's illustration. He has brilliant blue down and wears a blue coat and white spats, along with glasses and a cane. He is one of Alice's good-willed advisers, and is the first to take note of her as the true Alice. He is also one of the oldest inhabitants. His name is Uilleam, and he is portrayed by Michael Gough. He goes with the White Rabbit, Tweedledee and Tweedledum, and the Dormouse to take Alice to the Caterpillar, who is to decide whether Alice is the real one. He is later captured by the Red Queen's forces.
When Alice comes to the Red Queen's castle, the Dodo is seen in the castle yard serving as a caddy for the Queen's croquet game. After the Red Queen orders the release of the Jubjub bird to keep her subjects from rebelling, the Dodo is briefly seen running from it as the Tweedles hide; he is snatched by the Jubjub and is not seen again for the rest of the film. His name may be based on a lecture on William the Conqueror from Chapter Three of the original novel. The character is voiced by Michael Gough in his final feature film role before his death in 2011. Gough came out of retirement to appear in the film, but as the character speaks only three lines, he was able to record them in one day. In the anime and manga series "Pandora Hearts", Dodo is the chain of Rufus Barma, head of one of the four dukedoms.
https://en.wikipedia.org/wiki?curid=1500
Albert, Duke of Prussia Albert of Prussia (17 May 1490 – 20 March 1568) was a German nobleman, the 37th Grand Master of the Teutonic Knights, who after converting to Lutheranism became the first ruler of the Duchy of Prussia, the secularized state that emerged from the former Monastic State of the Teutonic Knights. Albert was the first European ruler to establish Lutheranism, and thus Protestantism, as the official state religion of his lands. He proved instrumental in the political spread of Protestantism in its early stage, ruling the Prussian lands for nearly six decades (1510–1568). A member of the Brandenburg-Ansbach branch of the House of Hohenzollern, Albert became Grand Master, and his skill in political administration and leadership ultimately reversed the decline of the Teutonic Order. But Albert, who was sympathetic to the demands of Martin Luther, rebelled against the Catholic Church and the Holy Roman Empire by converting the Teutonic state into a Protestant and hereditary realm, the Duchy of Prussia, for which he paid homage to his uncle, Sigismund I, King of Poland. That arrangement was confirmed by the Treaty of Kraków in 1525. Albert pledged a personal oath to the King and in return was invested with the duchy for himself and his heirs. Albert's rule in Prussia was fairly prosperous. Although he had some trouble with the peasantry, the confiscation of the lands and treasures of the Catholic Church enabled him to propitiate the nobles and provide for the expenses of the newly established Prussian court. He was active in imperial politics, joining the League of Torgau in 1526, and acted in unison with the Protestants in plotting to overthrow Emperor Charles V after the issue of the Augsburg Interim in May 1548. Albert established schools in every town and founded Königsberg University in 1544. He promoted culture and the arts, patronising the works of Erasmus Reinhold and Caspar Hennenberger.
During the final years of his rule, Albert was forced to raise taxes instead of further confiscating now-depleted church lands, provoking a peasant rebellion. The intrigues of the court favourites Johann Funck and Paul Skalić also led to various religious and political disputes. Albert spent his final years virtually deprived of power and died at Tapiau on 20 March 1568. His son, Albert Frederick, succeeded him as Duke of Prussia. Albert's dissolution of the Teutonic State led to the founding of the Duchy of Prussia, paving the way for the rise of the House of Hohenzollern. Albert was born in Ansbach in Franconia as the third son of Frederick I, Margrave of Brandenburg-Ansbach. His mother was Sophia, daughter of Casimir IV Jagiellon, Grand Duke of Lithuania and King of Poland, and his wife Elisabeth of Austria. He was raised for a career in the Church and spent some time at the court of Hermann IV of Hesse, Elector of Cologne, who appointed him canon of Cologne Cathedral. Not only was he quite religious; he was also interested in mathematics and science, and is sometimes claimed to have contradicted the teachings of the Church in favour of scientific theories. His career was nonetheless advanced by the Church, and institutions of the Catholic clergy supported his early advancement. Turning to a more active life, Albert accompanied Emperor Maximilian I to Italy in 1508 and after his return spent some time in the Kingdom of Hungary. Duke Frederick of Saxony, Grand Master of the Teutonic Order, died in December 1510. Albert was chosen as his successor early in 1511 in the hope that his relationship to his maternal uncle, Sigismund I the Old, Grand Duke of Lithuania and King of Poland, would facilitate a settlement of the disputes over eastern Prussia, which had been held by the order under Polish suzerainty since the Second Peace of Thorn (1466). The new Grand Master, aware of his duties to the empire and to the papacy, refused to submit to the crown of Poland.
As war over the order's existence appeared inevitable, Albert made strenuous efforts to secure allies and carried on protracted negotiations with Emperor Maximilian I. The ill-feeling, influenced by the ravages of members of the Order in Poland, culminated in a war which began in December 1519 and devastated Prussia. Albert was granted a four-year truce early in 1521. The dispute was referred to Emperor Charles V and other princes, but as no settlement was reached Albert continued his efforts to obtain help in view of a renewal of the war. For this purpose he visited the Diet of Nuremberg in 1522, where he made the acquaintance of the Reformer Andreas Osiander, by whose influence Albert was won over to Protestantism. The Grand Master then journeyed to Wittenberg, where he was advised by Martin Luther to abandon the rules of his order, to marry, and to convert Prussia into a hereditary duchy for himself. This proposal, which was understandably appealing to Albert, had already been discussed by some of his relatives; but it was necessary to proceed cautiously, and he assured Pope Adrian VI that he was anxious to reform the order and punish the knights who had adopted Lutheran doctrines. Luther for his part did not stop at the suggestion, but in order to facilitate the change made special efforts to spread his teaching among the Prussians, while Albert's brother, Margrave George of Brandenburg-Ansbach, laid the scheme before their uncle, Sigismund I the Old of Poland. After some delay Sigismund assented to the offer, with the provision that Prussia should be treated as a Polish fiefdom; and after this arrangement had been confirmed by a treaty concluded at Kraków, Albert pledged a personal oath to Sigismund I and was invested with the duchy for himself and his heirs on 10 February 1525. The Estates of the land then met at Königsberg and took the oath of allegiance to the new duke, who used his full powers to promote the doctrines of Luther. 
This transition did not, however, take place without protest. Summoned before the imperial court of justice, Albert refused to appear and was proscribed, while the order elected a new Grand Master, Walter von Cronberg, who received Prussia as a fief at the imperial Diet of Augsburg. As the German princes were experiencing the tumult of the Reformation, the German Peasants' War, and the wars against the Ottoman Turks, they did not enforce the ban on the duke, and agitation against him soon died away. In imperial politics Albert was fairly active. Joining the League of Torgau in 1526, he acted in unison with the Protestants, and was among the princes who banded and plotted together to overthrow Charles V after the issue of the Augsburg Interim in May 1548. For various reasons, however, poverty and personal inclination among others, he did not take a prominent part in the military operations of this period. The early years of Albert's rule in Prussia were fairly prosperous. Although he had some trouble with the peasantry, the lands and treasures of the church enabled him to propitiate the nobles and for a time to provide for the expenses of the court. He did something for the furtherance of learning by establishing schools in every town and by freeing serfs who adopted a scholastic life. In 1544, in spite of some opposition, he founded Königsberg University, where he appointed his friend Andreas Osiander to a professorship in 1549. Albert also paid for the printing of the astronomical "Prutenic Tables" compiled by Erasmus Reinhold and the first maps of Prussia by Caspar Hennenberger. Osiander's appointment was the beginning of the troubles which clouded the closing years of Albert's reign. Osiander's divergence from Luther's doctrine of justification by faith involved him in a violent quarrel with Philip Melanchthon, who had adherents in Königsberg, and these theological disputes soon created an uproar in the town.
The duke strenuously supported Osiander, and the area of the quarrel soon broadened. There were no longer church lands available with which to conciliate the nobles, the burden of taxation was heavy, and Albert's rule became unpopular. After Osiander's death in 1552, Albert favoured a preacher named Johann Funck, who, with an adventurer named Paul Skalić, exercised great influence over him and obtained considerable wealth at public expense. The state of turmoil caused by these religious and political disputes was increased by the possibility of Albert's early death and the need, should that happen, to appoint a regent, as his only son, Albert Frederick, was still a mere youth. The duke was forced to consent to a condemnation of the teaching of Osiander, and the climax came in 1566 when the Estates appealed to King Sigismund II Augustus of Poland, Albert's cousin, who sent a commission to Königsberg. Skalić saved his life by flight, but Funck was executed. The question of the regency was settled, and a form of Lutheranism was adopted and declared binding on all teachers and preachers. Virtually deprived of power, the duke lived for two more years and died of the plague at Tapiau on 20 March 1568, along with his wife. Cornelis Floris de Vriendt designed his tomb within Königsberg Cathedral. Albert was a voluminous letter writer, and corresponded with many of the leading personages of the time. Albert was the first German noble to support Luther's ideas and in 1544 founded the University of Königsberg, the Albertina, as a rival to the Roman Catholic Krakow Academy. It was the second Lutheran university in the German states, after the University of Marburg. A relief of Albert over the Renaissance-era portal of Königsberg Castle's southern wing was created by Andreas Hess in 1551 according to plans by Christoph Römer. Another relief, by an unknown artist, was included in the wall of the Albertina's original campus.
This depiction, which showed the duke with his sword over his shoulder, was the popular "Albertus", the symbol of the university. The original was moved to the Königsberg Public Library to protect it from the elements, while the sculptor Paul Kimritz created a duplicate for the wall. Another version of the "Albertus", by Lothar Sauer, was included at the entrance of the Königsberg State and Royal Library. In 1880 Friedrich Reusch created a sandstone bust of Albert at the Regierungsgebäude, the administrative building for Regierungsbezirk Königsberg. On 19 May 1891 Reusch unveiled a famous statue of Albert at Königsberg Castle with the inscription: "Albert of Brandenburg, Last Grand Master, First Duke in Prussia". Albert Wolff also designed an equestrian statue of Albert located at the new campus of the Albertina. King's Gate contains a statue of Albert. Albert was often honored in the quarter Maraunenhof in northern Königsberg. Its main street was named Herzog-Albrecht-Allee in 1906. Its town square, König-Ottokar-Platz, was renamed Herzog-Albrecht-Platz in 1934 to match its church, the Herzog-Albrecht-Gedächtniskirche. Albert married first, in 1526, Dorothea (1 August 1504 – 11 April 1547), daughter of King Frederick I of Denmark. They had six children. He married secondly, in 1550, Anna Maria (1532 – 20 March 1568), daughter of Eric I, Duke of Brunswick-Lüneburg. The couple had two children.
https://en.wikipedia.org/wiki?curid=1514
Aachen Aachen, also known as Bad Aachen ("Aachen Spa"), in French as Aix-la-Chapelle, in Italian as Aquisgrana, and in Latin as Aquæ Granni, is a spa and border city in North Rhine-Westphalia, Germany. Aachen developed from a Roman settlement and spa, subsequently becoming the preferred medieval imperial residence of Emperor Charlemagne of the Frankish Empire and, from 936 to 1531, the place where 31 Holy Roman Emperors were crowned Kings of the Germans. Aachen is the westernmost city in Germany, located near the borders with Belgium and the Netherlands, west of Cologne in a former coal-mining area. One of Germany's leading institutes of higher education in technology, RWTH Aachen University, is located in the city. Aachen's industries include science, engineering and information technology. In 2009, Aachen was ranked eighth among cities in Germany for innovation. The name "Aachen", like southern German words meaning "river" or "stream", is a modern descendant of an Old High German word meaning "water" or "stream", which corresponds etymologically to the Latin "aquae", referring to the springs. The location has been inhabited by humans since the Neolithic era, about 5,000 years ago, attracted by its warm mineral springs. The Latin word figures in Aachen's Roman name "Aquae Granni", which meant "waters of Grannus", referring to the Celtic god of healing who was worshipped at the springs. This name evolved into the French "Aix", and subsequently "Aix-la-Chapelle" after Charlemagne had his palatine chapel built there in the late 8th century and then made the city his empire's capital. Aachen's names in French and German evolved in parallel, and the city is known by a variety of different names in other languages. Aachen is at the western end of the Benrath line that divides High German to the south from the rest of the West Germanic speech area to the north. Aachen's local dialect belongs to the Ripuarian language group.
Flint quarries on the Lousberg, Schneeberg, and Königshügel, first used during Neolithic times (3000–2500 BC), attest to the long occupation of the site of Aachen, as do recent finds under the modern city's "Elisengarten" pointing to a former settlement from the same period. Bronze Age (around 1600 BC) settlement is evidenced by the remains of barrows (burial mounds) found, for example, on the Klausberg. During the Iron Age, the area was settled by Celtic peoples who were perhaps drawn by the marshy Aachen basin's hot sulphur springs, where they worshipped Grannus, god of light and healing. Later, the 25-hectare Roman spa resort town of Aquae Granni was, according to legend, founded by Grenus under Hadrian, around 124 AD. In fact, the name of the fictitious founder refers to the Celtic god, and it seems it was the Roman 6th Legion at the start of the 1st century AD that first channelled the hot springs into a spa at Büchel, adding at the end of the same century the "Münstertherme" spa, two water pipelines, and a probable sanctuary dedicated to Grannus. A kind of forum, surrounded by colonnades, connected the two spa complexes. There was also an extensive residential area, part of it inhabited by a flourishing Jewish community. The Romans built bathhouses near Burtscheid. A temple precinct called "Vernenum" was built near the modern Kornelimünster/Walheim. Today, remains have been found of three bathhouses, including two fountains in the "Elisenbrunnen" and the Burtscheid bathhouse. Roman civil administration in Aachen broke down between the end of the 4th and the beginning of the 5th centuries. Rome withdrew its troops from the area, but the town remained populated. By 470, the town came to be ruled by the Ripuarian Franks and subordinated to their capital, Cologne. After Roman times, Pepin the Short had a castle residence built in the town, owing to the proximity of the hot springs and also for strategic reasons, as it is located between the Rhineland and northern France.
Einhard mentions that in 765–6 Pepin spent both Christmas and Easter at "Aquis villa" ("and [he] celebrated Christmas in the town Aquis, and similarly Easter"), which must have been sufficiently equipped to support the royal household for several months. In the year of his coronation as king of the Franks, 768, Charlemagne came to spend Christmas at Aachen for the first time. He remained there in a mansion which he may have extended, although there is no source attesting to any significant building activity at Aachen in his time, apart from the building of the Palatine Chapel (since 1930, a cathedral) and the Palace. Charlemagne spent most winters in Aachen between 792 and his death in 814. Aachen became the focus of his court and the political centre of his empire. After his death, the king was buried in the church which he had built; his original tomb has been lost, while his alleged remains are preserved in the "Karlsschrein", the shrine where he was reburied after being declared a saint; his saintliness, however, was never officially acknowledged by the Roman Curia. In 936, Otto I was crowned king of East Francia in the collegiate church built by Charlemagne. During the reign of Otto II, the nobles revolted and the West Franks, under Lothair, raided Aachen in the ensuing confusion. Aachen was attacked again by Odo of Champagne, who assaulted the imperial palace while Conrad II was absent; Odo relinquished it quickly and was killed soon afterwards. The palace and town of Aachen had fortifying walls built by order of Emperor Frederick Barbarossa between 1172 and 1176. Over the next 500 years, most kings of Germany destined to reign over the Holy Roman Empire were crowned in Aachen. The original audience hall built by Charlemagne was torn down and replaced by the current city hall in 1330. The last king to be crowned here was Ferdinand I, in 1531.
During the Middle Ages, Aachen remained a city of regional importance, due to its proximity to Flanders; it achieved a modest position in the trade in woollen cloths, favoured by imperial privilege. The city remained a free imperial city, subject to the emperor only, but was politically far too weak to influence the policies of any of its neighbours. The only dominion it had was over Burtscheid, a neighbouring territory ruled by a Benedictine abbess. Burtscheid was forced to accept that all of its traffic must pass through the "Aachener Reich". Even in the late 18th century the Abbess of Burtscheid was prevented from building a road linking her territory to the neighbouring estates of the duke of Jülich; the city of Aachen even deployed its handful of soldiers to chase away the road-diggers. As an imperial city, Aachen held certain political privileges that allowed it to remain independent of the troubles of Europe for many years. It remained a direct vassal of the Holy Roman Empire throughout most of the Middle Ages. It was also the site of many important church councils, including the Council of 837 and the Council of 1166, a council convened by the antipope Paschal III. Aachen has proved an important site for the production of historical manuscripts. Under Charlemagne's purview, both the Ada Gospels and the Coronation Gospels may have been produced in Aachen. In addition, many of the other texts in the court library were also produced locally. During the reign of Louis the Pious (814–840), substantial quantities of ancient texts were produced at Aachen, including legal manuscripts such as the leges scriptorium group, patristic texts, and the five manuscripts of the Bamberg Pliny Group. Finally, under Lothair I (840–855), texts of outstanding quality were still being produced. This, however, marked the end of the period of manuscript production at Aachen. 
In 1598, following the invasion of Spanish troops from the Netherlands, Rudolf deposed all Protestant office holders in Aachen and even went as far as expelling them from the city. From the early 16th century, Aachen started to lose its power and influence. First the coronations of emperors were moved from Aachen to Frankfurt. This was followed by the religious wars, and the great fire of 1656. After the destruction of most of the city in 1656, the rebuilding was mostly in the Baroque style. The decline of Aachen culminated in 1794, when the French, led by General Charles Dumouriez, occupied Aachen. By the middle of the 17th century Aachen had become attractive as a spa: not so much because of the effects of the hot springs on the health of its visitors but because Aachen was then – and remained well into the 19th century – a place of high-level prostitution. Traces of this hidden agenda of the city's history are found in the 18th-century guidebooks to Aachen as well as to the other spas. The main indication for visiting patients, ironically, was syphilis; only by the end of the 19th century had rheumatism become the most important object of cures at Aachen and Burtscheid. Aachen was chosen as the site of several important congresses and peace treaties: the first congress of Aachen (often referred to as the Congress of Aix-la-Chapelle in English) on 2 May 1668, leading to the First Treaty of Aachen in the same year which ended the War of Devolution. The second congress ended with the second treaty in 1748, ending the War of the Austrian Succession. In 1789, there was a constitutional crisis in the Aachen government, and in 1794 Aachen lost its status as a free imperial city. On 9 February 1801, the Peace of Lunéville removed the ownership of Aachen and the entire "left bank" of the Rhine from Germany (the Holy Roman Empire) and granted it to France. In 1815, control of the town was passed to the German Kingdom of Prussia, by an act passed by the Congress of Vienna. 
The third congress took place in 1818, to decide the fate of occupied Napoleonic France. By the middle of the 19th century, industrialisation had swept away most of the city's medieval rules of production and commerce, although the entirely corrupt remains of the city's medieval constitution were kept in place (compare the famous remarks of Georg Forster in his "Ansichten vom Niederrhein") until 1801, when Aachen became the "chef-lieu du département de la Roer" in Napoleon's First French Empire. In 1815, after the Napoleonic Wars, the Kingdom of Prussia took over within the new German Confederation. The city was one of its most socially and politically backward centres until the end of the 19th century. Administered within the Rhine Province, by 1880 the population was 80,000. Starting in 1838, the railway from Cologne to Belgium passed through Aachen. The city suffered extreme overcrowding and deplorable sanitary conditions until 1875, when the medieval fortifications were finally abandoned as a limit to building and new, better housing was built in the east of the city, where sanitary drainage was easiest. In December 1880, the Aachen tramway network was opened, and in 1895 it was electrified. In the 19th century and up to the 1930s, the city was important in the production of railway locomotives and carriages, iron, pins, needles, buttons, tobacco, woollen goods, and silk goods. After World War I, Aachen was occupied by the Allies until 1930, along with the rest of German territory west of the Rhine. Aachen was one of the locations involved in the ill-fated Rhenish Republic. On 21 October 1923, an armed mob took over the city hall. Similar actions took place in Mönchen-Gladbach, Duisburg, and Krefeld. This republic lasted only about a year. Aachen was heavily damaged during World War II. According to Jörg Friedrich in "The Fire" (2008), two Allied air raids on 11 April and 24 May 1944 "radically destroyed" the city. 
The first killed 1,525, including 212 children, and bombed six hospitals. During the second, 442 aircraft hit two railway stations, killed 207, and left 15,000 homeless. The raids also destroyed Aachen-Eilendorf and Aachen-Burtscheid. The city and its fortified surroundings were laid siege to from 12 September to 21 October 1944 by the US 1st Infantry Division with the 3rd Armored Division assisting from the south. Around 13 October the US 2nd Armored Division played their part, coming from the north and getting as close as Würselen, while the 30th Infantry Division played a crucial role in completing the encirclement of Aachen on 16 October 1944. With reinforcements from the US 28th Infantry Division the Battle of Aachen then continued involving direct assaults through the heavily defended city, which finally forced the German garrison to surrender on 21 October 1944. Aachen was the first German city to be captured by the Allies, and its residents welcomed the soldiers as liberators. What remained of the city was destroyed—in some areas completely—during the fighting, mostly by American artillery fire and demolitions carried out by the Waffen-SS defenders. Damaged buildings included the medieval churches of St. Foillan, St. Paul and St. Nicholas, and the Rathaus (city hall), although Aachen Cathedral was largely unscathed. Only 4,000 inhabitants remained in the city; the rest had followed evacuation orders. Its first Allied-appointed mayor, Franz Oppenhoff, was assassinated by an SS commando unit. During the Roman period, Aachen was the site of a flourishing Jewish community. Later, during the Carolingian empire, a Jewish community lived near the royal palace. In 797, Isaac, a Jewish merchant, accompanied two ambassadors of Charlemagne to the court of Harun al-Rashid. He returned to Aachen in July 802, bearing an elephant called "Abul-Abbas" as a gift for the emperor. 
During the 13th century, many Jews converted to Christianity, as shown in the records of the Aachen Minster (today's Cathedral). In 1486, the Jews of Aachen offered gifts to Maximilian I during his coronation ceremony. In 1629, the Aachen Jewish community was expelled from the city. In 1667, six Jews were allowed to return. Most of the Aachen Jews settled in the nearby town of Burtscheid. On 16 May 1815, the Jewish community of the city paid homage in its synagogue to the Prussian king, Friedrich Wilhelm III. A Jewish cemetery was acquired in 1851. 1,345 Jews lived in the city in 1933. The synagogue was destroyed during "Kristallnacht" in 1938. In 1939, after emigration and arrests, 782 Jews remained in the city. After World War II, only 62 Jews lived there. In 2003, 1,434 Jews were living in Aachen. In Jewish texts, the city of Aachen was called "Aish" or "Ash" (אש). The city of Aachen has developed into a technology hub as a by-product of hosting one of the leading universities of technology in Germany, the RWTH Aachen (Rheinisch-Westfälische Technische Hochschule), known especially for mechanical engineering, automotive and manufacturing technology, as well as for its research and academic hospital Klinikum Aachen, one of the largest medical facilities in Europe. Aachen is located in the middle of the Meuse–Rhine Euroregion, close to the border tripoint of Germany, the Netherlands, and Belgium. The town of Vaals in the Netherlands lies just across the border from Aachen's city centre, while the Dutch city of Heerlen and Eupen, the capital of the German-speaking Community of Belgium, are both located a short distance from the city centre. Aachen lies near the head of the open valley of the Wurm (which today flows through the city in canalised form), part of the larger basin of the Meuse, and north of the High Fens, which form the northern edge of the Eifel uplands of the Rhenish Massif. 
Parts of the city limits border Belgium and the Netherlands. The highest point in Aachen lies in the far southeast of the city; the lowest point lies in the north, on the border with the Netherlands. As the westernmost city in Germany (and close to the Low Countries), Aachen and the surrounding area belong to a temperate climate zone, with humid weather, mild winters, and warm summers. Because of its location north of the Eifel and the High Fens and the resulting prevailing westerly weather patterns, rainfall in Aachen (on average 805 mm/year) is comparatively higher than, for example, in Bonn (669 mm/year). Another factor in the local weather of Aachen is the occurrence of Foehn winds on southerly air currents, which results from the city's geographic location on the northern edge of the Eifel. Because the city is surrounded by hills, it suffers from inversion-related smog. Some areas of the city have become urban heat islands as a result of poor heat exchange, both because of the area's natural geography and from human activity. The city's numerous cold-air corridors, which are slated to remain as free as possible from new construction, therefore play an important role in the urban climate of Aachen. Precipitation is almost evenly spread throughout the year. The geology of Aachen is very structurally heterogeneous. The oldest rocks in the area surrounding the city originate from the Devonian period and include carboniferous sandstone, greywacke, claystone and limestone. These formations are part of the Rhenish Massif, north of the High Fens. In the Pennsylvanian subperiod of the Carboniferous geological period, these rock layers were compressed and folded as a result of the Variscan orogeny. 
After this event, and over the course of the following 200 million years, this area has been continuously flattened. During the Cretaceous period, the ocean penetrated the continent from the direction of the North Sea up to the mountainous area near Aachen, bringing with it clay, sand, and chalk deposits. While the clay (which was the basis for a major pottery industry in nearby Raeren) is mostly found in the lower areas of Aachen, the hills of the Aachen Forest and the Lousberg were formed from upper Cretaceous sand and chalk deposits. More recent sedimentation is mainly located in the north and east of Aachen and was formed through tertiary and quaternary river and wind activities. Along the major thrust fault of the Variscan orogeny, there are over 30 thermal springs in Aachen and Burtscheid. Additionally, the subsurface of Aachen is traversed by numerous active faults that belong to the Rurgraben fault system, which has been responsible for numerous earthquakes in the past, including the 1756 Düren earthquake and the 1992 Roermond earthquake, which was the strongest earthquake ever recorded in the Netherlands. Aachen has 245,885 inhabitants (as of 31 December 2015), of whom 118,272 are female, and 127,613 are male. The unemployment rate in the city is, as of April 2012, 9.7 percent. At the end of 2009, the foreign-born residents of Aachen made up 13.6 percent of the total population. A significant portion of foreign residents are students at the RWTH Aachen University. The city is divided into seven administrative districts, or boroughs, each with its own district council, district leader, and district authority. The councils are elected locally by those who live within the district, and these districts are further subdivided into smaller sections for statistical purposes, with each sub-district named by a two-digit number. 
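The demographic figures above can be checked for internal consistency. A minimal sketch in plain Python, using only the numbers quoted in the text (as of 31 December 2015):

```python
# Population figures quoted in the text (31 December 2015).
total = 245_885
female = 118_272
male = 127_613

# The female and male counts should account for the whole population.
assert female + male == total

# Share of each group, rounded to one decimal place.
print(f"female: {female / total:.1%}, male: {male / total:.1%}")
```

Running this prints `female: 48.1%, male: 51.9%`, confirming that the two counts sum exactly to the stated total.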
The districts of Aachen, including their constituent statistical districts, are: Regardless of official statistical designations, there are 50 neighbourhoods and communities within Aachen, here arranged by district: The following cities and communities border Aachen, clockwise from the northwest: Herzogenrath, Würselen, Eschweiler, Stolberg and Roetgen (which are all in the district of Aachen); Raeren, Kelmis and Plombières (Liège Province in Belgium) as well as Vaals, Gulpen-Wittem, Simpelveld, Heerlen and Kerkrade (all in Limburg Province in the Netherlands). Aachen Cathedral was erected on the orders of Charlemagne. Construction began c. AD 796, and on completion c. 798 it was the largest cathedral north of the Alps. It was modelled after the Basilica of San Vitale in Ravenna, Italy, and was built by Odo of Metz. Charlemagne also intended the chapel to rival the Lateran Palace in both quality and authority. It was originally built in the Carolingian style, including marble-covered walls and mosaic inlay on the dome. On his death, Charlemagne's remains were interred in the cathedral and can be seen there to this day. The cathedral was extended several times in later ages, turning it into a curious and unique mixture of building styles. The throne and gallery portion date from the Ottonian period, with portions of the original opus sectile floor still visible. Gables were added to the roof in the 13th century, and a choir around the start of the 15th; after the fire of 1656, the dome was rebuilt. After Frederick Barbarossa canonised Charlemagne in 1165, the chapel became a destination for pilgrims. For 600 years, from 936 to 1531, Aachen Cathedral was the church of coronation for 30 German kings and 12 queens. The church built by Charlemagne is still the main attraction of the city. In addition to holding the remains of its founder, it became the burial place of his successor Otto III. 
In the upper chamber of the gallery, Charlemagne's marble throne is housed. Aachen Cathedral has been designated a UNESCO World Heritage Site. Most of the marble and columns used in the construction of the cathedral were brought from Rome and Ravenna, including the sarcophagus in which Charlemagne was eventually laid to rest. A bronze bear from Gaul was placed inside, along with an equestrian statue from Ravenna believed to depict Theodoric, paralleling the she-wolf and the equestrian statue of Marcus Aurelius on the Capitoline in Rome. Bronze pieces such as the doors and railings, some of which have survived to the present day, were cast in a local foundry. Finally, the origin of the bronze pine cone in the chapel is uncertain; wherever it was made, it too parallels a piece in Rome, in Old St. Peter's Basilica. Aachen Cathedral Treasury has housed, throughout its history, a collection of liturgical objects. The origin of this church treasure is disputed: some say Charlemagne himself endowed his chapel with the original collection, with the rest collected over time, while others say all of the objects were collected over time, from such places as Jerusalem and Constantinople. The location of this treasury moved over time and was unknown until the 15th century, when it was located in the Matthiaskapelle (St. Matthew's Chapel); it remained there until 1873, when it was moved to the Karlskapelle (Charles' Chapel). From there it was moved to the Hungarian Chapel in 1881 and, in 1931, to its present location next to the Allerseelenkapelle (Poor Souls' Chapel). Only six of the original Carolingian objects remain, and of those only three are left in Aachen: the Aachen Gospels, a diptych of Christ, and an early Byzantine silk. The Coronation Gospels and a reliquary burse of St. Stephen were moved to Vienna in 1798, and the Talisman of Charlemagne was given as a gift in 1804 to Josephine Bonaparte and subsequently to Rheims Cathedral. 
Some 210 documented pieces have been added to the treasury since its inception, typically given in return for legitimisation through association with the heritage of Charlemagne. The Lothar Cross, the Gospels of Otto III and multiple additional Byzantine silks were donated by Otto III. Part of the Pala d'Oro and a covering for the Aachen Gospels were made of gold donated by Henry II. Frederick Barbarossa donated the candelabrum that adorns the dome and that also once "crowned" the Shrine of Charlemagne, which was placed underneath it in 1215. Charles IV donated a pair of reliquaries. Louis XI gave, in 1475, the crown of Margaret of York and, in 1481, another arm reliquary of Charlemagne. Maximilian I and Charles V both gave numerous works of art by Hans von Reutlingen. Continuing the tradition, objects have been donated up to the present, each indicative of the period of its gifting, the last documented gift being a chalice from 1960 made by Ewald Mataré. The Aachen Rathaus (English: Aachen City Hall or Aachen Town Hall), dating from 1330, lies between two central squares, the "Markt" (marketplace) and the "Katschhof" (between city hall and cathedral). The coronation hall is on the first floor of the building. Inside one can find five frescoes by the Aachen artist Alfred Rethel, which show legendary scenes from the life of Charlemagne, as well as Charlemagne's signature. Precious replicas of the Imperial Regalia are also kept here. Since 2009, the city hall has been a station on the "Route Charlemagne", a tour programme by which historical sights of Aachen are presented to visitors. At the city hall, a museum exhibition explains the history and art of the building and gives a sense of the historical coronation banquets that took place there. A portrait of Napoleon from 1807 by Louis-André-Gabriel Bouchet and one of his wife Joséphine from 1805 by Robert Lefèvre are viewable as part of the tour. 
The city hall remains the seat of the mayor of Aachen and of the city council, and the Charlemagne Prize is awarded there annually. The "Grashaus", a late medieval house at the "Fischmarkt", is one of the oldest non-religious buildings in central Aachen. It housed the city archive and, before that, served as the city hall until the present building took over this function. The "Elisenbrunnen" is one of the most famous sights of Aachen. It is a neo-classical hall covering one of the city's famous fountains, just a minute away from the cathedral. A few steps in a south-easterly direction lies the 19th-century theatre. Also of note are two remaining city gates, the "Ponttor" (Pont gate), northwest of the cathedral, and the "Marschiertor" (marching gate), close to the central railway station. A few parts of both medieval city walls are also left, most of them integrated into more recent buildings, but some others still visible. Five towers also survive, some of which are used for housing. St. Michael's Church, Aachen was built as the church of the Aachen Jesuit Collegium in 1628. It is attributed to Rhenish Mannerism and is an example of local Renaissance architecture. The rich façade remained unfinished until 1891, when the architect Peter Friedrich Peters added to it. The church is a Greek Orthodox church today, but the building is also used for concerts because of its good acoustics. The synagogue in Aachen, which was destroyed on the Night of Broken Glass (Kristallnacht), 9 November 1938, was reinaugurated on 18 May 1995. One of the contributors to the reconstruction of the synagogue was Jürgen Linden, the Lord Mayor of Aachen from 1989 to 2009. There are numerous other notable churches and monasteries, a few remarkable 17th- and 18th-century buildings in the particular Baroque style typical of the region, a collection of statues and monuments, park areas, and cemeteries, among others. 
Among the museums in the town are the Suermondt-Ludwig Museum, which has a fine sculpture collection, and the Aachen Museum of the International Press, which is dedicated to newspapers from the 16th century to the present. The area's industrial history is reflected in dozens of 19th- and early 20th-century manufacturing sites in the city. Aachen is the administrative centre for the coal-mining industries in neighbouring places to the northeast. Products manufactured in Aachen include electrical goods, textiles, foodstuffs (chocolate and candy), glass, machinery, rubber products, furniture, and metal products. Chemicals, plastics, cosmetics, and needles and pins are also produced in and around Aachen. Though once major players in Aachen's economy, glassware and textile production today make up only 10% of total manufacturing jobs in the city. There have been a number of spin-offs from the university's IT technology department. In June 2010, Achim Kampker, together with Günther Schuh, founded the small electric-vehicle company Street Scooter GmbH; in August 2014, it was renamed StreetScooter GmbH. It began as a privately organised research initiative at RWTH Aachen University and later became an independent company in Aachen. Kampker was also the founder and chairman of the European Network for Affordable and Sustainable Electromobility. In May 2014, the company announced that the city of Aachen, the Aachen city council and the Aachen savings bank had ordered electric vehicles from the company. In late 2014, approximately 70 employees were manufacturing 200 vehicles annually on the premises of the Waggonfabrik Talbot, the former Talbot/Bombardier plant in Aachen. In December 2014, Deutsche Post DHL Group purchased the StreetScooter company, which became its wholly owned subsidiary. By April 2016, the company announced that it would produce 2,000 of its electric vans, branded "Work", in Aachen by the end of the year. 
In 2015, the electric vehicle start-up e.GO Mobile was founded by Günther Schuh, which started producing the e.GO Life electric passenger car and other vehicles in April 2019. In April 2016, StreetScooter GmbH announced that it would be scaling up to manufacture approximately 10,000 of the "Work" vehicles annually, starting in 2017, also in Aachen. If that goal is achieved, it will become the largest electric light utility vehicle manufacturer in Europe, surpassing Renault which makes the smaller "Kangoo Z.E.". RWTH Aachen University, established as Polytechnicum in 1870, is one of Germany's Universities of Excellence with strong emphasis on technological research, especially for electrical and mechanical engineering, computer sciences, physics, and chemistry. The university clinic attached to the RWTH, the Klinikum Aachen, is the biggest single-building hospital in Europe. Over time, a host of software and computer industries have developed around the university. It also maintains a botanical garden (the Botanischer Garten Aachen). FH Aachen, Aachen University of Applied Sciences (AcUAS) was founded in 1971. The AcUAS offers a classic engineering education in professions such as mechatronics, construction engineering, mechanical engineering or electrical engineering. German and international students are educated in more than 20 international or foreign-oriented programmes and can acquire German as well as international degrees (Bachelor/Master) or "Doppelabschlüsse" (double degrees). Foreign students account for more than 21% of the student body. The Katholische Hochschule Nordrhein-Westfalen – Abteilung Aachen (Catholic University of Applied Sciences Northrhine-Westphalia – Aachen department) offers its some 750 students a variety of degree programmes: social work, childhood education, nursing, and co-operative management. It also has the only programme of study in Germany especially designed for mothers. 
The Hochschule für Musik und Tanz Köln (Cologne University of Music and Dance) is one of the world's foremost performing arts schools and one of the largest music institutions for higher education in Europe, with one of its three campuses in Aachen. The Aachen campus substantially contributes to the Opera/Musical Theatre master's programme by collaborating with the Theater Aachen and the recently established musical theatre chair through the Rheinische Opernakademie. The German army's Technical School ("Ausbildungszentrum Technik Landsysteme") is in Aachen. The annual CHIO (short for the French term "Concours Hippique International Officiel") is the biggest equestrian meeting in the world, and among horsemen it is considered to be as prestigious for equitation as the tournament of Wimbledon is for tennis. Aachen hosted the 2006 FEI World Equestrian Games. The local football team Alemannia Aachen had a short run in Germany's first division after its promotion in 2006. However, the team could not sustain its status and is now back in the fourth division. The stadium "Tivoli", opened in 1928, served as the venue for the team's home games and was well known for its incomparable atmosphere throughout the whole of the second division. Before the old stadium's demolition in 2011, it was used by amateurs, whilst the club held its games in the new stadium "Neuer Tivoli" (meaning New Tivoli) a short distance down the road. The building work for the stadium, which has a capacity of 32,960, began in May 2008 and was completed by the beginning of 2009. The Ladies in Black women's volleyball team (part of the "PTSV Aachen" sports club since 2013) has played in the first German volleyball league (DVL) since 2008. Aachen's railway station, the Hauptbahnhof (Central Station), was constructed in 1841 for the Cologne–Aachen railway line. In 1905 it was moved closer to the city centre. 
It serves main lines to Cologne, Mönchengladbach and Liège as well as branch lines to Heerlen, Alsdorf, Stolberg and Eschweiler. ICE high-speed trains from Brussels via Cologne to Frankfurt am Main and Thalys trains from Paris to Cologne also stop at Aachen Central Station. Four RE lines and two RB lines connect Aachen with the Ruhrgebiet, Mönchengladbach, Spa (Belgium), Düsseldorf and the Siegerland. The "Euregiobahn", a regional railway system, reaches several minor cities in the Aachen region. There are four smaller stations in Aachen: "Aachen West", "Aachen Schanz", "Aachen-Rothe Erde" and "Eilendorf"; slower trains stop at these. Aachen West has gained in importance with the expansion of RWTH Aachen University. There are two stations for intercity bus services in Aachen: Aachen West station, in the north-west of the city, and Aachen Wilmersdorfer Straße, in the north-east. The first horse tram line in Aachen opened in December 1880. After electrification in 1895, the network reached its greatest extent in 1915, becoming the fourth-longest tram network in Germany. Many tram lines extended to the surrounding towns of Herzogenrath, Stolberg and Alsdorf, as well as the Belgian and Dutch communes of Vaals, Kelmis (then "Altenberg") and Eupen. The Aachen tram system was linked with the Belgian national interurban tram system. Like many tram systems in Western Europe, the Aachen tram suffered from poorly maintained infrastructure, and local politicians deemed it unnecessary and an obstruction to car traffic. On 28 September 1974, the last line, 15 (Vaals–Brand), operated for one final day and was then replaced by buses. A proposal to reinstate a tram/light rail system under the name "Campusbahn" was dropped after a referendum. Today, the ASEAG ("Aachener Straßenbahn und Energieversorgungs-AG", literally "Aachen tram and power supply company") operates a bus network with 68 bus routes. Because of the location at the border, many bus routes extend into Belgium and the Netherlands. 
Lines 14 to Eupen, Belgium, and 44 to Heerlen, Netherlands, are jointly operated with Transport en Commun and Veolia Transport Nederland, respectively. ASEAG is one of the main participants in the Aachener Verkehrsverbund (AVV), a tariff association in the region. Along with ASEAG, city bus routes of Aachen are served by private contractors such as Sadar, Taeter, Schlömer, or DB Regio Bus. Line 350, which runs from Maastricht, also enters Aachen. Aachen is connected to the Autobahn A4 (west–east), A44 (north–south) and A544 (a smaller motorway from the A4 to the "Europaplatz" near the city centre). There are plans to eliminate traffic jams at the Aachen road interchange. Maastricht Aachen Airport is the main airport of Aachen and Maastricht. It is located around 15 nautical miles (28 km; 17 mi) northwest of Aachen. There is a shuttle service between Aachen and the airport. Recreational aviation is served by the (formerly military) Aachen Merzbrück Airfield. Since 1950, a committee of Aachen citizens has annually awarded the Charlemagne Prize to personalities of outstanding service to the unification of Europe. It is traditionally awarded on Ascension Day at the City Hall. In 2016, the Charlemagne Prize was awarded to Pope Francis. The International Charlemagne Prize of Aachen was awarded in the year 2000 to US president Bill Clinton, for his special personal contribution to co-operation with the states of Europe, for the preservation of peace, freedom, democracy and human rights in Europe, and for his support of the enlargement of the European Union. In 2004, Pope John Paul II's efforts to unite Europe were honoured with an "Extraordinary Charlemagne Medal", which was awarded for the only time ever. Aachen is twinned with:
https://en.wikipedia.org/wiki?curid=1520
Agate Agate is a common rock formation consisting primarily of chalcedony and quartz, and occurring in a wide variety of colors. Agates are primarily formed within volcanic and metamorphic rocks. Decorative uses of agate date back as far as Ancient Greece, and it is most commonly used in decorations or jewelry. The stone was given its name by Theophrastus, a Greek philosopher and naturalist, who discovered the stone along the shore line of the Dirillo River, or Achates, in Sicily, sometime between the 4th and 3rd centuries BC. Agate is one of the most common materials used in the art of hardstone carving, and has been recovered at a number of ancient sites, indicating its widespread use in the ancient world; for example, archaeological recovery at the Knossos site on Crete illustrates its role in Bronze Age Minoan culture. Agate minerals have the tendency to form on or within pre-existing rocks, creating difficulties in accurately determining their time of formation. Their host rocks have been dated to have formed as early as the Archean Eon. Agates are most commonly found as nodules within the cavities of volcanic rocks. These cavities are formed by gases trapped within the liquid volcanic material, producing vesicles. The cavities are then filled in by silica-rich fluids from the volcanic material; layers are deposited on the walls of the cavity, slowly working their way inwards. The first layer deposited on the cavity walls is commonly known as the priming layer. Variations in the character of the solution or in the conditions of deposition may cause a corresponding variation in the successive layers. These variations in layers result in bands of chalcedony, often alternating with layers of crystalline quartz, forming banded agate. Hollow agates can also form when the silica-rich liquid does not penetrate deeply enough to fill the cavity completely. 
Agate will form crystals within the reduced cavity, and the apex of each crystal may point towards its center. The priming layer is often dark green, but can be modified by iron oxide, resulting in a rust-like appearance. Agate is very durable and is therefore often found detached from its eroded matrix; once removed, its outer surface is usually pitted and rough from filling the cavity of the former matrix. Agates have also been found in sedimentary rocks, normally in limestone or dolomite; in these rocks the necessary cavities are often left by decomposed branches or other buried organic material. If silica-rich fluids are able to penetrate these cavities, agates can form. "Lace agate" is a variety that exhibits a lace-like pattern with forms such as eyes, swirls, bands or zigzags. "Blue lace agate" is found in Africa and is especially hard. "Crazy lace agate," typically found in Mexico, is often brightly colored with a complex pattern, demonstrating randomized distribution of contour lines and circular droplets scattered throughout the rock. The stone is typically coloured red and white but is also seen to exhibit yellow and grey combinations. "Moss agate", as the name suggests, exhibits a moss-like pattern and is of a greenish colour. The coloration is not created by any vegetative growth, but rather by a mixture of chalcedony and oxidized iron hornblende. "Dendritic agate" also displays vegetative features, including fern-like patterns formed by the presence of manganese and iron oxides. "Turritella agate" ("Elimia tenera") is formed from the shells of fossilized freshwater Turritella gastropods with elongated spiral shells. Similarly, coral, petrified wood, porous rocks and other organic remains can also form agate. "Coldwater agates", such as the Lake Michigan Cloud Agate, did not form under volcanic processes, but instead formed within the limestone and dolomite strata of marine origin. 
Like volcanic-origin agates, Coldwater agates formed from silica gels that lined pockets and seams within the bedrock. These agates are typically less colorful, with banded lines of grey and white chalcedony. "Greek agate" is a name given to pale white to tan colored agate found in Sicily, once a Greek colony, dating back to 400 BC. The Greeks used it for making jewelry and beads. "Brazilian agate" is found as sizable geodes of layered nodules. These occur in brownish tones inter-layered with white and gray. Quartz forms within these nodules, creating a striking specimen when cut opposite the layered growth axis. It is often dyed in various colors for ornamental purposes. "Polyhedroid agate" forms in a flat-sided shape similar to a polyhedron. When sliced, it often shows a characteristic layering of concentric polygons. It has been suggested that its growth is not crystallographically controlled but is due to the filling-in of spaces between pre-existing crystals which have since dissolved. Other forms of agate include Holley blue agate (also spelled "Holly blue agate"), a rare dark blue ribbon agate only found near Holley, Oregon; Lake Superior agate; Carnelian agate (with reddish hues); Botswana agate; Plume agate; Condor agate; Tube agate, containing visible flow channels or pinhole-sized "tubes"; Fortification agate, with contrasting concentric banding reminiscent of defensive ditches and walls around ancient forts; Binghamite, a variety found only on the Cuyuna iron range (near Crosby) in Crow Wing County, Minnesota; Fire agate, showing an iridescent, internal flash or "fire" that results from a layer of clear agate over a layer of hydrothermally deposited hematite; Patuxent River stone, a red and yellow form of agate only found in Maryland; and Enhydro agate, which contains tiny inclusions of water, sometimes with air bubbles. Industrial uses of agate exploit its hardness, its ability to retain a highly polished surface finish, and its resistance to chemical attack. 
It has traditionally been used to make knife-edge bearings for laboratory balances and precision pendulums, and sometimes to make mortars and pestles to crush and mix chemicals. It has also been used for centuries for leather burnishing tools. The decorative arts use it to make ornaments such as pins, brooches or other types of jewellery, paper knives, inkstands, marbles and seals. Agate is also still used today for decorative displays, cabochons, beads, carvings and intarsia art, as well as face-polished and tumble-polished specimens of varying size and origin. Idar-Oberstein was one of the centers which made use of agate on an industrial scale. While at first locally found agates were used to make all types of objects for the European market, this became a globalized business around the turn of the 20th century: Idar-Oberstein imported large quantities of agate from Brazil as ship's ballast. Making use of a variety of proprietary chemical processes, they produced colored beads that were sold around the globe. Agates have long been used in arts and crafts. The sanctuary of a Presbyterian church in Yachats, Oregon, has six windows with panes made of agates collected from the local beaches. Respiratory diseases such as silicosis, and a higher incidence of tuberculosis among workers involved in the agate industry, have been reported from India and China.
https://en.wikipedia.org/wiki?curid=1523
Aspirin Aspirin, also known as acetylsalicylic acid (ASA), is a medication used to reduce pain, fever, or inflammation. Specific inflammatory conditions which aspirin is used to treat include Kawasaki disease, pericarditis, and rheumatic fever. Aspirin given shortly after a heart attack decreases the risk of death. Aspirin is also used long-term to help prevent further heart attacks, ischaemic strokes, and blood clots in people at high risk. It may also decrease the risk of certain types of cancer, particularly colorectal cancer. For pain or fever, effects typically begin within 30 minutes. Aspirin is a nonsteroidal anti-inflammatory drug (NSAID) and works similarly to other NSAIDs but also suppresses the normal functioning of platelets. One common adverse effect is an upset stomach. More significant side effects include stomach ulcers, stomach bleeding, and worsening asthma. Bleeding risk is greater among those who are older, drink alcohol, take other NSAIDs, or are on other blood thinners. Aspirin is not recommended in the last part of pregnancy. It is not generally recommended in children with infections because of the risk of Reye syndrome. High doses may result in ringing in the ears. A precursor to aspirin found in leaves from the willow tree has been used for its health effects for at least 2,400 years. In 1853, chemist Charles Frédéric Gerhardt treated the medicine sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time. For the next fifty years, other chemists established the chemical structure and came up with more efficient production methods. In 1897, scientists at the Bayer company began studying acetylsalicylic acid as a less-irritating replacement medication for common salicylate medicines. By 1899, Bayer had named it "Aspirin" and sold it around the world. Aspirin's popularity grew over the first half of the twentieth century leading to competition between many brands and formulations. 
The word "Aspirin" was Bayer's brand name; however, their rights to the trademark were lost or sold in many countries. Aspirin is one of the most widely used medications globally, with an estimated 50 to 120 billion pills consumed each year. It is on the World Health Organization's List of Essential Medicines, is inexpensive both at wholesale in the developing world and for a typical month of medication in the United States, and is available as a generic medication. In 2017, it was the 42nd most commonly prescribed medication in the United States, with more than 17 million prescriptions. Aspirin is used in the treatment of a number of conditions, including fever, pain, rheumatic fever, and inflammatory conditions, such as rheumatoid arthritis, pericarditis, and Kawasaki disease. Lower doses of aspirin have also been shown to reduce the risk of death from a heart attack, or the risk of stroke in people who are at high risk or who have cardiovascular disease, but not in elderly people who are otherwise healthy. There is some evidence that aspirin is effective at preventing colorectal cancer, though the mechanisms of this effect are unclear. In the United States, low-dose aspirin is deemed reasonable in those between 50 and 70 years old who have a risk of cardiovascular disease over 10%, are not at an increased risk of bleeding, and are otherwise healthy. Aspirin is an effective analgesic for acute pain, although it is generally considered inferior to ibuprofen because aspirin is more likely to cause gastrointestinal bleeding. Aspirin is generally ineffective for pain caused by muscle cramps, bloating, gastric distension, or acute skin irritation. As with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone. Effervescent formulations of aspirin relieve pain faster than aspirin in tablets, which makes them useful for the treatment of migraines. 
Topical aspirin may be effective for treating some types of neuropathic pain. Aspirin, either by itself or in a combined formulation, effectively treats certain types of headache, but its efficacy may be questionable for others. Secondary headaches, meaning those caused by another disorder or trauma, should be promptly treated by a medical provider. Among primary headaches, the International Classification of Headache Disorders distinguishes between tension headache (the most common), migraine, and cluster headache. Aspirin or other over-the-counter analgesics are widely recognized as effective for the treatment of tension headache. Aspirin, especially as a component of an aspirin/paracetamol/caffeine combination, is considered a first-line therapy in the treatment of migraine, and comparable to lower doses of sumatriptan. It is most effective at stopping migraines when they are first beginning. Like its ability to control pain, aspirin's ability to control fever is due to its action on the prostaglandin system through its irreversible inhibition of COX. Although aspirin's use as an antipyretic in adults is well established, many medical societies and regulatory agencies, including the American Academy of Family Physicians, the American Academy of Pediatrics, and the Food and Drug Administration, strongly advise against using aspirin for treatment of fever in children because of the risk of Reye's syndrome, a rare but often fatal illness associated with the use of aspirin or other salicylates in children during episodes of viral or bacterial infection. Because of the risk of Reye's syndrome in children, in 1986, the US Food and Drug Administration (FDA) required labeling on all aspirin-containing medications advising against its use in children and teenagers. Aspirin is used as an anti-inflammatory agent for both acute and long-term inflammation, as well as for treatment of inflammatory diseases, such as rheumatoid arthritis. 
Aspirin is an important part of the treatment of those who have had a heart attack. It is generally not recommended for routine use by people with no other health problems, including those over the age of 70. For people who have already had a heart attack or stroke, taking aspirin daily for two years prevented 1 in 50 from having a cardiovascular problem (heart attack, stroke, or death), but also caused non-fatal bleeding problems to occur in 1 of 400 people. Low dose aspirin appears useful for people weighing less than 70 kg, while higher dose aspirin is required to benefit those over 70 kg. The United States Preventive Services Task Force (USPSTF) recommends initiating low-dose aspirin use for the primary prevention of cardiovascular disease and colon cancer in adults aged 50 to 59 years who have a 10% or greater 10-year cardiovascular disease (CVD) risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years. In those with no previous history of heart disease, aspirin decreases the risk of a non-fatal myocardial infarction but increases the risk of bleeding and does not change the overall risk of death. Specifically, over 5 years it decreased the risk of a cardiovascular event by 1 in 265 and increased the risk of bleeding by 1 in 210. Aspirin appears to offer little benefit to those at lower risk of heart attack or stroke; for instance, those without a history of these events or with pre-existing disease. Some studies recommend aspirin on a case-by-case basis, while others have suggested the risks of other events, such as gastrointestinal bleeding, were enough to outweigh any potential benefit, and recommended against using aspirin for primary prevention entirely. Aspirin has also been suggested as a component of a polypill for prevention of cardiovascular disease. Complicating the use of aspirin for prevention is the phenomenon of aspirin resistance. 
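The benefit-harm figures above ("1 in 50" helped versus "1 in 400" harmed over two years; "1 in 265" versus "1 in 210" over five years) are easier to compare when scaled to a common cohort. A minimal sketch in Python (the 10,000-person cohort size is an illustrative assumption, not from the text):

```python
def events_per_cohort(one_in_n, cohort=10_000):
    """Scale a '1 in N' event rate to expected events in a notional cohort."""
    return cohort / one_in_n

# Secondary prevention over two years (figures as stated in the text):
prevented = events_per_cohort(50)   # cardiovascular problems prevented
bleeds = events_per_cohort(400)     # non-fatal bleeding problems caused
print(f"Per 10,000 people over 2 years: ~{prevented:.0f} prevented, ~{bleeds:.0f} bleeds")

# Primary prevention over five years:
cv_avoided = events_per_cohort(265)
bleeds_5y = events_per_cohort(210)
print(f"Per 10,000 people over 5 years: ~{cv_avoided:.0f} avoided, ~{bleeds_5y:.0f} bleeds")
```

Note how the secondary-prevention ratio (roughly 8:1 in favor of benefit) is far more favorable than the primary-prevention ratio (below 1:1), which matches the text's conclusion that aspirin offers little benefit to lower-risk groups.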
For people who are resistant, aspirin's efficacy is reduced. Some authors have suggested testing regimens to identify people who are resistant to aspirin. After percutaneous coronary interventions (PCIs), such as the placement of a coronary artery stent, a U.S. Agency for Healthcare Research and Quality guideline recommends that aspirin be taken indefinitely. Frequently, aspirin is combined with an ADP receptor inhibitor, such as clopidogrel, prasugrel, or ticagrelor, to prevent blood clots. This is called dual antiplatelet therapy (DAPT). United States and European Union guidelines disagree somewhat about how long, and for what indications, this combined therapy should be continued after surgery. U.S. guidelines recommend DAPT for at least 12 months, while EU guidelines recommend DAPT for 6–12 months after a drug-eluting stent placement. However, they agree that aspirin be continued indefinitely after DAPT is complete. Aspirin is thought to reduce the overall risk of both getting cancer and dying from cancer. This effect is particularly beneficial for colorectal cancer (CRC), but the drug must be taken for at least 10–20 years to see this benefit. It may also slightly reduce the risk of endometrial cancer, breast cancer, and prostate cancer. Some conclude the benefits are greater than the risks due to bleeding in those at average risk. Others are unclear whether the benefits are greater than the risk. Given this uncertainty, the 2007 United States Preventive Services Task Force (USPSTF) guidelines on this topic recommended against the use of aspirin for prevention of CRC in people with average risk. 
Nine years later, however, the USPSTF issued a grade B recommendation for the use of low-dose aspirin (75 to 100 mg/day) "for the primary prevention of CVD [cardiovascular disease] and CRC in adults 50 to 59 years of age who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years". A meta-analysis through 2019 found that aspirin reduces the risk of cancer of the colorectum, esophagus, and stomach. Aspirin is a first-line treatment for the fever and joint-pain symptoms of acute rheumatic fever. The therapy often lasts for one to two weeks, and is rarely indicated for longer periods. After fever and pain have subsided, the aspirin is no longer necessary, since it does not decrease the incidence of heart complications and residual rheumatic heart disease. Naproxen has been shown to be as effective as aspirin and less toxic, but due to the limited clinical experience, naproxen is recommended only as a second-line treatment. Along with rheumatic fever, Kawasaki disease remains one of the few indications for aspirin use in children, in spite of a lack of high-quality evidence for its effectiveness. Low-dose aspirin supplementation has moderate benefits when used for prevention of pre-eclampsia. This benefit is greater when started in early pregnancy. There is no evidence that aspirin prevents dementia. For some people, aspirin does not have as strong an effect on platelets as for others, an effect known as aspirin resistance or insensitivity. One study has suggested women are more likely to be resistant than men, and a different, aggregate study of 2,930 people found 28% were resistant. A study in 100 Italian people, though, found that, of the apparent 31% aspirin-resistant subjects, only 5% were truly resistant, and the others were noncompliant. 
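The USPSTF grade B criteria quoted above are a conjunction of simple conditions, which can be expressed as a single predicate. A hedged sketch in Python (the function and parameter names are illustrative only; this encodes the quoted wording, and is not a clinical tool):

```python
def uspstf_low_dose_aspirin_candidate(age, ten_year_cvd_risk,
                                      increased_bleeding_risk,
                                      life_expectancy_years,
                                      willing_ten_years):
    """Encode the USPSTF grade B criteria quoted in the text:
    adults 50-59 with >=10% 10-year CVD risk, no increased bleeding
    risk, >=10 years life expectancy, willing to take low-dose
    aspirin daily for >=10 years. Illustrative only."""
    return (50 <= age <= 59
            and ten_year_cvd_risk >= 0.10
            and not increased_bleeding_risk
            and life_expectancy_years >= 10
            and willing_ten_years)

# A 55-year-old with 12% 10-year CVD risk and no bleeding risk qualifies:
print(uspstf_low_dose_aspirin_candidate(55, 0.12, False, 20, True))   # True
# A 65-year-old with the same profile falls outside the 50-59 age band:
print(uspstf_low_dose_aspirin_candidate(65, 0.12, False, 20, True))   # False
```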
Another study of 400 healthy volunteers found no subjects who were truly resistant, but some had "pseudoresistance, reflecting delayed and reduced drug absorption". Adult aspirin tablets are produced in standardised sizes, which vary slightly from country to country, for example 300 mg in Britain and 325 mg (or 5 grains) in the United States. Smaller doses are based on these standards, "e.g.", 75 mg and 81 mg tablets. The tablets are commonly called "baby aspirin" or "baby-strength", because they were originally (but are no longer) intended to be administered to infants and children. The slight difference in dosage between the 75 mg and the 81 mg tablets is of no medical significance. The dose required for benefit appears to depend on a person's weight. For those weighing less than 70 kg, low dose is effective for preventing cardiovascular disease; for patients above this weight, higher doses are required. In general, for adults, doses are taken four times a day for fever or arthritis, with doses near the maximal daily dose used historically for the treatment of rheumatic fever. For the prevention of myocardial infarction (MI) in someone with documented or suspected coronary artery disease, much lower doses are taken once daily. March 2009 recommendations from the USPSTF on the use of aspirin for the primary prevention of coronary heart disease encourage men aged 45–79 and women aged 55–79 to use aspirin when the potential benefit of a reduction in MI for men, or stroke for women, outweighs the potential harm of an increase in gastrointestinal hemorrhage. The WHI study found that regular low-dose (75 or 81 mg) aspirin female users had a 25% lower risk of death from cardiovascular disease and a 14% lower risk of death from any cause. Low-dose aspirin use was also associated with a trend toward lower risk of cardiovascular events, and lower aspirin doses (75 or 81 mg/day) may optimize efficacy and safety for people requiring aspirin for long-term prevention. 
In children with Kawasaki disease, aspirin is taken at dosages based on body weight, initially four times a day for up to two weeks and then at a lower dose once daily for a further six to eight weeks. Aspirin should not be taken by people who are allergic to ibuprofen or naproxen, or who have salicylate intolerance or a more generalized drug intolerance to NSAIDs, and caution should be exercised in those with asthma or NSAID-precipitated bronchospasm. Owing to its effect on the stomach lining, manufacturers recommend people with peptic ulcers, mild diabetes, or gastritis seek medical advice before using aspirin. Even if none of these conditions is present, the risk of stomach bleeding is still increased when aspirin is taken with alcohol or warfarin. People with hemophilia or other bleeding tendencies should not take aspirin or other salicylates. Aspirin is known to cause hemolytic anemia in people who have the genetic disease glucose-6-phosphate dehydrogenase deficiency, particularly in large doses and depending on the severity of the disease. Use of aspirin during dengue fever is not recommended owing to increased bleeding tendency. People with kidney disease, hyperuricemia, or gout should not take aspirin because it inhibits the kidneys' ability to excrete uric acid and thus may exacerbate these conditions. Aspirin should not be given to children or adolescents to control cold or influenza symptoms, as this has been linked with Reye's syndrome. Aspirin use has been shown to increase the risk of gastrointestinal bleeding. Although some enteric-coated formulations of aspirin are advertised as being "gentle to the stomach", in one study enteric coating did not seem to reduce this risk. Combining aspirin with other NSAIDs has also been shown to further increase this risk. Using aspirin in combination with clopidogrel or warfarin also increases the risk of upper gastrointestinal bleeding. 
Blockade of COX-1 by aspirin apparently results in the upregulation of COX-2 as part of a gastric defense, and taking COX-2 inhibitors concurrently with aspirin increases gastric mucosal erosion. Therefore, caution should be exercised if combining aspirin with any "natural" supplements with COX-2-inhibiting properties, such as garlic extracts, curcumin, bilberry, pine bark, ginkgo, fish oil, resveratrol, genistein, quercetin, resorcinol, and others. In addition to enteric coating, "buffering" is the other main method companies have used to try to mitigate the problem of gastrointestinal bleeding. Buffering agents are intended to work by preventing the aspirin from concentrating in the walls of the stomach, although the benefits of buffered aspirin are disputed. Almost any buffering agent used in antacids can be used; Bufferin, for example, uses magnesium oxide. Other preparations use calcium carbonate. Taking aspirin with vitamin C has been investigated as a method of protecting the stomach lining. Taking equal doses of vitamin C and aspirin may decrease the amount of stomach damage that occurs compared to taking aspirin alone. Large doses of salicylate, a metabolite of aspirin, cause temporary tinnitus (ringing in the ears), based on experiments in rats, via an action on the arachidonic acid and NMDA receptor cascades. Reye's syndrome, a rare but severe illness characterized by acute encephalopathy and fatty liver, can occur when children or adolescents are given aspirin for a fever or other illness or infection. From 1981 to 1997, 1207 cases of Reye's syndrome in people younger than 18 were reported to the U.S. Centers for Disease Control and Prevention. Of these, 93% reported being ill in the three weeks preceding the onset of Reye's syndrome, most commonly with a respiratory infection, chickenpox, or diarrhea. Salicylates were detectable in 81.9% of children for whom test results were reported. 
After the association between Reye's syndrome and aspirin was reported, and safety measures to prevent it (including a Surgeon General's warning, and changes to the labeling of aspirin-containing drugs) were implemented, aspirin use by children declined considerably in the United States, as did the number of reported cases of Reye's syndrome; a similar decline was found in the United Kingdom after warnings against pediatric aspirin use were issued. The U.S. Food and Drug Administration now recommends that aspirin (or aspirin-containing products) should not be given to anyone under the age of 12 who has a fever, and the UK National Health Service recommends that children under 16 years of age should not take aspirin unless it is on the advice of a doctor. For a small number of people, taking aspirin can result in symptoms resembling an allergic reaction, including hives, swelling, and headache. The reaction is caused by salicylate intolerance and is not a true allergy, but rather an inability to metabolize even small amounts of aspirin, resulting in an overdose. Aspirin and other NSAIDs, such as ibuprofen, may delay the healing of skin wounds. Aspirin may, however, help heal venous leg ulcers that have not healed following usual treatment. Aspirin can induce swelling of skin tissues in some people. In one study, angioedema appeared one to six hours after ingesting aspirin in some of the people; however, aspirin taken alone did not cause angioedema in these people, as the angioedema appeared only when aspirin was taken in combination with another NSAID. Aspirin causes an increased risk of cerebral microbleeds, which appear on MRI scans as hypointense (dark) patches of 5 to 10 mm or smaller. Such cerebral microbleeds are important, since they often occur prior to ischemic stroke or intracerebral hemorrhage, Binswanger disease, and Alzheimer's disease. 
A study of a group with a mean aspirin dosage of 270 mg per day estimated an average absolute risk increase in intracerebral hemorrhage (ICH) of 12 events per 10,000 persons. In comparison, the estimated absolute risk reduction was 137 events per 10,000 persons for myocardial infarction and 39 events per 10,000 persons for ischemic stroke. In cases where ICH has already occurred, aspirin use results in higher mortality, with a dose of about 250 mg per day resulting in a relative risk of death within three months after the ICH of around 2.5 (95% confidence interval 1.3 to 4.6). Aspirin and other NSAIDs can cause abnormally high blood levels of potassium by inducing a hyporeninemic hypoaldosteronic state via inhibition of prostaglandin synthesis; however, these agents do not typically cause hyperkalemia by themselves in the setting of normal renal function and a euvolemic state. Aspirin can cause prolonged bleeding after operations for up to 10 days. In one study, 30 of 6499 people having elective surgery required reoperations to control bleeding; twenty had diffuse bleeding and ten had bleeding from a discrete site. Diffuse, but not discrete, bleeding was associated with the preoperative use of aspirin alone or in combination with other NSAIDs in 19 of the 20 people with diffuse bleeding. On 9 July 2015, the FDA toughened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAIDs). Aspirin is an NSAID but is not affected by the new warnings. Aspirin overdose can be acute or chronic. In acute poisoning, a single large dose is taken; in chronic poisoning, higher than normal doses are taken over a period of time. Acute overdose has a mortality rate of 2%. Chronic overdose is more commonly lethal, with a mortality rate of 25%; chronic overdose may be especially severe in children. 
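The absolute-risk figures in the study above can be combined into a single net event count per 10,000 persons. A small arithmetic check in Python (note that "net events" here simply sums the stated figures and ignores that hemorrhages, heart attacks, and strokes differ in severity):

```python
# Absolute risk changes per 10,000 persons, as stated in the text:
ich_increase = 12        # intracerebral hemorrhages caused
mi_reduction = 137       # myocardial infarctions avoided
stroke_reduction = 39    # ischemic strokes avoided

net_events_avoided = mi_reduction + stroke_reduction - ich_increase
print(net_events_avoided)  # 164 fewer events per 10,000 persons
```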
Toxicity is managed with a number of potential treatments, including activated charcoal, intravenous dextrose and normal saline, sodium bicarbonate, and dialysis. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels generally range from 30–100 mg/l after usual therapeutic doses, 50–300 mg/l in people taking high doses, and 700–1400 mg/l following acute overdose. Salicylate is also produced as a result of exposure to bismuth subsalicylate, methyl salicylate, and sodium salicylate. Aspirin is known to interact with other drugs. For example, acetazolamide and ammonium chloride are known to enhance the intoxicating effect of salicylates, and alcohol also increases the gastrointestinal bleeding associated with these types of drugs. Aspirin is known to displace a number of drugs from protein-binding sites in the blood, including the antidiabetic drugs tolbutamide and chlorpropamide, warfarin, methotrexate, phenytoin, probenecid, valproic acid (as well as interfering with beta oxidation, an important part of valproate metabolism), and other NSAIDs. Corticosteroids may also reduce the concentration of aspirin. Ibuprofen can negate the antiplatelet effect of aspirin used for cardioprotection and stroke prevention. The pharmacological activity of spironolactone may be reduced by taking aspirin, and aspirin is known to compete with penicillin G for renal tubular secretion. Aspirin may also inhibit the absorption of vitamin C. Aspirin decomposes rapidly in solutions of ammonium acetate or the acetates, carbonates, citrates, or hydroxides of the alkali metals. It is stable in dry air, but gradually hydrolyses in contact with moisture to acetic and salicylic acids. In solution with alkalis, the hydrolysis proceeds rapidly and the clear solutions formed may consist entirely of acetate and salicylate. 
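The plasma salicylate ranges quoted above overlap (for example, 80 mg/l is consistent with either usual therapeutic dosing or high-dose therapy), so a measured level cannot always be mapped to a unique category. A sketch in Python can only report which of the stated ranges a measurement falls into (illustrative only, not a diagnostic tool):

```python
# Ranges in mg/l as stated in the text; note they deliberately overlap.
RANGES = {
    "usual therapeutic dosing": (30, 100),
    "high-dose therapy": (50, 300),
    "acute overdose": (700, 1400),
}

def matching_ranges(level_mg_per_l):
    """Return the labels of all stated ranges containing the level."""
    return [name for name, (lo, hi) in RANGES.items()
            if lo <= level_mg_per_l <= hi]

print(matching_ranges(80))   # ['usual therapeutic dosing', 'high-dose therapy']
print(matching_ranges(900))  # ['acute overdose']
```

A level falling into no range (e.g. 500 mg/l) simply lies between the quoted bands; the source gives no label for such values, so the sketch returns an empty list.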
Like flour mills, factories that make aspirin tablets must pay attention to how much of the powder gets into the air inside the building, because the powder-air mixture can be explosive. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit in the United States of 5 mg/m3 (time-weighted average). In 1989, the Occupational Safety and Health Administration (OSHA) set a legal permissible exposure limit for aspirin of 5 mg/m3, but this was vacated by the AFL-CIO v. OSHA decision in 1993. The synthesis of aspirin is classified as an esterification reaction. Salicylic acid is treated with acetic anhydride, an acid derivative, causing a chemical reaction that turns salicylic acid's hydroxyl group into an ester group (R-OH → R-OCOCH3). This process yields aspirin and acetic acid, which is considered a byproduct of the reaction. Small amounts of sulfuric acid (and occasionally phosphoric acid) are almost always used as a catalyst. This method is commonly employed in undergraduate teaching labs. Formulations containing high concentrations of aspirin often smell like vinegar because aspirin can decompose through hydrolysis in moist conditions, yielding salicylic and acetic acids. Aspirin, an acetyl derivative of salicylic acid, is a white, crystalline, weakly acidic substance. Its acid dissociation constant (p"K"a) is 3.5. Polymorphism, or the ability of a substance to form more than one crystal structure, is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph. For a long time, only one crystal structure for aspirin was known. That aspirin might have a second crystalline form had been suspected since the 1960s. The elusive second polymorph was first discovered by Vishweshwar and coworkers in 2005, and fine structural details were given by Bond "et al." 
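The esterification described above has one-to-one stoichiometry: each mole of salicylic acid yields at most one mole of aspirin, so the theoretical yield follows from molar masses alone. A sketch in Python (the molar masses are standard reference values for C7H6O3 and C9H8O4, not taken from the text):

```python
M_SALICYLIC = 138.12   # g/mol, salicylic acid C7H6O3 (standard value)
M_ASPIRIN   = 180.16   # g/mol, aspirin C9H8O4 (standard value)

def theoretical_yield_g(salicylic_acid_g):
    """1:1 stoichiometry: moles of salicylic acid -> moles of aspirin."""
    return salicylic_acid_g / M_SALICYLIC * M_ASPIRIN

def percent_yield(actual_g, salicylic_acid_g):
    """Percent yield relative to the stoichiometric maximum."""
    return 100 * actual_g / theoretical_yield_g(salicylic_acid_g)

print(f"{theoretical_yield_g(10.0):.2f} g")   # ~13.04 g of aspirin from 10 g
print(f"{percent_yield(9.5, 10.0):.1f} %")    # yield if 9.5 g are recovered
```

This is the same calculation students perform in the undergraduate teaching labs mentioned above, where recovered mass is compared against the stoichiometric maximum.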
A new crystal type was found after attempted cocrystallization of aspirin and levetiracetam from hot acetonitrile. Form II is stable only at 100 K and reverts to form I at ambient temperature. In the (unambiguous) form I, two aspirin molecules form centrosymmetric dimers through the acetyl groups with the (acidic) methyl proton to carbonyl hydrogen bonds, and in the newly claimed form II, each aspirin molecule forms the same hydrogen bonds with two neighboring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures. In 1971, British pharmacologist John Robert Vane, then employed by the Royal College of Surgeons in London, showed aspirin suppressed the production of prostaglandins and thromboxanes. For this discovery he was awarded the 1982 Nobel Prize in Physiology or Medicine, jointly with Sune Bergström and Bengt Ingemar Samuelsson. Aspirin's ability to suppress the production of prostaglandins and thromboxanes is due to its irreversible inactivation of the cyclooxygenase (COX; officially known as prostaglandin-endoperoxide synthase, PTGS) enzyme required for prostaglandin and thromboxane synthesis. Aspirin acts as an acetylating agent: an acetyl group is covalently attached to a serine residue in the active site of the PTGS enzyme (suicide inhibition). This makes aspirin different from other NSAIDs (such as diclofenac and ibuprofen), which are reversible inhibitors. Low-dose aspirin use irreversibly blocks the formation of thromboxane A2 in platelets, producing an inhibitory effect on platelet aggregation during the lifetime of the affected platelet (8–9 days). This antithrombotic property makes aspirin useful for reducing the incidence of heart attacks in people who have had a heart attack, unstable angina, ischemic stroke or transient ischemic attack. 
40 mg of aspirin a day is able to inhibit a large proportion of maximum thromboxane A2 release provoked acutely, with prostaglandin I2 synthesis being little affected; however, higher doses of aspirin are required to attain further inhibition. Prostaglandins, local hormones produced in the body, have diverse effects, including the transmission of pain information to the brain, modulation of the hypothalamic thermostat, and inflammation. Thromboxanes are responsible for the aggregation of platelets that form blood clots. Heart attacks are caused primarily by blood clots, and low doses of aspirin are seen as an effective medical intervention for acute myocardial infarction.

At least two different types of cyclooxygenases, COX-1 and COX-2, are acted on by aspirin. Aspirin irreversibly inhibits COX-1 and modifies the enzymatic activity of COX-2. COX-2 normally produces prostanoids, most of which are proinflammatory, whereas aspirin-modified PTGS2 produces lipoxins, most of which are anti-inflammatory. Newer NSAID drugs, the COX-2 inhibitors (coxibs), have been developed to inhibit only PTGS2, with the intent of reducing the incidence of gastrointestinal side effects. However, several COX-2 inhibitors, such as rofecoxib (Vioxx), have been withdrawn from the market after evidence emerged that PTGS2 inhibitors increase the risk of heart attack and stroke. Endothelial cells lining the microvasculature in the body are proposed to express PTGS2; when PTGS2 is selectively inhibited, prostaglandin production (specifically, PGI2; prostacyclin) is downregulated with respect to thromboxane levels, as PTGS1 in platelets is unaffected. Thus, the protective anticoagulative effect of PGI2 is removed, increasing the risk of thrombus and associated heart attacks and other circulatory problems. Since platelets have no DNA, they are unable to synthesize new PTGS once aspirin has irreversibly inhibited the enzyme, an important difference from reversible inhibitors.
Furthermore, while inhibiting the ability of COX-2 to form pro-inflammatory products such as the prostaglandins, aspirin converts this enzyme's activity from a prostaglandin-forming cyclooxygenase to a lipoxygenase-like enzyme: aspirin-treated COX-2 metabolizes a variety of polyunsaturated fatty acids to hydroperoxy products, which are then further metabolized to specialized proresolving mediators such as the aspirin-triggered lipoxins, aspirin-triggered resolvins, and aspirin-triggered maresins. These mediators possess potent anti-inflammatory activity. It is proposed that this aspirin-triggered transition of COX-2 from cyclooxygenase to lipoxygenase activity, and the consequent formation of specialized proresolving mediators, contributes to the anti-inflammatory effects of aspirin.

Aspirin has been shown to have at least three additional modes of action. It uncouples oxidative phosphorylation in cartilaginous (and hepatic) mitochondria by diffusing from the inner membrane space as a proton carrier back into the mitochondrial matrix, where it ionizes once again to release protons; aspirin thus buffers and transports the protons. When high doses are given, it may actually cause fever, owing to the heat released from the electron transport chain, as opposed to the antipyretic action of aspirin seen with lower doses. In addition, aspirin induces the formation of NO-radicals in the body, which have been shown in mice to reduce inflammation through an independent mechanism: reduced leukocyte adhesion. Leukocyte adhesion is an important step in the immune response to infection; however, evidence is insufficient to show that aspirin helps to fight infection. More recent data also suggest salicylic acid and its derivatives modulate signaling through NF-κB, a transcription factor complex that plays a central role in many biological processes, including inflammation.
Aspirin is readily broken down in the body to salicylic acid, which itself has anti-inflammatory, antipyretic, and analgesic effects. In 2012, salicylic acid was found to activate AMP-activated protein kinase, which has been suggested as a possible explanation for some of the effects of both salicylic acid and aspirin. The acetyl portion of the aspirin molecule has its own targets. Acetylation of cellular proteins is a well-established phenomenon in the regulation of protein function at the post-translational level. Aspirin is able to acetylate several other targets in addition to the COX isoenzymes, and these acetylation reactions may explain many hitherto unexplained effects of aspirin.

Acetylsalicylic acid is a weak acid, and very little of it is ionized in the stomach after oral administration; it is therefore quickly absorbed through the cell membrane in the acidic conditions of the stomach. The increased pH and larger surface area of the small intestine cause aspirin to be absorbed more slowly there, as more of it is ionized. Owing to the formation of concretions, aspirin is absorbed much more slowly during overdose, and plasma concentrations can continue to rise for up to 24 hours after ingestion.

About 50–80% of salicylate in the blood is bound to albumin, while the rest remains in the active, ionized state; protein binding is concentration-dependent, and saturation of binding sites leads to more free salicylate and increased toxicity. The volume of distribution is 0.1–0.2 L/kg. Acidosis increases the volume of distribution by enhancing tissue penetration of salicylates.

As much as 80% of therapeutic doses of salicylic acid is metabolized in the liver. Conjugation with glycine forms salicyluric acid, and conjugation with glucuronic acid forms two different glucuronide esters. The conjugate with the acetyl group intact is referred to as the "acyl glucuronide"; the deacetylated conjugate is the "phenolic glucuronide".
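The pH-dependent ionization behind this absorption pattern follows the Henderson-Hasselbalch relationship. A minimal sketch using the pKa of 3.5 given in the text; the stomach and intestinal pH values below are illustrative assumptions, not figures from the article:

```python
# Henderson-Hasselbalch sketch: fraction of a weak acid that is ionized at
# a given pH, using the pKa of 3.5 quoted in the text for aspirin. The
# stomach/intestine pH values below are illustrative assumptions.

def ionized_fraction(pH: float, pKa: float = 3.5) -> float:
    """For a weak acid, ionized/total = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

if __name__ == "__main__":
    for site, pH in (("stomach, pH ~2", 2.0), ("small intestine, pH ~6.5", 6.5)):
        f = ionized_fraction(pH)
        print(f"{site}: {f:.1%} ionized, {1 - f:.1%} un-ionized (absorbable)")
```

At pH 2 only about 3% of the drug is ionized, so most of it is in the membrane-permeable un-ionized form; at pH 6.5 the proportions are reversed, consistent with the slower intestinal absorption described above.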
These metabolic pathways have only a limited capacity. Small amounts of salicylic acid are also hydroxylated to gentisic acid. With large salicylate doses, the kinetics switch from first-order to zero-order, as metabolic pathways become saturated and renal excretion becomes increasingly important. Salicylates are excreted mainly by the kidneys as salicyluric acid (75%), free salicylic acid (10%), salicylic phenolic glucuronide (10%), acyl glucuronide (5%), gentisic acid (< 1%), and 2,3-dihydroxybenzoic acid. When small doses (less than 250 mg in an adult) are ingested, all pathways proceed by first-order kinetics, with an elimination half-life of about 2.0 h to 4.5 h. When higher doses of salicylate are ingested (more than 4 g), the half-life becomes much longer (15 h to 30 h), because the biotransformation pathways concerned with the formation of salicyluric acid and salicyl phenolic glucuronide become saturated. Renal excretion of salicylic acid becomes increasingly important as the metabolic pathways become saturated, and it is extremely sensitive to changes in urinary pH: a 10- to 20-fold increase in renal clearance occurs when urine pH is increased from 5 to 8. Urinary alkalinization exploits this particular aspect of salicylate elimination. Short-term aspirin use in therapeutic doses has been found to precipitate reversible acute kidney injury in patients ill with glomerulonephritis or cirrhosis, and aspirin is contraindicated for some patients with chronic kidney disease and some children with congestive heart failure.

Medicines made from willow and other salicylate-rich plants appear in clay tablets from ancient Sumer as well as the Ebers Papyrus from ancient Egypt. Hippocrates referred to the use of salicylic tea to reduce fevers around 400 BC, and such remedies were part of the pharmacopoeia of Western medicine in classical antiquity and the Middle Ages.
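The switch from first-order to zero-order kinetics at high doses described above is characteristic of saturable (Michaelis-Menten) elimination. A minimal numerical sketch; the Vmax and Km values are illustrative assumptions, not published salicylate parameters:

```python
# Saturable (Michaelis-Menten) elimination: dC/dt = -Vmax * C / (Km + C).
# At C << Km this reduces to first-order decay; at C >> Km it approaches
# zero-order (constant-rate) elimination. Vmax and Km are assumed values.

def simulate(c0: float, vmax: float = 30.0, km: float = 50.0,
             dt_h: float = 0.01, hours: float = 24.0) -> float:
    """Return the concentration left after `hours` of elimination,
    integrating with a simple forward-Euler step."""
    c, t = c0, 0.0
    while t < hours:
        c = max(0.0, c - vmax * c / (km + c) * dt_h)
        t += dt_h
    return c

if __name__ == "__main__":
    for c0 in (10.0, 100.0, 500.0):
        left = simulate(c0, hours=6.0)
        print(f"C0 = {c0:5.0f}: {100 * left / c0:5.1f}% remaining after 6 h")
```

At low starting concentrations the fraction eliminated per hour is dose-independent (first-order); at high concentrations elimination approaches a fixed amount per hour (zero-order), so the apparent half-life lengthens, as the text describes.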
Willow bark extract became recognized for its specific effects on fever, pain and inflammation in the mid-eighteenth century. By the nineteenth century, pharmacists were experimenting with and prescribing a variety of chemicals related to salicylic acid, the active component of willow extract. In 1853, chemist Charles Frédéric Gerhardt treated sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time; in the second half of the nineteenth century, other academic chemists established the compound's chemical structure and devised more efficient methods of synthesis. In 1897, scientists at the drug and dye firm Bayer began investigating acetylsalicylic acid as a less-irritating replacement for standard common salicylate medicines, and identified a new way to synthesize it. By 1899, Bayer had dubbed this drug Aspirin and was selling it around the world. The word "Aspirin" was Bayer's brand name rather than the generic name of the drug; however, Bayer's rights to the trademark were lost or sold in many countries. Aspirin's popularity grew over the first half of the twentieth century, leading to fierce competition among a proliferation of aspirin brands and products, before declining after the development of acetaminophen/paracetamol in 1956 and ibuprofen in 1962. In the 1960s and 1970s, John Vane and others discovered the basic mechanism of aspirin's effects, while clinical trials and other studies from the 1960s to the 1980s established aspirin's efficacy as an anti-clotting agent that reduces the risk of clotting diseases. The initial large studies on the use of low-dose aspirin to prevent heart attacks, published in the 1970s and 1980s, helped spur reform in clinical research ethics and guidelines for human subject research and US federal law, and are often cited as examples of clinical trials that included only men, but from which people drew general conclusions that did not hold true for women.
Aspirin sales revived considerably in the last decades of the twentieth century, and remain strong in the twenty-first, with widespread use as a preventive treatment for heart attacks and strokes. Bayer lost its trademark for Aspirin in the United States in actions taken between 1918 and 1921 because it had failed to use the name for its own product correctly and had for years allowed the use of "Aspirin" by other manufacturers without defending its intellectual property rights. Today, aspirin is a generic trademark in many countries. Aspirin, with a capital "A", remains a registered trademark of Bayer for acetylsalicylic acid in Germany, Canada, Mexico, and over 80 other countries, with different packaging and physical presentation in each market.

Aspirin is sometimes used in veterinary medicine as an anticoagulant or to relieve pain associated with musculoskeletal inflammation or osteoarthritis. Aspirin should only be given to animals under the direct supervision of a veterinarian, as adverse effects, including gastrointestinal issues, are common. An aspirin overdose in any species may result in salicylate poisoning, characterized by hemorrhaging, seizures, coma, and even death. Dogs are better able to tolerate aspirin than cats are. Cats metabolize aspirin slowly because they are deficient in forming the glucuronide conjugates that aid in the excretion of aspirin, making it potentially toxic if dosing is not spaced out properly. No clinical signs of toxicosis occurred when cats were given 25 mg/kg of aspirin every 48 hours for 4 weeks, but the recommended dose for relief of pain and fever and for treating blood clotting diseases in cats is 10 mg/kg every 48 hours to allow time for metabolization.
https://en.wikipedia.org/wiki?curid=1525
Abner In the Hebrew Bible, Abner ("’Avner") was the cousin of King Saul and the commander-in-chief of his army. His name also appears as "Abiner son of Ner", where the longer form Abiner means "my father is Ner". Abner is initially mentioned incidentally in Saul's history, first appearing as the son of Ner, Saul's uncle, and the commander of Saul's army. He then comes into the story again as the commander who introduced David to Saul following David's killing of Goliath. He is not mentioned in the account of the disastrous battle of Gilboa when Saul's power was crushed. Seizing Ish-bosheth, also called Eshbaal, the youngest but only surviving of Saul's sons, Abner set him up as king over Israel at Mahanaim, east of the Jordan. David, who was accepted as king by Judah alone, was meanwhile reigning at Hebron, and for some time war was carried on between the two parties. The only engagement between the rival factions which is told at length is noteworthy, inasmuch as it was preceded by an encounter at Gibeon between twelve chosen men from each side, in which the whole twenty-four seem to have perished. In the general engagement which followed, Abner was defeated and put to flight. He was closely pursued by Asahel, brother of Joab, who is said to have been "light of foot as a wild roe". As Asahel would not desist from the pursuit, though warned, Abner was compelled to slay him in self-defence. This originated a deadly feud between the leaders of the opposite parties, for Joab, as next of kin to Asahel, was by the law and custom of the country the avenger of his blood. However, according to Josephus, in Antiquities, Book 7, Chapter 1, Joab had forgiven Abner for the death of his brother, Asahel, the reason being that Abner had slain Asahel honorably in combat after he had first warned him and had no other choice but to kill him in self-defence. This battle was part of a civil war between David and Ish-bosheth, the son of Saul.
After this battle, Abner switched to the side of David and granted him control over the tribe of Benjamin. This act put Abner in David's favor. For some time afterward the war was carried on, the advantage being invariably on the side of David. At length, Ish-bosheth lost the main prop of his tottering cause by accusing Abner of sleeping with Rizpah, one of Saul's concubines, an alliance which, according to contemporary notions, would imply pretensions to the throne. Abner was indignant at the rebuke, and immediately opened negotiations with David, who welcomed him on the condition that his wife Michal should be restored to him. This was done, and the proceedings were ratified by a feast. Almost immediately after, however, Joab, who had been sent away, perhaps intentionally, returned and slew Abner at the gate of Hebron. The ostensible motive for the assassination was a desire to avenge Asahel, and this would be a sufficient justification for the deed according to the moral standard of the time (although Abner should have been safe from such a revenge killing in Hebron, which was a City of Refuge). The conduct of David after the event was such as to show that he had no complicity in the act, though he could not venture to punish its perpetrators. David had Abner buried in Hebron, as it is written: "And David said to all the people who were with him, 'Rend your clothes and gird yourselves with sackcloth, and wail before Abner.' And King David went after the bier. And they buried Abner in Hebron, and the king raised his voice and wept on Abner's grave, and all the people wept." Shortly after Abner's death, Ish-bosheth was assassinated as he slept, and David became king of the reunited kingdoms. The site known as the Tomb of Abner is located not far from the Cave of the Patriarchs in Hebron and receives visitors throughout the year. Many travelers have recorded visiting the tomb over the centuries.
Benjamin of Tudela, who began his journeys in 1165, wrote in his journal, "The valley of Eshkhol is north of the mountain upon which Hebron stood, and the cave of Makhpela is east thereof. A bow-shot west of the cave is the sepulchre of Abner the son of Ner." A rabbi in the 12th century records visiting the tomb, as reprinted in Elkan Nathan Adler's book "Jewish Travellers in the Middle Ages: 19 Firsthand Accounts". The account states, "I, Jacob, the son of R. Nathaniel ha Cohen, journeyed with much difficulty, but God helped me to enter the Holy Land, and I saw the graves of our righteous Patriarchs in Hebron and the grave of Abner the son of Ner." Adler postulates that the visit must have occurred prior to Saladin's capture of Jerusalem in 1187. Rabbi Moses Basola records visiting the tomb in 1522. He states, "Abner's grave is in the middle of Hebron; the Muslims built a mosque over it." Another visitor in the 1500s states that "at the entrance to the market in Hebron, at the top of the hill against the wall, Abner ben Ner is buried, in a church, in a cave." This visit was recorded in Sefer Yihus ha-Tzaddiqim (Book of Genealogy of the Righteous), a collection of travelogues from 1561. Abraham Moshe Lunz reprinted the book in 1896. Menahem Mendel of Kamenitz, considered the first hotelier in the Land of Israel, wrote about the Tomb of Abner in his 1839 book "Korot Ha-Itim", which was translated into English as "The Book of the Occurrences of the Times to Jeshurun in the Land of Israel." He states, "Here I write of the graves of the righteous to which I paid my respects. Hebron – Described above is the character and order of behavior of those coming to pray at the Cave of ha-Machpelah. I went there, between the stores, over the grave of Avner ben Ner and was required to pay a Yishmaeli – the grave was in his courtyard – to allow me to enter." The author and traveler J. J.
Benjamin mentioned visiting the tomb in his book "Eight Years in Asia and Africa" (1859, Hanover). He states, "On leaving the Sepulchre of the Patriarchs, and proceeding on the road leading to the Jewish quarter, to the left of the courtyard, is seen a Turkish dwelling house, by the side of which is a small grotto, to which there is a descent of several steps. This is the tomb of Abner, captain of King Saul. It is held in much esteem by the Arabs, and the proprietor of it takes care that it is always kept in the best order. He requires from those who visit it a small gratuity." The British scholar Israel Abrahams wrote in his 1912 book "The Book of Delight and Other Papers", "Hebron was the seat of David’s rule over Judea. Abner was slain here by Joab, and was buried here – they still show Abner’s tomb in the garden of a large house within the city. By the pool at Hebron were slain the murderers of Ishbosheth..." Over the years the tomb fell into disrepair and neglect. It was closed to the public in 1994. In 1996, a group of 12 Israeli women filed a petition with the Supreme Court requesting the government to reopen the Tomb of Abner. More requests were made over the years and eventually arrangements were made to have the site open to the general public on ten days throughout the year corresponding to the ten days that the Isaac Hall of the Cave of the Patriarchs is open. In early 2007 new mezuzot were affixed to the entrance of the site.
https://en.wikipedia.org/wiki?curid=1526
Ahmed I Ahmed I (April 1590 – 22 November 1617) was the Sultan of the Ottoman Empire from 1603 until his death in 1617. Ahmed's reign is noteworthy for marking the end of the Ottoman tradition of royal fratricide; henceforth Ottoman rulers would no longer execute their brothers upon accession to the throne. He is also well known for his construction of the Blue Mosque, one of the most famous mosques in Turkey. Ahmed was probably born in April 1590 at the Manisa Palace, Manisa, when his father Şehzade Mehmed was still a prince and the governor of the Sanjak of Manisa. His mother was Handan Sultan. After his grandfather Murad III's death in 1595, his father came to Constantinople and ascended the throne as Sultan Mehmed III. Mehmed ordered the execution of nineteen of his own brothers and half-brothers. Ahmed's elder brother Şehzade Mahmud was also executed by his father Mehmed on 7 June 1603, just before Mehmed's own death on 22 December 1603. Mahmud was buried along with his mother in a separate mausoleum built by Ahmed in Şehzade Mosque, Constantinople. Ahmed ascended the throne after his father's death in 1603, at the age of thirteen, when his powerful grandmother Safiye Sultan was still alive. A distant uncle of Ahmed, Yahya, resented his accession to the throne and spent his life scheming to become Sultan. Ahmed broke with the traditional fratricide following previous enthronements and did not order the execution of his brother Mustafa. Instead, Mustafa was sent to live at the old palace at Bayezit along with their grandmother Safiye Sultan. This was most likely due to Ahmed's young age: he had not yet demonstrated his ability to sire children, and Mustafa was then the only other candidate for the Ottoman throne. His brother's execution would have endangered the dynasty, and thus he was spared. In the earlier part of his reign Ahmed I showed decision and vigor, which were belied by his subsequent conduct.
The wars in Hungary and Persia, which attended his accession, terminated unfavourably for the empire. Its prestige was further tarnished in the Treaty of Zsitvatorok, signed in 1606, whereby the annual tribute paid by Austria was abolished. Following the crushing defeat in the Ottoman–Safavid War (1603–18) against the neighbouring rivals Safavid Empire, led by Shah Abbas the Great, Georgia, Azerbaijan and other vast territories in the Caucasus were ceded back to Persia per the Treaty of Nasuh Pasha in 1612, territories that had been temporarily conquered in the Ottoman–Safavid War (1578–90). The new borders were drawn per the same line as confirmed in the Peace of Amasya of 1555. The Ottoman–Safavid War had begun shortly before the death of Ahmed's father Mehmed III. Upon ascending the throne, Ahmed I appointed Cigalazade Yusuf Sinan Pasha as the commander of the eastern army. The army marched from Constantinople on 15 June 1604, which was too late, and by the time it had arrived on the eastern front on 8 November 1604, the Safavid army had captured Yerevan and entered the Kars Eyalet, and could only be stopped in Akhaltsikhe. Despite the conditions being favourable, Sinan Pasha decided to stay for the winter in Van, but then marched to Erzurum to stop an incoming Safavid attack. This caused unrest within the army and the year was practically wasted for the Ottomans. In 1605, Sinan Pasha marched to take Tabriz, but the army was undermined by Köse Sefer Pasha, the Beylerbey of Erzurum, marching independently from Sinan Pasha and consequently being taken prisoner by the Safavids. The Ottoman army was routed at Urmia and had to flee firstly to Van and then to Diyarbekir. Here, Sinan Pasha sparked a rebellion by executing the Beylerbey of Aleppo, Canbulatoğlu Hüseyin Pasha, who had come to provide help, upon the pretext that he had arrived too late. He soon died himself and the Safavid army was able to capture Ganja, Shirvan and Shamakhi in Azerbaijan. 
The Long Turkish War between the Ottomans and the Habsburg Monarchy had been going on for over a decade by the time Ahmed ascended the throne. Grand Vizier Malkoç Ali Pasha marched to the western front from Constantinople on 3 June 1604 and arrived in Belgrade, but died there, so Lala Mehmed Pasha was appointed as the Grand Vizier and the commander of the western army. Under Mehmed Pasha, the western army recaptured Pest and Vác, but failed to capture Esztergom as the siege was lifted due to unfavourable weather and the objections of the soldiers. Meanwhile, the Prince of Transylvania, Stephen Bocskay, who struggled for the region's independence and had formerly supported the Habsburgs, sent a messenger to the Porte asking for help. Upon the promise of help, his forces also joined the Ottoman forces in Belgrade. With this help, the Ottoman army besieged Esztergom and captured it on 4 November 1605. Bocskai, with Ottoman help, captured Nové Zámky (Uyvar) and forces under Tiryaki Hasan Pasha took Veszprém and Palota. Sarhoş İbrahim Pasha, the Beylerbey of Nagykanizsa (Kanije), attacked the Austrian region of Istria. However, with Jelali revolts in Anatolia more dangerous than ever and a defeat in the eastern front, Mehmed Pasha was called to Constantinople. Mehmed Pasha suddenly died there, whilst preparing to leave for the east. Kuyucu Murad Pasha then negotiated the Peace of Zsitvatorok, which abolished the tribute of 30,000 ducats paid by Austria and addressed the Habsburg emperor as the equal of the Ottoman sultan. The Jelali revolts were a strong factor in the Ottomans' acceptance of the terms. This signaled the end of Ottoman growth in Europe. Resentment over the war with the Habsburgs and heavy taxation, along with the weakness of the Ottoman military response, combined to make the reign of Ahmed I the zenith of the Jelali revolts. 
Tavil Ahmed launched a revolt soon after the coronation of Ahmed I and defeated Nasuh Pasha and the Beylerbey of Anatolia, Kecdehan Ali Pasha. In 1605, Tavil Ahmed was offered the position of the Beylerbey of Shahrizor to stop his rebellion, but soon afterwards he went on to capture Harput. His son, Mehmed, obtained the governorship of Baghdad with a fake firman and defeated the forces of Nasuh Pasha sent to defeat him. Meanwhile, Canbulatoğlu Ali Pasha united his forces with the Druze Sheikh Ma'noğlu Fahreddin to defeat the Amir of Tripoli Seyfoğlu Yusuf. He went on to take control of the Adana area, forming an army and issuing coins. His forces routed the army of the newly appointed Beylerbey of Aleppo, Hüseyin Pasha. Grand Vizier Boşnak Dervish Mehmed Pasha was executed for the weakness he showed against the Jelalis. He was replaced by Kuyucu Murad Pasha, who marched to Syria with his forces to defeat the 30,000-strong rebel army with great difficulty, albeit with a decisive result, on 24 October 1607. Meanwhile, he pretended to forgive the rebels in Anatolia and appointed the rebel Kalenderoğlu, who was active in Manisa and Bursa, as the sanjakbey of Ankara. Baghdad was recaptured in 1607 as well. Canbulatoğlu Ali Pasha fled to Constantinople and asked for forgiveness from Ahmed I, who appointed him to Timișoara and later Belgrade, but then executed him due to his misrule there. Meanwhile, Kalenderoğlu was not allowed in the city by the people of Ankara and rebelled again, only to be crushed by Murad Pasha's forces. Kalenderoğlu ended up fleeing to Persia. Murad Pasha then suppressed some smaller revolts in Central Anatolia and suppressed other Jelali chiefs by inviting them to join the army. Due to the widespread violence of the Jelali revolts, a great number of people had fled their villages and a lot of villages were destroyed. Some military chiefs had claimed these abandoned villages as their property. 
This deprived the Porte of tax income, and on 30 September 1609, Ahmed I issued a letter guaranteeing the rights of the villagers. He then worked on the resettlement of abandoned villages. The new Grand Vizier, Nasuh Pasha, did not want to fight with the Safavids. The Safavid Shah also sent a letter saying that he was willing to sign a peace, under which he would send 200 loads of silk every year to Constantinople. On 20 November 1612, the Treaty of Nasuh Pasha was signed, which ceded all the lands the Ottoman Empire had gained in the war of 1578–90 back to Persia and reinstated the 1555 boundaries. However, the peace ended in 1615 when the Shah did not send the 200 loads of silk. On 22 May 1615, Grand Vizier Öküz Mehmed Pasha was assigned to organize an attack on Persia. Mehmed Pasha delayed the attack until the next year, by which time the Safavids had made their preparations and attacked Ganja. In April 1616, Mehmed Pasha left Aleppo with a large army and marched to Yerevan, where he failed to take the city and withdrew to Erzurum. He was removed from his post and replaced by Damat Halil Pasha. Halil Pasha went for the winter to Diyarbekir, while the Khan of Crimea, Canibek Giray, attacked the areas of Ganja, Nakhichevan and Julfa. Ahmed I renewed trade treaties with England, France and Venice. In July 1612, the first ever trade treaty with the Dutch Republic was signed. He expanded the capitulations given to France, specifying that merchants from Spain, Ragusa, Genoa, Ancona and Florence could trade under the French flag. Sultan Ahmed constructed the Sultan Ahmed Mosque, the magnum opus of Ottoman architecture, across from the Hagia Sophia. The sultan attended the breaking of the ground with a golden pickaxe to begin the construction of the mosque complex. An incident nearly broke out after the sultan discovered that the Blue Mosque contained the same number of minarets as the grand mosque of Mecca.
Ahmed became furious at this fault and remained remorseful until the Shaykh-ul-Islam recommended that he erect another minaret at the grand mosque of Mecca, and the matter was solved. Ahmed became delightedly involved in the eleventh comprehensive renovation of the Kaaba, which had just been damaged by flooding. He sent craftsmen from Constantinople, and the golden rain gutter that kept rain from collecting on the roof of the Kaaba was successfully renewed. It was again during the era of Sultan Ahmed that an iron web was placed inside the Zamzam Well in Mecca. The placement of this web, about three feet below the water level, was a response to lunatics who jumped into the well, imagining a promise of a heroic death. In Medina, the city of the Prophet Muhammad, a new pulpit made of white marble and shipped from Istanbul arrived in the mosque of the prophet and replaced the old, worn-out pulpit. It is also known that Sultan Ahmed erected two more mosques in Uskudar on the Asian side of Istanbul; however, neither of them has survived. The sultan had a crest carved with the footprint of Muhammad that he would wear on Fridays and festive days, one of the most significant displays of affection for the prophet in Ottoman history; engraved inside the crest was a poem he composed. Sultan Ahmed was known for his skills in fencing, poetry, horseback riding, and fluency in several languages. Ahmed was a poet who wrote a number of political and lyrical works under the name Bahti. But while supportive of poetry, he displayed an aversion to artistry and continued his father's neglect of miniature painting. This was connected to a devout religiosity that declared the depiction of living things in art an immoral rivalry to Allah's creation. Accordingly, Ahmed patronized scholars, calligraphers, and pious men. Hence he commissioned a book entitled "The Quintessence of Histories" to be worked upon by calligraphers.
He also attempted to enforce conformance to Islamic laws and traditions, restoring the old regulations that prohibited alcohol, and he attempted to enforce attendance at Friday prayers and the paying of alms to the poor in the proper way. He was responsible for the destruction of the musical clock organ that Elizabeth I of England had sent to the court during the reign of his father; the reason for this may have been Ahmed's religious objection to figurative art. Ahmed I died of typhus and gastric bleeding on 22 November 1617, aged 27, at the Topkapı Palace, Istanbul. He was buried in the Ahmed I Mausoleum, Sultan Ahmed Mosque. He was succeeded by his younger brother Şehzade Mustafa as Sultan Mustafa I. Later, three of Ahmed's sons ascended to the throne: Osman II (r. 1618–22), Murad IV (r. 1623–40) and Ibrahim (r. 1640–48). Today, Ahmed I is remembered mainly for the construction of the Sultan Ahmed Mosque (also known as the Blue Mosque), one of the masterpieces of Islamic architecture. The area in Fatih around the mosque is today called Sultanahmet. He died at Topkapı Palace in Constantinople and is buried in a mausoleum right outside the walls of the famous mosque. In the 2015 TV series "", Ahmed I is portrayed by Turkish actor Ekin Koç.
https://en.wikipedia.org/wiki?curid=1527
Ahmed II Ahmed II ("Aḥmed-i sānī") (25 February 1643 or 1 August 1642 – 6 February 1695) was the Sultan of the Ottoman Empire from 1691 to 1695. Ahmed II was born on 25 February 1643 or 1 August 1642, the son of Sultan Ibrahim and Muazzez Sultan. On 21 October 1649, Ahmed, along with his brothers Mehmed and Suleiman, was circumcised. During the reigns of his older brothers, Ahmed was imprisoned in the Kafes, where he stayed almost 43 years. During his reign, Sultan Ahmed II devoted most of his attention to the wars against the Habsburgs and related foreign policy, governmental and economic issues. Of these, the most important were the tax reforms and the introduction of the lifelong tax farm system (malikane) (see tax farming). Following the recovery of Belgrade under his predecessor, Suleiman II, the military frontier reached a rough stalemate on the Danube, with the Habsburgs no longer able to advance south of it, and the Ottomans attempting, ultimately unsuccessfully, to regain the initiative north of it. Among the most important features of Ahmed's reign was his reliance on Köprülüzade Fazıl Mustafa Pasha. Following his accession to the throne, Sultan Ahmed II confirmed Köprülüzade Fazıl Mustafa Pasha in his office as grand vizier. In office from 1689, Fazıl Mustafa Pasha was from the famous Köprülü family of grand viziers, and like most of his Köprülü predecessors in the same office, was an able administrator and military commander. Like his father Köprülü Mehmed Pasha (grand vizier 1656–61) before him, he ordered the removal and execution of dozens of corrupt state officials of the previous regime and replaced them with men loyal to himself. He overhauled the tax system by adjusting it to the capabilities of the taxpayers affected by the latest wars. He also reformed troop mobilization and increased the pool of conscripts available for the army by drafting tribesmen in the Balkans and Anatolia.
In October 1690 he recaptured Belgrade (northern Serbia), a key fortress that commanded the confluence of the rivers Danube and Sava; in Ottoman hands since 1521, the fortress had been conquered by the Habsburgs in 1688. Fazıl Mustafa Pasha's victory at Belgrade was a major military achievement that gave the Ottomans hope that the military debacles of the 1680s—which had led to the loss of Hungary and Transylvania, an Ottoman vassal principality ruled by pro-Istanbul Hungarian princes—could be reversed. However, the Ottoman success proved ephemeral. On 19 August 1691, Fazıl Mustafa Pasha suffered a devastating defeat at the Battle of Slankamen (northwest of Belgrade) at the hands of Ludwig Wilhelm von Baden, the Habsburg commander in chief in Hungary, fittingly nicknamed "Türkenlouis" (Louis the Turk) for his victories against the Ottomans. In the confrontation, recognized by contemporaries as "the bloodiest battle of the century," the Ottomans suffered heavy losses: 20,000 men, including the grand vizier. With him, the sultan lost his most capable military commander and the last member of the Köprülü family, who for the previous half century had been instrumental in strengthening the Ottoman military. Under Fazıl Mustafa Pasha's successors, the Ottomans suffered further defeats. In June 1692 the Habsburgs conquered Várad (Oradea, Romania), the seat of an Ottoman governor () since 1660. In 1694 the Ottomans attempted to recapture Várad, but to no avail. On 12 January 1695, they surrendered the fortress of Gyula, the center of an Ottoman sanjak or subprovince since 1566. With the fall of Gyula, the only territory still in Ottoman hands in Hungary was to the east of the river Tisza and to the south of the river Maros, with its center at Temesvár. Three weeks later, on 6 February 1695, Ahmed II died in Edirne Palace.
https://en.wikipedia.org/wiki?curid=1528
Ainu people The Ainu or the Aynu (Ainu: アィヌ, "Aynu", Аину; Japanese: , "Ainu"; Russian: , "Áĭny"), also known as the Ezo (蝦夷) in historical Japanese texts, are an East Asian ethnic group indigenous to Japan (Hokkaidō and formerly North-Eastern Honshū) and Russia (Sakhalin, the Kuril Islands, Khabarovsk Krai and the Kamchatka Peninsula). Official estimates place the total Ainu population of Japan at 25,000. Unofficial estimates place the total population at 200,000 or higher, as the near-total assimilation of the Ainu into Japanese society has resulted in many individuals with Ainu heritage having no knowledge of their ancestry. Recent research suggests that Ainu culture originated from a merger of the Jōmon, Okhotsk and Satsumon cultures. These early inhabitants did not speak the Japanese language and were conquered by the Japanese early in the 9th century. In 1264, the Ainu invaded the land of the Nivkh people. The Ainu also started an expedition into the Amur region, which was then controlled by the Yuan Dynasty, resulting in reprisals by the Mongols, who invaded Sakhalin. Active contact between the Wa-jin (the ethnically Japanese, also known as Yamato-jin) and the Ainu of Ezogashima (now known as Hokkaidō) began in the 13th century. The Ainu formed a society of hunter-gatherers, surviving mainly by hunting and fishing. They followed a religion based on natural phenomena. During the Muromachi period (1336–1573), disputes between the Japanese and Ainu developed into a war. Takeda Nobuhiro killed the Ainu leader, Koshamain. Many Ainu were subject to Japanese rule, which led to violent Ainu revolts such as Koshamain's Revolt in 1456. During the Edo period (1603–1868) the Ainu, who controlled the northern island which is now named Hokkaidō, became increasingly involved in trade with the Japanese, who controlled the southern portion of the island. 
The Tokugawa bakufu (feudal government) granted the Matsumae clan exclusive rights to trade with the Ainu in the northern part of the island. Later, the Matsumae began to lease out trading rights to Japanese merchants, and contact between Japanese and Ainu became more extensive. Throughout this period Ainu were forced to import goods from the Japanese, and epidemic diseases such as smallpox reduced the population. Although the increased contact created by the trade between the Japanese and the Ainu contributed to increased mutual understanding, it also led to conflict which occasionally intensified into violent Ainu revolts. The most important was Shakushain's Revolt (1669–1672), an Ainu rebellion against Japanese authority. Another large-scale revolt by Ainu against Japanese rule was the Menashi-Kunashir Battle in 1789. From 1799 to 1806, the shogunate took direct control of southern Hokkaidō. Ainu men were deported to merchant subcontractors for five- and ten-year terms of service, and were enticed with rewards of food and clothing if they agreed to drop their native language and culture and become Japanese. Ainu women were separated from their husbands and forcibly married to Japanese merchants and fishermen, who were told that a taboo forbade them from bringing their wives to Hokkaidō. Women were often tortured if they resisted rape by their new Japanese husbands, and frequently ran away into the mountains. These policies of family separation and forcible assimilation, combined with the impact of smallpox, caused the Ainu population to drop significantly in the early 19th century. In the 18th century, there were 80,000 Ainu. In 1868, there were about 15,000 Ainu in Hokkaidō, 2,000 in Sakhalin and around 100 in the Kuril Islands. The beginning of the Meiji Restoration in 1868 proved a turning point for Ainu culture. The Japanese government introduced a variety of social, political, and economic reforms in hope of modernizing the country in the Western style. 
One innovation involved the annexation of Hokkaidō. Sjöberg quotes Baba's (1890) account of the Japanese government's reasoning: ... The development of Japan's large northern island had several objectives: First, it was seen as a means to defend Japan from a rapidly developing and expansionist Russia. Second ... it offered a solution to the unemployment for the former samurai class ... Finally, development promised to yield the needed natural resources for a growing capitalist economy. In 1899, the Japanese government passed an act labelling the Ainu as "former aborigines", with the idea that they would assimilate—this resulted in the Japanese government taking the land where the Ainu people lived and placing it from then on under Japanese control. Also at this time, the Ainu were granted automatic Japanese citizenship, effectively denying them the status of an indigenous group. The Ainu were becoming increasingly marginalized on their own land—over a period of only 36 years, the Ainu went from being a relatively isolated group of people to having their land, language, religion and customs assimilated into those of the Japanese. In addition to this, the land the Ainu lived on was distributed to the Wa-Jin who had decided to move to Hokkaidō, encouraged by the Japanese government of the Meiji era to take advantage of the island's abundant natural resources, and to create and maintain farms in the model of Western industrial agriculture. While the process was openly referred to at the time as , the notion was later reframed by Japanese elites to the currently common usage "kaitaku" (), which instead conveys a sense of opening up or reclamation of the Ainu lands. In addition, factories such as flour mills and beer breweries, along with mining operations, resulted in the creation of infrastructure such as roads and railway lines, during a development period that lasted until 1904. 
During this time, the Ainu were forced to learn Japanese, required to adopt Japanese names, and ordered to cease religious practices such as animal sacrifice and the custom of tattooing. The 1899 act was replaced in 1997—until then the government had stated there were no ethnic minority groups. It was not until June 6, 2008, that Japan formally recognised the Ainu as an indigenous group (see § Official recognition in Japan). The vast majority of these Wa-Jin men are believed to have compelled Ainu women to partner with them as local wives. Intermarriage between Japanese and Ainu was actively promoted by the Ainu to lessen the chances of discrimination against their offspring. As a result, many Ainu are indistinguishable from their Japanese neighbors, but some Ainu-Japanese are interested in traditional Ainu culture. For example, Oki, born as a child of an Ainu father and a Japanese mother, became a musician who plays the traditional Ainu instrument "tonkori". There are also many small towns in the southeastern or Hidaka region where ethnic Ainu live such as in Nibutani (Ainu: "Niputay"). Many live in Sambutsu especially, on the eastern coast. In 1966 the number of "pure" Ainu was about 300. Their most widely known ethnonym is derived from the word "ainu", which means "human" (particularly as opposed to "kamui", divine beings). Ainu also identify themselves as "Utari" ("comrade" or "people" in the Ainu language). Official documents use both names. On June 6, 2008, the Government of Japan passed a bipartisan, non-binding resolution calling upon the government to recognize the Ainu people as indigenous to Japan, and urging an end to discrimination against the group. The resolution recognized the Ainu people as "an indigenous people with a distinct language, religion and culture". 
The government immediately followed with a statement acknowledging its recognition, stating, "The government would like to solemnly accept the historical fact that many Ainu were discriminated against and forced into poverty with the advancement of modernization, despite being legally equal to (Japanese) people." In February 2019, the Japanese government consolidated the legal status of the Ainu people by passing a bill which officially recognizes the Ainu as an indigenous people, based on Article 14 of the Constitution ("all of the people are equal under the law"), and bans discrimination by race. Furthermore, the bill aims at simplifying procedures for obtaining various permissions from authorities with regard to the traditional lifestyle of the Ainu, and at nurturing the identity and cultures of the Ainu without defining the ethnic group by blood lineage. A bill passed in April 2019 officially recognizes the Ainu of Hokkaidō as the indigenous people of Japan. According to the "Asahi Shimbun", the Ainu were due to participate in the opening ceremony of the 2020 Olympic Games in Japan, but owing to logistical constraints this was dropped in February 2020. A national Ainu museum and park was scheduled to open on April 24, 2020, in Shiraoi, Hokkaidō, prior to the Tokyo Olympic and Paralympic Games scheduled for the same year. The park will serve as a base for the protection and promotion of Ainu people, culture and language. As a result of the Treaty of Saint Petersburg (1875), the Kuril Islands – along with their Ainu inhabitants – came under Japanese administration. A total of 83 North Kuril Ainu arrived in Petropavlovsk-Kamchatsky on September 18, 1877, after they decided to remain under Russian rule. They refused the offer by Russian officials to move to new reservations in the Commander Islands. Finally a deal was reached in 1881 and the Ainu decided to settle in the village of Yavin. In March 1881, the group left Petropavlovsk and started the journey towards Yavin on foot. 
Four months later they arrived at their new homes. Another village, Golygino, was founded later. Under Soviet rule, both villages were forced to disband and residents were moved to the Russian-dominated Zaporozhye rural settlement in Ust-Bolsheretsky Raion. As a result of intermarriage, the three ethnic groups assimilated to form the Kamchadal community. In 1953, K. Omelchenko, the minister for the protection of military and state secrets in the USSR, banned the press from publishing any more information on the Ainu living in the USSR. This order was revoked after two decades. The North Kuril Ainu of Zaporozhye now form the largest Ainu subgroup in Russia. The Nakamura clan (South Kuril Ainu on their paternal side), the smallest group, numbers just six people residing in Petropavlovsk. On Sakhalin island, a few dozen people identify themselves as Sakhalin Ainu, but many more with partial Ainu ancestry do not acknowledge it. Most of the 888 Japanese people living in Russia (2010 Census) are of mixed Japanese–Ainu ancestry, although they do not acknowledge it (full Japanese ancestry gives them the right of visa-free entry to Japan). Similarly, no one identifies themselves as Amur Valley Ainu, although people with partial descent live in Khabarovsk. There is no evidence of living descendants of the Kamchatka Ainu. In the 2010 Census of Russia, close to 100 people tried to register themselves as ethnic Ainu in the village, but the governing council of Kamchatka Krai rejected their claim and enrolled them as ethnic Kamchadal. In 2011, the leader of the Ainu community in Kamchatka, Alexei Vladimirovich Nakamura, requested that Vladimir Ilyukhin (Governor of Kamchatka) and Boris Nevzorov (Chairman of the State Duma) include the Ainu in the central list of the Indigenous small-numbered peoples of the North, Siberia and the Far East. This request was also turned down. Ethnic Ainu living in Sakhalin Oblast and Khabarovsk Krai are not organized politically. 
According to Alexei Nakamura, only 205 Ainu live in Russia (up from just 12 people who self-identified as Ainu in 2008) and they, along with the Kurile Kamchadals (Itelmen of the Kuril Islands), are fighting for official recognition. Since the Ainu are not recognized in the official list of the peoples living in Russia, they are counted as people without nationality or as ethnic Russians or Kamchadal. The Ainu have emphasized that they were the natives of the Kuril Islands and that the Japanese and Russians were both invaders. In 2004, the small Ainu community living in Russia in Kamchatka Krai wrote a letter to Vladimir Putin, urging him to reconsider any move to award the Southern Kuril Islands to Japan. In the letter they blamed the Japanese, the Tsarist Russians and the Soviets for crimes against the Ainu such as killings and assimilation, and also urged him to recognize the Japanese genocide against the Ainu people, a request which Putin turned down. In March 2017, Alexei Nakamura revealed that plans for an Ainu village in Petropavlovsk-Kamchatsky and for an Ainu dictionary were underway. The Ainu have often been considered to descend from the Jōmon people, who lived in northern Japan from the Jōmon period (c. 14,000 to 300 BCE). One of their "Yukar Upopo", or legends, tells that "[t]he Ainu lived in this place a hundred thousand years before the Children of the Sun came". Recent research suggests that the historical Ainu culture originated in a merger of the Okhotsk culture with the Satsumon, one of the ancient archaeological cultures that are considered to have derived from the Jōmon-period cultures of the Japanese archipelago. The Ainu economy was based on farming, as well as on hunting, fishing and gathering. "Full-blooded" Ainu, compared to people of Yamato descent, often have lighter skin and more body hair. Many early investigators proposed a Caucasian ancestry. 
Luigi Luca Cavalli-Sforza places the Ainu in his "Northeast and East Asian" genetic cluster. Omoto has suggested that the Ainu are more closely related to other phenotypically East Asian people (i.e. people previously described using the now-deprecated term "Mongoloid") than to phenotypically West Eurasian or Caucasoid (previously "Caucasian") people, on the basis of fingerprints and dental morphology. Anthropologist Joseph Powell (1999) of the University of New Mexico wrote "... we follow Brace and Hunt (1990) and Turner (1990) in viewing the Ainu as a southeast Asian population derived from early Jomon peoples of Japan, who have their closest biological affinity with South Asians rather than western Eurasian peoples". They also suggest morphological similarities to the Kennewick Man. Other anthropologists, such as Jantz and Owsley (1997), considered the Ainu as Caucasoids. Mark J. Hudson, Professor of Anthropology at Nishikyushu University, Kanzaki, Saga, Japan, has stated that Japan was settled by a "Proto-Mongoloid" population in the Pleistocene who became the Jōmon and that their features can be seen in the Ainu and Okinawan people. A dental morphology study shows that the Jōmon and Ainu have their own dental structure, but are generally closer to the Sundadont groups, which are more common in Southeast Asia and Taiwan (Turner, 1990). In 1893, anthropologist Arnold Henry Savage Landor described the Ainu as having deep-set eyes and an eye shape typical of Europeans, a large and prominent browridge, large ears, abundant body hair with a tendency to baldness, a slightly hooked nose with large and broad nostrils, prominent cheekbones and a medium-sized mouth. Ainu men have abundant wavy hair and often have long beards. The book "Ainu Life and Legends" by Kyōsuke Kindaichi (published by the Japanese Tourist Board in 1942) contains a physical description of the Ainu: "Many have wavy hair, but some straight black hair. Very few of them have wavy brownish hair. 
Their skins are generally reported to be light brown. But this is due to the fact that they labor on the sea and in briny winds all day. Old people who have long desisted from their outdoor work are often found to be as white as western men. The Ainu have broad faces, beetling eyebrows, and large sunken eyes, which are generally horizontal and of the so-called European type. Eyes of the Mongolian type are hardly found among them." A craniometric study by Brace et al. (2001) shows a closer morphological relation of the Ainu and Jōmon people to prehistoric and modern Europeans than to other contemporary East Asians. The study concludes that the Jōmon and Ainu people are descendants of a population (dubbed "Eurasians" by Brace et al.) that moved into northern Eurasia (and also the Americas) in the Late Pleistocene, which significantly predates the expansion of the modern core population of East Asia. Another study (Kura et al. 2014), based on cranial and genetic characteristics, suggests a northern origin for the Ainu people. The study results and genetic evidence suggest the Arctic regions of Eurasia as the possible original homeland of the ancestral population of the northern Jōmon and Ainu people. The Jōmon people are considered to be a people of northern ancestry and belong mostly to haplogroup D2 (the D-M55 branch), which characterizes the northern people of Asia. Thus, despite their morphological similarities to Caucasoid populations, the Ainu are essentially of North Asiatic origin. Genetic evidence supports a relation with Arctic populations, such as the Chukchi people. Genetic testing has shown that the Ainu belong mainly to Y-haplogroup D-M55 (D1a2) and C-M217. Y-DNA haplogroup D-M55 is found throughout the Japanese Archipelago, but with very high frequencies among the Ainu of Hokkaidō in the far north, and to a lesser extent among the Ryukyuans in the Ryukyu Islands of the far south. 
Recently it was confirmed that the Japanese branch of haplogroup D-M55 has been distinct and isolated from other D branches for more than 53,000 years. The split from D1a1 (which is common in Tibet and has a medium distribution in Central Asia) likely happened in Central Asia, while others suggest an immediate split during the origin of haplogroup D itself, as the Japanese branch has five unique mutations not found in any other D branch. Several studies (Hammer et al. 2006, Shinoda 2008, Matsumoto 2009, Cabrera et al. 2018) suggest that the paternal lineage of the Ainu and the Paleolithic Jōmon population originated somewhere in Central Asia. According to Hammer et al., the ancestral haplogroup D originated between Tibet and the Altai mountains. Hammer suggests that there were multiple waves into Eastern Eurasia. A study by Tajima "et al." (2004) found two out of a sample of sixteen (or 12.5%) Ainu men to belong to haplogroup C-M217, which is the most common Y-chromosome haplogroup among the indigenous populations of Siberia and Mongolia. Carriers among the Ainu may reflect a certain degree of unidirectional genetic influence from the Nivkhs, a traditionally nomadic people of northern Sakhalin and the adjacent mainland, with whom the Ainu have long-standing cultural interactions. Hammer "et al." (2006) tested a sample of four Ainu men and found that one of them belonged to haplogroup C-M217. Based on analysis of one sample of 51 modern Ainus, their mtDNA lineages consist mainly of haplogroup Y (11/51 = 21.6% according to Tanaka "et al." 2004, or 10/51 = 19.6% according to Adachi "et al." 2009, who have cited Tajima "et al." 2004), haplogroup D (9/51 = 17.6%, particularly D4(xD1)), haplogroup M7a (8/51 = 15.7%), and haplogroup G1 (8/51 = 15.7%). Other mtDNA haplogroups detected in this sample include A (2/51), M7b2 (2/51), N9b (1/51), B4f (1/51), F1b (1/51), and M9a (1/51). 
Most of the remaining individuals in this sample have been classified definitively only as belonging to macro-haplogroup M. According to Sato "et al." (2009), who have studied the mtDNA of the same sample of modern Ainus (n=51), the major haplogroups of the Ainu are N9 (14/51 = 27.5%, including 10/51 Y and 4/51 N9(xY)), D (12/51 = 23.5%, including 8/51 D(xD5) and 4/51 D5), M7 (10/51 = 19.6%), and G (10/51 = 19.6%, including 8/51 G1 and 2/51 G2); the minor haplogroups are A (2/51), B (1/51), F (1/51), and M(xM7, M8, CZ, D, G) (1/51). Studies published in 2004 and 2007 report the combined frequency of haplogroups M7a and N9b, which have been observed in Jōmon remains and are believed by some to represent the Jōmon maternal contribution, at 28% in Okinawans (7/50 M7a1, 6/50 M7a(xM7a1), 1/50 N9b), 17.6% in Ainus (8/51 M7a(xM7a1), 1/51 N9b), and from 10% (97/1312 M7a(xM7a1), 1/1312 M7a1, 28/1312 N9b) to 17% (15/100 M7a1, 2/100 M7a(xM7a1)) in mainstream Japanese. In addition, haplogroups D4, D5, M7b, M9a, M10, G, A, B, and F have been found in Jōmon people as well. A 2004 reevaluation of cranial traits suggests that the Ainu resemble the Okhotsk more than they do the Jōmon. This agrees with the characterization of the Ainu as a merger of the Okhotsk and Satsumon cultures referenced above. Nevertheless, a newer genome study shows that the Ainu share most of their genome with the Hokkaidō Jōmon, and it is suggested that the Okhotsk received strong Jōmon influence. Hideo Matsumoto (2009) suggested, based on immunoglobulin analyses, that the Ainu (and Jōmon) have a Siberian origin. Compared with other East Asian populations, the Ainu have the highest amount of Siberian (immunoglobulin) components, higher than mainland Japanese people. A 2012 genetic study has revealed that the closest genetic relatives of the Ainu are the Ryukyuan people, followed by the Yamato people and Nivkh. 
A genetic analysis in 2016 showed that although the Ainu have some genetic relations to the Japanese people and Eastern Siberians (especially Itelmens and Chukchis), they are not closely related to any modern ethnic group. Further, the study detected genetic contribution from the Ainu to populations around the Sea of Okhotsk but no genetic influence on the Ainu themselves. According to the study, the Ainu-like genetic contribution in the Ulch people is about 17.8% or 13.5%, and about 27.2% in the Nivkhs. The study also disproved the idea of a relation to Andamanese or Tibetans; instead, it presented evidence of gene flow between the Ainu and "lowland East Asian farmer populations" (represented in the study by the Ami and Atayal in Taiwan, and the Dai and Lahu in Mainland East Asia). A comparison with the genome-wide single nucleotide polymorphism data of HGDP (Human Genome Diversity Panel) populations also showed the unique status of the Sanganji Jōmon: despite being relatively closer to the East Eurasian cluster, they are far apart from all modern East Eurasians. The uniqueness of the Sanganji Jōmon within East Eurasians is consistent with analyses that include Europeans and Africans. When the Ainu, the mainland Japanese and the Ryukyuans from the Japanese Archipelago and CHB28 (Chinese from Beijing) were compared with the Sanganji Jōmon, the first principal component (PC1) separated the Ainu and Sanganji Jōmon from the other populations. The population closest to the Sanganji Jōmon was the Ainu, followed by the Ryukyuan and then the mainland Japanese (Yamato). The study also implies genetic affinity between the Jōmon and Altai Neanderthals, but the degree is not much different from that of other non-African populations. Compared with other East Eurasians, the Jōmon lacked similarities with Denisovans. 
Genetic analyses of HLA I and HLA II genes as well as HLA-A, -B, and -DRB1 gene frequencies link the Ainu to some Indigenous peoples of the Americas, especially to populations on the Pacific Northwest Coast such as the Tlingit. The scientists suggest that the main ancestor of the Ainu and of some Native American groups can be traced back to Paleolithic groups in Southern Siberia. Ainu men were first recruited into the Japanese military in 1898. Sixty-four Ainu served in the Russo-Japanese War (1904–1905), eight of whom died in battle or from illness contracted during military service. Two received the Order of the Golden Kite, granted for bravery, leadership or command in battle. During World War II, Australian troops engaged in the hard-fought Kokoda Track campaign (July–November 1942) in New Guinea were surprised by the physique and fighting prowess of the first Japanese troops they encountered. In 2008 Hohmann gave an estimate of fewer than 100 remaining speakers of the language; other research (Vovin 1993) placed the number at fewer than 15 speakers. Vovin has characterised the language as "almost extinct". As a result, the study of the Ainu language is limited and is based largely on historical research. Despite the small number of native speakers of Ainu, there is an active movement to revitalize the language, mainly in Hokkaidō, but also elsewhere, such as in Kantō. Ainu oral literature has been documented both in hopes of safeguarding it for future generations and for use as a teaching tool for language learners. As of 2011 there has been an increasing number of second-language learners, especially in Hokkaidō, in large part due to the pioneering efforts of the late Ainu folklorist, activist and former Diet member Shigeru Kayano, himself a native speaker, who first opened an Ainu language school in 1987 funded by the Ainu Kyokai. 
Although some researchers have attempted to show that the Ainu language and the Japanese language are related, modern scholars have rejected the idea that the relationship goes beyond contact (such as the mutual borrowing of words between Japanese and Ainu). No attempt to show a relationship between Ainu and any other language has gained wide acceptance, and linguists currently classify Ainu as a language isolate. Most Ainu people speak either the Japanese language or the Russian language. Concepts expressed with prepositions (such as "to", "from", "by", "in", and "at") in English appear as postpositional forms in Ainu (postpositions come after the word that they modify). A single sentence in Ainu can comprise many added or agglutinated sounds or affixes that represent nouns or ideas. The Ainu language has had no indigenous system of writing, and has historically been transliterated using the Japanese kana or Russian Cyrillic. It is typically written either in katakana or in the Latin alphabet. Many of the Ainu dialects, even those from different extremities of Hokkaidō, were not mutually intelligible; however, all Ainu speakers understood the classic Ainu language of the Yukar, or epic stories. Without a writing system, the Ainu were masters of narration, with the Yukar and other forms of narration such as the Uepeker (Uwepeker) tales being committed to memory and related at gatherings which often lasted many hours or even days. Traditional Ainu culture was quite different from Japanese culture. Never shaving after a certain age, the men had full beards and moustaches. Men and women alike cut their hair level with the shoulders at the sides of the head, trimmed semicircularly behind. The women tattooed their mouths, and sometimes the forearms. The mouth tattoos were started at a young age with a small spot on the upper lip, gradually increasing in size. The soot deposited on a pot hung over a fire of birch bark was used for colour. 
Their traditional dress was a robe spun from the inner bark of the elm tree, called "attusi" or "attush". Various styles were made, and consisted generally of a simple short robe with straight sleeves, which was folded around the body, and tied with a band about the waist. The sleeves ended at the wrist or forearm and the length generally was to the calves. Women also wore an undergarment of Japanese cloth.
https://en.wikipedia.org/wiki?curid=1530
Acropolis An acropolis (Ancient Greek: ἀκρόπολις, "akropolis"; from "akros" (άκρος) or "akron" (άκρον), "highest, topmost, outermost" and "polis" (πόλις), "city"; plural in English: "acropoles", "acropoleis" or "acropolises") was, in ancient Greece, a settlement, especially a citadel, built upon an area of elevated ground—frequently a hill with precipitous sides, chosen for purposes of defense. An acropolis also served as a religious sanctuary, with sacred springs highlighting its religious significance. Acropoleis became the nuclei of large cities of classical antiquity, such as ancient Athens, and for this reason they are sometimes prominent landmarks in modern cities with ancient pasts, such as modern Athens. One well-known acropolis is the Acropolis of Athens, located on a rocky outcrop above the city of Athens and containing the Parthenon. The word "acropolis" literally means in Greek "upper city," and though associated primarily with the Greek cities Athens, Argos (with Larissa), Thebes (with Cadmea), and Corinth (with its Acrocorinth), may be applied generically to all such citadels, including Rome, Jerusalem, Celtic Bratislava, many in Asia Minor, or even Castle Rock in Edinburgh. An example in Ireland is the Rock of Cashel. Acropolis is also the term used by archaeologists and historians for the urban Castro culture settlements located in Northwestern Iberian hilltops. The most famous example is the Acropolis of Athens, which, by reason of its historical associations and the several famous buildings erected upon it (most notably the Parthenon), is known without qualification as "the Acropolis". The Acropolis of Athens achieved its form in the fifth century BC and is currently an archeological site. 
Although originating in the mainland of Greece, use of the acropolis model quickly spread to Greek colonies such as the Dorian Lato on Crete during the Archaic Period. Because of its classical Hellenistic style, the ruins of Mission San Juan Capistrano's Great Stone Church in California, United States, have been called the "American Acropolis". Other parts of the world developed other names for the high citadel or alcázar, which often reinforced a naturally strong site. In Central Italy, many small rural communes still cluster at the base of a fortified habitation known as la Rocca of the commune. The term "acropolis" is also used to describe the central complex of overlapping structures, such as plazas and pyramids, in many Maya cities, including Tikal and Copán.
https://en.wikipedia.org/wiki?curid=1536
Acupuncture Acupuncture is a form of alternative medicine and a key component of traditional Chinese medicine (TCM) in which thin needles are inserted into the body. Acupuncture is a pseudoscience because the theories and practices of TCM are not based on scientific knowledge, and it has been characterized as quackery. There is a range of acupuncture variants which originated in different philosophies, and techniques vary depending on the country in which it is performed. It is most often used to attempt pain relief, though acupuncturists say that it can also be used for a wide range of other conditions. Acupuncture is generally used only in combination with other forms of treatment. The conclusions of numerous trials and systematic reviews of acupuncture are inconsistent, which suggests that it is not effective. An overview of Cochrane reviews found that acupuncture is not effective for a wide range of conditions. A systematic review conducted by medical scientists at the Universities of Exeter and Plymouth found little evidence of acupuncture's effectiveness in treating pain. Overall, the evidence suggests that short-term treatment with acupuncture does not produce long-term benefits. Some research results suggest that acupuncture can alleviate some forms of pain, though the majority of research suggests that acupuncture's apparent effects are not caused by the treatment itself. A systematic review concluded that the analgesic effect of acupuncture seemed to lack clinical relevance and could not be clearly distinguished from bias. One meta-analysis found that acupuncture for chronic low back pain was cost-effective as an adjunct to standard care, while a separate systematic review found insufficient evidence for the cost-effectiveness of acupuncture in the treatment of chronic low back pain. Acupuncture is generally safe when done by appropriately trained practitioners using clean needle technique and single-use needles. 
When properly delivered, it has a low rate of mostly minor adverse effects. Accidents and infections do occur, though, and are associated with neglect on the part of the practitioner, particularly in the application of sterile techniques. A review conducted in 2013 stated that reports of infection transmission increased significantly in the preceding decade. The most frequently reported adverse events were pneumothorax and infections. Since serious adverse events continue to be reported, it is recommended that acupuncturists be trained sufficiently to reduce the risk. Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as "qi", meridians, and acupuncture points, and many modern practitioners no longer support the existence of life force energy ("qi") or meridians, which was a major part of early belief systems. Acupuncture is believed to have originated around 100 BC in China, around the time "The Yellow Emperor's Classic of Internal Medicine" (Huangdi Neijing) was published, though some experts suggest it could have been practiced earlier. Over time, conflicting claims and belief systems emerged about the effect of lunar, celestial and earthly cycles, yin and yang energies, and a body's "rhythm" on the effectiveness of treatment. Acupuncture fluctuated in popularity in China due to changes in the country's political leadership and the preferential use of rationalism or Western medicine. Acupuncture spread first to Korea in the 6th century AD, then to Japan through medical missionaries, and then to Europe, beginning with France. In the 20th century, as it spread to the United States and Western countries, spiritual elements of acupuncture that conflicted with Western beliefs were sometimes abandoned in favor of simply tapping needles into acupuncture points. Acupuncture is a form of alternative medicine. 
It is used most commonly for pain relief, though it is also used to treat a wide range of conditions. Acupuncture is generally only used in combination with other forms of treatment. For example, the American Society of Anesthesiologists states it may be considered in the treatment for nonspecific, noninflammatory low back pain only in conjunction with conventional therapy. Acupuncture is the insertion of thin needles into the skin. According to the Mayo Foundation for Medical Education and Research (Mayo Clinic), a typical session entails lying still while approximately five to twenty needles are inserted; for the majority of cases, the needles will be left in place for ten to twenty minutes. It can be associated with the application of heat, pressure, or laser light. Classically, acupuncture is individualized and based on philosophy and intuition, and not on scientific research. There is also a non-invasive therapy developed in early 20th-century Japan using an elaborate set of instruments other than needles for the treatment of children ("shōnishin" or "shōnihari"). Clinical practice varies depending on the country. A comparison of the average number of patients treated per hour found significant differences between China (10) and the United States (1.2). Chinese herbs are often used. There is a diverse range of acupuncture approaches, involving different philosophies. Although various techniques of acupuncture practice have emerged, the method used in traditional Chinese medicine (TCM) seems to be the most widely adopted in the US. Traditional acupuncture involves needle insertion, moxibustion, and cupping therapy, and may be accompanied by other procedures such as feeling the pulse and other parts of the body and examining the tongue. Traditional acupuncture involves the belief that a "life force" ("qi") circulates within the body in lines called meridians. The main methods practiced in the UK are TCM and Western medical acupuncture.
The term Western medical acupuncture is used to indicate an adaptation of TCM-based acupuncture which focuses less on TCM. The Western medical acupuncture approach involves using acupuncture after a medical diagnosis. Limited research has compared the contrasting acupuncture systems used in various countries for determining different acupuncture points, and thus there is no defined standard for acupuncture points. In traditional acupuncture, the acupuncturist decides which points to treat by observing and questioning the patient to make a diagnosis according to the tradition used. In TCM, the four diagnostic methods are: inspection, auscultation and olfaction, inquiring, and palpation. Inspection focuses on the face and particularly on the tongue, including analysis of the tongue size, shape, tension, color and coating, and the absence or presence of teeth marks around the edge. Auscultation and olfaction involve listening for particular sounds such as wheezing, and observing body odor. Inquiring involves focusing on the "seven inquiries": chills and fever; perspiration; appetite, thirst and taste; defecation and urination; pain; sleep; and menses and leukorrhea. Palpation involves feeling the body for tender "A-shi" points and feeling the pulse. The most common method of stimulating acupuncture points is penetration of the skin by thin metal needles, which are manipulated manually or stimulated further by electrical stimulation (electroacupuncture). Acupuncture needles are typically made of stainless steel, making them flexible and preventing them from rusting or breaking. Needles are usually disposed of after each use to prevent contamination. When reusable needles are used, they should be sterilized between applications. In many areas, only sterile, single-use acupuncture needles are allowed, including the State of California, USA.
Needles vary in length, with shorter needles used near the face and eyes, and longer needles in areas with thicker tissues; needle diameters also vary, with thicker needles used on more robust patients. Thinner needles may be flexible and require tubes for insertion. The tip of the needle should not be made too sharp, to prevent breakage, although blunt needles cause more pain. Apart from the usual filiform needle, other needle types include three-edged needles and the Nine Ancient Needles. Japanese acupuncturists use extremely thin needles that are used superficially, sometimes without penetrating the skin, and surrounded by a guide tube (a 17th-century invention adopted in China and the West). Korean acupuncture uses copper needles and has a greater focus on the hand. The skin is sterilized and needles are inserted, frequently with a plastic guide tube. Needles may be manipulated in various ways, including spinning, flicking, or moving up and down relative to the skin. Since most pain is felt in the superficial layers of the skin, a quick insertion of the needle is recommended. Often the needles are stimulated by hand in order to cause a dull, localized, aching sensation that is called "de qi", as well as "needle grasp," a tugging feeling felt by the acupuncturist and generated by a mechanical interaction between the needle and skin. Acupuncture can be painful. The skill level of the acupuncturist may influence how painful the needle insertion is, and a sufficiently skilled practitioner may be able to insert the needles without causing any pain. "De-qi" ("arrival of qi") refers to a claimed sensation of numbness, distension, or electrical tingling at the needling site. If these sensations are not observed, inaccurate location of the acupoint, improper depth of needle insertion, or inadequate manual manipulation is blamed.
If "de-qi" is not immediately observed upon needle insertion, various manual manipulation techniques are often applied to promote it (such as "plucking", "shaking" or "trembling"). Once "de-qi" is observed, techniques might be used which attempt to "influence" the "de-qi"; for example, by certain manipulation the "de-qi" can allegedly be conducted from the needling site towards more distant sites of the body. Other techniques aim at "tonifying" () or "sedating" () "qi". The former techniques are used in deficiency patterns, the latter in excess patterns. "De qi" is more important in Chinese acupuncture, while Western and Japanese patients may not consider it a necessary part of the treatment. Acupuncture has been researched extensively; as of 2013, there were almost 1,500 randomized controlled trials on PubMed with "acupuncture" in the title. The results of reviews of acupuncture's efficacy, however, have been inconclusive. In January 2020, David Gorski analyzed a 2020 review of systematic reviews ("Acupuncture for the Relief of Chronic Pain: A Synthesis of Systematic Reviews") concerning the use of acupuncture to treat chronic pain. Writing in "Science-Based Medicine," Gorski said that its findings highlight the conclusion that acupuncture is "a theatrical placebo whose real history has been retconned beyond recognition." He also said this review "reveals the many weaknesses in the design of acupuncture clinical trials". It is difficult but not impossible to design rigorous research trials for acupuncture. Due to acupuncture's invasive nature, one of the major challenges in efficacy research is in the design of an appropriate placebo control group. For efficacy studies to determine whether acupuncture has specific effects, "sham" forms of acupuncture where the patient, practitioner, and analyst are blinded seem the most acceptable approach. Sham acupuncture uses non-penetrating needles or needling at non-acupuncture points, e.g. 
inserting needles on meridians not related to the specific condition being studied, or in places not associated with meridians. The under-performance of acupuncture in such trials may indicate that therapeutic effects are due entirely to non-specific effects, or that the sham treatments are not inert, or that systematic protocols yield less than optimal treatment. A 2014 review in "Nature Reviews Cancer" found that "contrary to the claimed mechanism of redirecting the flow of "qi" through meridians, researchers usually find that it generally does not matter where the needles are inserted, how often (that is, no dose-response effect is observed), or even if needles are actually inserted. In other words, 'sham' or 'placebo' acupuncture generally produces the same effects as 'real' acupuncture and, in some cases, does better." A 2013 meta-analysis found little evidence that the effectiveness of acupuncture on pain (compared to sham) was modified by the location of the needles, the number of needles used, the experience or technique of the practitioner, or by the circumstances of the sessions. The same analysis also suggested that the number of needles and sessions is important, as greater numbers improved the outcomes of acupuncture compared to non-acupuncture controls. There has been little systematic investigation of which components of an acupuncture session may be important for any therapeutic effect, including needle placement and depth, type and intensity of stimulation, and number of needles used. The research seems to suggest that needles do not need to stimulate the traditionally specified acupuncture points or penetrate the skin to attain an anticipated effect (e.g. psychosocial factors). A response to "sham" acupuncture in osteoarthritis may be used in the elderly, but placebos have usually been regarded as deception and thus unethical. 
However, some physicians and ethicists have suggested circumstances in which placebos may be applicable, such as when they might offer the theoretical advantage of an inexpensive treatment without adverse reactions or interactions with drugs or other medications. As the evidence for most types of alternative medicine such as acupuncture is far from strong, the use of alternative medicine in regular healthcare can present an ethical question. Using the principles of evidence-based medicine to research acupuncture is controversial, and has produced different results. Some research suggests acupuncture can alleviate pain, but the majority of research suggests that acupuncture's effects are mainly due to placebo. Evidence suggests that any benefits of acupuncture are short-lasting. There is insufficient evidence to support use of acupuncture compared to mainstream medical treatments. Acupuncture is not better than mainstream treatment in the long term. The use of acupuncture has been criticized owing to there being little scientific evidence for explicit effects, or the mechanisms for its supposed effectiveness, for any condition that is discernible from placebo. Acupuncture has been called 'theatrical placebo', and David Gorski argues that when acupuncture proponents advocate 'harnessing of placebo effects' or work on developing 'meaningful placebos', they essentially concede it is little more than that. Publication bias is cited as a concern in the reviews of randomized controlled trials of acupuncture. A 1998 review of studies on acupuncture found that trials originating in China, Japan, Hong Kong, and Taiwan were uniformly favourable to acupuncture, as were ten out of eleven studies conducted in Russia.
A 2011 assessment of the quality of randomized controlled trials on traditional Chinese medicine, including acupuncture, concluded that the methodological quality of most such trials (including randomization, experimental control, and blinding) was generally poor, particularly for trials published in Chinese journals (though the quality of acupuncture trials was better than that of the trials testing traditional Chinese medicine remedies). The study also found that trials published in non-Chinese journals tended to be of higher quality. Chinese authors use more Chinese studies, which have been demonstrated to be uniformly positive. A 2012 review of 88 systematic reviews of acupuncture published in Chinese journals found that less than half of these reviews reported testing for publication bias, and that the majority of these reviews were published in journals with impact factors of zero. A 2015 study comparing pre-registered records of acupuncture trials with their published results found that it was uncommon for such trials to be registered before the trial began. This study also found that selective reporting of results and changing outcome measures to obtain statistically significant results was common in this literature. Scientist and journalist Steven Salzberg identifies acupuncture and Chinese medicine generally as a focus for "fake medical journals" such as the "Journal of Acupuncture and Meridian Studies" and "Acupuncture in Medicine". The conclusions of many trials and numerous systematic reviews of acupuncture are largely inconsistent with each other. A 2011 systematic review of systematic reviews found that for reducing pain, real acupuncture was no better than sham acupuncture, and concluded that numerous reviews have shown little convincing evidence that acupuncture is an effective treatment for reducing pain.
The same review found that neck pain was one of only four types of pain for which a positive effect was suggested, but cautioned that the primary studies used carried a considerable risk of bias. A 2009 overview of Cochrane reviews found acupuncture is not effective for a wide range of conditions. A 2014 systematic review suggests that the nocebo effect of acupuncture is clinically relevant and that the rate of adverse events may be a gauge of the nocebo effect. A 2012 meta-analysis conducted by the Acupuncture Trialists' Collaboration found "relatively modest" efficacy of acupuncture (in comparison to sham) for the treatment of four different types of chronic pain (back and neck pain, knee osteoarthritis, chronic headache, and shoulder pain) and on that basis concluded that it "is more than a placebo" and a reasonable referral option. Commenting on this meta-analysis, both Edzard Ernst and David Colquhoun said the results were of negligible clinical significance. Edzard Ernst later stated that "I fear that, once we manage to eliminate this bias [that operators are not blind] … we might find that the effects of acupuncture exclusively are a placebo response." In 2017, the same research group updated their previous meta-analysis and again found acupuncture to be superior to sham acupuncture for non-specific musculoskeletal pain, osteoarthritis, chronic headache, and shoulder pain. They also found that the effects of acupuncture decreased by about 15% after one year. A 2010 systematic review suggested that acupuncture is more than a placebo for commonly occurring chronic pain conditions, but the authors acknowledged that it is still unknown if the overall benefit is clinically meaningful or cost-effective. A 2010 review found real acupuncture and sham acupuncture produce similar improvements, which can only be accepted as evidence against the efficacy of acupuncture. 
The same review found limited evidence that real acupuncture and sham acupuncture appear to produce biological differences despite similar effects. A 2009 systematic review and meta-analysis found that acupuncture had a small analgesic effect, which appeared to lack any clinical importance and could not be discerned from bias. The same review found that it remains unclear whether acupuncture reduces pain independent of a psychological impact of the needling ritual. A 2017 systematic review and meta-analysis found that ear acupuncture may be effective at reducing pain within 48 hours of its use, but the mean difference between the acupuncture and control groups was small. A 2013 systematic review found that acupuncture may be effective for nonspecific lower back pain, but the authors noted there were limitations in the studies examined, such as heterogeneity in study characteristics and low methodological quality in many studies. A 2012 systematic review found some supporting evidence that acupuncture was more effective than no treatment for chronic non-specific low back pain; the evidence was conflicting comparing the effectiveness over other treatment approaches. A 2011 systematic review of systematic reviews found that "for chronic low back pain, individualized acupuncture is not better in reducing symptoms than formula acupuncture or sham acupuncture with a toothpick that does not penetrate the skin." A 2010 review found that sham acupuncture was as effective as real acupuncture for chronic low back pain. The specific therapeutic effects of acupuncture were small, whereas its clinically relevant benefits were mostly due to contextual and psychosocial circumstances. Brain imaging studies have shown that traditional acupuncture and sham acupuncture differ in their effect on limbic structures, while at the same time showed equivalent analgesic effects. 
A 2005 Cochrane review found insufficient evidence to recommend for or against either acupuncture or dry needling for acute low back pain. The same review found low-quality evidence for pain relief and improvement compared to no treatment or sham therapy for chronic low back pain only in the short term immediately after treatment. The same review also found that acupuncture is not more effective than conventional therapy and other alternative medicine treatments. A 2017 systematic review and meta-analysis concluded that, for neck pain, acupuncture was comparable in effectiveness to conventional treatment, while electroacupuncture was even more effective in reducing pain than was conventional acupuncture. The same review noted that "It is difficult to draw conclusion [sic] because the included studies have a high risk of bias and imprecision." A 2015 overview of systematic reviews of variable quality showed that acupuncture can provide short-term improvements to people with chronic low back pain. The overview said this was true when acupuncture was used either in isolation or in addition to conventional therapy. A 2017 systematic review for an American College of Physicians clinical practice guideline found low to moderate evidence that acupuncture was effective for chronic low back pain, and limited evidence that it was effective for acute low back pain. The same review found that the strength of the evidence for both conditions was low to moderate. Another 2017 clinical practice guideline, this one produced by the Danish Health Authority, recommended against acupuncture for both recent-onset low back pain and lumbar radiculopathy. Two separate 2016 Cochrane reviews found that acupuncture could be useful in the prophylaxis of tension-type headaches and episodic migraines.
The 2016 Cochrane review evaluating acupuncture for episodic migraine prevention concluded that true acupuncture had a small effect beyond sham acupuncture and found moderate-quality evidence to suggest that acupuncture is at least similarly effective to prophylactic medications for this purpose. A 2012 review found that acupuncture has demonstrated benefit for the treatment of headaches, but that safety needed to be more fully documented in order to make any strong recommendations in support of its use. A 2014 review concluded that "current evidence supports the use of acupuncture as an alternative to traditional analgesics in osteoarthritis patients." A meta-analysis showed that acupuncture may help osteoarthritis pain, but it was noted that the effects were insignificant in comparison to sham needles. A 2012 review found "the potential beneficial action of acupuncture on osteoarthritis pain does not appear to be clinically relevant." A 2010 Cochrane review found that acupuncture shows statistically significant benefit over sham acupuncture in the treatment of peripheral joint osteoarthritis; however, these benefits were found to be so small that their clinical significance was doubtful, and "probably due at least partially to placebo effects from incomplete blinding". A 2013 Cochrane review found low to moderate evidence that acupuncture improves pain and stiffness in treating people with fibromyalgia compared with no treatment and standard care. A 2012 review found "there is insufficient evidence to recommend acupuncture for the treatment of fibromyalgia." A 2010 systematic review found a small pain relief effect that was not apparently discernible from bias; acupuncture is not a recommendable treatment for the management of fibromyalgia on the basis of this review. A 2012 review found the evidence for the effectiveness of acupuncture to treat rheumatoid arthritis "sparse and inconclusive."
A 2005 Cochrane review concluded that acupuncture use to treat rheumatoid arthritis "has no effect on ESR, CRP, pain, patient's global assessment, number of swollen joints, number of tender joints, general health, disease activity and reduction of analgesics." A 2010 overview of systematic reviews found insufficient evidence to recommend acupuncture in the treatment of most rheumatic conditions, with the exceptions of osteoarthritis, low back pain, and lateral elbow pain. A 2018 systematic review found some evidence that acupuncture could be effective for the treatment of rheumatoid arthritis, but that the evidence was limited because of heterogeneity and methodological flaws in the included studies. A 2014 systematic review found that although manual acupuncture was effective at relieving short-term pain when used to treat tennis elbow, its long-term effect in relieving pain was "unremarkable". A 2007 review found that acupuncture was significantly better than sham acupuncture at treating chronic knee pain; the evidence was not conclusive due to the lack of large, high-quality trials. A 2014 overview of systematic reviews found insufficient evidence to suggest that acupuncture is an effective treatment for postoperative nausea and vomiting (PONV) in a clinical setting. A 2013 systematic review concluded that acupuncture might be beneficial in prevention and treatment of PONV. A 2015 Cochrane review found moderate-quality evidence of no difference between stimulation of the P6 acupoint on the wrist and antiemetic drugs for preventing PONV. A new finding of the review was that further comparative trials are futile, based on the conclusions of a trial sequential analysis. Whether combining PC6 acupoint stimulation with antiemetics is effective was inconclusive. A 2014 overview of systematic reviews found insufficient evidence to suggest that acupuncture is effective for surgical or post-operative pain. 
For the use of acupuncture for post-operative pain, there was contradictory evidence. A 2014 systematic review found supportive but limited evidence for use of acupuncture for acute post-operative pain after back surgery. A 2014 systematic review found that while the evidence suggested acupuncture could be an effective treatment for postoperative gastroparesis, a firm conclusion could not be reached because the trials examined were of low quality. A 2015 Cochrane review found that there is insufficient evidence to determine whether acupuncture is an effective treatment for cancer pain in adults. A 2014 systematic review published in the Chinese Journal of Integrative Medicine found that acupuncture may be effective as an adjunctive treatment to palliative care for cancer patients. A 2013 overview of reviews published in the Journal of Multinational Association for Supportive Care in Cancer found evidence that acupuncture could be beneficial for people with cancer-related symptoms, but also identified few rigorous trials and high heterogeneity between trials. A 2012 systematic review of randomised clinical trials published in the same journal found that the number and quality of RCTs for using acupuncture in the treatment of cancer pain was too low to draw definite conclusions. A 2014 systematic review reached inconclusive results with regard to the effectiveness of acupuncture for treating cancer-related fatigue. A 2013 systematic review found that acupuncture is an acceptable adjunctive treatment for chemotherapy-induced nausea and vomiting, but that further research with a low risk of bias is needed. A 2013 systematic review found that the quantity and quality of available RCTs for analysis were too low to draw valid conclusions for the effectiveness of acupuncture for cancer-related fatigue. Several meta-analytic and systematic reviews suggest that acupuncture alleviates sleep disturbance, particularly insomnia. 
However, reviewers caution that this evidence should be considered preliminary due to publication bias, problems with research methodology, small sample sizes, and heterogeneity. For the following conditions, the Cochrane Collaboration or other reviews have concluded there is no strong evidence of benefit: A 2010 overview of systematic reviews found that moxibustion was effective for several conditions but the primary studies were of poor quality, so there persists ample uncertainty, which limits the conclusiveness of their findings. Acupuncture is generally safe when administered by an experienced, appropriately trained practitioner using clean-needle technique and sterile single-use needles. When improperly delivered it can cause adverse effects. Accidents and infections are associated with infractions of sterile technique or neglect on the part of the practitioner. To reduce the risk of serious adverse events after acupuncture, acupuncturists should be trained sufficiently. People with serious spinal disease, such as cancer or infection, are not good candidates for acupuncture. Contraindications to acupuncture (conditions that should not be treated with acupuncture) include coagulopathy disorders (e.g. hemophilia and advanced liver disease), warfarin use, severe psychiatric disorders (e.g. psychosis), and skin infections or skin trauma (e.g. burns). Further, electroacupuncture should be avoided at the spot of implanted electrical devices (such as pacemakers). A 2011 systematic review of systematic reviews (internationally and without language restrictions) found that serious complications following acupuncture continue to be reported. Between 2000 and 2009, ninety-five cases of serious adverse events, including five deaths, were reported. Many such events are not inherent to acupuncture but are due to malpractice of acupuncturists. This might be why such complications have not been reported in surveys of adequately trained acupuncturists. 
Most such reports originate from Asia, which may reflect the large number of treatments performed there or a relatively higher number of poorly trained Asian acupuncturists. Many serious adverse events were reported from developed countries. These included Australia, Austria, Canada, Croatia, France, Germany, Ireland, the Netherlands, New Zealand, Spain, Sweden, Switzerland, the UK, and the US. The number of adverse effects reported from the UK appears particularly unusual, which may indicate less under-reporting in the UK than other countries. Reports included 38 cases of infections and 42 cases of organ trauma. The most frequent adverse events included pneumothorax, and bacterial and viral infections. A 2013 review found (without restrictions regarding publication date, study type or language) 295 cases of infections; mycobacterium was the pathogen in at least 96%. Likely sources of infection include towels, hot packs or boiling tank water, and reusing reprocessed needles. Possible sources of infection include contaminated needles, reusing personal needles, a person's skin containing mycobacterium, and reusing needles at various sites in the same person. Although acupuncture is generally considered a safe procedure, a 2013 review stated that the reports of infection transmission increased significantly in the prior decade, including those of mycobacterium. Although it is recommended that practitioners of acupuncture use disposable needles, the reuse of sterilized needles is still permitted. It is also recommended that thorough control practices for preventing infection be implemented and adapted. A 2013 systematic review of the English-language case reports found that serious adverse events associated with acupuncture are rare, but that acupuncture is not without risk. Between 2000 and 2011 the English-language literature from 25 countries and regions reported 294 adverse events. 
The majority of the reported adverse events were relatively minor, and the incidences were low. For example, a prospective survey of 34,000 acupuncture treatments found no serious adverse events and 43 minor ones, a rate of 1.3 per 1000 interventions. Another survey, of 97,733 acupuncture patients, found a 7.1% rate of minor adverse events, of which five were serious. The most common adverse effect observed was infection (e.g. mycobacterium), and the majority of infections were bacterial in nature, caused by skin contact at the needling site. Infection has also resulted from skin contact with unsterilized equipment or with dirty towels in an unhygienic clinical setting. Other adverse complications included five reported cases of spinal cord injuries (e.g. migrating broken needles or needling too deeply), four brain injuries, four peripheral nerve injuries, five heart injuries, seven other organ and tissue injuries, bilateral hand edema, epithelioid granuloma, pseudolymphoma, argyria, pustules, pancytopenia, and scarring due to hot-needle technique. Adverse reactions from acupuncture, which are unusual and uncommon in typical acupuncture practice, included syncope, galactorrhoea, bilateral nystagmus, pyoderma gangrenosum, hepatotoxicity, eruptive lichen planus, and spontaneous needle migration. A 2013 systematic review found 31 cases of vascular injuries caused by acupuncture, three resulting in death: two from pericardial tamponade and one from an aortoduodenal fistula. The same review found that vascular injuries were rare; bleeding and pseudoaneurysm were the most prevalent. A 2011 systematic review (without restriction in time or language), aiming to summarize all reported cases of cardiac tamponade after acupuncture, found 26 cases resulting in 14 deaths, with little doubt about causality in most fatal instances.
The same review concluded cardiac tamponade was a serious, usually fatal, though theoretically avoidable complication following acupuncture, and urged training to minimize risk. A 2012 review found a number of adverse events were reported after acupuncture in the UK's National Health Service (NHS) but most (95%) were not severe, though miscategorization and under-reporting may alter the total figures. From January 2009 to December 2011, 468 safety incidents were recognized within the NHS organizations. The adverse events recorded included retained needles (31%), dizziness (30%), loss of consciousness/unresponsive (19%), falls (4%), bruising or soreness at needle site (2%), pneumothorax (1%) and other adverse side effects (12%). Acupuncture practitioners should know, and be prepared to be responsible for, any substantial harm from treatments. Some acupuncture proponents argue that the long history of acupuncture suggests it is safe. However, there is an increasing literature on adverse events (e.g. spinal-cord injury). Acupuncture seems to be safe in people getting anticoagulants, assuming needles are used at the correct location and depth. Studies are required to verify these findings. The evidence suggests that acupuncture might be a safe option for people with allergic rhinitis. A 2010 systematic review of the Chinese-language literature found numerous acupuncture-related adverse events, including pneumothorax, fainting, subarachnoid hemorrhage, and infection as the most frequent, and cardiovascular injuries, subarachnoid hemorrhage, pneumothorax, and recurrent cerebral hemorrhage as the most serious, most of which were due to improper technique. Between 1980 and 2009, the Chinese-language literature reported 479 adverse events. Prospective surveys show that mild, transient acupuncture-associated adverse events ranged from 6.71% to 15%. In a study with 190,924 patients, the prevalence of serious adverse events was roughly 0.024%. 
Another study showed a rate of adverse events requiring specific treatment of 2.2%, 4,963 incidents among 229,230 patients. Infections, mainly hepatitis, after acupuncture are reported often in English-language research, though rarely in Chinese-language research, making it plausible that acupuncture-associated infections have been underreported in China. Infections were mostly caused by poor sterilization of acupuncture needles. Other adverse events included spinal epidural hematoma (in the cervical, thoracic and lumbar spine), chylothorax, injuries of abdominal organs and tissues, injuries in the neck region, injuries to the eyes, including orbital hemorrhage, traumatic cataract, injury of the oculomotor nerve and retinal puncture, hemorrhage to the cheeks and the hypoglottis, peripheral motor-nerve injuries and subsequent motor dysfunction, local allergic reactions to metal needles, stroke, and cerebral hemorrhage after acupuncture. A causal link between acupuncture and the adverse events cardiac arrest, pyknolepsy, shock, fever, cough, thirst, aphonia, leg numbness, and sexual dysfunction remains uncertain. The same review concluded that acupuncture can be considered inherently safe when practiced by properly trained practitioners, but the review also stated there is a need to find effective strategies to minimize the health risks. Between 1999 and 2010, the Korean-language literature contained reports of 1104 adverse events. Between the 1980s and 2002, the Japanese-language literature contained reports of 150 adverse events. Although acupuncture has been practiced for thousands of years in China, its use in pediatrics in the United States did not become common until the early 2000s. In 2007, the National Health Interview Survey (NHIS) conducted by the National Center For Health Statistics (NCHS) estimated that approximately 150,000 children had received acupuncture treatment for a variety of conditions. 
In 2008 a study determined that the use of acupuncture-needle treatment on children was "questionable" due to the possibility of adverse side-effects and the pain manifestation differences in children versus adults. The study also includes warnings against practicing acupuncture on infants, as well as on children who are over-fatigued, very weak, or have over-eaten. When used on children, acupuncture is considered safe when administered by well-trained, licensed practitioners using sterile needles; however, a 2011 review found there was limited research to draw definite conclusions about the overall safety of pediatric acupuncture. The same review found 279 adverse events, 25 of them serious. The adverse events were mostly mild in nature (e.g. bruising or bleeding). The prevalence of mild adverse events ranged from 10.1% to 13.5%, an estimated 168 incidents among 1,422 patients. On rare occasions adverse events were serious (e.g. cardiac rupture or hemoptysis); many might have been a result of substandard practice. The incidence of serious adverse events was 5 per one million, which included children and adults. When used during pregnancy, the majority of adverse events caused by acupuncture were mild and transient, with few serious adverse events. The most frequent mild adverse event was needling or unspecified pain, followed by bleeding. Although two deaths (one stillbirth and one neonatal death) were reported, there was a lack of acupuncture-associated maternal mortality. Limiting the evidence to certain, probable or possible in the causality evaluation, the estimated incidence of adverse events following acupuncture in pregnant women was 131 per 10,000. Although acupuncture is not contraindicated in pregnant women, some specific acupuncture points are particularly sensitive to needle insertion; these spots, as well as the abdominal region, should be avoided during pregnancy. 
Four adverse events associated with moxibustion were bruising, burns and cellulitis, spinal epidural abscess, and large superficial basal cell carcinoma. Ten adverse events were associated with cupping. The minor ones were keloid scarring, burns, and bullae; the serious ones were acquired hemophilia A, stroke following cupping on the back and neck, factitious panniculitis, reversible cardiac hypertrophy, and iron deficiency anemia. A 2013 meta-analysis found that acupuncture for chronic low back pain was cost-effective as a complement to standard care, but not as a substitute for standard care except in cases where comorbid depression presented. The same meta-analysis found there was no difference between sham and non-sham acupuncture. A 2011 systematic review found insufficient evidence for the cost-effectiveness of acupuncture in the treatment of chronic low back pain. A 2010 systematic review found that the cost-effectiveness of acupuncture could not be concluded. A 2012 review found that acupuncture seems to be cost-effective for some pain conditions. As with other alternative medicines, unethical or naïve practitioners may induce patients to exhaust financial resources by pursuing ineffective treatment. Professional ethics codes set by accrediting organizations such as the National Certification Commission for Acupuncture and Oriental Medicine require practitioners to make "timely referrals to other health care professionals as may be appropriate." Stephen Barrett states that there is a "risk that an acupuncturist whose approach to diagnosis is not based on scientific concepts will fail to diagnose a dangerous condition". Acupuncture is a substantial part of traditional Chinese medicine (TCM). Early acupuncture beliefs relied on concepts that are common in TCM, such as a life force energy called "qi". 
"Qi" was believed to flow from the body's primary organs (zang-fu organs) to the "superficial" body tissues of the skin, muscles, tendons, bones, and joints, through channels called meridians. Acupuncture points where needles are inserted are mainly (but not always) found at locations along the meridians. Acupuncture points not found along a meridian are called extraordinary points and those with no designated site are called "A-shi" points. In TCM, disease is generally perceived as a disharmony or imbalance in energies such as yin, yang, "qi", xuĕ, zàng-fǔ, meridians, and of the interaction between the body and the environment. Therapy is based on which "pattern of disharmony" can be identified. For example, some diseases are believed to be caused by meridians being invaded with an excess of wind, cold, and damp. In order to determine which pattern is at hand, practitioners examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing, or the sound of the voice. TCM and its concept of disease does not strongly differentiate between the cause and effect of symptoms. Scientific research has not supported the existence of "qi", meridians, or yin and yang. A "Nature" editorial described TCM as "fraught with pseudoscience", with the majority of its treatments having no logical mechanism of action. Quackwatch states that "TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care." 
Some modern practitioners support the use of acupuncture to treat pain, but have abandoned the use of "qi", meridians, "yin", "yang" and other mystical energies as explanatory frameworks. The use of "qi" as an explanatory framework has been decreasing in China, even as it becomes more prominent during discussions of acupuncture in the US. Academic discussions of acupuncture still make reference to pseudoscientific concepts such as "qi" and meridians despite the lack of scientific evidence. Many within the scientific community consider attempts to rationalize acupuncture in science to be quackery and pseudoscience. Academics Massimo Pigliucci and Maarten Boudry describe it as a "borderlands science" lying between science and pseudoscience. Many acupuncturists attribute pain relief to the release of endorphins when needles penetrate, but no longer support the idea that acupuncture can affect a disease. It is a generally held belief within the acupuncture community that acupuncture points and meridians are special conduits for electrical signals, but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. Human tests to determine whether electrical continuity was significantly different near meridians than at other places in the body have been inconclusive. Some studies suggest acupuncture causes a series of events within the central nervous system, and that it is possible to inhibit acupuncture's analgesic effects with the opioid antagonist naloxone. Mechanical deformation of the skin by acupuncture needles appears to result in the release of adenosine. The anti-nociceptive effect of acupuncture may be mediated by the adenosine A1 receptor. 
A 2014 review in "Nature Reviews Cancer" found that since the key mouse studies that suggested acupuncture relieves pain via the local release of adenosine, which then triggered nearby A1 receptors "caused more tissue damage and inflammation relative to the size of the animal in mice than in humans, such studies unnecessarily muddled a finding that local inflammation can result in the local release of adenosine with analgesic effect." It has been proposed that acupuncture's effects in gastrointestinal disorders may relate to its effects on the parasympathetic and sympathetic nervous system, which have been said to be the "Western medicine" equivalent of "yin and yang". Another mechanism whereby acupuncture may be effective for gastrointestinal dysfunction involves the promotion of gastric peristalsis in subjects with low initial gastric motility, and suppressing peristalsis in subjects with active initial motility. Acupuncture has also been found to exert anti-inflammatory effects, which may be mediated by the activation of the vagus nerve and deactivation of inflammatory macrophages. Neuroimaging studies suggest that acupuncture stimulation results in deactivation of the limbic brain areas and the default mode network. Acupuncture, along with moxibustion, is one of the oldest practices of traditional Chinese medicine. Most historians believe the practice began in China, though there are some conflicting narratives on when it originated. Academics David Ramey and Paul Buell said the exact date acupuncture was founded depends on the extent to which dating of ancient texts can be trusted and the interpretation of what constitutes acupuncture. According to an article in "Rheumatology", the first documentation of an "organized system of diagnosis and treatment" for acupuncture was in "The Yellow Emperor's Classic of Internal Medicine" (Huangdi Neijing) from about 100 BC. 
Gold and silver needles found in the tomb of Liu Sheng from around 100 BC are believed to be the earliest archeological evidence of acupuncture, though it is unclear if that was their purpose. According to Plinio Prioreschi, the earliest known historical record of acupuncture is the Shih-Chi ("Record of History"), written by a historian around 100 BC. It is believed that this text was documenting what was established practice at that time. The 5,000-year-old mummified body of Ötzi the Iceman was found with 15 groups of tattoos, many of which were located at points on the body where acupuncture needles are used for abdominal or lower back problems. Evidence from the body suggests Ötzi suffered from these conditions. This has been cited as evidence that practices similar to acupuncture may have been practiced elsewhere in Eurasia during the early Bronze Age; however, "The Oxford Handbook of the History of Medicine" calls this theory "speculative". It is considered unlikely that acupuncture was practiced before 2000 BC. Ötzi the Iceman's tattoo marks suggest to some experts that an acupuncture-like treatment was used in Europe five millennia ago. Acupuncture may have been practiced during the Neolithic era, near the end of the Stone Age, using sharpened stones called Bian shi. Many Chinese texts from later eras refer to sharp stones called "plen", which means "stone probe", that may have been used for acupuncture purposes. The ancient Chinese medical text, Huangdi Neijing, indicates that sharp stones were believed at the time to cure illnesses at or near the body's surface, perhaps because of the short depth a stone could penetrate. However, it is more likely that stones were used for other medical purposes, such as puncturing a growth to drain its pus. The "Mawangdui" texts, which are believed to be from the 2nd century BC, mention the use of pointed stones to open abscesses, and moxibustion, but not acupuncture. 
It is also speculated that these stones may have been used for bloodletting, due to the ancient Chinese belief that illnesses were caused by demons within the body that could be killed or released. It is likely bloodletting was an antecedent to acupuncture. According to historians Lu Gwei-djen and Joseph Needham, there is substantial evidence that acupuncture may have begun around 600 BC. Some hieroglyphs and pictographs from that era suggest acupuncture and moxibustion were practiced. However, historians Gwei-djen and Needham said it was unlikely a needle could be made out of the materials available in China during this time period. It is possible that bronze was used for early acupuncture needles. Tin, copper, gold and silver are also possibilities, though they are considered less likely, or to have been used in fewer cases. If acupuncture was practiced during the Shang dynasty (1766 to 1122 BC), organic materials like thorns, sharpened bones, or bamboo may have been used. Once methods for producing steel were discovered, it replaced all other materials, since it could be used to create very fine but sturdy needles. Gwei-djen and Needham noted that all the ancient materials that could have been used for acupuncture, and which often produce archeological evidence, such as sharpened bones, bamboo or stones, were also used for other purposes. An article in "Rheumatology" said that the absence of any mention of acupuncture in documents found in the tomb of Ma-Wang-Dui from 198 BC suggests that acupuncture was not practiced by that time. Several different and sometimes conflicting belief systems emerged regarding acupuncture. This may have been the result of competing schools of thought. Some ancient texts referred to using acupuncture to cause bleeding, while others mixed the ideas of blood-letting and spiritual ch'i energy. 
Over time, the focus shifted from blood to the concept of puncturing specific points on the body, and eventually to balancing Yin and Yang energies as well. According to David Ramey, no single "method or theory" was ever predominantly adopted as the standard. At the time, scientific knowledge of medicine was not yet developed, especially because in China dissection of the deceased was forbidden, preventing the development of basic anatomical knowledge. It is not certain when specific acupuncture points were introduced, but the autobiography of Pien Chhio from around 400–500 BC references inserting needles at designated areas. Bian Que believed there was a single acupuncture point at the top of one's skull that he called the point "of the hundred meetings." Texts dated to be from 156–186 BC document early beliefs in channels of life force energy called meridians that would later be an element in early acupuncture beliefs. Ramey and Buell said the "practice and theoretical underpinnings" of modern acupuncture were introduced in "The Yellow Emperor's Classic" (Huangdi Neijing) around 100 BC. It introduced the concept of using acupuncture to manipulate the flow of life energy ("qi") in a network of meridian (channels) in the body. The network concept was made up of acu-tracts, such as a line down the arms, where it said acupoints were located. Some of the sites acupuncturists use needles at today still have the same names as those given to them by the "Yellow Emperor's Classic". Numerous additional documents were published over the centuries introducing new acupoints. By the 4th century AD, most of the acupuncture sites in use today had been named and identified. In the first half of the 1st century AD, acupuncturists began promoting the belief that acupuncture's effectiveness was influenced by the time of day or night, the lunar cycle, and the season. 
The Science of the Yin-Yang Cycles ("Yün Chhi Hsüeh") was a set of beliefs that curing diseases relied on the alignment of both heavenly (thien) and earthly (ti) forces that were attuned to cycles like those of the sun and moon. There were several different belief systems that relied on a number of celestial and earthly bodies or elements that rotated and only became aligned at certain times. According to Needham and Gwei-djen, these "arbitrary predictions" were depicted by acupuncturists in complex charts and through a set of special terminology. Acupuncture needles during this period were much thicker than most modern ones and often resulted in infection. Infection is caused by a lack of sterilization, but at that time it was believed to be caused by use of the wrong needle, or needling in the wrong place, or at the wrong time. Later, many needles were heated in boiling water, or in a flame. Sometimes needles were used while they were still hot, creating a cauterizing effect at the injection site. Nine needles were recommended in the "Chen Chiu Ta Chheng" from 1601, which may have been because of an ancient Chinese belief that nine was a magic number. Other belief systems were based on the idea that the human body operated on a rhythm and acupuncture had to be applied at the right point in the rhythm to be effective. In some cases a lack of balance between Yin and Yang was believed to be the cause of disease. In the 1st century AD, many of the first books about acupuncture were published and recognized acupuncturist experts began to emerge. The "Zhen Jiu Jia Yi Jing", which was published in the mid-3rd century, became the oldest acupuncture book that is still in existence in the modern era. Other books like the "Yu Kuei Chen Ching", written by the Director of Medical Services for China, were also influential during this period, but were not preserved. 
In the mid 7th century, Sun Simiao published acupuncture-related diagrams and charts that established standardized methods for finding acupuncture sites on people of different sizes and categorized acupuncture sites in a set of modules. Acupuncture became more established in China as improvements in paper led to the publication of more acupuncture books. The Imperial Medical Service and the Imperial Medical College, which both supported acupuncture, became more established and created medical colleges in every province. The public was also exposed to stories about royal figures being cured of their diseases by prominent acupuncturists. By the time "The Great Compendium of Acupuncture and Moxibustion" was published during the Ming dynasty (1368–1644 AD), most of the acupuncture practices used in the modern era had been established. By the end of the Song dynasty (1279 AD), acupuncture had lost much of its status in China. It became rarer in the following centuries, and was associated with less prestigious professions like alchemy, shamanism, midwifery and moxibustion. Additionally, by the 18th century, scientific rationality was becoming more popular than traditional superstitious beliefs. By 1757 a book documenting the history of Chinese medicine called acupuncture a "lost art". Its decline was attributed in part to the popularity of prescriptions and medications, as well as its association with the lower classes. In 1822, the Chinese Emperor signed a decree excluding the practice of acupuncture from the Imperial Medical Institute. He said it was unfit for practice by gentlemen-scholars. In China acupuncture was increasingly associated with lower-class, illiterate practitioners. It was restored for a time, but banned again in 1929 in favor of science-based Western medicine. Although acupuncture declined in China during this time period, it was also growing in popularity in other countries. 
Korea is believed to be the first country in Asia that acupuncture spread to outside of China. Within Korea there is a legend that acupuncture was developed by emperor Dangun, though it is more likely to have been brought into Korea from a Chinese colonial prefecture in 514 AD. Acupuncture use was commonplace in Korea by the 6th century. It spread to Vietnam in the 8th and 9th centuries. As Vietnam began trading with Japan and China around the 9th century, it was influenced by their acupuncture practices as well. China and Korea sent "medical missionaries" that spread traditional Chinese medicine to Japan, starting around 219 AD. In 553, several Korean and Chinese citizens were appointed to re-organize medical education in Japan and they incorporated acupuncture as part of that system. Japan later sent students back to China and established acupuncture as one of five divisions of the Chinese State Medical Administration System. Acupuncture began to spread to Europe in the second half of the 17th century. Around this time the surgeon-general of the Dutch East India Company met Japanese and Chinese acupuncture practitioners and later encouraged Europeans to further investigate it. He published the first in-depth description of acupuncture for the European audience and created the term "acupuncture" in his 1683 work "De Acupunctura". France was an early adopter in the West due to the influence of Jesuit missionaries, who brought the practice to French clinics in the 16th century. The French doctor Louis Berlioz (the father of the composer Hector Berlioz) is usually credited with being the first to experiment with the procedure in Europe, in 1810, before publishing his findings in 1816. By the 19th century, acupuncture had become commonplace in many areas of the world. Americans and Britons began showing interest in acupuncture in the early 19th century, although interest waned by mid-century. 
Western practitioners abandoned acupuncture's traditional beliefs in spiritual energy, pulse diagnosis, and the cycles of the moon, sun or the body's rhythm. Diagrams of the flow of spiritual energy, for example, conflicted with the West's own anatomical diagrams. Western practitioners instead adopted a new set of ideas for acupuncture based on tapping needles into nerves. In Europe it was speculated that acupuncture may allow or prevent the flow of electricity in the body, as electrical pulses were found to make a frog's leg twitch after death. The West eventually created a belief system based on Travell trigger points that were believed to inhibit pain. They were in the same locations as China's spiritually identified acupuncture points, but under a different nomenclature. The first elaborate Western treatise on acupuncture was published in 1683 by Willem ten Rhijne. In China, the popularity of acupuncture rebounded in 1949 when Mao Zedong took power and sought to unite China behind traditional cultural values. It was also during this time that many Eastern medical practices were consolidated under the name traditional Chinese medicine (TCM). New practices were adopted in the 20th century, such as using a cluster of needles, electrified needles, or leaving needles inserted for up to a week. A lot of emphasis developed on using acupuncture on the ear. Acupuncture research organizations such as the International Society of Acupuncture were founded in the 1940s and 1950s and acupuncture services became available in modern hospitals. China, where acupuncture was believed to have originated, was increasingly influenced by Western medicine. Meanwhile, acupuncture grew in popularity in the US. The US Congress created the Office of Alternative Medicine in 1992 and the National Institutes of Health (NIH) declared support for acupuncture for some conditions in November 1997. In 1999, the National Center for Complementary and Alternative Medicine was created within the NIH. 
Acupuncture became the most popular alternative medicine in the US. Politicians from the Chinese Communist Party said acupuncture was superstitious and conflicted with the party's commitment to science. Communist Party Chairman Mao Zedong later reversed this position, arguing that the practice was based on scientific principles. In 1971, a "New York Times" reporter published an article on his acupuncture experiences in China, which led to more investigation of and support for acupuncture. The US President Richard Nixon visited China in 1972. During one part of the visit, the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients. One patient receiving open heart surgery while awake was ultimately found to have received a combination of three powerful sedatives as well as large injections of a local anesthetic into the wound. After the National Institute of Health expressed support for acupuncture for a limited number of conditions, adoption in the US grew further. In 1972 the first legal acupuncture center in the US was established in Washington DC and in 1973 the American Internal Revenue Service allowed acupuncture to be deducted as a medical expense. In 2006, a BBC documentary "Alternative Medicine" filmed a patient undergoing open heart surgery allegedly under acupuncture-induced anesthesia. It was later revealed that the patient had been given a cocktail of anesthetics. In 2010, UNESCO inscribed "acupuncture and moxibustion of traditional Chinese medicine" on the UNESCO Intangible Cultural Heritage List following China's nomination. 
Acupuncture is most heavily practiced in China and is popular in the US, Australia, and Europe. In Switzerland, acupuncture has become the most frequently used alternative medicine since 2004. In the United Kingdom, a total of 4 million acupuncture treatments were administered in 2009. Acupuncture is used in most pain clinics and hospices in the UK. An estimated 1 in 10 adults in Australia used acupuncture in 2004. In Japan, it is estimated that 25 percent of the population will try acupuncture at some point, though in most cases it is not covered by public health insurance. Users of acupuncture in Japan are more likely to be elderly and to have a limited education. Approximately half of users surveyed indicated a likelihood to seek such remedies in the future, while 37% did not. Less than one percent of the US population reported having used acupuncture in the early 1990s. By the early 2010s, more than 14 million Americans reported having used acupuncture as part of their health care. In the US, acupuncture is increasingly used at academic medical centers, and is usually offered through CAM centers or anesthesia and pain management services. Examples include those at Harvard University, Stanford University, Johns Hopkins University, and UCLA. The use of acupuncture in Germany increased by 20% in 2007, after the German acupuncture trials supported its efficacy for certain uses. In 2011, there were more than one million users, and insurance companies have estimated that two-thirds of German users are women. As a result of the trials, German public health insurers began to cover acupuncture for chronic low back pain and osteoarthritis of the knee, but not tension headache or migraine. This decision was based in part on socio-political reasons. Some insurers in Germany chose to stop reimbursement of acupuncture because of the trials. 
For other conditions, insurers in Germany were not convinced that acupuncture had adequate benefits over usual care or sham treatments. Highlighting the results of the placebo group, researchers refused to accept a placebo therapy as efficient. There are various government and trade association regulatory bodies for acupuncture in the United Kingdom, the United States, Saudi Arabia, Australia, Japan, Canada, and in European countries and elsewhere. The World Health Organization recommends that before being licensed or certified, an acupuncturist receive 200 hours of specialized training if they are a physician and 2,500 hours for non-physicians; many governments have adopted similar standards. In China, the practice of acupuncture is regulated by the Chinese Medicine Council that was formed in 1999 by the Legislative Council. It includes a licensing exam and registration, as well as degree courses approved by the board. Canada has acupuncture licensing programs in the provinces of British Columbia, Ontario, Alberta and Quebec; standards set by the Chinese Medicine and Acupuncture Association of Canada are used in provinces without government regulation. Regulation in the US began in the 1970s in California, which was eventually followed by every state but Wyoming and Idaho. Licensing requirements vary greatly from state to state. The needles used in acupuncture are regulated in the US by the Food and Drug Administration. In some states acupuncture is regulated by a board of medical examiners, while in others by the board of licensing, health or education. In Japan, acupuncturists are licensed by the Minister of Health, Labour and Welfare after passing an examination and graduating from a technical school or university. In Australia, the Chinese Medicine Board of Australia regulates acupuncture, among other Chinese medical traditions, and restricts the use of titles like 'acupuncturist' to registered practitioners only. 
At least 28 countries in Europe have professional associations for acupuncturists. In France, the Académie Nationale de Médecine (National Academy of Medicine) has regulated acupuncture since 1955.
https://en.wikipedia.org/wiki?curid=1537
Aeneas In Greco-Roman mythology, Aeneas (Greek: Αἰνείας, "Aineías", possibly derived from a Greek word meaning "praised") was a Trojan hero, the son of the prince Anchises and the goddess Aphrodite (Venus). His father was a first cousin of King Priam of Troy (both being grandsons of Ilus, founder of Troy), making Aeneas a second cousin to Priam's children (such as Hector and Paris). He is a character in Greek mythology and is mentioned in Homer's "Iliad". Aeneas receives full treatment in Roman mythology, most extensively in Virgil's "Aeneid", where he is cast as an ancestor of Romulus and Remus. He became the first true hero of Rome. Snorri Sturluson identifies him with the Norse god Vidarr of the Æsir. Aeneas is the Romanization of the Greek name. The name is first introduced in the "Homeric Hymn to Aphrodite", where Aphrodite gives it to him from the Greek adjective meaning "terrible", for the "terrible grief" he has caused her. It is a popular etymology for the name, apparently exploited by Homer in the "Iliad". Later, in the Medieval period, there were writers who held that, because the "Aeneid" was written by a philosopher, it is meant to be read philosophically. As such, in the "natural order", the meaning of Aeneas' name combines the Greek word for "dweller" with the word for "body", yielding "in-dweller", i.e. a god inhabiting a mortal body. However, there is no certainty regarding the origin of his name. In imitation of the "Iliad", Virgil borrows epithets of Homer, including: Anchisiades, "magnanimum", "magnus", "heros", and "bonus". Though he borrows many, Virgil gives Aeneas two epithets of his own in the "Aeneid": "pater" and "pius". The epithets applied by Virgil are an example of an attitude different from that of Homer, for whilst Odysseus is "wily", Aeneas is described as "pious", which conveys a strong moral tone.
The purpose of these epithets seems to be to reinforce the notion of Aeneas' divine hand as father and founder of the Roman race, and their use seems circumstantial: when Aeneas is praying he refers to himself as "pius", and is referred to as such by the author only when the character is acting on behalf of the gods to fulfill his divine mission. Likewise, Aeneas is called "pater" when acting in the interest of his men. The story of the birth of Aeneas is told in the "Hymn to Aphrodite", one of the major Homeric Hymns. Aphrodite has caused Zeus to fall in love with mortal women. In retaliation, Zeus puts desire in her heart for Anchises, who is tending his cattle among the hills near Mount Ida. When Aphrodite sees him she is smitten. She adorns herself as if for a wedding among the gods and appears before him. He is overcome by her beauty, believing that she is a goddess, but Aphrodite identifies herself as a Phrygian princess. After they make love, Aphrodite reveals her true identity to him and Anchises fears what might happen to him as a result of their liaison. Aphrodite assures him that he will be protected, and tells him that she will bear him a son to be called Aeneas. However, she warns him that he must never tell anyone that he has lain with a goddess. When Aeneas is born, Aphrodite takes him to the nymphs of Mount Ida. She directs them to raise the child to age five, then take him to Anchises. According to other sources, Anchises later brags about his encounter with Aphrodite, and as a result is struck in the foot with a thunderbolt by Zeus. Thereafter he is lame in that foot, so that Aeneas has to carry him from the flames of Troy. Aeneas is a minor character in the "Iliad", where he is twice saved from death by the gods as if for an as-yet-unknown destiny, but is an honorable warrior in his own right.
Having held back from the fighting, aggrieved with Priam because in spite of his brave deeds he was not given his due share of honour, he leads an attack against Idomeneus to recover the body of his brother-in-law Alcathous at the urging of Deiphobus. He is the leader of the Trojans' Dardanian allies, as well as a second cousin and principal lieutenant of Hector, son of the Trojan king Priam. Aeneas's mother Aphrodite frequently comes to his aid on the battlefield, and he is a favorite of Apollo. Aphrodite and Apollo rescue Aeneas from combat with Diomedes of Argos, who nearly kills him, and carry him away to Pergamos for healing. Even Poseidon, who normally favors the Greeks, comes to Aeneas's rescue after he falls under the assault of Achilles, noting that Aeneas, though from a junior branch of the royal family, is destined to become king of the Trojan people. Bruce Louden presents Aeneas as a "type" in the tradition of Utnapishtim, Baucis and Philemon, and Lot; the just man spared the general destruction. Apollodorus explains that "...the Greeks let him alone on account of his piety." The Roman mythographer Gaius Julius Hyginus (c. 64 BCE – CE 17) in his "Fabulae" credits Aeneas with killing 28 enemies in the Trojan War. Aeneas also appears in the Trojan narratives attributed to Dares Phrygius and Dictys of Crete. The history of Aeneas was continued by Roman authors. One influential source was the account of Rome's founding in Cato the Elder's "Origines". The Aeneas legend was well known in Virgil's day and appeared in various historical works, including the "Roman Antiquities" of the Greek historian Dionysius of Halicarnassus (relying on Marcus Terentius Varro), "Ab Urbe Condita" by Livy (probably dependent on Quintus Fabius Pictor, fl. 200 BCE), and Gnaeus Pompeius Trogus (now extant only in an epitome by Justin). The "Aeneid" explains that Aeneas is one of the few Trojans who were not killed or enslaved when Troy fell. 
Aeneas, after being commanded by the gods to flee, gathered a group, collectively known as the Aeneads, who then traveled to Italy and became progenitors of Romans. The Aeneads included Aeneas's trumpeter Misenus, his father Anchises, his friends Achates, Sergestus, and Acmon, the healer Iapyx, the helmsman Palinurus, and his son Ascanius (also known as Iulus, Julus, or Ascanius Julius). He carried with him the Lares and Penates, the statues of the household gods of Troy, and transplanted them to Italy. Several attempts to find a new home failed; one such stop was on Sicily, where in Drepanum, on the island's western coast, his father, Anchises, died peacefully. After a brief but fierce storm sent up against the group at Juno's request, Aeneas and his fleet made landfall at Carthage after six years of wanderings. Aeneas had a year-long affair with the Carthaginian queen Dido (also known as Elissa), who proposed that the Trojans settle in her land and that she and Aeneas reign jointly over their peoples. A marriage of sorts was arranged between Dido and Aeneas at the instigation of Juno, who was told that her favorite city would eventually be defeated by the Trojans' descendants. Aeneas's mother Venus (the Roman adaptation of Aphrodite) realized that her son and his company needed a temporary respite to reinforce themselves for the journey to come. However, the messenger god Mercury was sent by Jupiter and Venus to remind Aeneas of his journey and his purpose, compelling him to leave secretly. When Dido learned of this, she uttered a curse that would forever pit Carthage against Rome, an enmity that would culminate in the Punic Wars. She then committed suicide by stabbing herself with the same sword she gave Aeneas when they first met. After the sojourn in Carthage, the Trojans returned to Sicily where Aeneas organized funeral games to honor his father, who had died a year before. The company traveled on and landed on the western coast of Italy. 
Aeneas descended into the underworld where he met Dido (who turned away from him to return to her husband) and his father, who showed him the future of his descendants and thus the history of Rome. Latinus, king of the Latins, welcomed Aeneas's army of exiled Trojans and let them reorganize their lives in Latium. His daughter Lavinia had been promised to Turnus, king of the Rutuli, but Latinus received a prophecy that Lavinia would be betrothed to one from another land – namely, Aeneas. Latinus heeded the prophecy, and Turnus consequently declared war on Aeneas at the urging of Juno, who was aligned with King Mezentius of the Etruscans and Queen Amata of the Latins. Aeneas's forces prevailed. Turnus was killed, and Virgil's account ends abruptly. The rest of Aeneas's biography is gleaned from other ancient sources, including Livy and Ovid's "Metamorphoses". According to Livy, Aeneas was victorious but Latinus died in the war. Aeneas founded the city of Lavinium, named after his wife. He later welcomed Dido's sister, Anna Perenna, who then committed suicide after learning of Lavinia's jealousy. After Aeneas's death, Venus asked Jupiter to make her son immortal. Jupiter agreed. The river god Numicus cleansed Aeneas of all his mortal parts and Venus anointed him with ambrosia and nectar, making him a god. Aeneas was recognized as the god Jupiter Indiges. Snorri Sturluson, in the Prologue of the Prose Edda, tells of the world as parted in three continents: Africa, Asia and the third part called Europe or Enea. Snorri also tells of a Trojan named Munon or Menon, who marries Troan, a daughter of the High King (Yfirkonungr) Priam, travels to distant lands, marries the Sibyl, and has a son, Tror, who, as Snorri tells, is identical to Thor. This tale resembles some episodes of the Aeneid. Continuations of Trojan matter in the Middle Ages had their effects on the character of Aeneas as well. The 12th-century French "Roman d'Enéas" addresses Aeneas's sexuality.
Though Virgil appears to deflect all homoeroticism onto Nisus and Euryalus, making his Aeneas a purely heterosexual character, in the Middle Ages there was at least a suspicion of homoeroticism in Aeneas. The "Roman d'Enéas" addresses that charge, when Queen Amata opposes Aeneas's marrying Lavinia, claiming that Aeneas loved boys. Medieval interpretations of Aeneas were greatly influenced by both Virgil and other Latin sources. Specifically, the accounts by Dares and Dictys, which were reworked by 13th-century Italian writer Guido delle Colonne (in "Historia destructionis Troiae"), colored many later readings. From Guido, for instance, the Pearl Poet and other English writers get the suggestion that Aeneas's safe departure from Troy with his possessions and family was a reward for treason, for which he was chastised by Hecuba. In "Sir Gawain and the Green Knight" (late 14th century) the Pearl Poet, like many other English writers, employed Aeneas to establish a genealogy for the foundation of Britain, and explains that Aeneas was "impeached for his perfidy, proven most true" (line 4). Aeneas had an extensive family tree. His wet-nurse was Caieta, and he is the father of Ascanius with Creusa, and of Silvius with Lavinia. Ascanius, also known as Iulus (or Julius), founded Alba Longa and was the first in a long series of kings. According to the mythology outlined by Virgil in the "Aeneid," Romulus and Remus were both descendants of Aeneas through their mother Rhea Silvia, making Aeneas the progenitor of the Roman people. Some early sources call him their father or grandfather, but considering the commonly accepted dates of the fall of Troy (1184 BCE) and the founding of Rome (753 BCE), this seems unlikely. The Julian family of Rome, most notably Julius Cæsar and Augustus, traced their lineage to Ascanius and Aeneas, thus to the goddess Venus. Through the Julians, the Palemonids make this claim. 
The legendary kings of Britain – including King Arthur – trace their family through a grandson of Aeneas, Brutus. Aeneas's consistent epithet in Virgil and other Latin authors is "pius", a term that connotes reverence toward the gods and familial dutifulness. In the "Aeneid", Aeneas is described as strong and handsome, but neither his hair colour nor his complexion is specified. In late antiquity, however, sources add further physical descriptions. The "De excidio Troiae" of Dares Phrygius describes Aeneas as "auburn-haired, stocky, eloquent, courteous, prudent, pious, and charming". There is also a brief physical description in the 6th-century AD "Chronographia" of John Malalas: "Aeneas: short, fat, with a good chest, powerful, with a ruddy complexion, a broad face, a good nose, fair skin, bald on the forehead, a good beard, grey eyes." Aeneas and Dido are the main characters of a 17th-century broadside ballad called "The Wandering Prince of Troy". The ballad ultimately alters Aeneas's fate from traveling on years after Dido's death to joining her as a spirit soon after her suicide. In modern literature, Aeneas is the speaker in two poems by Allen Tate, "Aeneas at Washington" and "Aeneas at New York". He is a main character in Ursula K. Le Guin's "Lavinia", a re-telling of the last six books of the "Aeneid" told from the point of view of Lavinia, daughter of King Latinus of Latium. Aeneas appears in David Gemmell's "Troy" series as a main heroic character who goes by the name Helikaon. In Rick Riordan's book series, "The Heroes of Olympus", Aeneas is regarded as the first Roman demigod, son of Venus rather than Aphrodite. Will Adams' novel "City of the Lost" assumes that much of the information provided by Virgil is mistaken, and that the true Aeneas and Dido did not meet and love in Carthage but in a Phoenician colony at Cyprus, on the site of the modern Famagusta.
Their tale is interspersed with that of modern activists who, while striving to stop an ambitious Turkish Army general trying to stage a coup, accidentally discover the hidden ruins of Dido's palace. Aeneas is a title character in Henry Purcell's opera "Dido and Aeneas" (c. 1688) and in Jakob Greber's "Aeneas in Carthage" (1711), and one of the principal roles in Hector Berlioz' opera "Les Troyens" (c. 1857), as well as in Metastasio's immensely popular opera libretto "Didone abbandonata". Canadian composer James Rolfe composed his opera "Aeneas and Dido" (2007; to a libretto by André Alexis) as a companion piece to Purcell's opera. Despite its many dramatic elements, Aeneas's story has generated little interest from the film industry. Ronald Lewis portrayed Aeneas in "Helen of Troy", directed by Robert Wise, as a supporting character who is a member of the Trojan royal family and a close and loyal friend to Paris, and who escapes at the end of the film. Portrayed by Steve Reeves, he was the main character in the 1961 sword and sandal film "Guerra di Troia" ("The Trojan War"). Reeves reprised the role the following year in the film "The Avenger", about Aeneas's arrival in Latium and his conflicts with local tribes as he tries to settle his fellow Trojan refugees there. Giulio Brogi portrayed Aeneas in the 1971 Italian TV miniseries "Eneide", which gives the whole story of the Aeneid, from Aeneas's escape from Troy, to his meeting with Dido, his arrival in Italy, and his duel with Turnus. The most recent cinematic portrayal of Aeneas was in the film "Troy", in which he appears as a youth charged by Paris to protect the Trojan refugees, and to continue the ideals of the city and its people. Paris gives Aeneas Priam's sword, in order to give legitimacy and continuity to the royal line of Troy – and lay the foundations of Roman culture. In this film, he is not a member of the royal family and does not appear to fight in the war.
In the role-playing game "" by White Wolf Game Studios, Aeneas figures as one of the mythical founders of the Ventrue Clan. In the action game "", Aeneas is a playable character. The game ends with him and the Aeneans fleeing Troy's destruction and, spurred by the words of a prophetess thought crazed, going to a new country (Italy) where he will start an empire greater than Greece and Troy combined, one that shall rule the world for 1000 years, never to be outdone in the tale of men (the Roman Empire). In the 2018 TV miniseries "", Aeneas is portrayed by Alfred Enoch. Scenes depicting Aeneas, especially from the Aeneid, have been the focus of study for centuries. They have been the frequent subject of art and literature since their debut in the 1st century. The artist Giovanni Battista Tiepolo was commissioned by Gaetano Valmarana in 1757 to fresco several rooms in the Villa Valmarana, the family villa situated outside Vicenza. Tiepolo decorated the "palazzina" with scenes from epics such as Homer's "Iliad" and Virgil's "Aeneid".
https://en.wikipedia.org/wiki?curid=1540
Amaranth Amaranthus is a cosmopolitan genus of annual or short-lived perennial plants collectively known as amaranths. Some amaranth species are cultivated as leaf vegetables, pseudocereals, and ornamental plants. Most of the "Amaranthus" species are summer annual weeds and are commonly referred to as pigweeds. Catkin-like cymes of densely packed flowers grow in summer or autumn. Amaranth varies in flower, leaf, and stem color, with a range of striking pigments from the spectrum of maroon to crimson, and can grow tall, with a cylindrical, succulent, fibrous stem that is hollow with grooves and bracteoles when mature. There are approximately 75 species in the genus, 10 of which are dioecious and native to North America, with the remaining 65 monoecious species endemic to every continent, from tropical lowlands to the Himalayas. Members of this genus share many characteristics and uses with members of the closely related genus "Celosia". Amaranth grain is collected from the genus. The leaves of some species are also eaten. "Amaranth" derives from a Greek word meaning "unfading", with the Greek word for "flower" factoring into the word's development as "amaranth, the unfading flower". "Amarant" is an archaic variant. The showy amaranth present in John Milton's Garden of Eden is "remov'd from Heav'n" when it blossoms because the flowers "shade the fountain of life". He describes amaranth as "immortal" in reference to the flowers, which generally do not wither and retain bright reddish tones of color even when deceased; the plant is sometimes referred to as "love-lies-bleeding". Amaranth is a herbaceous plant or shrub that is either annual or perennial across the genus. Flowers vary interspecifically from the presence of 3 or 5 tepals and stamens, whereas a 7-porate pollen grain structure remains consistent across the family. Species across the genus contain concentric rings of vascular bundles, and fix carbon efficiently with a C4 photosynthetic pathway.
Leaves are approximately oval or elliptical in shape and are either opposite or alternate across species, although most leaves are whole and simple with entire margins. Amaranth has a primary root with deeper spreading secondary fibrous root structures. Inflorescences take the form of a large panicle that varies from terminal to axial, and in color and sex. The tassel of the inflorescence is either erect or bent and varies in width and length between species. Flowers are radially symmetric and either bisexual or unisexual with a very small, bristly perianth and pointy bracts. Species in this genus are either monoecious (e.g. "A. hybridus") or dioecious (e.g. "A. arenicola"). Fruits are in the form of capsules referred to as a "unilocular pixidio" that opens at maturity. The top (operculum) of the unilocular pixidio releases the urn that contains the seed. Seeds are circular, from 1–1.5 millimeters in diameter, and range in color, with a shiny, smooth seed coat. The panicle is harvested 200 days after cultivation, with approximately 1,000 to 3,000 seeds harvested per gram. "Amaranthus" shows a wide variety of morphological diversity among and even within certain species. Amaranthus is part of the Amaranthaceae, which belongs to the larger grouping of the Caryophyllales. Although the family (Amaranthaceae) is distinctive, the genus has few distinguishing characters among the 75 species present across all seven continents. This complicates taxonomy, and "Amaranthus" has generally been considered among systematists as a "difficult" genus whose species hybridize often. In 1955, Sauer classified the genus into two subgenera, differentiating only between monoecious and dioecious species: "Acnida" (L.) Aellen ex K.R. Robertson and "Amaranthus". Although this classification was widely accepted, further infrageneric classification was (and still is) needed to differentiate this widely diverse group. Mosyakin and Robertson (1996) later divided the genus into three subgenera: "Acnida", "Amaranthus", and "Albersia".
The addition of the subgenus "Albersia" is supported by its circumscissile, indehiscent fruits, coupled with three elliptic to linear tepals, characters exclusive to members of this subgenus. The classification of these groups is further supported by a combination of floral characters, reproductive strategies, geographic distribution, and molecular evidence. Phylogenies of "Amaranthus" from maximum parsimony and Bayesian analysis of nuclear and chloroplast genes suggest five clades within the genus: Dioecious/Pumilus, Hybris, Galapagos, Eurasian/South African/Australian (ESA), and ESA + South American. "Amaranthus" includes three recognised subgenera and 75 species, although species numbers are questionable due to hybridisation and species concepts. Infrageneric classification focuses on inflorescence and flower characters and whether a species is monoecious or dioecious, as in the Sauer (1955) classification. Bracteole morphology present on the stem is used for taxonomic classification of amaranth. Wild species have longer bracteoles compared to cultivated species. A modified infrageneric classification of "Amaranthus" includes three subgenera: "Acnida", "Amaranthus", and "Albersia", with the taxonomy further differentiated by sections within each of the subgenera. There is near certainty that "A. hypochondriacus" is the common ancestor to the cultivated grain species; however, the later series of domestication events remains unclear. There have been opposing hypotheses of a single as opposed to multiple domestication events of the three grain species. There is phylogenetic and geographical support for clear groupings that indicate separate domestication events in South America and Central America. "A. hybridus" may derive from South America, whereas "A. quitensis", "A. caudatus", and "A. hypochondriacus" are native to Central and North America.
Uncooked amaranth grain is 12% water, 65% carbohydrates (including 7% dietary fiber), 14% protein, and 7% fat (table). A reference amount of uncooked amaranth grain provides 371 calories, and is a rich source (20% or more of the Daily Value, DV) of protein, dietary fiber, pantothenic acid, vitamin B6, folate, and several dietary minerals (table). Uncooked amaranth is particularly rich in manganese (159% DV), phosphorus (80% DV), magnesium (70% DV), iron (59% DV), and selenium (34% DV). Cooking decreases its nutritional value substantially across all nutrients, with only dietary minerals remaining at moderate levels. Cooked amaranth leaves are a rich source of vitamin A, vitamin C, calcium, and manganese, with moderate levels of folate, iron, magnesium, and potassium. Amaranth does not contain gluten. Amaranth grain contains phytochemicals that are not defined as nutrients and may be antinutrient factors, such as polyphenols, saponins, tannins, and oxalates. These compounds are reduced in content and antinutrient effect by cooking. The genus is native to Mexico and Central America. Known to the Aztecs by a Nahuatl name, amaranth is thought to have represented up to 80% of their energy consumption before the Spanish conquest. Another important use of amaranth throughout Mesoamerica was in ritual drinks and foods. To this day, amaranth grains are toasted much like popcorn and mixed with honey, molasses, or chocolate to make a treat whose Spanish name means "joy". Diego Durán described the festivities for the Aztec god whose name means "left side of the hummingbird" (hummingbirds feed on amaranth flowers). The Aztec month running from 7 December to 26 December was dedicated to this god. People decorated their homes and trees with paper flags; ritual races, processions, dances, songs, prayers, and finally human sacrifices were held. This was one of the more important Aztec festivals, and the people prepared for the whole month.
They fasted or ate very little; a statue of the god was made out of amaranth seeds and honey, and at the end of the month, it was cut into small pieces so everybody could eat a piece of the god. After the Spanish conquest, cultivation of amaranth was outlawed, while some of the festivities were subsumed into the Christmas celebration. Amaranth is native to the New World and was first found in the Old World in an archaeological excavation at Narhan, India, dated to 1000–800 BCE. Because of its importance as a symbol of indigenous culture, its palatability, ease of cooking, and a protein that is particularly well-suited to human nutritional needs, interest in amaranth seeds (especially "A. cruentus" and "A. hypochondriacus") revived in the 1970s. It was recovered in Mexico from wild varieties and is now commercially cultivated. It is a popular snack in Mexico, sometimes mixed with chocolate or puffed rice, and its use has spread to Europe and parts of North America. Amaranth and quinoa are pseudocereals because of their similarities to cereals in flavor and cooking. Several species are raised for amaranth "grain" in Asia and the Americas. The spread of "Amaranthus" is a joint effect of human expansion, adaptation, and fertilization strategies. Seeds of amaranth grain have been found in archeological records in Northern Argentina that date to the mid-Holocene. Archeological evidence of seeds from "A. hypochondriacus" and "A. cruentus" found in a cave in Tehuacán, Mexico, suggests amaranth was part of Aztec civilization in the 1400s. Ancient amaranth grains still used include the three species "Amaranthus caudatus", "Amaranthus cruentus", and "Amaranthus hypochondriacus". Evidence from single-nucleotide polymorphisms and chromosome structure supports "A. hypochondriacus" as the common ancestor of the three grain species.
It has been proposed as an inexpensive native crop that could be cultivated by indigenous people in rural areas for several reasons. In the United States, the amaranth crop is mostly used for seed production. Most amaranth in American food products starts as a ground flour, blended with wheat or other flours to create cereals, crackers, cookies, bread or other baked products. Despite utilization studies showing that amaranth can be blended with other flours at levels above 50% without affecting functional properties or taste, most commercial products use amaranth only as a minor portion of their ingredients, despite being marketed as "amaranth" products. Amaranth species are cultivated and consumed as a leaf vegetable in many parts of the world. Four species of "Amaranthus" are documented as cultivated vegetables in eastern Asia: "Amaranthus cruentus", "Amaranthus blitum", "Amaranthus dubius", and "Amaranthus tricolor". In Indonesia and Malaysia, leaf amaranth goes by its own local name, as it does in the Ilocano and Tagalog languages of the Philippines. In Uttar Pradesh and Bihar in India, it is called "chaulai" and is a popular green leafy vegetable (referred to in the class of vegetable preparations called "saag"). It is called "chua" in the Kumaun area of Uttarakhand, where it is a popular red-green vegetable. In Karnataka in India, it is called "harive soppu" (ಹರಿವೆ ಸೊಪ್ಪು). It is used to prepare curries such as "hulee", "palya", "majjigay-hulee", and so on. In Kerala, it is called "cheera" and is consumed by stir-frying the leaves with spices and red chili peppers to make a dish called "cheera thoran". In Tamil Nadu, it is called "mulaikkira" and is regularly consumed as a favourite dish, where the greens are steamed and mashed with a light seasoning of salt, red chili pepper, and cumin; this preparation is called "keerai masial". In Andhra Pradesh, this leaf is added in the preparation of a popular dal.
In Maharashtra, it is called "shravani maath" and is available in both red and white colour. In Orissa, it is called "khada saga"; it is used to prepare "saga bhaja", in which the leaf is fried with chili and onions. In China, the leaves and stems are used as a stir-fry vegetable, or in soups. In Vietnam, it is used to make soup; two species are popular as edible vegetables there: "Amaranthus tricolor" and "Amaranthus viridis". A traditional food plant in Africa, amaranth has the potential to improve nutrition, boost food security, foster rural development and support sustainable land care. In Bantu regions of Uganda and western Kenya, it is known as "doodo" or "litoto". It is also known among the Kalenjin as a drought crop ("chepkerta"). It also goes by local names in Lingala (spoken in the Congo). In Nigeria, it is a common vegetable and goes with all Nigerian starch dishes. Its Yoruba name is a short form of a phrase meaning "make the husband fat", or of one meaning "we have money left over for fish". In the Caribbean, the leaves are called "bhaji" in Trinidad and "callaloo" in Jamaica, and are sautéed with onions, garlic, and tomatoes, or sometimes used in a soup called pepperpot soup. In Botswana, it is referred to as "morug" and cooked as a staple green vegetable. In Greece, green amaranth ("A. viridis") is a popular dish, boiled and then served with olive oil and lemon juice like a salad, sometimes alongside fried fish. Greeks stop harvesting the plant (which also grows wild) when it starts to bloom at the end of August. In Brazil, green amaranth was, and to a degree still is, often considered an invasive species, as are all other species of amaranth (except the generally imported "A. caudatus" cultivar), though some have traditionally appreciated it as a leaf vegetable, consumed cooked and generally accompanying the staple food, rice and beans.
Making up about 5% of the total fatty acids of amaranth, squalene is extracted as a vegetable-based alternative to the more expensive shark oil for use in dietary supplements and cosmetics. The flowers of the 'Hopi Red Dye' amaranth were used by the Hopi (a tribe in the western United States) as the source of a deep red dye. A synthetic dye was also named "amaranth" for its similarity in color to the natural amaranth pigments known as betalains. This synthetic dye is also known as Red No. 2 in North America and E123 in the European Union. The genus also contains several well-known ornamental plants, such as "Amaranthus caudatus" (love-lies-bleeding), a vigorous, hardy annual with dark purplish flowers crowded in handsome drooping spikes. Another Indian annual, "A. hypochondriacus" (prince's feather), has deeply veined, lance-shaped leaves, purple on the under face, and deep crimson flowers densely packed on erect spikes. Amaranths are recorded as food plants for some Lepidoptera (butterfly and moth) species, including the nutmeg moth and various case-bearer moths of the genus "Coleophora": "C. amaranthella", "C. enchorda" (feeds exclusively on "Amaranthus"), "C. immortalis" (feeds exclusively on "Amaranthus"), "C. lineapulvella", and "C. versurella" (recorded on "A. spinosus"). Amaranth weed species have an extended period of germination, rapid growth, and high rates of seed production, and have been causing problems for farmers since the mid-1990s. This is partially due to the reduction in tillage, the reduction in herbicide use, and the evolution of herbicide resistance in several species where herbicides have been applied more often. The following nine species of "Amaranthus" are considered invasive and noxious weeds in the U.S. and Canada: "A. albus", "A. blitoides", "A. hybridus", "A. palmeri", "A. powellii", "A. retroflexus", "A. spinosus", "A. tuberculatus", and "A. viridis".
A new herbicide-resistant strain of "Amaranthus palmeri" has appeared; it is glyphosate-resistant and so cannot be killed by herbicides using the chemical. Also, this plant can survive in tough conditions. The species "Amaranthus palmeri" (Palmer amaranth) causes the greatest reduction in soybean yields and has the potential to reduce yields by 17–68% in field experiments. Palmer amaranth is among the "top five most troublesome weeds" in the southeast of the United States and has already evolved resistances to dinitroaniline herbicides and acetolactate synthase inhibitors. This makes the proper identification of "Amaranthus" species at the seedling stage essential for agriculturalists. Proper weed control needs to be applied before the species successfully colonizes the crop field and causes significant yield reductions. An evolutionary lineage of around 90 species within the genus has acquired the C4 carbon fixation pathway, which increases their photosynthetic efficiency. This probably occurred in the Miocene.
https://en.wikipedia.org/wiki?curid=1542
Aga Khan I Aga Khan I, born Hasan Ali Shah (1804–1881), was the governor of Kerman, 46th Imam of the Nizari Ismaili Muslims, and a prominent Muslim leader in Iran and later in the Indian subcontinent. He was the first Nizari Imam to hold the title Aga Khan. The Imam Hasan Ali Shah was born in 1804 in Kahak, Iran, to Shah Khalil Allah, the 45th Ismaili Imam, and Bibi Sarkara, the daughter of Muhammad Sadiq Mahallati (d. 1815), a poet and a Ni‘mat Allahi Sufi. Shah Khalil Allah moved to Yazd in 1815, probably out of concern for his Indian followers, who used to travel to Persia to see their Imam and for whom Yazd was a much closer and safer destination than Kahak. Meanwhile, his wife and children (including Hasan Ali) continued to live in Kahak off the revenues obtained from the family holdings in the Mahallat region. Two years later, in 1817, Shah Khalil Allah was killed in Yazd during a brawl between some of his followers and local shopkeepers. He was succeeded by his eldest son Hasan Ali Shah, also known as Muhammad Hasan, who became the 46th Imam. While Khalil Allah resided in Yazd, his land holdings in Kahak were managed by his son-in-law, Imani Khan Farahani, husband of his daughter Shah Bibi. After Khalil Allah's death, a conflict ensued between Imani Khan Farahani and the local Nizaris (followers of Imam Khalil Allah), as a result of which Khalil Allah's widow and children found themselves left unprovided for. The young Imam and his mother moved to Qumm, but their financial situation worsened. The dowager decided to go to the Qajar court in Tehran to obtain justice for her husband's death and was eventually successful: those who had been involved in Shah Khalil Allah's murder were punished. Moreover, the Persian king Fath Ali Shah gave his own daughter, Princess Sarv-i-Jahan Khanum, in marriage to the young Imam Hasan Ali Shah and provided a princely dowry in land holdings in the Mahallat region. 
King Fath Ali Shah also appointed Hasan Ali Shah as governor of Qumm and bestowed upon him the honorific of "Aga Khan." Thus the title of "Aga Khan" entered the family: Hasan Ali Shah became known as Aga Khan Mahallati, and the title was inherited by his successors. Aga Khan I's mother later moved to India, where she died in 1851. Until Fath Ali Shah's death in 1834, the Imam Hasan Ali Shah enjoyed a quiet life and was held in high esteem at the Qajar court. Soon after the accession of Muhammad Shah Qajar to the throne of his grandfather, Fath Ali Shah, the Imam Hasan Ali Shah was appointed governor of Kerman in 1835. At the time, Kerman was held by the rebellious sons of Shuja al-Saltana, a pretender to the Qajar throne. The area was also frequently raided by the Afghans. Hasan Ali Shah managed to restore order in Kerman, as well as in Bam and Narmashir, which were also held by rebellious groups. Hasan Ali Shah sent a report of his success to Tehran, but did not receive any material appreciation for his achievements. Despite the service he rendered to the Qajar government, Hasan Ali Shah was dismissed from the governorship of Kerman in 1837, less than two years after his arrival there, and was replaced by Firuz Mirza Nusrat al-Dawla, a younger brother of Muhammad Shah Qajar. Refusing to accept his dismissal, Hasan Ali Shah withdrew with his forces to the citadel at Bam. Along with his two brothers, he made preparations to resist the government forces that were sent against him. He was besieged at Bam for some fourteen months. When it was clear that continuing the resistance was of little use, Hasan Ali Shah sent one of his brothers to Shiraz to ask the governor of Fars to intervene on his behalf and arrange for safe passage out of Kerman. After the governor interceded, Hasan Ali Shah surrendered and emerged from the citadel of Bam, only to be double-crossed. He was seized and his possessions were plundered by the government troops. 
Hasan Ali Shah and his dependents were sent to Kerman and remained as prisoners there for eight months. He was eventually allowed to go to Tehran near the end of 1838–39, where he was able to present his case before the Shah. The Shah pardoned him on the condition that he return peacefully to Mahallat. Hasan Ali Shah remained in Mahallat for about two years. He managed to gather an army there, which alarmed Muhammad Shah, who travelled to Delijan near Mahallat to determine the truth of the reports about Hasan Ali Shah. Hasan Ali Shah was on a hunting trip at the time, but he sent a messenger to request permission of the monarch to go to Mecca for the hajj pilgrimage. Permission was given, and Hasan Ali Shah's mother and a few relatives were sent to Najaf and other holy cities in Iraq in which the shrines of his ancestors, the Shiite Imams, are found. Prior to leaving Mahallat, Hasan Ali Shah equipped himself with letters appointing him to the governorship of Kerman. Accompanied by his brothers, nephews and other relatives, as well as many followers, he left for Yazd, where he intended to meet some of his local followers. Hasan Ali Shah sent the documents reinstating him to the position of governor of Kerman to Bahman Mirza Baha al-Dawla, the governor of Yazd. Bahman Mirza offered Hasan Ali Shah lodging in the city, but Hasan Ali Shah declined, indicating that he wished to visit his followers living around Yazd. Hajji Mirza Aqasi sent a messenger to Bahman Mirza to inform him of the spuriousness of Hasan Ali Shah's documents, and a battle between Bahman Mirza and Hasan Ali Shah broke out in which Bahman Mirza was defeated. Other minor battles were won by Hasan Ali Shah before he arrived in Shahr-e Babak, which he intended to use as his base for capturing Kerman. 
At the time of his arrival in Shahr-e Babak, the former local governor was engaged in a campaign to drive the Afghans out of the city's citadel, and Hasan Ali Shah joined him in forcing the Afghans to surrender. Soon after March 1841, Hasan Ali Shah set out for Kerman. He managed to defeat a government force consisting of 4,000 men near Dashtab, and continued to win a number of victories before stopping at Bam for a time. Soon, a government force of 24,000 men forced Hasan Ali Shah to flee from Bam to Rigan on the border of Baluchistan, where he suffered a decisive defeat. Hasan Ali Shah decided to escape to Afghanistan, accompanied by his brothers and many soldiers and servants. Fleeing Iran, Hasan Ali Shah arrived in 1841 in Kandahar, Afghanistan, a town that had been occupied by an Anglo-Indian army in 1839 during the First Anglo-Afghan War. A close relationship developed between Hasan Ali Shah and the British, coinciding with the final years of the First Anglo-Afghan War (1838–1842). After his arrival, Hasan Ali Shah wrote to Sir William Macnaghten, discussing his plans to seize and govern Herat on behalf of the British. Although the proposal seemed to have been approved, the plans of the British were thwarted by the uprising of Dost Muhammad's son Muhammad Akbar Khan, who defeated and annihilated the British-Indian garrison at Gandamak on its retreat from Kabul in January 1842. Hasan Ali Shah soon proceeded to Sindh, where he rendered further services to the British. The British were able to annex Sindh, and for his services Hasan Ali Shah received an annual pension of £2,000 from General Charles James Napier, the British conqueror of Sindh, with whom he had a good relationship. In October 1844, Hasan Ali Shah left Sindh for the city of Bombay in the Bombay Presidency, British India, passing through Cutch and Kathiawar, where he spent some time visiting the communities of his followers in the area. 
After Hasan Ali Shah arrived in Bombay in February 1846, the Persian government demanded his extradition from India. The British refused and only agreed to transfer Hasan Ali Shah's residence to Calcutta, where it would be harder for him to launch new attacks against the Persian government. The British also negotiated the safe return of Hasan Ali Shah to Persia, which was in accordance with his own wish. The government agreed to Hasan Ali Shah's return provided that he would avoid passing through Baluchistan and Kerman and that he was to settle peacefully in Mahallat. Hasan Ali Shah was eventually forced to leave for Calcutta in April 1847, where he remained until he received news of the death of Muhammad Shah Qajar. Hasan Ali Shah left for Bombay and the British attempted to obtain permission for his return to Persia. Although some of his lands were restored to the control of his relatives, his safe return could not be arranged, and Hasan Ali Shah was forced to remain a permanent resident of India. While in India, Hasan Ali Shah continued his close relationship with the British, and was even visited by the Prince of Wales (the future King Edward VII) when he was on a state visit to India. The British came to address Hasan Ali Shah as His Highness. Hasan Ali Shah received protection from the British government in British India as the spiritual head of an important Muslim community. The vast majority of his Khoja Ismaili followers in India welcomed him warmly, but some dissident members, sensing their loss of prestige with the arrival of the Imam, wished to maintain control over communal properties. Because of this, Hasan Ali Shah decided to secure a pledge of loyalty from the members of the community to himself and to the Ismaili form of Islam. Although most of the members of the community signed a document issued by Hasan Ali Shah summarizing the practices of the Ismailis, a group of dissenting Khojas asserted that the community had always been Sunni. 
This group was cast out by a unanimous vote of the Khojas assembled in Bombay. In 1866, these dissenters filed a suit in the Bombay High Court against Hasan Ali Shah, claiming that the Khojas had been Sunni Muslims from the very beginning. The case, commonly referred to as the Aga Khan Case, was heard by Sir Joseph Arnould. The hearing lasted several weeks, and included testimony from Hasan Ali Shah himself. After reviewing the history of the community, Justice Arnould gave a definitive and detailed judgement against the plaintiffs and in favour of Hasan Ali Shah and the other defendants. The judgement was significant in that it legally established the status of the Khojas as a community referred to as Shia Imami Ismailis, and of Hasan Ali Shah as the spiritual head of that community. Hasan Ali Shah's authority thereafter was not seriously challenged again. Hasan Ali Shah spent his final years in Bombay with occasional visits to Pune. Maintaining the traditions of the Iranian nobility to which he belonged, he kept excellent stables and became a well-known figure at the Bombay racecourse. Hasan Ali Shah died after an imamate of sixty-four years in April 1881. He was buried in a specially built shrine at Hasanabad in the Mazagaon area of Bombay. He was survived by three sons and five daughters. Hasan Ali Shah was succeeded as Imam by his eldest son Aqa Ali Shah, who became Aga Khan II.
https://en.wikipedia.org/wiki?curid=1545
Aga Khan III Sir Sultan Mahomed Shah, Aga Khan III (2 November 1877 – 11 July 1957) was the 48th Imam of the Nizari Ismaili Muslims. He was one of the founders and the first permanent president of the All-India Muslim League (AIML). His goal was the advancement of Muslim agendas and protection of Muslim rights in India. The League, until the late 1930s, was not a large organisation but represented the landed and commercial Muslim interests of the British-ruled 'United Provinces' (today's Uttar Pradesh). He shared Sir Syed Ahmad Khan's belief that Muslims should first build up their social capital through advanced education before engaging in politics. Aga Khan called on the British Raj to consider Muslims to be a separate nation within India, the so-called 'Two Nation Theory'. Even after he resigned as president of the AIML in 1912, he still exerted major influence on its policies and agendas. He was nominated to represent India at the League of Nations in 1932 and served as President of the League of Nations in 1937–38. Sir Sultan Mahomed Shah was born in Karachi, the capital of Sindh province in British India (now in Pakistan), to Aga Khan II and his third wife, Nawab A'lia Shamsul-Muluk, a granddaughter of Fath Ali Shah of Persia (Qajar dynasty). Under the care of his mother, he was given not only the religious and Oriental education which his position as the religious leader of the Ismailis made indispensable, but also a sound European training, an opportunity denied to his father and paternal grandfather. He also attended Eton and the University of Cambridge. In 1885, at the age of seven, he succeeded his father as Imam of the Shi'a Isma'ili Muslims. The Aga Khan travelled to distant parts of the world to receive the homage of his followers, with the objective either of settling differences or of advancing their welfare through financial help and personal advice and guidance. 
The distinction of a Knight Commander of the Indian Empire (KCIE) was conferred upon him by Queen Victoria in 1897; he was promoted to Knight Grand Commander (GCIE) in the 1902 Coronation Honours list, and invested as such by King Edward VII at Buckingham Palace on 24 October 1902. He was made a Knight Grand Commander of the Order of the Star of India (GCSI) by George V (1912), and appointed a GCMG in 1923. He received similar recognition for his public services from the German Emperor, the Sultan of Turkey, the Shah of Persia and other potentates. In 1906, the Aga Khan was a founding member and first president of the All India Muslim League, a political party which pushed for the creation of an independent Muslim nation in the north-west regions of India, then under British colonial rule, and which later established the country of Pakistan in 1947. During the three Round Table Conferences (India) in London from 1930 to 1932, he played an important role in bringing about Indian constitutional reforms. In 1934, he was made a member of the Privy Council and served as a member of the League of Nations (1934–37), becoming the President of the League of Nations in 1937. Under the leadership of Sir Sultan Mahomed Shah, Aga Khan III, the first half of the 20th century was a period of significant development for the Ismā'īlī community. Numerous institutions for social and economic development were established in the Indian Subcontinent and in East Africa. Ismailis have marked the Jubilees of their Imāms with public celebrations, which are symbolic affirmations of the ties that link the Ismāʿīlī Imām and his followers. Although the Jubilees have no religious significance, they serve to reaffirm the Imamat's worldwide commitment to the improvement of the quality of human life, especially in the developing countries. The Jubilees of Sir Sultan Mahomed Shah, Aga Khan III, are well remembered. 
During his 72 years of Imamat (1885–1957), the community celebrated his Golden (1937), Diamond (1946) and Platinum (1954) Jubilees. To show their appreciation and affection, the Ismā'īliyya weighed their Imam in gold, diamonds and, symbolically, in platinum, respectively, the proceeds of which were used to further develop major social welfare and development institutions in Asia and Africa. In India and later in Pakistan, social development institutions were established, in the words of Aga Khan III, "for the relief of humanity". They included institutions such as the Diamond Jubilee Trust and Platinum Jubilee Investments Limited, which in turn assisted the growth of various types of cooperative societies. Diamond Jubilee High Schools for Girls were established throughout the remote Northern Areas of what is now Pakistan. In addition, scholarship programs, established at the time of the Golden Jubilee to give assistance to needy students, were progressively expanded. In East Africa, major social welfare and economic development institutions were established. Those involved in social welfare included the accelerated development of schools and community centres, and a modern, fully equipped hospital in Nairobi. Among the economic development institutions established in East Africa were companies such as the Diamond Jubilee Investment Trust (now Diamond Trust of Kenya) and the Jubilee Insurance Company, which are quoted on the Nairobi Stock Exchange and have become major players in national development. Sir Sultan Mahomed Shah also introduced organizational forms that gave Ismāʿīlī communities the means to structure and regulate their own affairs. These were built on the Muslim tradition of a communitarian ethic on the one hand, and responsible individual conscience with freedom to negotiate one's own moral commitment and destiny on the other. In 1905 he ordained the first Ismā'īlī Constitution for the social governance of the community in East Africa. 
The new administration for the Community's affairs was organised into a hierarchy of councils at the local, national, and regional levels. The constitution also set out rules in such matters as marriage, divorce and inheritance, guidelines for mutual cooperation and support among Ismā'īlīs, and their interface with other communities. Similar constitutions were promulgated in India, and all were periodically revised to address emerging needs and circumstances in diverse settings. Following the Second World War, far-reaching social, economic and political changes profoundly affected a number of areas where Ismāʿīlīs resided. In 1947, British rule in the Indian Subcontinent was replaced by the sovereign, independent nations of India, Pakistan and later Bangladesh, resulting in the migration of millions of people and significant loss of life and property. In the Middle East, the Suez crisis of 1956, as well as the preceding crisis in Iran, demonstrated the sharp upsurge of nationalism, which was as assertive of the region's social and economic aspirations as of its political independence. Africa was also set on its course to decolonisation, swept by what Harold Macmillan, the then British prime minister, termed the "wind of change". By the early 1960s, most of East and Central Africa, where the majority of the Ismāʿīlī population on the continent resided, including Tanganyika, Kenya, Uganda, Madagascar, Rwanda, Burundi and Zaire, had attained political independence. The Aga Khan was deeply influenced by the views of Sir Sayyid Ahmad Khan. Along with Sir Sayyid, the Aga Khan was one of the backers and founders of Aligarh University, for which he tirelessly raised funds and to which he donated large sums of his own money. The Aga Khan himself can be considered an Islamic modernist and an intellectual of the Aligarh movement. From a religious standpoint, the Aga Khan followed a modernist approach to Islam. 
He believed there to be no contradiction between religion and modernity, and urged Muslims to embrace modernity. Although he opposed a wholesale replication of Western society by Muslims, the Aga Khan did believe increased contact with the West would be beneficial overall to Muslim society. He was intellectually open to Western philosophy and ideas, and believed engagement with them could lead to a revival and renaissance within Islamic thought. Like many other Islamic modernists, the Aga Khan held a low opinion of the traditional religious establishment (the ʿUlamāʾ) as well as what he saw as their rigid formalism, legalism, and literalism. Instead, he advocated for renewed ijtihād (independent reasoning) and ijmāʿ (consensus), the latter of which he understood in a modernist way to mean consensus-building. According to him, Muslims should go back to the original sources, especially the Qurʾān, in order to discover the true essence and spirit of Islam. Once the principles of the faith were discovered, they would be seen to be universal and modern. Islam, in his view, had an underlying liberal and democratic spirit. He also called for full civil and religious liberties, peace and disarmament, and an end to all wars. The Aga Khan opposed sectarianism, which he believed sapped the strength and unity of the Muslim community. Specifically, he called for a rapprochement between Sunnism and Shīʿism. This did not mean that he thought religious differences would go away, and he himself instructed his Ismāʿīlī followers to be dedicated to their own teachings. However, he believed in unity through accepting diversity, and by respecting differences of opinion. In his view, there was strength to be found in the diversity of Muslim traditions. The Aga Khan called for social reform of Muslim society, and he was able to implement such reforms within his own Ismāʿīlī community. 
As he believed Islam to be essentially a humanitarian religion, the Aga Khan called for the reduction and eradication of poverty. Like Sir Sayyid, the Aga Khan was concerned that Muslims had fallen behind the Hindu community in terms of education. According to him, education was the path to progress. He was a tireless advocate for compulsory and universal primary education, and also for the creation of higher institutions of learning. In terms of women's rights, the Aga Khan was more progressive in his views than Sir Sayyid and many other Islamic modernists of his time. The Aga Khan framed his pursuit of women's rights not simply in the context of women being better mothers or wives, but rather for women's own benefit. He endorsed the spiritual equality of men and women in Islam, and he also called for full political equality. This included the right to vote and the right to an education. In regard to the latter issue, he endorsed compulsory primary education for girls. He also encouraged women to pursue higher university-level education, and saw nothing wrong with co-educational institutions. Whereas Sir Sayyid prioritized the education of boys over girls, the Aga Khan instructed his followers that if they had a son and a daughter, and if they could only afford to send one of them to school, they should send the daughter rather than the son. The Aga Khan campaigned against the institutions of purdah and zenana, which he felt were oppressive and un-Islamic. He completely banned the purdah and the face veil for his Ismāʿīlī followers. The Aga Khan also restricted polygamy, encouraged marriage to widows, and banned child marriage. He also made marriage and divorce laws more equitable to women. Overall, he encouraged women to take part in all national activities and to agitate for their full religious, social, and political rights. 
Today, in large part due to the Aga Khan's reforms, the Ismāʿīlī community is one of the most progressive, peaceful, and prosperous branches of Islam. He was an owner of thoroughbred racing horses, including a record-equalling five winners of The Derby (Blenheim, Bahram, Mahmoud, My Love, Tulyar) and a total of sixteen winners of British Classic Races. He was British flat racing Champion Owner thirteen times. According to Ben Pimlott, biographer of Queen Elizabeth II, the Aga Khan presented Her Majesty with a filly called "Astrakhan", who won at Hurst Park Racecourse in 1950. In 1926, the Aga Khan gave a cup (the Aga Khan Trophy) to be awarded to the winners of an international team show jumping competition held at the annual horse show of the Royal Dublin Society in Dublin, Ireland, every first week in August. It attracts competitors from all of the main show jumping nations and is carried live on Irish national television. He wrote a number of books and papers, two of which are of immense importance, namely (1) "India in Transition", about the pre-partition politics of India, and (2) "The Memoirs of Aga Khan: World Enough and Time", his autobiography. Aga Khan III was succeeded as Aga Khan by his grandson Karim Aga Khan, who is the present Imam of the Ismaili Muslims. At the time of his death on 11 July 1957, his family members were in Versoix. 
A solicitor brought the will of the Aga Khan III from London to Geneva and read it before the family: “Ever since the time of my ancestor Ali, the first Imam, that is to say over a period of thirteen hundred years, it has always been the tradition of our family that each Imam chooses his successor at his absolute and unfettered discretion from amongst any of his descendants, whether they be sons or remote male issue and in these circumstances and in view of the fundamentally altered conditions in the world in very recent years due to the great changes which have taken place including the discoveries of atomic science, I am convinced that it is in the best interest of the Shia Muslim Ismailia Community that I should be succeeded by a young man who has been brought up and developed during recent years and in the midst of the new age and who brings a new outlook on life to his office as Imam. For these reasons, I appoint my grandson Karim, the son of my own son, Aly Salomone Khan to succeed to the title of Aga Khan and to the Imam and Pir of all Shia Ismailian followers” He is buried at the Mausoleum of Aga Khan, on the Nile in Aswan, Egypt. Pakistan Post issued a special 'Birth Centenary of Agha Khan III' postage stamp in his honor in 1977. Pakistan Post again issued a postage stamp in his honor in its 'Pioneers of Freedom' series in 1990.
https://en.wikipedia.org/wiki?curid=1546
Alexander Agassiz Alexander Emmanuel Rodolphe Agassiz (December 17, 1835 – March 27, 1910), son of Louis Agassiz and stepson of Elizabeth Cabot Agassiz, was an American scientist and engineer. Agassiz was born in Neuchâtel, Switzerland, and immigrated to the United States with his father, Louis, in 1849. He graduated from Harvard University in 1855, subsequently studying engineering and chemistry and taking the degree of bachelor of science at the Lawrence Scientific School of the same institution in 1857; in 1859 he became an assistant in the United States Coast Survey. Thenceforward he became a specialist in marine ichthyology. Agassiz was elected a Fellow of the American Academy of Arts and Sciences in 1862. Up until the summer of 1866, Agassiz worked as an assistant in the museum of natural history that his father founded at Harvard. E. J. Hulbert, a friend of Agassiz's brother-in-law, Quincy Adams Shaw, had discovered a rich copper lode known as the Calumet conglomerate on the Keweenaw Peninsula in Michigan. Hulbert persuaded them, along with a group of friends, to purchase a controlling interest in the mines, which later became known as the Calumet and Hecla Mining Company, based in Calumet, Michigan. That summer, Agassiz took a trip to see the mines for himself, and he afterwards became treasurer of the enterprise. Over the winter of 1866 and early 1867, mining operations began to falter, due to the difficulty of extracting copper from the conglomerate. Hulbert had sold his interests in the mines and had moved on to other ventures. But Agassiz refused to give up hope for the mines. He returned to the mines in March 1867 with his wife and young son. At that time, Calumet was a remote settlement, virtually inaccessible during the winter and very far removed from civilization even during the summer. With insufficient supplies at the mines, Agassiz struggled to maintain order, while back in Boston, Shaw was saddled with debt and the collapse of their interests. 
Shaw obtained financial assistance from John Simpkins, the selling agent for the enterprise, to continue operations. Agassiz continued to live at Calumet, making gradual progress in stabilizing the mining operations, such that he was able to leave the mines under the control of a general manager and return to Boston in 1868 before winter closed navigation. The mines continued to prosper, and in May 1871 several mines were consolidated to form the Calumet and Hecla Mining Company, with Shaw as its first president. In August 1871, Shaw "retired" to the board of directors and Agassiz became president, a position he held until his death. Until the turn of the century, this company was by far the largest copper producer in the United States, many years producing over half of the total. Agassiz was a major factor in the mine's continued success and visited the mines twice a year. He innovated by installing a giant engine, known as the Superior, which was able to lift 24 tons of rock from great depth. He also built a railroad and dredged a channel to navigable waters. However, after a time the mines did not require his full-time, year-round attention, and he returned to his interests in natural history at Harvard. Out of his copper fortune, he gave some US$500,000 to Harvard for the museum of comparative zoology and other purposes. Shortly after the death of his father in 1873, Agassiz acquired a small peninsula in Newport, Rhode Island, which features spectacular views of Narragansett Bay. Here he built a substantial house and a laboratory for use as his summer residence. The house was completed in 1875 and today is known as the Inn at Castle Hill. In 1875, he surveyed Lake Titicaca, Peru, examined the copper mines of Peru and Chile, and made a collection of Peruvian antiquities for the Museum of Comparative Zoology (MCZ), of which he was first curator from 1874 to 1885 and then director until his death in 1910. 
He assisted Charles Wyville Thomson in the examination and classification of the collections of the 1872 "Challenger" Expedition, and wrote the "Review of the Echini" (2 vols., 1872–1874) in the reports. Between 1877 and 1880 he took part in the three dredging expeditions of the steamer "Blake" of the Coast Survey, and presented a full account of them in two volumes (1888). In 1896 Agassiz visited Fiji and Queensland and inspected the Great Barrier Reef, publishing a paper on the subject in 1898. Of Agassiz's other writings on marine zoology, most are contained in the bulletins and memoirs of the museum of comparative zoology. However, in 1865, he published with Elizabeth Cary Agassiz, his stepmother, "Seaside Studies in Natural History", a work at once exact and stimulating. They also published, in 1871, "Marine Animals of Massachusetts Bay". He received the German Order Pour le Mérite for Science and Arts in August 1902. Agassiz served as a president of the National Academy of Sciences, which since 1913 has awarded the Alexander Agassiz Medal in his memory. He died in 1910 on board the RMS "Adriatic" en route to New York from Southampton. He was the father of three sons – George R. Agassiz (1861–1951), Maximilian Agassiz (1866–1943) and Rodolphe Agassiz (1871–1933). Alexander Agassiz is commemorated in the scientific name of a species of lizard, "Anolis agassizi".
https://en.wikipedia.org/wiki?curid=1548
Agathon Agathon was an Athenian tragic poet whose works have been lost. He is best known for his appearance in Plato's "Symposium", which describes the banquet given to celebrate his obtaining a prize for his first tragedy at the Lenaia in 416 BC. He is also a prominent character in Aristophanes' comedy the "Thesmophoriazusae". Agathon was the son of Tisamenus, and the lifelong companion of Pausanias, with whom he appears in both the "Symposium" and Plato's "Protagoras". Together with Pausanias, he later moved to the court of Archelaus, king of Macedon, who was recruiting playwrights; it is here that he probably died around 401 BC. Agathon introduced certain innovations into the Greek theater: Aristotle tells us in the "Poetics" (1456a) that the characters and plot of his "Anthos" were original and not, following Athenian dramatic orthodoxy, borrowed from mythological or historical subjects. Agathon was also the first playwright to write choral parts which were apparently independent from the main plot of his plays. Agathon is portrayed by Plato as a handsome young man, well dressed, of polished manners, courted by the fashion, wealth and wisdom of Athens, and dispensing hospitality with ease and refinement. The epideictic speech in praise of love which Agathon recites in the "Symposium" is full of beautiful but artificial rhetorical expressions, and has led some scholars to believe he may have been a student of Gorgias. In the "Symposium", Agathon is presented as the friend of the comic poet Aristophanes, but this alleged friendship did not prevent Aristophanes from harshly criticizing Agathon in at least two of his comic plays: the "Thesmophoriazousae" and the (now lost) "Gerytades". In the later play "Frogs", Aristophanes softens his criticisms, but even so it may be only for the sake of punning on Agathon's name (ἀγαθός, "good") that he makes Dionysus call him a "good poet". 
Agathon was also a friend of Euripides, another recruit to the court of Archelaus of Macedon. Agathon's extraordinary physical beauty is brought up repeatedly in the sources; the historian W. Rhys Roberts observes that "ὁ καλός Ἀγάθων ("ho kalos Agathon") has become almost a stereotyped phrase." The most detailed surviving description of Agathon is in the "Thesmophoriazousae," in which Agathon appears as a pale, clean-shaven young man dressed in women's clothes. Scholars are unsure how much of Aristophanes' portrayal is fact and how much mere comic invention. After a close reading of the "Thesmophoriazousae," the historian Jane McIntosh Snyder observed that Agathon's costume was almost identical to that of the famous lyric poet Anacreon, as he is portrayed in early 5th-century vase-paintings. Snyder theorizes that Agathon might have made a deliberate effort to mimic the sumptuous attire of his famous fellow-poet, although by Agathon's time, such clothing, especially the κεκρύφαλος ("kekryphalos", an elaborate covering for the hair) had long fallen out of fashion for men. According to this interpretation, Agathon is mocked in the "Thesmophoriazousae" not only for his notorious effeminacy, but also for the pretentiousness of his dress: "he seems to think of himself, in all his elegant finery, as a rival to the old Ionian poets, perhaps even to Anacreon himself." Agathon has been thought to be the subject of "Lovers' Lips", an epigram attributed to Plato. Although the authenticity of this epigram was accepted for many centuries, it was probably not composed for Agathon the tragedian, nor was it composed by Plato. Stylistic evidence suggests that the poem (with most of Plato's other alleged epigrams) was actually written some time after Plato had died: its form is that of the Hellenistic erotic epigram, which did not become popular until after 300 BC. 
According to 20th-century scholar Walther Ludwig, the poems were spuriously inserted into an early biography of Plato sometime between 250 BC and 100 BC and adopted by later writers from this source. Of Agathon's plays, only six titles and thirty-one fragments have survived. The fragments are collected in A. Nauck, "Tragicorum graecorum fragmenta" (1887), and, in Greek with English translations, in Matthew Wright's "The Lost Plays of Greek Tragedy (Volume 1): Neglected Authors" (2016). A saying attributed to Agathon is quoted in Marcus Aurelius' "Meditations" (IV:18).
https://en.wikipedia.org/wiki?curid=1549
Agesilaus II Agesilaus II ("Agesilaos"; c. 444/443 – c. 360 BC) was a king ("basileus") of the ancient Greek city-state of Sparta and a member of the Eurypontid dynasty, ruling from 398 to about 360 BC. During most of that time he was, in Plutarch's words, "as good as though commander and king of all Greece", and for the whole of it he was greatly identified with his country's deeds and fortunes. Small in stature and lame from birth, Agesilaus became ruler somewhat unexpectedly in his mid-forties. His reign saw successful military incursions into various states in Asia Minor, as well as successes in the Corinthian War; however, several diplomatic decisions resulted in Sparta becoming increasingly isolated prior to his death at the age of 84 in Cyrenaica. Agesilaus was greatly admired by his friend, the historian Xenophon, who wrote a minor work about him titled "Agesilaus". Agesilaus was the son of Archidamus II and his second wife, Eupoleia, brother to Cynisca (the first woman in ancient history to achieve an Olympic victory), and younger half-brother of Agis II. There is little surviving detail on the youth of Agesilaus. Born with one leg shorter, he was not expected to succeed to the throne after his brother king Agis II, especially because the latter had a son (Leotychidas). Therefore, Agesilaus was trained in the traditional curriculum of Sparta, the "agoge." However, Leotychidas was ultimately set aside as illegitimate (contemporary rumors representing him as the son of Alcibiades) and Agesilaus became king in 398, at the age of about forty. In addition to questions of his nephew's paternity, Agesilaus' succession was largely due to the intervention of the Spartan general, Lysander, who hoped to find in him a willing tool for the furtherance of his political designs. Lysander and the young Agesilaus came to maintain an intimate relation (see Pederasty in Ancient Greece), as was common of the period. 
Their unique relationship would serve an important role during Agesilaus' later campaigns in Asia Minor. Agesilaus is first recorded as king during the suppression of the conspiracy of Cinadon, shortly after 398 BC. Then, in 396, Agesilaus crossed into Asia with a force of 2,000 neodamodes (freed helots) and 6,000 allies (including 30 Spartiates) to liberate Greek cities from Persian dominion. On the eve of sailing from Aulis he attempted to offer a sacrifice, as Agamemnon had done before the Trojan expedition, but the Thebans intervened to prevent it, an insult for which he never forgave them. On his arrival at Ephesus in 396 BC, a three months' truce was concluded with Tissaphernes, the satrap of Lydia and Caria, but negotiations conducted during that time proved fruitless, and on its termination Agesilaus raided Hellespontine Phrygia, where he easily won immense booty from the satrap Pharnabazus; Tissaphernes could offer no assistance, as he had concentrated his troops in Caria. In these campaigns Agesilaus also benefited from the aid of some of the Ten Thousand (a Greek mercenary army), who had marched through miles of Persian territory to reach the Black Sea a few years earlier (401–399 BC). After spending the winter organizing a cavalry force ("hippeis"), he made a successful incursion into Lydia in the spring of 395. Tithraustes was sent to replace Tissaphernes, who paid with his life for his continued failure. An armistice was concluded between Tithraustes and Agesilaus, who left the southern satrapy and again invaded Hellespontine Phrygia, which he ravaged until the following spring. He then came to an agreement with Pharnabazus, whom he met personally, and once more turned southward. During these campaigns, Lysander attempted to manipulate Agesilaus into ceding his authority. Agesilaus would have nothing of this, and reminded Lysander (who was only a Spartan general) who was king. He had Lysander sent away to assist the naval campaigns in the Aegean. 
This dominating move by Agesilaus earned the respect of his men-at-arms and of Lysander himself, who remained emotionally close with Agesilaus. In 394, while encamped on the plain of Thebe, Agesilaus was planning a campaign in the interior of Asia Minor, or even an attack on Artaxerxes II himself, when he was recalled to Greece to fight in the Corinthian War between Sparta and the combined forces of Athens, Thebes, Corinth, Argos and several minor states. The outbreak of the conflict had been encouraged by Persian payments to Sparta's Greek rivals. Tens of thousands of Darics, the main currency in Achaemenid coinage, were used to bribe the Greek states to start a war against Sparta. According to Plutarch, Agesilaus said upon leaving Asia Minor "I have been driven out by 10,000 Persian archers", a reference to "Archers" ("Toxotai"), the Greek nickname for the Darics from their obverse design, because that much money had been paid to politicians in Athens and Thebes in order to start a war against Sparta. A rapid march through Thrace and Macedonia brought him to Thessaly, where he repulsed the Thessalian cavalry who tried to impede him. Reinforced by Phocian and Orchomenian troops and a Spartan army, he met the confederate forces at Coronea in Boeotia and in a hotly contested battle was technically victorious. However, the Spartan baggage train was ransacked and Agesilaus himself was injured during the fighting, resulting in a subsequent retreat by way of Delphi to the Peloponnese. Shortly before this battle the Spartan navy, of which he had received the supreme command, was totally defeated off Cnidus by a powerful Persian fleet under Conon and Pharnabazus. During these conflicts in mainland Greece, Lysander perished while attacking the walls of Haliartus. 
Pausanias, the second king of Sparta (see Spartan Constitution for more information on Sparta's dual monarchy), was supposed to provide Lysander with reinforcements as they marched into Boeotia, yet failed to arrive in time to assist Lysander, likely because Pausanias disliked him for his brash and arrogant attitude towards the Spartan royalty and government. Pausanias failed to fight for the bodies of the dead, and because he retrieved the bodies under truce (a sign of defeat), he was disgraced and banished from Sparta. In 393, Agesilaus engaged in a ravaging invasion of Argolis. In 392 BC he made several successful expeditions into Corinthian territory, capturing Lechaeum and Piraeus. The loss, however, of a battalion (mora), destroyed by Iphicrates, neutralized these successes, and Agesilaus returned to Sparta. In 389 BC he conducted a campaign in Acarnania, but two years later the Peace of Antalcidas, warmly supported by Agesilaus, put an end to the war, maintaining Spartan hegemony over Greece and returning the Greek cities of Asia Minor to the Achaemenid Empire. In this interval, Agesilaus declined command over Sparta's aggression on Mantineia, and justified Phoebidas' seizure of the Theban Cadmea so long as the outcome provided glory to Sparta. When war broke out afresh with Thebes, Agesilaus twice invaded Boeotia (in 378 and 377 BC), although he spent the next five years largely out of action due to an unspecified but apparently grave illness. In the congress of 371 an altercation is recorded between him and the Theban general Epaminondas, and due to his influence, Thebes was peremptorily excluded from the peace, and orders given for Agesilaus's royal colleague Cleombrotus to march against Thebes in 371. Cleombrotus was defeated and killed at the Battle of Leuctra and the Spartan supremacy overthrown. In 370 Agesilaus was engaged in an embassy to Mantineia, and reassured the Spartans with an invasion of Arcadia. 
He preserved an un-walled Sparta against the revolts and conspiracies of helots, perioeci and even other Spartans; and against external enemies, with four different armies led by Epaminondas penetrating Laconia that same year. In 366 BC, Sparta and Athens, dissatisfied with the Persian king's support of Thebes following the embassy of Philiscus of Abydos, decided to provide careful military support to the opponents of the Achaemenid king. Athens and Sparta provided support for the revolting satraps in the Revolt of the Satraps, in particular Ariobarzanes: Sparta sent a force to Ariobarzanes under an aging Agesilaus, while Athens sent a force under Timotheus, which was however diverted when it became obvious that Ariobarzanes had entered frontal conflict with the Achaemenid king. An Athenian mercenary force under Chabrias was also sent to the Egyptian Pharaoh Tachos, who was likewise fighting against the Achaemenid king. According to Xenophon, Agesilaus, in order to gain money for prosecuting the war, supported the satrap Ariobarzanes of Phrygia in his revolt against Artaxerxes II in 364 (Revolt of the Satraps). Again, in 362, Epaminondas almost succeeded in seizing the city of Sparta with a rapid and unexpected march. The Battle of Mantinea, in which Agesilaus took no part, was followed by a general peace: Sparta, however, stood aloof, hoping even yet to recover her supremacy. In 361, Agesilaus went to Egypt at the head of a mercenary force to aid the king Nectanebo I and his regent Teos against Persia. He soon transferred his services to Teos's cousin and rival Nectanebo II, who, in return for his help, gave him a sum of over 200 talents. On his way home Agesilaus died in Cyrenaica, around the age of 84, after a reign of some 41 years. His body was embalmed in wax, and buried at Sparta. He was succeeded by his son Archidamus III. Agesilaus was of small stature and unimpressive appearance, and was lame from birth. 
These facts were used as an argument against his succession, an oracle having warned Sparta against a "lame reign." Most ancient writers considered him a highly successful leader in guerrilla warfare, alert and quick, yet cautious—a man, moreover, whose personal bravery was rarely questioned in his own time. Of his courage, temperance, and hardiness, many instances are cited, and to these were added the less Spartan qualities of kindliness and tenderness as a father and a friend. For example, there is the story of his riding a stick-horse with his children; when discovered by a friend, he asked that it not be mentioned until the friend himself was the father of children. And because of his son Archidamus' affection for Cleonymus, he saved Sphodrias, Cleonymus' father, from execution for his incursion into the Piraeus, and dishonorable retreat, in 378. Modern writers tend to be slightly more critical of Agesilaus' reputation and achievements, reckoning him an excellent soldier, but one who had a poor understanding of sea power and siegecraft. As a statesman he won himself both enthusiastic adherents and bitter enemies. Agesilaus was most successful in the opening and closing periods of his reign: commencing but then surrendering a glorious career in Asia; and in extreme age, maintaining his prostrate country. Other writers acknowledge his extremely high popularity at home, but suggest his occasionally rigid and arguably irrational political loyalties and convictions contributed greatly to Spartan decline, notably his unremitting hatred of Thebes, which led to Sparta's humiliation at the Battle of Leuctra and thus the end of Spartan hegemony. Historian J. B. 
Bury remarks that "there is something melancholy about his career": born into a Sparta that was the unquestioned continental power of Hellas, the Sparta which mourned him eighty-four years later had suffered a series of military defeats which would have been unthinkable to his forebears, had seen its population severely decline, and had run so short of money that its soldiers were increasingly sent on campaigns fought more for money than for defense or glory. Other historical accounts paint Agesilaus as a prototype for the ideal leader. His awareness, thoughtfulness, and wisdom were all traits to be emulated diplomatically, while his bravery and shrewdness in battle epitomized the heroic Greek commander. These historians point towards the unstable oligarchies established by Lysander in the former Athenian Empire and the failures of Spartan leaders (such as Pausanias and Kleombrotos) for the eventual suppression of Spartan power. The ancient historian Xenophon was a huge admirer and served under Agesilaus during the campaigns into Asia Minor. Plutarch includes, among the 78 essays and speeches comprising the apophthegmata, Agesilaus' letter to the ephors on his recall, and his reply when asked whether he wanted a memorial erected in his honor. Agesilaus lived in the most frugal style alike at home and in the field, and though his campaigns were undertaken largely to secure booty, he was content to enrich the state and his friends and to return as poor as he had set forth. When someone was praising an orator for his ability to magnify small points, Agesilaus said, "In my opinion it's not a good cobbler who fits large shoes on small feet." Another time Agesilaus watched a mouse being pulled from its hole by a small boy. When the mouse turned around, bit the hand of its captor and escaped, he pointed this out to those present and said, "When the tiniest creature defends itself like this against aggressors, what ought men to do, do you reckon?" 
Certainly when somebody asked what gain the laws of Lycurgus had brought Sparta, Agesilaus answered, "Contempt for pleasures." Asked once how far Sparta's boundaries stretched, Agesilaus brandished his spear and said, "As far as this can reach." On noticing a house in Asia roofed with square beams, Agesilaus asked the owner whether timber grew square in that area. When told no, it grew round, he said, "What then? If it were square, would you make it round?" Invited to hear an actor who could perfectly imitate the nightingale, Agesilaus declined, saying he had heard the nightingale itself.
https://en.wikipedia.org/wiki?curid=1550
Agrippina the Elder Vipsania Agrippina (Classical Latin: AGRIPPINA•GERMANICI; c. 14 BC – AD 33), commonly referred to as Agrippina the Elder, was a prominent member of the Julio-Claudian dynasty. She was born in c. 14 BC, the daughter of Marcus Vipsanius Agrippa, a close supporter of Rome's first emperor Augustus, and Augustus' daughter Julia the Elder. At the time of her birth, her brothers Lucius and Gaius were the adoptive sons of Augustus and were his heirs until their deaths in AD 2 and 4, respectively. Following their deaths, her cousin Germanicus was made the adoptive son of Tiberius as part of Augustus' succession scheme in the adoptions of AD 4, in which Tiberius was adopted by Augustus. As a corollary to the adoption, Agrippina was wed to Germanicus in order to bring him closer to the Julian family. She is known to have traveled with him throughout his career, taking her children everywhere they went. In AD 14, Germanicus was deployed in Gaul as governor and general. While there, Augustus sent her son Gaius to her at an unspecified location. She liked to dress him in a little soldier's outfit complete with boots, for which Gaius earned the nickname "Caligula" ("little soldier's boots"). After three years in Gaul they returned to Rome and her husband was awarded a triumph on 26 May AD 17 to commemorate his victories. The following year, Germanicus was sent to govern over the eastern provinces. While Germanicus was active in his administration, the governor of Syria Gnaeus Calpurnius Piso began feuding with him. During the feud, her husband died of illness on 10 October AD 19. Germanicus was cremated in Antioch and she transported his ashes to Rome where they were interred at the Mausoleum of Augustus. Agrippina was vocal in claiming her husband was murdered to promote Tiberius' son Drusus Julius Caesar ("Drusus the Younger") as heir. 
Following the model of her grandmother Livia, she spent the time following Germanicus' death supporting the cause of her sons Nero and Drusus Caesar. This put her and her sons at odds with the powerful Praetorian prefect Lucius Aelius Sejanus, who would begin eliminating their supporters with accusations of treason and sexual misconduct in AD 26. Her family's rivalry with Sejanus would culminate with her and Nero's exile in AD 29. Nero was exiled to Pontia and she was exiled to the island of Pandateria, where she would remain until her death by starvation in AD 33. Following the Roman custom of parents and children sharing the same nomen and cognomen, women in the same family would often share the same name. Accordingly, Marcus Vipsanius Agrippa had many relatives who shared the name "Vipsania Agrippina". To distinguish Marcus Agrippa's daughter from his granddaughter, historians refer to his daughter as Latin "Agrippina Maior", literally "Agrippina the Elder". Likewise, Agrippina's daughter is referred to as "Agrippina Minor", literally "Agrippina the Younger". Like her father, Agrippina the Elder avoided her nomen and was never called "Vipsania". Marcus Vipsanius Agrippa was an early supporter of Augustus (then "Octavius") during the Final War of the Roman Republic that ensued as a result of the assassination of Julius Caesar in 44 BC. He was a key general in Augustus' armies, commanding troops in pivotal battles against Mark Antony and Sextus Pompeius. From early in the emperor's reign, Agrippa was trusted to handle affairs in the eastern provinces and was even given the signet ring of Augustus, who appeared to be on his deathbed in 23 BC, a sign that he would become "princeps" were Augustus to die. It is probable that he was to rule until the emperor's nephew, Marcus Claudius Marcellus, came of age. However, Marcellus died that year of an illness that became an epidemic in Rome. 
Now, with Marcellus dead, Augustus arranged for the marriage of Agrippa to his daughter Julia the Elder, who was previously the wife of Marcellus. Agrippa was given "tribunicia potestas" ("the tribunician power") in 18 BC, a power that only the emperor and his immediate heir could hope to attain. The tribunician power allowed him to control the Senate, and it was first given to Julius Caesar. Agrippa acted as tribune in the Senate to pass important legislation and, though he lacked some of the emperor's power and authority, he was approaching the position of co-regent. After the birth of Agrippa's second son, Lucius, in 17 BC, Lucius and his brother Gaius were adopted together by Augustus. Around the time of their adoption in the summer, Augustus held the fifth ever "Ludi Saeculares" ("Secular Games"). Cassius Dio says the adoption of the boys coupled with the games served to introduce a new era of peace – the "Pax Augusta". It is not known what Agrippa thought of their adoption; however, following their adoption, Agrippa was dispatched to govern the eastern provinces, bringing his family with him. Agrippina was born in 14 BC to Marcus Vipsanius Agrippa and Julia the Elder, before their return to Rome in 13 BC. She had several siblings, including half-sisters Vipsania Agrippina, Vipsania Attica, Vipsania Marcella and Vipsania Marcellina (from her father's marriages to Pomponia Caecilia Attica and Claudia Marcella Major); and four full siblings: three brothers, Gaius, Lucius, and Postumus Agrippa (all were adopted by Augustus; Gaius and Lucius were adopted together following Lucius' birth in 17 BC, Postumus in AD 4), and a sister, Julia the Younger. She was a prominent member of the Julio-Claudian dynasty. On her mother's side, she was the younger granddaughter of Augustus. She was the stepdaughter of Tiberius by her mother's marriage to him, and sister-in-law of Claudius, the brother of her husband Germanicus. 
Her son Gaius, better known as "Caligula", would be the fourth emperor, and her grandson Nero would be the last emperor of the dynasty. In 13 BC, her father returned to Rome and was promptly sent to Pannonia to suppress a rebellion. Agrippa arrived there that winter (in 12 BC), but the Pannonians gave up that same year. Agrippa returned to Campania in Italy, where he fell ill and died soon after. After her father's death, she spent the rest of her childhood in Augustus' household where access to her was strictly controlled. Some of the currency issued in 13–12 BC, the "aurei" and "denarii", make it clear that her brothers Gaius and Lucius were Augustus' intended heirs. Their father was no longer available to assume the reins of power if the Emperor were to die, and Augustus had to make it clear who his intended heirs were in case anything should happen. Lucius' and Gaius' military and political careers would steadily advance until their deaths in AD 2 and 4, respectively. The death of her brothers meant that Augustus had to find other heirs. Although he initially considered Agrippina's second cousin Germanicus a potential heir for a time, Livia convinced Augustus to adopt Tiberius, Livia's son from her first marriage with Tiberius Claudius Nero. Although Augustus adopted Tiberius, it was on condition that Tiberius first adopt Germanicus so that Germanicus would become second in the line of succession. It was a corollary to the adoption, probably in the next year, that Agrippina was married to Germanicus. By her husband Germanicus, she had nine children: Nero Julius Caesar, Drusus Julius Caesar, Tiberius Julius Caesar, a child of unknown name (normally referenced as "Ignotus"), Gaius the Elder, the Emperor Caligula (Gaius the Younger), the Empress Agrippina the Younger, Julia Drusilla, and Julia Livilla. Only six of their children came of age; Tiberius and the Ignotus died as infants, and Gaius the Elder in his early childhood. 
Her husband's career in the military began in AD 6, with the Batonian War in Pannonia and Dalmatia. Throughout Germanicus' military career, Agrippina is known to have traveled with her husband and their children. Germanicus' career advanced steadily as he advanced in ranks following the "cursus honorum" until, in AD 12, he was made consul. The following year, he was given command over Gaul and the forces on the Rhine, totaling eight legions. On 18 May AD 14, her one-year-old son Gaius was sent by Augustus from Rome to join her in Gaul. She was pregnant at the time and, while Germanicus was collecting taxes across Gaul, she remained at a separate, unspecified location, presumably for her safety. Augustus sent her a letter with her son's party. Later that year, on 19 August, Augustus died while away in Campania. As a result, Tiberius was made "princeps". While Germanicus was administering the oath of fealty to Tiberius, a mutiny began among the forces on the Rhine. During the mutiny, Agrippina brought out their sixth child, Gaius, and made preparations to take him away to a safer town nearby. He was in a full army outfit including the legionary hobnailed boots ("caligae"). These military booties earned Gaius the nickname "Caligula" (lit. "little boots"), and garnered sympathy for Agrippina and the child among the soldiery. Tacitus credits her actions with quelling the mutiny (Tacitus, "Annals" 1.40–4). Once the mutiny was put to an end, Germanicus allowed the soldiers to deal with the ringleaders, which they did with brutal severity. He then led them against the Germanic tribes, perhaps in an effort to prevent future mutiny. Germanicus would remain in Gaul fighting against the Germanic tribes until AD 16, at which time he was recalled to Rome by Tiberius. His campaigns won him much renown among the Roman people, and he was awarded a triumph on 26 May AD 17. In AD 18, Agrippina left for the eastern provinces with her family. 
Germanicus was sent east to govern the provinces, the same assignment her father was given years earlier. Agrippina was pregnant on their journey east and, on the way to Syria, she gave birth to her youngest daughter Julia Livilla on the island of Lesbos. Inscriptions celebrating her fertility have been found on the island. Tiberius sent Gnaeus Calpurnius Piso to assist her husband, naming him governor of Syria. During their time there, Germanicus was active in his administration of the eastern regions. Piso did not get along well with Germanicus and their relationship only got worse. In AD 19, Germanicus ordered Piso to leave the province, which Piso began to do. On his way back to Rome, Piso stopped at the island of Kos off the coast of Syria. Around that time Germanicus fell ill, and he died on 10 October AD 19 at Antioch. Rumours spread of Piso poisoning her husband on the emperor's orders. After Germanicus' cremation in the forum of Antioch, Agrippina personally carried the ashes of her husband to Rome. The transportation of the ashes was accompanied by national mourning. She landed at the port of Brundisium in southern Italy where she was met with huge crowds of sympathizers; a praetorian escort was provided by the emperor in light of her rank as the wife of a governor-general. As she passed each town, the people and local magistrates came out to show their respect. Drusus the Younger (son of Tiberius), Claudius, and the consuls journeyed to join the procession as well. Once she made it to Rome, her husband's ashes were interred at the Mausoleum of Augustus. Tiberius and Livia did not make an appearance. Her marriage to Germanicus had served to unite the imperial family. Agrippina may have suspected Tiberius' involvement in the death of her husband and, with Germanicus dead, she no longer had any familial ties to the emperor. 
Historian Richard Alston says it is likely that either Tiberius or Livia were behind the exile of Agrippina's half-sister and the death of Postumus. He notes the death of Agrippina's mother, who starved herself to death amidst her exile in AD 14, linking her death to Tiberius' disdain for her. Agrippina was vocal about her feelings claiming that Germanicus was murdered to promote Drusus the Younger as Tiberius' heir, and worried that the birth of the Younger Drusus' twin sons would displace her own sons in the line of succession. Her fears proved to be unfounded, with her son Nero receiving the "toga virilis" ("toga of manhood") from Tiberius and the Younger Drusus on 7 June AD 20. Further, Nero was promised the office of quaestor five years before the ordinary age and was wed to Tiberius' granddaughter Julia. Agrippina's second oldest son Drusus was given similar honors and was also promised the office of quaestor in advance when he reached his fourteenth year in AD 23. At about this time, Tiberius' Praetorian Prefect Sejanus was becoming powerful in Rome and began feuding with Drusus the Younger. While the exact causes of the feud are unknown, it ended when the Younger Drusus died of seemingly natural causes on 14 September AD 23. After the death of Tiberius' son, Agrippina wanted to advance the careers of her sons, who were all potential heirs for Tiberius. To achieve this, Agrippina presented the Great Cameo of France to Tiberius. It was a personalized gift that positioned the family of Germanicus around the emperor. The work was designed to convince Tiberius to choose her children as his heirs. It is likely she was the one who commissioned the Great Cameo of France. Ultimately, the death of Tiberius' son elevated her own children to the position of heirs. Her sons were the logical choice, because they were the sons of Germanicus and Tiberius' grandsons were too young. 
Nero was becoming popular in the Senate due in part, Tacitus says, to his resemblance to his father. The rise of her children was threatening to Sejanus' position. As a result, Sejanus began spreading rumors about Agrippina in the imperial court. The coming years were marked with increasing hostility between Sejanus and Agrippina and her sons. This effectively caused factions to rise in the aristocracy between her family and Sejanus. On New Year's Day, AD 24, Sejanus had the priests and magistrates add prayers for the health of Nero and Drusus in addition to those normally offered to the emperor on that day. Tiberius was not happy with this and he voiced his displeasure in the Senate. In addition, he questioned the priests of the Palatine. Some of the priests who offered the prayers were relatives of Agrippina and Germanicus. This made Tiberius suspicious of her and marked a change in his attitude toward her and her older sons, but not Caligula. In AD 25, Sejanus requested Livilla's hand in marriage. Livilla was a niece of the emperor, which would have made him a member of the imperial family. While this did make his ambitions clear, his request was denied. The refusal might have been a great loss for Sejanus had the dissensions in the imperial household not kept deteriorating. Relations were so bad that Agrippina refused to eat at Tiberius' dinner parties for fear of being poisoned. She also asked Tiberius if she could be allowed to remarry, which he also refused. If either of them were allowed to remarry it would have threatened the line of succession that Tiberius was comfortable with. By refusing Sejanus' request, Tiberius made it clear he was content with the children of Germanicus and his own grandchildren being his successors. Had Sejanus married Livilla, their children would have provided another line of possible successors. 
The implication of Agrippina's request was that she needed a man from outside the imperial family to serve as protector and step-father of possible imperial heirs, a powerful position. It was also an implied reprimand: Tiberius was meant to be the guardian of the imperial family. Tiberius was in a tough position, faced with a conflict between his family and his friend. His solution was surprising: in AD 26, he left Rome altogether and retired to the island of Capri in the Bay of Naples. He cut himself off from the factions altogether and abandoned politics, leaving Rome in the care of Sejanus. This allowed Sejanus to freely attack his rivals. With Tiberius away from Rome, the city would see a rise of politically motivated trials on the part of Sejanus and his supporters against Agrippina and her associates. Many of her friends and associates were subsequently accused of "maiestas" ("treason") by the growing number of accusers. It was also common to see charges of sexual misconduct and corruption. In AD 27, Agrippina found herself placed under house arrest in her suburban villa outside Herculaneum. In AD 28, the Senate voted that altars to "Clementia" (mercy) and "Amicitia" (friendship) be raised. At that time, "Clementia" was considered a virtue of the ruling class, for only the powerful could give clemency. The altar of "Amicitia" was flanked by statues of Sejanus and Tiberius. By this time, his association with Tiberius was such that there were those in Roman society who erected statues in Sejanus' honor and offered him prayers and sacrifices. Sejanus' birthday was honored as if he were a member of the imperial family. According to Richard Alston, "Sejanus' association with Tiberius must have at least indicated to the people that he would be further elevated." Sejanus did not begin his final attack on Agrippina until after the death of Livia in AD 29. 
Tacitus reports a letter being sent to the Senate from Tiberius denouncing Agrippina for her arrogance and prideful attitude, and Nero for engaging in shameful sexual activities. The Senate would not begin the highly unpopular prosecutions against her or her son until it received clear instructions from Tiberius to do so. Despite public outcry, Agrippina and Nero were declared public enemies ("hostes") after the emperor repeated the accusations. They were both exiled: Nero to Pontia, where he was killed or encouraged to commit suicide in AD 31, and Agrippina to the island of Pandateria (the same place her mother had been exiled to). Suetonius says that while on Pandateria she lost an eye when she was beaten by a centurion. She remained on the island until her death in AD 33. Accounts of her death vary. She is said to have died from starvation, but it is not certain whether it was self-imposed. Tacitus says food was withheld from her in an effort to make her death seem like a suicide. Her son Drusus was later also exiled on charges of sexual misdemeanors. Sejanus remained powerful until his sudden downfall and summary execution in October AD 31, just after Nero's death; the exact cause of his fall remains unclear. Alston suggests that Sejanus may have been acting in Tiberius' favor to remove Germanicus' family from power, noting that Agrippina and Nero's brother Drusus were left in exile even after Sejanus' death. The deaths of Agrippina's older sons elevated her youngest son Caligula to the position of successor, and he became "princeps" when Tiberius died in AD 37. Drusus the Younger's son Tiberius Gemellus was summoned to Capri by his grandfather Tiberius, where he and Caligula were made joint heirs. When Caligula assumed power he made Gemellus his adopted son, but soon had Gemellus killed for plotting against him. 
After he became emperor, Caligula took on the role of a dutiful son and brother in a public show of "pietas" ("piety"). He went out to the islands of Pontia and Pandateria to recover the remains of Agrippina and Nero. Recovering Nero's bones was not easy, as they had been scattered and buried, and Caligula had a stormy passage; the difficulty of the task, however, made his devotion seem even greater. The ashes were brought to Ostia, from where they were carried up the Tiber to the Campus Martius, where equestrians placed them on biers to join the ashes of Germanicus in the mausoleum of Augustus. The gesture was reminiscent of Agrippina carrying the ashes of her husband just over 17 years earlier. Agrippina's funerary urn still survives. Agrippina was fiercely independent, a trait she shared with her mother. Dio described her as having ambitions to match her pedigree. However, Anthony Barrett notes that Agrippina was fully aware that a woman in ancient Rome could not hold power in her own right. Instead, Agrippina followed the model of Livia in promoting the careers of her children. She and her daughter, Agrippina the Younger, are both described as being equally ambitious for their sons. Whereas the elder Agrippina's son failed to become emperor, the younger Agrippina's son, also named Nero, succeeded. By contrast, Tacitus has Agrippina the Elder merely standing on a bridge waving to the soldiers passing by, whereas her daughter eclipses her by presiding over a military tribunal and accepting gifts from foreign ambassadors. Tacitus also records serious tension between Agrippina and Livia. He describes Livia as having visited "stepmotherly provocations" on Agrippina. He says of Agrippina: "were it not that through her moral integrity and love for her husband she converted an otherwise ungovernable temper to the good" (Tacitus, "Annals" 1.33). 
Despite being sympathetic to her as a victim of imperial oppression, he uses expressions like "excitable", "arrogant", "proud", "fierce", "obstinate", and "ambitious" to describe Agrippina. His comments are echoed by other sources. Historian Lindsay Powell says Agrippina enjoyed a normal marriage and continued to show her devotion to Germanicus after his death. He says she was regarded by the Roman people as, quoting Tacitus, "the glory of the country, the sole surviving offspring of Augustus, the solitary example of the good old times." Alston cautions against accepting the stories of Agrippina's feud with Sejanus at face value, as these accounts reflect a tradition hostile to Tiberius and Sejanus. They may have been circulated by Agrippina's supporters, or they may have emerged after Sejanus' fall in AD 31. He adds: "These stories are plausible, though not certain to be true." Augustus was proud of Agrippina. Suetonius claims that Augustus wrote her a letter praising her intellect and directing her education. Suetonius also records that Augustus, who held strict views on self-restraint and respectable speech, cautioned Agrippina not to speak "offensively". When she next appears, she is being chastised by Tiberius in Greek for making irritating remarks, and the tone of the Greek verse quoted by Tiberius suggests that she should have heeded her grandfather's advice not to speak offensively. The "Annals" of Tacitus is a history of the Julio-Claudian dynasty beginning with the death of Augustus. In it, he portrays women as having a profound influence on politics. The women of the imperial family in particular are depicted by Tacitus as having a notable prominence in the public sphere as well as possessing a ferocity and ambition with which they pursue it. Tacitus presents them as outliving the imperial men and thus growing wiser as they advance in age. Among the broadest of his portrayals is that of Agrippina. 
He emphasizes their role in connecting the dynasty genetically back to Augustus, a significant factor in the marriages of the emperors and princes of the dynasty. The "Annals" repeatedly has Agrippina competing for influence with Tiberius simply because she is related to Augustus by blood. Tacitus presents Agrippina as kindred to aristocratic males, and has her reversing gender roles, showcasing her assumption of male "auctoritas" ("authority") with metaphors of her dressing and undressing. In an example of Agrippina assuming "auctoritas", he says: Using the epithet "(femina) ingens animi" ("[a woman], great for her courage"), he assigns a haughty attitude to Agrippina that compels her to explore the affairs of men. He records her as having reversed the natural order of things when she quelled the mutiny on the Rhine in AD 14. In so doing, he describes her as having usurped her husband's power, a power rightfully belonging only to a general. Portraits of Roman women from the Julio-Claudian dynasty display a freer hair treatment than those of traditional Roman men and show greater sensitivity in the rendering of different textures. These stylistic changes made reproductions of such portraits more popular in the mid-first century AD, and reproductions of her image continued to be made into that period. In her portraits, she is given a youthful face despite the fact that she lived to middle age. Agrippina's hair is a mass of curls that covers both sides of her head and falls to her shoulders. Her portraiture can be contrasted with that of Livia, who had a more austere Augustan hairstyle. 
There are three different periods during the first century AD when portraits of Agrippina were created: at the time of her marriage to Germanicus (which made her the mother of a potential emperor); when her son Caligula came to power in AD 37 and collected her ashes from the island of Pandateria for relocation to the Mausoleum of Augustus; and at the time of Claudius' marriage to her daughter Agrippina the Younger, when he wanted to connect himself to the lineage of Augustus by evoking Agrippina's image. Coins and inscriptions cannot serve as a means of discerning her age, because her hairstyle remains unchanged in all the representations. The easiest phase of portraits to identify are those dating to the time of Caligula, when a fair abundance of coins were minted with an image of his mother on them. These are posthumous portraits with idealized features. In the phase following Claudius' marriage, her features are made to resemble those of her daughter more closely. The goal was to strengthen Agrippina the Younger's connection with her mother. Finally, the portraits of her dating to the time of Tiberius are still idealized, but not as much as those from the period of Caligula's reign. Images of Agrippina from this period are the most lifelike. Agrippina has been depicted in many works of art. She is remembered in "De Mulieribus Claris", a collection of biographies of historical and mythological women by the Florentine author Giovanni Boccaccio, composed in 1361–62. It is notable as the first collection devoted exclusively to biographies of women in Western literature.
https://en.wikipedia.org/wiki?curid=1556
Agrippina the Younger Julia Agrippina (6 November AD 15 – 23 March AD 59), also referred to as Agrippina the Younger ("smaller", often used to mean "younger"), was a powerful Roman empress and one of the most prominent and influential women in the Julio-Claudian dynasty. Her father was Germanicus, a popular general and one-time heir apparent to the Roman Empire under Tiberius; her mother was Agrippina the Elder, a granddaughter of the first Roman emperor Augustus. She was also the younger sister of Caligula, and the niece and fourth wife of Claudius. Both ancient and modern sources describe Agrippina's personality as ruthless, ambitious, violent and domineering. Physically she was a beautiful and reputable woman; according to Pliny the Elder, she had a double canine in her upper right jaw, a sign of good fortune. Many ancient historians accuse Agrippina of poisoning her husband Claudius, though accounts vary. During the reign of her husband she was an effective force behind the Roman throne, and she later briefly served as the de facto ruler of the Roman Empire during the reign of her son, the emperor Nero. In AD 59 Agrippina was executed on the orders of Nero. Agrippina was the first daughter and fourth living child of Agrippina the Elder and Germanicus. She had three elder brothers, Nero Caesar, Drusus Caesar, and the future Emperor Caligula, and two younger sisters, Julia Drusilla and Julia Livilla. Agrippina's two eldest brothers and her mother were victims of the intrigues of the Praetorian Prefect Lucius Aelius Sejanus. She was the namesake of her mother. Agrippina the Elder was remembered as a modest and heroic matron, the second daughter and fourth child of Julia the Elder and the statesman Marcus Vipsanius Agrippa. The father of Julia the Elder was the Emperor Augustus, and Julia was his only natural child, from his second marriage to Scribonia, who had close blood relations with Pompey the Great and Lucius Cornelius Sulla. 
Germanicus, Agrippina's father, was a very popular general and politician. His mother was Antonia Minor and his father was the general Nero Claudius Drusus; he was Antonia Minor's first child. Germanicus had two younger siblings: a sister, Livilla, and a brother, the future Emperor Claudius. Claudius was Agrippina's paternal uncle and would become her third husband. Antonia Minor was a daughter of Octavia the Younger by her second marriage, to the triumvir Mark Antony, and Octavia was the second-eldest sister of Augustus and his only full-blooded sister. Germanicus' father, Drusus the Elder, was the second son of the Empress Livia Drusilla by her first marriage, to the praetor Tiberius Nero, and was the Emperor Tiberius's younger brother and Augustus's stepson. In AD 9, Augustus ordered Tiberius to adopt Germanicus, his own nephew, as his son and heir. Germanicus was a favorite of his great-uncle Augustus, who hoped that Germanicus would succeed his uncle Tiberius, Augustus's own adopted son and heir. This in turn meant that Tiberius was also Agrippina's adoptive grandfather in addition to being her paternal great-uncle. Agrippina was born on 6 November in AD 15, or possibly 14, at Oppidum Ubiorum, a Roman outpost on the Rhine River located in present-day Cologne, Germany. A second sister, Julia Drusilla, was born on 16 September AD 16, also in Germany. As a small child, Agrippina travelled with her parents throughout Germany (15–16) until she and her siblings (apart from Caligula) returned to Rome to live with and be raised by their paternal grandmother Antonia. Her parents departed for Syria in 18 to conduct official duties, and, according to Tacitus, the third and youngest sister, Julia Livilla, was born en route on the island of Lesbos, probably in March 18. In October AD 19, Germanicus died suddenly in Antioch (modern Antakya, Turkey). 
Germanicus' death caused much public grief in Rome and gave rise to rumors that he had been murdered by Gnaeus Calpurnius Piso and Munatia Plancina on the orders of Tiberius, as his widow Agrippina the Elder returned to Rome with his ashes. Agrippina the Younger was thereafter supervised by her mother, her paternal grandmother Antonia Minor, and her great-grandmother Livia, all of them notable, influential, and powerful figures from whom she learnt how to survive. She lived on the Palatine Hill in Rome. Her great-uncle Tiberius had already become emperor and head of the family after the death of Augustus in 14. After her thirteenth birthday in 28, Tiberius arranged for Agrippina to marry her paternal first cousin once removed, Gnaeus Domitius Ahenobarbus, and ordered the marriage to be celebrated in Rome. Domitius came from a distinguished family of consular rank. Through his mother, Antonia Major, Domitius was a great-nephew of Augustus, a first cousin of Claudius, and a first cousin once removed of Agrippina and Caligula. He had two sisters: Domitia Lepida the Elder and Domitia Lepida the Younger. Domitia Lepida the Younger was the mother of the Empress Valeria Messalina. Antonia Major was the elder sister of Antonia Minor, and the first daughter of Octavia Minor and Mark Antony. According to Suetonius, Domitius was a wealthy man with a despicable and dishonest character, "a man who was in every aspect of his life detestable", who served as consul in 32. Agrippina and Domitius lived between Antium (modern Anzio and Nettuno) and Rome. Not much is known about their relationship. Tiberius died on March 16, AD 37, and Agrippina's only surviving brother, Caligula, became the new emperor. Being the emperor's sister gave Agrippina some influence. 
Agrippina and her younger sisters Julia Drusilla and Julia Livilla received various honours from their brother. Around the time that Tiberius died, Agrippina became pregnant; Domitius acknowledged the paternity of the child. On December 15, AD 37, in the early morning at Antium, Agrippina gave birth to a son. Agrippina and Domitius named him Lucius Domitius Ahenobarbus, after Domitius' recently deceased father. This child would grow up to become the Emperor Nero, and he was Agrippina's only natural child. Suetonius states that Domitius was congratulated by friends on the birth of his son, whereupon he replied, "I don't think anything produced by me and Agrippina could possibly be good for the state or the people". Caligula and his sisters were accused of having incestuous relationships. On June 10, AD 38, Drusilla died, possibly of a fever rampant in Rome at the time. Caligula had been particularly fond of Drusilla, claiming to treat her as he would his own wife, even though Drusilla had a husband. Following her death, Caligula showed no special love or respect toward his surviving sisters and was said to have gone insane. In 39, Agrippina and Livilla, with their maternal cousin, Drusilla's widower Marcus Aemilius Lepidus, were involved in a failed plot to murder Caligula and make Lepidus the new emperor, a plot known as the "Plot of the Three Daggers". Lepidus, Agrippina and Livilla were accused of being lovers. Not much is known about this plot or the reasons behind it. At the trial of Lepidus, Caligula felt no compunction about denouncing his sisters as adulteresses, producing handwritten letters discussing how they were going to kill him. Lepidus was executed, and Agrippina and Livilla were exiled by their brother to the Pontine Islands. Caligula sold their furniture, jewellery, slaves and freedmen. In January AD 40, Domitius died of edema (dropsy) at Pyrgi. 
Lucius had gone to live with his second paternal aunt, Domitia Lepida the Younger, after Caligula had taken his inheritance from him. Caligula, his wife Milonia Caesonia and their daughter Julia Drusilla were murdered on January 24, 41. Agrippina's paternal uncle Claudius, brother of her father Germanicus, became the new Roman Emperor. Claudius lifted the exiles of Agrippina and Livilla. Livilla returned to her husband, while Agrippina was reunited with her estranged son. After the death of her first husband, Agrippina made shameless advances to the future emperor Galba, who showed no interest in her and was devoted to his wife Aemilia Lepida. On one occasion, Galba's mother-in-law gave Agrippina a public reprimand and a slap in the face before a whole bevy of married women. Claudius had Lucius' inheritance reinstated. Shortly afterwards, Lucius, despite his youth, grew wealthier when Gaius Sallustius Crispus Passienus divorced Lucius' first paternal aunt, Domitia Lepida the Elder, in order to marry Agrippina. They married, and Crispus became a stepfather to Lucius. Crispus was a prominent, influential, witty, wealthy and powerful man, who served twice as consul. He was the adopted grandson and biological great-great-nephew of the historian Sallust. Little is known of their relationship, but Crispus soon died and left his estate to Nero. In the first years of Claudius' reign, Claudius was married to the infamous Empress Valeria Messalina. Although Agrippina was very influential, she kept a very low profile and stayed away from the imperial palace and the court of the emperor. Messalina was Agrippina's second paternal cousin. Among the victims of Messalina's intrigues was Agrippina's surviving sister Livilla, who was charged with adultery with Seneca the Younger. Seneca was later recalled from exile to be a tutor to Nero. 
Messalina considered Agrippina's son a threat to her own son's position and sent assassins to strangle Lucius during his siesta. The assassins fled in terror when they saw a snake suddenly dart from beneath Lucius' pillow, but it was only a sloughed-off snake-skin in his bed, near his pillow. In 47 Crispus died, and at his funeral a rumor spread that Agrippina had poisoned him to gain his estate. Widowed a second time, Agrippina was left very wealthy. Later that year, at the performance of the Troy Pageant during the Secular Games, Messalina attended with her son Britannicus, and Agrippina was also present with Lucius. Agrippina and Lucius received greater applause from the audience than Messalina and Britannicus did. Many people began to show pity and sympathy toward Agrippina because of the unfortunate circumstances of her life. Agrippina wrote a memoir recording the misfortunes of her family ("casus suorum") and an account of her mother's life. After Messalina was executed in 48 for conspiring with Gaius Silius to overthrow her husband, Claudius considered remarrying for the fourth time. Around this time, Agrippina became the mistress of one of Claudius' advisers, the Greek freedman Marcus Antonius Pallas. At that time Claudius' advisers were discussing which noblewoman Claudius should marry, and Claudius had a reputation for being easily persuaded. More recently, it has been suggested that the Senate may have pushed for the marriage between Agrippina and Claudius to end the feud between the Julian and Claudian branches. This feud dated back to Agrippina's mother's actions against Tiberius after the death of Germanicus, actions which Tiberius had gladly punished. Claudius made references to her in his speeches: "my daughter and foster child, born and bred, in my lap, so to speak". When Claudius decided to marry her, he persuaded a group of senators that the marriage should be arranged in the public interest. 
In Roman society, an uncle (Claudius) marrying his niece (Agrippina) was considered incestuous and immoral. Agrippina and Claudius married on New Year's Day, 49, and the marriage caused widespread disapproval. It was part of Agrippina's plan to make her son Lucius the new emperor: her marriage to Claudius was based not on love but on power. She quickly eliminated her rival Lollia Paulina. Shortly after marrying Claudius, Agrippina charged Paulina with black magic. Paulina did not receive a hearing; her property was confiscated, and she left Italy and, on Agrippina's orders, committed suicide. In the months leading up to her marriage to Claudius, Agrippina's maternal second cousin, the praetor Lucius Junius Silanus Torquatus, was betrothed to Claudius' daughter Claudia Octavia. This betrothal was broken off in 48, when Agrippina, scheming with the consul Lucius Vitellius the Elder, the father of the future Emperor Aulus Vitellius, falsely accused Silanus of incest with his sister Junia Calvina. Agrippina did this in the hope of securing a marriage between Octavia and her son. Consequently, Claudius broke off the engagement and forced Silanus to resign from public office. Silanus committed suicide on the day that Agrippina married her uncle, and Calvina was exiled from Italy in early 49; she was recalled from exile after the death of Agrippina. Towards the end of 54, Agrippina would order the murder of Silanus' eldest brother, Marcus Junius Silanus Torquatus, without Nero's knowledge, so that he would not seek revenge against her for his brother's death. On the day that Agrippina married her uncle Claudius (her third marriage, his fourth), she became empress and the most powerful woman in the Roman Empire. She was also stepmother to Claudia Antonia, Claudius' daughter and only child from his second marriage, to Aelia Paetina, and to the young Claudia Octavia and Britannicus, Claudius' children with Valeria Messalina. 
Agrippina removed or eliminated from the palace and the imperial court anyone she thought loyal to the memory of the late Messalina, as well as anyone she considered a potential threat to her position and the future of her son; one of her victims was Lucius' second paternal aunt and Messalina's mother, Domitia Lepida the Younger. In 49, Agrippina was seated on a dais at a parade of captives when their leader, the Celtic king Caratacus, bowed before her with the same homage and gratitude he accorded the emperor. In 50, Agrippina was granted the honorific title of Augusta. She was only the third Roman woman to receive this title, after Livia Drusilla and Antonia Minor, and only the second to receive it while living (the first being Antonia). Also that year, Claudius founded a Roman colony and called it "Colonia Claudia Ara Agrippinensis" or "Agrippinensium", today known as Cologne, after Agrippina, who was born there. It was the only Roman colony named after a Roman woman. In 51, she was granted the use of a "carpentum", a sort of ceremonial carriage usually reserved for priests, such as the Vestal Virgins, and for sacred statues. That same year she appointed Sextus Afranius Burrus head of the Praetorian Guard, replacing the previous head, Rufrius Crispinus. Ancient sources claim that Agrippina successfully influenced Claudius into adopting her son and making him his successor. Lucius Domitius Ahenobarbus was adopted by his maternal great-uncle and stepfather in 50. Lucius' name was changed to "Nero Claudius Caesar Drusus Germanicus", and he became Claudius's adopted son, heir and recognised successor. Agrippina and Claudius betrothed Nero to Octavia, and Agrippina arranged to have Seneca the Younger return from exile to tutor the future emperor. Claudius chose to adopt Nero because of his Julian and Claudian lineage. 
Agrippina deprived Britannicus of his heritage and further isolated him from his father and from succession to the throne in every way possible. For instance, in 51, Agrippina ordered the execution of Britannicus' tutor Sosibius because he had confronted her, outraged by Claudius' adoption of Nero and his choice of Nero as successor instead of his own son Britannicus. Nero and Octavia were married on June 9, 53. Claudius later repented of marrying Agrippina and adopting Nero, began to favor Britannicus, and started preparing him for the throne. His actions allegedly gave Agrippina a motive to eliminate Claudius. The ancient sources say she poisoned Claudius on October 13, 54 (a Sunday) with a plate of deadly mushrooms at a banquet, thus enabling Nero to quickly take the throne as emperor. Accounts of this private incident vary wildly, and according to more modern sources it is possible that Claudius, then 63 years old, died of natural causes. Agrippina was named a priestess of the cult of the deified Claudius. She was allowed to attend Senate meetings, watching and listening from behind a curtain, and on coins and in statues she appeared as though she were a ruling empress alongside her son. During the first year of Nero's reign, Agrippina's influence over her son and over the Empire was nearly absolute. She started losing influence over Nero when he began an affair with the freedwoman Claudia Acte, which Agrippina strongly disapproved of and violently scolded him for. Agrippina then began to support Britannicus in an attempt to make him emperor, but Britannicus was secretly poisoned on Nero's orders during his own banquet in February 55. The power struggle between Agrippina and her son had begun. Between 55 and 58, Agrippina kept a watchful and critical eye over her son. In 55, Agrippina was forced out of the palace by her son to live in a separate imperial residence. 
Nero deprived his mother of all honors and powers, and even removed her Roman and German bodyguards. He even threatened that he would abdicate the throne and go to live on the Greek island of Rhodes, where Tiberius had lived after divorcing Julia the Elder. Pallas was also dismissed from the court. The fall of Pallas and the opposition of Burrus and Seneca contributed to Agrippina's loss of authority. Around 57, Agrippina was expelled from the palace and went to live in a riverside estate in Misenum. While Agrippina lived there, and when she went on short visits to Rome, Nero sent people to harass her. Although living in Misenum, she remained very popular, powerful and influential. Agrippina and Nero would see each other on short visits. The circumstances surrounding Agrippina's death are uncertain because of historical contradictions and anti-Nero bias. The surviving accounts of her death contradict themselves and each other, and are generally fantastical. According to Tacitus, in 58 Nero became involved with the noblewoman Poppaea Sabina. Reasoning that a divorce from Octavia and a marriage to Poppaea was not politically feasible while Agrippina lived, Nero decided to kill his mother. Yet Nero did not marry Poppaea until 62, calling this motive into question. Additionally, Suetonius reveals that Poppaea's husband, Otho, was not sent away by Nero until after Agrippina's death in 59, making it highly unlikely that the still-married Poppaea would have been pressing Nero. Some modern historians theorize that Nero's decision to kill Agrippina was prompted by her plotting to replace him with either Gaius Rubellius Plautus (Nero's maternal second cousin) or Britannicus (Claudius' biological son). Tacitus claims that Nero considered poisoning or stabbing her, but felt these methods were too difficult and suspicious, so he settled on building a self-sinking boat, following the advice of his former tutor Anicetus. 
Though aware of the plot, Agrippina embarked on this boat and was nearly crushed by a collapsing lead ceiling, saved only by the side of a sofa breaking the ceiling's fall. The collapsing ceiling missed Agrippina but crushed her attendant, who was outside by the helm. The boat failed to sink from the weight of the lead ceiling, so the crew sank it themselves, but Agrippina swam to shore. Her friend Acerronia Polla was attacked by oarsmen while still in the water and was either bludgeoned to death or drowned; she had been exclaiming that she was Agrippina in the hope of being saved, not knowing that this was an assassination attempt rather than a mere accident. Agrippina was met at the shore by crowds of admirers. When news of her survival reached Nero, he sent three assassins to kill her. Suetonius says that Agrippina's "over-watchful" and "over-critical" eye over Nero drove him to murder her. After months of attempting to humiliate her by depriving her of her power, honour, and bodyguards, he also expelled her from the Palatine, then sent people to "pester" her with lawsuits and "jeers and catcalls". When he eventually turned to murder, he first tried poison, three times in fact. She avoided death by taking the antidote in advance. Afterwards, he rigged up a machine in her room that would drop the ceiling tiles onto her as she slept, but she once again escaped after receiving word of the plan. Nero's final plan was to get her onto a boat that would collapse and sink. He sent her a friendly letter asking to reconcile and inviting her to celebrate the Quinquatrus at Baiae with him. There he arranged an "accidental" collision between her galley and the ship of one of his captains. When she was returning home, he offered her his collapsible boat in place of her damaged galley. The next day, Nero received word from her freedman Agermus that she had survived the boat's sinking. 
Panicking, Nero ordered a guard to "surreptitiously" drop a blade behind Agermus and Nero immediately had him arrested on account of attempted murder. Nero ordered the assassination of Agrippina. He made it look as if Agrippina had committed suicide after her plot to kill Nero had been uncovered. After Agrippina's death, Suetonius says that Nero examined Agrippina's corpse and discussed her good and bad points. It is said that Nero believed Agrippina to haunt him after her death. The tale of Cassius Dio is also somewhat different. It starts again with Poppaea as the motive behind the murder. Nero designed a ship that would open at the bottom while at sea. Agrippina was put aboard and after the bottom of the ship opened up, she fell into the water. Agrippina swam to shore so Nero sent an assassin to kill her. Nero then claimed Agrippina had plotted to kill him and committed suicide. Her reputed last words, uttered as the assassin was about to strike, were "Smite my womb", the implication here being she wished to be destroyed first in that part of her body that had given birth to so "abominable a son." After Agrippina's death, Nero viewed her corpse and commented how beautiful she was, according to some. Her body was cremated that night on a dining couch. At his mother's funeral, Nero was witless, speechless and rather scared. When the news spread that Agrippina had died, the Roman army, senate and various people sent him letters of congratulations that he had been saved from his mother's plots. During the remainder of Nero's reign, Agrippina's grave was not covered or enclosed. Her household later on gave her a modest tomb in Misenum. Nero would have his mother's death on his conscience. He felt so guilty he would sometimes have nightmares about his mother. He even saw his mother's ghost and got Persian magicians to scare her away. Years before she died, Agrippina had visited astrologers to ask about her son's future. 
The astrologers had rather accurately predicted that her son would become emperor and would kill her. She replied, "Let him kill me, provided he becomes emperor," according to Tacitus. She is remembered in "De Mulieribus Claris", a collection of biographies of historical and mythological women by the Florentine author Giovanni Boccaccio, composed in 1361–62. It is notable as the first collection in Western literature devoted exclusively to biographies of women. Most ancient Roman sources are quite critical of Agrippina the Younger. Tacitus considered her vicious and had a strong disposition against her. Other sources are Suetonius and Cassius Dio.
https://en.wikipedia.org/wiki?curid=1557
American Chinese cuisine American Chinese cuisine is a style of Chinese cuisine developed by Chinese Americans. The dishes served in many North American Chinese restaurants are adapted to American tastes and often differ significantly from those found in China. Chinese immigrants arrived in the United States seeking employment as miners and railroad workers. As larger groups of Chinese immigrants arrived, laws were put in place preventing them from owning land. They mostly lived together in ghettos, individually referred to as "Chinatown". Here the immigrants started their own small businesses, including restaurants and laundry services. By the 19th century, the Chinese community in San Francisco operated sophisticated and sometimes luxurious restaurants patronized mainly by Chinese. The restaurants in smaller towns (mostly owned by Chinese immigrants) served food based on what their customers requested, anything ranging from pork chop sandwiches and apple pie, to beans and eggs. Many of these small-town restaurant owners were self-taught family cooks who improvised on different cooking methods and whatever ingredients were available. These smaller restaurants were responsible for developing American Chinese cuisine, where the food was modified to suit a more American palate. First catering to miners and railroad workers, they established new eateries in towns where Chinese food was completely unknown, adapting local ingredients and catering to their customers' tastes. Even though the new flavors and dishes meant they were not strictly Chinese cuisine, these Chinese restaurants have been cultural ambassadors to Americans. Chinese restaurants in the United States began during the California Gold Rush, which brought twenty to thirty thousand immigrants across from the Canton (Guangdong) region of China. By 1850, there were five Chinese restaurants in San Francisco. Soon after, significant amounts of food were being imported from China to America's west coast. 
The trend spread steadily eastward with the growth of the American railways, particularly to New York City. The Chinese Exclusion Act allowed merchants to enter the country, and in 1915, restaurant owners became eligible for merchant visas. This fueled the opening of Chinese restaurants as an immigration vehicle. At one recent count, the United States had 46,700 Chinese restaurants. Along the way, cooks adapted southern Chinese dishes such as chop suey and developed a style of Chinese food not found in China. Restaurants (along with Chinese laundries) provided an ethnic niche for small businesses at a time when Chinese people were excluded from most jobs in the wage economy by ethnic discrimination or lack of language fluency. By the 1920s, this cuisine, particularly chop suey, became popular among middle-class Americans. After World War II, however, it began to be dismissed as not "authentic"; late 20th-century tastes have been more accommodating. By this time it had become evident that Chinese restaurants no longer catered mainly to Chinese customers. Chinese American restaurants played a key role in ushering in the era of take-out and delivery food in America. In New York City, delivery was pioneered in the 1970s by "Empire Szechuan Gourmet Franchise", which hired Taiwanese students studying at Columbia University to do the work. Chinese American restaurants were among the first restaurants to use picture menus. Beginning in the 1950s, Taiwanese immigrants replaced Cantonese immigrants as the primary labor force in American Chinese restaurants. These immigrants expanded American Chinese cuisine beyond Cantonese cuisine to encompass dishes from many different regions of China as well as Japanese-inspired dishes. Taiwanese immigration largely ended in the 1990s due to an economic boom and democratization in Taiwan. From the 1990s onward, immigrants from China once again made up the majority of cooks in American Chinese restaurants.
A consequential component of Chinese emigration has been of illegal origin, most notably Fuzhou people from Fujian Province and Wenzhounese from Zhejiang Province in mainland China, specifically destined to work in Chinese restaurants in New York City beginning in the 1980s. Adapting Chinese cooking techniques to local produce and tastes has led to the development of American Chinese cuisine. Many of the Chinese restaurant menus in the U.S. are printed in Chinatown, Manhattan, which has a strong Chinese American demographic. In 2011, the Smithsonian National Museum of American History displayed some of the historical background and cultural artifacts of American Chinese cuisine in its exhibit entitled "Sweet & Sour: A Look at the History of Chinese Food in the United States". American Chinese food builds on styles and food habits brought from the southern province of Guangdong, often from the Toisan district, the origin of most Chinese immigration before the closure of immigration from China in 1924. These Chinese families developed new styles and used readily available ingredients, especially in California. The type of Chinese American cooking served in restaurants was different from the foods eaten in Chinese American homes. Of the various regional cuisines in China, Cantonese cuisine has been the most influential in the development of American Chinese food. One common difference is treating vegetables as a side dish or garnish, while traditional cuisines of China emphasize vegetables; this can be seen in the use of carrots and tomatoes. Cuisine in China makes frequent use of Asian leaf vegetables like bok choy and kai-lan and puts a greater emphasis on fresh meat and seafood. Stir frying, pan frying, and deep frying tend to be the most common Chinese cooking techniques used in American Chinese cuisine, all easily done using a wok (a Chinese frying pan with bowl-like features which accommodates very high temperatures).
The food also has a reputation for high levels of MSG to enhance the flavor. Market forces and customer demand have encouraged many restaurants to offer "MSG Free" or "No MSG" menus, or to omit this ingredient on request. American Chinese cuisine makes use of ingredients not native to and very rarely used in China. One such example is the common use of Western broccoli instead of Chinese broccoli (gai-lan) in American Chinese cuisine. Occasionally, Western broccoli is given its own distinct Cantonese name so as not to confuse the two styles of broccoli; among Chinese speakers, however, it is typically understood that one is referring to the leafy vegetable unless otherwise specified. This is also the case with the words for carrot ("luo buo", "lo baak", or "hong luo buo", "hong" meaning "red") and onion ("yang cong"). "Lo baak" in Cantonese can refer to several types of rod-shaped root vegetable, including carrot, daikon, and green radish, or serve as an umbrella term for all of them. The orange Western carrot is known in some areas of China as "foreign radish" (or more properly "hung lo baak" in Cantonese, "hung" meaning "red"). When the word for onion, "cong", is used, it is understood that one is referring to green onions (otherwise known to English speakers as scallions or spring onions). The larger, many-layered onion bulb common in the United States is called "yang cong", which translates as "foreign onion". These names make it evident that the American broccoli, carrot, and onion are not indigenous to China, and are therefore less common in the traditional cuisines of China. Egg fried rice in American Chinese cuisine is also prepared differently, with more soy sauce added for more flavor, whereas traditional egg fried rice uses less soy sauce. Some food styles, such as dim sum, were also modified to fit American palates, such as added batter for fried dishes and extra soy sauce.
Salads containing raw or uncooked ingredients are rare in traditional Chinese cuisine, as are Japanese-style sushi and sashimi. However, an increasing number of American Chinese restaurants, including some upscale establishments, have started to offer these items in response to customer demand. Ming Tsai, the owner of the Blue Ginger restaurant in Wellesley, Massachusetts, and host of the PBS culinary show "Simply Ming", said that American Chinese restaurants typically try to have food representing three to five regions of China at one time, have chop suey, or have "fried vegetables and some protein in a thick sauce", "eight different sweet and sour dishes", or "a whole page of 20 different chow meins or fried rice dishes". Tsai said, "Chinese-American cuisine is 'dumbed-down' Chinese food. It's adapted... to be blander, thicker and sweeter for the American public". Most American Chinese establishments cater to non-Chinese customers with menus written in English or containing pictures. If separate Chinese-language menus are available, they typically feature items such as liver, chicken feet, or other meat dishes that might deter American customers. In Chinatown, Manhattan, the restaurants were known for having a "phantom" menu with food preferred by ethnic Chinese but believed to be disliked by non-Chinese Americans. A number of dishes appear regularly on American Chinese restaurant menus. Authentic restaurants with Chinese-language menus may offer "yellow-hair chicken", essentially a free-range chicken, as opposed to typical American mass-farmed chicken. Yellow-hair chicken is valued for its flavor, but needs to be cooked properly to be tender due to its lower fat and higher muscle content. This dish usually does not appear on the English-language menu. Dau miu is a Chinese vegetable that has become popular since the early 1990s, and now not only appears on English-language menus, usually as "pea shoots", but is often served by upscale non-Asian restaurants as well.
Originally it was only available during a few months of the year, but it is now grown in greenhouses and is available year-round. The New York metropolitan area is home to the largest Chinese population outside of Asia, constituting the largest metropolitan Asian American group in the United States and the largest Asian-national metropolitan diaspora in the Western Hemisphere. The Chinese American population of the New York City metropolitan area was an estimated 893,697 as of 2017, and given the New York metropolitan area's status as the leading gateway for Chinese immigrants to the United States, greater than San Francisco and Los Angeles combined, all popular styles of regional Chinese cuisine have commensurately become ubiquitously accessible in New York City, including Hakka, Taiwanese, Shanghainese, Hunanese, Szechuan, Cantonese, Fujianese, Xinjiang, Zhejiang, and Korean Chinese cuisine. Even the relatively obscure Dongbei style of cuisine indigenous to Northeast China is now available in Flushing, Queens, as are Mongolian cuisine and Uyghur cuisine. The availability of the regional variations of Chinese cuisine originating from throughout the different provinces of China is most apparent in the city's Chinatowns in Queens, particularly the Flushing Chinatown (法拉盛華埠), but is also notable in the city's Chinatowns in Brooklyn and Manhattan. Kosher preparation of Chinese food is also widely available in New York City, given the metropolitan area's large Jewish and particularly Orthodox Jewish populations. The perception that American Jews eat at Chinese restaurants on Christmas Day is documented in media as a common stereotype with a basis in fact. The tradition may have arisen from the lack of other open restaurants on Christmas Day, the close proximity of Jewish and Chinese immigrants to each other in New York City, and the absence in Chinese cuisine of dairy foods combined with meat.
Kosher Chinese food is usually prepared in New York City, as well as in other large cities with Orthodox Jewish neighborhoods, under strict rabbinical supervision as a prerequisite for kosher certification. Chinese populations in Los Angeles represent at least 21 of the 34 provincial-level administrative units of China, making Greater Los Angeles home to a diverse population of Chinese in the United States. Chinese American cuisine in the Greater Los Angeles area is concentrated in Chinese ethnoburbs rather than traditional Chinatowns. The oldest Chinese ethnoburb is Monterey Park, considered to be the nation's first suburban Chinatown. Although Chinatown in Los Angeles is still a significant commercial center for Chinese immigrants, the majority have relocated to the San Gabriel Valley, stretching from Monterey Park into the cities of Alhambra, San Gabriel, Rosemead, San Marino, South Pasadena, West Covina, Walnut, City of Industry, Diamond Bar, Arcadia, and Temple City. The Valley Boulevard corridor is the main artery of Chinese restaurants in the San Gabriel Valley. Another hub with a significant Chinese population is Irvine (Orange County). More than 525,000 Asian Americans live in the San Gabriel Valley alone, with over 67% of them foreign-born. The valley has become a brand-name tourist destination famous in China. Of the ten cities in the United States with the highest proportions of Chinese Americans, the top eight are located in the San Gabriel Valley, making it one of the largest concentrated hubs for Chinese Americans in North America. Regional styles of Chinese cuisine represented there include Beijing, Chengdu, Chongqing, Dalian, Hangzhou, Hong Kong, Hunan, Mongolian hot pot, Nanjing, Shanghai, Shanxi, Shenyang, Wuxi, Xinjiang, Yunnan, and Wuhan. Since the early 1990s, many American Chinese restaurants influenced by California cuisine have opened in the San Francisco Bay Area.
The trademark dishes of American Chinese cuisine remain on the menu, but there is more emphasis on fresh vegetables, and the selection is vegetarian-friendly. This new cuisine has exotic ingredients like mangos and portobello mushrooms. Brown rice is often offered as an alternative to white rice. Some restaurants substitute grilled wheat-flour tortillas for the rice pancakes in mu shu dishes. This occurs even in some restaurants that would not otherwise be identified as California Chinese, both the more Westernized places and the more authentic places. One Mexican bakery sells some restaurants thinner tortillas made for use with mu shu. Mu shu purists do not always react positively to this trend. In addition, many restaurants serving more native-style Chinese cuisines exist, due to the high numbers and proportion of ethnic Chinese in the San Francisco Bay Area. Restaurants specializing in Cantonese, Sichuanese, Hunanese, Northern Chinese, Shanghainese, Taiwanese, and Hong Kong traditions are widely available, as are more specialized restaurants such as seafood restaurants, Hong Kong-style diners and cafes, also known as "cha chaan teng", dim sum teahouses, and hot pot restaurants. Many Chinatown areas also feature Chinese bakeries, boba milk tea shops, roasted meat, vegetarian cuisine, and specialized dessert shops. Chop suey is not widely available in San Francisco, and the area's chow mein is different from Midwestern chow mein. Chinese cuisine in Boston reflects a mélange of influential factors. The growing Boston Chinatown accommodates Chinese-owned bus lines shuttling an increasing number of passengers to and from the numerous Chinatowns in New York City, and this has led to some commonalities between the local Chinese cuisine and Chinese food in New York. A large Fujianese immigrant population has made its home in Boston, leading to Fuzhou cuisine being readily available there.
An increasing Vietnamese population has also been exerting an influence on Chinese cuisine in Greater Boston. Finally, innovative dishes incorporating chow mein and chop suey as well as locally farmed produce and regionally procured seafood ingredients are found in Chinese as well as non-Chinese food in and around Boston. Joyce Chen introduced northern Chinese (Mandarin) and Shanghainese dishes to Boston in the 1950s, including Peking duck, moo shu pork, hot and sour soup, and potstickers, which she called "Peking Ravioli" or "Ravs". Her restaurants would be frequented by early workers on the ARPANET, John Kenneth Galbraith, James Beard, Julia Child, Henry Kissinger, Beverly Sills, and Danny Kaye. A former Harvard University president called her eating establishment "not merely a restaurant, but a cultural exchange center". The evolving American Chinese scene in Philadelphia exhibits commonalities with the Chinese cuisine scenes in both New York City and Boston. Similarly to Boston, Philadelphia is experiencing significant Chinese immigration from New York City, 95 miles to the north, and from China, the top country of birth by a significant margin sending immigrants to Philadelphia. There is a growing Fujianese community in Philadelphia as well, and Fuzhou cuisine is readily available in the Philadelphia Chinatown. Also like Boston, the emerging Vietnamese cuisine scene in Philadelphia is contributing to the milieu of Chinese cuisine, with some Chinese-American restaurants adopting Vietnamese influences or recipes. Hawaiian-Chinese food developed somewhat differently from Chinese cuisine in the continental United States. Owing to the diversity of Pacific ethnicities in Hawaii and the history of the Chinese influence in Hawaii, resident Chinese cuisine forms a component of the cuisine of Hawaii, which is a fusion of different culinary traditions. Some Chinese dishes are typically served as part of plate lunches in Hawaii. 
The names of foods are different as well, such as "manapua", from the Hawaiian contraction of "mea ono pua'a" or "delicious pork item", derived from the dim sum "bao", though the meat is not necessarily pork. Many American films (for example, "The Godfather", "Ghostbusters", "Crossing Delancey", "Paid in Full", and "Inside Out") involve scenes where Chinese take-out food is eaten from oyster pails; the "consistent choice of cuisine in all these cases, however, might just be an indicator of its popularity". A running gag in "Dallas" is Cliff Barnes' fondness for inexpensive Chinese take-out food, as opposed to his nemesis J. R. Ewing frequenting fine restaurants. American television series and films that feature Chinese restaurants as a setting include "Seinfeld" (particularly the episode "The Chinese Restaurant"), "Year of the Dragon", "Lethal Weapon 4", "Mickey Blue Eyes", "Rush Hour 2", and "Men in Black 3". In most cases it is not an actual restaurant but a movie set that typifies the stereotypical American Chinese eatery, featuring "paper lanterns and intricate woodwork", with "numerous fish tanks and detailed [red] wallpaper [with gold designs]" and "golden dragons", plus "hanging ducks in the window".
https://en.wikipedia.org/wiki?curid=1558
Arthur Aikin Arthur Aikin, FLS, FGS (19 May 1773 – 15 April 1854) was an English chemist, mineralogist and scientific writer, and a founding member of the Chemical Society (now the Royal Society of Chemistry). He became the Society's first Treasurer in 1841, and later its second President. He was born at Warrington, Lancashire, into a distinguished literary family of prominent Unitarians. The best known of these was his paternal aunt, Anna Letitia Barbauld, a woman of letters who wrote poetry and essays as well as early children's literature. His father, Dr John Aikin, was a medical doctor, historian, and author. His grandfather, also called John (1713–1780), was a Unitarian scholar and theological tutor, closely associated with Warrington Academy. His sister Lucy (1781–1864) was a historical writer. Their brother Charles was adopted by their famous aunt and brought up as their cousin. Arthur Aikin studied chemistry under Joseph Priestley in the New College at Hackney, and gave attention to the practical applications of the science. In early life he was briefly a Unitarian minister. Aikin lectured on chemistry at Guy's Hospital for thirty-two years. He was President of the British Mineralogical Society from 1801 until 1806, when the Society merged with the Askesian Society. From 1803 to 1808 he was editor of the "Annual Review". In 1805 Aikin also became a Proprietor of the London Institution, which was officially founded in 1806. He was one of the founders of the Geological Society of London in 1807, was its honorary secretary in 1812–1817, and also gave lectures there in 1813 and 1814. He contributed papers on the Wrekin and the Shropshire coalfield, among others, to the transactions of that society. His "Manual of Mineralogy" was published in 1814. Later he became the paid Secretary of the Society of Arts and was subsequently elected a Fellow.
He was a founder of the Chemical Society of London in 1841, serving as its first Treasurer and, between 1843 and 1845, as its second President. To support himself, outside of his work with the British Mineralogical Society, the London Institution and the Geological Society, Aikin worked as a writer, translator and lecturer to the public and to medical students at Guy's Hospital. His writing and journalism were useful for publicising foreign scientific news to the wider British public. He was also a member of the Linnean Society and in 1820 joined the Institution of Civil Engineers. He was highly esteemed as a man of sound judgement and wide knowledge. Aikin never married, and died at Hoxton in London in 1854. For "Rees's Cyclopædia" he wrote articles on chemistry, geology and mineralogy, though which articles he contributed is not known.
https://en.wikipedia.org/wiki?curid=1563
Ailanthus Ailanthus (derived from "ailanto", an Ambonese word probably meaning "tree of the gods" or "tree of heaven") is a genus of trees belonging to the family Simaroubaceae, in the order Sapindales (formerly Rutales or Geraniales). The genus is native from east Asia south to northern Australasia. The number of living species is disputed, with some authorities accepting up to ten species, while others accept six or fewer. There is a good fossil record of "Ailanthus", with many species names based on their geographic occurrence, but almost all of these have very similar morphology and have been grouped as a single species among the three species recognized. A silk-spinning moth, the ailanthus silkmoth ("Samia cynthia"), lives on "Ailanthus" leaves and yields a silk more durable and cheaper than mulberry silk, but inferior to it in fineness and gloss. This moth has been introduced to the eastern United States and is common near many towns; it is about 12 cm across, with angulated wings, and olive brown in color with white markings. Other Lepidoptera whose larvae feed on "Ailanthus" include "Endoclita malabaricus".
https://en.wikipedia.org/wiki?curid=1564
Aimoin Aimoin of Fleury, French chronicler, was born at Villefranche-de-Longchat about 960, and in early life entered the monastery of Fleury, where he became a monk and passed the greater part of his life. Between c. 980 and 985 Aimoin wrote about St. Benedict at the Abbey of Fleury-sur-Loire. His chief work is a "Historia Francorum", or "Libri V. de Gestis Francorum", which deals with the history of the Franks from the earliest times to 653, and was continued by other writers until the middle of the twelfth century. It was much in vogue during the Middle Ages, but its historical value is now regarded as slight. It was edited by G. Waitz and published in the "Monumenta Germaniae Historica: Scriptores", Band xxvi (Hanover and Berlin, 1826–1892). In 1004 he also wrote a "Vita Abbonis, abbatis Floriacensis", the last of a series of lives of the abbots of Fleury, all of which, except the life of Abbo, have been lost. This was published by J. Mabillon in the "Acta sanctorum ordinis sancti Benedicti" (Paris, 1668–1701). Aimoin's third work was the composition of books ii and iii of the "Miracula sancti Benedicti", the first book of which was written by another monk of Fleury named Adrevald. This also appears in the "Acta sanctorum".
https://en.wikipedia.org/wiki?curid=1565
Akkadian Empire The Akkadian Empire was the first ancient empire of Mesopotamia, centered in the city of Akkad and its surrounding region. The empire united Akkadian and Sumerian speakers under one rule. The Akkadian Empire exercised influence across Mesopotamia, the Levant, and Anatolia, sending military expeditions as far south as Dilmun and Magan (modern Bahrain and Oman) in the Arabian Peninsula. During the 3rd millennium BC, a cultural symbiosis developed between the Sumerians and the Akkadians, which included widespread bilingualism. Akkadian, an East Semitic language, gradually replaced Sumerian as a spoken language somewhere between the 3rd and the 2nd millennia BC (the exact dating being a matter of debate). The Akkadian Empire reached its political peak between the 24th and 22nd centuries BC, following the conquests by its founder Sargon of Akkad. Under Sargon and his successors, the Akkadian language was briefly imposed on neighboring conquered states such as Elam and Gutium. Akkad is sometimes regarded as the first empire in history, though the meaning of this term is not precise, and there are earlier Sumerian claimants. After the fall of the Akkadian Empire, the people of Mesopotamia eventually coalesced into two major Akkadian-speaking nations: Assyria in the north and, a few centuries later, Babylonia in the south. The Bible mentions Akkad in Genesis 10:10–12 as one of the cities of Nimrod's kingdom. Nimrod's historical identity is unknown or debated, but Nimrod has been identified as Sargon of Akkad by some, and others have compared him with the legendary Gilgamesh, founder of Uruk. Today, scholars have documented some 7,000 texts from the Akkadian period, written in both Sumerian and Akkadian. Many later texts from the successor states of Assyria and Babylonia also deal with the Akkadian Empire.
Understanding of the Akkadian Empire continues to be hampered by the fact that its capital Akkad has not yet been located, despite numerous attempts. Precise dating of archaeological sites is hindered by the fact that there are no clear distinctions between artifact assemblages thought to stem from the preceding Early Dynastic period and those thought to be Akkadian. Likewise, material that is thought to be Akkadian continues to be in use into the Ur III period. Many of the more recent insights on the Akkadian Empire have come from excavations in the Upper Khabur area in modern northeastern Syria, which was to become a part of Assyria after the fall of Akkad. For example, excavations at Tell Mozan (ancient Urkesh) brought to light a sealing of Tar'am-Agade, a previously unknown daughter of Naram-Sin, who was possibly married to an unidentified local "endan" (ruler). The excavators at nearby Tell Leilan (ancient Shekhna/Shubat-Enlil) have used the results from their investigations to argue that the Akkadian Empire came to an end due to a sudden drought, the so-called 4.2-kiloyear event. The impact of this climate event on Mesopotamia in general, and on the Akkadian Empire in particular, continues to be hotly debated. Excavation at the modern site of Tell Brak has suggested that the Akkadians rebuilt a city ("Brak" or "Nagar") on this site for use as an administrative center. The city included two large buildings, including a complex with temple, offices, courtyard, and large ovens. The Akkadian Period is dated somewhat differently according to the middle chronology and the short chronology timelines of the Ancient Near East. It was preceded by the Early Dynastic Period of Mesopotamia (ED) and succeeded by the Ur III Period, although both transitions are blurry.
For example, it is likely that the rise of Sargon of Akkad coincided with the late ED Period and that the final Akkadian kings ruled simultaneously with the Gutian kings alongside rulers at the city-states of Uruk and Lagash. The Akkadian Period is contemporary with EB IV (in Israel), EB IVA and EJ IV (in Syria), and EB IIIB (in Turkey). The relative order of Akkadian kings is clear, but the absolute dates of their reigns are approximate (as with all dates prior to the Late Bronze Age collapse, "c." 1200 BC). The Akkadian Empire takes its name from the region and the city of Akkad, both of which were localized in the general confluence area of the Tigris and Euphrates Rivers. Although the city of Akkad has not yet been identified on the ground, it is known from various textual sources. Among these is at least one text predating the reign of Sargon. Together with the fact that the name Akkad is of non-Akkadian origin, this suggests that the city of Akkad may have already been occupied in pre-Sargonic times. Sargon of Akkad defeated and captured Lugal-zage-si in the Battle of Uruk and conquered his empire. The earliest records in the Akkadian language date to the time of Sargon. Sargon was claimed to be the son of La'ibum or Itti-Bel, a humble gardener, and possibly a hierodule, or priestess to Ishtar or Inanna. One legend related of Sargon in Assyrian times concerns his birth and humble origins; later claims made on his behalf were that his mother was an "entu" priestess (high priestess). The claims might have been made to ensure a pedigree of nobility, since only a highly placed family could achieve such a position. Originally a cupbearer (Rabshakeh) to a king of Kish with a Semitic name, Ur-Zababa, Sargon thus became a gardener, responsible for the task of clearing out irrigation canals. The royal cupbearer at this time was in fact a prominent political position, close to the king and with various high-level responsibilities not suggested by the title of the position itself.
This gave him access to a disciplined corps of workers, who also may have served as his first soldiers. Displacing Ur-Zababa, Sargon was crowned king, and he entered upon a career of foreign conquest. Four times he invaded Syria and Canaan, and he spent three years thoroughly subduing the countries of "the west" to unite them with Mesopotamia "into a single empire". However, Sargon took this process further, conquering many of the surrounding regions to create an empire that reached westward as far as the Mediterranean Sea and perhaps Cyprus ("Kaptara"); northward as far as the mountains (a later Hittite text asserts he fought the Hattian king Nurdaggal of Burushanda, well into Anatolia); eastward over Elam; and as far south as Magan (Oman) — a region over which he reigned for purportedly 56 years, though only four "year-names" survive. He consolidated his dominion over his territories by replacing the earlier opposing rulers with noble citizens of Akkad, his native city where loyalty would thus be ensured. Trade extended from the silver mines of Anatolia to the lapis lazuli mines in modern Afghanistan, the cedars of Lebanon and the copper of Magan. This consolidation of the city-states of Sumer and Akkad reflected the growing economic and political power of Mesopotamia. The empire's breadbasket was the rain-fed agricultural system of Assyria and a chain of fortresses was built to control the imperial wheat production. Images of Sargon were erected on the shores of the Mediterranean, in token of his victories, and cities and palaces were built at home with the spoils of the conquered lands. Elam and the northern part of Mesopotamia (Assyria/Subartu) were also subjugated, and rebellions in Sumer were put down. Contract tablets have been found dated in the years of the campaigns against Canaan and against Sarlak, king of Gutium. 
He also boasted of having subjugated the "four-quarters": the lands surrounding Akkad to the north (Assyria), the south (Sumer), the east (Elam), and the west (Martu). Some of the earliest historiographic texts (ABC 19, 20) suggest he rebuilt the city of Babylon ("Bab-ilu") in its new location near Akkad. Sargon, throughout his long life, showed special deference to the Sumerian deities, particularly Inanna (Ishtar), his patroness, and Zababa, the warrior god of Kish. He called himself "the anointed priest of Anu" and "the great ensi of Enlil", and his daughter, Enheduanna, was installed as priestess to Nanna at the temple in Ur. Troubles multiplied toward the end of his reign. A later Babylonian text refers to his campaign in Elam, where he defeated a coalition army led by the king of Awan and forced the vanquished to become his vassals. Shortly afterwards another revolt took place, which Sargon crushed even in his old age. These difficulties broke out again in the reign of his sons: revolts erupted during the nine-year reign of Rimush (2278–2270 BC), who fought hard to retain the empire and was successful until he was assassinated by some of his own courtiers. According to his inscriptions, he faced widespread revolts and had to reconquer the cities of Ur, Umma, Adab, Lagash, Der, and Kazallu from rebellious "ensis". Rimush introduced mass slaughter and large-scale destruction of the Sumerian city-states, and maintained meticulous records of his destructions. Most of the major Sumerian cities were destroyed, and Sumerian human losses were enormous. Rimush's elder brother, Manishtushu (2269–2255 BC), succeeded him. The latter seems to have fought a sea battle against 32 kings who had gathered against him and took control over their pre-Arab country, consisting of modern-day United Arab Emirates and Oman. Despite the success, like his brother he seems to have been assassinated in a palace conspiracy. 
Manishtushu's son and successor, Naram-Sin (2254–2218 BC), due to vast military conquests, assumed the imperial title "King Naram-Sin, king of the four-quarters" ("Lugal Naram-Sîn, Šar kibrat 'arbaim"), the four-quarters being a reference to the entire world. He was also, for the first time in Sumerian culture, addressed as "the god (Sumerian = DINGIR, Akkadian = "ilu") of Agade" (Akkad), in contrast to the previous religious belief that kings were only representatives of the people towards the gods. He also faced revolts at the start of his reign, but quickly crushed them. Naram-Sin also recorded the Akkadian conquest of Ebla as well as of Armanum and its king. The location of Armanum is debated: it is sometimes identified with a Syrian kingdom mentioned in the tablets of Ebla as Armi, whose location is also debated; while historian Adelheid Otto identifies it with the Citadel of Bazi at the Tell Banat complex on the Euphrates River between Ebla and Tell Brak, others like Wayne Horowitz identify it with Aleppo. Further, while most scholars place Armanum in Syria, Michael C. Astour believes it to be located north of the Hamrin Mountains in northern Iraq. To better police Syria, he built a royal residence at Tell Brak, a crossroads at the heart of the Khabur River basin of the Jezirah. Naram-Sin campaigned against Magan, which also revolted; Naram-Sin "marched against Magan and personally caught Mandannu, its king", and he installed garrisons to protect the main roads. The chief threat seemed to be coming from the northern Zagros Mountains, from the Lulubis and the Gutians. A campaign against the Lullubi led to the carving of the "Victory Stele of Naram-Suen", now in the Louvre. Hittite sources claim Naram-Sin of Akkad even ventured into Anatolia, battling the Hittite and Hurrian kings Pamba of Hatti, Zipani of Kanesh, and 15 others. 
This newfound Akkadian wealth may have been based upon benign climatic conditions, huge agricultural surpluses and the confiscation of the wealth of other peoples. The economy was highly planned. Grain was cleaned, and rations of grain and oil were distributed in standardized vessels made by the city's potters. Taxes were paid in produce and in labour on public works, including city walls, temples, irrigation canals and waterways, producing huge agricultural surpluses. In later Assyrian and Babylonian texts, the name "Akkad", together with "Sumer", appears as part of the royal title, as in the Sumerian LUGAL KI-EN-GI KI-URI or Akkadian "Šar māt Šumeri u Akkadi", translating to "king of Sumer and Akkad". This title was assumed by the king who seized control of Nippur, the intellectual and religious center of southern Mesopotamia. During the Akkadian period, the Akkadian language became the lingua franca of the Middle East, and was officially used for administration, although the Sumerian language remained as a spoken and literary language. The spread of Akkadian stretched from Syria to Elam, and even the Elamite language was temporarily written in Mesopotamian cuneiform. Akkadian texts later found their way to far-off places, from Egypt (in the Amarna Period) and Anatolia, to Persia (Behistun). The submission of some Sumerian rulers to the Akkadian Empire is recorded in the seal inscriptions of Sumerian rulers such as Lugal-ushumgal, governor ("ensi") of Lagash ("Shirpula"), circa 2230–2210 BC. Several inscriptions of Lugal-ushumgal are known, particularly seal impressions, which refer to him as governor of Lagash and at the time a vassal ("arad", "servant" or "slave") of Naram-Sin, as well as of his successor Shar-kali-sharri. Lugal-ushumgal can be considered a collaborator of the Akkadian Empire, as was Meskigal, ruler of Adab. 
Later, however, Lugal-ushumgal was succeeded by Puzer-Mama who, as Akkadian power waned, achieved independence from Shar-Kali-Sharri, assuming the title of "King of Lagash" and starting the illustrious Second Dynasty of Lagash. The empire of Akkad fell, perhaps in the 22nd century BC, within 180 years of its founding, ushering in a "Dark Age" with no prominent imperial authority until the Third Dynasty of Ur. The region's political structure may have reverted to the "status quo ante" of local governance by city-states. Shu-turul appears to have restored some centralized authority; however, he was unable to prevent the empire from eventually collapsing outright under the invasion of barbarian peoples from the Zagros Mountains known as the Gutians. Little is known about the Gutian period, or how long it endured. Cuneiform sources suggest that the Gutians' administration showed little concern for maintaining agriculture, written records, or public safety; they reputedly released all farm animals to roam about Mesopotamia freely and soon brought about famine and rocketing grain prices. The Sumerian king Ur-Nammu (2112–2095 BC) cleared the Gutians from Mesopotamia during his reign. The "Sumerian King List" describes the Akkadian Empire after the death of Shar-kali-sharri; however, there are no known year-names or other archaeological evidence verifying any of these later kings of Akkad or Uruk, apart from several artefacts referencing king Dudu of Akkad and Shu-turul. The named kings of Uruk may have been contemporaries of the last kings of Akkad, but in any event could not have been very prominent. The period between c. 2112 BC and 2004 BC is known as the Ur III period. Documents again began to be written in Sumerian, although Sumerian was becoming a purely literary or liturgical language, much as Latin later would be in Medieval Europe. 
One explanation for the end of the Akkadian Empire is simply that the Akkadian dynasty could not maintain its political supremacy over other independently powerful city-states. One theory associates the regional decline at the end of the Akkadian period (and of the First Intermediate Period following the Old Kingdom in Ancient Egypt) with rapidly increasing aridity and failing rainfall in the region of the Ancient Near East, caused by a global centennial-scale drought. Peter B. deMenocal has shown that "there was an influence of the North Atlantic Oscillation on the streamflow of the Tigris and Euphrates at this time, which led to the collapse of the Akkadian Empire". More recent analysis of simulations from the HadCM3 climate model indicates that there was a shift to a more arid climate on a timescale that is consistent with the collapse of the empire. Excavation at Tell Leilan suggests that this site was abandoned soon after the city's massive walls were constructed, its temple rebuilt and its grain production reorganized. The debris, dust, and sand that followed show no trace of human activity. Soil samples show fine wind-blown sand, no trace of earthworm activity, reduced rainfall and indications of a drier and windier climate. Evidence shows that skeleton-thin sheep and cattle died of drought, and up to 28,000 people abandoned the site, seeking wetter areas elsewhere. Tell Brak shrank in size by 75%. Trade collapsed. Nomadic herders such as the Amorites moved their herds closer to reliable water supplies, bringing them into conflict with Akkadian populations. This climate-induced collapse seems to have affected the whole of the Middle East, and to have coincided with the collapse of the Egyptian Old Kingdom. This collapse of rain-fed agriculture in the Upper Country meant the loss to southern Mesopotamia of the agrarian subsidies which had kept the Akkadian Empire solvent. 
Water levels within the Tigris and Euphrates fell 1.5 meters beneath the level of 2600 BC, and although they stabilized for a time during the following Ur III period, rivalries between pastoralists and farmers increased. Attempts were undertaken to prevent the former from herding their flocks in agricultural lands, such as the building of a wall known as the "Repeller of the Amorites" between the Tigris and Euphrates under the Ur III ruler Shu-Sin. Such attempts led to increased political instability; meanwhile, a severe depression occurred, re-establishing demographic equilibrium with the less favorable climatic conditions. Richard Zettler has critiqued the drought theory, observing that the chronology of the Akkadian Empire is very uncertain and that the available evidence is not sufficient to show its economic dependence on the northern areas excavated by Weiss and others. He also criticizes Weiss for taking Akkadian writings literally to describe certain catastrophic events. According to Joan Oates, at Tell Brak the soil "signal" associated with the drought lies below the level of Naram-Sin's palace. However, evidence may suggest a tightening of Akkadian control following the Brak 'event': for example, the construction of the heavily fortified 'palace' itself and the apparent introduction of greater numbers of Akkadian as opposed to local officials, perhaps a reflection of unrest in the countryside of the type that often follows some natural catastrophe. Furthermore, Brak remained occupied and functional after the fall of the Akkadians. In 2019, a study by Hokkaido University on fossil corals in Oman provided evidence that prolonged winter shamal seasons led to the salinization of the irrigated fields; the resulting dramatic decrease in crop production triggered widespread famine and, eventually, the collapse of the ancient Akkadian Empire. The Akkadian government formed a "classical standard" with which all future Mesopotamian states compared themselves. 
Traditionally, the "ensi" was the highest functionary of the Sumerian city-states. In later traditions, one became an "ensi" by marrying the goddess Inanna, legitimising the rulership through divine consent. Initially, the monarchical "lugal" ("lu" = man, "gal" = great) was subordinate to the priestly "ensi" and was appointed at times of trouble, but by later dynastic times it was the "lugal" who had emerged as the preeminent role, having his own "é" (= house) or "palace", independent from the temple establishment. By the time of Mesalim, whichever dynasty controlled the city of Kish was recognised as "šar kiššati" (= king of Kish) and was considered preeminent in Sumer, possibly because this was where the two rivers approached each other, and whoever controlled Kish ultimately controlled the irrigation systems of the other cities downstream. As Sargon extended his conquest from the "Lower Sea" (Persian Gulf) to the "Upper Sea" (Mediterranean), it was felt that he ruled "the totality of the lands under heaven", or "from sunrise to sunset", as contemporary texts put it. Under Sargon, the "ensi"s generally retained their positions, but were seen more as provincial governors. The title "šar kiššati" became recognised as meaning "lord of the universe". Sargon is even recorded as having organised naval expeditions to Dilmun (Bahrain) and Magan, amongst the first organised military naval expeditions in history. Whether he also did so in the case of the Mediterranean, with the kingdom of Kaptara (possibly Cyprus), as claimed in later documents, is more questionable. With Naram-Sin, Sargon's grandson, this went further than with Sargon, with the king not only being called "Lord of the Four-Quarters (of the Earth)", but also elevated to the ranks of the "dingir" (= gods), with his own temple establishment. Previously a ruler could, like Gilgamesh, become divine after death, but the Akkadian kings, from Naram-Sin onward, were considered gods on earth in their lifetimes. 
Their portraits showed them of larger size than mere mortals and at some distance from their retainers. One strategy adopted by both Sargon and Naram-Sin, to maintain control of the country, was to install their daughters, Enheduanna and Enmenanna respectively, as high priestess to Sin, the Akkadian version of the Sumerian moon deity, Nanna, at Ur, in the extreme south of Sumer; to install sons as provincial "ensi" governors in strategic locations; and to marry their daughters to rulers of peripheral parts of the Empire (Urkesh and Marhashe). A well-documented case of the latter is that of Naram-Sin's daughter Tar'am-Agade at Urkesh. Records at the Brak administrative complex suggest that the Akkadians appointed locals as tax collectors. The population of Akkad, like that of nearly all pre-modern states, was entirely dependent upon the agricultural systems of the region, which seem to have had two principal centres: the irrigated farmlands of southern Iraq, which traditionally had a yield of 30 grains returned for each grain sown, and the rain-fed agriculture of northern Iraq, known as the "Upper Country". Southern Iraq during the Akkadian period seems to have been approaching its modern rainfall level of less than per year, with the result that agriculture was totally dependent upon irrigation. Before the Akkadian period, the progressive salinisation of the soils, produced by poorly drained irrigation, had been reducing yields of wheat in the southern part of the country, leading to the conversion to more salt-tolerant barley growing. Urban populations there had peaked already by 2600 BC, and demographic pressures were high, contributing to the rise of militarism apparent immediately before the Akkadian period (as seen in the Stele of the Vultures of Eannatum). Warfare between city-states had led to a population decline, from which Akkad provided a temporary respite. 
It was this high degree of agricultural productivity in the south that enabled the growth of the highest population densities in the world at this time, giving Akkad its military advantage. The water table in this region was very high and replenished regularly, by winter storms in the headwaters of the Tigris and Euphrates from October to March and by snow-melt from March to July. Flood levels, which had been stable from about 3000 to 2600 BC, had started falling, and by the Akkadian period were a half-meter to a meter lower than recorded previously. Even so, the flat country and weather uncertainties made flooding much more unpredictable than in the case of the Nile; serious deluges seem to have been a regular occurrence, requiring constant maintenance of irrigation ditches and drainage systems. Farmers were recruited into regiments for this work from August to October, a period of food shortage, under the control of city temple authorities, thus acting as a form of unemployment relief. Gwendolyn Leick has suggested that this was Sargon's original employment for the king of Kish, giving him experience in effectively organising large groups of men; a tablet reads, "Sargon, the king, to whom Enlil permitted no rival – 5,400 warriors ate bread daily before him". Harvest was in the late spring and during the dry summer months. Nomadic Amorites from the northwest would pasture their flocks of sheep and goats to graze on the crop residue and be watered from the river and irrigation canals. For this privilege, they would have to pay a tax in wool, meat, milk, and cheese to the temples, who would distribute these products to the bureaucracy and priesthood. In good years, all would go well, but in bad years, wild winter pastures would be in short supply, nomads would seek to pasture their flocks in the grain fields, and conflicts with farmers would result. 
It would appear that the subsidizing of southern populations by the import of wheat from the north of the Empire temporarily overcame this problem, and it seems to have allowed economic recovery and a growing population within this region. As a result, Sumer and Akkad had a surplus of agricultural products but were short of almost everything else, particularly metal ores, timber and building stone, all of which had to be imported. The spread of the Akkadian state as far as the "silver mountain" (possibly the Taurus Mountains), the "cedars" of Lebanon, and the copper deposits of Magan was largely motivated by the goal of securing control over these imports. International trade developed during the Akkadian period. Indus–Mesopotamia relations also seem to have expanded: Sargon of Akkad (circa 2300 or 2250 BC) was the first Mesopotamian ruler to make an explicit reference to the region of Meluhha, which is generally understood as being the Baluchistan or the Indus area. In art, there was a great emphasis on the kings of the dynasty, alongside much that continued earlier Sumerian art. Little architecture remains. In large works and small ones such as seals, the degree of realism was considerably increased, but the seals show a "grim world of cruel conflict, of danger and uncertainty, a world in which man is subjected without appeal to the incomprehensible acts of distant and fearful divinities who he must serve but cannot love. This sombre mood ... remained characteristic of Mesopotamian art..." Akkadian sculpture is remarkable for its fineness and realism, which show a clear advancement compared to the previous period of Sumerian art. The Akkadians used the visual arts as a vehicle of ideology. They developed a new style for cylinder seals, reusing traditional animal decorations but organizing them around inscriptions, which often became central parts of the layout. The figures also became more sculptural and naturalistic. 
New elements were also included, especially in relation to the rich Akkadian mythology. During the 3rd millennium BC, there developed a very intimate cultural symbiosis between the Sumerians and the Akkadians, which included widespread bilingualism. The influence of Sumerian on Akkadian (and vice versa) is evident in all areas, from lexical borrowing on a massive scale to syntactic, morphological, and phonological convergence. This has prompted scholars to refer to Sumerian and Akkadian in the third millennium as a "sprachbund". Akkadian gradually replaced Sumerian as a spoken language somewhere around 2000 BC (the exact dating being a matter of debate), but Sumerian continued to be used as a sacred, ceremonial, literary, and scientific language in Mesopotamia until the 1st century AD. Sumerian literature continued in rich development during the Akkadian period. Enheduanna, the "wife (Sumerian "dam" = high priestess) of Nanna [the Sumerian moon god] and daughter of Sargon" of the temple of Sin at Ur, who lived –2250 BC, is the first poet in history whose name is known. Her known works include hymns to the goddess Inanna, the "Exaltation of Inanna" and "In-nin sa-gur-ra". A third work, the "Temple Hymns", a collection of specific hymns, addresses the sacred temples and their occupants, the deities to whom they were consecrated. The works of this poet are significant because, although they start out using the third person, they shift to the first-person voice of the poet herself, and they mark a significant development in the use of cuneiform. As poet, princess, and priestess, she was a person who, according to William W. Hallo, "set standards in all three of her roles for many succeeding centuries". Later material described how the fall of Akkad was due to Naram-Sin's attack upon the city of Nippur. 
When prompted by a pair of inauspicious oracles, the king sacked the E-kur temple, supposedly protected by the god Enlil, head of the pantheon. As a result of this, eight chief deities of the Anunnaki pantheon were supposed to have come together and withdrawn their support from Akkad. The kings of Akkad were legendary among later Mesopotamian civilizations, with Sargon understood as the prototype of a strong and wise leader, and his grandson Naram-Sin considered the wicked and impious leader ("Unheilsherrscher" in the analysis of Hans Gustav Güterbock) who brought ruin upon his kingdom. A tablet from the period reads: "(From the earliest days) no-one had made a statue of lead, (but) Rimush king of Kish, had a statue of himself made of lead. It stood before Enlil; and it recited his (Rimush's) virtues to the idu of the gods". The copper Bassetki Statue, cast with the lost-wax method, testifies to the high level of skill that craftsmen achieved during the Akkadian period. The empire was bound together by roads, along which there was a regular postal service. Clay seals that took the place of stamps bear the names of Sargon and his son. A cadastral survey seems also to have been instituted, and one of the documents relating to it states that a certain Uru-Malik, whose name appears to indicate his Canaanite origin, was governor of the land of the Amorites, or "Amurru" as the semi-nomadic people of Syria and Canaan were called in Akkadian. It is probable that the first collection of astronomical observations and terrestrial omens was made for a library established by Sargon. The earliest "year names", whereby each year of a king's reign was named after a significant event performed by that king, date from Sargon's reign. Lists of these "year names" henceforth became a calendrical system used in most independent Mesopotamian city-states. In Assyria, however, years came to be named for the annual presiding "limmu" official appointed by the king, rather than for an event.
Ajax the Great Ajax, or Aias (genitive "Aíantos"), is a Greek mythological hero, the son of King Telamon and Periboea, and the half-brother of Teucer. He plays an important role, and is portrayed as a towering figure and a warrior of great courage, in Homer's "Iliad" and in the Epic Cycle, a series of epic poems about the Trojan War. He is also referred to as "Telamonian Ajax" (in Etruscan recorded as "Aivas Tlamunus"), "Greater Ajax", or "Ajax the Great", which distinguishes him from Ajax, son of Oileus (Ajax the Lesser). Ajax is the son of Telamon, who was the son of Aeacus and grandson of Zeus, and his first wife Periboea. He is the cousin of Achilles, and is the elder half-brother of Teucer. His given name is derived from the root of a Greek verb meaning "to lament", translating to "one who laments; mourner". Hesiod, however, includes a story in "The Great Eoiae" that indicates Aias received his name when Heracles prayed to Zeus that a son might be born to Telamon and Eriboea. Zeus sent an eagle ("aietos" – αετός) as a sign. Heracles then bade the parents call their son Aias after the eagle. Many illustrious Athenians, including Cimon, Miltiades, Alcibiades and the historian Thucydides, traced their descent from Ajax. On an Etruscan tomb dedicated to Racvi Satlnei in Bologna (5th century BC) there is an inscription that says "aivastelmunsl", which means "[family] of Telamonian Ajax". In Homer's "Iliad" he is described as of great stature and colossal frame, and as the strongest of all the Achaeans. Known as the "bulwark of the Achaeans", he was trained by the centaur Chiron (who had trained Ajax's father Telamon and Achilles's father Peleus, and would later die of an accidental wound inflicted by Heracles, whom he was at the time training) at the same time as Achilles. He was described as fearless, strong and powerful, but also with a very high level of combat intelligence. Ajax commands his army wielding a huge shield made of seven cow-hides with a layer of bronze. 
Most notably, Ajax is not wounded in any of the battles described in the "Iliad", and, apart from Agamemnon, he is the only principal character on either side who does not receive substantial assistance from any of the gods who take part in the battles, although in Book 13 Poseidon strikes Ajax with his staff, renewing his strength. Unlike Diomedes, Agamemnon, and Achilles, Ajax appears as a mainly defensive warrior, instrumental in the defence of the Greek camp and ships and that of Patroclus' body. When the Trojans are on the offensive, he is often seen covering the retreat of the Achaeans. Significantly, while one of the deadliest heroes in the whole poem, Ajax has no aristeia depicting him on the offensive. In the "Iliad", Ajax is notable for his abundant strength and courage, seen particularly in two fights with Hector. In Book 7, Ajax is chosen by lot to meet Hector in a duel which lasts most of a whole day. Ajax at first gets the better of the encounter, wounding Hector with his spear and knocking him down with a large stone, but Hector fights on until the heralds, acting at the direction of Zeus, call a draw, with the two combatants exchanging gifts: Ajax giving Hector a purple sash and Hector giving Ajax his sharp sword. The second fight between Ajax and Hector occurs when the latter breaks into the Mycenaean camp and fights with the Greeks among the ships. In Book 14, Ajax throws a giant rock at Hector which almost kills him. In Book 15, Hector is restored to his strength by Apollo and returns to attack the ships. Ajax, wielding an enormous spear as a weapon and leaping from ship to ship, holds off the Trojan armies virtually single-handedly. In Book 16, Hector and Ajax duel once again. Hector disarms Ajax (although Ajax is not hurt) and Ajax is forced to retreat, seeing that Zeus is clearly favoring Hector. Hector and the Trojans succeed in burning one Greek ship, the culmination of an assault that almost finishes the war. 
Ajax is responsible for the death of many Trojan lords, including Phorcys. Ajax often fought in tandem with his brother Teucer, known for his skill with the bow. Ajax would wield his magnificent shield, as Teucer stood behind picking off enemy Trojans. Achilles was absent during these encounters because of his feud with Agamemnon. In Book 9, Agamemnon and the other Mycenaean chiefs send Ajax, Odysseus and Phoenix to the tent of Achilles in an attempt to reconcile with the great warrior and induce him to return to the fight. Although Ajax speaks earnestly and is well received, he does not succeed in convincing Achilles. When Patroclus is killed, Hector tries to steal his body. Ajax, assisted by Menelaus, succeeds in fighting off the Trojans and taking the body back with his chariot; however, the Trojans have already stripped Patroclus of Achilles' armor. Ajax's prayer to Zeus to remove the fog that has descended on the battle to allow them to fight or die in the light of day has become proverbial. According to Hyginus, in total, Ajax killed 28 people at Troy. As the "Iliad" comes to a close, Ajax and the majority of other Greek warriors are alive and well. When Achilles dies, killed by Paris (with help from Apollo), Ajax and Odysseus are the heroes who fight against the Trojans to get the body and bury it with his companion, Patroclus. Ajax, with his great shield and spear, manages to recover the body and carry it to the ships, while Odysseus fights off the Trojans. After the burial, each claims Achilles' magical armor, which had been forged on Mount Olympus by the smith-god Hephaestus, for himself as recognition for his heroic efforts. A competition is held to determine who deserves the armor. Ajax argues that because of his strength and the fighting he has done for the Greeks, including saving the ships from Hector, and driving him off with a massive rock, he deserves the armor. 
However, Odysseus proves to be more eloquent, and with the aid of Athena, the council gives him the armor. Ajax, "unconquered" in battle but now furious, becomes crazed and slaughters the Achaeans' herds of captured livestock, believing them to be his enemies through a trick of Athena. Unable to deal with this dual dishonor, he falls upon his own sword, "conquered by his [own] sorrow", and commits suicide. The Belvedere Torso, a marble torso now in the Vatican Museums, is considered to depict Ajax "in the act of contemplating his suicide". In Sophocles' play "Ajax", a famous retelling of Ajax's demise, after the armor is awarded to Odysseus, Ajax feels so insulted that he wants to kill Agamemnon and Menelaus. Athena intervenes and clouds his mind and vision, and he goes to a flock of sheep and slaughters them, imagining they are the Achaean leaders, including Odysseus and Agamemnon. When he comes to his senses, covered in blood, he realizes that what he has done has diminished his honor, and decides that he prefers to kill himself rather than live in shame. He does so with the same sword which Hector gave him when they exchanged presents. From his blood sprang a red flower, as at the death of Hyacinthus, which bore on its leaves the initial letters of his name, "Ai", also expressive of lament. His ashes were deposited in a golden urn on the Rhoetean promontory at the entrance of the Hellespont. Ajax's half-brother Teucer stood trial before his father for not bringing Ajax's body or famous weapons back. Teucer was acquitted of responsibility but found guilty of negligence. He was disowned by his father and was not allowed to return to his home, the island of Salamis off the coast of Athens. 
Homer is somewhat vague about the precise manner of Ajax's death but does ascribe it to his loss in the dispute over Achilles' armor; when Odysseus visits Hades, he begs the soul of Ajax to speak to him, but Ajax, still resentful over the old quarrel, refuses and descends silently back into Erebus. Like Achilles, he is represented (although not by Homer) as living after his death on the island of Leuke at the mouth of the Danube. Ajax, who in the post-Homeric legend is described as the grandson of Aeacus and the great-grandson of Zeus, was the tutelary hero of the island of Salamis, where he had a temple and an image, and where a festival called "Aianteia" was celebrated in his honour. At this festival a couch was set up, on which the panoply of the hero was placed, a practice which recalls the Roman Lectisternium. The identification of Ajax with the family of Aeacus was chiefly a matter which concerned the Athenians, after Salamis had come into their possession, on which occasion Solon is said to have inserted a line in the "Iliad" (2.557–558), for the purpose of supporting the Athenian claim to the island. Ajax then became an Attic hero; he was worshiped at Athens, where he had a statue in the market-place, and the tribe "Aiantis" was named after him. Pausanias also relates that a gigantic skeleton, its kneecap in diameter, appeared on the beach near Sigeion, on the Trojan coast; these bones were identified as those of Ajax. In 2001, Yannos Lolos began excavating a Mycenaean palace near the village of Kanakia on the island of Salamis which he theorized to be the home of the mythological Aiacid dynasty. The multi-story structure covers and had perhaps 30 rooms. The palace appears to have been abandoned at the height of the Mycenaean civilization, roughly the same time the Trojan War may have occurred.
https://en.wikipedia.org/wiki?curid=1568
Alaric I Alaric I ("ruler of all"; 370 (or 375) – 410 AD) was the first king of the Visigoths, reigning from 395 to 410. He ruled the tribe that came to occupy Moesia – territory acquired a couple of decades earlier by a combined force of Goths and Alans in the wake of the dramatic Battle of Adrianople. Alaric began his career under the Gothic soldier Gainas and later joined the Roman army. Once an ally of Rome under the Emperor Theodosius, Alaric helped defeat the Franks. Despite losing many thousands of his men, he received little recognition from Rome for his efforts and left the Roman army disappointed. In 395, he was elected king of the Visigoths and on several occasions marched against Rome. He is best known for his sack of Rome in 410, which marked one of several decisive events in the Western Roman Empire's eventual decline. According to Jordanes, a 6th-century Roman bureaucrat of Gothic origin—who later turned his hand to history—Alaric was born on Peuce Island at the mouth of the Danube Delta in present-day Romania and belonged to the noble Balti dynasty of the Tervingian Goths, though there is no way to verify this claim. Historian Douglas Boin does not make such an unequivocal assessment of Alaric's Gothic heritage and instead claims he came from either the Tervingi or the Greuthungi tribes. When the Goths suffered setbacks against the Huns, they made a mass migration across the Danube and fought a war with Rome. Alaric was probably a child during this period who grew up along Rome's periphery. His upbringing was thus shaped by life along the border of Roman territory, in a region the Romans viewed as a veritable "backwater"; if the Roman poet Ovid's first-century writings from exile are in any way telling, the area along the Danube and Black Sea where Alaric was reared constituted a land of "barbarians" and was among "the most remote in the vast world." 
Alaric's childhood in the Balkans, where the Goths had settled by way of an agreement with Theodosius, was spent in the company of veterans who had fought at the Battle of Adrianople in 378, during which they had annihilated much of the Eastern army and killed Emperor Valens. Imperial campaigns against the Visigoths were conducted until a treaty was reached in 382. This treaty was the first "foedus" on imperial Roman soil and required these semi-autonomous Germanic tribes—among whom Alaric was raised—to supply troops for the Roman army in exchange for peace, access to cultivable land, and freedom from Roman legal strictures. Correspondingly, there was hardly a region along the Roman frontier during Alaric's day without Gothic slaves and servants of one form or another. For several subsequent decades, many Goths like Alaric were "called up into regular units of the eastern field army" while others served as auxiliaries in campaigns led by Theodosius against the western usurpers Magnus Maximus and Eugenius. A new phase in the relationship between the Goths and the empire resulted from the treaty signed in 382, as more and more Goths attained aristocratic rank through their service in the imperial army. Alaric began his military career under the Gothic soldier Gainas and later joined the Roman army. He first appeared as leader of a mixed band of Goths and allied peoples, who invaded Thrace in 391 but were stopped by the half-Vandal Roman general Stilicho. While the Roman poet Claudian dismissed Alaric as "a little-known menace" terrorizing southern Thrace during this time, Alaric's strategic abilities and forces were formidable enough to prevent the Roman Emperor Theodosius from crossing the Maritsa River. By 392, Alaric had entered Roman military service, which coincided with a reduction of hostilities between Goths and Romans. 
In 394, he led a Gothic force that helped Emperor Theodosius defeat the Frankish usurper Arbogast—fighting at the behest of Eugenius—at the Battle of the Frigidus. Despite sacrificing around 10,000 of his men, who had been victims of Theodosius' callous tactical decision to overwhelm the enemy's front lines using Gothic "foederati", Alaric received little recognition from the emperor. Alaric was among the few who survived the protracted and bloody affair. Refused the reward he expected, which included a promotion to the "magisterium" and command of regular Roman units, Alaric mutinied and began to march against Constantinople. On 17 January 395, Theodosius died of an illness, leaving his two young sons Arcadius and Honorius in Stilicho's guardianship. Distinguishing himself among his people, Alaric, who had once dreamed of becoming a Roman general, a rank he acquired first from Constantinople but lost, then regained in Milan (later Ravenna) and lost a second time, became instead king of the Visigoths in 395. According to historian Peter Heather, it is not entirely clear in the sources whether Alaric rose to prominence at the time the Goths revolted following Theodosius's death, or whether he had already risen within his tribe as early as the war against Eugenius. Whatever the circumstances, Jordanes recorded that the new king persuaded his people to "seek a kingdom by their own exertions rather than serve others in idleness." Whether or not Alaric was a member of an ancient Germanic royal clan—claimed by Jordanes and debated by historians—is less important than his emergence as a leader, the first of his kind since Fritigern. Around the time Alaric became king and began asserting his authority, controversy and intrigue erupted between the Eastern and Western halves of the Roman Empire, as General Stilicho attempted to strengthen his position. 
Since Theodosius's death left the Empire divided between his two sons, one taking the eastern and the other the western portion of the Empire, Stilicho sought to exploit the situation. Historian Roger Collins points out that while the rivalry between the two halves of the Empire worked to Alaric's advantage and that of his people, simply being called to authority by the Gothic people did not solve the practicalities of their need for survival. Perhaps with these concerns in mind, Alaric took his Gothic army on what historian Edward James describes as a "pillaging campaign" that began first in the East. Alaric's forces made their way down to Athens and along the Adriatic coast, where he sought to force a new peace upon the Romans. His march in 396 included passing through Thermopylae further into Greece, during which his troops plundered for the next year or so as far south as the mountainous Peloponnese peninsula. Only Stilicho's surprise attack with his western field army (having sailed from Italy) stemmed the plundering, as he pushed Alaric's forces north into Epirus. Despite Stilicho's success, the East Roman official Eutropius—to whom power had passed after the death of Rufinus—expressed outrage at this unsanctioned intervention and, instead of recognizing Stilicho, conferred on Alaric in 397 the title of "magister militum", as he thought the Gothic king more malleable than the ambitious Roman general. After receiving this new rank, Alaric acquired ample gold and grain to grant to his followers, and negotiations were underway for a more permanent settlement. Stilicho's supporters in Milan were outraged at this seeming betrayal; meanwhile, Eutropius was celebrated in 398 with a parade through Constantinople for having achieved victory over the "wolves of the North." Fought more or less to a standstill by Stilicho, the Gothic forces under Alaric were relatively quiet for the next couple of years. 
According to historian Michael Kulikowski, sometime in the spring of 402 Alaric decided to invade Italy, but no sources from antiquity indicate to what purpose. Using Claudian as his source, historian Guy Halsall reports that Alaric's attack actually began in late 401, but since Stilicho was in "Raetia" "dealing with frontier issues" the two did not first confront one another in Italy until 402. Two battles were fought. The first was at Pollentia on Easter Sunday, where Stilicho achieved an impressive victory, taking Alaric's wife and children prisoner, and more significantly, seizing much of the treasure that Alaric had amassed over the previous five years of plundering. Pursuing the retreating forces of Alaric, Stilicho offered to return the prisoners but was refused. The second battle was at Verona, where Alaric was defeated for a second time. Stilicho once again offered Alaric a truce and allowed him to withdraw from Italy. Kulikowski explains this confounding, if not outright conciliatory, behavior by stating, "given Stilicho's cold war with Constantinople, it would have been foolish to destroy as biddable and violent a potential weapon as Alaric might well prove to be". Halsall's observations are similar, as he contends that the Roman general's "decision to permit Alaric's withdrawal into "Pannonia" makes sense if we see Alaric's force entering Stilicho's service, and Stilicho's victory being less total than Claudian would have us believe". Perhaps more revealing is a report from the Greek historian Zosimus—writing half a century later—that indicates an agreement was concluded between Stilicho and Alaric in 405, which suggests Alaric was in "western service at that point", likely stemming from arrangements made back in 402. Between 404 and 405, Alaric remained in one of the four "Pannonian" provinces, from where he could "play East off against West while potentially threatening both". 
Fate was not kind to the Empire, as "Alaric’s return to the north-west Balkans brought only temporary respite to Italy, for in 405 another substantial body of Goths and other barbarians, this time from outside the empire, crossed the middle Danube and advanced into northern Italy, where they plundered the countryside and besieged cities and towns" under their leader Radagaisus. Although the imperial government was struggling to muster enough troops to contain these barbarian invasions, Stilicho managed to stifle the threat posed by the tribes under Radagaisus, when the latter split his forces into three separate groups. Stilicho cornered Radagaisus near Florence and starved the invaders into submission. Meanwhile, Alaric—bestowed with codicils of "magister militum" by Stilicho and now supplied by the West—waited for one side or the other to incite him to action as Stilicho faced further difficulties from more barbarians. Sometime in 406 and into 407, another assemblage of barbarians, consisting primarily of Vandals, Sueves and Alans, crossed the Rhine into Gaul while at the same time a rebellion—under a common soldier named Constantine—occurred in Britain and spread to Gaul. Burdened by so many enemies, Stilicho's position was strained. During this crisis in 407, Alaric again marched on Italy, taking a position in Noricum (modern Austria), where he demanded a sum of 4,000 pounds of gold lest he embark again on a full-scale invasion. At this point, Stilicho, whose earlier efforts to deal with the usurper Constantine had failed, convinced the Western Emperor Honorius and the Roman senate—who begrudgingly agreed—to pay the sum and unleash Alaric's forces on their new enemy. However sensible Stilicho's plan was in the grand scheme of things (he had correctly evaluated the dangers), arguing for Alaric's intervention after having failed to contain the threat to Rome fatally weakened his standing. 
In 408, Western Emperor Honorius ordered the execution of Stilicho and his family, in response to rumors that the general had made a deal with Alaric. Not only was Stilicho murdered, but so too were his loyal supporters and many thousands of barbarian auxiliaries, along with their wives and children; those who remained alive or escaped joined Alaric at Noricum. Rome had made itself more vulnerable as a result, and Alaric now once again made demands on Honorius for an unknown sum of gold and for the return of the civilian dependent hostages belonging to his new followers—if they remained alive. When Alaric was rebuffed, he led his force of around 30,000 men—many newly enlisted and understandably motivated—on a march toward Rome to avenge their murdered families. Crossing the Julian Alps in September 408, Alaric stood before the walls of Rome (now with no capable general like Stilicho as a defender) and began a strict blockade. No blood was shed this time; Alaric relied on hunger as his most powerful weapon. When the ambassadors of the Senate, entreating for peace, tried to intimidate him with hints of what the despairing citizens might accomplish, he laughed and gave his celebrated answer: "The thicker the hay, the easier mowed!" After much bargaining, the famine-stricken citizens agreed to pay a ransom of 5,000 pounds of gold, 30,000 pounds of silver, 4,000 silken tunics, 3,000 hides dyed scarlet, and 3,000 pounds of pepper. They were joined by 40,000 freed Gothic slaves. Thus ended Alaric's first siege of Rome. After having provisionally agreed to the terms offered by Alaric for lifting the blockade, Honorius recanted; historian A.D. Lee highlights that one of the points of contention for the emperor was Alaric's expectation of being named head of the Roman Army, a post Honorius was not prepared to grant to Alaric. 
When this title was not bestowed on Alaric, he proceeded to not only "besiege Rome again in late 409, but also to proclaim a leading senator, Priscus Attalus, as a rival emperor, from whom Alaric then received the appointment" he desired. Meanwhile, Alaric's newly appointed "emperor" Attalus, who seems not to have known the limits of his power or understood his dependence on Alaric, failed to take the latter's advice and lost the grain supply in Africa to a pro-Honorian "comes Africae", Heraclian. Then, sometime in 409, Attalus—accompanied by Alaric—marched on Ravenna and, after receiving unprecedented terms and concessions from the legitimate emperor Honorius, refused him and instead demanded that Honorius be deposed and exiled. Fearing for his safety, Honorius made preparations to flee Ravenna when a ship carrying 4,000 troops arrived from Constantinople, restoring his resolve. Now that it was clear Honorius no longer needed to negotiate, Alaric (regretting his choice of puppet emperor) deposed Attalus, perhaps to re-open negotiations with Ravenna. Negotiations with Honorius might have succeeded had it not been for the influence of another Goth, Sarus, an Amali and therefore a hereditary enemy of Alaric and his house. Why Sarus, who had been in imperial service for years under Stilicho, attacked Alaric at this moment remains a mystery, but Alaric interpreted the attack as directed by Ravenna and as bad faith from Honorius. No longer would negotiations suffice for Alaric, as his patience had reached its end, which led him to march on Rome for a third and final time. On 24 August 410, Alaric and his forces began the sack of Rome, an assault that lasted three days. 
After Alaric entered the city—possibly aided by Gothic slaves inside—there were reports that Emperor Honorius (safe in Ravenna) broke into "wailing and lamentation" but quickly calmed once "it was explained to him that it was the city of Rome that had met its end and not 'Roma'," his pet fowl. Writing from Bethlehem, St. Jerome (Letter 127.12, to "Principia") lamented: "A dreadful rumour reached us from the West. We heard that Rome was besieged, that the citizens were buying their safety with gold . . . The city which had taken the whole world was itself taken; nay, it fell by famine before it fell to the sword." Nonetheless, Christian apologists also cited how Alaric ordered that anyone who took shelter in a church was to be spared. When liturgical vessels were taken from the basilica of St. Peter and Alaric heard of this, he ordered them returned and had them ceremoniously restored in the church. If the account from the historian Orosius can be seen as accurate, there was even a celebratory recognition of Christian unity by way of a procession through the streets where Romans and barbarians alike "raised a hymn to God in public"; historian Edward James concludes that such stories are likely more political rhetoric of the "noble" barbarians than a reflection of historical reality. According to historian Patrick Geary, Roman booty was not the focus of Alaric's sack of Rome; rather, he had come for needed food supplies. Historian Stephen Mitchell asserts that Alaric's followers seemed incapable of feeding themselves and relied on provisions "supplied by the Roman authorities." Whatever Alaric's intentions were cannot be known entirely, but Kulikowski certainly sees the issue of available treasure in a different light, writing that "For three days, Alaric’s Goths sacked the city, stripping it of the wealth of centuries." 
The barbarian invaders were not gentle in their treatment of property as substantial damage was still evident into the sixth century. Certainly the Roman world was shaken by the fall of the Eternal City to barbarian invaders, but as Guy Halsall emphasizes, "Rome’s fall had less striking political effects. Alaric, unable to treat with Honorius, remained in the political cold." Kulikowski sees the situation similarly, commenting: But for Alaric the sack of Rome was an admission of defeat, a catastrophic failure. Everything he had hoped for, had fought for over the course of a decade and a half, went up in flames with the capital of the ancient world. Imperial office, a legitimate place for himself and his followers inside the empire, these were now forever out of reach. He might seize what he wanted, as he had seized Rome, but he would never be given it by right. The sack of Rome solved nothing and when the looting was over Alaric’s men still had nowhere to live and fewer future prospects than ever before. Still, the importance of Alaric cannot be "overestimated" according to Halsall, since he had desired and obtained a Roman command even though he was a barbarian; his real misfortune was being caught between the rivalry of the Eastern and Western empires and their court intrigue. According to historian Peter Brown, when one compares Alaric with other barbarians, "he was almost an Elder Statesman." Nonetheless, Alaric's respect for Roman institutions as a former servant to its highest office did not stay his hand in violently sacking the city that had for centuries exemplified Roman glory, leaving behind physical destruction and social disruption, while Alaric took clerics and even the emperor’s sister, Galla Placidia, with him when he left the city. 
Not only the city of Rome but the rest of Italy fell victim to the forces under Alaric, as Procopius ("Wars" 3.2.11–13), writing in the sixth century, later relates: For they destroyed all the cities which they captured, especially those south of the Ionian Gulf, so completely that nothing has been left to my time to know them by, unless, indeed, it might be one tower or gate or some such thing which chanced to remain. And they killed all the people, as many as came in their way, both old and young alike, sparing neither women nor children. Wherefore even up to the present time Italy is sparsely populated. Whether Alaric's forces wrought the level of destruction described by Procopius or not cannot be known, but evidence speaks to the subsequent population decrease, as the number of people on the food dole dropped from 800,000 in 408 to 500,000 by 419. Rome's fall to the barbarians was as much a psychological blow to the empire as anything else, since some Roman citizens saw the collapse as resulting from the conversion to Christianity, while Christian apologists like Augustine (writing "City of God") responded in turn; some even saw Alaric—himself a Christian—as God's wrath upon a still pagan Rome; meanwhile, Christians and pagans alike blamed the other religion for the city's fall. The Goths were not long in the city of Rome, as only three days after the sack, Alaric marched his men south to Campania, from where he intended to sail to Sicily—probably to obtain grain and other supplies—when a storm destroyed his fleet. During the early months of 411, while on his northward return journey through Italy, Alaric took ill and died at Consentia in Bruttium. His cause of death was likely fever, and his body was, according to legend, buried under the riverbed of the Busento in accordance with the pagan practices of the Visigothic people. 
The stream was temporarily turned aside from its course while the grave was dug, wherein the Gothic chief and some of his most precious spoils were interred. When the work was finished, the river was turned back into its usual channel, and the captives by whose hands the labor had been accomplished were put to death so that none might learn their secret. Alaric was succeeded in the command of the Gothic army by his brother-in-law, Ataulf, who married Honorius' sister Galla Placidia three years later. Alaric's leadership, Kulikowski claims, had given his people "a sense of community that survived his own death...Alaric’s Goths remained together inside the empire, going on to settle in Gaul. There, in the province of Aquitaine, they put down roots and created the first autonomous barbarian kingdom inside the frontiers of the Roman empire." Not long after Alaric's exploits in Rome and Athaulf's settlement in Aquitaine came a "rapid emergence of Germanic barbarian groups in the West" that began controlling many western provinces. These barbarian peoples include: Vandals in Spain and Africa, Visigoths in Spain and Aquitaine, Burgundians along the upper Rhine and southern Gaul, and Franks on the lower Rhine and in northern and central Gaul. The chief authorities on the career of Alaric are: the historian Orosius and the poet Claudian, both contemporary, neither disinterested; Zosimus, a historian who lived probably about half a century after Alaric's death; and Jordanes, a Goth who wrote the history of his nation in 551, basing his work on Cassiodorus's "Gothic History". The legend of Alaric's burial in the Busento comes from Jordanes.
https://en.wikipedia.org/wiki?curid=1570
Albertus Magnus Albertus Magnus (before 1200 – November 15, 1280), also known as Saint Albert the Great and Albert of Cologne, was a German Catholic Dominican friar and bishop. Later canonised as a Catholic saint, he was known during his lifetime as "Doctor universalis" and "Doctor expertus" and, late in his life, the sobriquet "Magnus" was appended to his name. Scholars such as James A. Weisheipl and Joachim R. Söder have referred to him as the greatest German philosopher and theologian of the Middle Ages. The Catholic Church distinguishes him as one of the 36 Doctors of the Church. It seems likely that Albert was born sometime before 1200, given well-attested evidence that he was aged over 80 on his death in 1280. Two later sources say that Albert was about 87 on his death, which has led 1193 to be commonly given as the date of Albert's birth, but this dating lacks sufficient evidence. Albert was probably born in Lauingen (now in Bavaria), since he called himself 'Albert of Lauingen', but this might simply be a family name. Most probably his family was of the "ministerial" class; his supposed familial connection with the noble Bollstädt family (as son of the count) is almost certainly mere conjecture by 15th-century hagiographers. Albert was probably educated principally at the University of Padua, where he received instruction in Aristotle's writings. A late account by Rudolph de Novamagia refers to Albertus' encounter with the Blessed Virgin Mary, who convinced him to enter Holy Orders. In 1223 (or 1229) he became a member of the Dominican Order, and studied theology at Bologna and elsewhere. Selected to fill the position of lecturer at Cologne, Germany, where the Dominicans had a house, he taught for several years there, as well as in Regensburg, Freiburg, Strasbourg, and Hildesheim. During his first tenure as lecturer at Cologne, Albert wrote his "Summa de bono" after discussion with Philip the Chancellor concerning the transcendental properties of being. 
In 1245, Albert became master of theology under Gueric of Saint-Quentin, the first German Dominican to achieve this distinction. Following this turn of events, Albert was able to teach theology at the University of Paris as a full-time professor, holding the seat of the Chair of Theology at the College of St. James. During this time Thomas Aquinas began to study under Albertus. Albert was the first to comment on virtually all of the writings of Aristotle, thus making them accessible to wider academic debate. The study of Aristotle brought him to study and comment on the teachings of Muslim academics, notably Avicenna and Averroes, and this would bring him into the heart of academic debate. In 1254 Albert was made provincial of the Dominican Order, and fulfilled the duties of the office with great care and efficiency. During his tenure he publicly defended the Dominicans against attacks by the secular and regular faculty of the University of Paris, commented on John the Evangelist, and answered what he perceived as errors of the Islamic philosopher Averroes. In 1259 Albert took part in the General Chapter of the Dominicans at Valenciennes together with Thomas Aquinas, masters Bonushomo Britto, Florentius, and Peter (later Pope Innocent V) establishing a "ratio studiorum" or program of studies for the Dominicans that featured the study of philosophy as an innovation for those not sufficiently trained to study theology. This innovation initiated the tradition of Dominican scholastic philosophy put into practice, for example, in 1265 at the Order's "studium provinciale" at the convent of Santa Sabina in Rome, out of which would develop the Pontifical University of Saint Thomas Aquinas, the "Angelicum". In 1260 Pope Alexander IV made him bishop of Regensburg, an office from which he resigned after three years. 
During the exercise of his duties he enhanced his reputation for humility by refusing to ride a horse, in accord with the dictates of the Order, instead traversing his huge diocese on foot. This earned him the affectionate sobriquet "boots the bishop" from his parishioners. In 1263 Pope Urban IV relieved him of the duties of bishop and asked him to preach the Eighth Crusade in German-speaking countries. After this, he was especially known for acting as a mediator between conflicting parties. In Cologne he is not only known for being the founder of Germany's oldest university there, but also for "the big verdict" (der Große Schied) of 1258, which brought an end to the conflict between the citizens of Cologne and the archbishop. Among the last of his labors was the defense of the orthodoxy of his former pupil, Thomas Aquinas, whose death in 1274 grieved Albert (the story that he travelled to Paris in person to defend the teachings of Aquinas cannot be confirmed). Albert was a scientist, philosopher, astrologer, theologian, spiritual writer, ecumenist, and diplomat. Under the auspices of Humbert of Romans, Albert molded the curriculum of studies for all Dominican students, introduced Aristotle to the classroom and probed the work of Neoplatonists, such as Plotinus. Indeed, it was the thirty years of work done by him and Aquinas that allowed for the inclusion of Aristotelian study in the curriculum of Dominican schools. After suffering a collapse of health in 1278, he died on November 15, 1280, in the Dominican convent in Cologne, Germany. Since November 15, 1954, his relics have been in a Roman sarcophagus in the crypt of the Dominican St. Andreas Church in Cologne. Although his body was discovered to be incorrupt at the first exhumation three years after his death, at the exhumation in 1483 only a skeleton remained. Albert was beatified in 1622. 
He was canonized and proclaimed a Doctor of the Church on December 16, 1931, by Pope Pius XI, and was named the patron saint of natural scientists in 1941. St. Albert's feast day is November 15. Albert's writings, collected in 1899, filled thirty-eight volumes. These displayed his prolific habits and encyclopedic knowledge of topics such as logic, theology, botany, geography, astronomy, astrology, mineralogy, alchemy, zoology, physiology, phrenology, justice, law, friendship, and love. He digested, interpreted, and systematized the whole of Aristotle's works, gleaned from the Latin translations and notes of the Arabian commentators, in accordance with Church doctrine. Most modern knowledge of Aristotle was preserved and presented by Albert. His principal theological works are a commentary in three volumes on the Books of the Sentences of Peter Lombard ("Magister Sententiarum"), and the "Summa Theologiae" in two volumes. The latter is in substance a more didactic repetition of the former. Albert's activity, however, was more philosophical than theological (see Scholasticism). The philosophical works, occupying the first six and the last of the 21 volumes, are generally divided according to the Aristotelian scheme of the sciences, and consist of interpretations and condensations of Aristotle's relative works, with supplementary discussions upon contemporary topics, and occasional divergences from the opinions of the master. Albert believed that Aristotle's approach to natural philosophy did not pose any obstacle to the development of a Christian philosophical view of the natural order. Albert's knowledge of natural science was considerable and for the age remarkably accurate. His industry in every department was great: not only did he produce commentaries and paraphrases of the entire Aristotelian corpus, including its scientific works, but Albert also added to and improved upon them. 
His books on topics like botany, zoology, and minerals included information from ancient sources, but also results of his own empirical investigations. These investigations pushed several of the special sciences forward, beyond the reliance on classical texts. In the case of embryology, for example, it has been claimed that little of value was written between Aristotle and Albert, who managed to identify organs within eggs. Furthermore, Albert also effectively invented entire special sciences where Aristotle had not covered a topic. For example, prior to Albert there was no systematic study of minerals. For the breadth of these achievements, he was given the name "Doctor Universalis". Much of Albert's empirical contribution to the natural sciences has been superseded, but his general approach to science may be surprisingly modern. For example, in "De Mineralibus" (Book II, Tractate ii, Ch. 1) Albert claims, "For it is [the task] of natural science not simply to accept what we are told but to inquire into the causes of natural things." In the centuries since his death, many stories arose about Albert as an alchemist and magician. "Much of the modern confusion results from the fact that later works, particularly the alchemical work known as the "Secreta Alberti" or the "Experimenta Alberti", were falsely attributed to Albertus by their authors to increase the prestige of the text through association." On the subject of alchemy and chemistry, many treatises relating to alchemy have been attributed to him, though in his authentic writings he had little to say on the subject, and then mostly through commentary on Aristotle. For example, in his commentary "De mineralibus", he refers to the power of stones, but does not elaborate on what these powers might be. 
A wide range of Pseudo-Albertine works dealing with alchemy exist, though, showing the belief that developed in the generations following Albert's death that he had mastered alchemy, one of the fundamental sciences of the Middle Ages. These include "Metals and Materials"; the "Secrets of Chemistry"; the "Origin of Metals"; the "Origins of Compounds"; and a "Concordance", which is a collection of "Observations on the philosopher's stone"; and other alchemy-chemistry topics, collected under the name of "Theatrum Chemicum". He is credited with the discovery of the element arsenic and experimented with photosensitive chemicals, including silver nitrate. He did believe that stones had occult properties, as he related in his work "De mineralibus". However, there is scant evidence that he personally performed alchemical experiments. According to legend, Albert is said to have discovered the philosopher's stone and passed it on to his pupil Thomas Aquinas, shortly before his death. Albert does not confirm in his writings that he discovered the stone, but he did record that he witnessed the creation of gold by "transmutation." Given that Thomas Aquinas died six years before Albert's death, this legend as stated is unlikely. Albert was deeply interested in astronomy, as has been articulated by scholars such as Paola Zambelli and Scott Hendrix. Throughout the Middle Ages – and well into the early modern period – astrology was widely accepted by scientists and intellectuals who held the view that life on earth is effectively a microcosm within the macrocosm (the latter being the cosmos itself). It was believed that correspondence therefore exists between the two and thus the celestial bodies follow patterns and cycles analogous to those on earth. With this worldview, it seemed reasonable to assert that astrology could be used to predict the probable future of a human being. 
Albert argued that an understanding of the celestial influences affecting us could help us to live our lives more in accord with Christian precepts. The most comprehensive statement of his astrological beliefs is to be found in a work he authored around 1260, now known as the "Speculum astronomiae". However, details of these beliefs can be found in almost everything he wrote, from his early "De natura boni" to his last work, the "Summa theologiae". Albert believed that all natural things were compositions of matter and form, which he referred to as "quod est" and "quo est". Albert also believed that God alone is the absolute ruling entity. Albert's version of hylomorphism is very similar to the Aristotelian doctrine. Albert is known for his commentary on the musical practice of his times. Most of his written musical observations are found in his commentary on Aristotle's "Poetics". He rejected the idea of "music of the spheres" as ridiculous: the movement of astronomical bodies, he supposed, is incapable of generating sound. He wrote extensively on proportions in music, and on the three different subjective levels on which plainchant could work on the human soul: purging of the impure; illumination leading to contemplation; and nourishing perfection through contemplation. Of particular interest to 20th-century music theorists is the attention he paid to silence as an integral part of music. Both of his early treatises, "De natura boni" and "De bono", start with a metaphysical investigation into the concepts of the good in general and the physical good, which Albert refers to as "bonum naturae", before directly dealing with the moral concepts of metaphysics. In Albert's later works, he says that in order to understand human or moral goodness, the individual must first recognize what it means to be good and to do good deeds. This procedure reflects Albert's preoccupations with neo-Platonic theories of the good as well as the doctrines of Pseudo-Dionysius. 
Albert's view was highly valued by the Catholic Church and his peers. Albert devoted the last tractatus of "De bono" to a theory of justice and natural law, in which he places God as the pinnacle of justice and natural law: God legislates, and divine authority is supreme. Up until his time, it was the only work specifically devoted to natural law written by a theologian or philosopher. Albert mentions friendship in "De bono", and presents his ideals and morals of friendship at the very beginning of "Tractatus II". Later in his life he published "Super Ethica". The development of this theme throughout his work makes it evident that the ideals and morals of friendship grew in relevance as his life went on. Albert comments on Aristotle's view of friendship with a quote from Cicero, who writes, "friendship is nothing other than the harmony between things divine and human, with goodwill and love". Albert agrees with this commentary but also adds the element of agreement. He calls this harmony "consensio", itself a certain kind of movement within the human spirit. Albert fully agrees with Aristotle in the sense that friendship is a virtue, and relates the inherent metaphysical connectedness between friendship and moral goodness. Albert describes several levels of goodness: the useful ("utile"), the pleasurable ("delectabile"), and the authentic or unqualified good ("honestum"). Then in turn there are three levels of friendship based on each of those levels, namely friendship based on usefulness ("amicitia utilis"), friendship based on pleasure ("amicitia delectabilis"), and friendship rooted in unqualified goodness ("amicitia honesti"; "amicitia quae fundatur super honestum"). The iconography of the tympanum and archivolts of the late 13th-century portal of Strasbourg Cathedral was inspired by Albert's writings. Albert is frequently mentioned by Dante, who made his doctrine of free will the basis of his ethical system. 
In his "Divine Comedy", Dante places Albertus with his pupil Thomas Aquinas among the great lovers of wisdom ("Spiriti Sapienti") in the Heaven of the Sun. Albert is also mentioned, along with Agrippa and Paracelsus, in Mary Shelley's "Frankenstein", in which his writings influence a young Victor Frankenstein. In "The Concept of Anxiety", Søren Kierkegaard wrote that Albert "arrogantly boasted of his speculation before the deity and suddenly became stupid." Kierkegaard cites Gotthard Oswald Marbach, whom he quotes as saying "Albertus repente ex asino factus philosophus et ex philosopho asinus" [Albert was suddenly transformed from an ass into a philosopher and from a philosopher into an ass]. Johann Eduard Erdmann considers Albert greater and more original than his pupil Aquinas. A number of schools have been named after Albert, including Albertus Magnus High School in Bardonia, New York; Albertus Magnus Lyceum in River Forest, Illinois; and Albertus Magnus College in New Haven, Connecticut. Albertus Magnus Science Hall at Thomas Aquinas College, in Santa Paula, California, is named in his honor. The main science buildings at Providence College and Aquinas College in Grand Rapids, Michigan, are also named after him. The central square at the campus of the University of Cologne features a statue of Albert and is named after him. The Academy for Science and Design in New Hampshire honored Albert by naming one of its four houses Magnus House. As a tribute to the scholar's contributions to the law, the University of Houston Law Center displays a statue of Albert on the university's campus. The Albertus-Magnus-Gymnasium is found in Rottweil, Germany. In Managua, Nicaragua, the Albertus Magnus International Institute, a business and economic development research center, was founded in 2004. 
In the Philippines, the Albertus Magnus Building at the University of Santo Tomas, which houses the Conservatory of Music, College of Tourism and Hospitality Management, College of Education, and UST Education High School, is named in his honor. The Saint Albert the Great Science Academy in San Carlos City, Pangasinan, which offers preschool, elementary and high school education, takes pride in having St. Albert as its patron saint. Its main building was named Albertus Magnus Hall in 2008. San Alberto Magno Academy in Tubao, La Union is also dedicated in his honor. This century-old Catholic high school continues to pursue its vision and mission to this day, offering Senior High School courses. Due to his contributions to natural philosophy, the plant species "Alberta magna" and the asteroid 20006 Albertus Magnus were named after him. Numerous Catholic elementary and secondary schools are named for him, including schools in Toronto; Calgary; Cologne; and Dayton, Ohio. The Albertus typeface is named after him. At the University of Notre Dame du Lac in South Bend, Indiana, the Zahm House Chapel is dedicated to St. Albert the Great. Fr. John Zahm, C.S.C., after whom the men's residence hall is named, looked to St. Albert's example of using religion to illumine scientific discovery. Fr. Zahm's work with the Bible and evolution is sometimes seen as a continuation of St. Albert's legacy. The second-largest student fraternity of the Netherlands, located in the city of Groningen, is named Albertus Magnus in honor of the saint. The Colegio Cientifico y Artistico de San Alberto in Hopelawn, New Jersey, with a sister school in Nueva Ecija, Philippines, was founded in 1986 in his honor; he thought and taught that religion, the sciences, and the arts should not contradict one another but should support one another in the pursuit of wisdom and reason. 
The Vosloorus Catholic parish (located in Vosloorus Extension One, Ekurhuleni, Gauteng, South Africa) is named after the saint. The Catholic parish in Leopoldshafen, near Karlsruhe in Germany, is also named after him, a fitting choice given the large research center of the Karlsruhe Institute of Technology nearby, as he is the patron saint of scientists. Since the death of King Albert I, the King's Feast has been celebrated in Belgium on Albert's feast day. Edinburgh's Catholic Chaplaincy, serving the city's universities, is named after St Albert.
https://en.wikipedia.org/wiki?curid=1573
Alboin Alboin (530s – 28 June 572) was king of the Lombards from about 560 until 572. During his reign the Lombards ended their migrations by settling in Italy, the northern part of which Alboin conquered between 569 and 572. He had a lasting effect on Italy and the Pannonian Basin; in the former his invasion marked the beginning of centuries of Lombard rule, and in the latter his defeat of the Gepids and his departure from Pannonia ended the dominance there of the Germanic peoples. The period of Alboin's reign as king in Pannonia following the death of his father, Audoin, was one of confrontation and conflict between the Lombards and their main neighbors, the Gepids. The Gepids initially gained the upper hand, but in 567, thanks to his alliance with the Avars, Alboin inflicted a decisive defeat on his enemies, whose lands the Avars subsequently occupied. The increasing power of his new neighbours caused Alboin some unease, however, and he therefore decided to leave Pannonia for Italy, hoping to take advantage of the Byzantine Empire's reduced ability to defend its territory in the wake of the Gothic War. After gathering a large coalition of peoples, Alboin crossed the Julian Alps in 568, entering an almost undefended Italy. He rapidly took control of most of Venetia and Liguria. In 569, unopposed, he took northern Italy's main city, Milan. Pavia, however, offered stiff resistance, and was taken only after a siege lasting three years. During that time Alboin turned his attention to Tuscany, but signs of factionalism among his supporters and Alboin's diminishing control over his army increasingly began to manifest themselves. Alboin was assassinated on 28 June 572, in a coup d'état instigated by the Byzantines. It was organized by the king's foster brother, Helmichis, with the support of Alboin's wife, Rosamund, daughter of the Gepid king whom Alboin had killed some years earlier. 
The coup failed in the face of opposition from a majority of the Lombards, who elected Cleph as Alboin's successor, forcing Helmichis and Rosamund to flee to Ravenna under imperial protection. Alboin's death deprived the Lombards of the only leader who could have kept the newborn Germanic entity together, the last in the line of hero-kings who had led the Lombards through their migrations from the vale of the Elbe to Italy. For many centuries following his death Alboin's heroism and his success in battle were celebrated in Saxon and Bavarian epic poetry. The name Alboin derives from Proto-Germanic roots meaning "elf" and "friend"; it is thus cognate with the Old English name "Ælfwine". He was known in Latin as "Alboinus" and in Greek as Αλβοΐνος ("Alvoinos"). In modern Italian he is "Alboino" and in modern Lombard "Albuì". The Lombards under King Wacho had migrated towards the east into Pannonia, taking advantage of the difficulties facing the Ostrogothic Kingdom in Italy following the death of its founder, Theodoric, in 526. Wacho's death in about 540 brought his son Walthari to the throne, but, as the latter was still a minor, the kingdom was governed in his stead by Alboin's father, Audoin, of the Gausian clan. Seven years later Walthari died, giving Audoin the opportunity to crown himself and overthrow the reigning Lethings. Alboin was probably born in the 530s in Pannonia, the son of Audoin and his wife, Rodelinda. She may have been the niece of King Theodoric and betrothed to Audoin through the mediation of Emperor Justinian. Like his father, Alboin was raised a pagan, although Audoin had at one point attempted to gain Byzantine support against his neighbours by professing himself a Christian. Alboin took as his first wife the Christian Chlothsind, daughter of the Frankish King Chlothar. 
This marriage, which took place soon after the death of the Frankish ruler Theudebald in 555, is thought to reflect Audoin's decision to distance himself from the Byzantines, traditional allies of the Lombards, who had been lukewarm when it came to supporting Audoin against the Gepids. The new Frankish alliance was important because of the Franks' known hostility to the Byzantine empire, providing the Lombards with more than one option. However, the "Prosopography of the Later Roman Empire" interprets events and sources differently, believing that Alboin married Chlothsind when already a king in or shortly before 561, the year of Chlothar's death. Alboin first distinguished himself on the battlefield in a clash with the Gepids. At the Battle of Asfeld (552), he killed Turismod, son of the Gepid king Thurisind, in a victory that resulted in the Emperor Justinian's intervention to maintain equilibrium between the rival regional powers. After the battle, according to a tradition reported by Paul the Deacon, to be granted the right to sit at his father's table, Alboin had to ask for the hospitality of a foreign king and have him donate his weapons, as was customary. For this initiation, he went to the court of Thurisind, where the Gepid king gave him Turismod's arms. Walter Goffart believes it is probable that in this narrative Paul was making use of an oral tradition, and is sceptical that it can be dismissed as merely a typical "topos" of an epic poem. Alboin came to the throne after the death of his father, sometime between 560 and 565. As was customary among the Lombards, Alboin took the crown after an election by the tribe's freemen, who traditionally selected the king from the dead sovereign's clan. Shortly afterwards, in 565, a new war erupted with the Gepids, now led by Cunimund, Thurisind's son. 
The cause of the conflict is uncertain, as the sources are divided; the Lombard Paul the Deacon accuses the Gepids, while the Byzantine historian Menander Protector places the blame on Alboin, an interpretation favoured by historian Walter Pohl. An account of the war by the Byzantine Theophylact Simocatta sentimentalises the reasons behind the conflict, claiming it originated with Alboin's vain courting and subsequent kidnapping of Cunimund's daughter Rosamund, whom Alboin then proceeded to marry. The tale is treated with scepticism by Walter Goffart, who observes that it conflicts with the "Origo Gentis Langobardorum", where she was captured only after the death of her father. The Gepids obtained the support of the Emperor in exchange for a promise to cede him the region of Sirmium, the seat of the Gepid kings. Thus in 565 or 566 Justinian's successor Justin II sent his son-in-law Baduarius as "magister militum" (field commander) to lead a Byzantine army against Alboin in support of Cunimund, a campaign that ended in the Lombards' complete defeat. Faced with the possibility of annihilation, Alboin made an alliance in 566 with the Avars under Bayan I, at the cost of some harsh conditions: the Avars demanded a tenth of the Lombards' cattle, half of the war booty, and, on the war's conclusion, all of the lands held by the Gepids. The Lombards played on the pre-existing hostility between the Avars and the Byzantines, claiming that the latter were allied with the Gepids. Cunimund, on the other hand, encountered hostility when he once again asked the Emperor for military assistance, as the Byzantines had been angered by the Gepids' failure to cede Sirmium to them, as had been agreed. Moreover, Justin II was moving away from the foreign policy of Justinian, and believed in dealing more strictly with bordering states and peoples. Attempts to mollify Justin II with tributes failed, and as a result the Byzantines kept themselves neutral if not outright supportive of the Avars. 
In 567 the allies made their final move against Cunimund, with Alboin invading the Gepids' lands from the northwest while Bayan attacked from the northeast. Cunimund attempted to prevent the two armies joining up by moving against the Lombards and clashing with Alboin somewhere between the Tibiscus and Danube rivers. The Gepids were defeated in the ensuing battle, their king slain by Alboin, and Cunimund's daughter Rosamund taken captive, according to references in the "Origo". The full destruction of the Gepid kingdom was completed by the Avars, who overcame the Gepids in the east. As a result, the Gepids ceased to exist as an independent people, and were partly absorbed by the Lombards and the Avars. Some time before 568, Alboin's first wife Chlothsind died, and after his victory against Cunimund Alboin married Rosamund, to establish a bond with the remaining Gepids. The war also marked a watershed in the geo-political history of the region, as together with the Lombard migration the following year, it signalled the end of six centuries of Germanic dominance in the Pannonian Basin. Despite his success against the Gepids, Alboin had failed to greatly increase his power, and was now faced with a much stronger threat from the Avars. Historians consider this the decisive factor in convincing Alboin to undertake a migration, even though there are indications that before the war with the Gepids a decision was maturing to leave for Italy, a country thousands of Lombards had seen in the 550s when hired by the Byzantines to fight in the Gothic War. Additionally, the Lombards would have known of the weakness of Byzantine Italy, which had endured a number of problems after being retaken from the Goths. In particular the so-called Plague of Justinian had ravaged the region and conflict remained endemic, with the Three-Chapter Controversy sparking religious opposition and administration at a standstill after the able governor of the peninsula, Narses, was recalled. 
Nevertheless, the Lombards viewed Italy as a rich land which promised great booty, assets Alboin used to gather together a horde which included not only Lombards but many other peoples of the region, including Heruli, Suebi, Gepids, Thuringii, Bulgars, Sarmatians, the remaining Romans and a few Ostrogoths. But the most important group, other than the Lombards, were the Saxons, of whom 20,000 male warriors with their families participated in the trek. These Saxons were tributaries to the Frankish King Sigebert, and their participation indicates that Alboin had the support of the Franks for his venture. The precise size of the heterogeneous group gathered by Alboin is impossible to know, and many different estimates have been made. Neil Christie considers 150,000 to be a realistic size, a number which would make the Lombards a more numerous force than the Ostrogoths on the eve of their invasion of Italy. Jörg Jarnut proposes 100,000–150,000 as an approximation; Wilfried Menghin in "Die Langobarden" estimates 150,000 to 200,000; while Stefano Gasparri cautiously judges the peoples united by Alboin to be somewhere between 100,000 and 300,000. As a precautionary move Alboin strengthened his alliance with the Avars, signing what Paul calls a "foedus perpetuum" ("perpetual treaty") and what is referred to in the 9th-century "Historia Langobardorum codicis Gothani" as a "pactum et foedus amicitiae" ("pact and treaty of friendship"), adding that the treaty was put down on paper. By the conditions accepted in the treaty, the Avars were to take possession of Pannonia and the Lombards were promised military support in Italy should the need arise; also, for a period of 200 years the Lombards were to maintain the right to reclaim their former territories if the plan to conquer Italy failed, thus leaving Alboin with an alternative open. 
The accord also had the advantage of protecting Alboin's rear, as an Avar-occupied Pannonia would make it difficult for the Byzantines to bring forces to Italy by land. The agreement proved immensely successful, and relations with the Avars were almost uninterruptedly friendly during the lifetime of the Lombard Kingdom. A further cause of the Lombard migration into Italy may have been an invitation from Narses. According to a controversial tradition reported by several medieval sources, Narses, out of spite for having been removed by Justinian's successor Justin II, called the Lombards to Italy. Often dismissed as an unreliable tradition, it has been studied with attention by modern scholars, in particular Neil Christie, who see in it a possible record of a formal invitation by the Byzantine state to settle in northern Italy as "foederati", to help protect the region against the Franks, an arrangement that may have been disowned by Justin II after Narses' removal. The Lombard migration started on Easter Monday, April 2, 568. The decision to combine the departure with a Christian celebration can be understood in the context of Alboin's recent conversion to Arian Christianity, as attested by the presence of Arian Gothic missionaries at his court. The conversion is likely to have been motivated mostly by political considerations, and intended to consolidate the migration's cohesion, distinguishing the migrants from the Catholic Romans. It also connected Alboin and his people to the Gothic heritage, and in this way obtained the support of the Ostrogoths serving in the Byzantine army as "foederati". It has been speculated that Alboin's migration could have been partly the result of a call from surviving Ostrogoths in Italy. The season chosen for leaving Pannonia was unusually early; the Germanic peoples generally waited until autumn before beginning a migration, giving themselves time to do the harvesting and replenish their granaries for the march. 
The reason behind the spring departure could be the anxiety induced by the neighboring Avars, despite the friendship treaty. Nomadic peoples like the Avars also waited for autumn to begin their military campaigns, as they needed enough forage for their horses. A sign of this anxiety can also be seen in the decision taken by Alboin to ravage Pannonia, which created a safety zone between the Lombards and the Avars. The road followed by Alboin to reach Italy has been the subject of controversy, as is the length of the trek. According to Neil Christie the Lombards divided themselves into migrational groups, with a vanguard scouting the road, probably following the Poetovio – Celeia – Emona – Forum Iulii route, while the wagons and most of the people proceeded slowly behind because of the goods and chattels they brought with them, and possibly also because they were waiting for the Saxons to join them on the road. By September raiding parties were looting Venetia, but it was probably only in 569 that the Julian Alps were crossed at the Vipava Valley; the eyewitness Secundus of Non gives the date as May 20 or 21. The 569 date for the entry into Italy is not void of difficulties however, and Jörg Jarnut believes the conquest of most of Venetia had already been completed in 568. According to Carlo Guido Mor, a major difficulty remains in explaining how Alboin could have reached Milan on September 3 assuming he had passed the border only in the May of the same year. The Lombards penetrated into Italy without meeting any resistance from the border troops ("milities limitanei"). The Byzantine military resources available on the spot were scant and of dubious loyalty, and the border forts may well have been left unmanned. What seems certain is that archaeological excavations have found no sign of violent confrontation in the sites that have been excavated. This agrees with Paul the Deacon's narrative, who speaks of a Lombard takeover in Friuli "without any hindrance". 
The first town to fall into the Lombards' hands was Forum Iulii (Cividale del Friuli), the seat of the local "magister militum". Alboin chose this walled town close to the frontier to be capital of the Duchy of Friuli and made his nephew and shield bearer, Gisulf, duke of the region, with the specific duty of defending the borders from Byzantine or Avar attacks from the east. Gisulf obtained from his uncle the right to choose for his duchy those "farae", or clans, that he preferred. Alboin's decision to create a duchy and designate a duke were both important innovations; until then, the Lombards had never had dukes or duchies based on a walled town. The innovation adopted was part of Alboin's borrowing of Roman and Ostrogothic administrative models, as in Late Antiquity the "comes civitatis" (city count) was the main local authority, with full administrative powers in his region. But the shift from count ("comes") to duke ("dux") and from county ("comitatus") to duchy ("ducatus") also signalled the progressive militarization of Italy. The selection of a fortified town as the centre for the new duchy was also an important change from the time in Pannonia, for while urbanized settlements had previously been ignored by the Lombards, now a considerable part of the nobility settled itself in Forum Iulii, a pattern that was repeated regularly by the Lombards in their other duchies. From Forum Iulii, Alboin next reached Aquileia, the most important road junction in the northeast, and the administrative capital of Venetia. The imminent arrival of the Lombards had a considerable impact on the city's population; the Patriarch of Aquileia Paulinus fled with his clergy and flock to the island of Grado in Byzantine-controlled territory. From Aquileia, Alboin took the Via Postumia and swept through Venetia, taking in rapid succession Tarvisium (Treviso), Vicentia (Vicenza), Verona, Brixia (Brescia) and Bergomum (Bergamo). 
The Lombards faced difficulties only in taking Opitergium (Oderzo), which Alboin decided to avoid, as he similarly avoided tackling the main Venetian towns closer to the coast on the Via Annia, such as Altinum, Patavium (Padova), Mons Silicis (Monselice), Mantua and Cremona. The invasion of Venetia generated a considerable level of turmoil, spurring waves of refugees from the Lombard-controlled interior to the Byzantine-held coast, often led by their bishops, and resulting in new settlements such as Torcello and Heraclia. Alboin moved west in his march, invading the region of Liguria (north-west Italy) and reaching its capital Mediolanum (Milan) on September 3, 569, only to find it already abandoned by the "vicarius Italiae" (vicar of Italy), the authority entrusted with the administration of the diocese of Annonarian Italy. Archbishop Honoratus, his clergy, and part of the laity accompanied the "vicarius Italiae" to find a safe haven in the Byzantine port of Genua (Genoa). Alboin counted the years of his reign from the capture of Milan, when he assumed the title of "dominus Italiae" (Lord of Italy). His success also meant the collapse of Byzantine defences in the northern part of the Po plain, and large movements of refugees to Byzantine areas. Several explanations have been advanced to explain the swiftness and ease of the initial Lombard advance in northern Italy. It has been suggested that the towns' doors may have been opened by the betrayal of the Gothic auxiliaries in the Byzantine army, but historians generally hold that Lombard success occurred because Italy was not considered by Byzantium as a vital part of the empire, especially at a time when the empire was imperilled by the attacks of Avars and Slavs in the Balkans and Sassanids in the east. The Byzantine decision not to contest the Lombard invasion reflects the desire of Justinian's successors to reorient the core of the Empire's policies eastward. 
The impact of the Lombard migration on the Late Roman aristocracy was disruptive, especially in combination with the Gothic War; the latter conflict had finished in the north only in 562, when the last Gothic stronghold, Verona, was taken. Many men of means (Paul's "possessores") either lost their lives or their goods, but the exact extent of the despoliation of the Roman aristocracy is a subject of heated debate. The clergy was also greatly affected. The Lombards were mostly pagans, and displayed little respect for the clergy and Church property. Many churchmen left their sees to escape from the Lombards, like the two most senior bishops in the north, Honoratus and Paulinus. However, most of the suffragan bishops in the north sought an accommodation with the Lombards, as did in 569 the bishop of Tarvisium, Felix, when he journeyed to the Piave river to parley with Alboin, obtaining respect for the Church and its goods in return for this act of homage. It seems certain that many sees maintained an uninterrupted episcopal succession through the turmoil of the invasion and the following years. The transition was eased by the hostility existing among the northern Italian bishops towards the papacy and the empire due to the religious dispute involving the "Three-Chapter Controversy". In Lombard territory, churchmen were at least sure to avoid imperial religious persecution. In the view of Pierre Riché, the disappearance of 220 bishops' seats indicates that the Lombard migration was a crippling catastrophe for the Church. Yet according to Walter Pohl the regions directly occupied by Alboin suffered less devastation and had a relatively robust survival rate for towns, whereas the occupation of territory by autonomous military bands interested mainly in raiding and looting had a more severe impact, with the bishoprics in such places rarely surviving. 
The first attested instance of strong resistance to Alboin's migration took place at the town of Ticinum (Pavia), which he started to besiege in 569 and captured only after three years. The town was of strategic importance, sitting at the confluence of the rivers Po and Ticino and connected by waterways to Ravenna, the capital of Byzantine Italy and the seat of the Praetorian prefecture of Italy. Its fall cut direct communications between the garrisons stationed on the Alpes Maritimae and the Adriatic coast. Careful to maintain the initiative against the Byzantines, by 570 Alboin had taken their last defences in northern Italy except for the coastal areas of Liguria and Venetia and a few isolated inland centres such as Augusta Praetoria (Aosta), Segusio (Susa), and the island of Amacina in the Larius Lacus (Lake Como). During Alboin's kingship the Lombards crossed the Apennines and plundered Tuscia, but historians are not in full agreement as to whether this took place under his guidance and whether it constituted anything more than raiding. According to Herwig Wolfram, it was probably only in 578–579 that Tuscany was conquered, but Jörg Jarnut and others believe this began in some form under Alboin, although it was not completed by the time of his death. Alboin's problems in maintaining control over his people worsened during the siege of Ticinum. The nature of the Lombard monarchy made it difficult for a ruler to exert the same degree of authority over his subjects as had been exercised by Theodoric over his Goths, and the structure of the army gave great authority to the military commanders or "duces", who led each band ("fara") of warriors. Additionally, the difficulties encountered by Alboin in building a solid political entity resulted from a lack of imperial legitimacy, as unlike the Ostrogoths, the Lombards had not entered Italy as "foederati" but as enemies of the Empire. 
The king's disintegrating authority over his army was also manifested in the invasion of Frankish Burgundy which from 569 or 570 was subject to yearly raids on a major scale. The Lombard attacks were ultimately repelled following Mummolus' victory at Embrun. These attacks had lasting political consequences, souring the previously cordial Lombard-Frankish relations and opening the door to an alliance between the Empire and the Franks against the Lombards, a coalition agreed to by Guntram in about 571. Alboin is generally thought not to have been behind this invasion, but an alternative interpretation of the transalpine raids presented by Gian Piero Bognetti is that Alboin may actually have been involved in the offensive on Guntram as part of an alliance with the Frankish king of Austrasia, Sigebert I. This view is met with scepticism by scholars such as Chris Wickham. The weakening of royal authority may also have resulted in the conquest of much of southern Italy by the Lombards, in which modern scholars believe Alboin played no role at all, probably taking place in 570 or 571 under the auspices of individual warlords. However it is far from certain that the Lombard takeover occurred during those years, as very little is known of Faroald and Zotto's respective rises to power in Spoletium (Spoleto) and Beneventum (Benevento). Ticinum eventually fell to the Lombards in either May or June 572. Alboin had in the meantime chosen Verona as his seat, establishing himself and his treasure in a royal palace built there by Theodoric. This choice may have been another attempt to link himself with the Gothic king. It was in this palace that Alboin was killed on June 28, 572. In the account given by Paul the Deacon, the most detailed narrative on Alboin's death, history and saga intermingle almost inextricably. Much earlier and shorter is the story told by Marius of Aventicum in his "Chronica", written about a decade after Alboin's murder. 
According to this version, the king was killed in a conspiracy by a man close to him, called Hilmegis (Paul's Helmechis), with the connivance of the queen. Helmichis then married the widow, but the two were forced to escape to Byzantine Ravenna, taking with them the royal treasure and part of the army, which hints at the cooperation of Byzantium. Roger Collins describes Marius as an especially reliable source because of his early date and his having lived close to Lombard Italy. Also contemporary is Gregory of Tours' account presented in the "Historia Francorum", and echoed by the later Fredegar. Gregory's account diverges in several respects from most other sources. His tale relates how Alboin married the daughter of a man he had slain, and how she waited for a suitable occasion for revenge, eventually poisoning him. She had previously fallen in love with one of her husband's servants, and after the assassination tried to escape with him, but they were captured and killed. However, historians including Walter Goffart place little trust in this narrative. Goffart notes other similar doubtful stories in the "Historia" and calls its account of Alboin's demise "a suitably ironic tale of the doings of depraved humanity". Elements present in Marius' account are echoed in Paul's "Historia Langobardorum", which also contains distinctive features. One of the best known aspects unavailable in any other source is that of the skull cup. In Paul, the events that led to Alboin's downfall unfold in Verona. During a great feast, Alboin gets drunk and orders his wife Rosamund to drink from his cup, made from the skull of his father-in-law Cunimund, whom he had slain in 567 before marrying Rosamund. Alboin "invited her to drink merrily with her father". This reignited the queen's determination to avenge her father. The tale has often been dismissed as a fable, and Paul was conscious of the risk of disbelief. 
For this reason, he insists that he saw the skull cup personally during the 740s in the royal palace of Ticinum, in the hands of king Ratchis. The use of skull cups has been noticed among nomadic peoples and, in particular, among the Lombards' neighbors, the Avars. Skull cups are believed to be part of a shamanistic ritual, in which drinking from the cup was considered a way to assume the dead man's powers. In this context, Stefano Gasparri and Wilfried Menghin see in Cunimund's skull cup the sign of nomadic cultural influences on the Lombards: by drinking from his enemy's skull, Alboin was taking his vital strength. As for the offering of the skull to Rosamund, that may have been a ritual request for the complete submission of the queen and her people to the Lombards, and thus a cause of shame or humiliation. Alternatively, it may have been a rite to appease the dead through the offering of a libation. In the latter interpretation, the queen's answer reveals her determination not to let the wound opened by the killing of her father be healed through a ritual act, thus openly displaying her thirst for revenge. The episode is read in a radically different way by Walter Goffart. According to him, the whole story assumes an allegorical meaning, with Paul intent on telling an edifying story of the downfall of the hero and his expulsion from the promised land because of his human weakness. In this story the skull cup plays a key role, as it unites original sin and barbarism. Goffart does not exclude the possibility that Paul had really seen the skull, but believes that by the 740s the connection between sin and barbarism as exemplified by the skull cup had already been established. In her plan to kill her husband, Rosamund found an ally in Helmichis, the king's foster brother and "spatharius" (arms bearer). According to Paul, the queen then recruited the king's "cubicularius" (bedchamberlain), Peredeo, into the plot, after having seduced him. 
When Alboin retired for his midday rest on June 28, care was taken to leave the door open and unguarded. Alboin's sword was also removed, leaving him defenceless when Peredeo entered his room and killed him. Alboin's remains were allegedly buried beneath the palace steps. Peredeo's figure and role are mostly introduced by Paul; the "Origo" had first mentioned his name, as "Peritheus", but there his role had been different: he was not the assassin, but the instigator of the assassination. In the vein of his reading of the skull cup, Goffart sees Peredeo not as a historical figure but as an allegorical character: he notes a similarity between Peredeo's name and the Latin word "peritus", meaning "lost", a representation of those Lombards who entered into the service of the Empire. Alboin's death had a lasting impact, as it deprived the Lombards of the only leader who could have held together the newborn Germanic entity. His end also represents the death of the last of the line of hero-kings who had led the Lombards through their migrations from the Elbe to Italy. His fame survived him for many centuries in epic poetry, with Saxons and Bavarians celebrating his prowess in battle, his heroism, and the magical properties of his weapons. To complete the coup d'état and legitimize his claim to the throne, Helmichis married the queen, whose high standing arose not only from being the king's widow but also from being the most prominent member of the remaining Gepid nation; as such, her support was a guarantee of the Gepids' loyalty to Helmichis. Helmichis could also count on the support of the Lombard garrison of Verona, where many may have opposed Alboin's aggressive policy and could have cultivated the hope of reaching an entente with the Empire. The Byzantines were almost certainly deeply involved in the plot. 
It was in their interest to stem the Lombard tide by bringing a pro-Byzantine regime into power in Verona, and possibly, in the long run, to break the unity of the Lombards' kingdom, winning over the dukes with honors and emoluments. The coup ultimately failed, as it met with the resistance of most of the warriors, who were opposed to the king's assassination. As a result, the Lombard garrison in Ticinum proclaimed Duke Cleph the new king, and Helmichis, rather than going to war against overwhelming odds, escaped to Ravenna with Longinus' assistance, taking with him his wife, his troops, the royal treasure and Alboin's daughter Albsuinda. In Ravenna the two lovers became estranged and killed each other. Subsequently, Longinus sent Albsuinda and the treasure to Constantinople. Cleph kept the throne for only 18 months before being assassinated by a slave. Possibly he too was killed at the instigation of the Byzantines, who had every interest in avoiding a hostile and solid leadership among the Lombards. An important success for the Byzantines was that no king was proclaimed to succeed Cleph, opening a decade of interregnum and leaving the Lombards more vulnerable to attacks from Franks and Byzantines. It was only when faced with the danger of annihilation by the Franks in 584 that the dukes elected a new king in the person of Authari, son of Cleph, who began the definitive consolidation and centralization of the Lombard kingdom. Meanwhile, the remaining imperial territories were reorganized under the control of an exarch in Ravenna with the capacity to defend the country without the Emperor's assistance. The consolidation of Byzantine and Lombard dominions had long-lasting consequences for Italy, as the region was from that moment on fragmented among multiple rulers until Italian unification in 1871. 
Alboin, together with other tribal leaders, is mentioned in the 10th-century Old English poem "Widsith" (lines 70–75). The historical period also formed the basis of the 1961 Italian adventure film "Sword of the Conqueror" (Italian: "Rosmunda e Alboino"; German title: "Alboin, König der Langobarden"), with Jack Palance as Alboin. There have been several artistic depictions of events from Alboin's life, including Peter Paul Rubens' "Alboin and Rosamunde" (1615); Charles Landseer's "Assassination of Alboin, King of the Lombards" (1856); and Fortunino Matania's illustration "Rosamund captive before King Alboin of the Lombards" (1942).
https://en.wikipedia.org/wiki?curid=1575
Afonso de Albuquerque Afonso de Albuquerque, Duke of Goa (1453 – 16 December 1515) (also spelled Aphonso or Alfonso) was a Portuguese general, conqueror, statesman and empire builder. Afonso advanced the three-fold Portuguese grand scheme of combating Islam, spreading Christianity, and securing the trade of spices by establishing a Portuguese Asian empire. Among his achievements, Afonso managed to conquer Goa and was the first European of the Renaissance to raid the Persian Gulf, and he led the first voyage by a European fleet into the Red Sea. His military and administrative works are generally regarded as among the most vital to building and securing the Portuguese Empire in the Orient, the Middle East, and the spice routes of eastern Oceania. Afonso is generally considered a military genius, and "probably the greatest naval commander of the age", given his successful strategy of attempting to close all the Indian Ocean naval passages to the Atlantic, the Red Sea, the Persian Gulf, and the Pacific, transforming the ocean into a Portuguese "mare clausum" established over the opposition of the Ottoman Empire and its Muslim and Hindu allies. In the expansion of the Portuguese Empire, Afonso initiated a rivalry that would become known as the Ottoman–Portuguese war, which would endure for many years. Many of the Ottoman–Portuguese conflicts in which he was directly involved took place in the Indian Ocean, in the Persian Gulf regions for control of the trade routes, and on the coasts of India. It was his military brilliance in these initial campaigns against the much larger Ottoman Empire and its allies that enabled Portugal to become the first global empire in history. He had a record of engaging and defeating much larger armies and fleets. For example, his capture of Ormuz in 1507 against the Persians was accomplished with a fleet of only seven ships. Other famous battles and offensives which he led include the conquest of Goa in 1510 and the capture of Malacca in 1511. 
He became admiral of the Indian Ocean, and was appointed head of the "fleet of the Arabian and Persian sea" in 1506. During the last five years of his life, he turned to administration, where his actions as the second governor of Portuguese India were crucial to the longevity of the Portuguese Empire. He pioneered European sea trade with China during the Ming dynasty through the envoy Rafael Perestrello, with Thailand through the envoy Duarte Fernandes, and with Timor, passing through Malaysia and Indonesia, in a voyage headed by António de Abreu and Francisco Serrão. He also aided diplomatic relations with Ethiopia through the priest envoys João Gomes and João Sanches, and established diplomatic ties with Persia during the Safavid dynasty. He became known as "the Great", "the Terrible", "the Caesar of the East", "the Lion of the Seas", and "the Portuguese Mars". Afonso de Albuquerque was born in 1453 in Alhandra, near Lisbon. He was the second son of Gonçalo de Albuquerque, Lord of Vila Verde dos Francos, and Dona Leonor de Menezes. His father held an important position at court and was connected by remote illegitimate descent with the Portuguese monarchy. Afonso was educated in mathematics and Latin at the court of Afonso V of Portugal, where he befriended Prince John, the future King John II of Portugal. Afonso's early training is described by Diogo Barbosa Machado: "D. Alfonso de Albuquerque, surnamed the Great, by reason of the heroic deeds wherewith he filled Europe with admiration, and Asia with fear and trembling, was born in the year 1453, in the Estate called, for the loveliness of its situation, the Paradise of the Town of Alhandra, six leagues distant from Lisbon. He was the second son of Gonçalo de Albuquerque, Lord of Villaverde, and of D. Leonor de Menezes, daughter of D. Álvaro Gonçalves de Athayde, Count of Atouguia, and of his wife D. Guiomar de Castro, and corrected this injustice of nature by climbing to the summit of every virtue, both political and moral. 
He was educated in the Palace of the King D. Afonso V, in whose palaestra he strove emulously to become the rival of that African Mars". Afonso served ten years in North Africa, where he gained military experience in fierce campaigns against Muslim powers and the Ottoman Turks. In 1471, under the command of Afonso V of Portugal, he was present at the conquest of Tangier and Arzila in Morocco, serving there as an officer for some years. In 1476 he accompanied Prince John in wars against Castile, including the Battle of Toro. He participated in the 1480 campaign on the Italian peninsula that rescued Ferdinand II of Aragon from the Ottoman invasion of Otranto, which ended in victory. On his return in 1481, when Prince John was crowned as King John II, Afonso was made chief equerry ("estribeiro-mor", Master of the Horse) to the King for his distinguished exploits, a post which he held throughout John's reign (1481–95). In 1489 he returned to military campaigns in North Africa as commander of defense of the Graciosa fortress, on an island in the river Luco near the city of Larache, and in 1490 was part of the guard of King John II, returning to Arzila in 1495, where his younger brother Martim died fighting by his side. Afonso made his mark under the stern John II and won military campaigns in Africa and the Mediterranean Sea, yet it was in Asia that he would make his greatest impact. When King Manuel I of Portugal was enthroned, he showed some reticence towards Afonso, a close friend of his dreaded predecessor and seventeen years his senior. Eight years later, on 6 April 1503, after a long military career and at a mature age, Afonso was sent on his first expedition to India together with his cousin Francisco de Albuquerque. Each commanded three ships, sailing with Duarte Pacheco Pereira and Nicolau Coelho. 
They engaged in several battles against the forces of the Zamorin of Calicut ("Calecute", Kozhikode) and succeeded in establishing the King of Cochin ("Cochim", Kochi) securely on his throne. In return, the King gave them permission to build the Portuguese fort "Immanuel" (Fort Kochi) and establish trade relations with Quilon ("Coulão", Kollam). This laid the foundation for the eastern Portuguese Empire. Afonso returned home in July 1504, and was well received by King Manuel I. After he assisted with the creation of a strategy for the Portuguese efforts in the east, King Manuel entrusted him with the command of a squadron of five vessels in the fleet of sixteen sailing for India in early 1506, headed by Tristão da Cunha. Their aim was to conquer Socotra and build a fortress there, hoping to close the trade in the Red Sea. Afonso went as "chief-captain for the Coast of Arabia", sailing under da Cunha's orders until reaching Mozambique. He carried a sealed letter with a secret mission ordered by the King: after fulfilling the first mission, he was to replace the first viceroy of India, Francisco de Almeida, whose term ended two years later. Before departing, he legitimized a natural son born in 1500, and made his will. The fleet left Lisbon on 6 April 1506. Afonso piloted his ship himself, having lost his appointed pilot on departure. In the Mozambique Channel, they rescued Captain João da Nova, who had encountered difficulties on his return from India; da Nova and his ship, the "Frol de la mar", joined da Cunha's fleet. From Malindi, da Cunha sent envoys to Ethiopia, which at the time was thought to be closer than it actually was. These included the priests João Gomes and João Sanches and the Tunisian Sid Mohammed, who, having failed to cross the region, headed for Socotra; from there, Afonso managed to land them in Filuk. 
After successful attacks on Arab cities on the east African coast, they conquered Socotra and built a fortress at Suq, hoping to establish a base to stop the Red Sea commerce to the Indian Ocean. However, Socotra was abandoned four years later, as it was not advantageous as a base. At Socotra they parted ways: Tristão da Cunha sailed for India, where he would relieve the Portuguese besieged at Cannanore, while Afonso took seven ships and 500 men to Ormuz in the Persian Gulf, one of the chief eastern centers of commerce. On his way he conquered the cities of Curiati (Kuryat), Muscat in July 1507, and Khor Fakkan, accepting the submission of the cities of Kalhat and Sohar. He arrived at Ormuz on 25 September and soon captured the city, which agreed to become a tributary state of the Portuguese king. Ormuz was then a tributary state of Shah Ismail of Persia. In a famous episode, shortly after its conquest Albuquerque was confronted by Persian envoys, who demanded the payment of the due tribute from him instead. He ordered them to be given a stock of cannonballs, arrows and weapons, retorting that "such was the currency struck in Portugal to pay the tribute demanded from the dominions of King Manuel". According to Brás de Albuquerque, it was Shah Ismail who coined the term "Lion of the Seas", addressing Albuquerque as such. Afonso began building the Fort of Our Lady of Victory (later renamed Fort of Our Lady of the Conception), engaging his men of all ranks in the work. However, some of his officers revolted against the heavy work and climate and, claiming that Afonso was exceeding his orders, departed for India. With the fleet reduced to two ships and left without supplies, he was unable to maintain his position for long. Forced to abandon Ormuz in January 1508, he raided coastal villages to resupply the settlement of Socotra, returned to Ormuz, and then headed to India. 
Afonso arrived at Cannanore on the Malabar coast in December 1508, where he opened before the viceroy, Dom Francisco de Almeida, the sealed letter he had received from the King, which named him governor to succeed Almeida. The viceroy, supported by the officers who had abandoned Afonso at Ormuz, had a matching royal order but declined to yield, protesting that his term ended only in January and stating his intention to avenge his son's death by fighting the Mamluk fleet of Mirocem, refusing Afonso's offer to fight the Mamluks in his stead. Afonso avoided confrontation, which could have led to civil war, and moved to Kochi, India, to await further instruction from the King, maintaining his entourage himself. He was described by Fernão Lopes de Castanheda as patiently enduring open opposition from the group that had gathered around Almeida, with whom he kept formal contact. Increasingly isolated, he wrote to Diogo Lopes de Sequeira, who arrived in India with a new fleet, but was ignored, as Sequeira joined the Viceroy. At the same time, Afonso refused approaches from opponents of the Viceroy, who encouraged him to seize power. On 3 February 1509, Almeida fought the naval Battle of Diu against a joint fleet of Mamluks, Ottomans, the Zamorin of Calicut, and the Sultan of Gujarat, regarding it as personal revenge for the death of his son. His victory was decisive: the Ottomans and Mamluks abandoned the Indian Ocean, easing the way for Portuguese rule there for the next century. In August, after a petition from Afonso's former officers, supported by Diogo Lopes de Sequeira, claiming he was unfit to govern, Afonso was sent in custody to St. Angelo Fort in Cannanore. There he remained under what he considered to be imprisonment. In September 1509, Sequeira tried to establish contact with the Sultan of Malacca but failed, leaving behind 19 Portuguese prisoners. 
Afonso was released after three months' confinement, on the arrival at Cannanore of the Marshal of Portugal with a large fleet. The Marshal, the most important Portuguese noble ever to visit India, brought an armada of fifteen ships and 3,000 men sent by the King to defend Afonso's rights and to take Calicut. On 4 November 1509, Afonso became the second Governor of the State of India, a position he would hold until his death. Almeida having returned home in 1510, Afonso speedily showed the energy and determination of his character. He intended to dominate the Muslim world and control the Spice trade. Initially, King Manuel I and his council in Lisbon had tried to distribute the power, outlining three areas of jurisdiction in the Indian Ocean. In 1509, the nobleman Diogo Lopes de Sequeira was sent with a fleet to Southeast Asia to seek an agreement with Sultan Mahmud Shah of Malacca, but failed and returned to Portugal. The region between the Cape of Good Hope and Gujarat was given to Jorge de Aguiar. He was succeeded by Duarte de Lemos, who, however, left for Cochin and then for Portugal, leaving his fleet to Afonso. In January 1510, obeying orders from the King and aware of the absence of the Zamorin, Afonso advanced on Calicut. The attack was unsuccessful, as Marshal Fernando Coutinho, fascinated by the city's richness, ventured into the inner city against instructions and was ambushed. During the rescue, Afonso was shot in the chest and had to retreat, barely escaping with his life; Coutinho was killed during the escape. Soon after the failed attack, Afonso assembled a fleet of 23 ships and 1,200 men. Contemporary reports state that he wanted to fight the Egyptian Mamluk Sultanate fleet in the Red Sea or return to Ormuz. 
However, he had been informed by Timoji (a privateer in the service of the Hindu Vijayanagara Empire) that it would be easier to fight the Mamluk fleet in Goa, where it had sheltered after the Battle of Diu, and also of the illness of Sultan Yusuf Adil Shah and the war between the Deccan sultanates. He therefore relied on surprise in the capture of Goa from the Sultanate of Bijapur. He thus completed another mission, for Portugal did not want to be seen as an eternal "guest" of Kochi and had been coveting Goa as the best trading port in the region. A first assault on Goa took place from 4 March to 20 May 1510. After the initial occupation, feeling unable to hold the city given the poor condition of its fortifications, the cooling of Hindu residents' support and insubordination in his ranks following an attack by Ismail Adil Shah, Afonso refused a truce offered by the Sultan and abandoned the city in August. His fleet was scattered, and a palace revolt in Kochi hindered his recovery, so he headed to Fort Anjediva. New ships arrived from Portugal, intended for the nobleman Diogo Mendes de Vasconcelos at Malacca, who had been given a rival command of the region. Three months later, on 25 November, Afonso reappeared at Goa with a refitted fleet. Diogo Mendes de Vasconcelos was compelled to accompany him with the reinforcements for Malacca, along with about 300 Malabari reinforcements from Cannanore. In less than a day, they took Goa from Ismail Adil Shah and his Ottoman allies, who surrendered on 10 December. It is estimated that 6,000 of the 9,000 Muslim defenders of the city died, either in the fierce battle in the streets or by drowning while trying to escape. Afonso regained the support of the Hindu population, although he frustrated the initial expectations of Timoji, who aspired to become governor. 
Afonso rewarded Timoji by appointing him chief "Aguazil" of the city, an administrator and representative of the Hindu and Muslim people, as a knowledgeable interpreter of the local customs. He then made an agreement to lower the yearly tribute. In Goa, Afonso established the first Portuguese mint in the East, after Timoji's merchants had complained of the scarcity of currency, taking it as an opportunity to solidify the territorial conquest. The new coin, based on the existing local coins, showed a cross on the obverse and an armillary sphere (or "esfera"), King Manuel's badge, on the reverse. Gold cruzados or "manueis", silver "esferas" and "alf-esferas", and bronze "leais" were issued. Another mint was established at Malacca in 1511. Albuquerque founded at Goa the "Hospital Real de Goa", or Royal Hospital of Goa, by the Church of Santa Catarina. Upon hearing that the doctors were extorting the sick with excessive fees, Albuquerque summoned them, declaring, "You charge a physician's pay and don't know what disease the men who serve our lord the King suffer from. Thus, I want to teach you what it is that they die from", and put them to work building the city walls all day until nightfall before releasing them. Despite constant attacks, Goa became the center of Portuguese India, with the conquest triggering the compliance of neighbouring kingdoms: the Sultan of Gujarat and the Zamorin of Calicut sent embassies, offering alliances and local grants to fortify. Afonso then used Goa to secure the Spice trade in favor of Portugal and to sell Persian horses to Vijayanagara and Hindu princes in return for their assistance. Afonso explained to his armies why the Portuguese wanted to capture Malacca. In February 1511, through a friendly Hindu merchant, Nina Chatu, Afonso received a letter from Rui de Araújo, one of the nineteen Portuguese held at Malacca since 1509. 
The letter urged moving forward with the largest possible fleet to demand the prisoners' release, and gave details of the fortifications. Afonso showed it to Diogo Mendes de Vasconcelos as an argument for advancing with a joint fleet. In April 1511, after fortifying Goa, he gathered a force of about 900 Portuguese, 200 Hindu mercenaries and about eighteen ships. He then sailed to Malacca against orders and despite the protest of Diogo Mendes, who claimed command of the expedition. Afonso eventually centralized the Portuguese government in the Indian Ocean. After the Malaccan conquest, he wrote a letter to the King to explain his disagreement with Diogo Mendes, suggesting that further divisions could be harmful to the Portuguese in India. Under his command was Ferdinand Magellan, who had participated in the failed embassy of Diogo Lopes de Sequeira in 1509. After a false start towards the Red Sea, they sailed to the Strait of Malacca. Malacca was the richest city the Portuguese had yet tried to take, and a focal point in the trade network where Malay traders met Gujarati, Chinese, Japanese, Javanese, Bengali, Persian and Arab traders, among others; it was described by Tomé Pires as being of invaluable richness. Despite its wealth, it was mostly a wooden-built city with few masonry buildings, but it was defended by a mercenary force estimated at 20,000 men and more than 2,000 pieces of artillery. Its greatest weakness was the unpopularity of the government of Sultan Mahmud Shah, who favoured Muslims, arousing dissatisfaction among other merchants. Afonso made a bold approach to the city, his ships decorated with banners, firing cannon volleys. He declared himself lord of all the navigation, demanded that the Sultan release the prisoners and pay for damages, and demanded consent to build a fortified trading post. The Sultan eventually freed the prisoners, but was unimpressed by the small Portuguese contingent. Afonso then burned some ships at the port and four coastal buildings as a demonstration. 
The city being divided by the Malacca River, the connecting bridge was a strategic point, so at dawn on 25 July the Portuguese landed and fought a tough battle, facing poisoned arrows and taking the bridge in the evening. After fruitlessly waiting for the Sultan's reaction, they returned to the ships and prepared a junk (offered by Chinese merchants), filling it with men, artillery and sandbags. Commanded by António de Abreu, it sailed upriver at high tide to the bridge. By the following day, all the men had landed. After a fierce fight, during which the Sultan appeared with an army of war elephants, the defenders were dispersed and the Sultan fled. Afonso waited for the reaction of the Sultan. Merchants approached, asking for Portuguese protection. They were given banners to mark their premises, a sign that they would not be looted. On 15 August, the Portuguese attacked again, but the Sultan had fled the city. Under strict orders, they looted the city, but respected the banners. Afonso prepared Malacca's defenses against a Malay counterattack, building a fortress, assigning his men to shifts and using stones from the mosque and the cemetery. Despite the delays caused by heat and malaria, it was completed in November 1511; its surviving gate is now known as "A Famosa" ('the famous'). It was possibly then that Afonso had a large stone engraved with the names of the participants in the conquest. To quell disagreements over the order of the names, he had it set facing the wall, with the single inscription "Lapidem quem reprobaverunt aedificantes" (Latin for "The stone the builders rejected", from David's prophecy, Psalm 118:22–23) on the front. He settled the Portuguese administration, reappointing Rui de Araújo as factor, a post assigned before his 1509 arrest, and appointing the rich merchant Nina Chatu to replace the previous "bendahara", the representative of the "Kafir" people and an adviser. 
Besides assisting in the governance of the city and with the first Portuguese coinage, Nina Chatu provided the junks for several diplomatic missions. Meanwhile, Afonso arrested and had executed the powerful Javanese merchant Utimuti Raja who, after being appointed to a position in the Portuguese administration as representative of the Javanese population, had maintained contacts with the exiled royal family. Afonso also arranged for the shipping of many Órfãs d'El-Rei to Portuguese Malacca. On 20 November 1511, Afonso sailed from Malacca to the coast of Malabar on the old carrack "Flor de la Mar", which had served to support the conquest of Malacca. Despite its unsound condition, he used it to transport the treasure amassed in the conquest, given its large capacity. He wanted to give the court of King Manuel a show of Malaccan treasures; the cargo also included the gifts from the Kingdom of Siam (Thailand) to the King of Portugal and all of Afonso's own fortune. On the voyage the "Flor de la Mar" was wrecked in a storm, and Afonso barely escaped drowning. As most Muslim and Gujarati merchants had fled the city, Afonso invested in diplomatic efforts, demonstrating generosity to Southeast Asian merchants, like the Chinese, to encourage good relations with the Portuguese. Trade and diplomatic missions were sent to continental kingdoms: Rui Nunes da Cunha was sent to Pegu (Burma), from where King Binyaram sent back a friendly emissary to Kochi in 1514; in Sumatra, the kings of Kampar and Indragiri sent emissaries to Afonso accepting the new power, as vassal states of Malacca. Knowing of Siamese ambitions over Malacca, Afonso sent Duarte Fernandes on a diplomatic mission to the Kingdom of Siam (Thailand), from which he returned in a Chinese junk. Fernandes was one of the Portuguese who had been arrested in Malacca, and had gathered knowledge about the culture of the region. 
There he was the first European to arrive, establishing amicable relations between the kingdom of Portugal and the court of the King of Siam, Ramathibodi II, and returning with a Siamese envoy bearing gifts and letters to Afonso and the King of Portugal. In November, after having secured Malacca and learned the location of the then-secret "spice islands", Afonso sent three ships to find them, led by the trusted António de Abreu with deputy commander Francisco Serrão. Malay sailors were recruited to guide them through Java, the Lesser Sunda Islands and Ambon Island to the Banda Islands, where they arrived in early 1512. There they remained for a month, buying and filling their ships with nutmeg and cloves. António de Abreu then sailed to Amboina, whilst Serrão sailed towards the Moluccas, but was shipwrecked near Seram. Sultan Abu Lais of Ternate heard of their stranding and, seeing a chance to ally himself with a powerful foreign nation, brought them to Ternate in 1512, where they were permitted to build a fort on the island, completed in 1522. In early 1513, Jorge Álvares, sailing on a mission under Afonso's orders, was allowed to land at Lintin Island, in the Pearl River Delta in southern China. Soon after, Afonso sent Rafael Perestrello to southern China, seeking trade relations with the Ming dynasty. In ships from Portuguese Malacca, Rafael sailed to Canton (Guangzhou) in 1513, and again from 1515 to 1516, to trade with Chinese merchants. These ventures, along with those of Tomé Pires and Fernão Pires de Andrade, were the first direct European diplomatic and commercial ties with China. 
However, after the death of the Chinese Zhengde Emperor on 19 April 1521, conservative factions at court seeking to limit eunuch influence rejected the new Portuguese embassy, fought sea battles with the Portuguese around Tuen Mun, and Tomé was forced to write letters to Malacca stating that he and the other ambassadors would not be released from prison in China until the Portuguese relinquished their control of Malacca and returned it to the deposed Sultan of Malacca (who had previously been a Ming tributary vassal). Nonetheless, Portuguese relations with China were normalized again by the 1540s, and in 1557 a permanent Portuguese base at Macau in southern China was established with the consent of the Ming court. Afonso returned from Malacca to Cochin, but could not sail to Goa as it faced a serious revolt headed by the forces of Ismael Adil Shah, the Sultan of Bijapur, commanded by Rasul Khan and his countrymen. During Afonso's absence at Malacca, Portuguese who opposed the taking of Goa had given up on holding it, even writing to the King that it would be best to let it go. Held up by the monsoon and with few forces available, Afonso had to wait for the arrival of reinforcement fleets headed by his nephew D. Garcia de Noronha and by Jorge de Mello Pereira. While at Cochin, Albuquerque started a school. In a private letter to King Manuel I, he states that he had found a chest full of books with which to teach the children of married Portuguese settlers ("casados") and Christian converts to read and write; of these children, according to Albuquerque, there were about a hundred in his time, "all very sharp and easily learn what they are taught". On 10 September 1512, Afonso sailed from Cochin to Goa with fourteen ships carrying 1,700 soldiers. Determined to recapture the fortress, he ordered trenches dug and a wall breached. But on the day of the planned final assault, Rasul Khan surrendered. 
Afonso demanded that the fort be handed over with its artillery, ammunition and horses, and that the deserters be given up. Some had joined Rasul Khan when the Portuguese were forced to flee Goa in May 1510, others during the recent siege. Rasul Khan consented, on condition that their lives be spared. Afonso agreed, and Rasul Khan left Goa. He did spare the lives of the deserters, but had them horribly mutilated. One such renegade was Fernão Lopes, bound for Portugal in custody, who escaped at the island of Saint Helena and led a 'Robinson Crusoe' life for many years. After such measures the town became the most prosperous Portuguese settlement in India. In December 1512 an envoy from Ethiopia arrived at Goa. Mateus was sent by the regent queen Eleni, following the arrival of the Portuguese from Socotra in 1507, as an ambassador to the king of Portugal in search of a coalition to help face growing Muslim influence. He was received in Goa with great honour by Afonso, as a long-sought "Prester John" envoy. His arrival was announced by King Manuel to Pope Leo X in 1513. Although Mateus faced the distrust of Afonso's rivals, who tried to prove he was an impostor or a Muslim spy, Afonso sent him to Portugal. The King is described as having wept with joy at their report. In February 1513, while Mateus was in Portugal, Afonso sailed to the Red Sea with a force of about 1,000 Portuguese and 400 Malabaris. He was under orders to secure that channel for Portugal. Socotra had proved ineffective for controlling the Red Sea entrance and was abandoned, and Afonso's hint that Massawa could be a good Portuguese base might have been influenced by Mateus' reports. Knowing that the Mamluks were preparing a second fleet at Suez, he wanted to advance before reinforcements arrived in Aden, and accordingly laid siege to the city. Aden was a fortified city; although he had scaling ladders, they broke during the chaotic attack. After half a day of fierce battle Afonso was forced to retreat. 
He cruised the Red Sea inside the Bab al-Mandab, with the first European fleet to have sailed this route. He attempted to reach Jeddah, but the winds were unfavourable and so he sheltered at Kamaran island in May, until sickness among the men and lack of fresh water forced him to retreat. In August 1513, after a second attempt to reach Aden, he returned to India with no substantial results. In order to destroy the power of Egypt, he wrote to King Manuel of the idea of diverting the course of the Nile river to render the whole country barren. Perhaps most tellingly, he intended to steal the body of the Islamic prophet, Muhammad, and hold it for ransom until all Muslims had left the Holy Land. Although Albuquerque's expedition failed to reach Suez, such an incursion into the Red Sea by a Christian fleet, the first in history, stunned the Muslim world, and panic spread in Cairo. Albuquerque achieved during his term a favourable end to hostilities between the Portuguese and the Zamorin of Calicut, which had lasted ever since the massacre of the Portuguese in Calicut in 1502. As naval trade faltered and vassals defected, with no foreseeable solution to the conflict with the Portuguese, the court of the Zamorin fell to in-fighting. The ruling Zamorin was assassinated and replaced by a rival, at the instigation of Albuquerque. Thus, peace talks could commence: the Portuguese were allowed to build a fortress in Calicut itself, and acquired rights to obtain as much pepper and ginger as they wished, at stipulated prices, and half the customs of Calicut as yearly tribute. Construction of the fortress began immediately, under the direction of chief architect Tomás Fernandes. With peace concluded, in 1514 Afonso devoted himself to governing Goa and receiving embassies from Indian governors, strengthening the city and encouraging marriages of Portuguese men and local women. At that time, Portuguese women were barred from traveling overseas. 
In 1511, under a policy which Afonso promulgated, the Portuguese government encouraged their explorers to marry local women. To promote settlement, the King of Portugal granted freeman status and exemption from Crown taxes to Portuguese men (known as "casados", or "married men") who ventured overseas and married local women. With Afonso's encouragement, mixed marriages flourished. He appointed local people to positions in the Portuguese administration and did not interfere with local traditions (except "sati", the practice of immolating widows, which he banned). In March 1514 King Manuel sent to Pope Leo X a huge and exotic embassy led by Tristão da Cunha, who toured the streets of Rome in an extravagant procession of animals from the colonies and wealth from the Indies. Afonso's reputation reached its peak, laying the foundations of the Portuguese Empire in the East. In early 1514, Afonso sent ambassadors to Gujarat's Sultan Muzaffar Shah II, ruler of Cambay, to seek permission to build a fort on Diu, India. The mission returned without an agreement, but diplomatic gifts were exchanged, including an Indian rhinoceros. Afonso sent the gift, named "ganda", and its Indian keeper, Ocem, to King Manuel. In late 1515, Manuel sent the animal as a gift to Pope Leo X; it became famous as the subject of Dürer's Rhinoceros. Dürer never saw the actual rhinoceros, which was the first living example seen in Europe since Roman times. In 1513, at Cannanore, Afonso was visited by a Persian ambassador from Shah Ismail I, who had sent ambassadors to Gujarat, Ormuz and Bijapur. The shah's ambassador to Bijapur invited Afonso to send back an envoy to Persia. Miguel Ferreira was sent via Ormuz to Tabriz, where he had several interviews with the shah about common goals of defeating the Mamluk sultan. At the same time, Albuquerque decided to conclude the effective conquest of Hormuz. 
He had learned that after the Portuguese retreat in 1507, a young king was reigning under the influence of a powerful Persian vizier, Reis Hamed, whom the king greatly feared. At Ormuz in March 1515, Afonso met the king and asked that the vizier be present. He then had the vizier immediately stabbed and killed by his entourage, thus "freeing" the dominated king, so that the island in the Persian Gulf yielded to him without resistance and remained a vassal state of the Portuguese Empire. Ormuz itself would not be Persian territory for another century, until a British-Persian alliance finally expelled the Portuguese in 1622. At Ormuz, Afonso met with Miguel Ferreira, returning with rich presents and an ambassador, carrying a letter from the Persian potentate Shah Ismail inviting Afonso to become a leading lord in Persia. There he remained, engaging in diplomatic efforts, receiving envoys and overseeing the construction of the new fortress, while becoming increasingly ill. His illness was reported as early as September 1515. In November 1515, he embarked back to Goa, a journey he would not live to complete. Afonso's life ended on a bitter and ignominious note. At this time, his political enemies at the Portuguese court were planning his downfall. They had lost no opportunity in stirring up the jealousy of King Manuel against him, insinuating that Afonso intended to usurp power in Portuguese India. While on his return voyage from Ormuz in the Persian Gulf, near the harbor of Chaul, he received news of a Portuguese fleet arriving from Europe, bearing dispatches announcing that he was to be replaced by his personal foe, Lopo Soares de Albergaria. Realizing that his enemies had successfully plotted against him, profoundly disillusioned, he voiced his bitterness: "Grave must be my sins before the King, for I am in ill favor with the King for love of the men, and with the men for love of the King." 
Feeling himself near death, he donned the surcoat of the Order of Santiago, of which he was a knight, drew up his will, appointed the captain and senior officials of Ormuz, and organized a final council with his captains to decide the main matters affecting the Portuguese State of India. He wrote a brief letter to King Manuel, asking him to confer on his natural son "all of the high honors and rewards" that were justly due to Afonso. He wrote in dignified and affectionate terms, assuring Manuel of his loyalty. On 16 December 1515, Afonso de Albuquerque died within sight of Goa. When his death became known, "great wailing arose" in the city, and many took to the streets to witness his body carried on a chair by his main captains, in a procession lit by torches amidst the crowd. Afonso's body was buried in Goa, according to his will, in the Church of Nossa Senhora da Serra (Our Lady of the Hill), which he had had built in 1513 to thank the Madonna for his escape from Kamaran island. That night, even the Hindu natives of Goa gathered to mourn him alongside the Portuguese, "for he was much loved by all", and it was said that "God had need of him for war, and for that he had taken him". In Portugal, King Manuel's zigzagging policies continued, still hampered by the slowness of medieval communication between Lisbon and India and unaware that Afonso was dead. Hearing rumours that the Mamluk Sultan of Egypt was preparing a magnificent army at Suez to prevent the conquest of Ormuz, he repented of having replaced Afonso, and in March 1516 urgently wrote to Albergaria to return the command of all operations to Afonso and provide him with resources to face the Egyptian threat. He organized a new Portuguese navy in Asia, with orders that Afonso, if he was still in India, be made commander-in-chief against the Sultan of Cairo's armies. 
Manuel would afterwards learn that Afonso had died many months earlier, and that his reversed decision had been delivered many months too late. After 51 years, in 1566, Afonso's body was moved to the Nossa Senhora da Graça church in Lisbon, which was ruined in the 1755 Great Lisbon earthquake and later rebuilt. King Manuel I of Portugal was belatedly convinced of Afonso's loyalty, and endeavoured to atone for his lack of confidence in Afonso by heaping honours upon his son, Brás de Albuquerque (1500–1580), whom he renamed "Afonso" in memory of the father. Afonso de Albuquerque was a prolific writer, having sent numerous letters to the king during his governorship, covering topics from minor issues to major strategies. In 1557 his son published a collection of his letters under the title "Commentarios do Grande Affonso d'Alboquerque" (a clear reference to Caesar's Commentaries), which he revised and re-published in 1576. There Afonso was described as "a man of middle stature, with a long face, fresh complexion, the nose somewhat large. He was a prudent man, and a Latin scholar, and spoke in elegant phrases; his conversation and writings showed his excellent education. He was of ready words, very authoritative in his commands, very circumspect in his dealings with the Moors, and greatly feared yet greatly loved by all, a quality rarely found united in one captain. He was very valiant and favoured by fortune." In 1572, Afonso's feats were described in "The Lusiads", the main Portuguese epic poem, by Luís Vaz de Camões (Canto X, strophes 40–49). The poet praises his achievements, but has the muses frown upon his harsh rule over his men, of whom Camões was almost a contemporary. In 1934, Afonso was celebrated by Fernando Pessoa in "Mensagem", a symbolist epic. 
In the first part of this work, called "Brasão" (Coat-of-Arms), he relates Portuguese historical protagonists to each of the fields in the Portuguese coat-of-arms, Afonso being one of the wings of the griffin headed by Henry the Navigator, the other wing being King John II. A variety of mango that he used to bring on his journeys to India has been named in his honour. Numerous homages have been paid to Afonso; he is featured in the Padrão dos Descobrimentos monument; there is a square carrying his name in the Portuguese capital of Lisbon, which also features a bronze statue; and two Portuguese Navy ships have been named in his honour: the sloop NRP "Afonso de Albuquerque" (1884) and the warship NRP "Afonso de Albuquerque", the latter belonging to a sloop class named Albuquerque. The fabled Spice Islands had captured the imagination of Europe since ancient times. In the 2nd century AD, the Malay Peninsula was known to the Greek geographer Ptolemy, who labeled it "Aurea Chersonesus" and recorded the belief that the fabled area held gold in abundance. Even Indian traders referred to the region as a "Land of Gold" and made regular visits to Malaya in search of the precious metal, tin and sweet-scented jungle woods. But neither Ptolemy, nor Rome, nor Alexander was able to see the fabled regions of the East. Afonso de Albuquerque became the first European to reach the Spice Islands. Upon reaching the Malay Archipelago, he proceeded in 1511 to conquer Malacca, then commissioned an expedition under the command of António de Abreu and vice-commander Francisco Serrão (the latter a cousin of Magellan) to further explore the extremities of the region in eastern Indonesia. As a result of these voyages of exploration, the Portuguese became the first Europeans to reach the fabled Spice Islands in the Indies, in addition to discovering the sea routes to them. Afonso found what had evaded Columbus' grasp – the wealth of the Orient. 
His discoveries did not go unnoticed, and it took little time for Magellan to arrive in the same region a few years later and discover the Philippines for Spain, a rivalry eventually settled by the Treaty of Zaragoza. Afonso's operations also sent a voyage pushing further south, which made the European discovery of Timor, in the far south of the region, and of Papua New Guinea in 1512. This was followed up by another Portuguese, Jorge de Menezes, who in 1526 named Papua New Guinea the "Island of the Papua". Through Afonso's diplomatic activities, Portugal opened the sea route between Europe and China. As early as 1513, Jorge de Albuquerque, a Portuguese commanding officer in Malacca, sent his subordinate Jorge Álvares to sail to China on a ship loaded with pepper from Sumatra. After sailing across the sea, Jorge Álvares and his crew dropped anchor at Tamao, an island located at the mouth of the Pearl River. They were the first Portuguese to set foot in the territory known as China, the mythical "Middle Kingdom", where they erected a stone padrão. Álvares was the first European to reach Chinese land by sea, and the first European to enter Hong Kong. In 1514 Afonso de Albuquerque, the Viceroy of the Estado da India, dispatched Rafael Perestrello to sail to China in order to pioneer European trade relations with the Chinese nation. Rafael Perestrello was quoted as saying, "being a very good and honest people, the Chinese hope to make friends with the Portuguese." In spite of initial harmony and excitement between the two empires, difficulties soon arose. Portugal's efforts in establishing lasting ties with China did pay off in the long run; the Portuguese colonized Macau, establishing the first permanent European settlement on Chinese soil, which served as a colonial base in southern China, and the two empires maintained an exchange in culture and trade for nearly 500 years. 
This Portuguese presence in the East (1515–1999), among the longest-lived in colonial history and begun by Afonso de Albuquerque nearly five centuries earlier, ended when Portugal ceded the government of Macau to China. Afonso de Albuquerque also pioneered trade relations with Thailand, and was thus the first recorded European to make contact with Thailand. Afonso de Albuquerque had a natural son with an unrecorded woman. He legitimized the boy in February 1506. Before his death, he asked King Manuel I to leave all his wealth to the son and to oversee the son's education. When Afonso died, Manuel I renamed the child "Afonso" in his father's memory. Brás Afonso de Albuquerque, or Braz in the old spelling, was born in 1500 and died in 1580.
https://en.wikipedia.org/wiki?curid=1576
Alcaeus of Mytilene Alcaeus of Mytilene ("Alkaios ho Mutilēnaios"; – BC) was a lyric poet from the Greek island of Lesbos who is credited with inventing the Alcaic stanza. He was included in the canonical list of nine lyric poets by the scholars of Hellenistic Alexandria. He was a contemporary and an alleged lover of Sappho, with whom he may have exchanged poems. He was born into the aristocratic governing class of Mytilene, the main city of Lesbos, where he was involved in political disputes and feuds. The broad outlines of the poet's life are well known. He was born into the aristocratic, warrior class that dominated Mytilene, the strongest city-state on the island of Lesbos and, by the end of the seventh century BC, the most influential of all the North Aegean Greek cities, with a strong navy and colonies securing its trade-routes in the Hellespont. The city had long been ruled by kings born to the Penthilid clan but, during the poet's life, the Penthilids were a spent force and rival aristocrats and their factions contended with each other for supreme power. Alcaeus and his older brothers were passionately involved in the struggle but experienced little success. Their political adventures can be understood in terms of three tyrants who came and went in succession. Sometime before 600 BC, Mytilene fought Athens for control of Sigeion, and Alcaeus was old enough to participate in the fighting. According to the historian Herodotus, the poet threw away his shield to make good his escape from the victorious Athenians, then celebrated the occasion in a poem that he later sent to his friend, Melanippus. It is thought that Alcaeus travelled widely during his years in exile, including at least one visit to Egypt. His older brother, Antimenidas, appears to have served as a mercenary in the army of Nebuchadnezzar II and probably took part in the conquest of Askelon. 
Alcaeus wrote verses in celebration of Antimenidas' return, including mention of his valour in slaying the larger opponent (frag. 350), and he proudly describes the military hardware that adorned their family home (frag. 357). Alcaeus was a contemporary and a countryman of Sappho and, since both poets composed for the entertainment of Mytilenean friends, they had many opportunities to associate with each other on a quite regular basis, such as at the "Kallisteia", an annual festival celebrating the island's federation under Mytilene, held at the 'Messon' (referred to as "temenos" in frs. 129 and 130), where Sappho performed publicly with female choirs. Alcaeus' reference to Sappho in terms more typical of a divinity, as "holy/pure, honey-smiling Sappho" (fr. 384), may owe its inspiration to her performances at the festival. The Lesbian or Aeolic school of poetry "reached in the songs of Sappho and Alcaeus that high point of brilliancy to which it never afterwards approached" and it was assumed by later Greek critics and during the early centuries of the Christian era that the two poets were in fact lovers, a theme which became a favourite subject in art (as in the urn pictured above). The poetic works of Alcaeus were collected into ten books, with elaborate commentaries, by the Alexandrian scholars Aristophanes of Byzantium and Aristarchus of Samothrace sometime in the 3rd century BC, and yet his verses today exist only in fragmentary form, varying in size from mere phrases, such as "wine, window into a man" (fr. 333), to entire groups of verses and stanzas, such as those quoted below (fr. 346). Alexandrian scholars numbered him in their canonic nine (one lyric poet per Muse). Among these, Pindar was held by many ancient critics to be pre-eminent, but some gave precedence to Alcaeus instead. 
The canonic nine are traditionally divided into two groups, with Alcaeus, Sappho and Anacreon being 'monodists' or 'solo-singers'. The other six of the canonic nine composed verses for public occasions, performed by choruses and professional singers and typically featuring complex metrical arrangements that were never reproduced in other verses. However, this division into two groups is considered by some modern scholars to be too simplistic, and often it is practically impossible to know whether a lyric composition was sung or recited, or whether or not it was accompanied by musical instruments and dance. Even the private reflections of Alcaeus, ostensibly sung at dinner parties, still retain a public function. Critics often seek to understand Alcaeus in comparison with Sappho. The Roman poet Horace also compared the two, describing Alcaeus as "more full-throatedly singing" (see Horace's tribute below). Alcaeus himself seems to underscore the difference between his own 'down-to-earth' style and Sappho's more 'celestial' qualities when he describes her almost as a goddess (as cited above), and yet it has been argued that both poets were concerned with a balance between the divine and the profane, each emphasising different elements in that balance. Dionysius of Halicarnassus exhorts us to "Observe in Alcaeus the sublimity, brevity and sweetness coupled with stern power, his splendid figures, and his clearness which was unimpaired by the dialect; and above all mark his manner of expressing his sentiments on public affairs," while Quintilian, after commending Alcaeus for his excellence "in that part of his works where he inveighs against tyrants and contributes to good morals; in his language he is concise, exalted, careful and often like an orator," goes on to add: "but he descended into wantonness and amours, though better fitted for higher things." The works of Alcaeus are conventionally grouped according to five genres. 
The following verses demonstrate some key characteristics of the Alcaic style (square brackets indicate uncertainties in the ancient text): The Greek meter here is relatively simple, comprising the Greater Asclepiad, adroitly used to convey, for example, the rhythm of jostling cups. The language of the poem is typically direct and concise and comprises short sentences — the first line is in fact a model of condensed meaning, comprising an exhortation ("Let's drink!"), a rhetorical question ("Why are we waiting for the lamps?") and a justifying statement ("Only an inch of daylight left"). The meaning is clear and uncomplicated, the subject is drawn from personal experience, and there is an absence of poetic ornament, such as simile or metaphor. Like many of his poems (e.g., frs. 38, 326, 338, 347, 350), it begins with a verb (in this case "Let's drink!") and it includes a proverbial expression ("Only an inch of daylight left"), though it is possible that he coined it himself. Alcaeus rarely used metaphor or simile, and yet he had a fondness for the allegory of the storm-tossed ship of state. The following fragment of a hymn to Castor and Polydeuces (the Dioscuri) is possibly another example of this, though some scholars interpret it instead as a prayer for a safe voyage.

Hither now to me from your isle of Pelops,
You powerful children of Zeus and Leda,
Showing yourselves kindly by nature, Castor
And Polydeuces!

Travelling abroad on swift-footed horses,
Over the wide earth, over all the ocean,
How easily you bring deliverance from
Death's gelid rigor,

Landing on tall ships with a sudden, great bound,
A far-away light up the forestays running,
Bringing radiance to a ship in trouble,
Sailed in the darkness!

The poem was written in Sapphic stanzas, a verse form popularly associated with his compatriot, Sappho, but in which he too excelled, here paraphrased in English to suggest the same rhythms. 
There were probably another three stanzas in the original poem, but only nine letters of them remain. The 'far-away light' is a reference to St Elmo's Fire, an electrical discharge supposed by ancient Greek mariners to be an epiphany of the Dioscuri, but the meaning of the line was obscured by gaps in the papyrus until reconstructed by a modern scholar — such reconstructions are typical of the extant poetry (see Scholars, fragments and sources below). This poem begins not with a verb but with an adverb (Δευτέ), yet it still communicates a sense of action. He probably performed his verses at drinking parties for friends and political allies — men for whom loyalty was essential, particularly in such troubled times. The Roman poet Horace modelled his own lyrical compositions on those of Alcaeus, rendering the Lesbian poet's verse-forms, including 'Alcaic' and 'Sapphic' stanzas, into concise Latin — an achievement he celebrates in his third book of odes. In his second book, in an ode composed in Alcaic stanzas on the subject of an almost fatal accident he had on his farm, he imagines meeting Alcaeus and Sappho in Hades. Ovid also compared Alcaeus to Sappho in the Letters of the Heroines, where Sappho is imagined to speak of him. The story of Alcaeus is partly the story of the scholars who rescued his work from oblivion. His verses have not come down to us through a manuscript tradition — generations of scribes copying an author's collected works, a tradition that delivered, for example, four entire books of Pindar's odes intact into the modern age — but haphazardly, in quotes from ancient scholars and commentators whose own works have chanced to survive, and in the tattered remnants of papyri uncovered from an ancient rubbish pile at Oxyrhynchus and other locations in Egypt: sources that modern scholars have studied and correlated exhaustively, adding little by little to the world's store of poetic fragments. Ancient scholars quoted Alcaeus in support of various arguments. 
Thus, for example, Heraclitus 'the Allegorist' quoted fr. 326 and part of fr. 6, about ships in a storm, in his study of Homer's use of allegory. The hymn to Hermes, fr. 308(b), was quoted by the grammarian Hephaestion, and both he and the rhetorician Libanius quoted the first two lines of fr. 350, celebrating the return from Babylon of Alcaeus' brother. The rest of fr. 350 was paraphrased in prose by the historian and geographer Strabo. Many fragments were supplied in quotes by Athenaeus, principally on the subject of wine-drinking, but fr. 333, "wine, window into a man", was quoted much later by the Byzantine grammarian John Tzetzes. The first 'modern' publication of Alcaeus' verses appeared in a Greek and Latin edition of fragments collected from the canonic nine lyrical poets by Michael Neander, published at Basle in 1556. This was followed by another edition of the nine poets, collected by Henricus Stephanus and published in Paris in 1560. Fulvius Ursinus compiled a fuller collection of Alcaic fragments, including a commentary, which was published at Antwerp in 1568. The first separate edition of Alcaeus was by Christian David Jani, published at Halle in 1780. The next separate edition was by August Matthiae, Leipzig 1827. Some of the fragments quoted by ancient scholars were pieced together by scholars in the nineteenth century. Thus, for example, two separate quotes by Athenaeus were united by Theodor Bergk to form fr. 362. Three separate sources were combined to form fr. 350, as mentioned above, including a prose paraphrase from Strabo that first needed to be restored to its original meter, a synthesis achieved by the united efforts of Otto Hoffmann, Karl Otfried Müller and Franz Heinrich Ludolf Ahrens. The discovery of the Oxyrhynchus papyri towards the end of the nineteenth century dramatically increased the scope of scholarly research. In fact, eight important fragments have now been compiled from papyri — frs. 
9, 38A, 42, 45, 34, 129, 130 and most recently S262. These fragments typically feature lacunae or gaps that scholars fill with 'educated guesses', including for example a "brilliant supplement" by Maurice Bowra in fr. 34, a hymn to the Dioscuri that includes a description of St Elmo's fire in the ship's rigging. Working with only eight letters (tr. "pró...tr...ntes"), Bowra conjured up a phrase that brilliantly develops the meaning and the euphony of the poem (tr. "próton' ontréchontes"), describing luminescence "running along the forestays".
https://en.wikipedia.org/wiki?curid=1577
Ealdred (archbishop of York) Ealdred (or Aldred; died 11 September 1069) was Abbot of Tavistock, Bishop of Worcester, and Archbishop of York in Anglo-Saxon England. He was related to a number of other ecclesiastics of the period. After becoming a monk at the monastery at Winchester, he was appointed Abbot of Tavistock Abbey in around 1027. In 1046 he was named to the Bishopric of Worcester. Ealdred, besides his episcopal duties, served Edward the Confessor, the King of England, as a diplomat and as a military leader. He worked to bring one of the king's relatives, Edward the Exile, back to England from Hungary to secure an heir for the childless king. In 1058 he undertook a pilgrimage to Jerusalem, the first bishop from England to do so. As administrator of the Diocese of Hereford, he was involved in fighting against the Welsh, suffering two defeats at the hands of raiders before securing a settlement with Gruffydd ap Llywelyn, a Welsh ruler. In 1060, Ealdred was elected to the archbishopric of York, but had difficulty in obtaining papal approval for his appointment, only managing to do so when he promised not to hold the bishoprics of York and Worcester simultaneously. He helped secure the election of Wulfstan as his successor at Worcester. During his archiepiscopate, he built and embellished churches in his diocese, and worked to improve his clergy by holding a synod which published regulations for the priesthood. Some sources state that following King Edward the Confessor's death in 1066, it was Ealdred who crowned Harold Godwinson as King of England. Ealdred supported Harold as king, but when Harold was defeated at the Battle of Hastings, Ealdred backed Edgar the Ætheling and then endorsed King William the Conqueror, the Duke of Normandy and a distant relative of King Edward's. Ealdred crowned King William on Christmas Day in 1066. 
William never quite trusted Ealdred or the other English leaders, and Ealdred had to accompany William back to Normandy in 1067, but he had returned to York by the time of his death in 1069. Ealdred supported the churches and monasteries in his diocese with gifts and building projects. Ealdred was probably born in the west of England, and could be related to Lyfing, his predecessor as bishop of Worcester. His family, from Devonshire, may have been well-to-do. Another relative was Wilstan or Wulfstan, who under Ealdred's influence became Abbot of Gloucester. Ealdred was a monk in the cathedral chapter at Winchester Cathedral before becoming abbot of Tavistock Abbey about 1027, an office he held until about 1043. Even after leaving the abbacy of Tavistock, he continued to hold two properties from the abbey until his death. No contemporary documents relating to Ealdred's time as abbot have been discovered. Ealdred was made bishop of Worcester in 1046, a position he held until his resignation in 1062. He may have acted as suffragan, or subordinate bishop, to his predecessor Lyfing before formally assuming the bishopric, as from about 1043 Ealdred witnessed as an "episcopus", or bishop, and a charter from 1045 or early 1046 names Sihtric as abbot of Tavistock. Lyfing died on 26 March 1046, and Ealdred became bishop of Worcester shortly after. However, Ealdred did not receive the other two dioceses that Lyfing had held, Crediton and Cornwall; King Edward the Confessor (reigned 1043–1066) granted these to Leofric, who combined the two sees at Crediton in 1050. Ealdred was an advisor to King Edward the Confessor, and was often involved in the royal government. He was also a military leader, and in 1046 he led an unsuccessful expedition against the Welsh. This was in retaliation for a raid led by the Welsh rulers Gruffydd ap Rhydderch, Rhys ap Rhydderch, and Gruffydd ap Llywelyn. 
Ealdred's expedition was betrayed by some Welsh soldiers who were serving with the English, and Ealdred was defeated. In 1050, Ealdred went to Rome "on the king's errand", apparently to secure papal approval to move the seat, or centre, of the bishopric of Crediton to Exeter. It may also have been to secure the release of the king from a vow to go on pilgrimage, if sources from after the Norman Conquest of England are to be believed. While in Rome, he attended a papal council, along with his fellow English bishop Herman. That same year, as Ealdred was returning to England he met Sweyn, a son of Godwin, Earl of Wessex, and probably absolved Sweyn for having abducted the abbess of Leominster Abbey in 1046. Through Ealdred's intercession, Sweyn was restored to his earldom, which he had lost after abducting the abbess and murdering his cousin Beorn Estrithson. Ealdred helped Sweyn not only because Ealdred was a supporter of Earl Godwin's family but also because Sweyn's earldom was close to his bishopric. As recently as 1049 Irish raiders had allied with Gruffydd ap Rhydderch of Gwent in raiding along the River Usk. Ealdred tried to drive off the raiders, but was again routed by the Welsh. This failure underscored Ealdred's need for a strong earl in the area to protect against raids. Normally, the bishop of Hereford would have led the defence in the absence of an Earl of Hereford, but in 1049 the incumbent, Æthelstan, was blind, so Ealdred took on the role of defender. Earl Godwin's rebellion against the king in 1051 came as a blow to Ealdred, who was a supporter of the earl and his family. Ealdred was present at the royal council at London that banished Godwin's family. Later in 1051, when he was sent to intercept Harold Godwinson and his brothers as they fled England after their father's outlawing, Ealdred "could not, or would not" capture the brothers. The banishment of Ealdred's patron came shortly after the death of Ælfric Puttoc, the Archbishop of York. 
York and Worcester had long had close ties, and the two sees had often been held in plurality, or at the same time. Ealdred probably wanted to become Archbishop of York after Ælfric's death, but his patron's eclipse led to the king appointing Cynesige, a royal chaplain, instead. In September 1052, though, Godwin returned from exile and his family was restored to power. By late 1053 Ealdred was once more in royal favour. At some point, he was alleged to have accompanied Sweyn on a pilgrimage to the Holy Land, but proof is lacking. In 1054 King Edward sent Ealdred to Germany to obtain Emperor Henry III's help in returning Edward the Exile, son of Edmund Ironside, to England. Edmund (reigned 1016) was an elder half-brother of King Edward the Confessor, and Edmund's son Edward was in Hungary with King Andrew I, having left England as an infant after his father's death and the accession of Cnut as King of England. In this mission Ealdred was somewhat successful and obtained insight into the workings of the German church during a stay of a year with Hermann II, the Archbishop of Cologne. He was also impressed with the buildings he saw, and later incorporated some of the German styles into his own constructions. The main objective of the mission, however, was to secure the return of Edward; this failed, mainly because Henry III's relations with the Hungarians were strained, and the emperor was unable or unwilling to help Ealdred. Ealdred was able to discover that Edward was alive and had a place at the Hungarian court. Although some sources state that Ealdred attended the coronation of Emperor Henry IV, this is not possible, as on the date that Henry was crowned, Ealdred was in England consecrating an abbot. Ealdred had returned to England by 1055, bringing with him a copy of the "Pontificale Romano-Germanicum", a set of liturgies. An extant copy of this work, currently manuscript Cotton Vitellius E xii, has been identified as a copy owned by Ealdred. 
It appears likely that the "Rule of Chrodegang", a continental set of ordinances for the communal life of secular canons, was introduced into England by Ealdred sometime before 1059. He probably brought it back from Germany, possibly in concert with Harold. After Ealdred's return to England he took charge of the sees of Hereford and Ramsbury. Ealdred also administered Winchcombe Abbey and Gloucester Abbey. The authors of the "Handbook of British Chronology Third Edition" say he was named bishop of Hereford in 1056, holding the see until he resigned it in 1060, but other sources say he merely administered the see while it was vacant, or that he was bishop of Hereford from 1055 to 1060. Ealdred became involved with the see of Ramsbury after its bishop Herman got into a dispute with King Edward over the movement of the seat of his bishopric to Malmesbury Abbey. Herman wished to move the seat of his see, but Edward refused permission for the move. Ealdred was a close associate of Herman's, and the historian H. R. Loyn called Herman "something of an alter ego" to Ealdred. According to the medieval chronicler John of Worcester, Ealdred was given the see of Ramsbury to administer while Herman remained outside England. Herman returned in 1058, and resumed his bishopric. There is no contemporary documentary evidence of Ealdred's administration of Ramsbury. The king again employed Ealdred as a diplomat in 1056, when he assisted earls Harold and Leofric in negotiations with the Welsh. Edward sent Ealdred to negotiate after the death in battle of Bishop Leofgar of Hereford, who had attacked Gruffydd ap Llywelyn with encouragement from the king. Leofgar's defeat and death forced Edward to sue for peace. Although details of the negotiations are lacking, Gruffydd ap Llywelyn swore loyalty to King Edward, but the oath may not have imposed any obligations on Gruffydd towards Edward. 
The exact terms of the submission are not fully known, but Gruffydd was not required to assist Edward in war nor attend Edward's court. Ealdred was rewarded with the administration of the see of Hereford, which he held until 1061, when he was appointed Archbishop of York. The diocese had suffered a serious raid from the Welsh in 1055, and during his administration Ealdred continued the rebuilding of the cathedral church as well as securing the cathedral chapter's rights. Ealdred was granted the administration so that the area might have someone with experience of the Welsh in charge. In 1058 Ealdred made a pilgrimage to Jerusalem, the first English bishop to make the journey. He travelled through Hungary, and the "Anglo-Saxon Chronicle" said that "he went to Jerusalem in such state as no-one had done before him". While in Jerusalem he made a gift of a gold chalice to the church of the Holy Sepulchre. It is possible that the reason Ealdred travelled through Hungary was to arrange the travel of Edward the Exile's family to England. Another possibility is that he wished to search for other possible heirs to King Edward in Hungary. It is not known exactly when Edward the Exile's family returned to England, whether they returned with Edward in 1057, or sometime later, so it is only a possibility that they returned with Ealdred in 1058. Very little documentary evidence is available from Ealdred's time as Bishop of Worcester. Only five leases that he signed survive, and all date from 1051 to 1053. Two further leases exist in "Hemming's Cartulary" as copies only. How the diocese of Worcester was administered when Ealdred was abroad is unclear, although it appears that Wulfstan, the prior of the cathedral chapter, performed the religious duties in the diocese. On the financial side, the "Evesham Chronicle" states that Æthelwig, who became abbot of Evesham Abbey in 1058, administered Worcester before he became abbot. 
Cynesige, the archbishop of York, died on 22 December 1060, and Ealdred was elected Archbishop of York on Christmas Day, 1060. Although a bishop was promptly appointed to Hereford, none was named to Worcester, and it appears that Ealdred intended to retain Worcester along with York, which several of his predecessors had done. There were a few reasons for this, one of which was political, as the kings of England preferred to appoint bishops from the south to the northern bishoprics, hoping to counter the northern tendency towards separatism. Another reason was that York was not a wealthy see, and Worcester was. Holding Worcester along with York allowed the archbishop sufficient revenue to support himself. In 1061 Ealdred travelled to Rome to receive the pallium, the symbol of an archbishop's authority. Journeying with him was Tostig, another son of Earl Godwin, who was now earl of Northumbria. William of Malmesbury says that Ealdred, by "amusing the simplicity of King Edward and alleging the custom of his predecessors, had acquired, more by bribery than by reason, the archbishopric of York while still holding his former see." On his arrival in Rome, however, charges of simony, or the buying of ecclesiastical office, and lack of learning were brought against him, and his elevation to York was refused by Pope Nicholas II, who also deposed him from Worcester. The story of Ealdred being deposed comes from the "Vita Edwardi", a life of Edward the Confessor, but the "Vita Wulfstani", an account of the life of Ealdred's successor at Worcester, Wulfstan, says that Nicholas refused the pallium until a promise to find a replacement for Worcester was given by Ealdred. Yet another chronicler, John of Worcester, mentions nothing of any trouble in Rome, and when discussing the appointment of Wulfstan, says that Wulfstan was elected freely and unanimously by the clergy and people. 
John of Worcester also claims that at Wulfstan's consecration, Stigand, the archbishop of Canterbury, extracted a promise from Ealdred that neither he nor his successors would lay claim to any jurisdiction over the diocese of Worcester. Given that John of Worcester wrote his chronicle after the eruption of the Canterbury–York supremacy struggle, the story of Ealdred renouncing any claims to Worcester needs to be considered suspect. For whatever reason, Ealdred gave up the see of Worcester at Easter 1062, when papal legates arrived in England to hold a council and to make sure that he relinquished Worcester. Ealdred was succeeded by Wulfstan, chosen by Ealdred, but John of Worcester relates that Ealdred had a hard time deciding between Wulfstan and Æthelwig. The legates had urged the selection of Wulfstan because of his saintliness. Because the position of Stigand, the archbishop of Canterbury, was irregular, Wulfstan sought and received consecration as a bishop from Ealdred. Normally, Wulfstan would have gone to the archbishop of Canterbury, as the see of Worcester was within Canterbury's province. Although Ealdred gave up the bishopric, the appointment of Wulfstan was one that allowed Ealdred to continue his considerable influence on the see of Worcester. Ealdred retained a number of estates belonging to Worcester. Even after the Norman Conquest, Ealdred still controlled some events in Worcester, and it was Ealdred, not Wulfstan, who opposed Urse d'Abetot's attempt to extend the castle of Worcester into the cathedral after the Norman Conquest. While archbishop, Ealdred built at Beverley, expanding on the building projects begun by his predecessor Cynesige, as well as repairing and expanding other churches in his diocese. He also built refectories for the canons at York and Southwell. He was the only bishop to publish ecclesiastical legislation during Edward the Confessor's reign, attempting to discipline and reform the clergy. 
He held a synod of his clergy shortly before 1066. John of Worcester, a medieval chronicler, stated that Ealdred crowned King Harold II in 1066, although the Norman chroniclers mention Stigand as the officiating prelate. Given Ealdred's known support of Godwin's family, John of Worcester is probably correct. Stigand's position as archbishop was canonically suspect, and as Earl Harold had not allowed Stigand to consecrate one of the earl's churches, it is unlikely that Harold would have allowed Stigand to perform the much more important royal coronation. Arguments for Stigand having performed the coronation, however, rely on the fact that no other English source names the ecclesiastic who performed the ceremony; all Norman sources name Stigand as the presider. In any event, Ealdred and Harold were close, and Ealdred supported Harold's bid to become king. Ealdred perhaps accompanied Harold when the new king went to York and secured the support of the northern magnates shortly after Harold's consecration. According to the medieval chronicler Geoffrey Gaimar, after the Battle of Stamford Bridge Harold entrusted the loot gained from Harald Hardrada to Ealdred. Gaimar asserts that King Harold did this because he had heard of Duke William's landing in England, and needed to rush south to counter it. After the Battle of Hastings, Ealdred joined the group who tried to elevate Edgar the Ætheling, Edward the Exile's son, as king, but eventually he submitted to William the Conqueror at Berkhamsted. John of Worcester says that the group supporting Edgar vacillated over what to do while William ravaged the countryside, which led to Ealdred and Edgar's submission to William. Ealdred crowned William king on Christmas Day 1066. An innovation in William's coronation ceremony was that before the actual crowning, Ealdred asked the assembled crowd, in English, if it was their wish that William be crowned king. The Bishop of Coutances then did the same, but in Norman French. 
In March 1067, William took Ealdred with him when he returned to Normandy, along with the other English leaders Earl Edwin of Mercia, Earl Morcar, Edgar the Ætheling, and Archbishop Stigand. At Whitsun 1068 Ealdred performed the coronation of Matilda, William's wife. The "Laudes Regiae", or song commending a ruler, that was performed at Matilda's coronation may have been composed by Ealdred himself for the occasion. In 1069, when the northern thegns rebelled against William and attempted to install Edgar the Ætheling as king, Ealdred continued to support William; he was the only northern leader to do so. Ealdred was back at York by 1069, and may have taken an active part in trying to calm the rebellions in the north in 1068 and 1069. He died at York on 11 September 1069, and his body was buried in his episcopal cathedral. The medieval chronicler William of Malmesbury records a story that when the new sheriff of Worcester, Urse d'Abetot, encroached on the cemetery of the cathedral chapter for Worcester Cathedral, Ealdred pronounced a rhyming curse on him, saying "Thou art called Urse. May you have God's curse." After Ealdred's death, one of the restraints on William's treatment of the English was removed. Ealdred was one of a few native Englishmen whom William appears to have trusted, and his death led to fewer attempts to integrate Englishmen into the administration, although such efforts did not entirely stop. In 1070, a church council was held at Westminster and a number of bishops were deposed. By 1073 there were only two Englishmen in episcopal sees, and by the time of William's death in 1087, there was only one, Wulfstan II of Worcester. Ealdred did much to restore discipline in the monasteries and churches under his authority, and was liberal with gifts to the churches of his diocese. 
He built the monastic church of St Peter at Gloucester (now Gloucester Cathedral, though nothing of his fabric remains), then part of his diocese of Worcester. He also repaired a large part of Beverley Minster in the diocese of York, adding a presbytery and an unusually splendid painted ceiling covering "all the upper part of the church from the choir to the tower...intermingled with gold in various ways, and in a wonderful fashion". He added a pulpit "in German style" of bronze, gold and silver, surmounted by an arch with a rood cross in the same materials; these were examples of the lavish decorations added to important churches in the years before the conquest. Ealdred encouraged Folcard, a monk of Canterbury, to write the "Life" of Saint John of Beverley. This was part of Ealdred's promotion of the cult of Saint John, who had only been canonised in 1037. Along with the "Pontificale", Ealdred may have brought back from Cologne the first manuscript of the "Cambridge Songs" to enter England, a collection of Latin Goliardic songs which became famous in the Middle Ages. The historian Michael Lapidge suggests that the "Laudes Regiae", which are included in Cotton Vitellius E xii, might have been composed by Ealdred, or a member of his household. Another historian, H. E. J. Cowdrey, argued that the "laudes" were composed at Winchester. These praise songs are probably the same as those performed at Matilda's coronation, but they might have been used at other court ceremonies before Ealdred's death. Some historians have seen Ealdred as an "old-fashioned prince-bishop"; others say he "raised the see of York from its former rustic state". He was known for his generosity and for his diplomatic and administrative abilities. After the Conquest, Ealdred provided a degree of continuity between the pre- and post-Conquest worlds. 
One modern historian feels that it was Ealdred who was behind the compilation of the D version of the "Anglo-Saxon Chronicle", dating its composition to the 1050s. Certainly, Ealdred is one of the leading figures in the work, and it is likely that one of his clerks compiled that version.
Alexander Balas Alexander I Theopator Euergetes, surnamed Balas, was the ruler of the Greek Seleucid kingdom from 150 BC (or summer 152 BC, when he first claimed the throne) to August 145 BC. Alexander defeated Demetrius I Soter for the crown in 150 BC. After a brief reign he lost the crown to Demetrius II Nicator, being defeated at the Battle of Antioch (145 BC) in Syria and dying shortly afterwards. Alexander Balas claimed to be the son of Antiochus IV Epiphanes and Laodice IV and heir to the Seleucid throne. The ancient sources Polybius and Diodorus say that this claim was false, and that he and his sister Laodice VI were really natives of Smyrna of humble origin. Modern scholars disagree about whether this is true or was propaganda put about by Alexander's opponents. According to Diodorus, Alexander was originally put forward as a candidate for the Seleucid throne by Attalus II of Pergamum. Attalus had been disturbed by the Seleucid king Demetrius I's interference in Cappadocia, where he had dethroned king Ariarathes V. Boris Chrubasik is sceptical, noting that there is little subsequent evidence for Attalid involvement with Alexander. However, Selene Psoma has proposed that a large set of coins minted in a number of cities under Attalid control in this period was produced by Attalus II in order to fund Alexander's bid for the kingship. Alexander and his sister were maintained in Cilicia by Heracleides, a former minister of Antiochus IV and brother of Timarchus, a usurper in Media who had been executed by the reigning king Demetrius I Soter. In 153 BC, Heracleides brought Alexander and his sister to Rome, where he presented Alexander to the Roman Senate, which recognised him as the legitimate Seleucid king and agreed to support him in his bid to take the throne. Polybius mentions that Attalus II and Demetrius I also met with the Senate at this time, but does not state how this was connected to the recognition of Alexander, if at all. After recruiting mercenaries, Alexander and Heracleides departed to Ephesus. 
From there, they invaded Phoenicia by sea, seizing Ptolemais Akko. Numismatic evidence shows that Alexander had also gained control of Seleucia Pieria, Byblos, Beirut, and Tyre by 151 BC. On this coinage, Alexander heavily advertised his (claimed) connection to Antiochus IV, depicting Zeus Nicephorus on his coinage as Antiochus had done. He also assumed the title of "Theopator" ('Divinely Fathered'), which recalled Antiochus' epithet "Theos Epiphanes" ('God Manifest'). The coinage also presented Alexander Balas in the guise of Alexander the Great, with pronounced facial features and long flowing hair. This was intended to emphasise his military prowess to his soldiers. Alexander and Demetrius I competed with one another to win over Jonathan Apphus, the leader of the ascendant faction in Judaea. Jonathan was won over to Alexander's side by the grant of a high position in the Seleucid court and the high priesthood in Jerusalem. Reinforced by Jonathan's hardened soldiers, Alexander fought a decisive battle with Demetrius in July 150 BC, in which Demetrius was killed. By autumn, Alexander's kingship was recognised throughout the Seleucid realm. Alexander gained control of Antioch at this time, and his chancellor, Ammonius, murdered all the courtiers of Demetrius I, as well as his wife Laodice and his eldest son Antigonus. Ptolemy VI Philometor of Egypt entered into an alliance with Alexander, which was sealed by Alexander's marriage to his daughter Cleopatra Thea. The wedding took place at Ptolemais, with Ptolemy VI and Jonathan Apphus in attendance. Alexander took the opportunity to shower honours on Jonathan, whom he treated as his main agent in Judaea. The marriage was advertised by a special coinage issue depicting the royal pair side by side, only the second depiction of a queen on Seleucid coinage. She is shown with divine attributes (a cornucopia and a calathus) and is depicted in front of the king. 
Some scholars have seen Alexander as little more than a Ptolemaic puppet, arguing that this coinage emphasises Cleopatra's dominance over him and that the chancellor Ammonius was a Ptolemaic agent. Other scholars argue that the alliance was advertised as an important one, but that the arguments for Alexander's subservience have been overstated. Now master of the empire, he is said to have abandoned himself to a life of debauchery, handing the administration of Antioch over to two commanders, Hierax and Diodotus. This representation is partially a product of his opponents' propaganda, but Alexander is not recorded to have achieved anything in these years. Meanwhile, the Parthians took advantage of the unsettled situation to invade Media. The region had been lost to Seleucid control by the middle of 148 BC. In early 147 BC Demetrius' son Demetrius II returned to Syria with a force of Cretan mercenaries led by a man called Lasthenes. Much of Coele Syria was lost to him immediately, possibly as a result of the defection of the regional commander. Jonathan attacked Demetrius's position from the south, seizing Jaffa and Ashdod, while Alexander Balas was occupied with a revolt in Cilicia. In 145 BC Ptolemy VI of Egypt invaded Syria, ostensibly in support of Alexander Balas. In practice, Ptolemy's intervention came at a heavy cost; with Alexander's permission, he took control of all the Seleucid cities along the coast, including Seleucia Pieria. He may also have started minting his own coinage in the Syrian cities. While he was at Ptolemais Akko, however, Ptolemy switched sides. According to Josephus, Ptolemy discovered that Alexander's chancellor, Ammonius, had been plotting to assassinate him, but when he demanded that Ammonius be punished, Alexander refused. Ptolemy remarried his daughter Cleopatra Thea to Demetrius II and continued his march northward. Alexander's commanders in Antioch, Diodotus and Hierax, surrendered the city to Ptolemy. 
Alexander returned from Cilicia with his army, but Ptolemy VI and Demetrius II defeated his forces in a battle at the Oenoparas river. Earlier, Alexander had sent his infant son Antiochus to an Arabian dynast called Zabdiel Diocles. Alexander now fled to Arabia in order to join up with Zabdiel, but he was killed. Sources disagree about whether the killers were a pair of his own generals who had decided to switch sides or Zabdiel himself. Alexander's severed head was brought to Ptolemy, who also died shortly after from wounds sustained in the battle. Zabdiel continued to look after Alexander's infant son Antiochus until 145 BC, when the general Diodotus declared the boy king in order to serve as the figurehead of a rebellion against Demetrius II. In 130 BC, another claimant to the throne, Alexander Zabinas, would also claim to be Alexander Balas' son, almost certainly spuriously. Alexander is the title character of the oratorio "Alexander Balus", written in 1747 by George Frideric Handel.
Alexander III of Russia Alexander III (10 March 1845 – 1 November 1894) was Emperor of Russia, King of Poland and Grand Duke of Finland from 13 March 1881 until his death on 1 November 1894. He was highly reactionary and reversed some of the liberal reforms of his father, Alexander II. Under the influence of Konstantin Pobedonostsev (1827–1907) he opposed any reform that limited his autocratic rule. During his reign, Russia fought no major wars; he was therefore styled "The Peacemaker". Grand Duke Alexander Alexandrovich was born on 10 March 1845 at the Winter Palace in Saint Petersburg, Russian Empire, the second son and third child of Emperor Alexander II and his first wife Maria Alexandrovna (née Princess Marie of Hesse). In disposition Alexander bore little resemblance to his soft-hearted, liberal father, and still less to his refined, philosophic, sentimental, chivalrous, yet cunning great-uncle Emperor Alexander I, who could have been given the title of "the first gentleman of Europe". Although an enthusiastic amateur musician and patron of the ballet, Alexander was seen as lacking refinement and elegance. Indeed, he rather relished the idea of being of the same rough texture as some of his subjects. His straightforward, abrupt manner savoured sometimes of gruffness, while his direct, unadorned method of expressing himself harmonized well with his rough-hewn, immobile features and somewhat sluggish movements. His education was not such as to soften these peculiarities. More than six feet tall (about 1.9 m), he was also noted for his immense physical strength. A sebaceous cyst on the left side of his nose caused him to be mocked by some of his contemporaries, and he sat for photographs and portraits with the right side of his face most prominent. An account from the memoirs of the artist Alexander Benois gives one impression of Alexander III: After a performance of the ballet "Tsar Kandavl" at the Mariinsky Theatre, I first caught sight of the Emperor. 
I was struck by the size of the man, and although cumbersome and heavy, he was still a mighty figure. There was indeed something of the muzhik [Russian peasant] about him. The look of his bright eyes made quite an impression on me. As he passed where I was standing, he raised his head for a second, and to this day I can remember what I felt as our eyes met. It was a look as cold as steel, in which there was something threatening, even frightening, and it struck me like a blow. The Tsar's gaze! The look of a man who stood above all others, but who carried a monstrous burden and who every minute had to fear for his life and the lives of those closest to him. In later years I came into contact with the Emperor on several occasions, and I felt not the slightest bit timid. In more ordinary cases Tsar Alexander III could be at once kind, simple, and even almost homely. Though he was destined to be a strongly counter-reforming emperor, Alexander had little prospect of succeeding to the throne during the first two decades of his life, as he had an elder brother, Nicholas, who seemed of robust constitution. Even when Nicholas first displayed symptoms of delicate health, the notion that he might die young was never taken seriously, and he was betrothed to Princess Dagmar of Denmark, daughter of King Christian IX of Denmark and Queen Louise of Denmark, whose siblings included King Frederick VIII of Denmark, Queen Alexandra of the United Kingdom and King George I of Greece. Great solicitude was devoted to the education of Nicholas as tsesarevich, whereas Alexander received only the training of an ordinary Grand Duke of that period. This included acquaintance with French, English and German, and military drill. 
Alexander became tsesarevich upon Nicholas's sudden death in 1865; it was then that he began to study the principles of law and administration under Konstantin Pobedonostsev, then a professor of civil law at Moscow State University and later (from 1880) chief procurator of the Holy Synod of the Orthodox Church in Russia. Pobedonostsev instilled into the young man's mind the belief that zeal for Russian Orthodox thought was an essential factor of Russian patriotism to be cultivated by every right-minded emperor. While he was heir apparent from 1865 to 1881, Alexander did not play a prominent part in public affairs, but allowed it to become known that he had ideas which did not coincide with the principles of the existing government. On his deathbed the previous tsesarevich was said to have expressed the wish that his fiancée, Princess Dagmar of Denmark, should marry his successor. This wish was swiftly realized when, in the Grand Church of the Winter Palace in St. Petersburg, Alexander wed Dagmar, who converted to Orthodox Christianity and took the name Maria Feodorovna. The union proved a happy one to the end; unlike nearly all of his predecessors since Peter I, there was no adultery in his marriage. The couple spent their wedding night at the Tsarevich's private dacha known as "My Property". Later on the Tsarevich became estranged from his father; this was due to their vastly differing political views, as well as his resentment of Alexander II's long-standing relationship with Catherine Dolgorukov (with whom he had several illegitimate children) while his mother, the Empress, was suffering from chronic ill-health. To the scandal of many at court, including the Tsarevich himself, Alexander II married Catherine a mere month after Maria Alexandrovna's death in 1880. On 13 March 1881 (N.S.) Alexander's father, Alexander II, was assassinated by members of the extremist organization Narodnaya Volya. 
As a result, he ascended the Russian imperial throne. He and Maria Feodorovna were officially crowned and anointed at the Assumption Cathedral in Moscow on 27 May 1883. Alexander's ascension to the throne was followed by an outbreak of anti-Jewish riots. Alexander III disliked the extravagance of the rest of his family. It was also expensive for the Crown to pay so many grand dukes each year. Each one received an annual salary of 250,000 rubles, and grand duchesses received a dowry of a million when they married. He limited the title of grand duke and duchess to only children and male-line grandchildren of emperors. The rest would bear a princely title and the style of Serene Highness. He also forbade morganatic marriages, as well as marriages outside Orthodoxy. On the day of his assassination, Alexander II had signed an ukaz setting up consultative commissions to advise the monarch. On ascending to the throne, however, Alexander III took Pobedonostsev's advice and cancelled the policy before its publication. He made it clear that his autocracy would not be limited. All of Alexander III's internal reforms aimed to reverse the liberalization that had occurred in his father's reign. The new Emperor believed that remaining true to Russian Orthodoxy, Autocracy, and Nationality (the ideology introduced by his grandfather, Emperor Nicholas I) would save Russia from revolutionary agitation. Alexander weakened the power of the "zemstvo" (elective local administrative bodies) and placed the administration of peasant communes under the supervision of land-owning proprietors appointed by his government. These "land captains" ("zemskiye nachalniki") were feared and resented throughout the Empire's peasant communities. These acts weakened the nobility and the peasantry and brought Imperial administration under the Emperor's personal control. 
In such policies Alexander III followed the advice of Konstantin Pobedonostsev, who retained control of the Church in Russia through his long tenure as Procurator of the Holy Synod (from 1880 to 1905) and who became tutor to Alexander's son and heir, Nicholas. (Pobedonostsev appears as "Toporov" in Tolstoy's novel "Resurrection".) Other conservative advisors included Count D. A. Tolstoy (minister of education, and later of internal affairs) and I. N. Durnovo (D. A. Tolstoy's successor in the latter post). Mikhail Katkov and other journalists supported the emperor in his autocracy. The Russian famine of 1891–92, which caused 375,000 to 500,000 deaths, and the ensuing cholera epidemic permitted some liberal activity, as the Russian government could not cope with the crisis and had to allow zemstvos to help with relief (among others, Leo Tolstoy helped organize soup-kitchens, and Chekhov directed anti-cholera precautions in several villages). Alexander's political ideal was a nation composed of a single nationality, language, and religion, all under one form of administration. Through the teaching of the Russian language in Russian schools in Germany, Poland, and Finland, the destruction of the remnants of German, Polish, and Swedish institutions in the respective provinces, and the patronage of Eastern Orthodoxy, he attempted to realize this ideal. Alexander was hostile to Jews; his reign witnessed a sharp deterioration in the Jews' economic, social, and political condition. His policy was eagerly implemented by tsarist officials in the "May Laws" of 1882. These laws banned Jews from inhabiting rural areas and shtetls (even within the Pale of Settlement) and restricted the occupations in which they could engage. Encouraged by its successful assassination of Alexander II, the Narodnaya Volya movement began planning the murder of Alexander III. 
The Okhrana uncovered the plot, and five of the conspirators, including Alexander Ulyanov, the older brother of Vladimir Lenin, were captured and hanged in May 1887. The general negative consensus about the tsar's foreign policy follows the conclusions of the British Prime Minister Lord Salisbury in 1885. In foreign affairs Alexander III was a man of peace, but not at any price, and held that the best means of averting war was to be well prepared for it. Diplomat Nikolay Girs, scion of a rich and powerful family, served as his Foreign Minister from 1882 to 1895 and established the peaceful policies for which Alexander has been given credit. Girs was an architect of the Franco-Russian Alliance of 1891, which was later expanded into the Triple Entente with the addition of Great Britain. That alliance brought France out of diplomatic isolation and moved Russia from the German orbit to a coalition with France, one that was strongly supported by French financial assistance to Russia's economic modernization. Girs was in charge of a diplomacy that featured numerous negotiated settlements, treaties and conventions. These agreements defined Russian boundaries and restored equilibrium to dangerously unstable situations. The most dramatic success came in 1885, settling long-standing tensions with Great Britain, which was fearful that Russian expansion to the south would be a threat to India. Girs was usually successful in restraining the aggressive inclinations of Tsar Alexander, convincing him that the very survival of the tsarist system depended on avoiding major wars. With a deep insight into the tsar's moods and views, Girs was usually able to shape the final decisions by outmaneuvering hostile journalists, ministers, and even the tsarina, as well as his own ambassadors. His Russia fought no wars. 
Though Alexander was indignant at the conduct of German chancellor Otto von Bismarck towards Russia, he avoided an open rupture with Germany, even reviving the League of Three Emperors for a time and, in 1887, signing the Reinsurance Treaty with the Germans. However, in 1890 the expiration of the treaty coincided with the dismissal of Bismarck by the new German emperor, Kaiser Wilhelm II (for whom the Tsar had an immense dislike), and the unwillingness of Wilhelm II's government to renew the treaty. In response, Alexander III began cordial relations with France, eventually entering into an alliance with the French in 1892. Despite chilly relations with Berlin, the Tsar nevertheless confined himself to keeping a large number of troops near the German frontier. With regard to Bulgaria he exercised similar self-control. The efforts of Prince Alexander and afterwards of Stambolov to destroy Russian influence in the principality roused his indignation, but he vetoed all proposals to intervene by force of arms. In Central Asian affairs he followed the traditional policy of gradually extending Russian domination without provoking conflict with the United Kingdom (see Panjdeh Incident), and he never allowed the bellicose partisans of a forward policy to get out of hand. His reign cannot be regarded as an eventful period of Russian history, but under his hard rule the country made considerable progress. Alexander and his wife regularly spent their summers at Langinkoski manor near Kotka on the Finnish coast, where their children were immersed in a Scandinavian lifestyle of relative modesty. Alexander rejected foreign influence, German influence in particular; thus the adoption of local national principles was deprecated in all spheres of official activity, with a view to realizing his ideal of a Russia homogeneous in language, administration and religion. 
These ideas conflicted with those of his father, who had German sympathies despite being a patriot; Alexander II often used the German language in his private relations, occasionally ridiculed the Slavophiles and based his foreign policy on the Prussian alliance. Some differences between father and son had first appeared during the Franco-Prussian War, when Alexander II supported the cabinet of Berlin while the Tsesarevich made no effort to conceal his sympathies for the French. These sentiments would resurface during 1875–1879, when the Eastern Question excited Russian society. At first the Tsesarevich was more Slavophile than the government, but his phlegmatic nature restrained him from many exaggerations, and any popular illusions he may have imbibed were dispelled by personal observation in Bulgaria, where he commanded the left wing of the invading army. Never consulted on political questions, Alexander confined himself to military duties and fulfilled them in a conscientious and unobtrusive manner. After many mistakes and disappointments, the army reached Constantinople and the Treaty of San Stefano was signed, but much that had been obtained by that important document had to be sacrificed at the Congress of Berlin. Bismarck failed to do what was expected of him by the Russian emperor. In return for the Russian support which had enabled him to create the German Empire, it was thought that he would help Russia to solve the Eastern question in accordance with Russian interests, but to the surprise and indignation of the cabinet of Saint Petersburg he confined himself to acting the part of "honest broker" at the Congress, and shortly afterwards contracted an alliance with Austria-Hungary for the purpose of counteracting Russian designs in Eastern Europe. 
The Tsesarevich could refer to these results as confirmation of the views he had expressed during the Franco-Prussian War; he concluded that for Russia, the best thing was to recover as quickly as possible from her temporary exhaustion, and prepare for future contingencies by military and naval reorganization. In accordance with this conviction, he suggested that certain reforms should be introduced. Following his father's assassination, Alexander III was advised that it would be difficult for him to be kept safe at the Winter Palace. As a result, Alexander relocated his family to the Gatchina Palace, located south of St. Petersburg, making it his primary residence. Under heavy guard he would make occasional visits into St. Petersburg, but even then he would stay in the Anichkov Palace, as opposed to the Winter Palace. In the 1860s Alexander fell madly in love with his mother's lady-in-waiting, Princess Maria Elimovna Meshcherskaya. Dismayed to learn that Prince Wittgenstein had proposed to her in early 1866, he told his parents that he was prepared to give up his rights of succession in order to marry his beloved "Dusenka". On 19 May 1866, Alexander II informed his son that Russia had come to an agreement with the parents of Princess Dagmar of Denmark, his fourth cousin. Before then, she had been the fiancée of his late elder brother Nicholas. At first Alexander refused to travel to Copenhagen, declaring that he did not love Dagmar and wanted to marry Maria. In response the enraged emperor ordered Alexander to go straight to Denmark and propose to Princess Dagmar. The Tsesarevich then realised that he was not a free man and that duty had to come first and foremost; the only thing left to do was to write in his diary "Farewell, dear Dusenka." Maria was forced to leave Russia, accompanied by her aunt, Princess Chernyshova. 
Almost a year after her first appearance in Paris, Pavel Pavlovich Demidov, 2nd Prince di San Donato, fell in love with her, and the couple married in 1867. Maria would die giving birth to her son Elim Pavlovich Demidov, 3rd Prince di San Donato. Alexander soon grew fond of Dagmar and had six children by her, five of whom survived into adulthood: Nicholas (b. 1868), George (b. 1871), Xenia (b. 1875), Michael (b. 1878) and Olga (b. 1882). Of his five surviving children, he was closest to his youngest two. In 1885 it was Alexander who commissioned Peter Carl Fabergé to produce the first of what were to become a series of jeweled Easter eggs (now called "Fabergé eggs") for his wife as an Easter gift, the First Hen egg, which delighted her immensely and became an annual Easter tradition for Alexander and, upon his succession, for his son Nicholas as well. Each summer his parents-in-law, King Christian IX and Queen Louise, held family reunions at the Danish royal palaces of Fredensborg and Bernstorff, bringing Alexander, Maria and their children to Denmark. His sister-in-law, the Princess of Wales, would come from Great Britain with some of her children, and his brother-in-law, King George I of Greece, and his wife, Queen Olga, who was a first cousin of Alexander and a Romanov Grand Duchess by birth, came with their children from Athens. In contrast to the strict security observed in Russia, Alexander and Maria revelled in the relative freedom that they enjoyed in Denmark; Alexander once commented to the Prince and Princess of Wales near the end of a visit that he envied them being able to return to a happy home in England, while he was returning to his Russian prison. In Denmark, he was able to enjoy joining his children in muddy ponds looking for tadpoles, sneaking into his father-in-law's orchard to steal apples, and playing pranks, such as turning a water hose on the visiting King Oscar II of Sweden. 
As Tsesarevich—and then as Tsar—Alexander had an extremely poor relationship with his brother Grand Duke Vladimir. This tension was reflected in the rivalry between Maria Feodorovna and Vladimir's wife, Grand Duchess Marie Pavlovna. Alexander had better relationships with his other brothers: Alexei (whom he made rear admiral and then a grand admiral of the Russian Navy), Sergei (whom he made governor of Moscow) and Paul. Despite the antipathy that Alexander had towards his stepmother, Princess Catherine Dolgorukov, he nevertheless allowed her to remain in the Winter Palace for some time after his father's assassination and to retain various keepsakes of him. These included Alexander II's blood-soaked uniform that he died wearing, and his reading glasses. The Imperial train derailed in an accident at Borki. At the moment of the crash, the imperial family was in the dining car. Its roof collapsed, and Alexander held its remains on his shoulders as the children fled outdoors. The onset of Alexander's kidney failure was later attributed to the blunt trauma suffered in this incident. In 1894, Alexander III became ill with terminal kidney disease (nephritis). Maria Feodorovna's sister-in-law, Queen Olga of Greece, offered her villa of Mon Repos, on the island of Corfu, in the hope that it might improve the Tsar's condition. When the family reached Crimea, however, they stayed at the Maly Palace in Livadia, as Alexander was too weak to travel any farther. Recognizing that the Tsar's days were numbered, various imperial relatives began to descend on Livadia. Even the famed clergyman John of Kronstadt paid a visit and administered Communion to the Tsar. On 21 October, Alexander received Nicholas's fiancée, Princess Alix, who had come from her native Darmstadt to receive the Tsar's blessing. Despite being exceedingly weak, Alexander insisted on receiving Alix in full dress uniform, an event that left him exhausted. Soon after, his health began to deteriorate more rapidly. 
He died in the arms of his wife, and in the presence of his physician, Ernst Viktor von Leyden, at the Maly Palace in Livadia at the age of forty-nine, and was succeeded by his eldest son, Tsesarevich Nicholas, who took the throne as Nicholas II. After leaving Livadia on 6 November and traveling to St. Petersburg by way of Moscow, his remains were interred on 18 November at the Peter and Paul Fortress. In 1909, a bronze equestrian statue of Alexander III sculpted by Paolo Troubetzkoy was placed in Znamenskaya Square in front of the Moscow Rail Terminal in St. Petersburg. Both the horse and rider were sculpted in massive form, leading to the nickname of "hippopotamus". Troubetzkoy envisioned the statue as a caricature, jesting that he wished "to portray an animal atop another animal", and it was quite controversial at the time, with many, including members of the Imperial Family, opposed to the design; it was approved only because the Dowager Empress unexpectedly liked the monument. Following the Revolution of 1917 the statue remained in place as a symbol of tsarist autocracy until 1937, when it was placed in storage. In 1994 it was again put on public display, in front of the Marble Palace. Another memorial is located in the city of Irkutsk at the Angara embankment. On 18 November 2017, Vladimir Putin unveiled a bronze monument to Alexander III on the site of the former Maly Livadia Palace in Crimea. The four-meter monument by Russian sculptor Andrey Kovalchuk depicts Alexander III sitting on a stump, his outstretched arms resting on a sabre. An inscription repeats his alleged saying: "Russia has only two allies: the Army and the Navy." Alexander III had six children (five of whom survived to adulthood) of his marriage with Princess Dagmar of Denmark, also known as Marie Feodorovna.
https://en.wikipedia.org/wiki?curid=1592
Alexander I of Scotland Alexander I (medieval Gaelic: "Alaxandair mac Maíl Coluim"; modern Gaelic: "Alasdair mac Mhaol Chaluim"; c. 1078 – 23 April 1124), posthumously nicknamed The Fierce, was the King of Scotland from 1107 to his death. Alexander was the fifth son of Malcolm III by his wife Margaret of Wessex, grandniece of Edward the Confessor. Alexander was named after Pope Alexander II. He was the younger brother of King Edgar, who was unmarried, and his brother's heir presumptive by 1104 (and perhaps earlier). In that year he was the senior layman present at the examination of the remains of Saint Cuthbert at Durham prior to their re-interment. He held lands in Scotland north of the Forth and in Lothian. On the death of Edgar in 1107, he succeeded to the Scottish crown; but, in accordance with Edgar's instructions, their brother David was granted an appanage in southern Scotland. Edgar's will granted David the lands of the former kingdom of Strathclyde or Cumbria, and this was apparently agreed in advance by Edgar, Alexander, David and their brother-in-law Henry I of England. In 1113, perhaps at Henry's instigation, and with the support of his Anglo-Norman allies, David demanded, and received, additional lands in Lothian along the Upper Tweed and Teviot. David did not receive the title of king, but of "prince of the Cumbrians", and his lands remained under Alexander's final authority. The dispute over Tweeddale and Teviotdale does not appear to have damaged relations between Alexander and David, although it was unpopular in some quarters. A Gaelic poem laments:

It's bad what Malcolm's son has done,
dividing us from Alexander;
he causes, like each king's son before,
the plunder of stable Alba.

The dispute over the eastern marches does not appear to have caused lasting trouble between Alexander and Henry of England. In 1114 he joined Henry on campaign in Wales against Gruffudd ap Cynan of Gwynedd. 
Alexander's marriage with Henry's illegitimate daughter Sybilla of Normandy may have occurred as early as 1107, or as late as 1114. William of Malmesbury's account attacks Sybilla, but the evidence argues that Alexander and Sybilla were a devoted but childless couple, and that Sybilla was of noteworthy piety. Sybilla died in unrecorded circumstances at "Eilean nam Ban" (Kenmore on Loch Tay) in July 1122 and was buried at Dunfermline Abbey. Alexander did not remarry, and Walter Bower wrote that he planned an Augustinian Priory at the "Eilean nam Ban" dedicated to Sybilla's memory, and he may have taken steps to have her venerated. Alexander had at least one illegitimate child, Máel Coluim mac Alaxandair, who was later to be involved in a revolt against David I in the 1130s. He was imprisoned at Roxburgh for many years afterwards, perhaps until his death some time after 1157. Alexander was, like his brothers Edgar and David, a notably pious king. He was responsible for foundations at Scone and Inchcolm. His mother's chaplain and hagiographer Thurgot was named Bishop of Saint Andrews (or "Cell Rígmonaid") in 1107, presumably by Alexander's order. The case of Thurgot's would-be successor Eadmer shows that Alexander's wishes were not always accepted by the religious community, perhaps because Eadmer had the backing of the Archbishop of Canterbury, Ralph d'Escures, rather than Thurstan of York. Alexander also patronised Saint Andrews, granting lands intended for an Augustinian Priory, which may have been the same as that intended to honour his wife. For all his religiosity, Alexander was not remembered as a man of peace. He manifested the terrible aspect of his character in his reprisals in the Mormaerdom of Moray. Andrew of Wyntoun's "Orygynale Cronykil of Scotland" says that Alexander was holding court at Invergowrie when he was attacked by "men of the Isles". Walter Bower says the attackers were from Moray and Mearns. 
Alexander pursued them north, to "Stockford" in Ross (near Beauly), where he defeated them. This, says Wyntoun, is why he was named the "Fierce". The dating of this is uncertain, as is the identity of his enemies. However, in 1116 the Annals of Ulster report: "Ladhmann son of Domnall, grandson of the king of Scotland, was killed by the men of Moray." The king referred to is Alexander's father, Malcolm III, and Domnall was Alexander's half brother. The Mormaerdom or Kingdom of Moray was ruled by the family of Macbeth (Mac Bethad mac Findláich) and Lulach (Lulach mac Gille Coemgáin): not overmighty subjects, but a family who had ruled Alba within little more than a lifetime. Who the Mormaer or King was at this time is not known; it may have been Óengus of Moray or his father, whose name is not known. As for the Mearns, the only known Mormaer of Mearns, Máel Petair, had murdered Alexander's half-brother Duncan II (Donnchad mac Maíl Coluim) in 1094. Alexander died in April 1124 at his court at Stirling; his brother David, probably the acknowledged heir since the death of Sybilla, succeeded him.
https://en.wikipedia.org/wiki?curid=1593
Alexander I of Serbia Alexander I or Aleksandar Obrenović (14 August 1876 – 11 June 1903) was king of Serbia from 1889 to 1903, when he and his wife, Draga Mašin, were assassinated by a group of Royal Serbian Army officers, led by Captain Dragutin Dimitrijević. Alexander was born on 14 August 1876 to King Milan and Queen Natalie of Serbia. He belonged to the Obrenović dynasty. In 1889, King Milan unexpectedly abdicated and withdrew to private life, proclaiming Alexander king of Serbia under a regency until he should attain his majority at eighteen years of age. His mother became his regent. His parents were second cousins. In 1893, King Alexander, aged sixteen, arbitrarily proclaimed himself of full age, dismissed the regents and their government, and took the royal authority into his own hands. His action won popular support, as did his appointment of a radical ministry. In May 1894 King Alexander arbitrarily abolished King Milan's liberal constitution of 1888 and restored the conservative one of 1869. His attitude during the Greco-Turkish War (1897) was one of strict neutrality. In 1894 the young King brought his father, Milan, back to Serbia and, in 1898, appointed him commander-in-chief of the Serbian army. During that time, Milan was regarded as the "de facto" ruler of the country. In the summer of 1900, King Alexander suddenly announced his engagement to Draga Mašin, a disreputable widow of an obscure engineer. Alexander had met Draga Mašin in 1897 when she was serving as a maid of honor to his mother. Draga was ten years older than the king, unpopular with Belgrade society, well known for her allegedly numerous sexual liaisons, and widely believed to be infertile. Since Alexander was an only child, it was imperative to secure the succession by producing an heir. 
So intense was the opposition to Mašin among the political classes that the king found it impossible for a time to recruit suitable candidates into senior posts. Before making the announcement, Alexander did not consult with his father, who had been on vacation in Karlovy Vary making arrangements to secure the hand of German Princess Alexandra zu Schaumburg-Lippe for his son, or his Prime Minister Dr. Vladan Đorđević, who was visiting the Paris Universal Exhibition at the time of the announcement. Both immediately resigned from their respective offices, and Alexander had difficulty in forming a new cabinet. Alexander's mother also opposed the marriage and was subsequently banished from the kingdom. Opposition to the union seemed to subside somewhat for a time upon the publication of Tsar Nicholas II's congratulations to the king on his engagement and of his acceptance to act as the principal witness at the wedding. The marriage duly took place in August 1900. Even so, the unpopularity of the union weakened the King's position in the eyes of the army and of the country at large. King Alexander tried to reconcile the political parties by unveiling a liberal constitution of his own initiative in 1901, introducing for the first time in the constitutional history of Serbia a system of two chambers ("skupština" and "senate"). This reconciled the political parties but did not reconcile the army, which, already dissatisfied with the king's marriage, became still more so at rumors that one of the two unpopular brothers of Queen Draga, Lieutenant Nikodije, was to be proclaimed heir-presumptive to the throne. Alexander's good relations with, and the country's growing dependence on, Austria-Hungary were detested by the Serbian public. Two million Serbs lived in Austria-Hungary and a million more in the Ottoman Empire, and there was considerable migration to Serbia. Meanwhile, the independence of the senate and of the council of state caused increasing irritation to King Alexander. 
In March 1903 the King suspended the constitution for half an hour, time enough to publish decrees dismissing and replacing the old senators and councillors of state. This arbitrary act increased dissatisfaction in the country. The general impression was that, as soon as the senate was packed with men devoted to the royal couple and the government had obtained a large majority at the general elections, King Alexander would not hesitate any longer to proclaim Queen Draga's brother as the heir presumptive to the throne. In spite of this, it had been agreed with the Serbian Government that Prince Mirko of Montenegro, who was married to Natalija Konstantinovic, the granddaughter of Princess Anka Obrenović, an aunt of King Milan, would be proclaimed heir-presumptive in the event that the marriage of King Alexander and Queen Draga was childless. Apparently to prevent Queen Draga's brother being named heir-presumptive, but in reality to replace Alexander Obrenović with Peter Karađorđević, a conspiracy was organized by a group of Army officers headed by Captain Dragutin Dimitrijević, also known as "Apis", later the leader of the Black Hand secret society which would assassinate Archduke Franz Ferdinand in 1914, and by Novak Perisic, a young Greek Orthodox militant in the pay of the Russians. Several politicians were also part of the conspiracy, allegedly including the former Prime Minister Nikola Pašić. The royal couple's palace was invaded, and they hid in a cupboard in the Queen's bedroom. (Another account, used in the Serbian TV history series "The End of the Obrenović Dynasty", has the royal couple hiding in a secret panic room behind the mirror in a common bedroom. The room contained an entrance to a secret passage leading out of the palace, but the entrance was inaccessible because the queen's wardrobe had been placed over it after the wedding.) 
The conspirators searched the palace and eventually discovered the royal couple and murdered them in the early morning of June 11, 1903. King Alexander and Queen Draga were shot and their bodies mutilated and disemboweled and, according to eyewitness accounts, thrown from a second floor window of the palace onto piles of garden manure. The King was only 26 years old at the time of his death. King Alexander and Queen Draga were buried in the crypt of St. Mark's Church, Belgrade.
https://en.wikipedia.org/wiki?curid=1595
Alexander of Aphrodisias Alexander of Aphrodisias was a Peripatetic philosopher and the most celebrated of the Ancient Greek commentators on the writings of Aristotle. He was a native of Aphrodisias in Caria, and lived and taught in Athens at the beginning of the 3rd century, where he held a position as head of the Peripatetic school. He wrote many commentaries on the works of Aristotle; extant are those on the "Prior Analytics", "Topics", "Meteorology", "Sense and Sensibilia", and "Metaphysics". Several original treatises also survive, including a work "On Fate", in which he argues against the Stoic doctrine of necessity, and one "On the Soul". His commentaries on Aristotle were considered so useful that he was styled, by way of pre-eminence, "the commentator". Alexander was a native of Aphrodisias in Caria (present-day Turkey) and came to Athens towards the end of the 2nd century. He was a student of the two Stoic, or possibly Peripatetic, philosophers Sosigenes and Herminus, and perhaps of Aristotle of Mytilene. At Athens he became head of the Peripatetic school and lectured on Peripatetic philosophy. Alexander's dedication of "On Fate" to Septimius Severus and Caracalla, in gratitude for his position at Athens, indicates a date between 198 and 209. A recently published inscription from Aphrodisias confirms that he was head of one of the Schools at Athens and gives his full name as Titus Aurelius Alexander. His full nomenclature shows that his grandfather or other ancestor was probably given Roman citizenship by the emperor Antoninus Pius, while the latter was proconsul of Asia. The inscription honours his father, also called Alexander and also a philosopher. This fact makes it plausible that some of the suspect works that form part of Alexander's corpus should be ascribed to his father. Alexander composed several commentaries on the works of Aristotle, in which he sought to escape a syncretistic tendency and to recover the pure doctrines of Aristotle. 
His extant commentaries are on "Prior Analytics" (Book 1), "Topics", "Meteorology", "Sense and Sensibilia", and "Metaphysics" (Books 1-5). The commentary on the "Sophistical Refutations" is deemed spurious, as is the commentary on the final nine books of the "Metaphysics". The lost commentaries include works on the "De Interpretatione", "Posterior Analytics", "Physics", "On the Heavens", "On Generation and Corruption", "On the Soul", and "On Memory". Simplicius of Cilicia mentions that Alexander provided commentary on the quadrature of the lunes, and the corresponding problem of squaring the circle. In April 2007, it was reported that imaging analysis had discovered an early commentary on Aristotle's "Categories" in the Archimedes Palimpsest, and Robert Sharples suggested Alexander as the most likely author. There are also several extant original writings by Alexander. These include: "On the Soul", "Problems and Solutions", "Ethical Problems", "On Fate", and "On Mixture and Growth". Three works attributed to him are considered spurious: "Medical Questions", "Physical Problems", and "On Fevers". Additional works by Alexander are preserved in Arabic translation, these include: "On the Principles of the Universe", "On Providence", and "Against Galen on Motion". "On the Soul" ("De anima") is a treatise on the soul written along the lines suggested by Aristotle in his own "De anima". Alexander contends that the undeveloped reason in man is material ("nous hylikos") and inseparable from the body. He argued strongly against the doctrine of the soul's immortality. He identified the active intellect ("nous poietikos"), through whose agency the potential intellect in man becomes actual, with God. A second book is known as the "Supplement to On the Soul" ("Mantissa"). The "Mantissa" is a series of twenty-five separate pieces of which the opening five deal directly with psychology. 
The remaining twenty pieces cover problems in physics and ethics, of which the largest group deals with questions of vision and light, and the final four with fate and providence. The "Mantissa" was probably not written by Alexander in its current form, but much of the actual material may be his. "Problems and Solutions" ("Quaestiones") consists of three books which, although termed "problems and solutions of physical questions," treat of subjects which are not all physical, and are not all problems. Among the sixty-nine items in these three books, twenty-four deal with physics, seventeen with psychology, eleven with logic and metaphysics, and six with questions of fate and providence. It is unlikely that Alexander wrote all of the "Quaestiones"; some may be Alexander's own explanations, while others may be exercises by his students. "Ethical Problems" was traditionally counted as the fourth book of the "Quaestiones". The work is a discussion of ethical issues based on Aristotle, and contains responses to questions and problems deriving from Alexander's school. It is likely that the work was not written by Alexander himself, but rather by his pupils on the basis of debates involving Alexander. "On Fate" is a treatise in which Alexander argues against the Stoic doctrine of necessity. In "On Fate" Alexander denied three things: necessity, the foreknowledge of fated events that was part of the Stoic identification of God and Nature, and determinism in the sense of a sequence of causes that was laid down beforehand or predetermined by antecedents. He defended a view of moral responsibility we would today call libertarianism. "On Mixture and Growth" discusses the topic of mixture of physical bodies. It is both an extended discussion (and polemic) on Stoic physics, and an exposition of Aristotelian thought on this theme. "On the Principles of the Universe" is preserved in Arabic translation. 
This treatise is not mentioned in surviving Greek sources, but it enjoyed great popularity in the Muslim world, and a large number of copies have survived. The main purpose of this work is to give a general account of Aristotelian cosmology and metaphysics, but it also has a polemical tone, and it may be directed at rival views within the Peripatetic school. Alexander was concerned with filling the gaps of the Aristotelian system and smoothing out its inconsistencies, while also presenting a unified picture of the world, both physical and ethical. The topics dealt with are the nature of the heavenly motions and the relationship between the unchangeable celestial realm and the sublunar world of generation and decay. His principal sources are the "Physics" (book 7), "Metaphysics" (book 12), and the Pseudo-Aristotelian "On the Universe". "On Providence" survives in two Arabic versions. In this treatise, Alexander opposes the Stoic view that divine Providence extends to all aspects of the world; he regards this idea as unworthy of the gods. Instead, providence is a power that emanates from the heavens to the sublunar region, and is responsible for the generation and destruction of earthly things, without any direct involvement in the lives of individuals. By the 6th century Alexander's commentaries on Aristotle were considered so useful that he was referred to as "the commentator". His commentaries were greatly esteemed among the Arabs, who translated many of them, and he is heavily quoted by Maimonides. In 1210, the Church Council of Paris issued a condemnation, which probably targeted the writings of Alexander among others. In the early Renaissance his doctrine of the soul's mortality was adopted by Pietro Pomponazzi (against the Thomists and the Averroists), and by his successor Cesare Cremonini. This school is known as the Alexandrists. Alexander's band, an optical phenomenon, is named after him. 
Several of Alexander's works were published in the Aldine edition of Aristotle, Venice, 1495–1498; his "De Fato" and "De Anima" were printed along with the works of Themistius at Venice (1534); the former work, which has been translated into Latin by Grotius and also by Schulthess, was edited by J. C. Orelli, Zürich, 1824; and his commentaries on the "Metaphysica" by H. Bonitz, Berlin, 1847. In 1989 the first part of his "On Aristotle Metaphysics" was published in English translation as part of the Ancient commentators project. Since then, other works of his have been translated into English.
https://en.wikipedia.org/wiki?curid=1599
Severus Alexander Severus Alexander (Marcus Aurelius Severus Alexander Augustus; born Marcus Julius Gessius Bassianus Alexianus; c. 208 – 19 March 235) was Roman Emperor from 222 to 235 and the last emperor of the Severan dynasty. He succeeded his cousin Elagabalus upon the latter's assassination in 222. His own assassination marked the beginning of the Crisis of the Third Century—nearly 50 years of civil wars, foreign invasion, and collapse of the monetary economy. Alexander was the heir to his cousin, the 18-year-old Emperor, who had been murdered, along with his mother Julia Soaemias, by his own guards, who, as a mark of contempt, had their remains cast into the Tiber river. He and his cousin were both grandsons of the influential and powerful Julia Maesa, who had arranged for Elagabalus' acclamation as emperor by the famous Third Gallic Legion. It was the rumor of Alexander's death that triggered the assassination of Elagabalus and his mother. His 13-year reign was the longest reign of a "sole" emperor since Antoninus Pius. He was also the second-youngest sole legal Roman Emperor in the existence of the united empire, the youngest being Gordian III. As emperor, Alexander's peacetime reign was prosperous. However, Rome was militarily confronted with the rising Sassanid Empire and growing incursions from the tribes of Germania. He managed to check the threat of the Sassanids, but when campaigning against the Germanic tribes, Alexander attempted to bring peace by engaging in diplomacy and bribery. This alienated many in the Roman army and led to a conspiracy to assassinate and replace him. Born in 207 or 208, Severus Alexander became emperor when he was around 14 years old, making him the youngest emperor in Rome's history until the ascension of Gordian III. Alexander's grandmother Maesa believed that he had more potential to rule than her other grandson, the increasingly unpopular emperor Elagabalus. 
Thus, to preserve her own position, she had Elagabalus adopt the young Alexander and then arranged for Elagabalus' assassination, securing the throne for Alexander. The Roman army hailed Alexander as emperor on 13 March 222, immediately conferring on him the titles of Augustus, "pater patriae" and "pontifex maximus". Throughout his life, Alexander relied heavily on guidance from his grandmother, Maesa, and his mother, Julia Mamaea. Maesa died in 223, leaving Mamaea as the sole influence upon Alexander's actions. As a young, immature, and inexperienced adolescent, Alexander knew little about government, warcraft, or the role of ruling over an empire. In time, however, the army came to admire what Jasper Burns refers to as "his simple virtues and moderate behavior, so different from [Elagabalus]". Under the influence of his mother, Alexander did much to improve the morals and condition of the people, and to enhance the dignity of the state. He employed noted jurists, such as Ulpian, to oversee the administration of justice. His advisers were men like the senator and historian Cassius Dio, and it is claimed that he created a select board of 16 senators, although this claim is disputed. He also created a municipal council of 14 who assisted the urban prefect in administering the affairs of the 14 districts of Rome. Excessive luxury and extravagance at the imperial court were diminished, and he restored the Baths of Nero in 227 or 229; consequently, they are sometimes also known as the Baths of Alexander after him. Upon his accession he reduced the silver purity of the denarius from 46.5% to 43%—the actual silver weight dropped from 1.41 grams to 1.30 grams; however, in 229 he revalued the denarius, increasing the silver purity and weight to 45% and 1.46 grams. The following year he decreased the amount of base metal in the denarius while adding more silver, raising the silver purity and weight again to 50.5% and 1.50 grams. 
Additionally, during his reign taxes were lightened; literature, art and science were encouraged; and, for the convenience of the people, loan offices were instituted for lending money at a moderate rate of interest. In religious matters, Alexander preserved an open mind. According to the "Historia Augusta", he wished to erect a temple to Jesus but was dissuaded by the pagan priests; however, much of this book is full of falsifications, and modern scholars deem it almost completely untrustworthy. He allowed a synagogue to be built in Rome, and he gave as a gift to this synagogue a scroll of the Torah known as the Severus Scroll. In legal matters, Alexander did much to aid the rights of his soldiers. He confirmed that soldiers could name anyone as heirs in their will, whereas civilians had strict restrictions over who could become heirs or receive a legacy. He also confirmed that soldiers could free their slaves in their wills, protected the rights of soldiers to their property when they were on campaign, and reasserted that a soldier's property acquired in or because of military service (his "castrense peculium") could be claimed by no one else, not even the soldier's father. On the whole, Alexander's reign was prosperous until the rise, in the east, of the Sassanids under Ardashir I. In 231 AD, Ardashir invaded the Roman provinces of the east, overrunning Mesopotamia and penetrating possibly as far as Syria and Cappadocia, forcing from the young Alexander a vigorous response. Of the war that followed there are various accounts. According to the most detailed authority, Herodian, the Roman armies suffered a number of humiliating setbacks and defeats, while according to the "Historia Augusta" as well as Alexander's own dispatch to the Roman Senate, he gained great victories. 
Making Antioch his base, he organized in 233 a three-fold invasion of the Sassanian Empire; at the head of the main body he himself advanced to recapture northern Mesopotamia, while another army invaded Media through the mountains of Armenia, and a third advanced from the south in the direction of Babylon. The northernmost army gained some success, fighting in mountainous territory favorable to the Roman infantry, but the southern army was surrounded and destroyed by Ardashir's skilful horse-archers, and Alexander himself retreated after an indecisive campaign, his army wracked by indiscipline and disease. Further losses were incurred by the retreating northern army in the inclement cold of Armenia as it retired into winter quarters, due to an incompetent failure to establish adequate supply lines. Still, Mesopotamia was retaken, and Ardashir was not thereafter able to extend his conquests, though his son, Shapur, would obtain some success later in the century. Although the Sassanids were checked for the time, the conduct of the Roman army showed an extraordinary lack of discipline. In 232, there was a mutiny in the Syrian legion, which proclaimed Taurinus emperor. Alexander managed to suppress the uprising, and Taurinus drowned while attempting to flee across the Euphrates. The emperor returned to Rome and celebrated a triumph in 233. Alexander's reign was also characterized by a significant breakdown of military discipline. In 223, the Praetorian Guard murdered their prefect, Ulpian, in Alexander's presence. Alexander could not openly punish the ringleader of the riot, and instead removed him to a nominal post of honor in Egypt and then Crete, where he was "quietly put out of the way" sometime after the excitement had abated. The soldiers then fought a three-day battle against the populace of Rome, and this battle ended after several parts of the city were set on fire. 
Dio was among those who gave a highly critical account of military discipline during the time, saying that the soldiers would rather just surrender to the enemy. Different reasons are given for this issue; Campbell points to "...the decline in the prestige of the Severan dynasty, the feeble nature of Alexander himself, who appeared to be no soldier and to be completely dominated by his mother's advice, and lack of real military success at a time during which the empire was coming under increasing pressure." Herodian, on the other hand, was convinced that "the emperor's miserliness (partly the result of his mother's greed) and slowness to bestow donatives" were instrumental in the fall of military discipline under Alexander. After the Persian war, Alexander returned to Antioch with the famous Origen, one of the greatest Fathers of the Christian Church. Alexander's mother, Julia Mamaea, asked for Origen to tutor Alexander in Christianity. While Alexander was being educated in the Christian doctrines, the northern portion of his empire was being invaded by Germanic and Sarmatian tribes. A new and menacing enemy started to emerge directly after Alexander's success in the Persian war. In 234, the barbarians crossed the Rhine and Danube in hordes that caused alarm as far as Rome. The soldiers serving under Alexander, already demoralized after their costly war against the Persians, were further discontented with their emperor when their homes were destroyed by the barbarian invaders. As word of the invasion spread, the Emperor took the front line and went to battle against the Germanic invaders. The Romans prepared heavily for the war, building a fleet to carry the entire army across. However, at this point in Alexander's career, he still knew little about being a general. Because of this, he hoped the mere threat of his armies would be sufficient to persuade the hostile tribes to surrender. 
Severus enforced a strict military discipline in his men that sparked a rebellion among his legions. Having incurred heavy losses against the Persians, and on the advice of his mother, Alexander attempted to buy the Germanic tribes off, so as to gain time. It was this decision that resulted in the legionaries looking down upon Alexander. They considered him dishonorable and feared he was unfit to be Emperor. Under these circumstances the army swiftly looked to replace Alexander. Gaius Iulius Verus Maximinus was the next best option. He was a soldier from Thrace who had a golden reputation and was working hard to increase his military status. He was also a man of superior personal strength, who had risen to his present position from a peasant background. With the Thracian's acclamation came the end of the Severan dynasty, and, with the growing animosity of Severus' army towards him, the path for his assassination was paved. Alexander was forced to face his German enemies in the early months of 235. By the time he and his mother arrived, the situation had settled, and so his mother convinced him that, to avoid violence, trying to bribe the German army to surrender was the more sensible course of action. According to historians, it was this tactic combined with insubordination from his own men that destroyed his reputation and popularity. This perceived pusillanimity provoked the revolt of Alexander's army, and Severus fell victim to the swords of his own men following the nomination of Maximinus as emperor. Alexander was assassinated on 19 March 235, together with his mother, in a mutiny of the Legio XXII "Primigenia" at Moguntiacum (Mainz) while at a meeting with his generals. These assassinations secured the throne for Maximinus. Lampridius documents two theories that elaborate on Severus's assassination. The first claims that the disaffection of Mamaea was the main motive behind the homicide. 
However, Lampridius makes it clear that he is more supportive of an alternative theory, that Alexander was murdered in Sicilia (a place in Britain). This theory has it that, in an open tent after his lunch, Alexander was consulting with his insubordinate troops, who compared him to his cousin Elagabalus, the divisive and unpopular Emperor whose own assassination paved the way for Alexander's reign. A German servant entered the tent and initiated the call for Alexander's assassination, at which point many of the troops joined in the attack. Alexander's attendants fought against the other troops but could not hold off the combined might of those seeking the Emperor's assassination. Within minutes, Alexander was dead. His mother Julia Mamaea was in the same tent with Alexander and soon fell victim to the same group of assassins. Alexander's body was buried together with that of his mother in a mausoleum in Rome. The mausoleum, called Monte del Grano, is the third largest in Rome after those of Hadrian and Augustus. It is still visible in Piazza dei Tribuni, in the Quadraro area of Rome, where it resembles a large earth mound. The large sarcophagus found inside the tomb in the 16th century, which contained the emperor's remains, is now in the Palazzo dei Conservatori Museum in Rome. According to some sources, a precious glass urn found inside the same sarcophagus in 1582 was the Portland Vase, currently on display at the British Museum in London. Alexander's death marked the end of the Severan dynasty. He was the last of the Syrian emperors and the first emperor to be overthrown by military discontent on a wide scale. After his death his economic policies were completely discarded, and the Roman currency was devalued; this signaled the beginning of the chaotic period known as the Crisis of the Third Century, which brought the empire to the brink of collapse. 
Alexander's death at the hands of his troops can also be seen as the heralding of a new role for Roman emperors. Though they were not yet expected to personally fight in battle during Alexander's time, emperors were increasingly expected to display general competence in military affairs. Thus, Alexander's taking of his mother's advice to not get involved in battle, his dishonorable and unsoldierly methods of dealing with the Germanic threat, and the relative failure of his military campaign against the Persians were all deemed highly unacceptable by the soldiers. Indeed, Maximinus was able to overthrow Alexander by "harping on his own military excellence in contrast to that feeble coward". Yet by arrogating the power to dethrone their emperor, the legions paved the way for a half-century of widespread chaos and instability. Although the Senate declared the emperor and his rule damned upon the report of his death and the ascension of a replacement emperor, Alexander was deified after the death of Maximinus in 238. His "damnatio memoriae" was also reversed after Maximinus's death. Perhaps his most tangible legacy was the emergence in the 16th century of the 'Barberini vase'. This vase was allegedly found at the mausoleum of the Roman Emperor Alexander Severus and his family. The discovery of the vase is described by Pietro Santi Bartoli and referenced on page 28 of a book on The Portland Vase. Pietro Bartoli indicates that the vase contained the ashes of Severus Alexander. However, this claim, together with the interpretations of the scenes depicted, is the source of countless theories and disputed 'facts'. The vase passed through the hands of Sir William Hamilton, Ambassador to the Royal Court in Naples, and was later sold to the Duke and Duchess of Portland, and has subsequently been known as the Portland Vase. Following catastrophic damage, this vase (1–25 BC) has been reconstructed three times and resides in the British Museum. 
The Portland Vase itself was borrowed and closely copied by Josiah Wedgwood, who appears to have added modesty drapery. The vase formed the basis of Jasperware. Alexander was married three times. His most famous wife was Sallustia Orbiana, "Augusta", whom he married in 225 when she was 16 years old. Their marriage was arranged by Alexander's mother, Mamaea. However, as soon as Orbiana received the title of "Augusta", Mamaea became increasingly jealous and resentful of Alexander's wife due to Mamaea's excessive desire for all regal female titles. Alexander divorced and exiled Orbiana in 227, after her father, Seius Sallustius, was executed after being accused of treason. Alexander's second wife was Sulpicia Memmia, a member of one of the most ancient Patrician families in Rome. Her father was a man of consular rank; her grandfather's name was "Catulus". The identity of Alexander's third wife is unknown. Alexander did not father children with any of his wives. According to the Augustan History, a late Roman work containing biographies of emperors and others, and considered by scholars to be a work of very dubious historical reliability, Alexander prayed every morning in his private chapel. He was extremely tolerant of Jews and Christians alike. He continued all privileges towards Jews during his reign, and the Augustan History relates that Alexander placed images of Abraham and Jesus in his oratory, along with other Roman deities and classical figures. Also according to the "Historia Augusta", Alexander's "chief amusement consisted in having young dogs play with little pigs."
https://en.wikipedia.org/wiki?curid=1600
Alexander Jannaeus Alexander Jannaeus (also known as Alexander Jannai/Yannai; born Jonathan Alexander) was the second king of the Hasmonean dynasty, who ruled over an expanding kingdom of Judea from 103 to 76 BCE. A son of John Hyrcanus, he inherited the throne from his brother Aristobulus I, and married his brother's widow, Queen Salome Alexandra. Marked by conquests to expand the kingdom and by a bloody civil war, Alexander's reign has been characterised as cruel and oppressive, with never-ending conflict. The major historical sources on Alexander's life are Josephus's "Antiquities of the Jews" and "The Jewish War". The kingdom of Alexander Jannaeus was the largest and strongest known Jewish state outside of biblical sources, having conquered most of Palestine's Mediterranean coastline and the regions surrounding the Jordan River. Alexander also had many of his subjects killed for their disapproval of his handling of state affairs. Due to his territorial expansion and interactions with his subjects, he was continuously embroiled in foreign wars and domestic turmoil. Alexander Jannaeus was the third son of John Hyrcanus by his second wife. When Aristobulus I, Hyrcanus' son by his first wife, became king, he deemed it necessary for his own security to imprison his half-brother. Aristobulus died after a reign of one year. Upon his death, his widow, Salome Alexandra, had Alexander and his brothers released from prison. One of these brothers is said to have unsuccessfully sought the throne. Alexander, as the oldest living brother, had the right not only to the throne, but also to Salome, the widow of his deceased brother, who had died childless; and, although she was thirteen years older than him, he married her in accordance with Jewish law. By her he had two sons: the eldest, Hyrcanus II, became high priest in 62 BCE; the younger, Aristobulus II, was high priest from 66 to 62 BCE and started a bloody civil war with his brother, ending in his capture by Pompey the Great. 
Like his brother, he was an avid supporter of the aristocratic priestly faction known as the Sadducees. His wife Salome, on the other hand, came from a Pharisaic family (her brother was Simeon ben Shetach, a famous Pharisee leader); she was more sympathetic to the Pharisees' cause and protected them throughout his turbulent reign. Like his father, Alexander also served as the high priest. This raised the ire of the religious authorities, who insisted that these two offices should not be combined. According to the Talmud, Yannai was a questionable, desecrated priest (rumour had it that his mother had been captured in Modiin and violated) and, in the opinion of the Pharisees, was not allowed to serve in the temple. This infuriated the king, and he sided with the Sadducees who defended him. This incident led the king to turn against the Pharisees, and he persecuted them until his death. Alexander's first expedition was against the city of Ptolemais. While Alexander went ahead to besiege the city, Zoilus of Dora took the opportunity to try to relieve Ptolemais in hopes of establishing his rule over the coastal territories. Alexander's Hasmonean army quickly defeated Zoilus's forces. Ptolemais then requested aid from Ptolemy IX Lathyros, who had been banished by his mother Cleopatra III and had founded a kingdom in Cyprus after being cast out. The situation at Ptolemais was seized by Ptolemy as an opportunity to gain a stronghold and control the Judean coast in order to invade Egypt by sea. However, an individual named Demaenetus convinced the inhabitants of their imprudence in requesting Ptolemy's assistance. They realised that by allying themselves with Ptolemy, they had unintentionally declared war on Cleopatra. When Ptolemy arrived at the city, the inhabitants denied him access. Alexander likewise did not want to be involved in a war between Cleopatra and Ptolemy, so he abandoned his campaign against Ptolemais and returned to Jerusalem. 
After offering Ptolemy four hundred talents and a peace treaty in return for Zoilus's death, Alexander betrayed him by negotiating an alliance with Cleopatra. Once he had formed the alliance with Ptolemy, Alexander continued his conquests by capturing the coastal cities of Dora and Straton's Tower. As soon as Ptolemy learned of Alexander's scheme, he was determined to kill him. Ptolemy put Ptolemais under siege, but left his generals to attack the city while he continued to pursue Alexander. Ptolemy's pursuit caused much destruction in the Galilee region. There he captured Asochis on the Sabbath, taking ten thousand people as prisoners. Ptolemy also launched an unsuccessful attack on Sepphoris. Ptolemy and Alexander engaged in battle at Asophon near the Jordan River. Estimated at fifty to eighty thousand soldiers, Alexander's army consisted of both Jews and pagans. At the head of his armed forces were his elite pagan mercenaries, who specialised in the Greek-style phalanx. One of Ptolemy's commanders, Philostephanus, commenced the first attack by crossing the river that divided the two forces. The Hasmoneans had the advantage; however, Philostephanus had held back a portion of his forces, whom he sent in to recover lost ground. Perceiving them as vast reinforcements, Alexander's army fled. Some of his retreating forces tried to push back, but quickly dispersed as Ptolemy's forces pursued Alexander's fleeing army; thirty to fifty thousand Hasmonean soldiers died. Ptolemy's forces at Ptolemais also succeeded in capturing the city. He then continued to conquer much of the Hasmonean kingdom, occupying the entirety of northern Judea, the coast, and the territories east of the Jordan River. While doing so, he pillaged villages and ordered his soldiers to cannibalise women and children to instil psychological fear in his enemies. At that time, Salome Alexandra was notified of Cleopatra's approach towards Judea. 
Realising that her son had amassed a formidable force in Judea, Cleopatra appointed the Jewish generals Ananias and Chelkias to command her forces. She herself also sailed with a fleet towards Judea. When Cleopatra arrived at Ptolemais, the people refused her entry, so she besieged the city. Ptolemy, believing Syria was defenseless, withdrew to Cyprus after his miscalculation. While in pursuit of Ptolemy, Chelkias died in Coele-Syria. The war abruptly came to an end with Ptolemy fleeing to Cyprus. Alexander then approached Cleopatra. Bowing before her, he requested to retain his rule. Cleopatra was urged by her subordinates to annex Judea. However, Ananias demanded she consider the resident Egyptian Jews, who were the main support of her throne. This induced Cleopatra to moderate her designs on Judea, while Alexander agreed to meet her demands and suspend his campaigns. These negotiations took place at Scythopolis. Cleopatra died five years later. After her death, Alexander found himself free to pursue new campaigns. Alexander captured Gadara and fought to capture the strong fortress of Amathus in the Transjordan region, but was defeated. He was more successful in his expedition against the coastal cities, capturing Raphia and Anthedon. In 96 BCE, Jannaeus defeated the inhabitants of Gaza. This victory gained Judean control over the Mediterranean outlet of the main Nabataean trade route. Alexander initially returned his focus to the Transjordan region where, avenging his previous defeat, he destroyed Amathus. Determined to proceed with further campaigns despite his initial defeat at Amathus, Alexander set his focus on Gaza. A victory against the city was not so easily achieved. Gaza's general Apollodotus strategically employed a night attack against the Hasmonean army. With a force of two thousand less-skilled soldiers and ten thousand slaves, Gaza's military was able to deceive the Hasmonean army into believing they were being attacked by Ptolemy. 
The Gazans killed many, and the Hasmonean army fled the battle. When morning exposed the deceptive tactic, Alexander continued his assault but lost a thousand additional soldiers. The Gazans still remained defiant in hopes that the Nabataean kingdom would come to their aid. The city, however, would eventually suffer defeat due to its own leadership. Gaza at the time was governed by two brothers, Lysimachus and Apollodotus. Lysimachus finally convinced the people to surrender, and Alexander peacefully entered the city. Though he at first seemed peaceful, Alexander suddenly turned against the inhabitants. Some men killed their wives and children out of desperation, to ensure they would not be captured and enslaved. Others burned down their homes to prevent the soldiers from plundering them. The town council and five hundred civilians took refuge at the Temple of Apollo, where Alexander had them massacred. The Judean Civil War began after the conquest of Gaza around 99 BCE. Due to Jannaeus's victory at Gaza, the Nabataean kingdom no longer had direct access to the Mediterranean Sea. Alexander soon captured Gadara, which together with the loss of Gaza caused the Nabataeans to lose their main trade routes leading to Rome and Damascus. After losing Gadara, the Nabataean king Obodas I launched an attack against Alexander in a steep valley near Gadara, where Alexander barely managed to escape. After his defeat in the Battle of Gadara, Jannaeus returned to Jerusalem, only to be met with fierce Jewish opposition. During the Jewish holiday of Sukkot, Alexander Jannaeus, while officiating as the High Priest at the Temple in Jerusalem, demonstrated his displeasure against the Pharisees by refusing to perform the water libation ceremony properly: instead of pouring it on the altar, he poured it on his feet. The crowd responded with shock at his mockery and showed their displeasure by pelting him with etrogim (citrons). They made the situation worse by insulting him. 
They called him a descendant of captives, unfit to hold office and to sacrifice. Outraged, he killed six thousand people. Alexander also had wooden barriers built around the altar and the temple, preventing people from going near him. Only the priests were permitted to enter. This incident during the Feast of Tabernacles was a major factor leading up to the Judean Civil War. After Jannaeus succeeded early in the war, the rebels asked for Seleucid assistance. Judean insurgents joined forces with Demetrius III Eucaerus to fight against Jannaeus. Alexander gathered six thousand two hundred mercenaries and twenty thousand Jews for the battle, while Demetrius had forty thousand soldiers and three thousand horses. Both sides attempted to persuade the other's men to abandon their positions, but were unsuccessful. The Seleucid forces defeated Jannaeus at Shechem, and all of Alexander's mercenaries were killed in battle. This defeat forced Alexander to take refuge in the mountains. Out of sympathy for Jannaeus, six thousand Judean rebels ultimately returned to him. Alarmed by this news, Demetrius withdrew. Nevertheless, war between Jannaeus and the remaining rebels continued. They fought until Alexander achieved victory. Most of the rebels died in battle, while the rest fled to the city of Bethoma until they were defeated (Josephus, "Antiquities" 13.372-83). Jannaeus brought the surviving rebels back to Jerusalem, where he had eight hundred Jews, primarily Pharisees, crucified. Before their deaths, Alexander had the rebels' wives and children executed before their eyes while Jannaeus ate with his concubines. Alexander later returned to the Nabataeans the land he had seized in Moab and Galaaditis, in order to have them end their support for the Jewish rebels. The remaining rebels, who numbered eight thousand, fled by night in fear of Alexander. Afterward, all rebel hostility ceased, and Alexander's reign continued undisturbed. 
During his last years, Alexander continued campaigning in the east. The Nabataean king Aretas III managed to defeat Alexander in battle; however, Alexander continued expanding the Hasmonean kingdom into Transjordan. In Gaulanitis, he captured the cities of Gaulana, Seleucia, Gamala and Hippos; in Galaaditis, the cities of Pella, Dium, and Gerasa. Alexander had Pella destroyed for refusing to Judaize. Alexander captured all these cities in a period of three years (83–80 BCE). Three years later, Alexander succumbed to an illness during the siege of Ragaba. Having reigned 27 years, Alexander Jannaeus died at the age of forty-nine.
https://en.wikipedia.org/wiki?curid=1606
Alexios I Komnenos Alexios I Komnenos ( – 15 August 1118), Latinized Alexius I Comnenus, was Byzantine emperor from 1081 to 1118. Although he was not the founder of the Komnenian dynasty, it was during his reign that the Komnenos family came to full power. Inheriting a collapsing empire and faced with constant warfare during his reign against both the Seljuq Turks in Asia Minor and the Normans in the western Balkans, Alexios was able to curb the Byzantine decline and begin the military, financial, and territorial recovery known as the "Komnenian restoration". The basis for this recovery was a series of reforms initiated by Alexios. His appeals to Western Europe for help against the Turks also likely contributed to the convoking of the Crusades. Alexios was the son of the Domestic of the Schools John Komnenos and Anna Dalassene, and the nephew of Isaac I Komnenos (emperor 1057–1059). Alexios' father declined the throne on the abdication of Isaac, who was thus succeeded by four emperors of other families between 1059 and 1081. Under one of these emperors, Romanos IV Diogenes (1067–1071), Alexios served with distinction against the Seljuq Turks. Under Michael VII Doukas "Parapinakes" (1071–1078) and Nikephoros III Botaneiates (1078–1081), he was also employed, along with his elder brother Isaac, against rebels in Asia Minor, Thrace, and Epirus. In 1074, western mercenaries led by Roussel de Bailleul rebelled in Asia Minor, but Alexios successfully subdued them by 1076. In 1078, he was appointed commander of the field army in the West by Nikephoros III. In this capacity, Alexios defeated the rebellions of Nikephoros Bryennios the Elder (whose son or grandson later married Alexios' daughter Anna) and Nikephoros Basilakes, the former at the Battle of Kalavrye and the latter in a surprise night attack on his camp. Alexios was ordered to march against his brother-in-law Nikephoros Melissenos in Asia Minor but refused to fight his kinsman. 
This did not, however, lead to a demotion, as Alexios was needed to counter the expected invasion of the Normans of Southern Italy, led by Robert Guiscard. While Byzantine troops were assembling for the expedition, the Doukas faction at court approached Alexios and convinced him to join a conspiracy against Nikephoros III. The mother of Alexios, Anna Dalassene, was to play a prominent role in this coup d'état of 1081, along with the reigning empress, Maria of Alania. Maria, first married to Michael VII Doukas and then to Nikephoros III Botaneiates, was preoccupied with the future of her son by Michael VII, Constantine Doukas. Nikephoros III intended to leave the throne to one of his close relatives, and this resulted in Maria's ambivalence and alliance with the Komnenoi, though the real driving force behind this political alliance was Anna Dalassene. The empress was already closely connected to the Komnenoi through Maria's cousin Irene's marriage to Isaac Komnenos, so the Komnenoi brothers were able to see her under the pretense of a friendly family visit. Furthermore, to aid the conspiracy Maria had adopted Alexios as her son, though she was only five years older than he was. Maria was persuaded to do so on the advice of her own "Alans" and her eunuchs, who had been instigated by Isaac Komnenos. Given Anna's tight hold on her family, Alexios must have been adopted with her implicit approval. As a result, Alexios and Constantine, Maria's son, were now adoptive brothers, and both Isaac and Alexios took an oath that they would safeguard Constantine's rights as emperor. By secretly giving inside information to the Komnenoi, Maria was an invaluable ally. As stated in the Alexiad, Isaac and Alexios left Constantinople in mid-February 1081 to raise an army against Botaneiates. However, when the time came, Anna quickly and surreptitiously mobilized the remainder of the family and took refuge in the Hagia Sophia. 
From there she negotiated with the emperor for the safety of family members left in the capital, while protesting her sons' innocence of hostile actions. Under the pretext of making an evening visit to worship at the church, she deliberately excluded the grandson of Botaneiates and his loyal tutor, met with Alexios and Isaac, and fled for the forum of Constantine. The tutor discovered they were missing and eventually found them on the palace grounds, but Anna was able to convince him that they would return to the palace shortly. Then, to gain entrance to both the outer and inner sanctuary of the church, the women pretended to the gatekeepers that they were pilgrims from Cappadocia who had spent all their funds and wanted to worship before starting their return trip. However, before they could gain entry into the sanctuary, Straboromanos and the royal guards caught up with them to summon them back to the palace. Anna then protested that the family was in fear for their lives, that her sons were loyal subjects (Alexios and Isaac were discovered absent without leave) who had learned of a plot by enemies of the Komnenoi to have them both blinded, and that they had therefore fled the capital so that they might continue to be of loyal service to the emperor. She refused to go with them and demanded that they allow her to pray to the Mother of God for protection. This request was granted, and Anna then manifested her true theatrical and manipulative capabilities: Nikephoros III Botaneiates was forced into a public vow that he would grant protection to the family. Straboromanos tried to give Anna his cross, but for her it was not large enough for all bystanders to witness the oath. She also demanded that the cross be personally sent by Botaneiates as a vow of his good faith. He obliged, sending a complete assurance for the family with his own cross. 
At the emperor's further insistence, and for their own protection, they took refuge at the convent of Petrion, where they were eventually joined by Maria of Bulgaria, mother of Irene Doukaina. Botaneiates allowed them to be treated as refugees rather than as guests. They were allowed to have family members bring in their own food and were on good terms with the guards from whom they learned the latest news. Anna was highly successful in three important aspects of the revolt: she bought time for her sons to steal imperial horses from the stables and escape the city; she distracted the emperor, giving her sons time to gather and arm their troops; and she gave a false sense of security to Botaneiates that there was no real treasonous coup against him. After bribing the Western troops guarding the city, Isaac and Alexios Komnenos entered the capital victoriously on April 1, 1081. During this time, Alexios was rumored to be the lover of Empress Maria of Alania, the daughter of King Bagrat IV of Georgia, who had been successively married to Michael VII Doukas and his successor Nikephoros III Botaneiates, and who was renowned for her beauty. Alexios arranged for Maria to stay on the palace grounds, and it was thought that he was considering marrying her. However, his mother consolidated the Doukas family connection by arranging the Emperor's marriage to Irene Doukaina, granddaughter of the Caesar John Doukas, the uncle of Michael VII, who would not have supported Alexios otherwise. As a measure intended to keep the support of the Doukai, Alexios restored Constantine Doukas, the young son of Michael VII and Maria, as co-emperor and a little later betrothed him to his own first-born daughter Anna, who moved into the Mangana Palace with her fiancé and his mother. 
This situation changed drastically, however, when Alexios' first son John II Komnenos was born in 1087: Anna's engagement to Constantine was dissolved, and she was moved to the main palace to live with her mother and grandmother. Alexios became estranged from Maria, who was stripped of her imperial title and retired to a monastery, and Constantine Doukas was deprived of his status as co-emperor. Nevertheless, he remained on good terms with the imperial family and succumbed to his weak constitution soon afterwards. The nearly thirty-seven-year reign of Alexios was full of struggle. At the outset he faced the formidable attack of the Normans, led by Robert Guiscard and his son Bohemund, who took Dyrrhachium and Corfu and laid siege to Larissa in Thessaly (see Battle of Dyrrhachium). Alexios suffered several defeats before he was able to strike back with success. He enhanced his resistance by bribing the German king Henry IV with 360,000 gold pieces to attack the Normans in Italy, which forced the Normans to concentrate on their defenses at home in 1083–84. He also secured the alliance of Henry, Count of Monte Sant'Angelo, who controlled the Gargano Peninsula and dated his charters by Alexios' reign. Henry's allegiance would be the last example of Byzantine political control on peninsular Italy. The Norman danger subsided with the death of Guiscard in 1085, and the Byzantines recovered most of their losses. Alexios next had to deal with disturbances in Thrace, where the heretical sects of the Bogomils and the Paulicians revolted and made common cause with the Pechenegs from beyond the Danube. Paulician soldiers in imperial service likewise deserted during Alexios' battles with the Normans. As soon as the Norman threat had passed, Alexios set out to punish the rebels and deserters, confiscating their lands. 
This led to a further revolt near Philippopolis, and the commander of the field army in the west, Gregory Pakourianos, was defeated and killed in the ensuing battle. In 1087 the Pechenegs raided into Thrace, and Alexios crossed into Moesia to retaliate but failed to take Dorostolon (Silistra). During his retreat, the emperor was surrounded and worn down by the Pechenegs, who forced him to sign a truce and to pay protection money. In 1090 the Pechenegs invaded Thrace again, while Tzachas, the brother-in-law of the Sultan of Rum, launched a fleet and attempted to arrange a joint siege of Constantinople with the Pechenegs. Alexios overcame this crisis by entering into an alliance with a horde of 40,000 Cumans, with whose help he crushed the Pechenegs at Levounion in Thrace on 29 April 1091. This put an end to the Pecheneg threat, but in 1094 the Cumans began to raid the imperial territories in the Balkans. Led by a pretender claiming to be Constantine Diogenes, a long-dead son of the Emperor Romanos IV, the Cumans crossed the mountains and raided into eastern Thrace until their leader was eliminated at Adrianople. With the Balkans more or less pacified, Alexios could now turn his attention to Asia Minor, which had been almost completely overrun by the Seljuq Turks. By the time Alexios ascended the throne, the Seljuqs had taken most of Asia Minor. Alexios was able to secure much of the coastal regions by sending peasant soldiers to raid the Seljuq camps, but these victories were unable to stop the Turks altogether. As early as 1090, Alexios had taken conciliatory measures towards the Papacy, with the intention of seeking western support against the Seljuqs. In 1095 his ambassadors appeared before Pope Urban II at the Council of Piacenza. The help he sought from the West was simply some mercenary forces, not the immense hosts that arrived, to his consternation and embarrassment, after the pope preached the First Crusade at the Council of Clermont later that same year. 
This was the People's Crusade: a mob of mostly unarmed pilgrims led by the preacher Peter the Hermit. Not quite ready to supply this number of people as they traversed his territories, the emperor saw his Balkan possessions subjected to further pillage at the hands of his own allies. Eventually Alexios dealt with the People's Crusade by hustling them on to Asia Minor. There, they were massacred by the Turks of Kilij Arslan I at the Battle of Civetot in October 1096. The "Prince's Crusade", the second and much more formidable host of crusaders, gradually made its way to Constantinople, led in sections by Godfrey of Bouillon, Bohemond of Taranto, Raymond IV of Toulouse, and other important members of the western nobility. Alexios used the opportunity to meet the crusader leaders separately as they arrived, extracting from them oaths of homage and the promise to turn over conquered lands to the Byzantine Empire. Transferring each contingent into Asia, Alexios promised to supply them with provisions in return for their oaths of homage. The crusade was a notable success for Byzantium, as Alexios recovered a number of important cities and islands. The siege of Nicaea by the crusaders forced the city to surrender to the emperor in 1097, and the subsequent crusader victory at Dorylaion allowed the Byzantine forces to recover much of western Asia Minor. John Doukas re-established Byzantine rule in Chios, Rhodes, Smyrna, Ephesus, Sardis, and Philadelphia in 1097–1099. This success is ascribed by Alexios' daughter Anna to his policy and diplomacy, but by the Latin historians of the crusade to his treachery and deception. In 1099, a Byzantine fleet of ten ships was sent to assist the crusaders in capturing Laodicea and other coastal towns as far as Tripoli. 
The crusaders believed their oaths were made invalid when the Byzantine contingent under Tatikios failed to help them during the siege of Antioch; Bohemund, who had set himself up as Prince of Antioch, briefly went to war with Alexios in the Balkans, but he was blockaded by the Byzantine forces and agreed to become a vassal of Alexios by the Treaty of Devol in 1108. In 1116, though already terminally ill, Alexios conducted a series of defensive operations in Bithynia and Mysia to defend his Anatolian territories against the inroads of Malik Shah, the Seljuq Sultan of Iconium. In 1117 he moved on to the offensive and pushed his army deep into the Turkish-dominated Anatolian Plateau, where he defeated the Seljuq sultan at the Battle of Philomelion. During the last twenty years of his life Alexios lost much of his popularity. The years were marked by persecution of the followers of the Paulician and Bogomil heresies—one of his last acts was to publicly burn at the stake Basil, a Bogomil leader, with whom he had engaged in a theological dispute. In spite of the success of the First Crusade, Alexios also had to repel numerous attempts on his territory by the Seljuqs in 1110–1117. Alexios was for many years under the strong influence of an "eminence grise", his mother Anna Dalassene, a wise and immensely able politician whom, in a uniquely irregular fashion, he had crowned as "Augusta" instead of the rightful claimant to the title, his wife Irene Doukaina. Alexios was never happier than when taking part in military exercises, and he assumed personal command of his troops whenever possible. As such, Dalassene was the effective administrator of the Empire during Alexios' long absences on military campaigns: she was constantly at odds with her daughter-in-law and had assumed total responsibility for the upbringing and education of her granddaughter Anna Komnene. Alexios' last years were also troubled by anxieties over the succession. 
Although he had crowned his son John II Komnenos co-emperor at the age of five in 1092, his wife, Irene Doukaina, wished to alter the succession in favor of their daughter Anna and Anna's husband, Nikephoros Bryennios the Younger. Bryennios had been made "kaisar" (Caesar) and received the newly created title of "panhypersebastos" ("honoured above all"), and remained loyal to both Alexios and John. Nevertheless, the intrigues of Irene and Anna disturbed even Alexios' dying hours. Apart from all of his external enemies, a host of rebels also sought to overthrow Alexios, posing another major threat to his reign. Due to the troubled times the empire was enduring, he faced by far the greatest number of rebellions of all the Byzantine emperors. Under Alexios the debased "solidus" ("tetarteron" and "histamenon") was discontinued, and a gold coinage of higher fineness (generally .900–.950) was established in 1092, commonly called the "hyperpyron", at 4.45 grams. The "hyperpyron" was slightly smaller than the "solidus". It was introduced along with the electrum "aspron trachy", worth a third of a "hyperpyron" and about 25% gold and 75% silver; the billon "aspron trachy" or "stamenon", valued at 48 to the "hyperpyron" and with a 7% silver wash; and the copper "tetarteron" and "noummion", worth 18 and 36 to the billon "aspron trachy". Alexios' reform of the Byzantine monetary system was an important basis for the financial recovery and therefore supported the so-called Komnenian restoration, as the new coinage restored financial confidence. Alexios I had overcome a dangerous crisis and stabilized the Byzantine Empire, inaugurating a century of imperial prosperity and success. He had also profoundly altered the nature of the Byzantine government. 
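The relative values of the reformed denominations can be summarized as a small conversion sketch. The exchange rates are those stated above; the coin names and the choice of the billon trachy as the unit of account are illustrative assumptions, not official terminology:

```python
# Exchange rates of Alexios I's 1092 coinage, as given in the text, expressed
# as the value of one coin in billon "aspron trachy" (stamena).
RATES = {
    "hyperpyron": 48.0,           # gold, ~4.45 g, .900-.950 fine
    "electrum_trachy": 48.0 / 3,  # worth a third of a hyperpyron
    "billon_trachy": 1.0,         # the unit of account in this sketch
    "tetarteron": 1.0 / 18,       # copper, 18 to the billon trachy
    "noummion": 1.0 / 36,         # copper, 36 to the billon trachy
}

def convert(amount: float, from_coin: str, to_coin: str) -> float:
    """Convert an amount between denominations via the billon trachy."""
    return amount * RATES[from_coin] / RATES[to_coin]

# One gold hyperpyron corresponds to 48 * 18 = 864 copper tetartera.
print(convert(1, "hyperpyron", "tetarteron"))
```

Routing every conversion through a single reference denomination keeps the table small: five rates suffice to convert between any pair of coins.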
By seeking close alliances with powerful noble families, Alexios put an end to the tradition of imperial exclusivity and co-opted most of the nobility into his extended family and, through it, his government. Those who did not become part of this extended family were deprived of power and prestige. This measure, which was intended to diminish opposition, was paralleled by the introduction of new courtly dignities, like that of "panhypersebastos" given to Nikephoros Bryennios, or that of "sebastokrator" given to the emperor's brother Isaac Komnenos. Although this policy met with initial success, it gradually undermined the relative effectiveness of the imperial bureaucracy by placing family connections over merit. Alexios' policy of integrating the nobility bore the fruit of continuity: every Byzantine emperor who reigned after Alexios I Komnenos was related to him by either descent or marriage. By his marriage with Irene Doukaina, Alexios I had several children, among them his successor John II Komnenos and his daughter Anna Komnene.
https://en.wikipedia.org/wiki?curid=1613
Alexei Petrovich, Tsarevich of Russia Grand Duke Alexei Petrovich of Russia (28 February 1690 – 7 July 1718) was a Russian Tsarevich. He was born in Moscow, the son of Tsar Peter I and his first wife, Eudoxia Lopukhina. Alexei despised his father and repeatedly thwarted Peter's plans to raise him as successor to the throne. His brief defection to Austria scandalized the Russian government, leading to harsh repressions against Alexei and his associates. Alexei died after interrogation under torture, and his son Peter Alexeyevich became the new heir apparent. The young Alexei was brought up by his mother, who fostered an atmosphere of disdain towards his father, the Tsar. Alexei's relations with his father suffered from the hatred between his father and his mother, as it was very difficult for him to feel affection for his mother's worst persecutor. From the ages of 6 to 9, Alexei was educated by his tutor Vyazemsky, but after the removal of his mother by Peter the Great to the Suzdal Intercession Convent, Alexei was confined to the care of educated foreigners, who taught him history, geography, mathematics and French. In 1703, Alexei was ordered to follow the army to the field as a private in a bombardier regiment. In 1704, he was present at the capture of Narva. At this period, the preceptors of the Tsarevich had the highest opinion of his ability. Alexei had strong leanings towards archaeology and ecclesiology. However, Peter had wished his son and heir to dedicate himself to the service of new Russia, and demanded from him unceasing labour in order to maintain Russia's new wealth and power. Painful relations between father and son, quite apart from the prior personal antipathies, were therefore inevitable. It was an additional misfortune for Alexei that his father should have been too busy to attend to him just as he was growing up from boyhood to manhood. 
He was left in the hands of reactionary boyars and priests, who encouraged him to hate his father and wish for the death of the Tsar. In 1708 Peter sent Alexei to Smolensk to collect provender and recruits, and after that to Moscow to fortify it against Charles XII of Sweden. At the end of 1709, Alexei went to Dresden for one year. There, he finished lessons in French, German, mathematics and fortification. After his education, Alexei married Princess Charlotte of Brunswick-Wolfenbüttel, whose family was connected by marriage to many of the great families of Europe (i.e., Charlotte's sister Elizabeth was married to Holy Roman Emperor Charles VI, ruler of the Habsburg Monarchy). He met with Princess Charlotte; both were pleased with each other, and the marriage went forward. In theory, Alexei could have refused the marriage, and he had been encouraged by his father to at least meet his intended. "Why haven't you written to tell me what you thought about her?" wrote Peter in a letter dated 13 August 1710. The marriage contract was signed in September. The wedding was celebrated at Torgau, Germany, on 14 October 1711 (O.S.). One of the terms of the marriage contract agreed to by Alexei was that while any forthcoming children were to be raised in the Orthodox faith, Charlotte herself was allowed to retain her Protestant faith, an agreement opposed by Alexei's followers. As for the marriage itself, the first six months went well, but the union quickly failed over the following six. Alexei was constantly drunk; he pronounced his bride "pock-marked" and "too thin", insisted on separate apartments, and ignored her in public. Three weeks later, the bridegroom was hurried away by his father to Toruń to superintend the provisioning of the Russian troops in Poland. For the next twelve months Alexei was kept constantly on the move. 
His wife joined him at Toruń in December, but in April 1712 a peremptory ukase ordered him off to the army in Pomerania, and in the autumn of the same year he was forced to accompany his father on a tour of inspection through Finland. He had two children with Charlotte: Natalia, born in 1714, and Peter Alexeyevich, who would succeed as the Emperor Peter II in 1727. With Peter II's death in 1730, the direct male line of the House of Romanov became extinct. After the birth of Natalia in 1714, Alexei brought his long-time Finnish serf mistress Afrosinia to live in the palace. Some historians speculate that it was his conservative powerbase's disapproval of his foreign, non-Orthodox bride, more so than her appearance, that caused Alexei to spurn Charlotte. Another influence was Alexander Kikin, a high-placed official who had fallen out with the Tsar and had been deprived of his estates. Immediately on his return from Finland, Alexei was dispatched by his father to Staraya Russa and Lake Ladoga to see to the building of new ships. This was the last commission entrusted to him, since Peter had not been satisfied with his son's performance and his lack of enthusiasm. Nevertheless, Peter made one last effort to "reclaim" his son. On 22 October 1715 (O.S.), Charlotte died, after giving birth to a son, the grand-duke Peter, the future Emperor Peter II. On the day of the funeral, Peter sent Alexei a stern letter, urging him to take an interest in the affairs of the state. Peter threatened to cut him off if he did not acquiesce in his father's plans. Alexei wrote a pitiful reply to his father, offering to renounce the succession in favour of his infant son Peter. Peter agreed, but on the condition that Alexei remove himself as a dynastic threat and become a monk. While Alexei was pondering his options, on 26 August 1716 Peter wrote to Alexei from abroad, urging him, if he desired to remain tsarevich, to join him and the army without delay. 
Rather than face this ordeal, Alexei fled to Vienna and placed himself under the protection of his brother-in-law, the emperor Charles VI, who sent him for safety first to the Tirolean fortress of Ehrenberg (near Reutte), and finally to the castle of Sant'Elmo at Naples. He was accompanied throughout his journey by Afrosinia. That the emperor sincerely sympathized with Alexei, and suspected Peter of harbouring murderous designs against his son, is plain from his confidential letter to George I of Great Britain, whom he consulted on this delicate affair. Peter felt insulted: the flight of the tsarevich to a foreign potentate was a reproach and a scandal, and he had to be recovered and brought back to Russia at all costs. This difficult task was accomplished by Count Peter Tolstoi, the most subtle and unscrupulous of Peter's servants. Alexei would only consent to return on his father's solemnly swearing that, if he came back, he would not be punished in the least, but cherished as a son and allowed to live quietly on his estates and to marry Afrosinia. On 31 January 1718, the tsarevich reached Moscow. Peter had already determined to institute a searching inquisition in order to get to the bottom of the mystery of the flight. On 18 February a "confession" was extorted from Alexei which implicated most of his friends, and he then publicly renounced the succession to the throne in favour of the baby grand-duke Peter Petrovich. A brutal reign of terror ensued, in the course of which the ex-tsaritsa Eudoxia was dragged from her monastery and publicly tried for alleged adultery, while all who had in any way befriended Alexei were impaled or broken on the wheel, had their flesh torn with red-hot pincers or their bare feet slowly roasted over burning coals, and were otherwise lingeringly done to death. Alexei's servants were beheaded or had their tongues cut out. All this was done to terrorize the reactionaries and isolate the tsarevich. 
In April 1718 fresh confessions were extorted from, and in regard to, Alexei. This included the words of Afrosinia, who had turned state's evidence. "I shall bring back the old people..." Alexei is reported to have told her, "...and choose myself new ones according to my will; when I become sovereign I shall live in Moscow and leave Saint Petersburg simply as any other town; I won't launch any ships; I shall maintain troops only for defense, and won't make war on anyone; I shall be content with the old domains. In winter I shall live in Moscow, and in summer in Iaroslavl." Despite this and other hearsay evidence, there were no facts to go upon. The worst that could be brought against him was that he had wished his father's death. In the eyes of Peter, his son was now a self-convicted and most dangerous traitor, whose life was forfeit. But there was no getting over the fact that his father had sworn to pardon him and let him live in peace if he returned to Russia. The whole matter was solemnly submitted to a grand council of prelates, senators, ministers and other dignitaries on 13 June 1718 (O.S.). The clergy, for their part, declared the Tsarevich Alexei, "...had placed his Confidence in those who loved the ancient Customs, and that he had become acquainted with them by the Discourses they held, wherein they had constantly praised the ancient Manners, and spoke with Distaste of the Novelties his Father had introduced." Declaring this to be a civil rather than an ecclesiastical matter, the clergy left the matter to the tsar's own decision. At noon on 24 June (O.S.), the temporal dignitaries—the 126 members of both the Senate and magistrates that comprised the court—declared Alexei guilty and sentenced him to death. But the examination by torture continued, so desperate was Peter to uncover any possible collusion. On 19 June (O.S.), the weak and ailing tsarevich received twenty-five strokes with the knout, and then, on 24 June (O.S.), he was subject to fifteen more. 
On 26 June (O.S.), Alexei died in the Peter and Paul fortress in Saint Petersburg, two days after the senate had condemned him to death for conspiring rebellion against his father, and for hoping for the cooperation of the common people and the armed intervention of his brother-in-law, the emperor.
https://en.wikipedia.org/wiki?curid=1620
Andrew Jackson Andrew Jackson (March 15, 1767 – June 8, 1845) was an American soldier and statesman who served as the seventh president of the United States from 1829 to 1837. Before being elected to the presidency, Jackson gained fame as a general in the United States Army and served in both houses of the U.S. Congress. As president, Jackson sought to advance the rights of the "common man" against a "corrupt aristocracy" and to preserve the Union. Born in the colonial Carolinas to a Scotch-Irish family in the decade before the American Revolutionary War, Jackson became a frontier lawyer and married Rachel Donelson Robards. He served briefly in the United States House of Representatives and the United States Senate, representing Tennessee. After resigning, he served as a justice on the Tennessee Supreme Court from 1798 until 1804. Jackson purchased a property later known as The Hermitage, and became a wealthy, slaveowning planter. In 1801, he was appointed colonel of the Tennessee militia and was elected its commander the following year. He led troops during the Creek War of 1813–1814, winning the Battle of Horseshoe Bend. The subsequent Treaty of Fort Jackson required the Creek surrender of vast lands in present-day Alabama and Georgia. In the concurrent war against the British, Jackson's victory in 1815 at the Battle of New Orleans made him a national hero. Jackson then led U.S. forces in the First Seminole War, which led to the annexation of Florida from Spain. Jackson briefly served as Florida's first territorial governor before returning to the Senate. He ran for president in 1824, winning a plurality of the popular and electoral vote. As no candidate won an electoral majority, the House of Representatives elected John Quincy Adams in a contingent election. In reaction to the alleged "corrupt bargain" between Adams and Henry Clay and the ambitious agenda of President Adams, Jackson's supporters founded the Democratic Party. 
Jackson ran again in 1828, defeating Adams in a landslide. Jackson faced the threat of secession by South Carolina over what opponents called the "Tariff of Abominations." The crisis was defused when the tariff was amended, and Jackson threatened the use of military force if South Carolina attempted to secede. In Congress, Henry Clay led the effort to reauthorize the Second Bank of the United States. Jackson, regarding the Bank as a corrupt institution, vetoed the renewal of its charter. After a lengthy struggle, Jackson and his allies thoroughly dismantled the Bank. In 1835, Jackson became the only president to completely pay off the national debt, fulfilling a longtime goal. His presidency marked the beginning of the ascendancy of the party "spoils system" in American politics. In 1830, Jackson signed the Indian Removal Act, which forcibly relocated most members of the Native American tribes in the South to Indian Territory. The relocation process dispossessed the Indians and resulted in widespread death and disease. Jackson opposed the abolitionist movement, which grew stronger in his second term. In foreign affairs, Jackson's administration concluded a "most favored nation" treaty with Great Britain, settled claims of damages against France from the Napoleonic Wars, and recognized the Republic of Texas. In January 1835, he survived the first assassination attempt on a sitting president. In his retirement, Jackson remained active in Democratic Party politics, supporting the presidencies of Martin Van Buren and James K. Polk. Though fearful of its effects on the slavery debate, Jackson advocated the annexation of Texas, which was accomplished shortly before his death. Jackson has been widely revered in the United States as an advocate for democracy and the common man. Many of his actions proved divisive, garnering both fervent support and strong opposition from many in the country. 
His reputation has suffered since the 1970s, largely due to his role in Native American removal. Surveys of historians and scholars have nonetheless ranked Jackson favorably among U.S. presidents. Andrew Jackson was born on March 15, 1767, in the Waxhaws region of the Carolinas. His parents were Scots-Irish colonists Andrew Jackson and his wife Elizabeth Hutchinson, Presbyterians who had emigrated from Ulster, in present-day Northern Ireland, two years earlier. Jackson's father was born in Carrickfergus, County Antrim, around 1738. Jackson's parents lived in the village of Boneybefore, also in County Antrim. His paternal ancestors originated in Killingswold Grove, Yorkshire, England. When they immigrated to North America in 1765, Jackson's parents brought two children from Ireland, Hugh (born 1763) and Robert (born 1764). The family probably landed in Philadelphia. Most likely they traveled overland through the Appalachian Mountains to the Scots-Irish community in the Waxhaws, straddling the border between North and South Carolina. Jackson's father died in February 1767 at the age of 29, in a logging accident while clearing land, three weeks before his son Andrew was born. Jackson, his mother, and his brothers lived with Jackson's aunt and uncle in the Waxhaws region, and Jackson received schooling from two nearby priests. Jackson's exact birthplace is unclear because of a lack of knowledge of his mother's actions immediately following her husband's funeral. The area was so remote that the border between North and South Carolina had not been officially surveyed. In 1824, Jackson wrote a letter saying he had been born on the plantation of his uncle James Crawford in Lancaster County, South Carolina. Jackson may have claimed to be a South Carolinian because the state was considering nullification of the Tariff of 1824, which he opposed. In the mid-1850s, second-hand evidence indicated that he might have been born at a different uncle's home in North Carolina. 
As a young boy, Jackson was easily offended and was considered something of a bully. He was, however, said to have taken a group of younger and weaker boys under his wing and been very kind to them. During the Revolutionary War, Jackson's eldest brother, Hugh, died from heat exhaustion after the Battle of Stono Ferry on June 20, 1779. Anti-British sentiment intensified following the brutal Waxhaws Massacre on May 29, 1780. Jackson's mother encouraged him and his elder brother Robert to attend the local militia drills. Soon, they began to help the militia as couriers. They served under Colonel William Richardson Davie at the Battle of Hanging Rock on August 6. Andrew and Robert were captured by the British in April 1781 while staying at the home of the Crawford family. When Andrew refused to clean the boots of a British officer, the officer slashed at the youth with a sword, leaving him with scars on his left hand and head, as well as an intense hatred for the British. Robert also refused to do as commanded and was struck with the sword. The two brothers were held as prisoners, contracted smallpox, and nearly starved to death in captivity. Later that year, their mother Elizabeth secured the brothers' release. She then began to walk both boys back to their home in the Waxhaws, a distance of some 40 miles (64 km). Both were in very poor health. Robert, who was far worse, rode on the only horse they had, while Andrew walked behind them. In the final two hours of the journey, a torrential downpour began which worsened the effects of the smallpox. Within two days of arriving back home, Robert was dead and Andrew in mortal danger. After nursing Andrew back to health, Elizabeth volunteered to nurse American prisoners of war on board two British ships in the Charleston harbor, where there had been an outbreak of cholera. In November, she died from the disease and was buried in an unmarked grave. Andrew became an orphan at age 14. 
He blamed the British personally for the loss of his brothers and mother. After the Revolutionary War, Jackson received a sporadic education in a local Waxhaw school. On bad terms with much of his extended family, he boarded with several different people. In 1781, he worked for a time as a saddle-maker, and eventually taught school. He apparently prospered in neither profession. In 1784, he left the Waxhaws region for Salisbury, North Carolina, where he studied law under attorney Spruce Macay. With the help of various lawyers, he was able to learn enough to qualify for the bar. In September 1787, Jackson was admitted to the North Carolina bar. Shortly thereafter, his friend John McNairy helped him get appointed to a vacant prosecutor position in the Western District of North Carolina, which would later become the state of Tennessee. During his travel west, Jackson bought his first slave and in 1788, having been offended by fellow lawyer Waightstill Avery, fought his first duel. The duel ended with both men firing into the air, having made a secret agreement to do so before the engagement. Jackson moved to the small frontier town of Nashville in 1788, where he lived as a boarder with Rachel Stockly Donelson, the widow of John Donelson. Here Jackson became acquainted with their daughter, Rachel Donelson Robards. The younger Rachel was in an unhappy marriage with Captain Lewis Robards; he was subject to fits of jealous rage. The two were separated in 1790. According to Jackson, he married Rachel after hearing that Robards had obtained a divorce. Her divorce had not been made final, making Rachel's marriage to Jackson bigamous and therefore invalid. After the divorce was officially completed, Rachel and Jackson remarried in 1794. To complicate matters further, evidence shows that Rachel had been living with Jackson and referred to herself as Mrs. Jackson before the petition for divorce was ever made. 
It was not uncommon on the frontier for relationships to be formed and dissolved unofficially, as long as they were recognized by the community. In 1794, Jackson formed a partnership with fellow lawyer John Overton, dealing in claims for land reserved by treaty for the Cherokee and Chickasaw. Like many of their contemporaries, they dealt in such claims although the land was in Indian country. Most of the transactions involved grants made under the 'land grab' act of 1783 that briefly opened Indian lands west of the Appalachians within North Carolina to claim by that state's residents. He was one of the three original investors who founded Memphis, Tennessee, in 1819. After moving to Nashville, Jackson became a protege of William Blount, a friend of the Donelsons and one of the most powerful men in the territory. Jackson became attorney general in 1791, and he won election as a delegate to the Tennessee constitutional convention in 1796. When Tennessee achieved statehood that year, he was elected its only U.S. Representative. He was a member of the Democratic-Republican Party, the dominant party in Tennessee. As a representative, Jackson staunchly defended the rights of Tennesseans against the Indians. He strongly opposed the Jay Treaty and criticized George Washington for allegedly removing Republicans from public office. Jackson joined several other Republican congressmen in voting against a resolution of thanks for Washington, a vote that would later haunt him when he sought the presidency. In 1797, the state legislature elected him as U.S. Senator. Jackson seldom participated in debate and found the job dissatisfying. He pronounced himself "disgusted with the administration" of President John Adams and resigned the following year without explanation. Upon returning home, with strong support from western Tennessee, he was elected to serve as a judge of the Tennessee Supreme Court at an annual salary of $600. 
Jackson's service as a judge is generally viewed as a success and earned him a reputation for honesty and good decision-making. Jackson resigned the judgeship in 1804. His official reason for resigning was ill health. He had been suffering financially from poor land ventures, and so it is also possible that he wanted to return full-time to his business interests. After arriving in Tennessee, Jackson won appointment as judge advocate of the Tennessee militia. In 1802, while serving on the Tennessee Supreme Court, he declared his candidacy for major general, or commander, of the Tennessee militia, a position voted on by the officers. At that time, most free men were members of the militia. The organizations, intended to be called up in case of conflict with Europeans or Indians, resembled large social clubs. Jackson saw it as a way to advance his stature. With strong support from western Tennessee, he tied with John Sevier with seventeen votes. Sevier was a popular Revolutionary War veteran and former governor, the recognized leader of politics in eastern Tennessee. On February 5, Governor Archibald Roane broke the tie in Jackson's favor. Jackson had also presented Roane with evidence of land fraud against Sevier. Subsequently, in 1803, when Sevier announced his intention to regain the governorship, Roane released the evidence. Jackson then published a newspaper article accusing Sevier of fraud and bribery. Sevier insulted Jackson in public, and the two nearly fought a duel over the matter. Despite the charges leveled against Sevier, he defeated Roane and continued to serve as governor until 1809. In addition to his legal and political career, Jackson prospered as a planter, slave owner, and merchant. He built a home and the first general store in Gallatin, Tennessee, in 1803. The next year, he acquired the Hermitage, a plantation in Davidson County, near Nashville. He later added more land to the plantation.
The primary crop was cotton, grown by slaves—Jackson began with nine, owned as many as 44 by 1820, and later up to 150, placing him among the planter elite. Jackson also co-owned with his son Andrew Jackson Jr. the Halcyon plantation in Coahoma County, Mississippi, which housed 51 slaves at the time of his death. Throughout his lifetime, Jackson may have owned as many as 300 slaves. Jackson owned men, women, and children as slaves on three sections of the Hermitage plantation. Slaves lived in extended family units of between five and ten persons and were quartered in cabins made either of brick or logs. The size and quality of the Hermitage slave quarters exceeded the standards of his time. To help slaves acquire food, Jackson supplied them with guns, knives, and fishing equipment. At times he paid his slaves with money and coins to trade in local markets. The Hermitage plantation was a profit-making enterprise. Jackson permitted slaves to be whipped to increase productivity or if he believed his slaves' offenses were severe enough. At various times he posted advertisements for fugitive slaves who had escaped from his plantation. In one advertisement placed in the Tennessee Gazette in October 1804, Jackson offered "ten dollars extra, for every hundred lashes any person will give him, to the amount of three hundred." The controversy surrounding his marriage to Rachel remained a sore point for Jackson, who deeply resented attacks on his wife's honor. By May 1806, Charles Dickinson, who like Jackson raced horses, had published an attack on Jackson in the local newspaper, which prompted a written challenge from Jackson to a duel. Since Dickinson was considered an expert shot, Jackson determined it would be best to let Dickinson turn and fire first, hoping that his aim might be spoiled in his quickness; Jackson would wait and take careful aim at Dickinson. Dickinson did fire first, hitting Jackson in the chest.
The bullet that struck Jackson was so close to his heart that it could not be removed. Under the rules of dueling, Dickinson had to remain still as Jackson took aim and fired, killing him. Jackson's behavior in the duel outraged men in Tennessee, who called it a brutal, cold-blooded killing and saddled Jackson with a reputation as a violent, vengeful man. He became a social outcast. After the Sevier affair and the duel, Jackson was looking for a way to salvage his reputation. He chose to align himself with former vice president Aaron Burr. Burr's political career ended after he killed Alexander Hamilton in a duel in 1804; in 1805 he set out on a tour of what was then the western United States. Burr was extremely well received by the people of Tennessee, and stayed for five days at the Hermitage. Burr's true intentions are not known with certainty. He seems to have been planning a military operation to conquer Spanish Florida and drive the Spanish from Texas. To many westerners like Jackson, the promise seemed enticing. Western American settlers had long held bitter feelings towards Spain due to territorial disputes and the persistent failure of the Spanish to keep Indians living on their lands from raiding American settlements. On October 4, 1806, Jackson addressed the Tennessee militia, declaring that the men should be "at a moment's warning ready to march." On the same day, he wrote to James Winchester, proclaiming that the United States "can conquer not only the Floridas [at that time there was an East Florida and a West Florida], but all Spanish North America." Jackson agreed to provide boats and other provisions for the expedition. However, on November 10, he learned from a military captain that Burr's plans apparently included the seizure of New Orleans, then part of the Louisiana Territory of the United States, and its incorporation, along with lands won from the Spanish, into a new empire.
He was further outraged when he learned from the same man of the involvement of Brigadier General James Wilkinson, whom he deeply disliked, in the plan. Jackson acted cautiously at first, but wrote letters to public officials, including President Thomas Jefferson, vaguely warning them about the scheme. In December, Jefferson, a political opponent of Burr, issued a proclamation declaring that a treasonous plot was underway in the West and calling for the arrest of the perpetrators. Jackson, safe from arrest because of his extensive paper trail, organized the militia. Burr was soon captured, and the men were sent home. Jackson traveled to Richmond, Virginia, to testify on Burr's behalf at trial. The defense team decided against placing him on the witness stand, fearing his remarks were too provocative. Burr was acquitted of treason, despite Jefferson's efforts to have him convicted. Jackson endorsed James Monroe for president in 1808 against James Madison; Madison was part of the Jeffersonian wing of the Democratic-Republican Party. Jackson lived relatively quietly at the Hermitage in the years after the Burr trial, eventually accumulating 640 acres of land. Leading up to 1812, the United States found itself increasingly drawn into international conflict. Formal hostilities with Spain or France never materialized, but tensions with Britain increased for a number of reasons. Among these was the desire of many Americans for more land, particularly British Canada and Florida, the latter still controlled by Spain, Britain's European ally. On June 18, 1812, Congress officially declared war on the United Kingdom of Great Britain and Ireland, beginning the War of 1812. Jackson responded enthusiastically, sending a letter to Washington offering 2,500 volunteers. However, the men were not called up for many months. Biographer Robert V. Remini claims that Jackson saw the apparent slight as payback by the Madison administration for his support of Burr and Monroe.
Meanwhile, the United States military repeatedly suffered devastating defeats on the battlefield. On January 10, 1813, Jackson led an army of 2,071 volunteers to New Orleans to defend the region against British and Native American attacks. He had been instructed to serve under General Wilkinson, who commanded Federal forces in New Orleans. Because provisions were inadequate, Wilkinson ordered Jackson to halt in Natchez, then part of the Mississippi Territory, and await further orders. Jackson reluctantly obeyed. The newly appointed Secretary of War, John Armstrong Jr., sent a letter to Jackson dated February 6 ordering him to dismiss his forces and to turn over his supplies to Wilkinson. In reply to Armstrong on March 15, Jackson defended the character and readiness of his men, and promised to turn over his supplies. He also promised, instead of dismissing the troops without provisions in Natchez, to march them back to Nashville. The march was filled with agony. Many of the men had fallen ill. Jackson and his officers turned over their horses to the sick. He paid for provisions for the men out of his own pocket. The soldiers began referring to their commander as "Hickory" because of his toughness, and Jackson became known as "Old Hickory." The army arrived in Nashville within about a month. Jackson's actions earned him respect and praise from the people of Tennessee. Jackson faced financial ruin until his former aide-de-camp Thomas Benton persuaded Secretary Armstrong to order the army to pay the expenses Jackson had incurred. On June 14, Jackson served as a second in a duel on behalf of his junior officer William Carroll against Jesse Benton, the brother of Thomas. On September 3, Jackson and his top cavalry officer, Brigadier General John Coffee, were involved in a street brawl with the Benton brothers. Jackson was severely wounded by Jesse with a gunshot to the shoulder. Meanwhile, war had broken out with the Red Stick faction of the Creeks, and Jackson, with 2,500 men, was ordered to crush the hostile Indians.
On October 10, he set out on the expedition, his arm still in a sling from fighting the Bentons. Jackson established Fort Strother as a supply base. On November 3, Coffee defeated a band of Red Sticks at the Battle of Tallushatchee. Coming to the relief of friendly Creeks besieged by Red Sticks, Jackson won another decisive victory at the Battle of Talladega. In the winter, Jackson, encamped at Fort Strother, faced a severe shortage of troops due to the expiration of enlistments and chronic desertions. He sent Coffee with the cavalry, which subsequently deserted him, back to Tennessee to secure more enlistments. Jackson decided to combine his force with that of the Georgia militia, and marched to meet the Georgia troops. On January 22–24, 1814, while on their way, the Tennessee militia and allied Muscogee were attacked by the Red Sticks at the Battles of Emuckfaw and Enotachopo Creek. Jackson's troops repelled the attackers but, outnumbered, were forced to withdraw to Fort Strother. Jackson, now with over 2,000 troops, marched most of his army south to confront the Red Sticks at a fortress they had constructed at a bend in the Tallapoosa River. On March 27, enjoying an advantage of more than 2 to 1, he engaged them at the Battle of Horseshoe Bend. An initial artillery barrage did little damage to the well-constructed fort. A subsequent infantry charge, in addition to an assault by Coffee's cavalry and diversions caused by the friendly Creeks, overwhelmed the Red Sticks. The campaign ended three weeks later with Red Eagle's surrender, although some Red Sticks such as McQueen fled to East Florida. On June 8, Jackson accepted a commission as brigadier general in the United States Army, and 10 days later became a major general, in command of the Seventh Military Division. Subsequently, Jackson, with Madison's approval, imposed the Treaty of Fort Jackson.
The treaty required the Muscogee, including those who had not joined the Red Sticks, to surrender 23 million acres (8,093,713 ha) of land to the United States. Most of the Creeks bitterly acquiesced; they coined their own name for Jackson, "Jacksa Chula Harjo," or "Jackson, old and fierce." Though in ill health from dysentery, Jackson turned his attention to defeating Spanish and British forces. Jackson accused the Spanish of arming the Red Sticks and of violating the terms of their neutrality by allowing British soldiers into the Floridas. The first charge was true, while the second ignored the fact that it was Jackson's threats to invade Florida which had caused them to seek British protection. In the November 7 Battle of Pensacola, Jackson defeated British and Spanish forces in a short skirmish. The Spanish surrendered and the British fled. Weeks later, he learned that the British were planning an attack on New Orleans, which sat near the mouth of the Mississippi River and held immense strategic and commercial value. Jackson abandoned Pensacola to the Spanish, placed a force in Mobile, Alabama, to guard against a possible invasion there, and rushed the rest of his force west to defend the city. After arriving in New Orleans on December 1, 1814, Jackson instituted martial law in the city, as he worried about the loyalty of the city's Creole and Spanish inhabitants. At the same time, he allied with Jean Lafitte's smugglers and organized military units consisting of African-Americans and Muscogees, in addition to recruiting volunteers in the city. Jackson received some criticism for paying white and non-white volunteers the same salary. These forces, along with U.S. Army regulars and volunteers from surrounding states, joined with Jackson's force in defending New Orleans. The approaching British force, led by Admiral Alexander Cochrane and later General Edward Pakenham, consisted of over 10,000 soldiers, many of whom had served in the Napoleonic Wars.
Jackson had only about 5,000 men, most of whom were inexperienced and poorly trained. The British arrived on the east bank of the Mississippi River on the morning of December 23. That evening, Jackson attacked the British and temporarily drove them back. On January 8, 1815, the British mounted their main attack on Jackson's defenses. An initial artillery barrage did little damage to the well-constructed American defenses. Once the morning fog had cleared, the British launched a frontal assault, and their troops made easy targets for the Americans protected by their parapets. Although the British temporarily drove back the American right flank, the attack ended in disaster. For the battle on January 8, Jackson admitted to only 71 total casualties. Of these, 13 men were killed, 39 wounded, and 19 missing or captured. The British admitted 2,037 casualties. Of these, 291 men were killed (including Pakenham), 1,262 wounded, and 484 missing or captured. After the battle, the British retreated from the area, and open hostilities ended shortly thereafter when word spread that the Treaty of Ghent had been signed in Europe that December. Coming in the waning days of the war, Jackson's victory made him a national hero, as the country celebrated the end of what many called the "Second American Revolution" against the British. By a Congressional resolution on February 27, 1815, Jackson was given the Thanks of Congress and awarded a Congressional Gold Medal. Alexis de Tocqueville, whom a 2001 commentator described as "underwhelmed" by Jackson, later wrote in "Democracy in America" that Jackson "was raised to the Presidency, and has been maintained there, solely by the recollection of a victory which he gained, twenty years ago, under the walls of New Orleans."
Some have claimed that, because the war had already been ended by the preliminary signing of the Treaty of Ghent, Jackson's victory at New Orleans was without importance aside from making him a celebrated figure. However, the Spanish, who had sold the Louisiana Territory to France, disputed France's right to sell it to the United States through the Louisiana Purchase in 1803. In April 1815, Spain, assuming that the British had won at New Orleans, asked for the return of the Louisiana Territory. Spanish representatives claimed to have been assured that they would receive the land back. Furthermore, Article IX of the Treaty of Ghent stipulated that the United States must return land taken from the Creeks to their original owners, essentially undoing the Treaty of Fort Jackson. Thanks to Jackson's victory at New Orleans, the American government felt that it could safely ignore that provision, and it kept the lands that Jackson had acquired. Jackson, still not knowing for certain of the treaty's signing, refused to lift martial law in the city. State senator Louis Louaillier had written an anonymous piece in the New Orleans newspaper, challenging Jackson's refusal to release the militia after the British ceded the field of battle. Jackson attempted to find the author and, after Louaillier admitted to having written the piece, had him imprisoned. In March 1815, after U.S. District Court Judge Dominic A. Hall signed a writ of "habeas corpus" on behalf of Louaillier, Jackson ordered Hall's arrest. Jackson did not relent in his campaign of suppressing dissent until he had ordered the arrest of a Louisiana legislator, a federal judge, and a lawyer, and until the intervention of State Judge Joshua Lewis. Lewis was simultaneously serving under Jackson in the militia and had also signed a writ of "habeas corpus" against Jackson, his commanding officer, seeking Judge Hall's release.
Civilian authorities in New Orleans had reason to fear Jackson—he summarily ordered the execution of six members of the militia who had attempted to leave. Their deaths were not well publicized until the Coffin Handbills were circulated during his 1828 presidential campaign. Following the war, Jackson remained in command of troops on the southern border of the U.S. He conducted business from the Hermitage. He signed treaties with the Cherokee and Chickasaw that gained for the United States large parts of Tennessee and Kentucky. The treaty with the Chickasaw, not finally agreed to until 1818, is commonly known as the Jackson Purchase. Several Native American tribes, which became known as the Seminole, straddled the border between the U.S. and Florida. The Seminole, in alliance with escaped slaves, frequently raided Georgia settlements before retreating back into Florida. These skirmishes continually escalated, and the conflict is now known as the First Seminole War. In 1816, Jackson led a detachment into Florida that destroyed the Negro Fort, a community of escaped slaves and their descendants. Jackson was ordered by President Monroe in December 1817 to lead a campaign in Georgia against the Seminole and Creek Indians. Jackson was also charged with preventing Florida from becoming a refuge for runaway slaves, after Spain promised freedom to fugitive slaves. Critics later alleged that Jackson exceeded orders in his Florida actions. His orders from President Monroe were to "terminate the conflict." Jackson believed the best way to do this was to seize Florida from Spain once and for all. Before departing, Jackson wrote to Monroe, "Let it be signified to me through any channel ... that the possession of the Floridas would be desirable to the United States, and in sixty days it will be accomplished." Jackson invaded Florida on March 15, 1818, capturing Pensacola.
He crushed Seminole and Spanish resistance in the region and captured two British agents, Robert Ambrister and Alexander Arbuthnot, who had been working with the Seminole. After a brief trial, Jackson executed both of them, causing a diplomatic incident with the British. Jackson's actions polarized Monroe's cabinet, some of whom argued that Jackson had gone against Monroe's orders and violated the Constitution, since the United States had not declared war upon Spain. He was defended by Secretary of State John Quincy Adams. Adams thought that Jackson's conquest of Florida would force Spain to finally sell the province, and Spain did indeed sell Florida to the United States in the Adams–Onís Treaty of 1819. A congressional investigation exonerated Jackson, but he was deeply angered by the criticism he received, particularly from Speaker of the House Henry Clay. After the ratification of the Adams–Onís Treaty in 1821, Jackson resigned from the army and briefly served as the territorial Governor of Florida before returning to Tennessee. In the spring of 1822, Jackson suffered a physical breakdown. His body had two bullets lodged in it, and he had grown exhausted from years of hard military campaigning. He regularly coughed up blood, and his entire body shook. Jackson feared that he was on the brink of death. After several months of rest, he recovered. During his convalescence, Jackson's thoughts increasingly turned to national affairs. He obsessed over rampant corruption in the Monroe administration and grew to detest the Second Bank of the United States, blaming it for causing the Panic of 1819 by contracting credit. Jackson turned down an offer to run for governor of his home state, but accepted John Overton's plan to have the legislature nominate him for president. On July 22, 1822, he was officially nominated by the Tennessee legislature. Jackson had come to dislike Secretary of the Treasury William H. 
Crawford, who had been the most vocal critic of Jackson in Monroe's cabinet, and he hoped to prevent Tennessee's electoral votes from going to Crawford. Yet Jackson's nomination garnered a welcoming response even outside of Tennessee, as many Americans appreciated Jackson's attacks on banks. The Panic of 1819 had devastated the fortunes of many, and banks and politicians seen as supportive of banks were particularly unpopular. With his growing political viability, Jackson emerged as one of the five major presidential candidates, along with Crawford, Adams, Clay, and Secretary of War John C. Calhoun. During the Era of Good Feelings, the Federalist Party had faded away, and all five presidential contenders were members of the Democratic-Republican Party. Jackson's campaign promoted him as a defender of the common people, as well as the one candidate who could rise above sectional divisions. On the major issues of the day, most prominently the tariff, Jackson expressed centrist beliefs, and opponents accused him of obfuscating his positions. At the forefront of Jackson's campaign was combating corruption. Jackson vowed to restore honesty in government and to scale back its excesses. In 1823, Jackson reluctantly allowed his name to be placed in contention for one of Tennessee's U.S. Senate seats. The move was independently orchestrated by his advisors William Berkeley Lewis and U.S. Senator John Eaton in order to defeat incumbent John Williams, who openly opposed his presidential candidacy. The legislature narrowly elected him. His return to the Senate after 24 years, 11 months, and 3 days marks the second-longest gap in service to the chamber in history. Although Jackson was reluctant to serve once more in the Senate, he was appointed chairman of the Committee on Military Affairs.
Eaton wrote to Rachel that Jackson as a senator was "in harmony and good understanding with every body," including Thomas Hart Benton, now a senator from Missouri, with whom Jackson had fought in 1813. Meanwhile, Jackson himself did little active campaigning for the presidency, as was customary. Eaton updated an already-written biography of him in preparation for the campaign and, along with others, wrote letters to newspapers praising Jackson's record and past conduct. Democratic-Republican presidential nominees had historically been chosen by informal Congressional nominating caucuses, but this method had become unpopular. In 1824, most of the Democratic-Republicans in Congress boycotted the caucus. Those who attended backed Crawford for president and Albert Gallatin for vice president. A Pennsylvania convention nominated Jackson for president a month later, stating that the irregular caucus ignored the "voice of the people" in the "vain hope that the American people might be thus deceived into a belief that he [Crawford] was the regular democratic candidate." Gallatin criticized Jackson as "an honest man and the idol of the worshipers of military glory, but from incapacity, military habits, and habitual disregard of laws and constitutional provisions, altogether unfit for the office." After Jackson won the Pennsylvania nomination, Calhoun dropped out of the presidential race and successfully sought the vice presidency instead. In the presidential election, Jackson won a plurality of the electoral vote, taking several southern and western states as well as the mid-Atlantic states of Pennsylvania and New Jersey. He was the only candidate to win states outside of his regional base, as Adams dominated New England, Clay took three western states, and Crawford won Virginia and Georgia. Jackson won a plurality of the popular vote, taking 42 percent, although not all states held a popular vote for the presidency. 
He won 99 electoral votes, more than any other candidate, but still short of 131, which he needed for a true majority. With no candidate having won a majority of the electoral votes, the House of Representatives held a contingent election under the terms of the Twelfth Amendment. The amendment specifies that only the top three electoral vote-winners are eligible to be elected by the House, so Clay was eliminated from contention. Jackson believed that he was likely to win this contingent election, as Crawford and Adams lacked Jackson's national appeal, and Crawford had suffered a debilitating stroke that made many doubt his physical fitness for the presidency. Clay, who as Speaker of the House presided over the election, saw Jackson as a dangerous demagogue who might topple the republic in favor of his own leadership. He threw his support behind Adams, who shared Clay's support for federally funded internal improvements such as roads and canals. With Clay's backing, Adams won the contingent election on the first ballot. Furious supporters of Jackson accused Clay and Adams of having reached a "corrupt bargain" after Adams appointed Clay as his Secretary of State. "So you see," Jackson growled, "the Judas of the West has closed the contract and receive the thirty pieces of silver. [H]is end will be the same." After the election, Jackson resigned his Senate seat and returned to Tennessee. Almost immediately, opposition arose to the Adams presidency. Jackson opposed Adams's plan to involve the U.S. in Panama's quest for independence, writing, "The moment we engage in confederations, or alliances with any nation, we may from that time date the down fall of our republic." Adams damaged his standing in his first annual message to Congress, when he argued that Congress must not give the world the impression "that we are palsied by the will of our constituents." 
Jackson was nominated for president by the Tennessee legislature in October 1825, more than three years before the 1828 election. It was the earliest such nomination in presidential history, and it attested to the fact that Jackson's supporters began the 1828 campaign almost as soon as the 1824 campaign ended. Adams's presidency foundered, as his ambitious agenda faced defeat in a new era of mass politics. Critics led by Jackson attacked Adams's policies as a dangerous expansion of federal power. New York Senator Martin Van Buren, who had been a prominent supporter of Crawford in 1824, emerged as one of the strongest opponents of Adams's policies, and he settled on Jackson as his preferred candidate in 1828. Van Buren was joined by Vice President Calhoun, who opposed much of Adams's agenda on states' rights grounds. Van Buren and other Jackson allies established numerous pro-Jackson newspapers and clubs around the country, while Jackson avoided campaigning but made himself available to visitors at his Hermitage plantation. In the election, Jackson won a commanding 56 percent of the popular vote and 68 percent of the electoral vote. The election marked the definitive end of the one-party Era of Good Feelings, as Jackson's supporters coalesced into the Democratic Party and Adams's followers became known as the National Republicans. In the large Scots-Irish community that was especially numerous in the rural South and Southwest, Jackson was a favorite. The campaign was heavily personal. As was the custom at the time, neither candidate personally campaigned, but their political followers organized campaign events. Both candidates were rhetorically attacked in the press. Jackson was labeled a slave trader who bought and sold slaves and moved them about in defiance of higher standards of slaveholder behavior. A series of pamphlets known as the Coffin Handbills were published to attack Jackson, one of which revealed his order to execute soldiers at New Orleans. 
Another accused him of engaging in cannibalism by eating the bodies of American Indians killed in battle, while still another labeled his mother a "common prostitute" and stated that Jackson's father was a "mulatto man." Rachel Jackson was also a frequent target of attacks, and was widely accused of bigamy, a reference to the controversial circumstances of her marriage to Jackson. Jackson's campaigners fired back by claiming that while serving as Minister to Russia, Adams had procured a young girl to serve as a prostitute for Emperor Alexander I. They also stated that Adams had a billiard table in the White House and that he had charged the government for it. Rachel had been under extreme stress during the election, often struggling while Jackson was away, and her physical condition worsened as the campaign wore on. Jackson described her symptoms as "excruciating pain in the left shoulder, arm, and breast." After struggling for three days, Rachel finally died of a heart attack on December 22, 1828, three weeks after her husband's victory in the election (which began on October 31 and ended on December 2) and 10 weeks before Jackson took office as president. A distraught Jackson had to be pulled from her so the undertaker could prepare the body. He felt that the abuse from Adams's supporters had hastened her death and never forgave him. Rachel was buried at the Hermitage on Christmas Eve. "May God Almighty forgive her murderers," Jackson swore at her funeral. "I never can." Jackson's name has become associated with Jacksonian democracy: the expansion of democracy and the transfer of some political power from established elites to ordinary voters organized in political parties. "The Age of Jackson" shaped the national agenda and American politics. Jackson's philosophy as president was similar to that of Jefferson, advocating republican values held by the Revolutionary generation. 
Jackson took a moral tone, with the belief that agrarian sympathies and strong states' rights, with a limited federal government, would produce less corruption. He feared that monied and business interests would corrupt republican values. When South Carolina opposed the tariff law, he took a strong line in favor of nationalism and against secession. Jackson believed in the ability of the people to "arrive at right conclusions." They had the right not only to elect but to "instruct their agents & representatives." Office holders should either obey the popular will or resign. He rejected the view of a powerful and independent Supreme Court with binding decisions, arguing that "the Congress, the Executive, and the Court must each for itself be guided by its own opinion of the Constitution." Jackson thought that Supreme Court justices should be made to stand for election, and believed in strict constructionism as the best way to ensure democratic rule. He called for term limits on presidents and the abolition of the Electoral College. According to Robert V. Remini, Jackson "was far ahead of his times–and maybe even further than this country can ever achieve." Jackson departed from the Hermitage on January 19 and arrived in Washington on February 11. He then set about choosing his cabinet members. Jackson chose Van Buren as expected for Secretary of State, Eaton of Tennessee as Secretary of War, Samuel D. Ingham of Pennsylvania as Secretary of Treasury, John Branch of North Carolina as Secretary of Navy, John M. Berrien of Georgia as Attorney General, and William T. Barry of Kentucky as Postmaster General. Jackson's first choice of cabinet proved to be unsuccessful, full of bitter partisanship and gossip. Jackson blamed Adams in part for what was said about Rachel during the campaign, and refused to meet him after arriving in Washington. Therefore, Adams chose not to attend the inauguration. 
On March 4, 1829, Andrew Jackson became the first United States president-elect to take the oath of office on the East Portico of the U.S. Capitol. In his inaugural speech, Jackson promised to respect the sovereign powers of states and the constitutional limits of the presidency. He also promised to pursue "reform" by removing power from "unfaithful or incompetent hands." At the conclusion of the ceremony, Jackson invited the public to the White House, where his supporters held a raucous party. Thousands of spectators overwhelmed the White House staff, and minor damage was caused to fixtures and furnishings. Jackson's populism earned him the nickname "King Mob." In an effort to purge the government of corruption, Jackson launched presidential investigations into all executive Cabinet offices and departments. He believed appointees should be hired on merit and withdrew many candidates he believed were lax in their handling of monies. He believed that the federal government had been corrupted and that he had received a mandate from the American people to purge such corruption. Jackson's investigations uncovered enormous fraud in the federal government, and numerous officials were removed from office and indicted on corruption charges. In the first year of Jackson's presidency, his investigations uncovered $280,000 stolen from the Treasury and saved the Department of the Navy $1 million. He asked Congress to reform embezzlement laws, reduce fraudulent applications for federal pensions, pass revenue laws to prevent evasion of customs duties, and pass laws to improve government accounting. Jackson's Postmaster General Barry resigned after a Congressional investigation into the postal service revealed mismanagement of mail services, collusion and favoritism in awarding lucrative contracts, as well as failure to audit accounts and supervise contract performances. 
Jackson replaced Barry with Treasury Auditor and prominent Kitchen Cabinet member Amos Kendall, who went on to implement much needed reforms in the Post Office Department. Jackson repeatedly called for the abolition of the Electoral College by constitutional amendment in his annual messages to Congress as president. In his third annual message to Congress, he expressed the view "I have heretofore recommended amendments of the Federal Constitution giving the election of President and Vice-President to the people and limiting the service of the former to a single term. So important do I consider these changes in our fundamental law that I can not, in accordance with my sense of duty, omit to press them upon the consideration of a new Congress." Although he was unable to implement these goals, Jackson's time in office did see a variety of other reforms. He supported an act in July 1836 that enabled widows of Revolutionary War soldiers who met certain criteria to receive their husband's pensions. In 1836, Jackson established the ten-hour day in national shipyards. Jackson enforced the Tenure of Office Act, signed by President Monroe in 1820, that limited appointed office tenure and authorized the president to remove and appoint political party associates. Jackson believed that a rotation in office was a democratic reform preventing hereditary officeholding and made civil service responsible to the popular will. Jackson declared that rotation of appointments in political office was "a leading principle in the republican creed." Jackson noted, "In a country where offices are created solely for the benefit of the people no one man has any more intrinsic right to official station than another." Jackson believed that rotating political appointments would prevent the development of a corrupt bureaucracy. 
The number of federal office holders removed by Jackson was exaggerated by his opponents; Jackson rotated only about 20% of federal office holders during his first term, some for dereliction of duty rather than political purposes. Jackson, nonetheless, used his presidential power to reward loyal Democrats with federal office appointments. Jackson's approach treated patriotic service to the country as a qualification for holding office. Having appointed to a postmastership a soldier who had lost his leg fighting on the battlefield, Jackson stated, "[i]f he lost his leg fighting for his country, that is ... enough for me." Jackson's theory regarding rotation of office generated what would later be called the spoils system. The political realities of Washington sometimes forced Jackson to make partisan appointments despite his personal reservations. Supervision of bureaus and departments whose operations were outside of Washington (such as the New York Customs House; the Postal Service; the Departments of Navy and War; and the Bureau of Indian Affairs, whose budget had increased enormously in the previous two decades) proved to be difficult. Remini writes that because "friendship, politics, and geography constituted the President's total criteria for appointments, most of his appointments were predictably substandard." Jackson devoted a considerable amount of his presidential time during his early years in office responding to what came to be known as the "Petticoat affair" or "Eaton affair." Washington gossip circulated among Jackson's cabinet members and their wives, including Calhoun's wife Floride Calhoun, concerning Secretary of War Eaton and his wife Peggy Eaton. Salacious rumors held that Peggy, as a barmaid in her father's tavern, had been sexually promiscuous or had even been a prostitute. 
Controversy also ensued because Peggy had married soon after her previous husband's death, and it was alleged that she and her husband had engaged in an adulterous affair while her previous husband was still living. Petticoat politics emerged when the wives of cabinet members, led by Mrs. Calhoun, refused to socialize with the Eatons. Allowing a prostitute in the official family was unthinkable, but Jackson refused to believe the rumors, telling his Cabinet that "She is as chaste as a virgin!" Jackson believed that the dishonorable people were the rumormongers, who in essence questioned and dishonored Jackson himself by, in attempting to drive the Eatons out, daring to tell him who he could and could not have in his cabinet. Jackson was also reminded of the attacks that were made against his wife. These memories increased his dedication to defending Peggy Eaton. Meanwhile, the cabinet wives insisted that the interests and honor of all American women were at stake. They believed a responsible woman should never accord a man sexual favors without the assurance that went with marriage. A woman who broke that code was dishonorable and unacceptable. Historian Daniel Walker Howe notes that this was the feminist spirit that in the next decade shaped the woman's rights movement. Secretary of State Martin Van Buren, a widower, was already forming a coalition against Calhoun. He could now see his main chance to strike hard; he took the side of Jackson and Eaton. In the spring of 1831, Jackson, at Van Buren's suggestion, demanded the resignations of all the cabinet members except Barry. Van Buren himself resigned to avoid the appearance of bias. In 1832, Jackson nominated Van Buren to be Minister to Great Britain. Calhoun blocked the nomination with a tie-breaking vote against it, claiming the defeated nomination would "...kill [Van Buren], sir, kill dead. He will never kick, sir, never kick." 
Van Buren continued to serve as an important adviser to Jackson and was placed on the ticket for vice president in the 1832 election, making him Jackson's heir-apparent. The Petticoat affair led to the development of the Kitchen Cabinet. The Kitchen Cabinet emerged as an unofficial group of advisors to the president. Its existence was partially rooted in Jackson's difficulties with his official cabinet, even after the purging. Throughout his eight years in office, Jackson made about 70 treaties with Native American tribes both in the South and in the Northwest. Jackson's presidency marked a new era in Indian-Anglo American relations, initiating a policy of Indian removal. Jackson himself sometimes participated in the treaty negotiating process with various Indian tribes, though other times he left the negotiations to his subordinates. The southern tribes included the Choctaw, Creek, Chickasaw, Seminole and the Cherokee. The northwest tribes included the Chippewa, Ottawa, and the Potawatomi. Relations between Indians and Americans increasingly grew tense and sometimes violent as a result of territorial conflicts. Previous presidents had at times supported removal or attempts to "civilize" the Indians, but generally let the problem play itself out with minimal intervention. A growing popular and political movement had developed to deal with the issue, and out of it came a policy to relocate certain Indian populations. Jackson, never known for timidity, became an advocate for this relocation policy in what many historians consider the most controversial aspect of his presidency. In his First Annual Message to Congress, Jackson advocated land west of the Mississippi River be set aside for Indian tribes. On May 26, 1830, Congress passed the Indian Removal Act, which Jackson signed into law two days later. The Act authorized the president to negotiate treaties to buy tribal lands in the east in exchange for lands farther west, outside of existing state borders. 
The act specifically pertained to the Five Civilized Tribes in the South, the conditions being that they could either move west or stay and obey state law, effectively relinquishing their sovereignty. Jackson, Eaton, and General Coffee negotiated with the Chickasaw, who quickly agreed to move. Jackson put Eaton and Coffee in charge of negotiating with the Choctaw. Lacking Jackson's skills at negotiation, they frequently bribed the chiefs in order to gain their submission. The tactics worked, and the chiefs agreed to move. The removal of the Choctaw took place in the winter of 1831 and 1832, and was fraught with misery and suffering. The Seminole, despite the signing of the Treaty of Payne's Landing in 1832, refused to move. In December 1835, this dispute began the Second Seminole War. The war lasted over six years, finally ending in 1842. Members of the Creek Nation had signed the Treaty of Cusseta in 1832, allowing the Creek to either sell or retain their land. Conflict later erupted between the Creek who remained and the white settlers, leading to a second Creek War. A common complaint amongst the tribes was that the men who had signed the treaties did not represent the whole tribe. The state of Georgia became involved in a contentious dispute with the Cherokee, culminating in the 1832 Supreme Court decision in "Worcester v. Georgia". Chief Justice John Marshall, writing for the court, ruled that Georgia could not forbid whites from entering tribal lands, as it had attempted to do with two missionaries supposedly stirring up resistance amongst the tribespeople. The following response is frequently attributed to Jackson: "John Marshall has made his decision, now let him enforce it." The quote, apparently indicating Jackson's dismissive view of the courts, was attributed to Jackson by Horace Greeley, who cited as his source Representative George N. Briggs. 
Remini argues that Jackson did not say it because, while it "certainly sounds like Jackson...[t]here was nothing for him to enforce." This is because a writ of "habeas corpus" had never been issued for the missionaries. The Court also did not ask federal marshals to carry out the decision, as had become standard. A group of Cherokees led by John Ridge negotiated the Treaty of New Echota. Ridge was not a widely recognized leader of the Cherokee, and this document was rejected by some as illegitimate. Another faction, led by John Ross, unsuccessfully petitioned to protest the proposed removal. The Cherokee largely considered themselves independent, and not subject to the laws of the United States or Georgia. The treaty was enforced by Jackson's successor, Van Buren. Subsequently, as many as 4,000 out of 18,000 Cherokee died on the "Trail of Tears" in 1838. More than 45,000 American Indians were relocated to the West during Jackson's administration, though a few Cherokees walked back afterwards or migrated to the high Smoky Mountains. The Black Hawk War took place during Jackson's presidency in 1832 after Black Hawk led a band of Sauk and Fox back across the Mississippi River into U.S. territory. In 1828, Congress had approved the "Tariff of Abominations", which set the tariff at a historically high rate. Southern planters, who sold their cotton on the world market, strongly opposed this tariff, which they saw as favoring northern interests. The South now had to pay more for goods it did not produce locally; and other countries would have more difficulty affording southern cotton. The issue came to a head during Jackson's presidency, resulting in the Nullification Crisis, in which South Carolina threatened disunion. The South Carolina Exposition and Protest of 1828, secretly written by Calhoun, asserted that the state had the right to "nullify"—declare void—the tariff legislation of 1828. 
Although Jackson sympathized with the South in the tariff debate, he also vigorously supported a strong union, with effective powers for the central government. Jackson attempted to face down Calhoun over the issue, which developed into a bitter rivalry between the two men. One incident came at the April 13, 1830, Jefferson Day dinner, involving after-dinner toasts. Robert Hayne began by toasting to "The Union of the States, and the Sovereignty of the States." Jackson then rose, and in a booming voice added "Our federal Union: It must be preserved!" – a clear challenge to Calhoun. Calhoun clarified his position by responding "The Union: Next to our Liberty, the most dear!" In May 1830, Jackson discovered that Calhoun had asked President Monroe to censure Jackson for his invasion of Spanish Florida in 1818 while Calhoun was serving as Secretary of War. Calhoun's and Jackson's relationship deteriorated further. By February 1831, the break between Calhoun and Jackson was final. Responding to inaccurate press reports about the feud, Calhoun had published letters between him and Jackson detailing the conflict in the "United States Telegraph". Jackson and Calhoun began an angry correspondence which lasted until Jackson stopped it in July. The "Telegraph", edited by Duff Green, initially supported Jackson. After it sided with Calhoun on nullification, Jackson needed a new organ for the administration. He enlisted the help of longtime supporter Francis Preston Blair, who in November 1830 established a newspaper known as the "Washington Globe", which from then on served as the primary mouthpiece of the Democratic Party. Jackson supported a revision to tariff rates known as the Tariff of 1832. It was designed to placate the nullifiers by lowering tariff rates. Written by Treasury Secretary Louis McLane, the bill lowered duties from 45% to 27%. In May, Representative John Quincy Adams introduced a slightly revised version of the bill, which Jackson accepted. 
It passed Congress on July 9 and was signed by the president on July 14. The bill failed to satisfy extremists on either side. On November 24, the South Carolina legislature nullified both the Tariff of 1832 and the Tariff of 1828. In response, Jackson sent U.S. Navy warships to Charleston harbor, and threatened to hang any man who worked to support nullification or secession. On December 28, 1832, Calhoun resigned as vice president, after having been elected to the U.S. Senate. This was part of a strategy whereby Calhoun, with less than three months remaining on his vice presidential term, would replace Robert Y. Hayne in the Senate, while Hayne would become governor of South Carolina. Hayne had often struggled to defend nullification on the floor of the Senate, especially against fierce criticism from Senator Daniel Webster of Massachusetts. Also that December, Jackson issued a resounding proclamation against the "nullifiers," stating that he considered "the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed." South Carolina, the president declared, stood on "the brink of insurrection and treason," and he appealed to the people of the state to reassert their allegiance to that Union for which their ancestors had fought. Jackson also denied the right of secession: "The Constitution ... forms a government not a league ... To say that any State may at pleasure secede from the Union is to say that the United States are not a nation." Jackson tended to personalize the controversy, frequently characterizing nullification as a conspiracy between disappointed and bitter men whose ambitions had been thwarted. 
Jackson asked Congress to pass a "Force Bill" explicitly authorizing the use of military force to enforce the tariff. It was introduced by Senator Felix Grundy of Tennessee, and was quickly attacked by Calhoun as "military despotism." At the same time, Calhoun and Clay began to work on a new compromise tariff. A bill sponsored by the administration had been introduced by Representative Gulian C. Verplanck of New York, but it lowered rates more sharply than Clay and other protectionists desired. Clay managed to get Calhoun to agree to a bill with higher rates in exchange for Clay's opposition to Jackson's military threats and, perhaps, with the hope that he could win some Southern votes in his next bid for the presidency. The Compromise Tariff passed on March 1, 1833. The Force Bill passed the same day. Calhoun, Clay, and several others marched out of the chamber in opposition, the only dissenting vote coming from John Tyler of Virginia. The new tariff was opposed by Webster, who argued that it essentially surrendered to South Carolina's demands. Jackson, despite his anger over the scrapping of the Verplanck bill and the new alliance between Clay and Calhoun, saw it as an effective way to end the crisis. He signed both bills on March 2, starting with the Force Bill. The South Carolina Convention then met and rescinded its nullification ordinance, but in a final show of defiance, nullified the Force Bill. On May 1, Jackson wrote, "the tariff was only the pretext, and disunion and southern confederacy the real object. The next pretext will be the negro, or slavery question." Addressing the subject of foreign affairs in his First Annual Address to Congress, Jackson declared it to be his "settled purpose to ask nothing that is not clearly right and to submit to nothing that is wrong." 
When Jackson took office, spoliation claims, or compensation demands for the capture of American ships and sailors, dating from the Napoleonic era, caused strained relations between the U.S. and French governments. The French Navy had captured American ships and sent them to Spanish ports, holding their crews captive and forcing them to labor without charges or judicial process. According to Secretary of State Martin Van Buren, relations between the U.S. and France were "hopeless." Jackson's Minister to France, William C. Rives, through diplomacy was able to convince the French government to sign a reparations treaty on July 4, 1831, that would award the U.S. ₣ 25,000,000 ($5,000,000) in damages. The French government became delinquent in payment due to internal financial and political difficulties. The French king Louis Philippe I and his ministers blamed the French Chamber of Deputies. By 1834, the non-payment of reparations by the French government drew Jackson's ire and he became impatient. In his December 1834 annual message to Congress, Jackson sternly reprimanded the French government for non-payment, stating the federal government was "wholly disappointed" by the French, and demanded Congress authorize trade reprisals against France. Feeling insulted by Jackson's words, the French people began pressuring their government not to pay the indemnity until Jackson had apologized for his remarks. In his December 1835 State of the Union Address, Jackson refused to apologize, stating he had a good opinion of the French people and his intentions were peaceful. Jackson described in lengthy and minute detail the history of events surrounding the treaty and his belief that the French government was purposely stalling payment. The French accepted Jackson's statements as sincere and in February 1836, reparations were paid. In addition to France, the Jackson administration successfully settled spoliation claims with Denmark, Portugal, and Spain. 
Jackson's State Department was active and successful at making trade agreements with Russia, Spain, Turkey, Great Britain, and Siam. Under the treaty with Great Britain, American trade was reopened in the West Indies. The trade agreement with Siam was the first treaty between the United States and an Asian country. As a result, American exports increased 75% while imports increased 250%. Jackson's attempt to purchase Texas from Mexico for $5,000,000 failed. The chargé d'affaires in Mexico, Colonel Anthony Butler, suggested that the U.S. take Texas over militarily, but Jackson refused. Butler was later replaced toward the end of Jackson's presidency. In 1835, the Texas Revolution began when pro-slavery American settlers in Texas fought the Mexican government for Texan independence. By May 1836, they had routed the Mexican military, establishing an independent Republic of Texas. The new Texas government legalized slavery and demanded recognition from President Jackson and annexation into the United States. Jackson was hesitant in recognizing Texas, unconvinced that the new republic could maintain independence from Mexico, and not wanting to make Texas an anti-slavery issue during the 1836 election. The strategy worked; the Democratic Party and national loyalties were held intact, and Van Buren was elected president. Jackson formally recognized the Republic of Texas, nominating Alcée Louis la Branche as chargé d'affaires on the last full day of his presidency, March 3, 1837. Jackson failed in his efforts to open trade with China and Japan and was unsuccessful at thwarting Great Britain's presence and power in South America. The 1832 presidential election demonstrated the rapid development and organization of political parties during this time period. The Democratic Party's first national convention, held in Baltimore, nominated Jackson's choice for vice president, Van Buren. 
The National Republican Party, which had held its first convention in Baltimore earlier in December 1831, nominated Henry Clay, now a senator from Kentucky, and John Sergeant of Pennsylvania. The Anti-Masonic Party emerged by capitalizing on the opposition to Freemasonry, concentrated primarily in New England, that arose after the disappearance and possible murder of William Morgan. The party, which had held its own convention in Baltimore in September 1831, nominated William Wirt of Maryland and Amos Ellmaker of Pennsylvania. Clay was, like Jackson, a Mason, and so some anti-Jacksonians who would have supported the National Republican Party supported Wirt instead. In 1816, the Second Bank of the United States was chartered under President James Madison to restore the United States economy devastated by the War of 1812. Monroe had appointed Nicholas Biddle as the Bank's executive. Jackson believed that the Bank was a fundamentally corrupt monopoly. Its stock was mostly held by foreigners, he insisted, and it exerted an unfair amount of control over the political system. Jackson used the issue to promote his democratic values, believing the Bank was being run exclusively for the wealthy. Jackson stated the Bank made "the rich richer and the potent more powerful." He accused it of making loans with the intent of influencing elections. In his address to Congress in 1830, Jackson called for a substitute for the Bank that would have no private stockholders and no ability to lend or purchase land. Its only power would be to issue bills of exchange. The address touched off fiery debate in the Senate. Thomas Hart Benton, now a strong supporter of the president despite the brawl years earlier, gave a speech excoriating the Bank and calling for debate on its recharter. Webster led a motion that narrowly defeated the resolution. Shortly afterward, the "Globe" announced that Jackson would stand for reelection. 
Despite his misgivings about the Bank, Jackson supported a plan proposed in late 1831 by his moderately pro-Bank Treasury Secretary Louis McLane, who was secretly working with Biddle, to recharter a reformed version of the Bank in a way that would free up funds which would in turn be used to strengthen the military or pay off the nation's debt. This would be done, in part, through the sale of government stock in the Bank. Over the objections of Attorney General Roger B. Taney, an irreconcilable opponent of the Bank, he allowed McLane to publish a Treasury Report which essentially recommended rechartering the Bank. Clay hoped to make the Bank an issue in the election, so as to accuse Jackson of going beyond his powers if he vetoed a recharter bill. He and Webster urged Biddle to immediately apply for recharter rather than wait to reach a compromise with the administration. Biddle received advice to the contrary from moderate Democrats such as McLane and William Lewis, who argued that Biddle should wait because Jackson would likely veto the recharter bill. On January 6, 1832, Biddle submitted to Congress a renewal of the Bank's charter without any of the proposed reforms. The submission came four years before the original 20-year charter was to end. Biddle's recharter bill passed the Senate on June 11 and the House on July 3, 1832. Jackson determined to veto it. Many moderate Democrats, including McLane, were appalled by the perceived arrogance of the bill and supported his decision. When Van Buren met Jackson on July 4, Jackson declared, "The Bank, Mr. Van Buren, is trying to kill me. But I will kill it." Jackson vetoed the bill on July 10. The veto message was crafted primarily by Taney, Kendall, and Jackson's nephew and advisor Andrew Jackson Donelson. It attacked the Bank as an agent of inequality that supported only the wealthy. The veto was considered "one of the strongest and most controversial" presidential statements and "a brilliant political manifesto." 
The National Republican Party immediately made Jackson's veto of the Bank a political issue. Jackson's political opponents castigated the veto as "the very slang of the leveller and demagogue," claiming Jackson was using class warfare to gain support from the common man. At Biddle's direction, the Bank poured thousands of dollars into a campaign to defeat Jackson, seemingly confirming Jackson's view that it interfered in the political process. Jackson successfully portrayed his veto as a defense of the common man against governmental tyranny. Clay proved to be no match for Jackson's ability to resonate with the people and the Democratic Party's strong political networks. Democratic newspapers, parades, barbecues, and rallies increased Jackson's popularity. Jackson himself made numerous public appearances on his return trip from Tennessee to Washington, D.C. He won the election by a landslide, receiving 54 percent of the popular vote and 219 electoral votes. Clay received 37 percent of the popular vote and 49 electoral votes. Wirt received only eight percent of the popular vote and seven electoral votes while the Anti-Masonic Party eventually declined. Jackson believed the solid victory was a popular mandate for his veto of the Bank's recharter and his continued warfare on the Bank's control over the national economy. In 1833, Jackson attempted to begin removing federal deposits from the bank, whose money-lending functions were taken over by the legions of local and state banks that materialized across America, thus drastically increasing credit and speculation. Jackson's moves were greatly controversial. He removed McLane from the Treasury Department, having him serve instead as Secretary of State, replacing Edward Livingston. He replaced McLane with William J. Duane. In September, he fired Duane for refusing to remove the deposits. Signalling his intent to continue battling the Bank, he replaced Duane with Taney. Under Taney, the deposits began to be removed. 
They were placed in a variety of state banks which were friendly to the administration's policies, known to critics as pet banks. Biddle responded by stockpiling the Bank's reserves and contracting credit, thus causing interest rates to rise and bringing about a financial panic. The moves were intended to force Jackson into a compromise. "Nothing but the evidence of suffering abroad will produce any effect in Congress," Biddle wrote. At first, Biddle's strategy was successful, putting enormous pressure on Jackson. But Jackson handled the situation well. When people came to him complaining, he referred them to Biddle, saying that he was the man who had "all the money." Jackson's approach worked. Biddle's strategy backfired, increasing anti-Bank sentiment. In 1834, those who disagreed with Jackson's expansion of executive power united and formed the Whig Party, calling Jackson "King Andrew I," and named their party after the English Whigs who opposed the seventeenth-century British monarchy. A movement emerged among Whigs in the Senate to censure Jackson. The censure was a political maneuver spearheaded by Clay, which served only to perpetuate the animosity between him and Jackson. Jackson called Clay "reckless and as full of fury as a drunken man in a brothel." On March 28, the Senate voted to censure Jackson 26–20. It also rejected Taney as Treasury Secretary. The House, however, led by Ways and Means Committee chairman James K. Polk, declared on April 4 that the Bank "ought not to be rechartered" and that the deposits "ought not to be restored." It voted to continue allowing pet banks to be places of deposit and voted even more overwhelmingly to investigate whether the Bank had deliberately instigated the panic. Jackson called the passage of these resolutions a "glorious triumph." It essentially sealed the Bank's demise. The Democrats later suffered a temporary setback. Polk ran for Speaker of the House to replace Andrew Stevenson. 
After Southerners discovered his connection to Van Buren, Polk was defeated by fellow Tennessean John Bell, a Democrat-turned-Whig who opposed Jackson's removal policy. The national economy boomed following the withdrawal of the remaining funds from the Bank, and the federal government, through duty revenues and the sale of public lands, was able to pay all its bills. On January 1, 1835, Jackson paid off the entire national debt, the only time in U.S. history that this has been accomplished. The objective had been reached in part through Jackson's reforms aimed at eliminating the misuse of funds and through his vetoes of legislation which he deemed extravagant. In December 1835, Polk defeated Bell in a rematch and was elected Speaker. Finally, on January 16, 1837, when the Jacksonians had a majority in the Senate, the censure was expunged after years of effort by Jackson supporters. The expunction movement was led, ironically, by Benton. In 1836, in response to increased land speculation, Jackson issued the Specie Circular, an executive order that required buyers of government lands to pay in "specie" (gold or silver coins). The result was high demand for specie, which many banks could not meet in exchange for their notes, contributing to the Panic of 1837. The White House Van Buren biography notes, "Basically the trouble was the 19th-century cyclical economy of 'boom and bust,' which was following its regular pattern, but Jackson's financial measures contributed to the crash. His destruction of the Second Bank of the United States had removed restrictions upon the inflationary practices of some state banks; wild speculation in lands, based on easy bank credit, had swept the West. To end this speculation, Jackson in 1836 had issued a Specie Circular..." The first recorded physical attack on a U.S. president was directed at Jackson. He had ordered the dismissal of Robert B. Randolph from the navy for embezzlement. 
On May 6, 1833, Jackson sailed on USS "Cygnet" to Fredericksburg, Virginia, where he was to lay the cornerstone on a monument near the grave of Mary Ball Washington, George Washington's mother. During a stopover near Alexandria, Randolph appeared and struck the president. Randolph fled the scene, chased by several members of Jackson's party, including the writer Washington Irving. Jackson declined to press charges. On January 30, 1835, what is believed to be the first attempt to kill a sitting president of the United States occurred just outside the United States Capitol. When Jackson was leaving through the East Portico after the funeral of South Carolina Representative Warren R. Davis, Richard Lawrence, an unemployed house painter from England, aimed a pistol at Jackson, which misfired. Lawrence then pulled out a second pistol, which also misfired. Historians believe the humid weather contributed to the double misfiring. Jackson, infuriated, attacked Lawrence with his cane until others present, including Davy Crockett, fearing that the president would beat Lawrence to a pulp, intervened to restrain and disarm him. Lawrence offered a variety of explanations for the attempted shooting. He blamed Jackson for the loss of his job. He claimed that with the president dead, "money would be more plenty" (a reference to Jackson's struggle with the Bank of the United States) and that he "could not rise until the President fell." Finally, Lawrence told his interrogators that he was a deposed English king—specifically, Richard III, dead since 1485—and that Jackson was his clerk. He was deemed insane and was institutionalized. Afterwards, the pistols were tested and retested. Each time they performed perfectly. Many believed that Jackson had been protected by the same Providence that also protected their young nation. The incident became a part of Jacksonian mythos. 
Jackson initially suspected that a number of his political enemies might have orchestrated the attempt on his life. His suspicions were never proven. During the summer of 1835, Northern abolitionists began sending anti-slavery tracts through the postal system into the South. Pro-slavery Southerners demanded that the postal service ban distribution of the materials, which were deemed "incendiary," and some began to riot. Jackson wanted sectional peace, and desired to placate Southerners ahead of the 1836 election. He fiercely disliked the abolitionists, who, he believed, were attempting to destroy the Union by inciting sectional jealousies. Jackson also did not want to condone open insurrection. He supported the solution of Postmaster General Amos Kendall, which gave Southern postmasters discretionary powers to either send or detain the anti-slavery tracts. That December, Jackson called on Congress to prohibit the circulation through the South of "incendiary publications intended to instigate the slaves to insurrection." Jackson initially opposed any federal exploratory scientific expeditions during his first term in office. The last federally funded scientific expeditions had taken place from 1817 to 1823, led by Stephen H. Long on the Red River of the North. Jackson's predecessor, President Adams, attempted to launch a scientific oceanic exploration in 1828, but Congress was unwilling to fund the effort. When Jackson assumed office in 1829, he shelved Adams' expedition plans. Eventually, wanting to establish his presidential legacy, similar to Jefferson and the Lewis and Clark Expedition, Jackson sponsored scientific exploration during his second term. On May 18, 1836, Jackson signed a law creating and funding the oceanic United States Exploring Expedition. Jackson put Secretary of the Navy Mahlon Dickerson in charge of assembling suitable ships, officers, and scientific staff for the expedition, with a planned launch before Jackson's term of office expired. 
Dickerson proved unfit for the task; preparations stalled, and the expedition was not launched until 1838, during the presidency of Van Buren. One brig, , later used in the expedition, commissioned by Secretary Dickerson in May 1836, circumnavigated the world and explored and mapped the Southern Ocean, confirming the existence of the continent of Antarctica. In spite of economic success following Jackson's vetoes and war against the Bank, reckless speculation in land and railroads eventually caused the Panic of 1837. Contributing factors included Jackson's 1832 veto of the recharter of the Second Bank of the United States and the subsequent transfer of federal monies to state banks in 1833, which caused western banks to relax their lending standards. Two other Jacksonian acts in 1836 contributed to the Panic of 1837: the Specie Circular, which mandated that western lands be purchased only with money backed by gold and silver, and the Deposit and Distribution Act, which transferred federal monies from eastern to western state banks and in turn led to a speculation frenzy by banks. Jackson's Specie Circular, though designed to reduce speculation and stabilize the economy, left many investors unable to afford to pay loans in gold and silver. The same year, a downturn in Great Britain's economy stopped investment in the United States. As a result, the U.S. economy went into a depression, banks became insolvent, the national debt (previously paid off) increased, business failures rose, cotton prices dropped, and unemployment dramatically increased. The depression that followed lasted for four years until 1841, when the economy began to rebound. Jackson appointed six justices to the Supreme Court. Most were undistinguished. His first appointee, John McLean, had been nominated in Barry's place after Barry had agreed to become postmaster general. McLean "turned Whig and forever schemed to win" the presidency. 
His next two appointees, Henry Baldwin and James Moore Wayne, disagreed with Jackson on some points but were poorly regarded even by Jackson's enemies. In reward for his services, Jackson nominated Taney to the Court to fill a vacancy in January 1835, but the nomination failed to win Senate approval. Chief Justice Marshall died in 1835, leaving two vacancies on the court. Jackson nominated Taney for Chief Justice and Philip Pendleton Barbour for Associate Justice. Both were confirmed by the new Senate. Taney served as Chief Justice until 1864, presiding over a court that upheld many of the precedents set by the Marshall Court. He was generally regarded as a good and respectable judge, but his opinion in "Dred Scott v. Sandford" largely overshadows his career. On the last full day of his presidency, Jackson nominated John Catron, who was confirmed. Two new states were admitted into the Union during Jackson's presidency: Arkansas (June 15, 1836) and Michigan (January 26, 1837). Both states increased Democratic power in Congress and helped Van Buren win the presidency in 1836. This was in keeping with the tradition that new states would support the party which had done the most to admit them. In 1837, after serving two terms as president, Jackson was replaced by his chosen successor Martin Van Buren and retired to the Hermitage. He immediately began putting it in order, as it had been poorly managed in his absence by his adopted son, Andrew Jackson Jr. Although he suffered ill health, Jackson remained highly influential in both national and state politics. He was a firm advocate of the federal union of the states and rejected any talk of secession, insisting, "I will die with the Union." Blamed for causing the Panic of 1837, he was unpopular in his early retirement. Jackson continued to denounce the "perfidy and treachery" of banks and urged his successor, Van Buren, to repudiate the Specie Circular as president. 
As a solution to the panic, he supported an Independent Treasury system, which was designed to hold the money balances of the government in the form of gold or silver and would be restricted from printing paper money so as to prevent further inflation. A coalition of conservative Democrats and Whigs opposed the bill, and it was not passed until 1840. During the delay, no effective remedy had been implemented for the depression. Van Buren grew deeply unpopular. A unified Whig Party nominated popular war hero William Henry Harrison and former Jacksonian John Tyler in the 1840 presidential election. The Whigs' campaign style in many ways mimicked that of the Democrats when Jackson ran. They depicted Van Buren as an aristocrat who did not care for the concerns of ordinary Americans, while glorifying Harrison's military record and portraying him as a man of the people. Jackson campaigned heavily for Van Buren in Tennessee. He favored the nomination of Polk for vice president at the 1840 Democratic National Convention over controversial incumbent Richard Mentor Johnson. No nominee was chosen, and the party chose to leave the decision up to individual state electors. Harrison won the election, and the Whigs captured majorities in both houses of Congress. "The democracy of the United States has been shamefully beaten," Jackson wrote to Van Buren, "but I trust, not conquered." Harrison died only a month into his term and was replaced by Tyler. Jackson was encouraged because Tyler had a strong independent streak and was not bound by party lines. Sure enough, Tyler quickly incurred the wrath of the Whigs in 1841 when he vetoed two Whig-sponsored bills to establish a new national bank, bringing satisfaction to Jackson and other Democrats. After the second veto, Tyler's entire cabinet, with the exception of Daniel Webster, resigned. Jackson strongly favored the annexation of Texas, a feat he had been unable to accomplish during his own presidency. 
While Jackson still feared that annexation would stir up anti-slavery sentiment, his belief that the British would use Texas as a base to threaten the United States overrode his other concerns. He also insisted that Texas was part of the Louisiana Purchase and therefore rightfully belonged to the United States. At the request of Senator Robert J. Walker of Mississippi, acting on behalf of the Tyler administration, which also supported annexation, Jackson wrote several letters to Texas president Sam Houston, urging him to wait for the Senate to approve annexation and lecturing him on how much being a part of the United States would benefit Texas. Initially, prior to the 1844 election, Jackson again supported Van Buren for president and Polk for vice president. A treaty of annexation was signed by Tyler on April 12, 1844, and submitted to the Senate. When a letter from Secretary of State Calhoun to British Ambassador Richard Pakenham linking annexation to slavery was made public, anti-annexation sentiment exploded in the North and the treaty failed to win ratification. Van Buren decided to write the "Hammet letter," opposing annexation. This effectively extinguished any support that Van Buren might previously have enjoyed in the South. The Whig nominee, Henry Clay, also opposed annexation, and Jackson recognized the need for the Democrats to nominate a candidate who supported it and could therefore gain the support of the South. If the plan failed, Jackson warned, Texas would not join the Union and would potentially fall victim to a Mexican invasion supported by the British. Jackson met with Polk, Robert Armstrong, and Andrew Jackson Donelson in his study. He then pointed directly at a startled Polk, telling him that, as a man from the southwest and a supporter of annexation, he would be the perfect candidate. Polk called the scheme "utterly abortive," but agreed to go along with it. 
At the 1844 Democratic National Convention, Polk emerged as the party's nominee after Van Buren failed to win the required two-thirds majority of delegates. George M. Dallas was selected for vice president. Jackson convinced Tyler to drop his plans of running for re-election as an independent by promising, as Tyler requested, to welcome the President and his allies back into the Democratic Party and by instructing Blair to stop criticizing the President. Polk won the election, defeating Clay. A bill of annexation was passed by Congress in February and signed by Tyler on March 1. Jackson's age and illness eventually overcame him. On June 8, 1845, he was surrounded by family and friends at his deathbed. Jackson, startled by their sobbing, said, "What is the matter with my dear children? Have I alarmed you? Oh, do not cry. Be good children and we will all meet in Heaven." He died immediately after at the age of 78 of chronic dropsy and heart failure. According to a newspaper account from the Boon Lick Times, "[he] fainted whilst being removed from his chair to the bed ... but he subsequently revived ... Gen. Jackson died at the Hermitage at 6 p.m. on Sunday the 8th instant. ... When the messenger finally came, the old soldier, patriot and Christian was looking out for his approach. He is gone, but his memory lives, and will continue to live." In his will, Jackson left his entire estate to Andrew Jackson Jr. except for specifically enumerated items that were left to various friends and family members. Jackson had three adopted sons: Theodore, an Indian about whom little is known, Andrew Jackson Jr., the son of Rachel's brother Severn Donelson, and Lyncoya, a Creek Indian orphan adopted by Jackson after the Battle of Tallushatchee. Lyncoya died of tuberculosis on July 1, 1828, at the age of sixteen. The Jacksons also acted as guardians for eight other children. 
John Samuel Donelson, Daniel Smith Donelson, and Andrew Jackson Donelson were the sons of Rachel's brother Samuel Donelson, who died in 1804. Andrew Jackson Hutchings was Rachel's orphaned grand nephew. Caroline Butler, Eliza Butler, Edward Butler, and Anthony Butler were the orphaned children of Edward Butler, a family friend. They came to live with the Jacksons after the death of their father. The widower Jackson invited Rachel's niece Emily Donelson to serve as hostess at the White House. Emily was married to Andrew Jackson Donelson, who acted as Jackson's private secretary and in 1856 ran for vice president on the American Party ticket. The relationship between the president and Emily became strained during the Petticoat affair, and the two became estranged for over a year. They eventually reconciled and she resumed her duties as White House hostess. Sarah Yorke Jackson, the wife of Andrew Jackson Jr., became co-hostess of the White House in 1834. It was the only time in history when two women simultaneously acted as unofficial First Lady. Sarah took over all hostess duties after Emily died from tuberculosis in 1836. Jackson used Rip Raps as a retreat. Jackson's quick temper was notorious. Biographer H. W. Brands notes that his opponents were terrified of his temper: "Observers likened him to a volcano, and only the most intrepid or recklessly curious cared to see it erupt. ... His close associates all had stories of his blood-curdling oaths, his summoning of the Almighty to loose His wrath upon some miscreant, typically followed by his own vow to hang the villain or blow him to perdition. Given his record—in duels, brawls, mutiny trials, and summary hearings—listeners had to take his vows seriously." On the last day of his presidency, Jackson admitted that he had but two regrets, that he "had been unable to shoot Henry Clay or to hang John C. Calhoun." On his deathbed, he was once again quoted as regretting that he had not hanged Calhoun for treason. 
"My country would have sustained me in the act, and his fate would have been a warning to traitors in all time to come," he said. Remini expresses the opinion that Jackson was typically in control of his temper, and that he used his anger, along with his fearsome reputation, as a tool to get what he wanted. Jackson was a lean figure, standing at tall, and weighing between on average. Jackson also had an unruly shock of red hair, which had completely grayed by the time he became president at age 61. He had penetrating deep blue eyes. Jackson was one of the more sickly presidents, suffering from chronic headaches, abdominal pains, and a hacking cough. Much of his trouble was caused by a musket ball in his lung that was never removed, that often brought up blood and sometimes made his whole body shake. In 1838, Jackson became an official member of the First Presbyterian Church in Nashville. Both his mother and his wife had been devout Presbyterians all their lives, but Jackson himself had postponed officially entering the church in order to avoid accusations that he had joined only for political reasons. Jackson was a Freemason, initiated at Harmony Lodge No. 1 in Tennessee. He was elected Grand Master of the Grand Lodge of Tennessee in 1822 and 1823. During the 1832 presidential election, Jackson faced opposition from the Anti-Masonic Party. He was the only U.S. president to have served as Grand Master of a state's Grand Lodge until Harry S. Truman in 1945. His Masonic apron is on display in the Tennessee State Museum. An obelisk and bronze Masonic plaque decorate his tomb at the Hermitage. Jackson remains one of the most studied and controversial figures in American history. Historian Charles Grier Sellers says, "Andrew Jackson's masterful personality was enough by itself to make him one of the most controversial figures ever to stride across the American stage." 
There has never been universal agreement on Jackson's legacy, for "his opponents have ever been his most bitter enemies, and his friends almost his worshippers." He was always a fierce partisan, with many friends and many enemies. He has been lauded as the champion of the common man, while criticized for his treatment of Indians and for other matters. James Parton, the first man after Jackson's death to write a full biography of him, tried in that work to sum up the contradictions in his subject. Jackson was criticized by his contemporary Alexis de Tocqueville in "Democracy in America" for flattering the dominant ideas of his time, including mistrust of federal power, and for sometimes enforcing his views by force, with disrespect for the institutions and the law. In the 20th century, Jackson was written about by many admirers. Arthur M. Schlesinger Jr.'s "Age of Jackson" (1945) depicts Jackson as a man of the people battling inequality and upper-class tyranny. From the 1970s to the 1980s, Robert Remini published a three-volume biography of Jackson followed by an abridged one-volume study. Remini paints a generally favorable portrait of Jackson. He contends that Jacksonian democracy "stretches the concept of democracy about as far as it can go and still remain workable. ... As such it has inspired much of the dynamic and dramatic events of the nineteenth and twentieth centuries in American history—Populism, Progressivism, the New and Fair Deals, and the programs of the New Frontier and Great Society." To Remini, Jackson serves as "the embodiment of the new American ... This new man was no longer British. He no longer wore the queue and silk pants. He wore trousers, and he had stopped speaking with a British accent." Other 20th-century writers such as Richard Hofstadter and Bray Hammond depict Jackson as an advocate of the sort of "laissez-faire" capitalism that benefits the rich and oppresses the poor. 
Jackson's initiatives to deal with the conflicts between Indians and American settlers have been a source of controversy. Starting mainly around 1970, Jackson came under attack from some historians on this issue. Howard Zinn called him "the most aggressive enemy of the Indians in early American history" and "exterminator of Indians." Conversely, in 1969, Francis Paul Prucha argued that Jackson's removal of the "Five Civilized Tribes" from the extremely hostile white environment in the Old South to Oklahoma probably saved their very existence. Similarly, Remini claims that, if not for Jackson's policies, the Southern tribes would have been totally wiped out, just like other tribes, namely the Yamasee, Mahican, and Narragansett, which did not move. Jackson has long been honored, along with Thomas Jefferson, in the Jefferson–Jackson Day fundraising dinners held by state Democratic Party organizations to honor the two men whom the party regards as its founders. Because both Jefferson and Jackson were slave owners, as well as because of Jackson's Indian removal policies, many state party organizations have renamed the dinners. Brands argues that Jackson's reputation has suffered since the 1960s as his actions towards Indians and African Americans received new attention. He also claims that the Indian controversy has eclipsed Jackson's other achievements in public memory. Brands notes that he was often hailed during his lifetime as the "second George Washington," because, while Washington had fought for independence, Jackson confirmed it at New Orleans and made the United States a great power. Over time, while the Revolution has maintained a strong presence in the public consciousness, memory of the War of 1812, including the Battle of New Orleans, has sharply declined. Brands argues that this is because once America had become a military power, "it was easy to think that America had been destined for this role from the beginning." 
Still, Jackson's performance in office compared to other presidents has generally been ranked in the top half in public opinion polling. His position in C-SPAN's poll dropped from 13th in 2009 to 18th in 2017. Jackson has appeared on U.S. banknotes as far back as 1869 and extending into the 21st century. His image has appeared on the $5, $10, $20, and $10,000 notes. Most recently, his image has appeared on the U.S. $20 Federal Reserve Note beginning in 1928. In 2016, Treasury Secretary Jack Lew announced his goal that by 2020 an image of Harriet Tubman would replace Jackson's depiction on the front side of the $20 banknote, and that an image of Jackson would be placed on the reverse side, though the final decision will be made by his successors. Jackson has appeared on several postage stamps. He first appeared on an 1863 two-cent stamp, which is commonly referred to by collectors as the "Black Jack" due to the large portraiture of Jackson on its face printed in pitch black. During the American Civil War, the Confederate government issued two postage stamps bearing Jackson's portrait, both in 1863. Numerous counties and cities are named after him, including the city of Jacksonville in Florida and North Carolina; the cities of Jackson in Louisiana, Michigan, Mississippi, Missouri, and Tennessee; Jackson County in Florida, Illinois, Michigan, Mississippi, Missouri, Ohio, and Oregon; and Jackson Parish in Louisiana. Memorials to Jackson include a set of four identical equestrian statues by the sculptor Clark Mills: in Lafayette Square, Washington, D.C.; in Jackson Square, New Orleans; in Nashville on the grounds of the Tennessee State Capitol; and in Jacksonville, Florida. Other equestrian statues of Jackson have been erected elsewhere, as in the State Capitol grounds in Raleigh, North Carolina. 
That statue controversially identifies him as one of the "presidents North Carolina gave the nation," and he is featured alongside James Polk and Andrew Johnson, both U.S. presidents born in North Carolina. There is a bust of Andrew Jackson in Plaza Ferdinand VII in Pensacola, Florida, where he became the first governor of the Florida Territory in 1821. There is also a 1928 bronze sculpture of Andrew Jackson by Belle Kinney Scholz and Leopold Scholz in the U.S. Capitol Building as part of the National Statuary Hall Collection. Jackson and his wife Rachel were the main subjects of a 1951 historical novel by Irving Stone, "The President's Lady", which told the story of their lives up until Rachel's death. The novel was the basis for the 1953 film of the same name starring Charlton Heston as Jackson and Susan Hayward as Rachel. Jackson has been a supporting character in a number of historical films and television productions. Lionel Barrymore played Jackson in "The Gorgeous Hussy" (1936), a fictionalized biography of Peggy Eaton starring Joan Crawford. "The Buccaneer" (1938), depicting the Battle of New Orleans, included Hugh Sothern as Jackson, and was remade in 1958 with Heston again playing Jackson. Basil Ruysdael played Jackson in Walt Disney's 1955 "Davy Crockett" TV miniseries. Wesley Addy appeared as Jackson in some episodes of the 1976 PBS miniseries "The Adams Chronicles". Jackson is the protagonist of the comedic historic rock musical "Bloody Bloody Andrew Jackson" (2008) with music and lyrics by Michael Friedman and book by Alex Timbers.
City

A city is a large human settlement. It can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, and communication. Their density facilitates interaction between people, government organisations and businesses, sometimes benefiting different parties in the process, such as improving efficiency of goods and service distribution. This concentration also can have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources. Historically, city-dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanisation, roughly half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling towards city centres for employment, entertainment, and edification. However, in a world of intensifying globalisation, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, global warming and global health. Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Abu Dhabi, Beijing, Berlin, Cairo, London, Moscow, Paris, Rome, Seoul, Tokyo, Taipei, and Washington, D.C. reflect their nation's identity. Some historic capitals, such as Kyoto, maintain their reflection of cultural identity even without modern capital status. 
Religious holy sites offer another example of capital status within a religion: Jerusalem, Mecca, and Varanasi each hold significance. The cities of Faiyum, Damascus, and Argos are among those laying claim to the longest continual inhabitation. In terms of relative age, the oldest cities in the Americas are Cholula near Puebla, Florés in Petén, and Acoma near Albuquerque, while the oldest capital cities in the Americas are Mexico City, Santo Domingo, and San Juan. Another example of relative age involves the oldest capital cities of the superpower and emerging superpower: the U.S. state capital of Santa Fe, New Mexico, and the Chinese prefecture capital of Xi'an, Shaanxi. A city is distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there, and can be used in a general sense to mean urban rather than rural territory. National censuses use a variety of definitions - invoking factors such as population, population density, number of dwellings, economic function, and infrastructure - to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. Some jurisdictions set no such minima. In the United Kingdom, city status is awarded by the Crown and then remains permanently. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000, and St Davids, with a population of 1,841.) 
According to the "functional definition" a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas. Examples of settlements with "city" in their names which may not meet any of the traditional criteria to be named such include Broad Top City, Pennsylvania (population 452), and City Dulas, Anglesey, a hamlet. The presence of a literate elite is sometimes included in the definition. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or through leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food-distribution, land-ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations. The word "city" and the related "civilization" come from the Latin root "civitas", originally meaning citizenship or community member and eventually coming to correspond with urbs, meaning "city" in a more physical sense. The Roman "civitas" was closely linked with the Greek "polis"—another common root appearing in English words such as "metropolis". Urban geography deals both with cities in their larger context and with their internal structure. Town siting has varied through history according to natural, technological, economic, and military contexts. 
Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, to the present day most of the world's urban population lives near the coast or on a river. Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland which sustains them. Only in special cases, such as mining towns which play a vital role in long-distance trade, are cities disconnected from the countryside which feeds them. Thus, centrality within a productive region influences siting, as economic forces would in theory favor the creation of market places in optimal mutually reachable locations. The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or, if fortified, as a citadel. These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence. Today cities have a city center or downtown, sometimes coincident with a central business district. Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. Physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structure may rely on terraces and winding roads. 
It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these "geomorphic" features, cities can develop internal patterns, due to natural growth or to city planning. In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible. A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley Civilisation built Mohenjo-Daro, Harappa and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean. Urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary. Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) 
Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States). Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed from the development of agriculture, which enabled production of surplus food, and thus a social division of labour (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded them with zeal. Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. In the fourth and third millennium BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations. Among the early Old World cities, Mohenjo-daro of the Indus Valley Civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms. 
The Ancient Egyptian cities known physically by archaeologists are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly elaborate housing available for higher classes. In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings and fostering multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos to Carthage and Cádiz. In the following centuries, independent city-states of Greece developed the "polis", an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of athletic, artistic, spiritual and political life of the polis. Rome's rise to power brought its population to one million. Under the authority of its empire, Rome transformed and founded many cities ("coloniae"), and with them brought its principles of urban architecture, design, and society. In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th century BC and the 18th century BC. 
Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilization, Mayan, Mound Builders, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited: major metropolitan cities such as Mexico City stand in the same location as Tenochtitlan; continuously inhabited ancient Pueblos, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos, lie near modern urban areas in New Mexico; and other cities, like Lima, are located near ancient Peruvian sites such as Pachacamac. Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class—but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh the ancient capital of Ghana, and Maranda, a center located on a trade route between Egypt and Gao. In the first millennium AD, Angkor in the Khmer Empire grew into one of the most extensive cities in the world and may have supported up to one million people. In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453. 
In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich, and Nijmegen became a privileged elite among towns, having won self-governance from their local secular or ecclesiastical lord or having been granted self-governance by the emperor and been placed under his immediate protection. By 1480, these cities, insofar as they were still part of the empire, became part of the Imperial Estates governing the empire with the emperor through the Imperial Diet. By the thirteenth and fourteenth centuries, some cities became powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the commercial cities of the Low Countries, such as Ghent, Ypres, and Amsterdam. Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan. In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small. During the Spanish colonization of the Americas the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories, and were bound to several laws regarding administration, finances and urbanism. The growth of modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. 
England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas. Industrialized cities became deadly places to live, due to health problems resulting from overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape. In the second half of the twentieth century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, the People's Republic of China has undergone concomitant urbanization and industrialization to become the world's leading manufacturer. Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city-dwellers. Some companies are building brand new masterplanned cities from scratch on greenfield sites. Urbanization is the process of migration from rural into urban areas, driven by various political, economic, and cultural factors. 
Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions urban population began its unprecedented growth, both through migration and through demographic expansion. In England the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world population lived in cities. The cultural appeal of cities also plays a role in attracting residents. Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time more than half of the world population lives in cities. Latin America is the most urban continent, with four fifths of its population living in cities, including one fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South"—but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city-dwellers (and 300 million fewer country-dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa. Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. 
Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions. Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground. Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels. Local government of cities takes different forms including prominently the municipality (especially in England, in the United States, in India, and in other British colonies; legally, the municipal corporation; "municipio" in Spain and in Portugal, and, along with "municipalidad", in most former parts of the Spanish and Portuguese empires) and the "commune" (in France and in Chile; or "comune" in Italy). The chief official of the city has the title of mayor. Whatever their true degree of political authority, the mayor typically acts as the figurehead or personification of their city. City governments have authority to make laws governing activity within cities, while their jurisdiction is generally considered subordinate (in ascending order) to state/provincial, national, and perhaps international law. This hierarchy of law is not enforced rigidly in practice—for example in conflicts between municipal regulations and national principles such as constitutional rights and property rights. 
Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many areas. Municipal officials may be appointed from a higher level of government or elected locally. Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, though some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968. The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradable financial instruments and derivatives). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. 
Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on future tax revenues which it is expected to yield. Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings. Governance includes government but refers to a wider domain of social control functions implemented by many actors including nongovernmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide have led to a shift in perspective on urban governance, away from the "urban regime theory" in which a coalition of local interests functionally govern, toward a theory of outside economic control, widely associated in academia with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners. The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in the emergent megacities, where international organizations consider existing governments inadequate for their large populations. Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. 
Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions. Government is legally the final authority on planning but in practice the process involves both public and private elements. The legal principle of eminent domain is used by government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation. The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems. Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the "banlieues", areas of urban development which surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. 
Suburbs in the west, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods. Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status as factory workers which in the nineteenth century provided access to the means of production. Historically, cities have relied on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market. As hubs of trade, cities have long been home to retail commerce and consumption through the interface of shopping. In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism. In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density also enables sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects. Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. 
The services in question range from tourism, hospitality, entertainment, housekeeping and prostitution to grey-collar work in law, finance, and administration. Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves playing some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, world history, and social change. Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful. Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; to attract businesses, investors, residents, and tourists; and to create a shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. 
Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city. Bread and circuses, among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities. Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people concentrated into cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside. During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. 
Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space. Although capture is the more common objective, warfare has in some cases spelt complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto "Carthago delenda est". Since the atomic bombing of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of "countervalue" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces. Cities are responsible for a substantial portion of the emissions responsible for global warming. Over half of the world population is in cities, and cities have outsize influence on construction and transportation—two of the key contributors to global warming emissions. A report by the C40 Cities Climate Leadership Group described consumption-based emissions as having significantly more impact than production-based emissions within cities. The report estimates that 85% of the emissions associated with goods within a city are generated outside of that city. Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital (pipes, wires, plants, vehicles, etc.) but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private. 
Infrastructure in general (if not every infrastructure project) plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continues to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already. Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance. Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives. Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace. Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network for wastewater including sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide. 
Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, streetlights and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications. Because cities rely on specialization and an economic system based on wage labour, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City dwellers travel on foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas. Historically, city streets were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles (or velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In western cities, industrializing, expanding, and electrifying at this time, public transit systems and especially streetcars enabled urban expansion as new residential neighborhoods sprang up along transit lines and workers rode to and from work downtown. Since the mid-twentieth century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. 
(This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks. The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. Economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia. Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic. Housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity. 
Home ownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species which never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions. Typical urban fauna include insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) which envelop them, posing a chronic threat to the health of their millions of inhabitants. 
Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in comparable wilderness. Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C and at times 5–10 °C differences have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in nearby country. Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when intersecting also with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it). One of the main methods of improving urban ecology is including more natural areas in cities: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of cities. Collectively they are called urban open space (although this term does not always refer to green space), green space, or urban greening. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city. A study published in Nature's Scientific Reports journal in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and were 59 percent more likely to be in good health than those who had zero exposure. The study used data from almost 20,000 people in the UK. Benefits increased for up to 300 minutes of exposure. 
The benefits applied to men and women of all ages, as well as across different ethnicities, socioeconomic status, and even those with long-term illnesses and disabilities. People who did not get at least two hours — even if they surpassed an hour per week — did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature. Many doctors already give nature prescriptions to their patients. The study didn't count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles from home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit." As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of stock markets and other high-level elements of the world economy, as well as personal communications and mass media. A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, "The Global City: New York, London, Tokyo" to refer to a city's power, status, and cosmopolitanism, rather than to its size. 
Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, "must" compete with each other globally to achieve prosperity. Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example, argues that the term is "reductive and skewed" in its focus on financial systems. Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities. Global cities feature concentrations of extremely wealthy and extremely poor people. Their economies are lubricated by their capacity (limited by the national government's immigration policy, which functionally defines the supply side of the labor market) to recruit low- and high-skilled immigrant workers from poorer areas. More and more cities today draw on this globally available labor force. Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. 
Cities including Hamburg, Prague, Amsterdam, The Hague, and City of London maintain their own embassies to the European Union at Brussels. New urban dwellers may increasingly be seen not simply as immigrants but as transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes. Cities participate in global governance by various means including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, Asian Network of Major Cities 21, the Federation of Canadian Municipalities, the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance. Networks have become especially prevalent in the arena of environmentalism and specifically climate change following the adoption of Agenda 21. Environmental city networks include the C40 Cities Climate Leadership Group, World Association of Major Metropolises ("Metropolis"), the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network. Cities with world political status serve as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest. The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization. 
UN-Habitat coordinates the UN urban agenda, working with the UN Environmental Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank. The World Bank, a United Nations specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.) to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding. Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk. Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. 
The name anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies. Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of "descriptiones" which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film "Metropolis" while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as "The Fast Lady" (1962) and "Playtime" (1967). Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (NY, London, Hong Kong) and visions of a single world-encompassing ecumenopolis.
https://en.wikipedia.org/wiki?curid=5391
Chives Chives, scientific name Allium schoenoprasum, is a species of flowering plant in the family Amaryllidaceae that produces edible leaves and flowers. Their close relatives include the common onions, garlic, shallot, leek, scallion, and Chinese onion. A perennial plant, it is widespread in nature across much of Europe, Asia, and North America. "A. schoenoprasum" is the only species of "Allium" native to both the New and the Old Worlds. Chives are a commonly used herb and can be found in grocery stores or grown in home gardens. In culinary use, the green stalks (scapes) and the unopened, immature flower buds are diced and used as an ingredient for omelettes, fish, potatoes, soups, and many other dishes. The edible flowers can be used in salads. Chives have insect-repelling properties that can be used in gardens to control pests. The plant provides a great deal of nectar for pollinators. It was rated in the top 10 for most nectar production (nectar per unit cover per year) in a UK plants survey conducted by the AgriLand project which is supported by the UK Insect Pollinators Initiative. Chives are a bulb-forming herbaceous perennial plant, growing to tall. The bulbs are slender, conical, long and broad, and grow in dense clusters from the roots. The scapes (or stems) are hollow and tubular, up to long and across, with a soft texture, although, prior to the emergence of a flower, they may appear stiffer than usual. The grass-like leaves, which are shorter than the scapes, are also hollow and tubular, or terete, (round in cross-section) which distinguishes it at a glance from garlic chives ("Allium tuberosum"). The flowers are pale purple, and star-shaped with six petals, wide, and produced in a dense inflorescence of 10–30 together; before opening, the inflorescence is surrounded by a papery bract. The seeds are produced in a small, three-valved capsule, maturing in summer. 
The herb flowers from April to May in the southern parts of its habitat zones and in June in the northern parts. Chives are the only species of "Allium" native to both the New and the Old Worlds. Sometimes, the plants found in North America are classified as "A. schoenoprasum" var. "sibiricum", although this is disputed. Differences between specimens are significant. One example was found in northern Maine growing solitary, instead of in clumps, also exhibiting dingy grey flowers. Although chives are repulsive to insects in general, due to their sulfur compounds, their flowers attract bees, and they are at times kept to increase desired insect life. It was formally described by the Swedish botanist Carl Linnaeus in his seminal publication "Species Plantarum" in 1753. The name of the species derives from the Greek σχοίνος, "skhoínos" (sedge or rush) and πράσον, "práson" (leek). Its English name, chives, derives from the French word "cive", from "cepa", the Latin word for onion. In the Middle Ages, it was known as 'rush leek'. It has two known subspecies: "Allium schoenoprasum" subsp. "gredense" (Rivas Goday) Rivas Mart., Fern. Gonz. & Sánchez Mata and "Allium schoenoprasum" subsp. "latiorifolium" (Pau) Rivas Mart., Fern. Gonz. & Sánchez Mata. Chives are native to temperate areas of Europe, Asia and North America. It is found in Asia within the Caucasus (in Armenia, Azerbaijan and Georgia), also in China, Iran, Iraq, Japan (within the provinces of Hokkaido and Honshu), Kazakhstan, Kyrgyzstan, Mongolia, Pakistan, the Russian Federation (within the provinces of Kamchatka, Khabarovsk, and Primorye), Siberia and Turkey. In middle Europe, it is found within Austria, the Czech Republic, Germany, the Netherlands, Poland and Switzerland. In northern Europe, in Denmark, Finland, Norway, Sweden and the United Kingdom. In southeastern Europe, within Bulgaria, Greece, Italy and Romania. It is also found in southwestern Europe, in France, Portugal and Spain. 
In North America, it is found in Canada (within the provinces and territories of Alberta, British Columbia, Manitoba, Northwest Territories, Nova Scotia, New Brunswick, Newfoundland, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan and Yukon), and the United States (within the states of Alaska, Colorado, Connecticut, Idaho, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, New Hampshire, New Jersey, New York, Ohio, Oregon, Pennsylvania, Rhode Island, Vermont, Washington, West Virginia, Wisconsin and Wyoming). Chives are grown for their scapes and leaves, which are used for culinary purposes as a flavoring herb, and provide a somewhat milder onion-like flavor than those of other "Allium" species. Chives have a wide variety of culinary uses, such as in traditional dishes in France, Sweden, and elsewhere. In his 1806 book "Attempt at a Flora" ("Försök til en flora"), Retzius describes how chives are used with pancakes, soups, fish, and sandwiches. They are also an ingredient of the "gräddfil" sauce with the traditional herring dish served at Swedish midsummer celebrations. The flowers may also be used to garnish dishes. In Poland and Germany, chives are served with quark. Chives are one of the "fines herbes" of French cuisine, the others being tarragon, chervil and parsley. Chives can be found fresh at most markets year-round, making them readily available; they can also be dry-frozen without much impairment to the taste, giving home growers the opportunity to store large quantities harvested from their own gardens. Retzius also describes how farmers would plant chives between the rocks making up the borders of their flowerbeds, to keep the plants free from pests (such as Japanese beetles). The growing plant repels unwanted insect life, and the juice of the leaves can be used for the same purpose, as well as fighting fungal infections, mildew, and scab. 
Chives are cultivated both for their culinary uses and their ornamental value; the violet flowers are often used in ornamental dry bouquets. The flowers are also edible and are used in salads, or used to make blossom vinegars. Chives thrive in well-drained soil, rich in organic matter, with a pH of 6–7 and full sun. They can be grown from seed and mature in summer, or early the following spring. Typically, chives need to be germinated at a temperature of 15 to 20 °C (60–70 °F) and kept moist. They can also be planted under a cloche or germinated indoors in cooler climates, then planted out later. After at least four weeks, the young shoots should be ready to be planted out. They are also easily propagated by division. In cold regions, chives die back to the underground bulbs in winter, with the new leaves appearing in early spring. Chives starting to look old can be cut back to about 2–5 cm. When harvesting, the needed number of stalks should be cut to the base. During the growing season, the plant continually regrows leaves, allowing for a continuous harvest. Chives are susceptible to damage by leek moth larvae, which bore into the leaves or bulbs of the plant. Chives have been cultivated in Europe since the Middle Ages (5th to the 15th centuries), although their usage dates back 5000 years. They were sometimes referred to as "rush leeks". They were mentioned in 80 A.D. by Marcus Valerius Martialis in his "Epigrams". The Romans believed chives could relieve the pain from sunburn or a sore throat. They believed eating chives could increase blood pressure and act as a diuretic. Romani have used chives in fortune telling. Bunches of dried chives hung around a house were believed to ward off disease and evil. In the 19th century, Dutch farmers fed cattle on the herb to give a different taste to milk.
https://en.wikipedia.org/wiki?curid=5395
Chris Morris (satirist) Christopher J Morris (born 15 June 1962) is an English comedian, writer, director, actor, voice actor, and producer. He is known for his black humour, surrealism, and controversial subject matter, and has been hailed for his "uncompromising, moralistic drive" by the British Film Institute. In the early 1990s, Morris teamed up with his radio producer, Armando Iannucci, to create "On the Hour", a satire of news programmes. This was expanded into a television spin-off, "The Day Today", which launched the career of Steve Coogan, and has since been hailed as one of the most important satirical shows of the 1990s. Morris further developed the satirical news format with "Brass Eye", which lampooned celebrities whilst focusing on themes such as crime and drugs. For many, the apotheosis of Morris's career was a "Brass Eye" special, which dealt with the moral panic surrounding paedophilia. It quickly became one of the most complained about programmes in British television history, leading the "Daily Mail" to describe him as "the most loathed man on TV". Meanwhile, Morris's postmodern sketch comedy and ambient music radio show "Blue Jam", which had seen controversy similar to "Brass Eye", helped him to gain a cult following. "Blue Jam" was adapted into the TV series "Jam", which some hailed as "the most radical and original television programme broadcast in years", and he went on to win a BAFTA for Best Short Film after expanding a "Blue Jam" sketch into "My Wrongs 8245–8249 & 117", which starred Paddy Considine. This was followed by "Nathan Barley", a sitcom written in collaboration with a then little-known Charlie Brooker that satirised hipsters, which had low ratings but found success upon its DVD release. Morris followed this by joining the cast of the sitcom "The IT Crowd", his first project in which he did not have writing or producing input. 
In 2010, Morris directed his first feature-length film, "Four Lions", which satirised Islamic terrorism through a group of inept British Pakistanis. Reception of the film was largely positive, earning Morris his second BAFTA, for "Outstanding Debut". Since 2012, he has directed four episodes of Iannucci's political comedy "Veep" and appeared onscreen in "The Double" and "Stewart Lee's Comedy Vehicle". Morris was born in Colchester, Essex, to father Paul Michael Morris, a GP, and mother Rosemary Parrington, and grew up in a Victorian farmhouse in the village of Buckden, Huntingdonshire, which he describes as "very dull". He has two younger brothers, including theatre director Tom Morris. From an early age he was a prankster, and also had a passion for radio. From the age of 10 he was educated at Stonyhurst College, an independent Jesuit boarding school in Lancashire. He went on to study zoology at the University of Bristol, where he gained a 2:1. On graduating, Morris pursued a career as a musician in various bands, for which he played the bass guitar. He then went to work for Radio West, a local radio station in Bristol. He then took up a news traineeship with BBC Radio Cambridgeshire, where he took advantage of access to editing and recording equipment to create elaborate spoofs and parodies. He also spent time in early 1987 hosting a 2–4pm afternoon show and finally ended up presenting Saturday morning show "I.T." In July 1987, he moved on to BBC Radio Bristol to present his own show "No Known Cure", broadcast on Saturday and Sunday mornings. The show was surreal and satirical, with odd interviews conducted with unsuspecting members of the public. He was fired from Bristol in 1990 after "talking over the news bulletins and making silly noises". In 1988 he also joined, from its launch, Greater London Radio (GLR). He presented "The Chris Morris Show" on GLR until 1993, when the show was suspended after a sketch was broadcast involving a child "outing" celebrities. 
In 1991, Morris joined Armando Iannucci's spoof news project "On the Hour". Broadcast on BBC Radio 4, it saw him work alongside Iannucci, Steve Coogan, Stewart Lee, Richard Herring and Rebecca Front. In 1992, Morris hosted Danny Baker's Radio 5 show "Morning Edition" for a week whilst Baker was on holiday. In 1994, Morris began a weekly evening show, the "Chris Morris Music Show", on BBC Radio 1 alongside Peter Baynham and 'man with a mobile phone' Paul Garner. In the shows, Morris perfected the spoof interview style that would become a central component of his "Brass Eye" programme. In the same year, Morris teamed up with Peter Cook (as Sir Arthur Streeb-Greebling) in a series of improvised conversations for BBC Radio 3 entitled "Why Bother?". In 1994, a BBC 2 television series based on "On the Hour" was broadcast under the name "The Day Today". "The Day Today" made a star of Morris, and marked the television debut of Steve Coogan's Alan Partridge character. The programme ended on a high after just one series, with Morris winning the 1994 British Comedy Award for Best Newcomer for his lead role as the Paxmanesque news anchor. In 1996, Morris appeared on the daytime programme "The Time, The Place", posing as an academic, Thurston Lowe, in a discussion entitled "Are British Men Lousy Lovers?", but was found out when a producer alerted the show's host, John Stapleton. In 1997, the black humour which had featured in "On the Hour" and "The Day Today" became more prominent in "Brass Eye", another spoof current affairs television documentary, shown on Channel 4. "Brass Eye" became known for tricking celebrities and politicians into throwing support behind public awareness campaigns for made-up issues that were often absurd or surreal (such as a drug called "cake" and an elephant with its trunk stuck up its anus). From 1997 to 1999 Morris created "Blue Jam" for BBC Radio 1, a surreal taboo-breaking radio show set to an ambient soundtrack. 
In 2000 this was followed by "Jam", a television reworking. Morris released a 'remix' version of this, entitled "Jaaaaam". In 2001, a special episode of "Brass Eye" on the moral panic that surrounds paedophilia attracted a record-breaking number of complaints – the total remains the third highest on UK television after "Celebrity Big Brother 2007" and "" – as well as heated discussion in the press. Many complainants, some of whom later admitted to not having seen the programme (notably Beverley Hughes, a government minister), felt the satire was directed at the victims of paedophilia, which Morris denied. Channel 4 defended the show, insisting the target was the media and its hysterical treatment of paedophilia, and not victims of crime. In 2002, Morris ventured into film, directing the short "My Wrongs#8245–8249 & 117", adapted from a "Blue Jam" monologue about a man led astray by a sinister talking dog. It was the first film project of Warp Films, a branch of Warp Records. In 2002 it won the BAFTA for Best Short Film. In 2005 Morris worked on a sitcom entitled "Nathan Barley", based on the character created by Charlie Brooker for his website TVGoHome (Morris had contributed to TVGoHome on occasion, under the pseudonym 'Sid Peach'). Co-written by Brooker and Morris, the series was broadcast on Channel 4 in early 2005. Morris was a cast member in "The IT Crowd", a Channel 4 sitcom which focused on the information technology department of the fictional company Reynholm Industries. The series was written and directed by Graham Linehan (writer of "Father Ted" and "Black Books", with whom Morris collaborated on "The Day Today", "Brass Eye" and "Jam") and produced by Ash Atalla ("The Office"). Morris played Denholm Reynholm, the eccentric managing director of the company. This marked the first time Morris had acted in a substantial role in a project which he had not developed himself. 
Morris's character appeared to leave the series during episode two of the second series, and made a brief return in the first episode of the third series. In November 2007, Morris wrote an article for "The Observer" in response to Ronan Bennett's article published six days earlier in "The Guardian". Bennett's article, "Shame on us", accused the novelist Martin Amis of racism. Morris's response, "The absurd world of Martin Amis", was also highly critical of Amis; although he did not endorse Bennett's accusation of racism, Morris likened Amis to the Muslim cleric Abu Hamza (who was jailed for inciting racial hatred in 2006), suggesting that both men employ "mock erudition, vitriol and decontextualised quotes from the Qur'an" to incite hatred. Morris served as script editor for the 2009 series "Stewart Lee's Comedy Vehicle", working with former colleagues Stewart Lee, Kevin Eldon and Armando Iannucci. He maintained this role for the second (2011) and third series (2014), also appearing as a mock interviewer dubbed the "hostile interrogator" in the third and fourth series. Morris completed his debut feature film "Four Lions", a satire about a group of Islamist terrorists in Sheffield, in late 2009. It premiered at the Sundance Film Festival in January 2010 and was short-listed for the festival's World Cinema Narrative prize. The film (working title "Boilerhouse") was picked up by Film4. Morris told "The Sunday Times" that the film sought to do for Islamic terrorism what "Dad's Army", the classic BBC comedy, did for the Nazis by showing them as "scary but also ridiculous". In 2012, Morris directed the seventh and penultimate episode of the first season of "Veep", an Armando Iannucci-devised American version of "The Thick of It". In 2013, he returned to direct two episodes for the second season of "Veep", and a further episode for season three in 2014. 
In 2013, Morris appeared briefly in Richard Ayoade's "The Double", a black comedy film based on the Fyodor Dostoyevsky novella of the same name. Morris had previously worked with Ayoade on "Nathan Barley" and "The IT Crowd". In February 2014, Morris made a surprise appearance at the beginning of a Stewart Lee live show, introducing the comedian with fictional anecdotes about their work together. The following month, Morris appeared in the third series of "Stewart Lee's Comedy Vehicle" as a "hostile interrogator", a role previously occupied by Armando Iannucci. In December 2014, it was announced that a short radio collaboration with Noel Fielding and Richard Ayoade would be broadcast on BBC Radio 6. According to Fielding, the work had been in progress since around 2006. However, in January 2015 it was decided, 'in consultation with [Morris]', that the project was not yet complete, and so the intended broadcast did not go ahead. A statement released by Film4 in February 2016 made reference to funding what would be Morris's second feature film. In November 2017 it was reported that Morris had shot the film, starring Anna Kendrick, in the Dominican Republic, but the title was not made public. It was later reported in January 2018 that Jim Gaffigan and Rupert Friend had joined the cast of the still-untitled film, and that the plot would revolve around an FBI hostage situation gone wrong. The completed film, titled "The Day Shall Come", had its world premiere at South by Southwest on 11 March 2019. Morris often co-writes and performs incidental music for his television shows, notably with "Jam" and the 'extended remix' version, "Jaaaaam". In the early 1990s Morris contributed a Pixies parody track entitled "Motherbanger" to a flexi-disc given away with an edition of the music magazine "Select". Morris supplied sketches for British band Saint Etienne's 1993 single "You're in a Bad Way" (the sketch 'Spongbake' appears at the end of the 4th track on the CD single). 
In 2000, he collaborated by mail with Amon Tobin to create the track "Bad Sex", which was released as a B-side on the Tobin single "Slowly". British band Stereolab's song "Nothing to Do with Me" from their 2001 album "Sound-Dust" featured various lines from Chris Morris sketches as lyrics. In 2003, Morris was listed in "The Observer" as one of the 50 funniest acts in British comedy. In 2005, Channel 4 aired a show called "The Comedian's Comedian" in which foremost writers and performers of comedy ranked their 50 favourite acts. Morris was at number eleven. Morris won the BAFTA for outstanding debut with his film "Four Lions". Adeel Akhtar and Nigel Lindsay collected the award in his absence. Lindsay stated that Morris had sent him a text message before they collected the award reading, 'Doused in petrol, Zippo at the ready'. In June 2012 Morris was placed at number 16 in the Top 100 People in UK Comedy. In 2010, a biography, "Disgusting Bliss: The Brass Eye of Chris Morris", was published. Written by Lucian Randall, the book depicted Morris as "brilliant but uncompromising", and a "frantic-minded perfectionist". In November 2014, a three-hour retrospective of Morris's radio career was broadcast on BBC Radio 4 Extra under the title 'Raw Meat Radio', presented by Mary Anne Hobbs and featuring interviews with Armando Iannucci, Peter Baynham, Paul Garner, and others. Morris won the Best TV Comedy Newcomer award from the British Comedy Awards in 1998 for his performance in "The Day Today". He has won two BAFTA awards: the BAFTA Award for Best Short Film in 2002 for "My Wrongs#8245–8249 & 117", and the BAFTA Award for Outstanding Debut by a British director, writer or producer in 2011 for "Four Lions". Morris lives in Brixton, with his wife, the actress turned literary agent Jo Unwin. The pair met in 1984 at the Edinburgh Festival, when he was playing bass guitar for the Cambridge Footlights Revue and she was in a comedy troupe called the Millies. 
They have two sons, Charles and Frederick, both of whom were born in Lambeth. Until the release of "Four Lions", Morris gave very few interviews, and little had been published about his personal life. In 2010 he made numerous media appearances to promote and support the film, both in the UK and US, at one point appearing as a guest on "Late Night with Jimmy Fallon". In 2019, two lengthy interviews with Morris conducted by fellow British comedian Adam Buxton for "The Adam Buxton Podcast" were released in the run-up to the release of Morris's film "The Day Shall Come". Morris can be heard as himself in a 2008 podcast for CERN, being taken on a tour of the facility by the physicist Brian Cox. Morris has a large birthmark on his face, which he usually covers with makeup when acting.
Colorado Colorado is a state in the western United States encompassing most of the southern Rocky Mountains as well as the northeastern portion of the Colorado Plateau and the western edge of the Great Plains. It is the 8th most extensive and 21st most populous U.S. state. The estimated population of Colorado is 5,758,736 as of 2019, an increase of 14.5% since the 2010 United States Census. The region has been inhabited by Native Americans for more than 13,000 years, with the Lindenmeier Site containing artifacts dating from approximately 11200 BC to 3000 BC; the eastern edge of the Rocky Mountains was a major migration route for early peoples who spread throughout the Americas. The state was named for the Colorado River, which early Spanish explorers named the "Río Colorado" ("Red River") for the ruddy silt the river carried from the mountains. The Territory of Colorado was organized on February 28, 1861, and on August 1, 1876, U.S. President Ulysses S. Grant signed Proclamation 230 admitting Colorado to the Union as the 38th state. Colorado is nicknamed the "Centennial State" because it became a state one century after the signing of the United States Declaration of Independence. Colorado is bordered by Wyoming to the north, Nebraska to the northeast, Kansas to the east, Oklahoma to the southeast, New Mexico to the south, Utah to the west, and touches Arizona to the southwest at the Four Corners. Colorado is noted for its vivid landscape of mountains, forests, high plains, mesas, canyons, plateaus, rivers and desert lands. Colorado is part of the western and southwestern United States and is one of the Mountain States. Denver is the capital and most populous city of Colorado. Residents of the state are known as Coloradans, although the antiquated term "Coloradoan" is occasionally used. Colorado is a comparatively wealthy state, ranking 8th in household income in 2016, and 11th in per capita income in 2010. 
Major parts of the economy include government and defense, mining, agriculture, tourism, and increasingly other kinds of manufacturing. With increasing temperatures and decreasing water availability, Colorado's agriculture, forestry and tourism economies are expected to be heavily affected by climate change. Colorado is notable for its diverse geography, which includes alpine mountains, high plains, deserts with huge sand dunes, and deep canyons. In 1861, the United States Congress defined the boundaries of the new Territory of Colorado exclusively by lines of latitude and longitude, stretching from 37°N to 41°N latitude, and from 102°02′48″W to 109°02′48″W longitude (25°W to 32°W from the Washington Meridian). After years of government surveys, the borders of Colorado are now officially defined by 697 boundary markers and 697 straight boundary lines. Colorado, Wyoming, and Utah are the only states that have their borders defined solely by straight boundary lines with no natural features. The southwest corner of Colorado is the Four Corners Monument at 36°59′56″N, 109°2′43″W. This border delineating Colorado, New Mexico, Arizona, and Utah is the only place in the United States where four states meet. The summit of Mount Elbert at elevation in Lake County is the highest point in Colorado and the Rocky Mountains of North America. Colorado is the only U.S. state that lies entirely above 1,000 meters elevation. The point where the Arikaree River flows out of Yuma County, Colorado, and into Cheyenne County, Kansas, is the lowest point in Colorado at elevation. This point, which holds the distinction of being the highest low elevation point of any state, is higher than the high elevation points of 18 states and the District of Columbia. A little less than half of Colorado is flat and rolling land. East of the Rocky Mountains are the Colorado Eastern Plains of the High Plains, the section of the Great Plains within Colorado at elevations ranging from roughly . 
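The two sets of boundary longitudes quoted above can be cross-checked with a little arithmetic: pairing 25°W of the Washington Meridian with 102°02′48″W of Greenwich implies an offset of 77°02′48″ between the two reference meridians. A minimal sketch of the conversion, assuming that offset (the helper names are illustrative, not from any surveying library):

```python
def dms_to_deg(d, m=0, s=0):
    """Convert degrees/minutes/seconds to decimal degrees."""
    return d + m / 60 + s / 3600

# Offset of the Washington Meridian west of Greenwich, as implied by the
# statutory figures: 25°W (Washington) corresponds to 102°02'48"W (Greenwich).
WASHINGTON_OFFSET = dms_to_deg(77, 2, 48)

def washington_to_greenwich(deg_west):
    """Degrees west of the Washington Meridian -> degrees west of Greenwich."""
    return deg_west + WASHINGTON_OFFSET

east_border = washington_to_greenwich(25)   # Colorado's eastern boundary
west_border = washington_to_greenwich(32)   # Colorado's western boundary
print(round(east_border, 4), round(west_border, 4))  # ≈ 102.0467 and 109.0467
```

Applying the same offset to both statutory figures reproduces 102°02′48″W and 109°02′48″W exactly, which is why the territory's borders could later be resurveyed as straight boundary lines.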
The Colorado plains are mostly prairies but also include deciduous forests, buttes, and canyons. Precipitation averages annually. Eastern Colorado is today mainly farmland and rangeland, along with small farming villages and towns. Corn, wheat, hay, soybeans, and oats are all typical crops. Most villages and towns in this region boast both a water tower and a grain elevator. Irrigation water is available from both surface and subterranean sources. Surface water sources include the South Platte, the Arkansas River, and a few other streams. Subterranean water is generally accessed through artesian wells. Heavy usage of these wells for irrigation purposes has caused underground water reserves to decline in the region. Eastern Colorado also hosts a considerable amount and range of livestock operations, such as cattle ranches and hog farms. Roughly 70% of Colorado's population resides along the eastern edge of the Rocky Mountains in the Front Range Urban Corridor between Cheyenne, Wyoming, and Pueblo, Colorado. This region is partially protected from prevailing storms that blow in from the Pacific Ocean region by the high Rockies in the middle of Colorado. The "Front Range" includes Denver, Boulder, Fort Collins, Loveland, Castle Rock, Colorado Springs, Pueblo, Greeley, and other townships and municipalities in between. On the other side of the Rockies, the significant population centers in Western Colorado (which is not considered the "Front Range") are the cities of Grand Junction, Durango, and Montrose. The Continental Divide of the Americas extends along the crest of the Rocky Mountains. The area of Colorado to the west of the Continental Divide is called the Western Slope of Colorado. West of the Continental Divide, water flows to the southwest via the Colorado River and the Green River into the Gulf of California. Within the interior of the Rocky Mountains are several large parks, which are high broad basins. 
In the north, on the east side of the Continental Divide is the North Park of Colorado. The North Park is drained by the North Platte River, which flows north into Wyoming and Nebraska. Just to the south of North Park, but on the western side of the Continental Divide, is the Middle Park of Colorado, which is drained by the Colorado River. The South Park of Colorado is the region of the headwaters of the South Platte River. In southernmost Colorado is the large San Luis Valley, where the headwaters of the Rio Grande are located. The valley sits between the Sangre de Cristo Mountains and San Juan Mountains, and consists of large desert lands that eventually run into the mountains. The Rio Grande drains due south into New Mexico, Mexico, and Texas. Across the Sangre de Cristo Range to the east of the San Luis Valley lies the Wet Mountain Valley. These basins, particularly the San Luis Valley, lie along the Rio Grande Rift, a major geological formation of the Rocky Mountains, and its branches. To the west of the Great Plains of Colorado rises the eastern slope of the Rocky Mountains. Notable peaks of the Rocky Mountains include Longs Peak, Mount Evans, Pikes Peak, and the Spanish Peaks near Walsenburg, in southern Colorado. This area drains to the east and the southeast, ultimately either via the Mississippi River or the Rio Grande into the Gulf of Mexico. The Rocky Mountains within Colorado contain 53 peaks that are 14,000 feet (4,267 m) or higher in elevation above sea level, known as fourteeners. These mountains are largely covered with trees such as conifers and aspens up to the tree line, at an elevation of about in southern Colorado to about in northern Colorado. Above this tree line only alpine vegetation grows. Only small parts of the Colorado Rockies are snow-covered year-round. Much of the alpine snow melts by mid-August with the exception of a few snow-capped peaks and a few small glaciers. 
The Colorado Mineral Belt, stretching from the San Juan Mountains in the southwest to Boulder and Central City on the Front Range, contains most of the historic gold- and silver-mining districts of Colorado. Mount Elbert is the highest summit of the Rocky Mountains. The 30 highest major summits of the Rocky Mountains of North America all lie within the state. The Western Slope area of Colorado includes the western face of the Rocky Mountains and all of the state to the western border. This area includes several terrains and climates from alpine mountains to arid deserts. The Western Slope includes many ski resort towns in the Rocky Mountains and towns west of the mountains. It is less populous than the Front Range but includes a large number of national parks and monuments. From west to east, the land of Colorado consists of desert lands, desert plateaus, alpine mountains, National Forests, relatively flat grasslands, scattered forests, buttes, and canyons in the western edge of the Great Plains. The famous Pikes Peak is located just west of Colorado Springs. Its isolated peak is visible from nearly the Kansas border on clear days, and also far to the north and the south. The northwestern corner of Colorado is a sparsely populated region, and it contains part of the noted Dinosaur National Monument, which not only is a paleontological area, but is also a scenic area of rocky hills, canyons, arid desert, and streambeds. Here, the Green River briefly crosses over into Colorado. Desert lands in Colorado are located in and around areas such as Pueblo, Cañon City, Florence, Great Sand Dunes National Park and Preserve, San Luis Valley, Cortez, Canyon of the Ancients National Monument, Hovenweep National Monument, Ute Mountain, Delta, Grand Junction, Colorado National Monument, and other areas surrounding the Uncompahgre Plateau and Uncompahgre National Forest. 
The Western Slope of Colorado is drained by the Colorado River and its tributaries (primarily the Gunnison River, Green River, and the San Juan River), or by evaporation in its arid areas. The Colorado River flows through Glenwood Canyon, and then through an arid valley made up of desert from Rifle to Parachute, through the desert canyon of De Beque Canyon, and into the arid desert of Grand Valley, where the city of Grand Junction is located. Also prominent in or near the southern portion of the Western Slope are the Grand Mesa, which lies to the southeast of Grand Junction; the high San Juan Mountains, a rugged mountain range; and to the west of the San Juan Mountains, the Colorado Plateau, a high arid region that borders Southern Utah. Grand Junction, Colorado is the largest city on the Western Slope. Grand Junction and Durango are the only major centers of television broadcasting west of the Continental Divide in Colorado, though most mountain resort communities publish daily newspapers. Grand Junction is located along Interstate 70, the only major highway in Western Colorado. Grand Junction is also along the major railroad of the Western Slope, the Union Pacific. This railroad also provides the tracks for Amtrak's California Zephyr passenger train, which crosses the Rocky Mountains between Denver and Grand Junction via a route on which there are no continuous highways. The Western Slope includes multiple notable destinations in the Colorado Rocky Mountains, including Glenwood Springs, with its resort hot springs, and the ski resorts of Aspen, Breckenridge, Vail, Crested Butte, Steamboat Springs, and Telluride. Higher education in and near the Western Slope can be found at Colorado Mesa University in Grand Junction, Western Colorado University in Gunnison, Fort Lewis College in Durango, and Colorado Mountain College in Glenwood Springs and Steamboat Springs. 
The Four Corners Monument in the southwest corner of Colorado marks the common boundary of Colorado, New Mexico, Arizona, and Utah, the only such place in the United States. The climate of Colorado is more complex than that of states outside the Mountain States region. Unlike most other states, southern Colorado is not always warmer than northern Colorado. Most of Colorado is made up of mountains, foothills, high plains, and desert lands. Mountains and surrounding valleys greatly affect local climate. As a general rule, with an increase in elevation comes a decrease in temperature and an increase in precipitation. Northeast, east, and southeast Colorado are mostly the high plains, while Northern Colorado is a mix of high plains, foothills, and mountains. Northwest and west Colorado are predominantly mountainous, with some desert lands mixed in. Southwest and southern Colorado are a complex mixture of desert and mountain areas. The climate of the Eastern Plains is semiarid (Köppen climate classification: "BSk") with low humidity and moderate precipitation, usually from annually. The area is known for its abundant sunshine and cool, clear nights, which give this area a large average diurnal temperature range. The difference between the highs of the days and the lows of the nights can be considerable as warmth dissipates to space during clear nights, the heat radiation not being trapped by clouds. The Front Range urban corridor, where most of the population of Colorado resides, lies in a pronounced precipitation shadow as a result of being on the lee side of the Rocky Mountains. In summer, this area can have many days above 95 °F (35 °C) and often 100 °F (38 °C). On the plains, the winter lows usually range from 25 to −10 °F (−4 to −23 °C). About 75% of the precipitation falls within the growing season, from April to September, but this area is very prone to droughts. 
Most of the precipitation comes from thunderstorms, which can be severe, and from major snowstorms that occur in the winter and early spring. Otherwise, winters tend to be mostly dry and cold. In much of the region, March is the snowiest month. April and May are normally the rainiest months, while April is the wettest month overall. The Front Range cities closer to the mountains tend to be warmer in winter due to Chinook winds, which can bring temperatures of 70 °F (21 °C) or higher. The average July temperature is 55 °F (13 °C) in the morning and 90 °F (32 °C) in the afternoon. The average January temperature is 18 °F (−8 °C) in the morning and 48 °F (9 °C) in the afternoon, although variation between consecutive days can be 40 °F (22 °C). Just west of the plains and into the foothills, there are a wide variety of climate types. Locations merely a few miles apart can experience entirely different weather depending on the topography. Most valleys have a semi-arid climate not unlike the eastern plains, which transitions to an alpine climate at the highest elevations. Microclimates also exist in local areas that run nearly the entire spectrum of climates, including subtropical highland ("Cfb/Cwb"), humid subtropical ("Cfa"), humid continental ("Dfa/Dfb"), Mediterranean ("Csa/Csb") and subarctic ("Dfc"). Extreme weather changes are common in Colorado, although a significant portion of the extreme weather occurs in the least populated areas of the state. Thunderstorms are common east of the Continental Divide in the spring and summer, yet are usually brief. Hail is a common sight in the mountains east of the Divide and across the eastern Plains, especially the northeast part of the state. Hail is the most commonly reported warm-season severe weather hazard, and occasionally causes human injuries, as well as significant property damage. The eastern Plains are subject to some of the biggest hail storms in North America. 
Notable examples are the severe hailstorms that hit Denver on July 11, 1990 and May 8, 2017, the latter being the costliest ever in the state. The Eastern Plains are part of the extreme western portion of Tornado Alley; some damaging tornadoes in the Eastern Plains include the 1990 Limon F3 tornado and the 2008 Windsor EF3 tornado, which devastated the small town. Portions of the eastern Plains see especially frequent tornadoes, both those spawned from mesocyclones in supercell thunderstorms and from less intense landspouts, such as within the Denver convergence vorticity zone (DCVZ). The Plains are also susceptible to occasional floods and particularly severe flash floods, which are caused both by thunderstorms and by the rapid melting of snow in the mountains during warm weather. Notable examples include the 1965 Denver Flood, the Big Thompson River flooding of 1976 and the 2013 Colorado floods. Hot weather is common during summers in Denver. The city's record in 1901 for the number of consecutive days above 90 °F (32 °C) was broken during the summer of 2008. The new record of 24 consecutive days surpassed the previous record by almost a week. Much of Colorado is very dry, with the state averaging only of precipitation per year statewide. The state rarely experiences a time when some portion is not in some degree of drought. The lack of precipitation contributes to the severity of wildfires in the state, such as the Hayman Fire of 2002, one of the largest wildfires in American history, and the Fourmile Canyon Fire of 2010, which until the Waldo Canyon Fire and High Park Fire of June 2012, and the Black Forest Fire of June 2013, was the most destructive wildfire in Colorado's recorded history. However, some of the mountainous regions of Colorado receive a huge amount of moisture from winter snowfalls. 
The spring melts of these snows often cause great waterflows in the Yampa River, the Colorado River, the Rio Grande, the Arkansas River, the North Platte River, and the South Platte River. Water flowing out of the Colorado Rocky Mountains is a very significant source of water for the farms, towns, and cities of the southwest states of New Mexico, Arizona, Utah, and Nevada, as well as the Midwest, such as Nebraska and Kansas, and the southern states of Oklahoma and Texas. A significant amount of water is also diverted for use in California; occasionally (formerly naturally and consistently), the flow of water reaches northern Mexico. The highest official ambient air temperature ever recorded in Colorado was on July 20, 2019, at John Martin Dam. The lowest official air temperature was on February 1, 1985, at Maybell. Despite its mountainous terrain, Colorado is relatively quiet seismically. The U.S. National Earthquake Information Center is located in Golden. On August 22, 2011, a 5.3 magnitude earthquake occurred west-southwest of the city of Trinidad. There were no casualties and only a small amount of damage was reported. It was the second-largest earthquake in Colorado's history. A magnitude 5.7 earthquake was recorded in 1973. In the early morning hours of August 24, 2018, four minor earthquakes, ranging from magnitude 2.9 to 4.3, rattled the state of Colorado. Colorado has recorded 525 earthquakes since 1973, a majority of which range from 2 to 3.5 on the Richter scale. The region that is today the state of Colorado has been inhabited by Native Americans for more than 13,000 years. The Lindenmeier Site in Larimer County contains artifacts dating from approximately 11200 BC to 3000 BC. The eastern edge of the Rocky Mountains was a major migration route that was important to the spread of early peoples throughout the Americas. The Ancient Pueblo peoples lived in the valleys and mesas of the Colorado Plateau. 
The Ute Nation inhabited the mountain valleys of the Southern Rocky Mountains and the Western Rocky Mountains, even as far east as the present-day Front Range. The Apache and the Comanche also inhabited Eastern and Southeastern parts of the state. At times, the Arapaho Nation and the Cheyenne Nation moved west to hunt across the High Plains. The Spanish Empire claimed Colorado as part of its New Mexico province prior to U.S. involvement in the region. The U.S. acquired a territorial claim to the eastern Rocky Mountains with the Louisiana Purchase from France in 1803. This U.S. claim conflicted with the claim by Spain to the upper Arkansas River Basin as the exclusive trading zone of its colony of Santa Fé de Nuevo México. In 1806, Zebulon Pike led a U.S. Army reconnaissance expedition into the disputed region. Colonel Pike and his men were arrested by Spanish cavalrymen in the San Luis Valley the following February, taken to Chihuahua, and expelled from Mexico the following July. The U.S. relinquished its claim to all land south and west of the Arkansas River and south of the 42nd parallel north and west of the 100th meridian west as part of its purchase of Florida from Spain with the Adams-Onís Treaty of 1819. The treaty took effect February 22, 1821. Having settled its border with Spain, the U.S. admitted the southeastern portion of the Territory of Missouri to the Union as the state of Missouri on August 10, 1821. The remainder of Missouri Territory, including what would become northeastern Colorado, became unorganized territory, and remained so for 33 years over the question of slavery. After 11 years of war, Spain finally recognized the independence of Mexico with the Treaty of Córdoba signed on August 24, 1821. Mexico eventually ratified the Adams-Onís Treaty in 1831. The Texian Revolt of 1835–36 fomented a dispute between the U.S. and Mexico which eventually erupted into the Mexican–American War in 1846. Mexico surrendered its northern territory to the U.S. 
with the Treaty of Guadalupe Hidalgo at the conclusion of the war in 1848. Most American settlers traveling overland west to the Oregon Country, the new goldfields of California, or the new Mormon settlements of the State of Deseret in the Salt Lake Valley, avoided the rugged Southern Rocky Mountains, and instead followed the North Platte River and Sweetwater River to South Pass (Wyoming), the lowest crossing of the Continental Divide between the Southern Rocky Mountains and the Central Rocky Mountains. In 1849, the Mormons of the Salt Lake Valley organized the extralegal State of Deseret, claiming the entire Great Basin and all lands drained by the rivers Green, Grand, and Colorado. The federal government of the U.S. flatly refused to recognize the new Mormon government, because it was theocratic and sanctioned plural marriage. Instead, the Compromise of 1850 divided the Mexican Cession and the northwestern claims of Texas into a new state and two new territories, the state of California, the Territory of New Mexico, and the Territory of Utah. On April 9, 1851, Mexican American settlers from the area of Taos settled the village of San Luis, then in the New Mexico Territory, later to become Colorado's first permanent Euro-American settlement. In 1854, Senator Stephen A. Douglas persuaded the U.S. Congress to divide the unorganized territory east of the Continental Divide into two new organized territories, the Territory of Kansas and the Territory of Nebraska, and an unorganized southern region known as the Indian territory. Each new territory was to decide the fate of slavery within its boundaries, but this compromise merely served to fuel animosity between free soil and pro-slavery factions. The gold seekers organized the Provisional Government of the Territory of Jefferson on August 24, 1859, but this new territory failed to secure approval from the Congress of the United States embroiled in the debate over slavery. 
The election of Abraham Lincoln as President of the United States on November 6, 1860, led to the secession of nine southern slave states and the threat of civil war among the states. Seeking to augment the political power of the Union states, the Republican Party-dominated Congress quickly admitted the eastern portion of the Territory of Kansas into the Union as the free State of Kansas on January 29, 1861, leaving the western portion of the Kansas Territory, and its gold-mining areas, as unorganized territory. Thirty days later, on February 28, 1861, outgoing U.S. President James Buchanan signed an Act of Congress organizing the free Territory of Colorado. The original boundaries of Colorado remain unchanged except for government survey amendments. The name Colorado was chosen because it was commonly believed that the Colorado River originated in the territory. In 1776, the Spanish priest Silvestre Vélez de Escalante recorded a Native American name for the river that referred to the red-brown silt it carried from the mountains. In 1859, a U.S. Army topographic expedition led by Captain John Macomb located the confluence of the Green River with the Grand River in what is now Canyonlands National Park in Utah. The Macomb party designated the confluence as the source of the Colorado River. On April 12, 1861, South Carolina artillery opened fire on Fort Sumter, starting the American Civil War. While many gold seekers held sympathies for the Confederacy, the vast majority remained fiercely loyal to the Union cause. In 1862, a force of Texas cavalry invaded the Territory of New Mexico and captured Santa Fe on March 10. The object of this Western Campaign was to seize or disrupt the gold fields of Colorado and California and to seize ports on the Pacific Ocean for the Confederacy. A hastily organized force of Colorado volunteers force-marched from Denver City, Colorado Territory, to Glorieta Pass, New Mexico Territory, in an attempt to block the Texans.
On March 28, the Coloradans and local New Mexico volunteers stopped the Texans at the Battle of Glorieta Pass, destroyed their cannon and supply wagons, and dispersed 500 of their horses and mules. The Texans were forced to retreat to Santa Fe. Having lost the supplies for their campaign and finding little support in New Mexico, the Texans abandoned Santa Fe and returned to San Antonio in defeat. The Confederacy made no further attempts to seize the Southwestern United States. In 1864, Territorial Governor John Evans appointed the Reverend John Chivington as Colonel of the Colorado Volunteers with orders to protect white settlers from Cheyenne and Arapaho warriors who were accused of stealing cattle. Colonel Chivington ordered his men to attack a band of Cheyenne and Arapaho encamped along Sand Creek. Chivington reported that his troops killed more than 500 warriors. The militia returned to Denver City in triumph, but several officers reported that the so-called battle was a blatant massacre of Indians at peace, that most of the dead were women and children, and that the bodies of the dead had been hideously mutilated and desecrated. Three U.S. Army inquiries condemned the action, and incoming President Andrew Johnson asked Governor Evans for his resignation, but none of the perpetrators was ever punished. This event is now known as the Sand Creek massacre. In the midst and aftermath of the Civil War, many discouraged prospectors returned to their homes, but a few stayed and developed mines, mills, farms, ranches, roads, and towns in Colorado Territory. On September 14, 1864, James Huff discovered silver near Argentine Pass, the first of many silver strikes. In 1867, the Union Pacific Railroad laid its tracks west to Weir, now Julesburg, in the northeast corner of the Territory. The Union Pacific linked up with the Central Pacific Railroad at Promontory Summit, Utah, on May 10, 1869, to form the First Transcontinental Railroad.
The Denver Pacific Railway reached Denver in June of the following year, and the Kansas Pacific arrived two months later to forge the second line across the continent. In 1872, rich veins of silver were discovered in the San Juan Mountains on the Ute Indian reservation in southwestern Colorado. The Ute people were removed from the San Juans the following year. The United States Congress passed an enabling act on March 3, 1875, specifying the requirements for the Territory of Colorado to become a state. On August 1, 1876 (four weeks after the centennial of the United States), U.S. President Ulysses S. Grant signed a proclamation admitting Colorado to the Union as the 38th state, earning it the moniker "Centennial State". The discovery of a major silver lode near Leadville in 1878 triggered the Colorado Silver Boom. The Sherman Silver Purchase Act of 1890 invigorated silver mining, and Colorado's last, but greatest, gold strike at Cripple Creek a few months later lured a new generation of gold seekers. Colorado women were granted the right to vote on November 7, 1893, making Colorado the second state to grant universal suffrage and the first to do so by popular vote (of Colorado men). The repeal of the Sherman Silver Purchase Act in 1893 led to a staggering collapse of the mining and agricultural economy of Colorado, but the state slowly and steadily recovered. Between the 1880s and 1930s, Denver's floriculture industry developed into a major industry in Colorado; this period became known locally as the Carnation Gold Rush. Poor labor conditions and discontent among miners resulted in several major clashes between strikers and the Colorado National Guard, including the 1903–1904 Western Federation of Miners strike and the Colorado Coalfield War, the latter of which included the Ludlow massacre that killed a dozen women and children. In 1927, the Columbine Mine massacre resulted in six dead strikers following a confrontation with Colorado Rangers.
More than 5,000 Colorado miners—many of them immigrants—are estimated to have died in accidents since records began to be formally collected following an accident in Crested Butte that killed 59 in 1884. Colorado became the first western state to host a major political convention when the Democratic Party met in Denver in 1908. By the 1930 U.S. Census, the population of Colorado first exceeded one million residents. Colorado suffered greatly through the Great Depression and the Dust Bowl of the 1930s, but a major wave of immigration following World War II boosted Colorado's fortunes. Tourism became a mainstay of the state economy, and high technology became an important economic engine. The United States Census Bureau estimated that the population of Colorado exceeded five million in 2009. Three warships of the U.S. Navy have been named the USS "Colorado". The first USS "Colorado" was named for the Colorado River. The later two ships were named in honor of the state, including the battleship USS "Colorado", which served in World War II in the Pacific beginning in 1941. At the time of the attack on Pearl Harbor, this USS "Colorado" was located at the naval base in San Diego, California, and hence went unscathed. On September 11, 1957, a plutonium fire occurred at the Rocky Flats Plant, resulting in significant plutonium contamination of surrounding populated areas. After the gray wolf ("Canis lupus") was extirpated from Colorado by trapping and poisoning in the 1930s, a wolf pack recolonized Moffat County in northwestern Colorado in 2019. The United States Census Bureau estimates that the population of Colorado was 5,758,736 as of 2019, a 14.51% increase since the 2010 United States Census. Colorado's most populous city and capital is Denver.
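The quoted growth figure can be checked directly from the two census numbers given in the text (the 2010 count of 5,029,196 appears in the following paragraph); a minimal arithmetic sketch:

```python
# Quick check of the quoted population growth. Both inputs are from the
# text: the 2010 census count and the 2019 Census Bureau estimate.
census_2010 = 5_029_196
estimate_2019 = 5_758_736

pct_increase = (estimate_2019 - census_2010) / census_2010 * 100
print(f"{pct_increase:.2f}%")  # 14.51%, matching the stated increase
```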
The Greater Denver Metropolitan Area, with an estimated 2017 population of 3,515,374, is the largest metropolitan area within the state and lies within the larger Front Range Urban Corridor, home to about five million people. The largest increases are expected in the Front Range Urban Corridor, especially in the Denver metropolitan area. The state's fastest-growing counties are Douglas and Weld. The center of population of Colorado is located just north of the village of Critchell in Jefferson County. According to the 2010 United States Census, Colorado had a population of 5,029,196. People of Hispanic and Latino American heritage (of any race) made up 20.7% of the population. According to the 2000 Census, the largest ancestry groups in Colorado are German (22%), including those of Swiss and Austrian descent, Mexican (18%), Irish (12%), and English (12%). Persons reporting German ancestry are especially numerous in the Front Range, the Rockies (west-central counties), and the eastern High Plains. Colorado has a high proportion of Hispanic, mostly Mexican-American, citizens in metropolitan Denver, Colorado Springs, the smaller cities of Greeley and Pueblo, and elsewhere. Southern, southwestern, and southeastern Colorado have large numbers of Hispanos, the descendants of the early Mexican settlers of colonial Spanish origin. In 1940, the Census Bureau reported Colorado's population as 8.2% Hispanic and 90.3% non-Hispanic white. The Hispanic population of Colorado has continued to grow quickly over the past decades. By 2019, Hispanics made up 22% of Colorado's population, and non-Hispanic whites made up 70%. Spoken English in Colorado has many Spanish idioms. Colorado also has some large African-American communities located in Denver, in the neighborhoods of Montbello, Five Points, Whittier, and many other East Denver areas.
The state has sizable numbers of Asian-Americans of Mongolian, Chinese, Filipino, Korean, Southeast Asian, and Japanese descent. The highest population of Asian Americans can be found on the south and southeast sides of Denver, as well as some on Denver's southwest side. The Denver metropolitan area is considered more liberal and diverse than much of the state when it comes to political issues and environmental concerns. There were a total of 70,331 births in Colorado in 2006 (a birth rate of 14.6 per thousand). In 2007, non-Hispanic whites were involved in 59.1% of all the births. Some 14.06% of those births involved a non-Hispanic white person and someone of a different race, most often a couple including one Hispanic partner. A birth where at least one Hispanic person was involved accounted for 43% of the births in Colorado. As of the 2010 Census, Colorado has the seventh-highest percentage of Hispanics (20.7%) in the U.S., behind New Mexico (46.3%), California (37.6%), Texas (37.6%), Arizona (29.6%), Nevada (26.5%), and Florida (22.5%). Per the 2000 census, the Hispanic population was estimated to be 918,899, or approximately 20% of the state's total population. Colorado has the fifth-largest population of Mexican-Americans, behind California, Texas, Arizona, and Illinois. In percentage terms, Colorado has the sixth-highest share of Mexican-Americans, behind New Mexico, California, Texas, Arizona, and Nevada. In 2011, 46% of Colorado's population younger than one year of age were minorities, meaning that they had at least one parent who was not non-Hispanic white. (Note: births in the table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number.) In 2017, Colorado recorded the second-lowest fertility rate in the United States outside of New England, after Oregon, at 1.63 children per woman.
Significant contributing factors to the decline in pregnancies were the Title X Family Planning Program and an intrauterine device grant from Warren Buffett's family. Spanish is the second-most spoken language in Colorado, after English. One Native Coloradan language, Colorado River Numic (Ute), is still spoken in Colorado. The major religious affiliations of the people of Colorado are 64% Christian (44% Protestant, 16% Roman Catholic, 3% Mormon, and 1% Eastern Orthodox), 1% Jewish, 1% Muslim, 1% Buddhist, and 4% other. The religiously unaffiliated make up 29% of the population. The largest denominations by number of adherents in 2010 were the Catholic Church with 811,630; non-denominational Evangelical Protestants with 229,981; and The Church of Jesus Christ of Latter-day Saints with 151,433. According to several studies, Coloradans have the lowest rates of obesity of any state in the US: 18% of the population was considered medically obese, and while this was the lowest rate in the nation, the percentage had increased from 17% in 2004. According to a report in the Journal of the American Medical Association, residents of Colorado had a 2014 life expectancy of 80.21 years, the longest of any U.S. state. A number of film productions have shot on location in Colorado, especially prominent Westerns like "True Grit", "The Searchers", and "Butch Cassidy and the Sundance Kid". A number of historic military forts, railways with trains still operating, and mining ghost towns have been used and transformed for historical accuracy in well-known films. There are also a number of scenic highways and mountain passes that helped to feature the open road in films such as "Vanishing Point", "Bingo" and "Starman". Some Colorado landmarks have been featured in films, such as The Stanley Hotel in "Dumb and Dumber" and "The Shining" and the Sculptured House in "Sleeper".
In 2015, "Furious 7" filmed driving sequences on Pikes Peak Highway in Colorado. The TV series "Good Luck Charlie" is set in Denver, Colorado. The Colorado Office of Film and Television has noted that more than 400 films have been shot in Colorado. There are also a number of established film festivals in Colorado, including Aspen Shortsfest, Boulder International Film Festival, Castle Rock Film Festival, Denver Film Festival, Festivus Film Festival (ended in 2013), Mile High Horror Film Festival, Moondance International Film Festival, Mountainfilm in Telluride, Rocky Mountain Women's Film Festival, and Telluride Film Festival. Colorado is known for its Southwestern and Rocky Mountain cuisine. Mexican restaurants are prominent throughout the state. Boulder was named America's Foodiest Town 2010 by Bon Appétit. Boulder, and Colorado in general, is home to a number of national food and beverage companies, top-tier restaurants, and farmers' markets. Boulder also has more Master Sommeliers per capita than any other city, including San Francisco and New York. The Food & Wine Classic is held annually each June in Aspen, which also has a reputation as the culinary capital of the Rocky Mountain region. Denver is known for steak, but now has a diverse culinary scene with many restaurants. Colorado wines include award-winning varietals that have attracted favorable notice from outside the state. With wines made from traditional "Vitis vinifera" grapes along with wines made from cherries, peaches, plums, and honey, Colorado wines have won top national and international awards for their quality. Colorado's grape-growing regions contain the highest-elevation vineyards in the United States, with most viticulture in the state practiced at elevations well above sea level. The mountain climate ensures warm summer days and cool nights.
Colorado is home to two designated American Viticultural Areas, the Grand Valley AVA and the West Elks AVA, where most of the vineyards in the state are located; however, an increasing number of wineries are located along the Front Range. In 2018, Wine Enthusiast Magazine named Colorado's Grand Valley AVA in Mesa County one of the top ten wine travel destinations in the world. Colorado is home to many nationally praised microbreweries, including New Belgium Brewing Company, Odell Brewing Company, Great Divide Brewing Company, and Bristol Brewing Company. The area of northern Colorado near and between the cities of Denver, Boulder, and Fort Collins is known as the "Napa Valley of Beer" due to its high density of craft breweries. Colorado is open to cannabis (marijuana) tourism. With the voters' adoption of Amendment 64 in the 2012 general election, Colorado became the first state in the union to legalize the medicinal (2000), industrial (2013), and recreational (2014) use of marijuana. Colorado's marijuana industry sold $1.31 billion worth of marijuana in 2016 and $1.26 billion in the first three quarters of 2017, and the state generated tax, fee, and license revenue of $194 million in 2016 on legal marijuana sales. Colorado regulates hemp as any part of the plant with less than 0.3% THC. Amendment 64 required the Colorado state legislature to enact legislation governing the cultivation, processing, and sale of recreational marijuana and industrial hemp. On April 4, 2014, Senate Bill 14–184 addressing oversight of Colorado's industrial hemp program was first introduced, ultimately being signed into law by Governor John Hickenlooper on May 31, 2014. On November 7, 2000, 54% of Colorado voters passed Amendment 20, which amended the Colorado State Constitution to allow the medical use of marijuana.
A patient's medical use of marijuana, within statutory limits, is lawful. Colorado currently lists "eight medical conditions for which patients can use marijuana—cancer, glaucoma, HIV/AIDS, muscle spasms, seizures, severe pain, severe nausea and cachexia, or dramatic weight loss and muscle atrophy". Colorado Governor John Hickenlooper allocated about half of the state's $13 million Medical Marijuana Program Cash Fund to medical research in the 2014 budget. On November 6, 2012, voters amended the state constitution to protect "personal use" of marijuana for adults, establishing a framework to regulate marijuana in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014. Colorado has teams in five major professional sports leagues, all based in the Denver metropolitan area. Colorado is the least populous state with a franchise in each of the major professional sports leagues. The Pikes Peak International Hill Climb is a major hillclimbing motor race held on the Pikes Peak Highway. The Cherry Hills Country Club has hosted several professional golf tournaments, including the U.S. Open, U.S. Senior Open, U.S. Women's Open, PGA Championship, and BMW Championship. A number of Colorado universities and colleges participate in National Collegiate Athletic Association Division I. The most popular college sports program is the University of Colorado Buffaloes, who formerly played in the Big 12 but now play in the Pac-12. They won the 1957 and 1991 Orange Bowls, the 1995 Fiesta Bowl, and the 1996 Cotton Bowl Classic. CNBC's list of "Top States for Business for 2010" recognized Colorado as the third-best state in the nation, behind only Texas and Virginia. The total state product in 2015 was $318.6 billion. Median annual household income in 2016 was $70,666, eighth in the nation. Per capita personal income in 2010 was $51,940, ranking Colorado 11th in the nation.
The state's economy broadened from its mid-19th-century roots in mining when irrigated agriculture developed, and by the late 19th century, raising livestock had become important. Early industry was based on the extraction and processing of minerals and agricultural products. Current agricultural products are cattle, wheat, dairy products, corn, and hay. The federal government is also a major economic force in the state, with many important federal facilities including NORAD (North American Aerospace Defense Command), the United States Air Force Academy, Schriever Air Force Base, located approximately 10 miles (16 kilometers) east of Peterson Air Force Base, and Fort Carson, both located in Colorado Springs within El Paso County; NOAA, the National Renewable Energy Laboratory (NREL) in Golden, and the National Institute of Standards and Technology in Boulder; the U.S. Geological Survey and other government agencies at the Denver Federal Center near Lakewood; the Denver Mint, Buckley Air Force Base, the Tenth Circuit Court of Appeals, and the Byron G. Rogers Federal Building and United States Courthouse in Denver; and a federal Supermax prison and other federal prisons near Cañon City. In addition to these and other federal agencies, Colorado has abundant National Forest land and four National Parks that contribute to federal ownership of land in Colorado amounting to 37% of the total area of the state. In the second half of the 20th century, the industrial and service sectors expanded greatly. The state's economy is diversified and is notable for its concentration of scientific research and high-technology industries. Other industries include food processing, transportation equipment, machinery, chemical products, and the extraction of metals such as gold (see Gold mining in Colorado), silver, and molybdenum. Colorado now also has the largest annual production of beer of any state. Denver is an important financial center.
The state's diverse geography and majestic mountains attract millions of tourists every year, including 85.2 million in 2018. Tourism contributes greatly to Colorado's economy, with tourists generating $22.3 billion in 2018. A number of nationally known brand names have originated in Colorado factories and laboratories. From Denver came the forerunner of telecommunications giant Qwest in 1879, Samsonite luggage in 1910, Gates belts and hoses in 1911, and Russell Stover Candies in 1923. Kuner canned vegetables began in Brighton in 1864. From Golden came Coors beer in 1873, CoorsTek industrial ceramics in 1920, and Jolly Rancher candy in 1949. CF&I railroad rails, wire, nails, and pipe debuted in Pueblo in 1892. Holly Sugar was first milled from beets in Holly in 1905, and later moved its headquarters to Colorado Springs. The present-day Swift meatpacking operation in Greeley evolved from Monfort of Colorado, Inc., established in 1930. Estes model rockets were launched in Penrose in 1958. Fort Collins has been the home of Woodward Governor Company's motor controllers (governors) since 1870, and Waterpik dental water jets and showerheads since 1962. Celestial Seasonings herbal teas have been made in Boulder since 1969. Rocky Mountain Chocolate Factory made its first candy in Durango in 1981. Colorado has a flat 4.63% income tax, regardless of income level. Unlike most states, which calculate taxes based on federal "adjusted gross income", Colorado taxes are based on "taxable income"—income after federal exemptions and federal itemized (or standard) deductions. Colorado's state sales tax is 2.9% on retail sales, and many counties and cities charge their own rates in addition to the base state rate. When state revenues exceed state constitutional limits, according to Colorado's Taxpayer Bill of Rights legislation, full-year Colorado residents can claim a sales tax refund on their individual state income tax return.
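The flat structure described above can be sketched in a few lines. The 4.63% income rate and 2.9% base sales rate are from the text; the function names and the sample local surtax are illustrative assumptions, not official formulas:

```python
# Illustrative sketch of Colorado's flat tax structure described above.
# The 4.63% income and 2.9% sales rates are from the text; the function
# names and the example 4% local rate are hypothetical.
STATE_INCOME_TAX_RATE = 0.0463
STATE_SALES_TAX_RATE = 0.029

def colorado_income_tax(federal_taxable_income: float) -> float:
    """Flat rate applied to federal taxable income (income after
    federal exemptions and itemized or standard deductions)."""
    return federal_taxable_income * STATE_INCOME_TAX_RATE

def colorado_sales_tax(retail_price: float, local_rate: float = 0.0) -> float:
    """Base state rate of 2.9%, plus whatever rate the county or city
    charges on top of it."""
    return retail_price * (STATE_SALES_TAX_RATE + local_rate)

print(round(colorado_income_tax(50_000), 2))   # 2315.0
print(round(colorado_sales_tax(100, 0.04), 2)) # 6.9, with a 4% local rate
```

Because the income rate is flat rather than bracketed, the same percentage applies at every income level, which is why the computation reduces to a single multiplication.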
There are also certain county and special district taxes that may apply. Real estate and personal business property are taxable in Colorado. The state's senior property tax exemption was temporarily suspended by the Colorado Legislature in 2003; the tax break was scheduled to return for assessment year 2006, payable in 2007. The state's unemployment rate was 4.2%. The West Virginia teachers' strike in 2018 inspired teachers in other states, including Colorado, to take similar action. Colorado has significant hydrocarbon resources. According to the Energy Information Administration, Colorado hosts seven of the nation's hundred largest natural gas fields and two of its hundred largest oil fields. Conventional and unconventional natural gas output from several Colorado basins typically accounts for more than five percent of annual U.S. natural gas production. Colorado's oil shale deposits hold an estimated quantity of oil nearly as large as the entire world's proven oil reserves; the economic viability of the oil shale, however, has not been demonstrated. Substantial deposits of bituminous, subbituminous, and lignite coal are found in the state. Uranium mining in Colorado goes back to 1872, when pitchblende ore was taken from gold mines near Central City, Colorado. The Colorado uranium industry has seen booms and busts, but continues to this day. Not counting byproduct uranium from phosphate, Colorado is considered to have the third-largest uranium reserves of any U.S. state, behind Wyoming and New Mexico. Uranium price increases from 2001 to 2007 prompted a number of companies to revive uranium mining in Colorado, but price drops and financing problems in late 2008 forced these companies to cancel or scale back their uranium-mining projects. Currently, there are no uranium-producing mines in Colorado. Colorado's high Rocky Mountain ridges and eastern plains offer wind power potential, and geologic activity in the mountain areas provides potential for geothermal power development.
Much of the state is sunny and could produce solar power. Major rivers flowing from the Rocky Mountains offer hydroelectric power resources. Corn grown in the flat eastern part of the state offers potential resources for ethanol production. Colorado's primary mode of transportation (in terms of passengers) is its highway system. Interstate 25 (I-25) is the primary north–south highway in the state, connecting Pueblo, Colorado Springs, Denver, and Fort Collins, and extending north to Wyoming and south to New Mexico. I-70 is the primary east–west corridor; it connects Grand Junction and the mountain communities with Denver, and enters Utah and Kansas. The state is home to a network of US and Colorado highways that provide access to all principal areas of the state, though many smaller communities are connected to this network only via county roads. Denver International Airport (DIA) is the fifth-busiest domestic U.S. airport and the twentieth-busiest airport in the world by passenger traffic. DIA handles by far the largest volume of commercial air traffic in Colorado and is the busiest U.S. hub airport between Chicago and the Pacific coast, making Denver the most important airport for connecting passenger traffic in the western United States. Extensive public transportation bus services are offered both intra-city and inter-city. The Regional Transportation District (RTD) operates the popular RTD Bus & Rail transit system in the Denver Metropolitan Area; the RTD rail system has 170 light-rail vehicles serving its light-rail network. Amtrak operates two passenger rail lines in Colorado, the California Zephyr and the Southwest Chief. Colorado's contribution to world railroad history was forged principally by the Denver and Rio Grande Western Railroad, which began in 1870 and wrote the book on mountain railroading. In 1988, the "Rio Grande" acquired the Southern Pacific Railroad, but the merged company took the better-known Southern Pacific name; both were controlled by Philip Anschutz.
On September 11, 1996, Anschutz sold the combined company to the Union Pacific Railroad, creating the largest railroad network in the United States. The Anschutz sale was partly in response to the earlier merger of Burlington Northern and Santa Fe, which had formed the large Burlington Northern and Santa Fe Railway (BNSF), Union Pacific's principal competitor in western U.S. railroading. Both Union Pacific and BNSF have extensive freight operations in Colorado. Colorado's freight railroad network consists of 2,688 miles of Class I trackage. It is integral to the U.S. economy, being a critical artery for the movement of energy, agriculture, mining, and industrial commodities as well as general freight and manufactured products between the East and Midwest and the Pacific coast states. In August 2014, Colorado began to issue driver licenses to residents of Colorado who were not lawfully present in the United States. In September 2014, KCNC reported that 524 non-citizens had been issued the Colorado driver licenses that are normally issued to U.S. citizens living in Colorado. Like the federal government and all other U.S. states, Colorado's state constitution provides for three branches of government: the legislative, the executive, and the judicial branches. The Governor of Colorado heads the state's executive branch. The current governor is Jared Polis, a Democrat. Colorado's other statewide elected executive officers are the Lieutenant Governor of Colorado (elected on a ticket with the Governor), the Secretary of State of Colorado, the Colorado State Treasurer, and the Attorney General of Colorado, all of whom serve four-year terms. The seven-member Colorado Supreme Court is the highest judicial court in the state. The state legislative body is the Colorado General Assembly, which is made up of two houses, the House of Representatives and the Senate. The House has 65 members and the Senate has 35. The Democratic Party holds a 19-to-16 majority in the Senate and a 41-to-24 majority in the House.
Most Coloradans are native to other states (nearly 60% according to the 2000 census). This is illustrated by the fact that the state did not have a native-born governor from 1975, when John David Vanderhoof left office, until 2007, when Bill Ritter took office; Ritter's election the previous year marked the first electoral victory for a native-born Coloradan in a gubernatorial race since 1958 (Vanderhoof had ascended from the lieutenant governorship when John Arthur Love was given a position in Richard Nixon's administration in 1973). In the 2016 election, the Democratic Party won Colorado's Electoral College votes. Tax is collected by the Colorado Department of Revenue. The State of Colorado is divided into 64 counties. Counties are important units of government in Colorado, since the state has no secondary civil subdivisions such as townships. Two of these counties, the City and County of Denver and the City and County of Broomfield, have consolidated city and county governments. Nine Colorado counties have a population in excess of 250,000 each, while eight Colorado counties have a population of less than 2,500 each. The ten most populous Colorado counties are all located in the Front Range Urban Corridor. The United States Office of Management and Budget (OMB) has defined one combined statistical area (CSA), seven Metropolitan Statistical Areas (MSAs), and seven Micropolitan Statistical Areas (μSAs) in the state of Colorado. The most populous of the 14 Core Based Statistical Areas in Colorado is the Denver-Aurora-Broomfield, CO Metropolitan Statistical Area, which had an estimated population of 2,888,227 on July 1, 2017, an increase of 13.55% since the 2010 United States Census. The more extensive Denver-Aurora-Boulder, CO Combined Statistical Area had an estimated population of 3,515,374 on July 1, 2017, an increase of 13.73% since the 2010 United States Census.
The most populous extended metropolitan region in the Rocky Mountain Region is the Front Range Urban Corridor along the northeast face of the Southern Rocky Mountains. This region, with Denver at its center, had an estimated population of 4,495,181 on July 1, 2012, an increase of 3.73% since the 2010 United States Census. The state of Colorado currently has 271 active incorporated municipalities, including 196 towns, 73 cities, and two consolidated city and county governments. Colorado municipalities operate under one of five types of municipal governing authority: the state has one town with a territorial charter, 160 statutory towns, 12 statutory cities, 96 home rule municipalities (61 cities and 35 towns), and two consolidated city and county governments. In addition to its 271 municipalities, Colorado has 187 unincorporated Census Designated Places and many other small communities. The state of Colorado has more than 3,000 districts with taxing authority. These districts may provide schools, law enforcement, fire protection, water, sewage, drainage, irrigation, transportation, recreation, infrastructure, cultural facilities, business support, redevelopment, or other services. Some of these districts have authority to levy sales tax as well as property tax and use fees. This has led to a hodgepodge of sales tax and property tax rates in Colorado; there are some street intersections in Colorado with a different sales tax rate on each corner, sometimes substantially different. Colorado is considered a swing state or (more recently) a blue state in both state and federal elections. Coloradans have elected 17 Democrats and 12 Republicans to the governorship in the last 100 years. In presidential politics, Colorado was considered a reliably Republican state during the post-World War II era, voting for the Democratic candidate only in 1948, 1964, and 1992.
However, it became a competitive swing state by the turn of the century, and voted consecutively for Democrat Barack Obama in 2008 and 2012, as well as Democrat Hillary Clinton in 2016. Colorado politics has the contrast of conservative cities such as Colorado Springs and liberal cities such as Boulder and Denver. Democrats are strongest in metropolitan Denver, the college towns of Fort Collins and Boulder, southern Colorado (including Pueblo), and a few western ski resort counties. The Republicans are strongest in the Eastern Plains, Colorado Springs, Greeley, and far Western Colorado near Grand Junction. Colorado is represented by two United States senators and by seven representatives in the United States House of Representatives. On the November 8, 1932 ballot, Colorado approved the repeal of alcohol prohibition more than a year before the Twenty-first Amendment to the United States Constitution was ratified. In 2012, voters amended the state constitution protecting "personal use" of marijuana for adults, establishing a framework to regulate cannabis in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014. On May 29, 2019, Governor Jared Polis signed House Bill 1124 immediately prohibiting law enforcement officials in Colorado from holding undocumented immigrants solely on the basis of a request from U.S. Immigration and Customs Enforcement. The state also contains a number of former military installations and outposts. Colorado is home to 4 national parks, 8 national monuments, 2 national recreation areas, 2 national historic sites, 3 national historic trails, a national scenic trail, 11 national forests, 2 national grasslands, 42 national wilderness areas, 2 national conservation areas, 8 national wildlife refuges, 44 state parks, 307 state wildlife areas, and numerous other scenic, historic, and recreational areas.
https://en.wikipedia.org/wiki?curid=5399
Carboniferous The Carboniferous is a geologic period and system that spans 60 million years from the end of the Devonian Period 358.9 million years ago (Mya) to the beginning of the Permian Period, 298.9 Mya. The name "Carboniferous" means "coal-bearing" and derives from the Latin words "carbō" ("coal") and "ferō" ("I bear, I carry"), and was coined by geologists William Conybeare and William Phillips in 1822. Based on a study of the British rock succession, it was the first of the modern 'system' names to be employed, and reflects the fact that many coal beds were formed globally during that time. The Carboniferous is often treated in North America as two geological periods, the earlier Mississippian and the later Pennsylvanian. Terrestrial animal life was well established by the Carboniferous period. Amphibians were the dominant land vertebrates, of which one branch would eventually evolve into amniotes, the first solely terrestrial vertebrates. Arthropods were also very common, and many (such as "Meganeura") were much larger than those of today. Vast swaths of forest covered the land, which would eventually be laid down and become the coal beds characteristic of the Carboniferous stratigraphy evident today. The atmospheric content of oxygen also reached its highest levels in geological history during the period, 35% compared with 21% today, allowing terrestrial invertebrates to evolve to great size. The latter half of the period experienced glaciations, low sea level, and mountain building as the continents collided to form Pangaea. A minor marine and terrestrial extinction event, the Carboniferous rainforest collapse, occurred at the end of the period, caused by climate change. In the United States the Carboniferous is usually broken into Mississippian (earlier) and Pennsylvanian (later) subperiods. 
The Mississippian is about twice as long as the Pennsylvanian, but due to the large thickness of coal-bearing deposits with Pennsylvanian ages in Europe and North America, the two subperiods were long thought to have been more or less equal in duration. In Europe the Lower Carboniferous sub-system is known as the Dinantian, comprising the Tournaisian and Visean Series, dated at 362.5-332.9 Ma, and the Upper Carboniferous sub-system is known as the Silesian, comprising the Namurian, Westphalian, and Stephanian Series, dated at 332.9-298.9 Ma. The Silesian is roughly contemporaneous with the late Mississippian Serpukhovian plus the Pennsylvanian. In Britain the Dinantian is traditionally known as the Carboniferous Limestone, the Namurian as the Millstone Grit, and the Westphalian as the Coal Measures and Pennant Sandstone. The International Commission on Stratigraphy (ICS) faunal stages, from youngest to oldest, together with some of their regional subdivisions, are: Late Pennsylvanian: Gzhelian (most recent); Late Pennsylvanian: Kasimovian; Middle Pennsylvanian: Moscovian; Early Pennsylvanian: Bashkirian/Morrowan; Late Mississippian: Serpukhovian; Middle Mississippian: Visean; Early Mississippian: Tournaisian (oldest). A global drop in sea level at the end of the Devonian reversed early in the Carboniferous; this created the widespread inland seas and the carbonate deposition of the Mississippian. There was also a drop in south polar temperatures; southern Gondwanaland was glaciated throughout the period, though it is uncertain if the ice sheets were a holdover from the Devonian or not. These conditions apparently had little effect in the deep tropics, where lush swamps, later to become coal, flourished to within 30 degrees of the northernmost glaciers. In the mid-Carboniferous, a drop in sea level precipitated a major marine extinction, one that hit crinoids and ammonites especially hard. 
This sea level drop and the associated unconformity in North America separate the Mississippian subperiod from the Pennsylvanian subperiod. This happened about 323 million years ago, at the onset of the Permo-Carboniferous Glaciation. The Carboniferous was a time of active mountain-building as the supercontinent Pangaea came together. The southern continents remained tied together in the supercontinent Gondwana, which collided with North America–Europe (Laurussia) along the present line of eastern North America. This continental collision resulted in the Hercynian orogeny in Europe, and the Alleghenian orogeny in North America; it also extended the newly uplifted Appalachians southwestward as the Ouachita Mountains. In the same time frame, much of the present eastern Eurasian plate welded itself to Europe along the line of the Ural Mountains. Most of the Mesozoic supercontinent of Pangaea was now assembled, although North China (which would collide in the Latest Carboniferous), and South China continents were still separated from Laurasia. The Late Carboniferous Pangaea was shaped like an "O." There were two major oceans in the Carboniferous—Panthalassa and Paleo-Tethys, which was inside the "O" in the Carboniferous Pangaea. Other minor oceans were shrinking and eventually closed: the Rheic Ocean (closed by the assembly of South and North America), the small, shallow Ural Ocean (which was closed by the collision of the Baltica and Siberia continents, creating the Ural Mountains) and the Proto-Tethys Ocean (closed by the collision of North China with Siberia/Kazakhstania). Average global temperatures in the Early Carboniferous Period were high: approximately 20 °C (68 °F). However, cooling during the Middle Carboniferous reduced average global temperatures to about 12 °C (54 °F). The lack of growth rings in fossilized trees suggests an absence of seasons, consistent with a tropical climate. 
Glaciations in Gondwana, triggered by Gondwana's southward movement, continued into the Permian, and because of the lack of clear markers and breaks, the deposits of this glacial period are often referred to as Permo-Carboniferous in age. The cooling and drying of the climate led to the Carboniferous Rainforest Collapse (CRC) during the late Carboniferous. Tropical rainforests fragmented and then were eventually devastated by climate change. Carboniferous rocks in Europe and eastern North America largely consist of a repeated sequence of limestone, sandstone, shale and coal beds. In North America, the early Carboniferous is largely marine limestone, which accounts for the division of the Carboniferous into two periods in North American schemes. The Carboniferous coal beds provided much of the fuel for power generation during the Industrial Revolution and are still of great economic importance. The large coal deposits of the Carboniferous may owe their existence primarily to two factors. The first of these is the appearance of wood tissue and bark-bearing trees. The evolution of the wood fiber lignin and the bark-sealing, waxy substance suberin variously opposed decay organisms so effectively that dead materials accumulated long enough to fossilise on a large scale. The second factor was the lower sea levels that occurred during the Carboniferous as compared to the preceding Devonian period. This promoted the development of extensive lowland swamps and forests in North America and Europe. Based on a genetic analysis of mushroom fungi, it was proposed that large quantities of wood were buried during this period because animals and decomposing bacteria and fungi had not yet evolved enzymes that could effectively digest the resistant phenolic lignin polymers and waxy suberin polymers. These researchers suggest that fungi that could break those substances down effectively only became dominant towards the end of the period, making subsequent coal formation much rarer. 
The Carboniferous trees made extensive use of lignin. They had bark-to-wood ratios of 8 to 1, and even as high as 20 to 1. This compares to modern values of less than 1 to 4. This bark, which must have been used as support as well as protection, probably had 38% to 58% lignin. Lignin is insoluble, too large to pass through cell walls, too heterogeneous for specific enzymes, and toxic, so that few organisms other than Basidiomycetes fungi can degrade it. To oxidize it requires an atmosphere of greater than 5% oxygen, or compounds such as peroxides. It can linger in soil for thousands of years and its toxic breakdown products inhibit decay of other substances. One possible reason for its high percentages in plants at that time was to provide protection from insects in a world containing very effective insect herbivores (but nothing remotely as effective as modern plant-eating insects) and probably many fewer protective toxins produced naturally by plants than exist today. As a result, undegraded carbon built up, resulting in the extensive burial of biologically fixed carbon, leading to an increase in oxygen levels in the atmosphere; estimates place the peak oxygen content as high as 35%, as compared to 21% today. This oxygen level may have increased wildfire activity. It also may have promoted gigantism of insects and amphibians — creatures that have been constrained in size by respiratory systems that are limited in their physiological ability to transport and distribute oxygen at the lower atmospheric concentrations that have since been available. In eastern North America, marine beds are more common in the older part of the period than the later part and are almost entirely absent by the late Carboniferous. More diverse geology existed elsewhere, of course. Marine life is especially rich in crinoids and other echinoderms. Brachiopods were abundant. Trilobites became quite uncommon. On land, large and diverse plant populations existed. 
Land vertebrates included large amphibians. Early Carboniferous land plants, some of which were preserved in coal balls, were very similar to those of the preceding Late Devonian, but new groups also appeared at this time. The main Early Carboniferous plants were the Equisetales (horse-tails), Sphenophyllales (scrambling plants), Lycopodiales (club mosses), Lepidodendrales (scale trees), Filicales (ferns), Medullosales (informally included in the "seed ferns", an artificial assemblage of a number of early gymnosperm groups) and the Cordaitales. These continued to dominate throughout the period, but during the late Carboniferous, several other groups, Cycadophyta (cycads), the Callistophytales (another group of "seed ferns"), and the Voltziales (related to and sometimes included under the conifers), appeared. The Carboniferous lycophytes of the order Lepidodendrales, which are cousins (but not ancestors) of the tiny club-moss of today, were huge trees with trunks 30 meters high and up to 1.5 meters in diameter. These included "Lepidodendron" (with its cone called Lepidostrobus), "Anabathra", "Lepidophloios" and "Sigillaria". The roots of several of these forms are known as Stigmaria. Unlike present-day trees, their secondary growth took place in the cortex, which also provided stability, instead of the xylem. The Cladoxylopsids were large trees that were ancestors of ferns, first arising in the Carboniferous. The fronds of some Carboniferous ferns are almost identical with those of living species. Probably many species were epiphytic. Fossil ferns and "seed ferns" include "Pecopteris", "Cyclopteris", "Neuropteris", "Alethopteris", and "Sphenopteris"; "Megaphyton" and "Caulopteris" were tree ferns. The Equisetales included the common giant form "Calamites", with a trunk diameter of 30 to and a height of up to . "Sphenophyllum" was a slender climbing plant with whorls of leaves, which was probably related both to the calamites and the lycopods. 
"Cordaites", a tall plant (6 to over 30 meters) with strap-like leaves, was related to the cycads and conifers; the catkin-like reproductive organs, which bore ovules/seeds, are called "Cardiocarpus". These plants were thought to live in swamps. True coniferous trees ("Walchia", of the order Voltziales) appear later in the Carboniferous, and preferred higher drier ground. In the oceans the marine invertebrate groups are the Foraminifera, corals, Bryozoa, Ostracoda, brachiopods, ammonoids, hederelloids, microconchids and echinoderms (especially crinoids). For the first time foraminifera take a prominent part in the marine faunas. The large spindle-shaped genus Fusulina and its relatives were abundant in what is now Russia, China, Japan, and North America; other important genera include "Valvulina", "Endothyra", "Archaediscus", and "Saccammina" (the latter common in Britain and Belgium). Some Carboniferous genera are still extant. The microscopic shells of radiolarians are found in cherts of this age in the Culm of Devon and Cornwall, and in Russia, Germany and elsewhere. Sponges are known from spicules and anchor ropes, and include various forms such as the Calcispongea "Cotyliscus" and "Girtycoelia", the demosponge "Chaetetes", and the genus of unusual colonial glass sponges "Titusvillia". Both reef-building and solitary corals diversify and flourish; these include both rugose (for example, "Caninia", "Corwenia", "Neozaphrentis"), heterocorals, and tabulate (for example, "Chladochonus", "Michelinia") forms. Conularids were well represented by "Conularia". Bryozoa are abundant in some regions; the fenestellids including "Fenestella", "Polypora", and "Archimedes", so named because it is in the shape of an Archimedean screw. Brachiopods are also abundant; they include productids, some of which (for example, "Gigantoproductus") reached very large (for brachiopods) size and had very thick shells, while others like "Chonetes" were more conservative in form. 
Athyridids, spiriferids, rhynchonellids, and terebratulids are also very common. Inarticulate forms include "Discina" and "Crania". Some species and genera had a very wide distribution with only minor variations. Annelids such as "Serpulites" are common fossils in some horizons. Among the mollusca, the bivalves continue to increase in numbers and importance. Typical genera include "Aviculopecten", "Posidonomya", "Nucula", "Carbonicola", "Edmondia", and "Modiola". Gastropods are also numerous, including the genera "Murchisonia", "Euomphalus", "Naticopsis". Nautiloid cephalopods are represented by tightly coiled nautilids, with straight-shelled and curved-shelled forms becoming increasingly rare. Goniatite ammonoids are common. Trilobites are rarer than in previous periods, on a steady trend towards extinction, represented only by the proetid group. Ostracoda, a class of crustaceans, were abundant as representatives of the meiobenthos; genera included "Amphissites", "Bairdia", "Beyrichiopsis", "Cavellina", "Coryellina", "Cribroconcha", "Hollinella", "Kirkbya", "Knoxiella", and "Libumella". Amongst the echinoderms, the crinoids were the most numerous. Dense submarine thickets of long-stemmed crinoids appear to have flourished in shallow seas, and their remains were consolidated into thick beds of rock. Prominent genera include "Cyathocrinus", "Woodocrinus", and "Actinocrinus". Echinoids such as "Archaeocidaris" and "Palaeechinus" were also present. The blastoids, which included the Pentreinitidae and Codasteridae and superficially resembled crinoids in the possession of long stalks attached to the seabed, attain their maximum development at this time. Freshwater Carboniferous invertebrates include various bivalve molluscs that lived in brackish or fresh water, such as "Anthraconaia", "Naiadites", and "Carbonicola"; diverse crustaceans such as "Candona", "Carbonita", "Darwinula", "Estheria", "Acanthocaris", "Dithyrocaris", and "Anthrapalaemon". 
The eurypterids were also diverse, and are represented by such genera as "Adelophthalmus", "Megarachne" (originally misinterpreted as a giant spider, hence its name) and the specialised very large "Hibbertopterus". Many of these were amphibious. Frequently a temporary return of marine conditions resulted in marine or brackish water genera such as "Lingula", "Orbiculoidea", and "Productus" being found in the thin beds known as marine bands. Fossil remains of air-breathing insects, myriapods and arachnids are known from the late Carboniferous, but so far not from the early Carboniferous. Their diversity when they do appear, however, shows that these arthropods were both well developed and numerous. The first true priapulids also appeared during this period. The arthropods' large size can be attributed to the moistness of the environment (mostly swampy fern forests) and the fact that the oxygen concentration in the Earth's atmosphere in the Carboniferous was much higher than today. This required less effort for respiration and allowed arthropods to grow larger, with the millipede-like "Arthropleura" being the largest-known land invertebrate of all time. Among the insect groups are the huge predatory Protodonata (griffinflies), among which was "Meganeura", a giant dragonfly-like insect and the largest flying insect ever to roam the planet. Further groups are the Syntonopterodea (relatives of present-day mayflies), the abundant and often large sap-sucking Palaeodictyopteroidea, the diverse herbivorous Protorthoptera, and numerous basal Dictyoptera (ancestors of cockroaches). Many insects have been obtained from the coalfields of Saarbrücken and Commentry, and from the hollow trunks of fossil trees in Nova Scotia. Some British coalfields have yielded good specimens: "Archaeoptitus", from the Derbyshire coalfield, had a notably wide spread of wing; some specimens ("Brodia") still exhibit traces of brilliant wing colors. 
In the Nova Scotian tree trunks land snails ("Archaeozonites", "Dendropupa") have been found. Many fish inhabited the Carboniferous seas, predominantly elasmobranchs (sharks and their relatives). These included some, like "Psammodus", with crushing pavement-like teeth adapted for grinding the shells of brachiopods, crustaceans, and other marine organisms. Other sharks had piercing teeth, such as the Symmoriida; some, the petalodonts, had peculiar cycloid cutting teeth. Most of the sharks were marine, but the Xenacanthida invaded fresh waters of the coal swamps. Among the bony fish, the Palaeonisciformes found in coastal waters also appear to have migrated to rivers. Sarcopterygian fish were also prominent, and one group, the Rhizodonts, reached very large size. Most species of Carboniferous marine fish have been described largely from teeth, fin spines and dermal ossicles, with smaller freshwater fish preserved whole. Freshwater fish were abundant, and include the genera "Ctenodus", "Uronemus", "Acanthodes", "Cheirodus", and "Gyracanthus". Sharks (especially the "Stethacanthids") underwent a major evolutionary radiation during the Carboniferous. It is believed that this evolutionary radiation occurred because the decline of the placoderms at the end of the Devonian period caused many environmental niches to become unoccupied and allowed new organisms to evolve and fill these niches. As a result of this evolutionary radiation, Carboniferous sharks assumed a wide variety of bizarre shapes, including "Stethacanthus", which possessed a flat brush-like dorsal fin with a patch of denticles on its top. The unusual fin of "Stethacanthus" may have been used in mating rituals. Carboniferous amphibians were diverse and common by the middle of the period, more so than they are today; some were as long as 6 meters, and those fully terrestrial as adults had scaly skin. They included a number of basal tetrapod groups classified in early books under the Labyrinthodontia. 
These had long bodies, a head covered with bony plates and generally weak or undeveloped limbs. The largest were over 2 meters long. They were accompanied by an assemblage of smaller amphibians included under the Lepospondyli, often only about long. Some Carboniferous amphibians were aquatic and lived in rivers ("Loxomma", "Eogyrinus", "Proterogyrinus"); others may have been semi-aquatic ("Ophiderpeton", "Amphibamus", "Hyloplesion") or terrestrial ("Dendrerpeton", "Tuditanus", "Anthracosaurus"). The Carboniferous Rainforest Collapse slowed the evolution of amphibians, which could not survive as well in the cooler, drier conditions. Reptiles, however, prospered due to specific key adaptations. One of the greatest evolutionary innovations of the Carboniferous was the amniote egg, which allowed the laying of eggs in a dry environment, allowing for the further exploitation of the land by certain tetrapods. These included the earliest sauropsid reptiles ("Hylonomus"), and the earliest known synapsid ("Archaeothyris"). These small lizard-like animals quickly gave rise to many descendants, including reptiles, birds, and mammals. Reptiles underwent a major evolutionary radiation in response to the drier climate that preceded the rainforest collapse. By the end of the Carboniferous period, amniotes had already diversified into a number of groups, including protorothyridids, captorhinids, araeoscelids, and several families of pelycosaurs. Because plants and animals were growing in size and abundance in this time (for example, "Lepidodendron"), land fungi diversified further. Marine fungi still occupied the oceans. All modern classes of fungi were present in the Late Carboniferous (Pennsylvanian Epoch). The first 15 million years of the Carboniferous had very limited terrestrial fossils. This gap in the fossil record is called Romer's gap after the American palaeontologist Alfred Romer. 
While it has long been debated whether the gap is a result of poor fossilisation or relates to an actual event, recent work indicates the gap period saw a drop in atmospheric oxygen levels, suggesting some sort of ecological collapse. The gap saw the demise of the Devonian fish-like ichthyostegalian labyrinthodonts, and the rise of the more advanced temnospondyl and reptiliomorphan amphibians that so typify the Carboniferous terrestrial vertebrate fauna. Before the end of the Carboniferous Period, an extinction event occurred. On land this event is referred to as the Carboniferous Rainforest Collapse (CRC). Vast tropical rainforests collapsed suddenly as the climate changed from hot and humid to cool and arid. This was likely caused by intense glaciation and a drop in sea levels. The new climatic conditions were not favorable to the growth of rainforest and the animals within them. Rainforests shrank into isolated islands, surrounded by seasonally dry habitats. Towering lycopsid forests with a heterogeneous mixture of vegetation were replaced by much less diverse tree-fern dominated flora. Amphibians, the dominant vertebrates at the time, fared poorly through this event with large losses in biodiversity; reptiles continued to diversify due to key adaptations that let them survive in the drier habitat, specifically the hard-shelled egg and scales, both of which retain water better than their amphibian counterparts.
https://en.wikipedia.org/wiki?curid=5401
Comoros The Comoros, officially the Union of the Comoros (Comorian: "Udzima wa Komori"), is an island country in the Indian Ocean located at the northern end of the Mozambique Channel off the eastern coast of Africa. It shares maritime borders with Madagascar and the French region of Mayotte to the southeast, Tanzania to the northwest, Mozambique to the west, and the Seychelles to the northeast. The capital and largest city in Comoros is Moroni. The religion of the majority of the population, and the official state religion, is Sunni Islam. As a member of the Arab League, the Comoros is the only country in the Arab world which is entirely in the Southern Hemisphere. It is also a member state of the African Union, the Organisation internationale de la Francophonie, the Organisation of Islamic Cooperation, and the Indian Ocean Commission. The Union of the Comoros has three official languages—Comorian, French, and Arabic. Excluding the contested island of Mayotte, the Comoros is the fourth-smallest African nation by area. The most recent population estimate, excluding Mayotte, dates from 2018. As a nation formed at a crossroads of different civilisations, the archipelago is noted for its diverse culture and history. The sovereign state is an archipelago consisting of three major islands and numerous smaller islands, all in the volcanic Comoro Islands. The major islands are commonly known by their French names: northwestern-most Grande Comore (Ngazidja), Mohéli (Mwali), and Anjouan (Ndzuani). In addition, the country has a claim on a fourth major island, southeastern-most Mayotte (Maore), though Mayotte voted against independence from France in 1974, has never been administered by an independent Comoros government, and continues to be administered by France (currently as an overseas department). France has vetoed United Nations Security Council resolutions that would affirm Comorian sovereignty over the island. 
In addition, Mayotte became an overseas department and a region of France in 2011 following a referendum passed overwhelmingly. The archipelago was first settled by Bantu speakers who came from East Africa, by Arabs, and by Austronesians. It then became part of the French colonial empire during the 19th century, before becoming independent in 1975. Since declaring independence, the country has experienced more than 20 coups d'état or attempted coups, with various heads of state assassinated. Along with this constant political instability, the population of the Comoros lives with the worst income inequality of any nation, with a Gini coefficient over 60%, while also ranking in the worst quartile on the Human Development Index. About half the population lived below the international poverty line of US$1.25 a day. The French insular region of Mayotte, which is the most prosperous territory in the Mozambique Channel, is a major destination for migrants from the independent islands. The name "Comoros" derives from the Arabic word "qamar" ("moon"). According to mythology, a jinni (spirit) dropped a jewel, which formed a great circular inferno. This became the Karthala volcano, which created the island of Grande Comore. King Solomon is also said to have visited the island. The first attested human inhabitants of the Comoro Islands are now thought to have been Austronesian settlers travelling by boat from islands in Southeast Asia. These people arrived no later than the eighth century AD, the date of the earliest known archaeological site, found on Mayotte, although settlement beginning as early as the first century has been postulated. Subsequent settlers came from the east coast of Africa, the Arabian Peninsula and the Persian Gulf, the Malay Archipelago, and Madagascar. Bantu-speaking settlers were present on the islands from the beginnings of settlement, probably brought to the islands as slaves. Development of the Comoros is divided into phases. 
The earliest reliably recorded phase is the Dembeni phase (eighth to tenth centuries), during which there were several small settlements on each island. From the eleventh to the fifteenth centuries, trade with the island of Madagascar and merchants from the Swahili coast and the Middle East flourished, more villages were founded and existing villages grew. Many Comorians can trace their genealogies to ancestors from the Arabian peninsula, particularly Hadhramaut, who arrived during this period. According to legend, in 632, upon hearing of Islam, islanders are said to have dispatched an emissary, Mtswa-Mwindza, to Mecca—but by the time he arrived there, the Prophet Muhammad had died. Nonetheless, after a stay in Mecca, he returned to Ngazidja and led the gradual conversion of his islanders to Islam. Among the earliest accounts of East Africa, the works of Al-Masudi describe early Islamic trade routes, and how the coast and islands were frequently visited by Muslims including Persian and Arab merchants and sailors in search of coral, ambergris, ivory, tortoiseshell, gold and slaves. They also brought Islam to the people of the Zanj including the Comoros. As the importance of the Comoros grew along the East African coast, both small and large mosques were constructed. The Comoros are part of the Swahili cultural and economic complex and the islands became a major hub of trade and an important location in a network of trading towns that included Kilwa, in present-day Tanzania, Sofala (an outlet for Zimbabwean gold), in Mozambique, and Mombasa in Kenya. The Portuguese arrived in the Indian Ocean at the end of the 15th century and the first Portuguese visit to the islands seems to have been that of Vasco da Gama's second fleet in 1503. For much of the 16th century the islands provided provisions to the Portuguese fort at Mozambique and although there was no formal attempt by the Portuguese crown to take possession, a number of Portuguese traders settled. 
By the end of the 16th century the local rulers were beginning to push back and, with the support of the Omani Sultan Saif bin Sultan, they began to defeat the Dutch and the Portuguese. His successor Said bin Sultan increased Omani Arab influence in the region, moving his administration to nearby Zanzibar, which came under Omani rule. Nevertheless, the Comoros remained independent, and although the three smaller islands were usually politically unified, the largest island, Ngazidja, was divided into a number of autonomous kingdoms ("ntsi"). By the time Europeans showed interest in the Comoros, the islanders were well placed to take advantage of their needs, initially supplying ships on the route to India, particularly the English, and, later, slaves to the plantation islands in the Mascarenes. In the last decade of the 18th century, Malagasy warriors, mostly Betsimisaraka and Sakalava, started raiding the Comoros for slaves and the islands were devastated as crops were destroyed and the people were slaughtered, taken into captivity or fled to the African mainland: it is said that by the time the raids finally ended in the second decade of the 19th century only one man remained on Mwali. The islands were repopulated by slaves from the mainland, who were traded to the French in Mayotte and the Mascarenes. On the Comoros, it was estimated in 1865 that as much as 40% of the population consisted of slaves. France first established colonial rule in the Comoros by taking possession of Mayotte in 1841 when the Sakalava usurper sultan "Andriantsoly" (also known as Tsy Levalo) signed the Treaty of April 1841, which ceded the island to the French authorities. 
Meanwhile, Ndzuani (or Johanna, as it was known to the British) continued to serve as a way station for English merchants sailing to India and the Far East, as well as American whalers, although the British gradually abandoned it following their possession of Mauritius in 1814, and by the time the Suez Canal opened in 1869 there was no longer any significant supply trade at Ndzuani. Local commodities exported by the Comoros were, in addition to slaves, coconuts, timber, cattle and tortoiseshell. French settlers, French-owned companies, and wealthy Arab merchants established a plantation-based economy that used about one-third of the land for export crops. After its annexation, France converted Mayotte into a sugar plantation colony. The other islands were soon transformed as well, and the major crops of ylang-ylang, vanilla, cloves, perfume plants, coffee, cocoa beans, and sisal were introduced. In 1886, Mwali was placed under French protection by its Sultan Mardjani Abdou Cheikh. That same year, despite having no authority to do so, Sultan Said Ali of Bambao, one of the sultanates on Ngazidja, placed the island under French protection in exchange for French support of his claim to the entire island, which he retained until his abdication in 1910. In 1908 the islands were unified under a single administration ("Colonie de Mayotte et dépendances") and placed under the authority of the French colonial governor general of Madagascar. In 1909, Sultan Said Muhamed of Ndzuani abdicated in favour of French rule. In 1912 the colony and the protectorates were abolished and the islands became a province of the colony of Madagascar. Agreement was reached with France in 1973 for the Comoros to become independent in 1978, despite the deputies of Mayotte voting for increased integration with France. A referendum was held on all four of the islands. Three voted for independence by large margins, while Mayotte voted against and remains under French administration. 
On 6 July 1975, however, the Comorian parliament passed a unilateral resolution declaring independence. Ahmed Abdallah proclaimed the independence of the Comorian State ("État comorien"; دولة القمر) and became its first president. The next 30 years were a period of political turmoil. On 3 August 1975, less than one month after independence, president Ahmed Abdallah was removed from office in an armed coup and replaced with United National Front of the Comoros (FNUK) member Prince Said Mohamed Jaffar. Months later, in January 1976, Jaffar was ousted in favour of his Minister of Defense Ali Soilih. The population of Mayotte voted against independence from France in three referenda during this period. The first, held on all the islands on 22 December 1974, won 63.8% support for maintaining ties with France on Mayotte; the second, held in February 1976, confirmed that vote with an overwhelming 99.4%, while the third, in April 1976, confirmed that the people of Mayotte wished to remain a French territory. The three remaining islands, ruled by President Soilih, instituted a number of socialist and isolationist policies that soon strained relations with France. On 13 May 1978, Bob Denard returned to overthrow President Soilih and reinstate Abdallah with the support of the French, Rhodesian and South African governments. During Soilih's brief rule, he faced seven additional coup attempts until he was finally forced from office and killed. In contrast to Soilih, Abdallah's presidency was marked by authoritarian rule and increased adherence to traditional Islam and the country was renamed the Federal Islamic Republic of the Comoros ("République Fédérale Islamique des Comores"; جمهورية القمر الإتحادية الإسلامية). Abdallah continued as president until 1989 when, fearing a probable coup d'état, he signed a decree ordering the Presidential Guard, led by Bob Denard, to disarm the armed forces. 
Shortly after the signing of the decree, Abdallah was allegedly shot dead in his office by a disgruntled military officer, though later sources claim an antitank missile was launched into his bedroom and killed him. Although Denard was also injured, it is suspected that Abdallah's killer was a soldier under his command. A few days later, Bob Denard was evacuated to South Africa by French paratroopers. Said Mohamed Djohar, Soilih's older half-brother, then became president, and served until September 1995, when Bob Denard returned and attempted another coup. This time France intervened with paratroopers and forced Denard to surrender. The French removed Djohar to Réunion, and the Paris-backed Mohamed Taki Abdoulkarim became president by election. He led the country from 1996, during a time of labour crises, government suppression, and secessionist conflicts, until his death in November 1998. He was succeeded by Interim President Tadjidine Ben Said Massounde. The islands of Ndzuani and Mwali declared their independence from the Comoros in 1997, in an attempt to restore French rule. But France rejected their request, leading to bloody confrontations between federal troops and rebels. In April 1999, Colonel Azali Assoumani, Army Chief of Staff, seized power in a bloodless coup, overthrowing the Interim President Massounde, citing weak leadership in the face of the crisis. This was the Comoros' 18th coup or attempted coup d'état since independence in 1975. Azali failed to consolidate power and reestablish control over the islands, which was the subject of international criticism. The African Union, under the auspices of President Thabo Mbeki of South Africa, imposed sanctions on Ndzuani to help broker negotiations and effect reconciliation. 
Under the terms of the Fomboni Accords, signed in December 2001 by the leaders of all three islands, the official name of the country was changed to the Union of the Comoros; the new state was to be highly decentralised, and the central union government would devolve most powers to the new island governments, each led by a president. The Union president, although elected by national elections, would be chosen in rotation from each of the islands every five years. Azali stepped down in 2002 to run in the democratic election of the President of the Comoros, which he won. Under ongoing international pressure, as a military ruler who had originally come to power by force and was not always democratic while in office, Azali led the Comoros through constitutional changes that enabled new elections. A "Loi des compétences" law passed in early 2005 defines the responsibilities of each governmental body and is in the process of implementation. The elections in 2006 were won by Ahmed Abdallah Mohamed Sambi, a Sunni Muslim cleric nicknamed the "Ayatollah" for his time spent studying Islam in Iran. Azali honoured the election results, thus allowing the first peaceful and democratic exchange of power for the archipelago. Colonel Mohammed Bacar, a French-trained former gendarme elected President of Ndzuani in 2001, refused to step down at the end of his five-year mandate. He staged a vote in June 2007 to confirm his leadership that was rejected as illegal by the Comoros federal government and the African Union. On 25 March 2008 hundreds of soldiers from the African Union and the Comoros seized rebel-held Ndzuani, a move generally welcomed by the population: there have been reports of hundreds, if not thousands, of people tortured during Bacar's tenure. Some rebels were killed and injured, but there are no official figures. At least 11 civilians were wounded. Some officials were imprisoned. 
Bacar fled in a speedboat to the French Indian Ocean territory of Mayotte to seek asylum. Anti-French protests followed in the Comoros (see 2008 invasion of Anjouan). Bacar was eventually granted asylum in Benin. Since independence from France, the Comoros has experienced more than 20 coups or attempted coups. Following elections in late 2010, former Vice-President Ikililou Dhoinine was inaugurated as president on 26 May 2011. A member of the ruling party, Dhoinine was supported in the election by the incumbent President Ahmed Abdallah Mohamed Sambi. Dhoinine, a pharmacist by training, is the first President of the Comoros from the island of Mwali. Following the 2016 elections, Azali Assoumani, from Ngazidja, became president for a third term. In 2018 Azali held a referendum on constitutional reform that would permit a president to serve two terms. The amendments passed, although the vote was widely contested and boycotted by the opposition, and in April 2019, amid widespread opposition, Azali was re-elected president to serve the first of potentially two five-year terms. The Comoros is formed by Ngazidja (Grande Comore), Mwali (Mohéli) and Ndzuani (Anjouan), three major islands in the Comoros Archipelago, as well as many minor islets. The islands are officially known by their Comorian-language names, though international sources still use their French names (given in parentheses above). The capital and largest city, Moroni, is located on Ngazidja. The archipelago is situated in the Indian Ocean, in the Mozambique Channel, between the African coast (nearest to Mozambique and Tanzania) and Madagascar, with no land borders. At , it is one of the smallest countries in the world. The Comoros also has claim to of territorial seas. The interiors of the islands vary from steep mountains to low hills. Ngazidja is the largest of the Comoros Archipelago, approximately equal in area to the other islands combined. It is also the most recent island, and therefore has rocky soil. 
The island's two volcanoes, Karthala (active) and La Grille (dormant), and the lack of good harbours are distinctive characteristics of its terrain. Mwali, with its capital at Fomboni, is the smallest of the four major islands. Ndzuani, whose capital is Mutsamudu, has a distinctive triangular shape caused by three mountain chains – Shisiwani, Nioumakele and Jimilime – emanating from a central peak, Mount Ntingui (). The islands of the Comoros Archipelago were formed by volcanic activity. Mount Karthala, an active shield volcano located on Ngazidja, is the country's highest point, at . It contains the Comoros' largest patch of disappearing rainforest. Karthala is currently one of the most active volcanoes in the world, with a minor eruption in May 2006, and prior eruptions as recently as April 2005 and 1991. In the 2005 eruption, which lasted from 17 to 19 April, 40,000 citizens were evacuated, and the crater lake in the volcano's caldera was destroyed. The Comoros also lays claim to the "Îles Éparses" or "Îles éparses de l'océan indien" (Scattered Islands in the Indian Ocean) – the Glorioso Islands, comprising Grande Glorieuse, Île du Lys, Wreck Rock, South Rock, Verte Rocks (three islets) and three unnamed islets – one of France's overseas districts. The Glorioso Islands were administered by the colonial Comoros before 1975, and are therefore sometimes considered part of the Comoros Archipelago. Banc du Geyser, a former island in the Comoros Archipelago, now submerged, is geographically located in the "Îles Éparses", but was annexed by Madagascar in 1976 as an unclaimed territory. The Comoros and France each still view the Banc du Geyser as part of the Glorioso Islands and, thus, part of their respective exclusive economic zones. The climate is generally tropical and mild, and the two major seasons are distinguishable by their raininess. 
The temperature reaches an average of in March, the hottest month in the rainy season (called kashkazi/kaskazi [meaning north monsoon], which runs from December to April), and an average low of in the cool, dry season (kusi (meaning south monsoon), which proceeds from May to November). The islands are rarely subject to cyclones. The Comoros constitute an ecoregion in their own right, Comoros forests. In December 1952 a specimen of the coelacanth fish was re-discovered off the Comoros coast. The 66 million-year-old species was thought to have been long extinct until its first recorded appearance in 1938 off the South African coast. Between 1938 and 1975, 84 specimens were caught and recorded. Politics of the Comoros takes place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. The Constitution of the Union of the Comoros was ratified by referendum on 23 December 2001, and the islands' constitutions and executives were elected in the following months. It had previously been considered a military dictatorship, and the transfer of power from Azali Assoumani to Ahmed Abdallah Mohamed Sambi in May 2006 was a watershed moment as it was the first peaceful transfer in Comorian history. Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The preamble of the constitution guarantees an Islamic inspiration in governance, a commitment to human rights, and several specific enumerated rights, democracy, "a common destiny" for all Comorians. Each of the islands (according to Title II of the Constitution) has a great amount of autonomy in the Union, including having their own constitutions (or Fundamental Law), president, and Parliament. The presidency and Assembly of the Union are distinct from each of the islands' governments. The presidency of the Union rotates between the islands. 
Despite widespread misgivings about the durability of the system of presidential rotation, Ngazidja currently holds the presidency under the rotation, with Azali as President of the Union; Ndzuani is in theory due to provide the next president. The Comorian legal system rests on Islamic law, an inherited French (Napoleonic Code) legal code, and customary law ("mila na ntsi"). Village elders, kadis or civilian courts settle most disputes. The judiciary is independent of the legislative and the executive. The Supreme Court acts as a Constitutional Council in resolving constitutional questions and supervising presidential elections. As High Court of Justice, the Supreme Court also arbitrates in cases where the government is accused of malpractice. The Supreme Court consists of two members selected by the president, two elected by the Federal Assembly, and one by the council of each island. Around 80 percent of the central government's annual budget is spent on the country's complex electoral system, which provides for a semi-autonomous government and president for each of the three islands and a rotating presidency for the overarching Union government. A referendum took place on 16 May 2009 to decide whether to cut down the government's unwieldy political bureaucracy. 52.7% of those eligible voted, and 93.8% of votes were cast in approval of the referendum. Following the implementation of the changes, each island's president became a governor and the ministers became councillors. In November 1975, the Comoros became the 143rd member of the United Nations. The new nation was defined as comprising the entire archipelago, although the citizens of Mayotte chose to become French citizens and keep their island as a French territory. 
The Comoros has repeatedly pressed its claim to Mayotte before the United Nations General Assembly, which adopted a series of resolutions under the caption "Question of the Comorian Island of Mayotte", opining that Mayotte belongs to the Comoros under the principle that the territorial integrity of colonial territories should be preserved upon independence. As a practical matter, however, these resolutions have little effect and there is no foreseeable likelihood that Mayotte will become "de facto" part of the Comoros without its people's consent. More recently, the Assembly has maintained this item on its agenda but deferred it from year to year without taking action. Other bodies, including the Organization of African Unity, the Movement of Non-Aligned Countries and the Organisation of Islamic Cooperation, have similarly questioned French sovereignty over Mayotte. To close the debate and to avoid being integrated by force into the Union of the Comoros, the population of Mayotte overwhelmingly chose to become an overseas department and a region of France in a 2009 referendum. The new status took effect on 31 March 2011, and Mayotte was recognised as an outermost region of the European Union on 1 January 2014. This decision legally integrates Mayotte into the French Republic. The Comoros is a member of the African Union, the Arab League, the European Development Fund, the World Bank, the International Monetary Fund, the Indian Ocean Commission and the African Development Bank. On 10 April 2008, the Comoros became the 179th nation to accept the Kyoto Protocol to the United Nations Framework Convention on Climate Change. The Comoros signed the UN treaty on the Prohibition of Nuclear Weapons. In May 2013 the Union of the Comoros became known for filing a referral to the Office of the Prosecutor of the International Criminal Court (ICC) regarding the events of "the 31 May 2010 Israeli raid on the Humanitarian Aid Flotilla bound for [the] Gaza Strip". 
In November 2014 the ICC Prosecutor eventually decided that the events did constitute war crimes but did not meet the gravity standards required to bring the case before the ICC. The emigration rate of skilled workers was about 21.2% in 2000. The military resources of the Comoros consist of a small standing army and a 500-member police force, as well as a 500-member defence force. A defence treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains a small presence of senior officers in the Comoros at the government's request. France also maintains a small maritime base and a Foreign Legion Detachment (DLEM) on Mayotte. Once the new government was installed in May–June 2011, an expert mission from UNREC (Lomé) came to the Comoros and produced guidelines for the elaboration of a national security policy, which were discussed by different actors, notably the national defence authorities and civil society. By the end of the programme in March 2012, a normative framework agreed upon by all entities involved in SSR was to have been established; this would then have to be adopted by Parliament and implemented by the authorities. Both male and female same-sex sexual acts are illegal in the Comoros, punishable by up to five years' imprisonment. The level of poverty in the Comoros is high, but "judging by the international poverty threshold of $1.9 per person per day, only two out of every ten Comorians could be classified as poor, a rate that places the Comoros ahead of other low-income countries and 30 percentage points ahead of other countries in Sub-Saharan Africa." Poverty declined by about 10% between 2014 and 2018, and living conditions generally improved. Economic inequality remains widespread, with a major gap between rural and urban areas. 
Remittances from the sizable Comorian diaspora form a substantial part of the country's GDP and have contributed to decreases in poverty and increases in living standards. According to the ILO's ILOSTAT statistical database, between 1991 and 2019 the unemployment rate as a percent of the total labor force ranged from 4.38% to 4.3%. An October 2005 paper by the Comoros Ministry of Planning and Regional Development, however, reported that the "registered unemployment rate is 14.3 percent, distributed very unevenly among and within the islands, but with marked incidence in urban areas." In 2019, more than 56% of the labor force was employed in agriculture, with 29% employed in industry and 14% employed in services. The islands' agricultural sector is based on the export of spices, including vanilla, cinnamon, and cloves, and is thus susceptible to price fluctuations in the volatile world commodity market for these goods. The Comoros is the world's largest producer of ylang-ylang, a plant whose extracted essential oil is used in the perfume industry; some 80% of the world's supply comes from the Comoros. High population densities, as much as 1000 per square kilometre in the densest agricultural zones, in what is still a mostly rural, agricultural economy, may lead to an environmental crisis in the near future, especially considering the high rate of population growth. In 2004 the Comoros' real GDP growth was a low 1.9% and real GDP per capita continued to decline. These declines are explained by factors including declining investment, drops in consumption, rising inflation, and an increase in the trade imbalance due in part to lowered cash crop prices, especially vanilla. Fiscal policy is constrained by erratic fiscal revenues, a bloated civil service wage bill, and an external debt that is far above the HIPC threshold. Membership in the franc zone, the main anchor of stability, has nevertheless helped contain pressures on domestic prices. 
The Comoros has an inadequate transportation system, a young and rapidly increasing population, and few natural resources. The low educational level of the labour force contributes to a subsistence level of economic activity, high unemployment, and a heavy dependence on foreign grants and technical assistance. Agriculture contributes 40% to GDP and provides most of the exports. The government is struggling to upgrade education and technical training, to privatise commercial and industrial enterprises, to improve health services, to diversify exports, to promote tourism, and to reduce the high population growth rate. The Comoros is a member of the Organization for the Harmonization of Business Law in Africa (OHADA). With fewer than a million people, the Comoros is one of the least populous countries in the world, but is also one of the most densely populated, with an average of . In 2001, 34% of the population was considered urban, but that share is expected to grow, since rural population growth is negative while overall population growth remains relatively high. Almost half the population of the Comoros is under the age of 15. Major urban centres include Moroni, Mitsamihuli, Fumbuni, Mutsamudu, Domoni, and Fomboni. There are between 200,000 and 350,000 Comorians in France. The people of the Comoros are mostly of African-Arab origin. Minorities include Malagasy (Christian) and Indian (mostly Ismaili) communities, and there are recent immigrants of Chinese origin on Grande Comore (especially in Moroni). Although most French left after independence in 1975, a small Creole community, descended from settlers from France, Madagascar and Réunion, lives in the Comoros. The most common languages in the Comoros are the Comorian languages, collectively known as "Shikomori". They are related to Swahili, and one of the four variants (Shingazidja, Shimwali, Shindzuani and Shimaore) is spoken on each of the four islands. 
Arabic and Latin scripts are both used, Arabic being the more widely used, and an official orthography has recently been developed for the Latin script. Arabic and French are also official languages, along with Comorian. Arabic is widely known as a second language, being the language of Quranic teaching. French is the administrative language and the language of most non-Quranic formal education. Sunni Islam is the dominant religion, followed by as much as 99% of the population. Comoros is the only Muslim-majority country in Southern Africa and the second southernmost Muslim-majority territory after the French territory of Mayotte. A minority of the population of the Comoros are Christian; both Catholic and Protestant denominations are represented, and most Malagasy residents are also Christian. Expatriates from metropolitan France are mostly Catholic. There are 15 physicians per 100,000 people. The fertility rate was 4.7 per adult woman in 2004. Life expectancy at birth is 67 for females and 62 for males. Almost all children attend Quranic schools, usually before regular schooling, although increasingly in tandem with it. Children are taught about the Qur'an, memorise it, and learn the Arabic script. Most parents prefer their children to attend Quranic schools before moving on to the French-based schooling system. Although the state sector is plagued by a lack of resources, and the teachers by unpaid salaries, there are numerous private and community schools of relatively good standard. The national curriculum, apart from a few years during the revolutionary period immediately post-independence, has been very much based on the French system, both because resources are French and because most Comorians hope to go on to further education in France. There have recently been moves to Comorianise the syllabus and to integrate the two systems, the formal and the Quranic schools, into one, thus moving away from the secular educational system inherited from France. 
Pre-colonization education systems in Comoros focused on necessary skills such as agriculture, caring for livestock and completing household tasks. Religious education also taught children the virtues of Islam. The education system underwent a transformation during colonization in the early 1900s, which brought secular education based on the French system. This was mainly for children of the elite. After Comoros gained independence in 1975, the education system changed again. Funding for teachers' salaries was lost, and many went on strike. Thus, the public education system was not functioning between 1997 and 2001. Since gaining independence, the education system has also undergone democratization, and options exist for those other than the elite. Enrollment has also grown. In 2000, 44.2% of children ages 5 to 14 years were attending school. There is a general lack of facilities, equipment, qualified teachers, textbooks and other resources. Salaries for teachers are often so far in arrears that many refuse to work. Prior to 2000, students seeking a university education had to attend school outside the country; in the early 2000s, however, a university was established in the country. This served to help economic growth and to fight the "flight" of many educated people who were not returning to the islands to work. About fifty-seven percent of the population is literate in the Latin script, while more than 90% are literate in the Arabic script. Comorian has no native script, but both Arabic and Latin scripts are used. Traditionally, women on Ndzuani wear red and white patterned garments called "shiromani", while on Ngazidja and Mwali colourful shawls called "leso" are worn. Many women apply a paste of ground sandalwood and coral called "msinzano" to their faces. Traditional male clothing is a long white shirt known as a "nkandu", and a bonnet called a "kofia". 
There are two types of marriages in Comoros, the little marriage (known as "Mna daho" on Ngazidja) and the customary marriage (known as "ada" on Ngazidja, "harusi" on the other islands). The little marriage is a simple legal marriage. It is small, intimate, and inexpensive, and the bride's dowry is nominal. A man may undertake a number of "Mna daho" marriages in his lifetime, often concurrently, while a woman may undertake fewer; but both men and women will usually only undertake one "ada", or grand marriage, and this must generally be within the village. The hallmarks of the grand marriage are dazzling gold jewelry, two weeks of celebration and an enormous bridal dowry. Although the expenses are shared between both families as well as with a wider social circle, an ada wedding on Ngazidja can cost up to €50,000 (74,000 US dollars). Many couples take a lifetime to save for their ada, and it is not uncommon for a marriage to be attended by a couple's adult children. The "ada" marriage marks a man's transition in the Ngazidja age system from youth to elder. His status in the social hierarchy greatly increases, and he will henceforth be entitled to speak in public and participate in the political process, both in his village and more widely across the island. He will be entitled to display his status by wearing a "mharuma", a type of shawl, across his shoulders, and he can enter the mosque by the door reserved for elders, and sit at the front. A woman's status also changes, although less formally, as she becomes a "mother" and moves into her own house. The system is less formalised on the other islands, but the marriage is nevertheless a significant and costly event across the archipelago. The "ada" is often criticized because of its great expense, but at the same time it is a source of social cohesion and the main reason why migrants in France and elsewhere continue to send money home. 
Increasingly, marriages are also being taxed for the purposes of village development, so the effects are not entirely negative. Comorian society has a bilateral descent system. Lineage membership and the inheritance of immovable goods (land, housing) are matrilineal, passed in the maternal line, similar to many Bantu peoples who are also matrilineal, while other goods and patronymics are passed in the male line. There are, however, differences between the islands, the matrilineal element being stronger on Ngazidja. Twarab music, imported from Zanzibar in the early 20th century, remains the most influential genre on the islands and is popular at "ada" marriages. There are two daily national newspapers published in the Comoros, the government-owned "Al-Watwan" and the privately owned "La Gazette des Comores", both published in Moroni. There are a number of smaller newsletters published on an irregular basis, as well as a variety of news websites. The government-owned ORTC provides national radio and television service, and there are a number of privately owned stations broadcasting locally in the larger towns. This article incorporates text from the Library of Congress Country Studies, which is in the public domain.
https://en.wikipedia.org/wiki?curid=5403
China China (), officially the People's Republic of China (PRC; ), is a country in East Asia. It is the world's most populous country, with a population of around 1.4 billion in 2019. Covering approximately 9.6 million square kilometres (3.7 million square miles), it is the world's third or fourth-largest country by area. Governed by the Communist Party of China, the state exercises jurisdiction over 22 provinces, five autonomous regions, four direct-controlled municipalities (Beijing, Tianjin, Shanghai, and Chongqing), and the special administrative regions of Hong Kong and Macau. China emerged as one of the world's first civilizations, in the fertile basin of the Yellow River in the North China Plain. For millennia, China's political system was based on hereditary monarchies, or dynasties, beginning with the semi-mythical Xia dynasty in the 21st century BCE. Since then, China has expanded, fractured, and re-unified numerous times. In the 3rd century BCE, the Qin reunited core China and established the first Chinese empire. The succeeding Han dynasty, which ruled from 206 BCE until 220 CE, saw some of the most advanced technology of the time, including papermaking and the compass, along with agricultural and medical improvements. The invention of gunpowder and movable type in the Tang dynasty (618–907) and Northern Song (960–1127) completed the Four Great Inventions. Tang culture spread widely in Asia, as the new Silk Route brought traders as far as Mesopotamia and the Horn of Africa. Dynastic rule ended in 1912 with the Xinhai Revolution, when the Republic of China (ROC) replaced the Qing dynasty. The subsequent Chinese Civil War resulted in a division of territory in 1949, when the Communist Party of China led by Mao Zedong established the People's Republic of China on mainland China while the Kuomintang-led nationalist government retreated to the island of Taiwan, where it governed until 1996, when Taiwan transitioned to democracy. 
China is a unitary one-party socialist republic and is one of the few existing socialist states. Political dissidents and human rights groups have denounced and criticized the Chinese government for widespread human rights abuses, including suppression of religious and ethnic minorities, censorship and mass surveillance, and cracking down on protests such as the 1989 Tiananmen Square protests. Since the introduction of economic reforms in 1978, China's economy has been one of the world's fastest-growing, with annual growth rates consistently above 6 percent. According to the World Bank, China's GDP grew from $150 billion in 1978 to $12.24 trillion by 2017. Since 2010, China has been the world's second-largest economy by nominal GDP, and since 2014, the largest economy in the world by PPP. China is also the world's largest exporter and second-largest importer of goods. China is a recognized nuclear weapons state and has the world's largest standing army, the People's Liberation Army, and the second-largest defense budget. The PRC has been a permanent member of the United Nations Security Council since replacing the ROC in 1971. China has been characterized as an emerging superpower, mainly because of its massive population, large and rapidly growing economy, and powerful military. The word "China" has been used in English since the 16th century; however, it was not a word used by the Chinese themselves during this period. Its origin has been traced through Portuguese, Malay, and Persian back to the Sanskrit word "Cīna", used in ancient India. "China" appears in Richard Eden's 1555 translation of the 1516 journal of the Portuguese explorer Duarte Barbosa. Barbosa's usage was derived from Persian "Chīn" (), which was in turn derived from Sanskrit "Cīna" (). "Cīna" was first used in early Hindu scripture, including the "Mahābhārata" (5th century ) and the "Laws of Manu" (2nd century ). 
In 1655, Martino Martini suggested that the word China is derived ultimately from the name of the Qin dynasty (221–206 BCE). Although this derivation is still given in various sources, the origin of the Sanskrit word is a matter of debate, according to the "Oxford English Dictionary". Alternative suggestions include the names for Yelang and the Jing or Chu state. The official name of the modern state is the "People's Republic of China". The shorter form is "China", from "Zhongguo", a compound of "zhōng" ("central") and "guó" ("state"), a term which developed under the Western Zhou dynasty in reference to its royal demesne. It was then applied to the area around Luoyi (present-day Luoyang) during the Eastern Zhou and then to China's Central Plain before being used as an occasional synonym for the state under the Qing. It was often used as a cultural concept to distinguish the Huaxia people from perceived "barbarians". The name "Zhongguo" is also translated as "Middle Kingdom" in English. Archaeological evidence suggests that early hominids inhabited China between 2.24 million and 250,000 years ago. The hominid fossils of Peking Man, a "Homo erectus" who used fire, were discovered in a cave at Zhoukoudian near Beijing; they have been dated to between 680,000 and 780,000 years ago. The fossilized teeth of "Homo sapiens" (dated to 125,000–80,000 years ago) have been discovered in Fuyan Cave in Dao County, Hunan. Chinese proto-writing existed in Jiahu around 7000 BCE, Damaidi around 6000 BCE, Dadiwan from 5800–5400 BCE, and Banpo dating from the 5th millennium BCE. Some scholars have suggested that the Jiahu symbols (7th millennium BCE) constituted the earliest Chinese writing system. According to Chinese tradition, the first dynasty was the Xia, which emerged around 2100 BCE. The Xia dynasty marked the beginning of China's political system based on hereditary monarchies, or dynasties, which lasted for millennia. 
The dynasty was considered mythical by historians until scientific excavations found early Bronze Age sites at Erlitou, Henan in 1959. It remains unclear whether these sites are the remains of the Xia dynasty or of another culture from the same period. The succeeding Shang dynasty is the earliest to be confirmed by contemporary records. The Shang ruled the plain of the Yellow River in eastern China from the 17th to the 11th century BCE. Their oracle bone script represents the oldest form of Chinese writing yet found, and is a direct ancestor of modern Chinese characters. The Shang was conquered by the Zhou, who ruled between the 11th and 5th centuries BCE, though centralized authority was slowly eroded by feudal warlords. The principalities that emerged from the weakened Zhou no longer fully obeyed the Zhou king and continually waged war with each other in the 300-year Spring and Autumn period. By the time of the Warring States period of the 5th–3rd centuries BCE, there were only seven powerful states left. The Warring States period ended in 221 BCE after the state of Qin conquered the other six kingdoms, reunited China and established the dominant order of autocracy. King Zheng of Qin proclaimed himself the First Emperor of the Qin dynasty. He enacted Qin's legalist reforms throughout China, notably the forced standardization of Chinese characters, measurements, road widths (i.e., cart axle length), and currency. His dynasty also conquered the Yue tribes in Guangxi, Guangdong, and Vietnam. The Qin dynasty lasted only fifteen years, falling soon after the First Emperor's death, as his harsh authoritarian policies led to widespread rebellion. Following a widespread civil war during which the imperial library at Xianyang was burned, the Han dynasty emerged to rule China between 206 BCE and 220 CE, creating a cultural identity among its populace still remembered in the ethnonym of the Han Chinese. 
The Han expanded the empire's territory considerably, with military campaigns reaching Central Asia, Mongolia, South Korea, and Yunnan, and the recovery of Guangdong and northern Vietnam from Nanyue. Han involvement in Central Asia and Sogdia helped establish the land route of the Silk Road, replacing the earlier path over the Himalayas to India. Han China gradually became the largest economy of the ancient world. Despite the Han's initial decentralization and the official abandonment of the Qin philosophy of Legalism in favor of Confucianism, Qin's legalist institutions and policies continued to be employed by the Han government and its successors. After the end of the Han dynasty, a period of strife known as the Three Kingdoms followed, whose central figures were later immortalized in one of the Four Classics of Chinese literature. At its end, Wei was swiftly overthrown by the Jin dynasty. The Jin fell to civil war upon the ascension of a developmentally disabled emperor; the Five Barbarians then invaded and ruled northern China as the Sixteen States. The Xianbei unified them as the Northern Wei, whose Emperor Xiaowen reversed his predecessors' apartheid policies and enforced a drastic sinification on his subjects, largely integrating them into Chinese culture. In the south, the general Liu Yu secured the abdication of the Jin in favor of the Liu Song. The various successors of these states became known as the Northern and Southern dynasties, and the two areas were finally reunited by the Sui in 581. The Sui restored Han Chinese rule throughout China, reformed its agriculture, economy and imperial examination system, constructed the Grand Canal, and patronized Buddhism. However, they fell quickly when their conscription for public works and a failed war in northern Korea provoked widespread unrest. Under the succeeding Tang and Song dynasties, the Chinese economy, technology, and culture entered a golden age. 
The Tang Empire regained control of the Western Regions and the Silk Road, which brought traders as far as Mesopotamia and the Horn of Africa, and made the capital Chang'an a cosmopolitan urban center. However, it was devastated and weakened by the An Shi Rebellion in the 8th century. In 907, the Tang disintegrated completely when the local military governors became ungovernable. The Song dynasty ended the separatist situation in 960, leading to a balance of power between the Song and the Khitan Liao. The Song was the first government in world history to issue paper money and the first Chinese polity to establish a permanent standing navy, which was supported by a developed shipbuilding industry along with the sea trade. Between the 10th and 11th centuries, the population of China doubled in size to around 100 million people, mostly because of the expansion of rice cultivation in central and southern China and the production of abundant food surpluses. The Song dynasty also saw a revival of Confucianism, in response to the growth of Buddhism during the Tang, and a flourishing of philosophy and the arts, as landscape art and porcelain were brought to new levels of maturity and complexity. However, the military weakness of the Song army was observed by the Jurchen Jin dynasty. In 1127, Emperor Huizong of Song and the capital Bianjing were captured during the Jin–Song Wars. The remnants of the Song retreated to southern China. The 13th century brought the Mongol conquest of China. In 1271, the Mongol leader Kublai Khan established the Yuan dynasty; the Yuan conquered the last remnant of the Song dynasty in 1279. Before the Mongol invasion, the population of Song China was 120 million; this was reduced to 60 million by the time of the census in 1300. A peasant named Zhu Yuanzhang overthrew the Yuan in 1368 and founded the Ming dynasty as the Hongwu Emperor. 
Under the Ming dynasty, China enjoyed another golden age, developing one of the strongest navies in the world and a rich and prosperous economy amid a flourishing of art and culture. It was during this period that admiral Zheng He led the Ming treasure voyages throughout the Indian Ocean, reaching as far as East Africa. In the early years of the Ming dynasty, China's capital was moved from Nanjing to Beijing. With the budding of capitalism, philosophers such as Wang Yangming further critiqued and expanded Neo-Confucianism with concepts of individualism and equality of the four occupations. The scholar-official stratum became a supporting force of industry and commerce in the tax boycott movements, which, together with famines and the defense against the Japanese invasions of Korea (1592–1598) and the Manchu invasions, led to an exhausted treasury. In 1644, Beijing was captured by a coalition of peasant rebel forces led by Li Zicheng. The Chongzhen Emperor committed suicide when the city fell. The Manchu Qing dynasty, then allied with Ming dynasty general Wu Sangui, overthrew Li's short-lived Shun dynasty and subsequently seized control of Beijing, which became the new capital of the Qing dynasty. The Qing dynasty, which lasted from 1644 until 1912, was the last imperial dynasty of China. Its conquest of the Ming (1618–1683) cost 25 million lives, and the economy of China shrank drastically. After the Southern Ming ended, the further conquest of the Dzungar Khanate added Mongolia, Tibet and Xinjiang to the empire. The centralized autocracy was strengthened to crack down on anti-Qing sentiment with the policy of valuing agriculture and restraining commerce, the "Haijin" ("sea ban"), and ideological control as represented by the literary inquisition, causing social and technological stagnation. In the mid-19th century, the dynasty experienced Western imperialism in the Opium Wars with Britain and France. 
China was forced to pay compensation, open treaty ports, allow extraterritoriality for foreign nationals, and cede Hong Kong to the British under the 1842 Treaty of Nanking, the first of the Unequal Treaties. The First Sino-Japanese War (1894–95) resulted in Qing China's loss of influence in the Korean Peninsula, as well as the cession of Taiwan to Japan. The Qing dynasty also began experiencing internal unrest in which tens of millions of people died, especially in the White Lotus Rebellion, the failed Taiping Rebellion that ravaged southern China in the 1850s and 1860s, and the Dungan Revolt (1862–77) in the northwest. The initial success of the Self-Strengthening Movement of the 1860s was frustrated by a series of military defeats in the 1880s and 1890s. In the 19th century, the great Chinese diaspora began. Losses from emigration were compounded by conflicts and catastrophes such as the Northern Chinese Famine of 1876–79, in which between 9 and 13 million people died. The Guangxu Emperor drafted a reform plan in 1898 to establish a modern constitutional monarchy, but these plans were thwarted by the Empress Dowager Cixi. The ill-fated anti-foreign Boxer Rebellion of 1899–1901 further weakened the dynasty. Although Cixi sponsored a program of reforms, the Xinhai Revolution of 1911–12 brought an end to the Qing dynasty and established the Republic of China. On 1 January 1912, the Republic of China was established, and Sun Yat-sen of the Kuomintang (the KMT or Nationalist Party) was proclaimed provisional president. However, the presidency was later given to Yuan Shikai, a former Qing general who in 1915 proclaimed himself Emperor of China. In the face of popular condemnation and opposition from his own Beiyang Army, he was forced to abdicate and re-establish the republic. After Yuan Shikai's death in 1916, China was politically fragmented. 
Its Beijing-based government was internationally recognized but virtually powerless; regional warlords controlled most of its territory. In the late 1920s, the Kuomintang, under Chiang Kai-shek, the then Principal of the Republic of China Military Academy, was able to reunify the country under its own control with a series of deft military and political manoeuvrings, known collectively as the Northern Expedition. The Kuomintang moved the nation's capital to Nanjing and implemented "political tutelage", an intermediate stage of political development outlined in Sun Yat-sen's San-min program for transforming China into a modern democratic state. The political division in China made it difficult for Chiang to battle the communist People's Liberation Army (PLA), against whom the Kuomintang had been warring since 1927 in the Chinese Civil War. This war continued successfully for the Kuomintang, especially after the PLA retreated in the Long March, until Japanese aggression and the 1936 Xi'an Incident forced Chiang to confront Imperial Japan. The Second Sino-Japanese War (1937–1945), a theater of World War II, forced an uneasy alliance between the Kuomintang and the PLA. Japanese forces committed numerous war atrocities against the civilian population; in all, as many as 20 million Chinese civilians died. An estimated 40,000 to 300,000 Chinese were massacred in the city of Nanjing alone during the Japanese occupation. During the war, China, along with the UK, the US, and the Soviet Union, were referred to as "trusteeship of the powerful" and were recognized as the Allied "Big Four" in the Declaration by United Nations. Along with the other three great powers, China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war. After the surrender of Japan in 1945, Taiwan, including the Pescadores, was returned to Chinese control. China emerged victorious but war-ravaged and financially drained. 
The continued distrust between the Kuomintang and the Communists led to the resumption of civil war. Constitutional rule was established in 1947, but because of the ongoing unrest, many provisions of the ROC constitution were never implemented in mainland China. Major combat in the Chinese Civil War ended in 1949 with the Communist Party in control of most of mainland China, and the Kuomintang retreating offshore, reducing its territory to only Taiwan, Hainan, and their surrounding islands. On 21 September 1949, Communist Party Chairman Mao Zedong proclaimed the establishment of the People's Republic of China with a speech at the First Plenary Session of the Chinese People's Political Consultative Conference followed by a public proclamation and celebration in Tiananmen Square. In 1950, the People's Liberation Army captured Hainan from the ROC and incorporated Tibet. However, remaining Kuomintang forces continued to wage an insurgency in western China throughout the 1950s. The regime consolidated its popularity among the peasants through land reform, which included the execution of between 1 and 2 million landlords. China developed an independent industrial system and its own nuclear weapons. The Chinese population increased from 550 million in 1950 to 900 million in 1974. However, the Great Leap Forward, an idealistic massive reform project, resulted in an estimated 15 to 35 million deaths between 1958 and 1961, mostly from starvation. In 1966, Mao and his allies launched the Cultural Revolution, sparking a decade of political recrimination and social upheaval that lasted until Mao's death in 1976. In October 1971, the PRC replaced the Republic in the United Nations, and took its seat as a permanent member of the Security Council. After Mao's death, the Gang of Four was quickly arrested and held responsible for the excesses of the Cultural Revolution. Deng Xiaoping took power in 1978, and instituted significant economic reforms. 
The Party loosened governmental control over citizens' personal lives, and the communes were gradually disbanded in favor of work contracted to households. This marked China's transition from a planned economy to a mixed economy with an increasingly open-market environment. China adopted its current constitution on 4 December 1982. In 1989, the violent suppression of student protests in Tiananmen Square brought sanctions against the Chinese government from various countries. Jiang Zemin, Li Peng and Zhu Rongji led the nation in the 1990s. Under their administration, China's economic performance pulled an estimated 150 million peasants out of poverty and sustained an average annual gross domestic product growth rate of 11.2%. The country joined the World Trade Organization in 2001, and maintained its high rate of economic growth under Hu Jintao and Wen Jiabao's leadership in the 2000s. However, the growth also severely impacted the country's resources and environment and caused major social displacement. Xi Jinping has ruled since 2012 and has pursued large-scale efforts to reform China's economy (which has suffered from structural instabilities and slowing growth), and has also reformed the one-child policy and prison system, as well as instituting a vast anti-corruption crackdown. In December 2019, a novel coronavirus broke out in Wuhan, Hubei, causing the disease subsequently named COVID-19; it then spread to other provinces of China and eventually around the world, resulting in the worldwide COVID-19 pandemic. China's landscape is vast and diverse, ranging from the Gobi and Taklamakan Deserts in the arid north to the subtropical forests in the wetter south. The Himalaya, Karakoram, Pamir and Tian Shan mountain ranges separate China from much of South and Central Asia. The Yangtze and Yellow Rivers, the third- and sixth-longest in the world, respectively, run from the Tibetan Plateau to the densely populated eastern seaboard. 
China's long Pacific coastline is bounded by the Bohai, Yellow, East China and South China seas. China connects through the Kazakh border to the Eurasian Steppe, which has been an artery of communication between East and West since the Neolithic through the Steppe route – the ancestor of the terrestrial Silk Road(s). The territory of China lies between latitudes 18° and 54° N, and longitudes 73° and 135° E. The geographical center of China is marked by the Center of the Country Monument. China's landscapes vary significantly across its vast territory. In the east, along the shores of the Yellow Sea and the East China Sea, there are extensive and densely populated alluvial plains, while on the edges of the Inner Mongolian plateau in the north, broad grasslands predominate. Southern China is dominated by hills and low mountain ranges, while the central-east hosts the deltas of China's two major rivers, the Yellow River and the Yangtze River. Other major rivers include the Xi, Mekong, Brahmaputra and Amur. To the west sit major mountain ranges, most notably the Himalayas. High plateaus feature among the more arid landscapes of the north, such as the Taklamakan and the Gobi Desert. The world's highest point, Mount Everest (8,848 m), lies on the Sino-Nepalese border. The country's lowest point, and the world's third-lowest, is the dried lake bed of Ayding Lake (−154 m) in the Turpan Depression. China's climate is mainly dominated by dry seasons and wet monsoons, which lead to pronounced temperature differences between winter and summer. In the winter, northern winds coming from high-latitude areas are cold and dry; in summer, southern winds from coastal areas at lower latitudes are warm and moist. The climate in China differs from region to region because of the country's highly complex topography. A major environmental issue in China is the continued expansion of its deserts, particularly the Gobi Desert. 
Although barrier tree lines planted since the 1970s have reduced the frequency of sandstorms, prolonged drought and poor agricultural practices have resulted in dust storms plaguing northern China each spring, which then spread to other parts of East Asia, including Japan and Korea. China's environmental watchdog, SEPA, stated in 2007 that China was losing land to desertification every year. Water quality, erosion, and pollution control have become important issues in China's relations with other countries. Melting glaciers in the Himalayas could potentially lead to water shortages for hundreds of millions of people. China's climate is highly suitable for agriculture, and the country has been the world's largest producer of rice, wheat, tomatoes, eggplant (brinjal), grapes, watermelon, and spinach. China is one of 17 megadiverse countries, lying in two of the world's major biogeographic realms: the Palearctic and the Indomalayan. By one measure, China has over 34,687 species of animals and vascular plants, making it the third-most biodiverse country in the world, after Brazil and Colombia. The country signed the Rio de Janeiro Convention on Biological Diversity on 11 June 1992, and became a party to the convention on 5 January 1993. It later produced a National Biodiversity Strategy and Action Plan, with one revision that was received by the convention on 21 September 2010. China is home to at least 551 species of mammals (the third-highest such number in the world), 1,221 species of birds (eighth), 424 species of reptiles (seventh) and 333 species of amphibians (seventh). Wildlife in China shares habitat with, and bears acute pressure from, the world's largest population of "Homo sapiens". At least 840 animal species are threatened, vulnerable or in danger of local extinction in China, due mainly to human activity such as habitat destruction, pollution and poaching for food, fur and ingredients for traditional Chinese medicine. 
Endangered wildlife is protected by law, and the country has over 2,349 nature reserves, covering a total area of 149.95 million hectares, or 15 percent of China's total land area. The Baiji was confirmed extinct on 12 December 2006. China has over 32,000 species of vascular plants, and is home to a variety of forest types. Cold coniferous forests predominate in the north of the country, supporting animal species such as moose and the Asian black bear, along with over 120 bird species. The understorey of moist conifer forests may contain thickets of bamboo. In higher montane stands of juniper and yew, the bamboo is replaced by rhododendrons. Subtropical forests, which predominate in central and southern China, support as many as 146,000 species of flora. Tropical and seasonal rainforests, though confined to Yunnan and Hainan Island, contain a quarter of all the animal and plant species found in China. China has over 10,000 recorded species of fungi, and of them, nearly 6,000 are higher fungi. In recent decades, China has suffered from severe environmental deterioration and pollution. While regulations such as the 1979 Environmental Protection Law are fairly stringent, they are poorly enforced, as they are frequently disregarded by local communities and government officials in favor of rapid economic development. China has the second-highest death toll from air pollution in the world, after India, with an estimated 1 million deaths caused by exposure to ambient air pollution. China is the world's largest carbon dioxide emitter. The country also has significant water pollution problems: 8.2% of China's rivers had been polluted by industrial and agricultural waste in 2019, and were unfit for use. However, China is the world's leading investor in renewable energy and its commercialization, with $52 billion invested in 2011 alone; it is a major manufacturer of renewable energy technologies and invests heavily in local-scale renewable energy projects. 
By 2015, over 24% of China's energy was derived from renewable sources, most notably hydroelectric power: a total installed capacity of 197 GW makes China the largest hydroelectric power producer in the world. China also has the world's largest installed capacity of solar photovoltaics and wind power. The People's Republic of China is the second-largest country in the world by land area after Russia, and is the third largest by total area, after Russia and Canada. China's total area is generally stated as approximately 9.6 million square kilometres, though specific figures differ slightly among the "Encyclopædia Britannica", the UN Demographic Yearbook, and the CIA World Factbook. China has the longest combined land border in the world, stretching from the mouth of the Yalu River (Amnok River) to the Gulf of Tonkin. China borders 14 nations, more than any other country except Russia, which also borders 14. China extends across much of East Asia, bordering Vietnam, Laos, and Myanmar (Burma) in Southeast Asia; India, Bhutan, Nepal, Afghanistan, and Pakistan in South Asia; Tajikistan, Kyrgyzstan and Kazakhstan in Central Asia; and Russia, Mongolia, and North Korea in Inner Asia and Northeast Asia. Additionally, China shares maritime boundaries with South Korea, Japan, Vietnam, and the Philippines. China's constitution states that the People's Republic of China "is a socialist state under the people's democratic dictatorship led by the working class and based on the alliance of workers and peasants," and that the state organs "apply the principle of democratic centralism." The PRC is one of the world's only socialist states explicitly aiming to build communism. 
The Chinese government has been variously described as communist and socialist, but also as authoritarian and corporatist, with heavy restrictions in many areas, most notably against free access to the Internet, freedom of the press, freedom of assembly, the right to have children, free formation of social organizations and freedom of religion. Its current political, ideological and economic system has been termed by its leaders a "consultative democracy", a "people's democratic dictatorship", "socialism with Chinese characteristics" (which is Marxism adapted to Chinese circumstances) and the "socialist market economy", respectively. According to Lutgard Lams, "President Xi is making great attempts to 'Sinicize' Marxist–Leninist Thought 'with Chinese characteristics' in the political sphere." Since 2018, the main body of the Chinese constitution declares that "the defining feature of socialism with Chinese characteristics is the leadership of the Communist Party of China (CPC)." The 2018 amendments constitutionalized the "de facto" one-party state status of China, wherein the General Secretary (party leader) holds ultimate power and authority over state and government and serves as the paramount leader of China. The electoral system is pyramidal. Local People's Congresses are directly elected, and higher levels of People's Congresses up to the National People's Congress (NPC) are indirectly elected by the People's Congress of the level immediately below. The political system is decentralized, and provincial and sub-provincial leaders have a significant amount of autonomy. Eight other political parties have representatives in the NPC and the Chinese People's Political Consultative Conference (CPPCC). China supports the Leninist principle of "democratic centralism", but critics describe the elected National People's Congress as a "rubber stamp" body. The President is the titular head of state, elected by the National People's Congress. 
The Premier is the head of government, presiding over the State Council composed of four vice premiers and the heads of ministries and commissions. The incumbent president is Xi Jinping, who is also the General Secretary of the Communist Party of China and the Chairman of the Central Military Commission, making him China's paramount leader. The incumbent premier is Li Keqiang, who is also a senior member of the CPC Politburo Standing Committee, China's "de facto" top decision-making body. There have been some moves toward political liberalization, in that openly contested elections are now held at the village and town levels. However, the party retains effective control over government appointments: in the absence of meaningful opposition, the CPC wins by default most of the time. In 2017, Xi called on the Communist Party to further tighten its grip on the country, to uphold the unity of the party leadership, and to achieve the "Chinese Dream of national rejuvenation". Political concerns in China include the growing gap between rich and poor and government corruption. Nonetheless, the level of public support for the government and its management of the nation is high, with 80–95% of Chinese citizens expressing satisfaction with the central government, according to a 2011 survey. The People's Republic of China is divided into 22 provinces, five autonomous regions (each with a designated minority group), and four municipalities—collectively referred to as "mainland China"—as well as the special administrative regions (SARs) of Hong Kong and Macau. Geographically, all 31 provincial divisions of mainland China can be grouped into six regions: North China, Northeast China, East China, South Central China, Southwest China, and Northwest China. China considers Taiwan to be its 23rd province, although Taiwan is governed by the Republic of China (ROC), which rejects the PRC's claim. Conversely, the ROC claims sovereignty over all divisions governed by the PRC. 
The PRC has diplomatic relations with 175 countries and maintains embassies in 162. Its legitimacy is disputed by the Republic of China and a few other countries; it is thus the largest and most populous state with limited recognition. In 1971, the PRC replaced the Republic of China as the sole representative of China in the United Nations and as one of the five permanent members of the United Nations Security Council. China is a former member and leader of the Non-Aligned Movement, and still considers itself an advocate for developing countries. Along with Brazil, Russia, India and South Africa, China is a member of the BRICS group of emerging major economies and hosted the group's third official summit at Sanya, Hainan in April 2011. Under its interpretation of the One-China policy, Beijing has made it a precondition to establishing diplomatic relations that the other country acknowledges its claim to Taiwan and severs official ties with the government of the Republic of China. Chinese officials have protested on numerous occasions when foreign countries have made diplomatic overtures to Taiwan, especially in the matter of armament sales. Much of current Chinese foreign policy is reportedly based on Premier Zhou Enlai's Five Principles of Peaceful Coexistence, and is also driven by the concept of "harmony without uniformity", which encourages diplomatic relations between states despite ideological differences. This policy may have led China to support states that are regarded as dangerous or repressive by Western nations, such as Zimbabwe, North Korea and Iran. China has a close economic and military relationship with Russia, and the two states often vote in unison in the UN Security Council. China became the world's largest trading nation in 2013, as measured by the sum of imports and exports. By 2016, China was the largest trading partner of 124 other countries. China became a member of the World Trade Organization (WTO) on 11 December 2001. 
In 2004, it proposed an entirely new East Asia Summit (EAS) framework as a forum for regional security issues. The EAS, which includes ASEAN Plus Three, India, Australia and New Zealand, held its inaugural summit in 2005. China has had a long and complex trade relationship with the United States. In 2000, the United States Congress approved "permanent normal trade relations" (PNTR) with China, allowing Chinese exports in at the same low tariffs as goods from most other countries. China has a significant trade surplus with the United States, its most important export market. In the early 2010s, US politicians argued that the Chinese yuan was significantly undervalued, giving China an unfair trade advantage. Since the turn of the century, China has followed a policy of engaging with African nations for trade and bilateral co-operation; in 2012, Sino-African trade totalled over US$160 billion. China maintains healthy and highly diversified trade links with the European Union. China has furthermore strengthened its ties with major South American economies, becoming the largest trading partner of Brazil and building strategic links with Argentina. China's Belt and Road Initiative has expanded significantly over the last six years and, as of 2019, includes 137 countries and 30 international organizations. Ever since its establishment after the second Chinese Civil War, the PRC has claimed the territories governed by the Republic of China (ROC), a separate political entity today commonly known as Taiwan, as a part of its territory. It regards the island of Taiwan as its Taiwan Province, Kinmen and Matsu as a part of Fujian Province and islands the ROC controls in the South China Sea as a part of Hainan Province and Guangdong Province. These claims are controversial because of the complicated Cross-Strait relations, with the PRC treating the One-China policy as one of its most important diplomatic principles. 
In addition to Taiwan, China is also involved in other international territorial disputes. Since the 1990s, China has been involved in negotiations to resolve its disputed land borders, including a disputed border with India and an undefined border with Bhutan. China is additionally involved in multilateral disputes over the ownership of several small islands in the East and South China Seas, such as the Senkaku Islands and the Scarborough Shoal. The Chinese democracy movement, social activists, and some members of the Communist Party of China believe in the need for social and political reform. While economic and social controls have been significantly relaxed in China since the 1970s, political freedom is still tightly restricted. The Constitution of the People's Republic of China states that the "fundamental rights" of citizens include freedom of speech, freedom of the press, the right to a fair trial, freedom of religion, universal suffrage, and property rights. However, in practice, these provisions do not afford significant protection against criminal prosecution by the state. Although some criticisms of government policies and the ruling Communist Party are tolerated, censorship of political speech and information, most notably on the Internet, is routinely used to prevent collective action. By 2020, China plans to give all its citizens a personal "Social Credit" score based on how they behave. The Social Credit System, now being piloted in a number of Chinese cities, is considered a form of mass surveillance which uses big data analysis technology. A number of foreign governments, foreign press agencies, and NGOs have criticized China's human rights record, alleging widespread civil rights violations such as detention without trial, forced abortions, forced confessions, torture, restrictions of fundamental rights, and excessive use of the death penalty. 
The government suppresses popular protests and demonstrations that it considers a potential threat to "social stability", as was the case with the Tiananmen Square protests of 1989. Falun Gong, a religious meditation movement, was first taught publicly in 1992. In 1999, when there were about 70 million practitioners, the persecution of Falun Gong began, resulting in mass arrests, extralegal detention, and reports of alleged torture and deaths in custody. The Chinese state is regularly accused of large-scale repression and human rights abuses in Tibet and Xinjiang, including violent police crackdowns and religious suppression. At least one million members of China's Muslim Uyghur minority have been detained in mass detention camps, termed "Vocational Education and Training Centers", aimed at changing the political thinking of detainees, their identities, and their religious beliefs. In January 2019, the United Nations asked for direct access to the detention camps after a panel said it had received "credible reports" that 1.1 million Uyghurs, Kazakhs, Hui and other ethnic minorities had been detained in these camps. The state has also sought to control offshore reporting of tensions in Xinjiang, intimidating foreign-based reporters by detaining their family members. The Global Slavery Index estimated that in 2016 more than 3.8 million people were living in "conditions of modern slavery", or 0.25% of the population, including victims of human trafficking, forced labor, forced marriage, child labor, and state-imposed forced labor. The state-imposed forced labor system was formally abolished in 2013, but it is not clear to what extent its various practices have stopped. The Chinese penal system includes labor prison factories, detention centers, and re-education camps, which fall under the heading Laogai ("reform through labor"). 
The Laogai Research Foundation in the United States estimated that there were over a thousand slave labour prisons and camps, known collectively as the Laogai. In 2019 a study called for the mass retraction of more than 400 scientific papers on organ transplantation, because of fears the organs were obtained unethically from Chinese prisoners. While the government says 10,000 transplants occur each year, hospital data shows between 60,000 and 100,000 organs are transplanted each year. The report provided evidence that this gap is being made up by executed prisoners of conscience. With 2.3 million active troops, the People's Liberation Army (PLA) is the largest standing military force in the world, commanded by the Central Military Commission (CMC). China has the second-largest military reserve force, behind only North Korea. The PLA consists of the Ground Force (PLAGF), the Navy (PLAN), the Air Force (PLAAF), and the People's Liberation Army Rocket Force (PLARF). According to the Chinese government, China's military budget for 2017 totalled US$151.5 billion, constituting the world's second-largest military budget, although at 1.3% of GDP its ratio of military expenditure to GDP is below the world average. However, many authorities – including SIPRI and the U.S. Office of the Secretary of Defense – argue that China does not report its real level of military spending, which is allegedly much higher than the official budget. China is one of the world's five recognized nuclear weapons states. Since 2010, China has had the world's second-largest economy in terms of nominal GDP, totaling approximately US$13.5 trillion (90 trillion Yuan) as of 2018. In terms of purchasing power parity (PPP GDP), China's economy has been the largest in the world since 2014, according to the World Bank. According to the World Bank, China's GDP grew from $150 billion in 1978 to $13.6 trillion by 2018. 
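As an illustrative aside (the calculation is not from the source), the two World Bank endpoints above imply a compound annual growth rate of roughly 12% in nominal US-dollar terms over the forty years from 1978 to 2018:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# World Bank figures cited above: $150 billion (1978) to $13.6 trillion (2018).
rate = cagr(150e9, 13.6e12, 2018 - 1978)
print(f"{rate:.1%}")  # prints "11.9%" per year, in nominal US-dollar terms
```

Note this is nominal dollar growth, not the inflation-adjusted "above 6 percent" real-growth figures quoted elsewhere in the article.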
China's economic growth has been consistently above 6 percent since the introduction of economic reforms in 1978. China is also the world's largest exporter and second-largest importer of goods. Between 2010 and 2019, China's contribution to global GDP growth has been 25% to 39%. China had the largest economy in the world for most of the past two thousand years, during which it saw cycles of prosperity and decline. Since economic reforms began in 1978, China has developed into a highly diversified economy and one of the most consequential players in international trade. Major sectors of competitive strength include manufacturing, retail, mining, steel, textiles, automobiles, energy generation, green energy, banking, electronics, telecommunications, real estate, e-commerce, and tourism. China has three out of the ten largest stock exchanges in the world—Shanghai, Hong Kong and Shenzhen—that together have a market capitalization of over $10 trillion, as of 2019. China has been the world's No. 1 manufacturer since 2010, after overtaking the US, which had been No. 1 for the previous hundred years. China has also been No. 2 in high-tech manufacturing since 2012, according to the US National Science Foundation. China is the second-largest retail market in the world, after the United States. China leads the world in e-commerce, accounting for 40% of the global market share in 2016 and more than 50% of the global market share in 2019. China is the world's leader in electric vehicles, manufacturing and buying half of all the plug-in electric cars (BEV and PHEV) in the world in 2018. China had 174 GW of installed solar capacity by the end of 2018, which amounts to more than 40% of the global solar capacity. As of 2018, China was first in the world in total number of billionaires and second in millionaires—there were 658 Chinese billionaires and 3.5 million millionaires. 
However, it ranks behind over 70 countries (out of around 180) in per capita economic output, making it a middle-income country. Additionally, its development is highly uneven. Its major cities and coastal areas are far more prosperous than rural and interior regions. China brought more people out of extreme poverty than any other country in history—between 1978 and 2018, China reduced extreme poverty by 800 million. China reduced the extreme poverty rate—defined by the international standard as income of less than US$1.90 per day—from 88% in 1981 to 1.85% by 2013. According to the World Bank, the number of Chinese in extreme poverty fell from 756 million to 25 million between 1990 and 2013. China's own national poverty standards are higher, and thus the national poverty rates were 3.1% in 2017 and 1% in 2018. In 2019, China overtook the US as the home to the highest number of rich people in the world, according to the global wealth report by Credit Suisse. In other words, as of 2019, 100 million Chinese are in the top 10% of the wealthiest individuals in the world—those who have a net personal wealth of at least $110,000. From its founding in 1949 until late 1978, the People's Republic of China was a Soviet-style centrally planned economy. Following Mao's death in 1976 and the consequent end of the Cultural Revolution, Deng Xiaoping and the new Chinese leadership began to reform the economy and move towards a more market-oriented mixed economy under one-party rule. Agricultural collectivization was dismantled and farmlands privatized, while foreign trade became a major new focus, leading to the creation of Special Economic Zones (SEZs). Inefficient state-owned enterprises (SOEs) were restructured and unprofitable ones were closed outright, resulting in massive job losses. Modern-day China is mainly characterized as having a market economy based on private property ownership, and is one of the leading examples of state capitalism. 
The state still dominates in strategic "pillar" sectors such as energy production and heavy industries, but private enterprise has expanded enormously, with around 30 million private businesses recorded in 2008. In 2018, private enterprises in China accounted for 60% of GDP, 80% of urban employment and 90% of new jobs. In the early 2010s, China's economic growth rate began to slow amid domestic credit troubles, weakening international demand for Chinese exports and fragility in the global economy. China's GDP was smaller than Germany's in 2007; however, by 2017, China's $12.2 trillion economy had become larger than those of Germany, the UK, France and Italy combined. In 2018, the IMF reiterated its forecast that China will overtake the US in terms of nominal GDP by the year 2030. Economists also expect China's middle class to expand to 600 million people by 2025. China is a member of the WTO and is the world's largest trading power, with a total international trade value of US$4.62 trillion in 2018. Its foreign exchange reserves reached US$3.1 trillion as of 2019, by far the world's largest. In 2012, China was the world's largest recipient of inward foreign direct investment (FDI), attracting $253 billion. In 2014, China's foreign exchange remittances were US$64 billion, making it the second-largest recipient of remittances in the world. China also invests abroad, with a total outward FDI of $62.4 billion in 2012, and a number of major takeovers of foreign firms by Chinese companies. China is a major owner of US public debt, holding trillions of dollars worth of U.S. Treasury bonds. China's undervalued exchange rate has caused friction with other major economies, and it has also been widely criticized for manufacturing large quantities of counterfeit goods. Following the 2007–08 financial crisis, Chinese authorities sought to wean the country off its dependence on the US dollar as a result of perceived weaknesses of the international monetary system. 
To achieve those ends, China took a series of actions to further the internationalization of the Renminbi. In 2008, China established the dim sum bond market and expanded the Cross-Border Trade RMB Settlement Pilot Project, which helps establish pools of offshore RMB liquidity. This was followed by bilateral agreements to settle trades directly in renminbi with Russia, Japan, Australia, Singapore, the United Kingdom, and Canada. As a result of the rapid internationalization of the renminbi, it became the eighth-most-traded currency in the world, an emerging international reserve currency, and a component of the IMF's special drawing rights; however, partly due to capital controls that make the renminbi fall short of being a fully convertible currency, it remains far behind the euro, the US dollar and the Japanese yen in international trade volumes. China has had the world's largest middle class population since 2015, and the middle class grew to a size of 400 million by 2018. Wages in China have grown exponentially in the last 40 years—real (inflation-adjusted) wages grew seven-fold from 1978 to 2007. By 2018, median wages in Chinese cities such as Shanghai were about the same as or higher than the wages in Eastern European countries. China has the world's second-highest number of billionaires, with nearly 400 as of 2018, increasing at the rate of roughly two per week. China has a high level of economic inequality, which has increased in the past few decades. In 2018 China's Gini index was 0.467, according to the World Bank. China was once a world leader in science and technology up until the Ming dynasty. Ancient Chinese discoveries and inventions, such as papermaking, printing, the compass, and gunpowder (the Four Great Inventions), became widespread across East Asia, the Middle East and later to Europe. Chinese mathematicians were the first to use negative numbers. By the 17th century, Europe and the Western world surpassed China in scientific and technological advancement. 
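The Gini index mentioned above summarizes income inequality on a scale from 0 (everyone earns the same) to 1 (one person earns everything). A minimal sketch of how it is computed from a list of incomes, using a toy distribution rather than any real Chinese data:

```python
def gini(incomes):
    """Gini coefficient via the rank-weighted formula:
    G = 2 * sum(rank_i * x_i) / (n * sum(x)) - (n + 1) / n,
    with incomes sorted ascending and ranks starting at 1."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Toy examples (illustrative only): perfect equality gives 0.
print(gini([10, 10, 10, 10]))           # 0.0
print(round(gini([1, 2, 3, 10]), 3))    # 0.438
```

The World Bank's published figure of 0.467 comes from survey income distributions aggregated this way; the helper here is a hypothetical illustration of the formula, not the Bank's methodology.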
The causes of this early modern Great Divergence continue to be debated by scholars to this day. After repeated military defeats by the European colonial powers and Japan in the 19th century, Chinese reformers began promoting modern science and technology as part of the Self-Strengthening Movement. After the Communists came to power in 1949, efforts were made to organize science and technology based on the model of the Soviet Union, in which scientific research was part of central planning. After Mao's death in 1976, science and technology was established as one of the Four Modernizations, and the Soviet-inspired academic system was gradually reformed. Since the end of the Cultural Revolution, China has made significant investments in scientific research and is quickly catching up with the US in R&D spending. In 2017, China spent $279 billion on scientific research and development. According to the OECD, China spent 2.11% of its GDP on research and development (R&D) in 2016. Science and technology are seen as vital for achieving China's economic and political goals, and are held as a source of national pride to a degree sometimes described as "techno-nationalism". In 2017, China was No. 2 in international patent applications, behind the US but ahead of Japan. Chinese tech companies Huawei and ZTE were the top two filers of international patents in 2017. Chinese-born scientists have won the Nobel Prize in Physics four times, and the Nobel Prizes in Chemistry and in Physiology or Medicine once each, though most of these scientists conducted their Nobel-winning research in Western nations. China is developing its education system with an emphasis on science, mathematics and engineering; in 2009, China graduated over 10,000 PhD engineers and as many as 500,000 BSc holders, more than any other country. China also became the world's largest publisher of scientific papers in 2016. 
Chinese technology companies such as Huawei and Lenovo have become world leaders in telecommunications and personal computing, and Chinese supercomputers are consistently ranked among the world's most powerful. China has been the world's largest market for industrial robots since 2013 and will account for 45% of newly installed robots from 2019 to 2021. The Chinese space program is one of the world's most active. In 1970, China launched its first satellite, Dong Fang Hong I, becoming the fifth country to do so independently. In 2003, China became the third country to independently send humans into space, with Yang Liwei's spaceflight aboard Shenzhou 5; ten Chinese nationals have since journeyed into space, including two women. In 2011, China's first space station module, Tiangong-1, was launched, marking the first step in a project to assemble a large manned station by the early 2020s. In 2013, China successfully landed the Chang'e 3 lander and Yutu rover onto the lunar surface. In 2019, China became the first country to land a probe—Chang'e 4—on the far side of the moon. China is the largest telecom market in the world and currently has the largest number of active cellphones of any country, with over 1.5 billion subscribers as of 2018. It also has the world's largest number of internet and broadband users, with over 800 million internet users, equivalent to around 60% of its population, almost all of whom are mobile users as well. By 2018, China had more than 1 billion 4G users, accounting for 40% of the world's total. China is making rapid advances in 5G—by late 2018, China had started large-scale and commercial 5G trials. China Mobile, China Unicom and China Telecom are the three largest providers of mobile and internet service in China. China Telecom alone served more than 145 million broadband subscribers and 300 million mobile users; China Unicom had about 300 million subscribers; and China Mobile, the biggest of them all, had 925 million users, as of 2018. 
Combined, the three operators had over 3.4 million 4G base stations in China. Several Chinese telecommunications companies, most notably Huawei and ZTE, have been accused of spying for the Chinese military. China is developing its own satellite navigation system, dubbed Beidou, which began offering commercial navigation services across Asia in 2012 and began providing global services at the end of 2018. China thus joins the United States and Russia as the only countries to provide global satellite navigation. Since the late 1990s, China's national road network has been significantly expanded through the creation of a network of national highways and expressways. In 2018, China's highways had reached a total length of , making it the longest highway system in the world. China has the world's largest market for automobiles, having surpassed the United States in both auto sales and production. A side-effect of the rapid growth of China's road network has been a significant rise in traffic accidents, though the number of fatalities in traffic accidents fell by 20% from 2007 to 2017. In urban areas, bicycles remain a common mode of transport, despite the increasing prevalence of automobiles; there are approximately 470 million bicycles in China. China's railways, which are state-owned, are among the busiest in the world, handling a quarter of the world's rail traffic volume on only 6 percent of the world's tracks in 2006. As of 2017, the country had of railways, the second-longest network in the world. The railways strain to meet enormous demand, particularly during the Chinese New Year holiday, when the world's largest annual human migration takes place. China's high-speed rail (HSR) system started construction in the early 2000s. By the end of 2019, high-speed rail in China had over of dedicated lines alone, making it the longest HSR network in the world. 
With an annual ridership of over 1.1 billion passengers in 2015, it is the world's busiest. The network includes the Beijing–Guangzhou–Shenzhen High-Speed Railway, the single longest HSR line in the world, and the Beijing–Shanghai High-Speed Railway, which has three of the longest railroad bridges in the world. The Shanghai Maglev Train, which reaches , is the fastest commercial train service in the world. Since 2000, the growth of rapid transit systems in Chinese cities has accelerated. Twenty-six Chinese cities have urban mass transit systems in operation, and 39 more have metro systems approved, with a dozen due to join them by 2020. There were approximately 229 airports in 2017, with around 240 planned by 2020. China has over 2,000 river and seaports, about 130 of which are open to foreign shipping. In 2017, the ports of Shanghai, Hong Kong, Shenzhen, Ningbo-Zhoushan, Guangzhou, Qingdao and Tianjin ranked in the top 10 in the world in container traffic and cargo tonnage. Water supply and sanitation infrastructure in China is facing challenges such as rapid urbanization, as well as water scarcity, contamination, and pollution. According to data presented by the Joint Monitoring Program for Water Supply and Sanitation of WHO and UNICEF in 2015, about 36% of the rural population in China still did not have access to improved sanitation. In June 2010, there were 1,519 sewage treatment plants in China, and 18 plants were being added each week. The ongoing South–North Water Transfer Project intends to abate water shortage in the north. The national census of 2010 recorded the population of the People's Republic of China as approximately 1,370,536,875. About 16.60% of the population were 14 years old or younger, 70.14% were between 15 and 59 years old, and 13.26% were over 60 years old. The population growth rate for 2013 is estimated to be 0.46%. China used to make up much of the world's poor; now it makes up much of the world's middle class. 
Although China is a middle-income country by Western standards, its rapid growth has pulled some 800 million of its people out of poverty since 1978. By 2013, less than 2% of the Chinese population lived below the international poverty line of US$1.90 per day, down from 88% in 1981. China's own poverty standards are higher, and the country nevertheless aims to eradicate national poverty completely by 2019. From 2009 to 2018, the unemployment rate in China has averaged about 4%. Given concerns about population growth, China implemented a two-child limit during the 1970s, and, in 1979, began to advocate for an even stricter limit of one child per family. Beginning in the mid-1980s, however, given the unpopularity of the strict limits, China began to allow some major exemptions, particularly in rural areas, resulting in what was actually a "1.5"-child policy from the mid-1980s to 2015 (ethnic minorities were also exempt from one-child limits). The next major loosening of the policy was enacted in December 2013, allowing families to have two children if one parent is an only child. In 2016, the one-child policy was replaced by a two-child policy. Data from the 2010 census implies that the total fertility rate may be around 1.4, although due to underreporting of births it may be closer to 1.5–1.6. According to one group of scholars, one-child limits had little effect on population growth or the size of the total population. However, other scholars have challenged this conclusion: their counterfactual model of fertility decline without such restrictions implies that China averted more than 500 million births between 1970 and 2015, a number which may reach one billion by 2060 given all the lost descendants of births averted during the era of fertility restrictions, with one-child restrictions accounting for the great bulk of that reduction. 
The policy, along with traditional preference for boys, may have contributed to an imbalance in the sex ratio at birth. According to the 2010 census, the sex ratio at birth was 118.06 boys for every 100 girls, which is beyond the normal range of around 105 boys for every 100 girls. The 2010 census found that males accounted for 51.27 percent of the total population. However, China's sex ratio is more balanced than it was in 1953, when males accounted for 51.82 percent of the total population. China legally recognizes 56 distinct ethnic groups, who altogether comprise the "Zhonghua Minzu". The largest of these nationalities are the ethnic Chinese or "Han", who constitute about 91% of the total population. The Han Chinese – the world's largest single ethnic group – outnumber other ethnic groups in every provincial-level division except Tibet and Xinjiang. Ethnic minorities account for less than 10% of the population of China, according to the 2010 census. Compared with the 2000 population census, the Han population increased by 66,537,177 persons, or 5.74%, while the population of the 55 national minorities combined increased by 7,362,627 persons, or 6.92%. The 2010 census recorded a total of 593,832 foreign nationals living in China. The largest such groups were from South Korea (120,750), the United States (71,493) and Japan (66,159). There are as many as 292 living languages in China. The languages most commonly spoken belong to the Sinitic branch of the Sino-Tibetan language family, which contains Mandarin (spoken by 70% of the population), and other varieties of Chinese language: Yue (including Cantonese and Taishanese), Wu (including Shanghainese and Suzhounese), Min (including Fuzhounese, Hokkien and Teochew), Xiang, Gan and Hakka. Languages of the Tibeto-Burman branch, including Tibetan, Qiang, Naxi and Yi, are spoken across the Tibetan and Yunnan–Guizhou Plateau. 
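To make the ratios above concrete, a sex ratio expressed as boys per 100 girls can be converted into the share of births that are male; this quick illustrative check (the conversion is mine, not from the source) shows how far 118.06 sits from the typical baseline of around 105:

```python
def male_share(boys_per_100_girls):
    """Share of all births that are male, given a ratio of boys per 100 girls."""
    return boys_per_100_girls / (boys_per_100_girls + 100)

# 2010 census birth ratio cited above vs. the normal range's midpoint
print(f"{male_share(118.06):.2%}")  # prints "54.14%" of births male
print(f"{male_share(105):.2%}")     # prints "51.22%" at the typical ratio
```

Note that the census figure of 51.27 percent male refers to the total population, not to births, which is why it is closer to parity than the birth-ratio conversion.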
Other ethnic minority languages in southwest China include Zhuang, Thai, Dong and Sui of the Tai-Kadai family, Miao and Yao of the Hmong–Mien family, and Wa of the Austroasiatic family. Across northeastern and northwestern China, local ethnic groups speak Altaic languages including Manchu, Mongolian and several Turkic languages: Uyghur, Kazakh, Kyrgyz, Salar and Western Yugur. Korean is spoken natively along the border with North Korea. Sarikoli, the language of Tajiks in western Xinjiang, is an Indo-European language. Taiwanese aborigines, including a small population on the mainland, speak Austronesian languages. Standard Mandarin, a variety of Mandarin based on the Beijing dialect, is the official national language of China and is used as a lingua franca in the country between people of different linguistic backgrounds. Mongolian, Uyghur, Tibetan, Zhuang and various other languages are also regionally recognized throughout the country. Chinese characters have been used as the written script for the Sinitic languages for thousands of years. They allow speakers of mutually unintelligible Chinese varieties to communicate with each other through writing. In 1956, the government introduced simplified characters, which have supplanted the older traditional characters in mainland China. Chinese characters are romanized using the Pinyin system. Tibetan uses an alphabet based on an Indic script. Uyghur is most commonly written in the Uyghur Arabic alphabet, which is based on the Persian alphabet. The Mongolian script used in China and the Manchu script are both derived from the Old Uyghur alphabet. Zhuang uses both an official Latin alphabet script and a traditional Chinese character script. China has urbanized significantly in recent decades. The percentage of the country's population living in urban areas increased from 20% in 1980 to over 55% in 2016. It is estimated that China's urban population will reach one billion by 2030, potentially equivalent to one-eighth of the world population. 
There are more than 262 million migrant workers in China, mostly rural migrants seeking work in cities. China has over 160 cities with a population of over one million, including the seven megacities (cities with a population of over 10 million) of Chongqing, Shanghai, Beijing, Guangzhou, Tianjin, Shenzhen, and Wuhan. Shanghai is China's most populous urban area while Chongqing is its largest city proper. By 2025, it is estimated that the country will be home to 221 cities with over a million inhabitants. The figures in the table below are from the 2010 census, and are only estimates of the urban populations within administrative city limits; a different ranking exists when considering the total municipal populations (which includes suburban and rural populations). The large "floating populations" of migrant workers make conducting censuses in urban areas difficult; the figures below include only long-term residents. Since 1986, compulsory education in China comprises primary and junior secondary school, which together last for nine years. In 2010, about 82.5 percent of students continued their education at a three-year senior secondary school. The Gaokao, China's national university entrance exam, is a prerequisite for entrance into most higher education institutions. In 2010, 27 percent of secondary school graduates were enrolled in higher education. This number increased significantly over the following years, reaching a tertiary school enrollment of 50 percent in 2018. Vocational education is available to students at the secondary and tertiary level. In February 2006, the government pledged to provide completely free nine-year education, including textbooks and fees. Annual education investment went from less than US$50 billion in 2003 to more than US$250 billion in 2011. However, there remains an inequality in education spending. 
In 2010, the annual education expenditure per secondary school student in Beijing totalled ¥20,023, while in Guizhou, one of the poorest provinces in China, it totalled only ¥3,204. Free compulsory education in China consists of primary school and junior secondary school between the ages of 6 and 15. In 2011, around 81.4% of Chinese had received secondary education. Some 96% of the population over age 15 is literate. In 1949, only 20% of the population could read, compared to 65.5% thirty years later. In 2009, Chinese students from Shanghai achieved the world's best results in mathematics, science and literacy, as tested by the Programme for International Student Assessment (PISA), a worldwide evaluation of 15-year-old school pupils' scholastic performance. Despite the high results, Chinese education has also faced both native and international criticism for its emphasis on rote memorization and the quality gap between rural and urban areas. The National Health and Family Planning Commission, together with its counterparts in the local commissions, oversees the health needs of the Chinese population. An emphasis on public health and preventive medicine has characterized Chinese health policy since the early 1950s. At that time, the Communist Party started the Patriotic Health Campaign, which was aimed at improving sanitation and hygiene, as well as treating and preventing several diseases. Diseases such as cholera, typhoid and scarlet fever, which were previously rife in China, were nearly eradicated by the campaign. After Deng Xiaoping began instituting economic reforms in 1978, the health of the Chinese public improved rapidly because of better nutrition, although many of the free public health services provided in the countryside disappeared along with the People's Communes. Healthcare in China became mostly privatized, and experienced a significant rise in quality. 
In 2009, the government began a 3-year large-scale healthcare provision initiative worth US$124 billion. By 2011, the campaign resulted in 95% of China's population having basic health insurance coverage. In 2011, China was estimated to be the world's third-largest supplier of pharmaceuticals, but its population has suffered from the development and distribution of counterfeit medications. The average life expectancy at birth in China is 76 years, and the infant mortality rate is 7 per thousand. Both have improved significantly since the 1950s. Rates of stunting, a condition caused by malnutrition, have declined from 33.1% in 1990 to 9.9% in 2010. Despite significant improvements in health and the construction of advanced medical facilities, China has several emerging public health problems, such as respiratory illnesses caused by widespread air pollution, hundreds of millions of cigarette smokers, and an increase in obesity among urban youths. China's large population and densely populated cities have led to serious disease outbreaks in recent years, such as the 2003 outbreak of SARS, although this has since been largely contained. In 2010, air pollution caused 1.2 million premature deaths in China. The COVID-19 pandemic was first identified in Wuhan in December 2019. The Chinese government has been criticized for its handling of the epidemic and accused of concealing the extent of the outbreak before it became an international pandemic. The government of the People's Republic of China officially espouses state atheism, and has conducted antireligious campaigns to this end. Religious affairs and issues in the country are overseen by the State Administration for Religious Affairs. Freedom of religion is guaranteed by China's constitution, although religious organizations that lack official approval can be subject to state persecution. Over the millennia, Chinese civilization has been influenced by various religious movements. 
The "three teachings" of Confucianism, Taoism, and Buddhism (Chinese Buddhism) have historically played a significant role in shaping Chinese culture, enriching a theological and spiritual framework that harks back to the early Shang and Zhou dynasties. Chinese popular or folk religion, which is framed by the three teachings and other traditions, consists of allegiance to the "shen", a character that signifies the "energies of generation". These may be deities of the environment, ancestral principles of human groups, concepts of civility, or culture heroes, many of whom feature in Chinese mythology and history. Among the most popular cults are those of Mazu (goddess of the seas), Huangdi (one of the two divine patriarchs of the Chinese race), Guandi (god of war and business), Caishen (god of prosperity and richness), Pangu and many others. China is home to many of the world's tallest religious statues, including the tallest of all, the Spring Temple Buddha in Henan. Clear data on religious affiliation in China is difficult to gather due to varying definitions of "religion" and the unorganized, diffusive nature of Chinese religious traditions. Scholars note that in China there is no clear boundary between the three-teachings religions and local folk religious practice. A 2015 poll conducted by Gallup International found that 61% of Chinese people self-identified as "convinced atheist", though it is worth noting that some Chinese religions, or strands of them, are definable as non-theistic and humanistic, since they hold that divine creativity is not wholly transcendent but inherent in the world and in particular in the human being. According to a 2014 study, approximately 74% are either non-religious or practise Chinese folk belief, 16% are Buddhists, 2% are Christians, 1% are Muslims, and 8% adhere to other religions, including Taoism and folk salvationism.
In addition to the Han people's local religious practices, various ethnic minority groups in China maintain their traditional autochthonous religions. The various folk religions today comprise 2–3% of the population, while Confucianism as a religious self-identification is common within the intellectual class. Significant faiths specifically connected to certain ethnic groups include Tibetan Buddhism and Islam among the Hui, Uyghur, Kazakh, Kyrgyz and other peoples of Northwest China. Since ancient times, Chinese culture has been heavily influenced by Confucianism. For much of the country's dynastic era, opportunities for social advancement could be provided by high performance in the prestigious imperial examinations, which have their origins in the Han dynasty. The literary emphasis of the exams affected the general perception of cultural refinement in China, such as the belief that calligraphy, poetry and painting were higher forms of art than dancing or drama. Chinese culture has long emphasized a sense of deep history and a largely inward-looking national perspective. Examinations and a culture of merit remain greatly valued in China today. The first leaders of the People's Republic of China were born into the traditional imperial order, but were influenced by the May Fourth Movement and reformist ideals. They sought to change some traditional aspects of Chinese culture, such as rural land tenure, sexism, and the Confucian system of education, while preserving others, such as the family structure and culture of obedience to the state.
Some observers see the period following the establishment of the PRC in 1949 as a continuation of traditional Chinese dynastic history, while others claim that the Communist Party's rule has damaged the foundations of Chinese culture, especially through political movements such as the Cultural Revolution of the 1960s, during which many aspects of traditional culture were destroyed, having been denounced as "regressive and harmful" or "vestiges of feudalism". Many important aspects of traditional Chinese morals and culture, such as Confucianism, art, literature, and performing arts like Peking opera, were altered to conform to government policies and propaganda at the time. Access to foreign media remains heavily restricted. Today, the Chinese government has accepted numerous elements of traditional Chinese culture as being integral to Chinese society. With the rise of Chinese nationalism and the end of the Cultural Revolution, various forms of traditional Chinese art, literature, music, film, fashion and architecture have seen a vigorous revival, and folk and variety art in particular have sparked interest nationally and even worldwide. China is now the third-most-visited country in the world, with 55.7 million inbound international visitors in 2010. It also experiences an enormous volume of domestic tourism; an estimated 740 million Chinese holidaymakers travelled within the country in October 2012. Chinese literature is based on the literature of the Zhou dynasty. Concepts covered within the Chinese classic texts present a wide range of thoughts and subjects, including the calendar, military affairs, astrology, herbology, geography and many others. Some of the most important early texts include the "I Ching" and the "Shujing" within the Four Books and Five Classics, which served as the Confucian authoritative books for the state-sponsored curriculum in the dynastic era. Inherited from the "Classic of Poetry", classical Chinese poetry reached its height during the Tang dynasty.
Li Bai and Du Fu opened new paths for poetry through romanticism and realism respectively. Chinese historiography began with the "Shiji"; the overall scope of the historiographical tradition in China is termed the Twenty-Four Histories, which, along with Chinese mythology and folklore, set a vast stage for Chinese fiction. Pushed by a burgeoning citizen class in the Ming dynasty, Chinese classical fiction flourished in historical, urban, and gods-and-demons genres, as represented by the Four Great Classical Novels: "Water Margin", "Romance of the Three Kingdoms", "Journey to the West" and "Dream of the Red Chamber". Along with the wuxia fiction of Jin Yong and Liang Yusheng, it remains an enduring source of popular culture in the East Asian cultural sphere. In the wake of the New Culture Movement after the end of the Qing dynasty, Chinese literature embarked on a new era with written vernacular Chinese for ordinary citizens. Hu Shih and Lu Xun were pioneers in modern literature. Various literary genres, such as misty poetry, scar literature, young adult fiction and xungen literature, which is influenced by magic realism, emerged following the Cultural Revolution. Mo Yan, a xungen literature author, was awarded the Nobel Prize in Literature in 2012. Chinese cuisine is highly diverse, drawing on several millennia of culinary history and geographical variety; the most influential traditions are known as the "Eight Major Cuisines", including Sichuan, Cantonese, Jiangsu, Shandong, Fujian, Hunan, Anhui, and Zhejiang cuisines. All are distinguished by precise techniques of shaping, heating, coloring and flavoring. Chinese cuisine is also known for its breadth of cooking methods and ingredients, as well as the food therapy emphasized by traditional Chinese medicine. Generally, China's staple food is rice in the south and wheat-based breads and noodles in the north.
The diet of the common people in pre-modern times was largely grain and simple vegetables, with meat reserved for special occasions. Bean products such as tofu and soy milk remain a popular source of protein. Pork is now the most popular meat in China, accounting for about three-fourths of the country's total meat consumption. While pork dominates the meat market, there is also the vegetarian Buddhist cuisine and the pork-free Chinese Islamic cuisine. Southern cuisine, due to the area's proximity to the ocean and milder climate, has a wide variety of seafood and vegetables; it differs in many respects from the wheat-based diets across dry northern China. Numerous offshoots of Chinese food, such as Hong Kong cuisine and American Chinese food, have emerged in the nations that play host to the Chinese diaspora. China has one of the oldest sporting cultures in the world. There is evidence that archery ("shèjiàn") was practiced during the Western Zhou dynasty. Swordplay ("jiànshù") and cuju, a sport loosely related to association football, date back to China's early dynasties as well. Physical fitness is widely emphasized in Chinese culture, with morning exercises such as qigong and t'ai chi ch'uan widely practiced, and commercial gyms and private fitness clubs gaining popularity across the country. Basketball is currently the most popular spectator sport in China. The Chinese Basketball Association and the American National Basketball Association have a huge following among the people, with native or ethnic Chinese players such as Yao Ming and Yi Jianlian held in high esteem. China's professional football league, now known as the Chinese Super League, was established in 1994; it is the largest football market in Asia. Other popular sports in the country include martial arts, table tennis, badminton, swimming and snooker. Board games such as go (known as "wéiqí" in Chinese), xiangqi, mahjong, and more recently chess, are also played at a professional level.
In addition, China is home to a huge number of cyclists, with an estimated 470 million bicycles. Many more traditional sports, such as dragon boat racing, Mongolian-style wrestling and horse racing, are also popular. China has participated in the Olympic Games since 1932, although it has only participated as the PRC since 1952. China hosted the 2008 Summer Olympics in Beijing, where its athletes received 51 gold medals – the highest number of gold medals of any participating nation that year. China also won the most medals of any nation at the 2012 Summer Paralympics, with 231 overall, including 95 gold medals. In 2011, Shenzhen in Guangdong, China hosted the 2011 Summer Universiade. China hosted the 2013 East Asian Games in Tianjin and the 2014 Summer Youth Olympics in Nanjing, making it the first country to host both the regular and Youth Olympics. Beijing and its nearby city Zhangjiakou of Hebei province will also collaboratively host the 2022 Olympic Winter Games, which will make Beijing the first city in the world to hold both the Summer Olympics and the Winter Olympics.
https://en.wikipedia.org/wiki?curid=5405
California California is a state in the Pacific Region of the United States. With 39.5 million residents, California is the most populous U.S. state and the third-largest by area, and is also the world's thirty-fourth most populous subnational entity. California is also the most populated subnational entity in North America. The state capital is Sacramento. The Greater Los Angeles Area and the San Francisco Bay Area are the nation's second- and fifth-most populous urban regions, with 18.7 million and 9.7 million residents respectively. Los Angeles is California's most populous city, and the country's second-most populous, after New York City. California also has the nation's most populous county, Los Angeles County, and its largest county by area, San Bernardino County. The City and County of San Francisco is both the country's second most densely populated major city after New York City and the fifth most densely populated county, behind only four of the five New York City boroughs. California's economy, with a gross state product of $3.0 trillion, is the largest sub-national economy in the world. If it were a country, California would be the fifth-largest economy in the world (larger than the United Kingdom, France, or India) and the 37th-most populous. The Greater Los Angeles Area and the San Francisco Bay Area are the nation's second- and third-largest urban economies ($1.3 trillion and $1.0 trillion respectively), after the New York metropolitan area ($2.0 trillion). The San Francisco Bay Area PSA had the nation's highest gross domestic product per capita in 2018 ($106,757) among large primary statistical areas, and is home to four of the world's ten largest companies by market capitalization and four of the world's ten richest people. California culture is considered a global trendsetter in popular culture, communication, information, innovation, environmentalism, economics, politics, and entertainment.
As a result of the state's diversity and migration, California integrates foods, languages, and traditions from other areas across the country and around the globe. It is considered the origin of the American film industry, the hippie counterculture, barbecue, fast food, beach and car culture, the Internet, and the personal computer, among others. The San Francisco Bay Area and the Greater Los Angeles Area are widely seen as centers of the global technology and entertainment industries, respectively. California's economy is very diverse: 58% of it is based on finance, government, real estate services, technology, and professional, scientific, and technical business services. Although it accounts for only 1.5% of the state's economy, California's agriculture industry has the highest output of any U.S. state. California shares a border with Oregon to the north, Nevada and Arizona to the east, and the Mexican state of Baja California to the south. The state's diverse geography ranges from the Pacific Coast in the west to the Sierra Nevada mountain range in the east, and from the redwood and Douglas fir forests in the northwest to the Mojave Desert in the southeast. The Central Valley, a major agricultural area, dominates the state's center. Although California is well-known for its warm Mediterranean climate, the large size of the state results in climates that vary from moist temperate rainforest in the north to arid desert in the interior, as well as snowy alpine in the mountains. Over time, drought and wildfires have become more frequent. What is now California was first settled by various Native Californian tribes before being explored by a number of European colonizers during the 16th and 17th centuries. The Spanish Empire then claimed and colonized it. In 1804 it was included in Alta California province, within the Viceroyalty of New Spain. 
The area became a part of Mexico in 1821 following its successful war for independence, but was ceded to the United States in 1848 after the Mexican–American War. The western portion of Alta California was then organized and admitted as the 31st state on September 9, 1850. The California Gold Rush starting in 1848 led to dramatic social and demographic changes, with large-scale emigration from the east and abroad and an accompanying economic boom. The Spaniards gave the name to the peninsula of Baja California and to Alta California, the region that became the present-day states of California, Nevada, and Utah, and parts of Arizona, New Mexico, Texas, and Wyoming. The name likely derived from the mythical island of California in the fictional story of Queen Calafia, as recorded in the 1510 work "The Adventures of Esplandián" by Garci Rodríguez de Montalvo. This work was the fifth in a popular Spanish chivalric romance series that began with "Amadis de Gaula". Queen Calafia's kingdom was said to be a remote land rich in gold and pearls, inhabited by beautiful black women who wore gold armor and lived like Amazons, as well as griffins and other strange beasts. In the fictional paradise, the ruler Queen Calafia fought alongside Muslims, and her name may have been chosen to echo the title of a Muslim leader, the Caliph. It is possible the name California was meant to imply the island was a Caliphate. Shortened forms of the state's name include CA, Cal., Calif., and US-CA. Settled by successive waves of arrivals during the last 10,000 years, California was one of the most culturally and linguistically diverse areas in pre-Columbian North America. Various estimates of the native population range from 100,000 to 300,000. The indigenous peoples of California included more than 70 distinct ethnic groups of Native Americans, ranging from large, settled populations living on the coast to groups in the interior.
California groups were also diverse in their political organization, with bands, tribes, villages, and, on the resource-rich coasts, large chiefdoms, such as the Chumash, Pomo and Salinan. Trade, intermarriage and military alliances fostered many social and economic relationships among the diverse groups. The first European effort to explore the California coast as far north as the Russian River was a Spanish sailing expedition, led by Portuguese captain Juan Rodríguez Cabrillo, in 1542. Some 37 years later, in 1579, the English explorer Francis Drake also explored and claimed an undefined portion of the California coast. Spanish traders made unintended visits with the Manila galleons on their return trips from the Philippines beginning in 1565. The first Asians to set foot on what would become the United States arrived in 1587, when Filipino sailors landed in Spanish ships at Morro Bay. Sebastián Vizcaíno explored and mapped the coast of California in 1602 for New Spain. Despite the on-the-ground explorations of California in the 16th century, Rodríguez's idea of California as an island persisted. That depiction appeared on many European maps well into the 18th century. After the Portolà expedition of 1769–70, Spanish missionaries led by Junipero Serra began setting up 21 California Missions on or near the coast of Alta (Upper) California, beginning in San Diego. During the same period, Spanish military forces built several forts ("presidios") and three small towns ("pueblos"). The San Francisco Mission grew into the city of San Francisco, and two of the pueblos grew into the cities of Los Angeles and San Jose. Several other smaller cities and towns also sprang up surrounding the various Spanish missions and pueblos, which remain to this day. The Spanish colonization began decimating the natives through epidemics of various diseases for which the indigenous peoples had no natural immunity, such as measles and diphtheria.
The establishment of the Spanish systems of government and social structure, which the Spanish settlers had brought with them, also technologically and culturally overwhelmed the societies of the earlier indigenous peoples. During this same period, Russian ships also explored along the California coast and in 1812 established a trading post at Fort Ross. Russia's early 19th-century coastal settlements in California were positioned just north of the northernmost edge of the area of Spanish settlement in San Francisco Bay, and were the southernmost Russian settlements in North America. The Russian settlements associated with Fort Ross were spread from Point Arena to Tomales Bay. In 1821, the Mexican War of Independence gave Mexico (including California) independence from Spain. For the next 25 years, Alta California remained a remote, sparsely populated, northwestern administrative district of the newly independent country of Mexico. After Mexican independence from Spain, the missions, which controlled most of the best land in the state, were secularized by 1834 and became the property of the Mexican government. The governor granted many square leagues of land to others with political influence. These huge "ranchos" or cattle ranches emerged as the dominant institutions of Mexican California. The ranchos developed under ownership by Californios (Hispanics native to California) who traded cowhides and tallow with Boston merchants. Beef did not become a commodity until the 1849 Gold Rush. From the 1820s, trappers and settlers from the United States and the future Canada arrived in Northern California. These new arrivals used the Siskiyou Trail, California Trail, Oregon Trail and Old Spanish Trail to cross the rugged mountains and harsh deserts in and surrounding California.
The early government of the newly independent Mexico was highly unstable, and in a reflection of this, from 1831 onwards, California also experienced a series of armed disputes, both internal and with the central Mexican government. During this tumultuous political period, Juan Bautista Alvarado was able to secure the governorship during 1836–1842. The military action which first brought Alvarado to power had momentarily declared California to be an independent state, and had been aided by American and British residents of California, including Isaac Graham. In 1840, one hundred of those residents who did not have passports were arrested, leading to the Graham affair. One of the largest ranchers in California was John Marsh. After failing to obtain justice against squatters on his land from the Mexican courts, he determined that California should become part of the United States. Marsh conducted a letter-writing campaign espousing the California climate, the soil, and other reasons to settle there, as well as the best route to follow, which became known as "Marsh's route". His letters were read, reread, passed around, and printed in newspapers throughout the country, and started the first wagon trains rolling to California. He invited immigrants to stay on his ranch until they could get settled, and assisted them in obtaining passports. After ushering in the period of organized emigration to California, Marsh helped end the rule of the last Mexican governor of California, thereby paving the way to California's ultimate acquisition by the United States. In 1846, a group of American settlers in and around Sonoma rebelled against Mexican rule during the Bear Flag Revolt. Afterwards, rebels raised the Bear Flag (featuring a bear, a star, a red stripe and the words "California Republic") at Sonoma. The Republic's only president was William B. Ide, who played a pivotal role during the Bear Flag Revolt.
This revolt by American settlers served as a prelude to the later American military invasion of California and was closely coordinated with nearby American military commanders. The California Republic was short-lived; the same year marked the outbreak of the Mexican–American War (1846–48). When Commodore John D. Sloat of the United States Navy sailed into Monterey Bay and began the military occupation of California by the United States, Northern California capitulated in less than a month to the United States forces. After a series of defensive battles in Southern California, the Treaty of Cahuenga was signed by the Californios on January 13, 1847, securing American control in California. Following the Treaty of Guadalupe Hidalgo (February 2, 1848) that ended the war, the westernmost portion of the annexed Mexican territory of Alta California soon became the American state of California, and the remainder of the old territory was then subdivided into the new American Territories of Arizona, Nevada, Colorado and Utah. The even more lightly populated and arid lower region of old Baja California remained a part of Mexico. In 1846, the total settler population of the western part of the old Alta California had been estimated to be no more than 8,000, plus about 100,000 Native Americans, down from about 300,000 before Hispanic settlement in 1769. In 1848, only one week before the official American annexation of the area, gold was discovered in California, an event that would forever alter both the state's demographics and its finances. Soon afterward, a massive influx of immigration into the area resulted, as prospectors and miners arrived by the thousands. The population burgeoned with United States citizens, Europeans, Chinese and other immigrants during the great California Gold Rush. By the time of California's application for statehood in 1850, the settler population of California had multiplied to 100,000.
By 1854, more than 300,000 settlers had come. Between 1847 and 1870, the population of San Francisco increased from 500 to 150,000. California was suddenly no longer a sparsely populated backwater; seemingly overnight it had grown into a major population center. The seat of government for California under Spanish and later Mexican rule had been located in Monterey from 1777 until 1845. Pio Pico, the last Mexican governor of Alta California, had briefly moved the capital to Los Angeles in 1845. The United States consulate had also been located in Monterey, under consul Thomas O. Larkin. In 1849, a state Constitutional Convention was first held in Monterey. Among the first tasks of the Convention was a decision on a location for the new state capital. The first full legislative sessions were held in San Jose (1850–1851). Subsequent locations included Vallejo (1852–1853) and nearby Benicia (1853–1854); these locations eventually proved to be inadequate as well. The capital has been located in Sacramento since 1854, with only a short break in 1862 when legislative sessions were held in San Francisco due to flooding in Sacramento. Once the state's Constitutional Convention had finalized its state constitution, it applied to the U.S. Congress for admission to statehood. On September 9, 1850, as part of the Compromise of 1850, California became a free state, and September 9 became a state holiday. During the American Civil War (1861–1865), California was able to send gold shipments eastwards to Washington in support of the Union cause; however, due to the existence of a large contingent of pro-South sympathizers within the state, the state was not able to muster any full military regiments to send eastwards to officially serve in the Union war effort. Still, several smaller military units within the Union army were unofficially associated with the state of California, such as the "California 100 Company", due to a majority of their members being from California.
At the time of California's admission into the Union, travel between California and the rest of the continental United States had been a time-consuming and dangerous feat. Nineteen years later, in 1869, shortly after the conclusion of the Civil War, a more direct connection was developed with the completion of the First Transcontinental Railroad. California was then easy to reach. Much of the state was extremely well suited to fruit cultivation and agriculture in general. Vast expanses of wheat, other cereal crops, vegetable crops, cotton, and nut and fruit trees were grown (including oranges in Southern California), and the foundation was laid for the state's prodigious agricultural production in the Central Valley and elsewhere. Under earlier Spanish and Mexican rule, California's original native population had precipitously declined, above all from Eurasian diseases to which the indigenous people of California had not yet developed a natural immunity. Under its new American administration, California's harsh governmental policies towards its own indigenous people did not improve. As in other American states, many of the native inhabitants were soon forcibly removed from their lands by incoming American settlers such as miners, ranchers, and farmers. Although California had entered the American union as a free state, the "loitering or orphaned Indians" were de facto enslaved by their new Anglo-American masters under the 1853 "Act for the Government and Protection of Indians". There were also massacres in which hundreds of indigenous people were killed. Between 1850 and 1860, the California state government paid around $1.5 million (some $250,000 of which was reimbursed by the federal government) to hire militias whose purpose was to protect settlers from the indigenous populations.
In later decades, the native population was placed in reservations and rancherias, which were often small and isolated and without enough natural resources or funding from the government to sustain the populations living on them. As a result, the rise of California was a calamity for the native inhabitants. Several scholars and Native American activists, including Benjamin Madley and Ed Castillo, have described the actions of the California government as a genocide. Migration to California accelerated during the early 20th century with the completion of major transcontinental highways like the Lincoln Highway and Route 66. In the period from 1900 to 1965, the population grew from fewer than one million to the largest of any state in the Union. In 1940, the Census Bureau reported California's population as 6.0% Hispanic, 2.4% Asian, and 89.5% non-Hispanic white. To meet the population's needs, major engineering feats like the California and Los Angeles Aqueducts, the Oroville and Shasta Dams, and the Bay and Golden Gate Bridges were built across the state. The state government also adopted the California Master Plan for Higher Education in 1960 to develop a highly efficient system of public education. Meanwhile, attracted to the mild Mediterranean climate, cheap land, and the state's wide variety of geography, filmmakers established the studio system in Hollywood in the 1920s. California manufactured 8.7 percent of total United States military armaments produced during World War II, ranking third (behind New York and Michigan) among the 48 states. California, however, easily ranked first in production of military ships during the war (transport and cargo merchant ships such as Liberty ships and Victory ships, as well as warships) at drydock facilities in San Diego, Los Angeles, and the San Francisco Bay Area. After World War II, California's economy greatly expanded due to strong aerospace and defense industries, whose size decreased following the end of the Cold War.
Stanford University and its Dean of Engineering Frederick Terman began encouraging faculty and graduates to stay in California instead of leaving the state, and to develop a high-tech region in the area now known as Silicon Valley. As a result of these efforts, California is regarded as a world center of the entertainment and music industries, of technology, engineering, and the aerospace industry, and as the United States center of agricultural production. Just before the dot-com bust, California had the fifth-largest economy in the world among nations. Yet since 1991, and starting in the late 1980s in Southern California, California has seen a net loss of domestic migrants in most years. This is often referred to by the media as the California exodus. During the 20th century, two great disasters happened in California. The 1906 San Francisco earthquake and the 1928 St. Francis Dam flood remain the deadliest in U.S. history. Although air pollution problems have been reduced, health problems associated with pollution have continued. The brown haze known as "smog" has been substantially abated after the passage of federal and state restrictions on automobile exhaust. An energy crisis in 2001 led to rolling blackouts, soaring power rates, and the importation of electricity from neighboring states. Southern California Edison and Pacific Gas and Electric Company came under heavy criticism. Housing prices in urban areas continued to increase; a modest home which in the 1960s cost $25,000 would cost half a million dollars or more in urban areas by 2005. More people commuted longer hours to afford a home in more rural areas while earning larger salaries in the urban areas. Speculators bought houses they never intended to live in, expecting to make a huge profit in a matter of months, then rolling it over by buying more properties. Mortgage companies were compliant, as everyone assumed the prices would keep rising.
The bubble burst in 2007–08 as housing prices began to crash and the boom years ended. Hundreds of billions in property values vanished and foreclosures soared as many financial institutions and investors were badly hurt. California is the third-largest state in the United States by area, after Alaska and Texas. California is often geographically bisected into two regions: Southern California, comprising the 10 southernmost counties, and Northern California, comprising the 48 northernmost counties. It is bordered by Oregon to the north, Nevada to the east and northeast, Arizona to the southeast, and the Pacific Ocean to the west, and it shares an international border with the Mexican state of Baja California to the south (with which it makes up part of The Californias region of North America, alongside Baja California Sur). In the middle of the state lies the California Central Valley, bounded by the Sierra Nevada in the east, the coastal mountain ranges in the west, the Cascade Range to the north and the Tehachapi Mountains in the south. The Central Valley is California's productive agricultural heartland. Divided in two by the Sacramento-San Joaquin River Delta, the northern portion, the Sacramento Valley, serves as the watershed of the Sacramento River, while the southern portion, the San Joaquin Valley, is the watershed for the San Joaquin River. Both valleys derive their names from the rivers that flow through them. With dredging, the Sacramento and the San Joaquin Rivers have remained deep enough for several inland cities to be seaports. The Sacramento-San Joaquin River Delta is a critical water supply hub for the state. Water is diverted from the delta through an extensive network of pumps and canals that traverse nearly the length of the state, to the Central Valley, the State Water Project, and other uses.
Water from the Delta provides drinking water for nearly 23 million people, almost two-thirds of the state's population as well as water for farmers on the west side of the San Joaquin Valley. Suisun Bay lies at the confluence of the Sacramento and San Joaquin Rivers. The water is drained by the Carquinez Strait, which flows into San Pablo Bay, a northern extension of San Francisco Bay, which then connects to the Pacific Ocean via the Golden Gate strait. The Channel Islands are located off the Southern coast, while the Farallon Islands lie west of San Francisco. The Sierra Nevada (Spanish for "snowy range") includes the highest peak in the contiguous 48 states, Mount Whitney, at . The range embraces Yosemite Valley, famous for its glacially carved domes, and Sequoia National Park, home to the giant sequoia trees, the largest living organisms on Earth, and the deep freshwater lake, Lake Tahoe, the largest lake in the state by volume. To the east of the Sierra Nevada are Owens Valley and Mono Lake, an essential migratory bird habitat. In the western part of the state is Clear Lake, the largest freshwater lake by area entirely in California. Although Lake Tahoe is larger, it is divided by the California/Nevada border. The Sierra Nevada falls to Arctic temperatures in winter and has several dozen small glaciers, including Palisade Glacier, the southernmost glacier in the United States. About 45 percent of the state's total surface area is covered by forests, and California's diversity of pine species is unmatched by any other state. California contains more forestland than any other state except Alaska. Many of the trees in the California White Mountains are the oldest in the world; an individual bristlecone pine is over 5,000 years old. In the south is a large inland salt lake, the Salton Sea. 
The south-central desert is called the Mojave; to the northeast of the Mojave lies Death Valley, which contains the lowest and hottest place in North America, the Badwater Basin at . The horizontal distance from the bottom of Death Valley to the top of Mount Whitney is less than . Indeed, almost all of southeastern California is arid, hot desert, with routine extreme high temperatures during the summer. The southeastern border of California with Arizona is entirely formed by the Colorado River, from which the southern part of the state gets about half of its water. A majority of California's cities are located in either the San Francisco Bay Area or the Sacramento metropolitan area in Northern California; or the Los Angeles area, the Riverside-San Bernardino-Inland Empire, or the San Diego metropolitan area in Southern California. The Los Angeles Area, the Bay Area, and the San Diego metropolitan area are among several major metropolitan areas along the California coast. As part of the Ring of Fire, California is subject to tsunamis, floods, droughts, Santa Ana winds, wildfires, landslides on steep terrain, and has several volcanoes. It has many earthquakes due to several faults running through the state, the largest being the San Andreas Fault. About 37,000 earthquakes are recorded each year, but most are too small to be felt. Although most of the state has a Mediterranean climate, due to the state's large size the climate ranges from polar to subtropical. The cool California Current offshore often creates summer fog near the coast. Farther inland, there are colder winters and hotter summers. The maritime moderation results in the shoreline summertime temperatures of Los Angeles and San Francisco being the coolest of all major metropolitan areas of the United States and uniquely cool compared to areas on the same latitude in the interior and on the east coast of the North American continent. 
Even the San Diego shoreline bordering Mexico is cooler in summer than most areas in the contiguous United States. Just a few miles inland, summer temperature extremes are significantly higher, with downtown Los Angeles being several degrees warmer than at the coast. The same microclimate phenomenon is seen in the climate of the Bay Area, where areas sheltered from the sea experience significantly hotter summers than nearby areas closer to the ocean. Northern parts of the state have more rain than the south. California's mountain ranges also influence the climate: some of the rainiest parts of the state are west-facing mountain slopes. Northwestern California has a temperate climate, and the Central Valley has a Mediterranean climate but with greater temperature extremes than the coast. The high mountains, including the Sierra Nevada, have an alpine climate with snow in winter and mild to moderate heat in summer. California's mountains produce rain shadows on the eastern side, creating extensive deserts. The higher elevation deserts of eastern California have hot summers and cold winters, while the low deserts east of the Southern California mountains have hot summers and nearly frostless mild winters. Death Valley, a desert with large expanses below sea level, is considered the hottest location in the world; the highest temperature in the world, , was recorded there on July 10, 1913. The lowest temperature in California was on January 20, 1937 in Boca. The table below lists average temperatures for January and August in a selection of places throughout the state; some highly populated and some not. This includes the relatively cool summers of the Humboldt Bay region around Eureka, the extreme heat of Death Valley, and the mountain climate of Mammoth in the Sierra Nevadas. California is one of the richest and most diverse parts of the world, and includes some of the most endangered ecological communities. 
California is part of the Nearctic ecozone and spans a number of terrestrial ecoregions. California's large number of endemic species includes relict species, which have died out elsewhere, such as the Catalina ironwood ("Lyonothamnus floribundus"). Many other endemics originated through differentiation or adaptive radiation, whereby multiple species develop from a common ancestor to take advantage of diverse ecological conditions, such as the California lilac ("Ceanothus"). Many California endemics have become endangered, as urbanization, logging, overgrazing, and the introduction of exotic species have encroached on their habitat. California boasts several superlatives in its collection of flora: the largest trees, the tallest trees, and the oldest trees. California's native grasses are perennial plants. After European contact, these were generally replaced by invasive species of European annual grasses; and, in modern times, California's hills turn a characteristic golden-brown in summer. Because California has the greatest diversity of climate and terrain, the state has six life zones: the lower Sonoran (desert); the upper Sonoran (foothill regions and some coastal lands); the transition (coastal areas and moist northeastern counties); and the Canadian, Hudsonian, and Arctic zones, comprising the state's highest elevations. Plant life in the dry climate of the lower Sonoran zone contains a diversity of native cactus, mesquite, and paloverde. The Joshua tree is found in the Mojave Desert. Flowering plants include the dwarf desert poppy and a variety of asters. Fremont cottonwood and valley oak thrive in the Central Valley. The upper Sonoran zone includes the chaparral belt, characterized by forests of small shrubs, stunted trees, and herbaceous plants. 
"Nemophila", mint, "Phacelia", "Viola", and the California poppy ("Eschscholzia californica", the state flower) also flourish in this zone, along with the lupine, more species of which occur here than anywhere else in the world. The transition zone includes most of California's forests with the redwood ("Sequoia sempervirens") and the "big tree" or giant sequoia ("Sequoiadendron giganteum"), among the oldest living things on earth (some are said to have lived at least 4,000 years). Tanbark oak, California laurel, sugar pine, madrona, broad-leaved maple, and Douglas-fir also grow here. Forest floors are covered with swordfern, alumnroot, barrenwort, and trillium, and there are thickets of huckleberry, azalea, elder, and wild currant. Characteristic wild flowers include varieties of mariposa, tulip, and tiger and leopard lilies. The high elevations of the Canadian zone allow the Jeffrey pine, red fir, and lodgepole pine to thrive. Brushy areas are abundant with dwarf manzanita and ceanothus; the unique Sierra puffball is also found here. Right below the timberline, in the Hudsonian zone, the whitebark, foxtail, and silver pines grow. At about , begins the Arctic zone, a treeless region whose flora include a number of wildflowers, including Sierra primrose, yellow columbine, alpine buttercup, and alpine shooting star. Common plants that have been introduced to the state include the eucalyptus, acacia, pepper tree, geranium, and Scotch broom. The species that are federally classified as endangered are the Contra Costa wallflower, Antioch Dunes evening primrose, Solano grass, San Clemente Island larkspur, salt marsh bird's beak, McDonald's rock-cress, and Santa Barbara Island liveforever. , 85 plant species were listed as threatened or endangered. In the deserts of the lower Sonoran zone, the mammals include the jackrabbit, kangaroo rat, squirrel, and opossum. Common birds include the owl, roadrunner, cactus wren, and various species of hawk. 
The area's reptilian life includes the sidewinder viper, desert tortoise, and horned toad. The upper Sonoran zone boasts mammals such as the antelope, brown-footed woodrat, and ring-tailed cat. Birds unique to this zone are the California thrasher, bushtit, and California condor. In the transition zone, there are Columbian black-tailed deer, black bears, gray foxes, cougars, bobcats, and Roosevelt elk. Reptiles such as garter snakes and rattlesnakes inhabit the zone. In addition, amphibians such as the water puppy and redwood salamander are common too. Birds such as the kingfisher, chickadee, towhee, and hummingbird thrive here as well. The Canadian zone mammals include the mountain weasel, snowshoe hare, and several species of chipmunks. Conspicuous birds include the blue-fronted jay, Sierra chickadee, Sierra hermit thrush, water ouzel, and Townsend's solitaire. As one ascends into the Hudsonian zone, birds become scarcer. While the Sierra rosy finch is the only bird native to the high Arctic region, other bird species, such as the hummingbird and Clark's nutcracker, are also seen there. Principal mammals found in this region include the Sierra coney, white-tailed jackrabbit, and the bighorn sheep. The bighorn sheep was listed as endangered by the U.S. Fish and Wildlife Service. The fauna found throughout several zones are the mule deer, coyote, mountain lion, northern flicker, and several species of hawk and sparrow. Aquatic life in California thrives, from the state's mountain lakes and streams to the rocky Pacific coastline. Numerous trout species are found, among them rainbow, golden, and cutthroat. Migratory species of salmon are common as well. Deep-sea life forms include sea bass, yellowfin tuna, barracuda, and several types of whale. Native to the cliffs of northern California are seals, sea lions, and many types of shorebirds, including migratory species. 118 California animals were on the federal endangered list; 181 plants were listed as endangered or threatened. 
Endangered animals include the San Joaquin kit fox, Point Arena mountain beaver, Pacific pocket mouse, salt marsh harvest mouse, Morro Bay kangaroo rat (and five other species of kangaroo rat), Amargosa vole, California least tern, California condor, loggerhead shrike, San Clemente sage sparrow, San Francisco garter snake, five species of salamander, three species of chub, and two species of pupfish. Eleven butterfly species are also endangered, and two threatened butterfly species are on the federal list. Among threatened animals are the coastal California gnatcatcher, Paiute cutthroat trout, southern sea otter, and northern spotted owl. California has a total of of National Wildlife Refuges. 123 California animals were listed as either endangered or threatened on the federal list, and 178 species of California plants were listed either as endangered or threatened on that list. The most prominent river system within California is formed by the Sacramento River and San Joaquin River, which are fed mostly by snowmelt from the west slope of the Sierra Nevada, and respectively drain the north and south halves of the Central Valley. The two rivers join in the Sacramento–San Joaquin River Delta, flowing into the Pacific Ocean through San Francisco Bay. Many major tributaries feed into the Sacramento–San Joaquin system, including the Pit River, Feather River, and Tuolumne River. The Klamath and Trinity Rivers drain a large area in far northwestern California. The Eel River and Salinas River each drain portions of the California coast, north and south of San Francisco Bay, respectively. The Mojave River is the primary watercourse in the Mojave Desert, and the Santa Ana River drains much of the Transverse Ranges as it bisects Southern California. The Colorado River forms the state's southeast border with Arizona. 
Most of California's major rivers are dammed as part of two massive water projects: the Central Valley Project, providing water for agriculture in the Central Valley, and the California State Water Project, diverting water from northern to southern California. The state's coasts, rivers, and other bodies of water are regulated by the California Coastal Commission. The United States Census Bureau estimates that the population of California was 39,512,223 on July 1, 2019, a 6.06% increase since the 2010 United States Census. The population is projected to reach 40 million by 2020 and 50 million by 2060. Between 2000 and 2009, there was a natural increase of 3,090,016 (5,058,440 births minus 2,179,958 deaths). During this time period, international migration produced a net increase of 1,816,633 people while domestic migration produced a net decrease of 1,509,708, resulting in a net in-migration of 306,925 people. The state of California's own statistics show a population of 38,292,687 for January 1, 2009. However, according to the Manhattan Institute for Policy Research, since 1990 almost 3.4 million Californians have moved to other states, mostly to Texas, Nevada, and Arizona. Within the Western Hemisphere, California is the second most populous sub-national administrative entity (behind the state of São Paulo in Brazil) and the third most populous sub-national entity of any kind outside Asia (in that wider category it also ranks behind England in the United Kingdom, which has no administrative functions of its own). California's population is greater than that of all but 34 countries of the world. The Greater Los Angeles Area is the second-largest metropolitan area in the United States, after the New York metropolitan area, while Los Angeles, with nearly half the population of New York City, is the second-largest city in the United States. 
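The migration figures quoted above net out exactly; a quick arithmetic check, using only the numbers stated in the paragraph above:

```python
# Net migration to California, 2000-2009, from the figures quoted above.
international_net = 1_816_633    # net gain from international migration
domestic_net = -1_509_708        # net loss from domestic migration
net_migration = international_net + domestic_net
print(net_migration)  # 306925, matching the stated net in-migration
```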
Conversely, San Francisco, with nearly one-quarter the population density of Manhattan, is the most densely populated city in California and one of the most densely populated cities in the United States. Also, Los Angeles County has held the title of most populous United States county for decades, and it alone is more populous than 42 United States states. Including Los Angeles, four of the top 15 most populous cities in the U.S. are in California: Los Angeles (2nd), San Diego (8th), San Jose (10th), and San Francisco (13th). The center of population of California is located in the town of Buttonwillow, Kern County. The state has 482 incorporated cities and towns, of which 460 are cities and 22 are towns. Under California law, the terms "city" and "town" are explicitly interchangeable; the name of an incorporated municipality in the state can either be "City of (Name)" or "Town of (Name)". Sacramento became California's first incorporated city on February 27, 1850. San Jose, San Diego, and Benicia tied for California's second incorporated city, each receiving incorporation on March 27, 1850. Jurupa Valley became the state's most recent and 482nd incorporated municipality on July 1, 2011. The majority of these cities and towns are within one of five metropolitan areas: the Los Angeles Metropolitan Area, the San Francisco Bay Area, the Riverside-San Bernardino Area, the San Diego metropolitan area, or the Sacramento metropolitan area. Starting in the year 2010, for the first time since the California Gold Rush, California-born residents make up the majority of the state's population. Along with the rest of the United States, California's immigration pattern has also shifted over the course of the late 2000s to early 2010s. Immigration from Latin American countries has dropped significantly with most immigrants now coming from Asia. In total for 2011, there were 277,304 immigrants. Fifty-seven percent came from Asian countries versus 22% from Latin American countries. 
Net immigration from Mexico, previously the most common country of origin for new immigrants, has dropped to zero or below, since more Mexican nationals are departing for their home country than immigrating. As a result, it is projected that Hispanic citizens will constitute 49% of the population by 2060, instead of the previously projected 2050, due primarily to domestic births. The state's population of undocumented immigrants has been shrinking in recent years, due to increased enforcement and decreased job opportunities for lower-skilled workers. The number of migrants arrested attempting to cross the Mexican border in the Southwest decreased from a high of 1.1 million in 2005 to 367,000 in 2011. Despite these recent trends, illegal aliens constituted an estimated 7.3 percent of the state's population, the third highest percentage of any state in the country, totaling nearly 2.6 million. In particular, illegal immigrants tended to be concentrated in Los Angeles, Monterey, San Benito, Imperial, and Napa Counties; the latter four have significant agricultural industries that depend on manual labor. More than half of illegal immigrants originate from Mexico. The state of California and some California cities, including Los Angeles, Oakland and San Francisco, have adopted sanctuary policies. According to the United States Census Bureau in 2018 the population self-identifies as (alone or in combination): By ethnicity, in 2018 the population was 60.7% non-Hispanic (of any race) and 39.3% Hispanic or Latino (of any race). Hispanics are the largest single ethnic group in California. Non-Hispanic whites constituted 36.8% of the state's population. "Californios" are the Hispanic residents native to California, who are culturally or genetically descended from the Spanish-speaking community which has existed in California since 1542, of varying Mexican American/Chicano, Criollo Spaniard, and Mestizo origin. 
75.1% of California's population younger than age 1 were minorities, meaning they had at least one parent who was not non-Hispanic white (white Hispanics are counted as minorities). In terms of total numbers, California has the largest population of White Americans in the United States, an estimated 22,200,000 residents. The state has the fifth-largest population of African Americans in the United States, an estimated 2,250,000 residents. California's Asian American population is estimated at 4.4 million, constituting a third of the nation's total. California's Native American population of 285,000 is the most of any state. According to estimates from 2011, California has the largest minority population in the United States by numbers, making up 60% of the state population. Over the past 25 years, the population of non-Hispanic whites has declined, while Hispanic and Asian populations have grown. Between 1970 and 2011, non-Hispanic whites declined from 80% of the state's population to 40%, while Hispanics grew from 32% in 2000 to 38% in 2011. It is currently projected that Hispanics will rise to 49% of the population by 2060, primarily due to domestic births rather than immigration. With the decline of immigration from Latin America, Asian Americans now constitute the fastest growing racial/ethnic group in California; this growth is primarily driven by immigration from China, India, and the Philippines, respectively. English serves as California's de jure and de facto official language. In 2010, the Modern Language Association of America estimated that 57.02% (19,429,309) of California residents age 5 and older spoke only English at home, while 42.98% spoke another language at home. According to the 2007 American Community Survey, 73% of people who speak a language other than English at home are able to speak English "well" or "very well," while 9.8% of them could not speak English at all. Like most U.S. 
states (32 out of 50), California law enshrines English as its official language, and has done so since the passage of Proposition 63 by California voters. Various government agencies do, and are often required to, furnish documents in the various languages needed to reach their intended audiences. In total, 16 languages other than English were spoken as primary languages at home by more than 100,000 persons, more than any other state in the nation. New York State, in second place, had nine languages other than English spoken by more than 100,000 persons. The most common language spoken besides English was Spanish, spoken by 28.46% (9,696,638) of the population. With Asia contributing most of California's new immigrants, California had the highest concentration nationwide of Vietnamese and Chinese speakers, the second highest concentration of Korean, and the third highest concentration of Tagalog speakers. California has historically been one of the most linguistically diverse areas in the world, with more than 70 indigenous languages derived from 64 root languages in six language families. A survey conducted between 2007 and 2009 identified 23 different indigenous languages among California farmworkers. All of California's indigenous languages are endangered, although there are now efforts toward language revitalization. As a result of the state's increasing diversity and migration from other areas across the country and around the globe, linguists began noticing a noteworthy set of emerging characteristics of spoken American English in California since the late 20th century. This variety, known as California English, has a vowel shift and several other phonological processes that are different from varieties of American English used in other regions of the United States. The culture of California is a Western culture and most clearly has its modern roots in the culture of the United States, but also, historically, many Hispanic Californio and Mexican influences. 
Because California is a border and coastal state, its culture has been greatly influenced by several large immigrant populations, especially those from Latin America and Asia. California has long been a subject of interest in the public mind and has often been promoted by its boosters as a kind of paradise. In the early 20th century, fueled by the efforts of state and local boosters, many Americans saw the Golden State as an ideal resort destination, sunny and dry all year round with easy access to the ocean and mountains. In the 1960s, popular music groups such as The Beach Boys promoted the image of Californians as laid-back, tanned beach-goers. The California Gold Rush of the 1850s is still seen as a symbol of California's economic style, which tends to generate technology, social, entertainment, and economic fads and booms and related busts. Hollywood and the rest of the Los Angeles area form a major global center for entertainment; the U.S. film industry's "Big Five" major film studios (Columbia, Disney, Paramount, Universal, and Warner Bros.) are based in or around the area. The four major American television broadcast networks (ABC, CBS, Fox and NBC) all have production facilities and offices in the state. All four, plus the two major Spanish-language networks (Telemundo and Univision) each have at least two owned-and-operated TV stations in California, one in Los Angeles and one in the San Francisco Bay Area. The San Francisco Bay Area is home to several prominent internet media and social media companies, including three of the "Big Five" technology companies (Apple, Facebook, and Google) as well as other services such as Netflix, Pandora Radio, Twitter, Yahoo!, and YouTube. One of the oldest radio stations in the United States still in existence, KCBS (AM) in the Bay Area, was founded in 1909. Universal Music Group, one of the "Big Four" record labels, is based in Santa Monica. 
California is also the birthplace of several international music genres, including the Bakersfield sound, Bay Area thrash metal, g-funk, nu metal, stoner rock, surf music, West Coast hip hop, and West Coast jazz. The largest religious denominations by number of adherents as a percentage of California's population in 2014 were the Catholic Church with 28 percent, Evangelical Protestants with 20 percent, and Mainline Protestants with 10 percent. Together, all kinds of Protestants accounted for 32 percent. Those unaffiliated with any religion represented 27 percent of the population. The breakdown of other religions is 1% Muslim, 2% Hindu and 2% Buddhist. This is a change from 2008, when the population identified their religion with the Catholic Church with 31 percent; Evangelical Protestants with 18 percent; and Mainline Protestants with 14 percent. In 2008, those unaffiliated with any religion represented 21 percent of the population. The breakdown of other religions in 2008 was 0.5% Muslim, 1% Hindu and 2% Buddhist. The "American Jewish Year Book" placed the total Jewish population of California at about 1,194,190 in 2006. According to the Association of Religion Data Archives (ARDA) the largest denominations by adherents in 2010 were the Roman Catholic Church with 10,233,334; The Church of Jesus Christ of Latter-day Saints with 763,818; and the Southern Baptist Convention with 489,953. The first priests to come to California were Roman Catholic missionaries from Spain. Roman Catholics founded 21 missions along the California coast, as well as the cities of Los Angeles and San Francisco. California continues to have a large Roman Catholic population due to the large numbers of Mexicans and Central Americans living within its borders. California has twelve dioceses and two archdioceses, the Archdiocese of Los Angeles and the Archdiocese of San Francisco, the former being the largest archdiocese in the United States. 
A Pew Research Center survey revealed that California is somewhat less religious than the rest of the states: 62 percent of Californians say they are "absolutely certain" of their belief in God, while in the nation 71 percent say so. The survey also revealed 48 percent of Californians say religion is "very important", compared to 56 percent nationally. California has nineteen major professional sports league franchises, far more than any other state. The San Francisco Bay Area has six major league teams spread in its three major cities: San Francisco, San Jose, and Oakland, while the Greater Los Angeles Area is home to ten major league franchises. San Diego and Sacramento each have one major league team. The NFL Super Bowl has been hosted in California 11 times at four different stadiums: Los Angeles Memorial Coliseum, the Rose Bowl, Stanford Stadium, and San Diego's Qualcomm Stadium. A twelfth, Super Bowl 50, was held at Levi's Stadium in Santa Clara on February 7, 2016. California has long had many respected collegiate sports programs. California is home to the oldest college bowl game, the annual Rose Bowl, among others. California is the only U.S. state to have hosted both the Summer and Winter Olympics. The 1932 and 1984 summer games were held in Los Angeles. Squaw Valley Ski Resort in the Lake Tahoe region hosted the 1960 Winter Olympics. Los Angeles will host the 2028 Summer Olympics, marking the fourth time that California will have hosted the Olympic Games. Multiple games during the 1994 FIFA World Cup took place in California, with the Rose Bowl hosting eight matches (including the final), while Stanford Stadium hosted six matches. Public secondary education consists of high schools that teach elective courses in trades, languages, and liberal arts with tracks for gifted, college-bound and industrial arts students. 
California's public educational system is supported by a unique constitutional amendment that requires a minimum annual funding level for grades K–12 and community colleges that grows with the economy and student enrollment figures. In 2016, California's K–12 public school per-pupil spending was ranked 22nd in the nation ($11,500 per student vs. $11,800 for the U.S. average). For 2012, California's K–12 public schools ranked 48th in the number of employees per student, at 0.102 (the U.S. average was 0.137), while paying the 7th most per employee, $49,000 (the U.S. average was $39,000). A 2007 study concluded that California's public school system was "broken" in that it suffered from over-regulation. California's public postsecondary education offers three separate systems: the University of California, the California State University, and the California Community Colleges. California is also home to such notable private universities as Stanford University, the University of Southern California, the California Institute of Technology, and the Claremont Colleges. California has hundreds of other private colleges and universities, including many religious and special-purpose institutions. California has twinning arrangements with the region of Catalonia in Spain and with the Province of Alberta in Canada. California's economy ranks among the largest in the world. , the gross state product (GSP) was $3.2 trillion ($80,600 per capita), the largest in the United States. California is responsible for one-seventh of the United States' approximately $22 trillion gross domestic product (GDP). , California's nominal GDP is larger than that of all but four countries (the United States, China, Japan, and Germany). In terms of purchasing power parity, it is larger than all but eight countries (the United States, China, India, Japan, Germany, Russia, Brazil, and Indonesia). California's economy is larger than those of Africa and Australia and is almost as large as South America's. 
The five largest sectors of employment in California are trade, transportation, and utilities; government; professional and business services; education and health services; and leisure and hospitality. In output, the five largest sectors are financial services, followed by trade, transportation, and utilities; education and health services; government; and manufacturing. , California has an unemployment rate of 5.5%. California's economy is dependent on trade, and internationally related commerce accounts for about one-quarter of the state's economy. In 2008, California exported $144 billion worth of goods, up from $134 billion in 2007 and $127 billion in 2006. Computers and electronic products are California's top export, accounting for 42 percent of all the state's exports in 2008. Agriculture is an important sector in California's economy. Farming-related sales more than quadrupled over the past three decades, from $7.3 billion in 1974 to nearly $31 billion in 2004. This increase has occurred despite a 15 percent decline in acreage devoted to farming during the period and chronic instability in the water supply. Factors contributing to the growth in sales per acre include more intensive use of active farmlands and technological improvements in crop production. In 2008, California's 81,500 farms and ranches generated $36.2 billion in products revenue. In 2011, that number grew to $43.5 billion. The agriculture sector accounts for two percent of the state's GDP and employs around three percent of its total workforce. According to the USDA in 2011, the three largest California agricultural products by value were milk and cream, shelled almonds, and grapes. Per capita GDP in 2007 was $38,956, ranking eleventh in the nation. Per capita income varies widely by geographic region and profession. The Central Valley is the most impoverished, with migrant farm workers making less than minimum wage. 
According to a 2005 report by the Congressional Research Service, the San Joaquin Valley was characterized as one of the most economically depressed regions in the United States, on par with the region of Appalachia. Using the supplemental poverty measure, California has a poverty rate of 23.5%, the highest of any state in the country. However, using the official measure, the poverty rate was only 13.3% as of 2017. Many coastal cities include some of the wealthiest per-capita areas in the United States. The high-technology sectors in Northern California, specifically Silicon Valley, in Santa Clara and San Mateo counties, have emerged from the economic downturn caused by the dot-com bust. In 2019, there were 1,042,027 millionaire households in the state, more than in any other state in the nation. In 2010, California residents were ranked first among the states, with the best average credit score of 754. State spending increased from $56 billion in 1998 to $127 billion in 2011. California, with 12% of the United States population, has one-third of the nation's welfare recipients. California has the third highest per capita spending on welfare among the states, as well as the highest spending on welfare at $6.67 billion. In January 2011, California's total debt was at least $265 billion. On June 27, 2013, Governor Jerry Brown signed a balanced budget (no deficit) for the state, its first in decades; however, the state's debt remains at $132 billion. With the passage of Proposition 30 in 2012 and Proposition 55 in 2016, California now levies a 13.3% maximum marginal income tax rate with ten tax brackets, ranging from 1% at the bottom tax bracket of $0 annual individual income to 13.3% for annual individual income over $1,000,000 (though the top brackets are only temporary until Proposition 55 expires at the end of 2030). 
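The mechanics of a marginal rate schedule like the one described above can be sketched in a few lines of Python. Only the two endpoints come from the text (a 1% bottom rate starting at $0 and a 13.3% top rate above $1,000,000); the intermediate brackets below are hypothetical placeholders, not California's actual schedule.

```python
# Sketch of how a marginal income tax schedule is applied: each slice of
# income is taxed at its own bracket's rate. Intermediate brackets are
# hypothetical; only the 1% ($0) and 13.3% (>$1,000,000) rates are from
# the text.
BRACKETS = [
    (0, 0.01),           # bottom bracket: 1% starting at $0 (from the text)
    (50_000, 0.06),      # hypothetical intermediate bracket
    (300_000, 0.103),    # hypothetical intermediate bracket
    (1_000_000, 0.133),  # top bracket: 13.3% above $1,000,000 (from the text)
]

def marginal_tax(income: float) -> float:
    """Return total tax, applying each bracket's rate to its slice only."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
    return tax
```

The key point the sketch illustrates is that a 13.3% "maximum marginal rate" applies only to income above the top threshold, not to the whole amount, so the effective rate is always below the top marginal rate.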
While Proposition 30 also enacted a minimum state sales tax of 7.5%, this sales tax increase was not extended by Proposition 55 and reverted to a previous minimum state sales tax rate of 7.25% in 2017. Local governments can and do levy additional sales taxes in addition to this minimum rate. All real property is taxable annually; the ad valorem tax is based on the property's fair market value at the time of purchase or the value of new construction. Property tax increases are capped at 2% annually or the rate of inflation (whichever is lower), per Proposition 13. Because it is the most populous state in the United States, California is one of the country's largest users of energy. However, because of its high energy rates, conservation mandates, mild weather in the largest population centers, and strong environmental movement, its "per capita" energy use is one of the smallest of any state in the United States. Due to the high electricity demand, California imports more electricity than any other state, primarily hydroelectric power from states in the Pacific Northwest (via Path 15 and Path 66) and coal- and natural gas-fired production from the desert Southwest via Path 46. As a result of the state's strong environmental movement, California has some of the most aggressive renewable energy goals in the United States, with a target for California to obtain a third of its electricity from renewables by 2020. Currently, several solar power plants such as the Solar Energy Generating Systems facility are located in the Mojave Desert. California's wind farms include Altamont Pass, San Gorgonio Pass, and Tehachapi Pass. The Tehachapi area is also where the Tehachapi Energy Storage Project is located. Several dams across the state provide hydro-electric power. It has been estimated that it would be possible to convert the total supply to 100% renewable energy, including heating, cooling, and mobility, by 2050. 
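The Proposition 13 rule described above (the taxable base resets to market value at purchase, then grows by at most 2% per year, or the inflation rate if that is lower) can be sketched as a small calculation. The purchase price and inflation figures below are illustrative, not real data.

```python
# Sketch of Proposition 13's assessed-value cap. Each year the assessed
# value grows by min(2%, inflation). The inputs here are hypothetical
# examples, not actual California assessments.
def assessed_values(purchase_price: float, annual_inflation: list[float]) -> list[float]:
    """Return the assessed value at purchase and after each subsequent year."""
    values = [purchase_price]
    for inflation in annual_inflation:
        cap = min(0.02, inflation)        # 2% ceiling, or inflation if lower
        values.append(values[-1] * (1 + cap))
    return values

# For a hypothetical $500,000 purchase over three years of 3%, 1.5%, and
# 2.5% inflation, the allowed growth rates are 2%, 1.5%, and 2%.
history = assessed_values(500_000, [0.03, 0.015, 0.025])
```

One consequence the sketch makes visible: in years when market prices rise faster than 2%, the assessed value falls further and further behind the market value until the property changes hands and resets.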
The state's crude oil and natural gas deposits are located in the Central Valley and along the coast, including the large Midway-Sunset Oil Field. Natural gas-fired power plants typically account for more than one-half of state electricity generation. California is also home to two major nuclear power plants: Diablo Canyon and San Onofre, the latter having been shut down in 2013. Since the late 1970s, voters have banned the approval of new nuclear power plants because of concerns over radioactive waste disposal. In addition, several cities such as Oakland, Berkeley, and Davis have declared themselves nuclear-free zones. California's vast terrain is connected by an extensive system of controlled-access highways ('freeways'), limited-access roads ('expressways'), and highways. California is known for its car culture, giving California's cities a reputation for severe traffic congestion. Construction and maintenance of state roads and statewide transportation planning are primarily the responsibility of the California Department of Transportation, nicknamed "Caltrans". The rapidly growing population of the state is straining all of its transportation networks, and California has some of the worst roads in the United States. The Reason Foundation's 19th Annual Report on the Performance of State Highway Systems ranked California's highways the third-worst of any state, with Alaska second, and Rhode Island first. The state has been a pioneer in road construction. One of the state's more visible landmarks, the Golden Gate Bridge, was the longest suspension bridge main span in the world between 1937 (when it opened) and 1964. With its orange paint and panoramic views of the bay, this highway bridge is a popular tourist attraction and also accommodates pedestrians and bicyclists. The San Francisco–Oakland Bay Bridge (often abbreviated the "Bay Bridge"), completed in 1936, transports about 280,000 vehicles per day on two decks. 
Its two sections meet at Yerba Buena Island through the world's largest diameter transportation bore tunnel, at wide by high. The Arroyo Seco Parkway, connecting Los Angeles and Pasadena, opened in 1940 as the first freeway in the Western United States. It was later extended south to the Four Level Interchange in downtown Los Angeles, regarded as the first stack interchange ever built. Los Angeles International Airport (LAX), the 4th busiest airport in the world in 2018, and San Francisco International Airport (SFO), the 25th busiest airport in the world in 2018, are major hubs for trans-Pacific and transcontinental traffic. There are about a dozen important commercial airports and many more general aviation airports throughout the state. California also has several important seaports. The giant seaport complex formed by the Port of Los Angeles and the Port of Long Beach in Southern California is the largest in the country and responsible for handling about a fourth of all container cargo traffic in the United States. The Port of Oakland, fourth largest in the nation, also handles trade entering from the Pacific Rim to the rest of the country. The Port of Stockton is the farthest inland port on the west coast of the United States. The California Highway Patrol is the largest statewide police agency in the United States in employment with more than 10,000 employees. They are responsible for providing any police-sanctioned service to anyone on California's state-maintained highways and on state property. The California Department of Motor Vehicles is by far the largest in North America. By the end of 2009, the California DMV had 26,555,006 driver's licenses and ID cards on file. In 2010, there were 1.17 million new vehicle registrations in force. Inter-city rail travel is provided by Amtrak California; the three routes, the "Capitol Corridor", "Pacific Surfliner", and "San Joaquin", are funded by Caltrans. 
These services are the busiest intercity rail lines in the United States outside the Northeast Corridor, and ridership is continuing to set records. The routes have become increasingly popular alternatives to flying, especially on the LAX–SFO route. Integrated subway and light rail networks are found in Los Angeles (Metro Rail) and San Francisco (MUNI Metro). Light rail systems are also found in San Jose (VTA), San Diego (San Diego Trolley), Sacramento (RT Light Rail), and Northern San Diego County (Sprinter). Furthermore, commuter rail networks serve the San Francisco Bay Area (ACE, BART, Caltrain, SMART), Greater Los Angeles (Metrolink), and San Diego County (Coaster). The California High-Speed Rail Authority was created in 1996 by the state to implement an extensive rail system. Construction was approved by the voters during the November 2008 general election, with the first phase of construction estimated to cost $64.2 billion. Nearly all counties operate bus lines, and many cities operate their own city bus lines as well. Intercity bus travel is provided by Greyhound, Megabus, and Amtrak Thruway Motorcoach. California's interconnected water system is the world's largest, managing over of water per year, centered on six main systems of aqueducts and infrastructure projects. Water use and conservation in California is a politically divisive issue, as the state experiences periodic droughts and has to balance the demands of its large agricultural and urban sectors, especially in the arid southern portion of the state. The state's widespread redistribution of water also invites the frequent scorn of environmentalists. The California Water Wars, a conflict between Los Angeles and the Owens Valley over water rights, is one of the most well-known examples of the struggle to secure adequate water supplies. 
Former California Governor Arnold Schwarzenegger said: "We've been in crisis for quite some time because we're now 38 million people and not anymore 18 million people like we were in the late 60s. So it developed into a battle between environmentalists and farmers and between the south and the north and between rural and urban. And everyone has been fighting for the last four decades about water." The capital of California is Sacramento. The state is organized into three branches of government—the executive branch consisting of the Governor and the other independently elected constitutional officers; the legislative branch consisting of the Assembly and Senate; and the judicial branch consisting of the Supreme Court of California and lower courts. The state also allows ballot propositions: direct participation of the electorate by initiative, referendum, recall, and ratification. Before the passage of California Proposition 14 (2010), California allowed each political party to choose whether to have a closed primary or a primary where only party members and independents vote. After June 8, 2010, when Proposition 14 was approved, excepting only the United States President and county central committee offices, all candidates in the primary elections are listed on the ballot with their preferred party affiliation, but they are not the official nominee of that party. At the primary election, the two candidates receiving the most votes advance to the general election regardless of party affiliation. If, at a special primary election, one candidate receives more than 50% of all the votes cast, they are elected to fill the vacancy and no special general election is held. The California executive branch consists of the Governor of California and seven other elected constitutional officers: Lieutenant Governor, Attorney General, Secretary of State, State Controller, State Treasurer, Insurance Commissioner, and State Superintendent of Public Instruction. 
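The top-two primary rules described above reduce to a simple selection procedure, which can be sketched in Python. The candidate names and vote counts below are hypothetical illustrations.

```python
# Sketch of California's top-two primary as described above: all candidates
# share one ballot, and the two highest vote-getters advance regardless of
# party. In a special primary, a candidate with more than 50% of all votes
# cast wins the seat outright. Candidates and vote counts are hypothetical.
def primary_outcome(votes: dict[str, int], special: bool = False) -> list[str]:
    total = sum(votes.values())
    ranked = sorted(votes, key=votes.get, reverse=True)
    if special and votes[ranked[0]] * 2 > total:
        return [ranked[0]]    # majority winner fills the vacancy outright
    return ranked[:2]         # top two advance to the general election

# primary_outcome({"A": 48, "B": 30, "C": 22}) -> ["A", "B"]
# primary_outcome({"A": 51, "B": 30, "C": 19}, special=True) -> ["A"]
```

Note that party affiliation plays no role in the selection: in a lopsided district, both general-election candidates can come from the same party, which is the system's most debated property.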
They serve four-year terms and may be re-elected only once. The California State Legislature consists of a 40-member Senate and 80-member Assembly. Senators serve four-year terms and Assembly members two. Members of the Assembly are subject to term limits of three terms, and members of the Senate are subject to term limits of two terms. California's legal system is explicitly based upon English common law (as is the case with all other states except Louisiana) but carries a few features from Spanish civil law, such as community property. California's prison population grew from 25,000 in 1980 to over 170,000 in 2007. Capital punishment is a legal form of punishment and the state has the largest "Death Row" population in the country (though Oklahoma and Texas are far more active in carrying out executions). California's judiciary system is the largest in the United States with a total of 1,600 judges (the federal system has only about 840). At the apex is the seven-member Supreme Court of California, while the California Courts of Appeal serve as the primary appellate courts and the California Superior Courts serve as the primary trial courts. Justices of the Supreme Court and Courts of Appeal are appointed by the Governor, but are subject to retention by the electorate every 12 years. The administration of the state's court system is controlled by the Judicial Council, composed of the Chief Justice of the California Supreme Court, 14 judicial officers, four representatives from the State Bar of California, and one member from each house of the state legislature. California is divided into 58 counties. Per Article 11, Section 1, of the Constitution of California, they are the legal subdivisions of the state. 
The county government provides countywide services such as law enforcement, jails, elections and voter registration, vital records, property assessment and records, tax collection, public health, health care, social services, libraries, flood control, fire protection, animal control, agricultural regulations, building inspections, ambulance services, and education departments in charge of maintaining statewide standards. In addition, the county serves as the local government for all unincorporated areas. Each county is governed by an elected board of supervisors. Incorporated cities and towns in California are either charter or general-law municipalities. General-law municipalities owe their existence to state law and are consequently governed by it; charter municipalities are governed by their own city or town charters. Municipalities incorporated in the 19th century tend to be charter municipalities. All ten of the state's most populous cities are charter cities. Most small cities have a council–manager form of government, where the elected city council appoints a city manager to supervise the operations of the city. Some larger cities have a directly-elected mayor who oversees the city government. In many council-manager cities, the city council selects one of its members as a mayor, sometimes rotating through the council membership—but this type of mayoral position is primarily ceremonial. The Government of San Francisco is the only consolidated city-county in California, where both the city and county governments have been merged into one unified jurisdiction. About 1,102 school districts, independent of cities and counties, handle California's public education. California school districts may be organized as elementary districts, high school districts, unified school districts combining elementary and high school grades, or community college districts. There are about 3,400 special districts in California. 
A special district, defined by California Government Code § 16271(d) as "any agency of the state for the local performance of governmental or proprietary functions within limited boundaries", provides a limited range of services within a defined geographic area. The geographic area of a special district can spread across multiple cities or counties, or could consist of only a portion of one. Most of California's special districts are "single-purpose districts", and provide one service. The state of California sends 53 members to the House of Representatives, the nation's largest congressional state delegation. Consequently, California also has the largest number of electoral votes in national presidential elections, with 55. The current Speaker of the House of Representatives is the representative of California's 12th district, Nancy Pelosi; Kevin McCarthy, representing the state's 23rd district, is the House Minority Leader. California's U.S. Senators are Dianne Feinstein, a native and former mayor of San Francisco, and Kamala Harris, a native, former District Attorney from San Francisco and former Attorney General of California. In the 1992 U.S. Senate election, California became the first state to elect a Senate delegation entirely composed of women, due to the victories of Feinstein and Barbara Boxer. In California, the U.S. Department of Defense had a total of 117,806 active duty servicemembers, of which 88,370 were Sailors or Marines, 18,339 were Airmen, and 11,097 were Soldiers, with 61,365 Department of Defense civilian employees. Additionally, there were a total of 57,792 Reservists and Guardsmen in California. In 2010, Los Angeles County was the largest origin of military recruits in the United States by county, with 1,437 individuals enlisting in the military. However, Californians were relatively under-represented in the military in proportion to the state's population. 
In 2000, California had 2,569,340 veterans of United States military service: 504,010 served in World War II, 301,034 in the Korean War, 754,682 during the Vietnam War, and 278,003 during 1990–2000 (including the Persian Gulf War). , there were 1,942,775 veterans living in California, of which 1,457,875 served during a period of armed conflict, and just over four thousand served before World War II (the largest population of this group of any state). California's military forces consist of the Army and Air National Guard, the naval and state military reserve (militia), and the California Cadet Corps. On August 5, 1950, a nuclear-capable United States Air Force Boeing B-29 Superfortress bomber carrying a nuclear bomb crashed shortly after takeoff from Fairfield-Suisun Air Force Base. Brigadier General Robert F. Travis, command pilot of the bomber, was among the dead. California has an idiosyncratic political culture compared to the rest of the country, and is sometimes regarded as a trendsetter. In socio-cultural mores and national politics, Californians are perceived as more liberal than other Americans, especially those who live in the inland states. As of the 2016 presidential election, California was the second most Democratic state behind Hawaii. According to the Cook Political Report, California contains five of the 15 most Democratic congressional districts in the United States. Among the political idiosyncrasies and trendsetting, California was the second state to recall its state governor, the second state to legalize abortion, and the only state to ban marriage for gay couples twice by vote (including Proposition 8 in 2008). Voters also passed Proposition 71 in 2004 to fund stem cell research, and Proposition 14 in 2010 to completely change the state's primary election process. California has also experienced disputes over water rights, and a tax revolt culminating in the passage of Proposition 13 in 1978, which limited state property taxes. 
The state's trend towards the Democratic Party and away from the Republican Party can be seen in state elections. From 1899 to 1939, California had Republican governors. Since 1990, California has generally elected Democratic candidates to federal, state, and local offices, including current Governor Gavin Newsom; however, the state has also elected Republican governors, many of whom, such as Arnold Schwarzenegger, are considered moderate Republicans, more centrist than the national party. Several political movements have advocated for Californian independence. The California National Party and the California Freedom Coalition both advocate for Californian independence along the lines of progressivism and civic nationalism. The Yes California movement attempted to organize an independence referendum via ballot initiative for 2019, which was then postponed. The Democrats also now hold a supermajority in both houses of the state legislature. There are 60 Democrats and 20 Republicans in the Assembly, and 29 Democrats and 11 Republicans in the Senate. The trend towards the Democratic Party is most obvious in presidential elections. From 1952 through 1988, California was a Republican-leaning state, with the party carrying the state's electoral votes in nine of ten elections, with 1964 as the exception. Southern California Republicans Richard Nixon and Ronald Reagan were both elected twice as the 37th and 40th U.S. Presidents, respectively. However, Democrats have won all of California's electoral votes for the last seven elections, starting in 1992. In the United States House, the Democrats held a 34–19 edge in the CA delegation of the 110th United States Congress in 2007. As a result of gerrymandering, the districts in California were usually dominated by one or the other party, and few districts were considered competitive. 
In 2008, Californians passed Proposition 11, creating a 14-member independent citizens commission to redraw state legislative districts; Proposition 20, passed in 2010, extended the commission's authority to congressional districts. After the 2012 elections, when the new system took effect, Democrats gained four seats and held a 38–15 majority in the delegation. Following the 2018 midterm House elections, Democrats won 46 out of 53 congressional House seats in California, leaving Republicans with seven. In general, Democratic strength is centered in the populous coastal regions of the Los Angeles metropolitan area and the San Francisco Bay Area. Republican strength is still greatest in eastern parts of the state. Orange County had remained largely Republican until the 2016 and 2018 elections, in which a majority of the county's votes were cast for Democratic candidates. One study ranked Berkeley, Oakland, Inglewood and San Francisco in the top 20 most liberal American cities; and Bakersfield, Orange, Escondido, Garden Grove, and Simi Valley in the top 20 most conservative cities. In October 2012, out of the 23,802,577 people eligible to vote, 18,245,970 people were registered to vote. Of the people registered, the three largest registered groups were Democrats (7,966,422), Republicans (5,356,608), and Decline to State (3,820,545). Los Angeles County had the largest number of registered Democrats (2,430,612) and Republicans (1,037,031) of any county in the state.
https://en.wikipedia.org/wiki?curid=5407
Columbia River The Columbia River (Upper Chinook: ' or '; Sahaptin: "Nch’i-Wàna" or "Nchi wana"; Sinixt dialect" ") is the largest river in the Pacific Northwest region of North America. The river rises in the Rocky Mountains of British Columbia, Canada. It flows northwest and then south into the US state of Washington, then turns west to form most of the border between Washington and the state of Oregon before emptying into the Pacific Ocean. The river is long, and its largest tributary is the Snake River. Its drainage basin is roughly the size of France and extends into seven US states and a Canadian province. The fourth-largest river in the United States by volume, the Columbia has the greatest flow of any North American river entering the Pacific. The Columbia and its tributaries have been central to the region's culture and economy for thousands of years. They have been used for transportation since ancient times, linking the region's many cultural groups. The river system hosts many species of anadromous fish, which migrate between freshwater habitats and the saline waters of the Pacific Ocean. These fish—especially the salmon species—provided the core subsistence for native peoples. In the late 18th century, a private American ship became the first non-indigenous vessel to enter the river; it was followed by a British explorer, who navigated past the Oregon Coast Range into the Willamette Valley. In the following decades, fur trading companies used the Columbia as a key transportation route. Overland explorers entered the Willamette Valley through the scenic but treacherous Columbia River Gorge, and pioneers began to settle the valley in increasing numbers. Steamships along the river linked communities and facilitated trade; the arrival of railroads in the late 19th century, many running along the river, supplemented these links. Since the late 19th century, public and private sectors have heavily developed the river. 
To aid ship and barge navigation, locks have been built along the lower Columbia and its tributaries, and dredging has opened, maintained, and enlarged shipping channels. Since the early 20th century, dams have been built across the river for power generation, navigation, irrigation, and flood control. The 14 hydroelectric dams on the Columbia's main stem and many more on its tributaries produce more than 44 percent of total US hydroelectric generation. Production of nuclear power has taken place at two sites along the river. Plutonium for nuclear weapons was produced for decades at the Hanford Site, which is now the most contaminated nuclear site in the US. These developments have greatly altered river environments in the watershed, mainly through industrial pollution and barriers to fish migration. The Columbia begins its journey in the southern Rocky Mountain Trench in British Columbia (BC). Columbia Lake – above sea level – and the adjoining Columbia Wetlands form the river's headwaters. The trench is a broad, deep, and long glacial valley between the Canadian Rockies and the Columbia Mountains in BC. For its first , the Columbia flows northwest along the trench through Windermere Lake and the town of Invermere, a region known in British Columbia as the Columbia Valley, then northwest to Golden and into Kinbasket Lake. Rounding the northern end of the Selkirk Mountains, the river turns sharply south through a region known as the Big Bend Country, passing through Revelstoke Lake and the Arrow Lakes. Revelstoke, the Big Bend, and the Columbia Valley combined are referred to in BC parlance as the Columbia Country. Below the Arrow Lakes, the Columbia passes the cities of Castlegar, located at the Columbia's confluence with the Kootenay River, and Trail, two major population centers of the West Kootenay region. The Pend Oreille River joins the Columbia about north of the US–Canada border. 
The Columbia enters eastern Washington flowing south and turning to the west at the Spokane River confluence. It marks the southern and eastern borders of the Colville Indian Reservation and the western border of the Spokane Indian Reservation. The river turns south after the Okanogan River confluence, then southeasterly near the confluence with the Wenatchee River in central Washington. This C‑shaped segment of the river is also known as the "Big Bend". During the Missoula Floods 10,000 to 15,000 years ago, much of the floodwater took a more direct route south, forming the ancient river bed known as the Grand Coulee. After the floods, the river found its present course, and the Grand Coulee was left dry. The construction of the Grand Coulee Dam in the mid-20th century impounded the river, forming Lake Roosevelt, from which water was pumped into the dry coulee, forming the reservoir of Banks Lake. The river flows past The Gorge Amphitheatre, a prominent concert venue in the Northwest, then through Priest Rapids Dam, and then through the Hanford Nuclear Reservation. Entirely within the reservation is Hanford Reach, the only US stretch of the river that is completely free-flowing, unimpeded by dams and not a tidal estuary. The Snake River and Yakima River join the Columbia in the Tri‑Cities population center. The Columbia makes a sharp bend to the west at the Washington–Oregon border. The river defines that border for the final of its journey. The Deschutes River joins the Columbia near The Dalles. Between The Dalles and Portland, the river cuts through the Cascade Range, forming the dramatic Columbia River Gorge. Apart from the Klamath and the Pit River, no other river completely breaches the Cascades; the other rivers that flow through the range originate in or very near the mountains. The headwaters and upper course of the Pit River are on the Modoc Plateau; downstream the Pit cuts a canyon through the southern reaches of the Cascades. 
In contrast, the Columbia cuts through the range nearly a thousand miles from its source in the Rocky Mountains. The gorge is known for its strong and steady winds, scenic beauty, and its role as an important transportation link. The river continues west, bending sharply to the north-northwest near Portland and Vancouver, Washington, at the Willamette River confluence. Here the river slows considerably, dropping sediment that might otherwise form a river delta. Near Longview, Washington and the Cowlitz River confluence, the river turns west again. The Columbia empties into the Pacific Ocean just west of Astoria, Oregon, over the Columbia Bar, a shifting sandbar that makes the river's mouth one of the most hazardous stretches of water to navigate in the world. Because of the danger and the many shipwrecks near the mouth, it acquired a reputation as the "Graveyard of Ships". The Columbia drains an area of about . Its drainage basin covers nearly all of Idaho, large portions of British Columbia, Oregon, and Washington, essentially all of Montana west of the Continental Divide, and small portions of Wyoming, Utah, and Nevada; the total area is similar to the size of France. Roughly of the river's length and 85 percent of its drainage basin are in the US. The Columbia is the twelfth-longest river and has the sixth-largest drainage basin in the United States. In Canada, where the Columbia flows for and drains , the river ranks 23rd in length, and the Canadian part of its basin ranks 13th in size among Canadian basins. The Columbia shares its name with nearby places, such as British Columbia, as well as with landforms and bodies of water. With an average flow at the mouth of about , the Columbia is the largest river by discharge flowing into the Pacific from the Americas and is the fourth-largest by volume in the US. The average flow where the river crosses the international border between Canada and the United States is from a drainage basin of . 
This amounts to about 15 percent of the entire Columbia watershed. The Columbia's highest recorded flow, measured at The Dalles, was in June 1894, before the river was dammed. The lowest flow recorded at The Dalles was on April 16, 1968, and was caused by the initial closure of the John Day Dam, upstream. The Dalles is about from the mouth; the river at this point drains about or about 91 percent of the total watershed. Flow rates on the Columbia are affected by many large upstream reservoirs, many diversions for irrigation, and, on the lower stretches, reverse flow from the tides of the Pacific Ocean. The National Ocean Service observes water levels at six tide gauges and issues tide forecasts for twenty-two additional locations along the river between the entrance at the North Jetty and the base of Bonneville Dam, the head of tide. When the rifting of Pangaea, due to the process of plate tectonics, pushed North America away from Europe and Africa and into the Panthalassic Ocean (ancestor to the modern Pacific Ocean), the Pacific Northwest was not part of the continent. As the North American continent moved westward, the Farallon Plate subducted under its western margin. As the plate subducted, it carried along island arcs which were accreted to the North American continent, resulting in the creation of the Pacific Northwest between 150 and 90 million years ago. The general outline of the Columbia Basin was not complete until between 60 and 40 million years ago, but it lay under a large inland sea later subject to uplift. Between 50 and 20 million years ago, from the Eocene through the Miocene epochs, tremendous volcanic eruptions frequently modified much of the landscape traversed by the Columbia. The lower reaches of the ancestral river passed through a valley near where Mount Hood later arose. 
Carrying sediments from erosion and erupting volcanoes, it built a thick delta that underlies the foothills on the east side of the Coast Range near Vernonia in northwestern Oregon. Between 17 million and 6 million years ago, huge outpourings of flood basalt lava covered the Columbia River Plateau and forced the lower Columbia into its present course. The modern Cascade Range began to uplift 5 to 4 million years ago. Cutting through the uplifting mountains, the Columbia River significantly deepened the Columbia River Gorge. The river and its drainage basin experienced some of the world's greatest known catastrophic floods toward the end of the last ice age. The periodic rupturing of ice dams at Glacial Lake Missoula resulted in the Missoula Floods, with discharges exceeding the combined flow of all the other rivers in the world, dozens of times over thousands of years. The exact number of floods is unknown, but geologists have documented at least 40; evidence suggests that they occurred between about 19,000 and 13,000 years ago. The floodwaters rushed across eastern Washington, creating the channeled scablands, which are a complex network of dry canyon-like channels, or coulees that are often braided and sharply gouged into the basalt rock underlying the region's deep topsoil. Numerous flat-topped buttes with rich soil stand high above the chaotic scablands. Constrictions at several places caused the floodwaters to pool into large temporary lakes, such as Lake Lewis, in which sediments were deposited. Water depths have been estimated at at Wallula Gap and over modern Portland, Oregon. Sediments were also deposited when the floodwaters slowed in the broad flats of the Quincy, Othello, and Pasco Basins. The floods' periodic inundation of the lower Columbia River Plateau deposited rich sediments; 21st-century farmers in the Willamette Valley "plow fields of fertile Montana soil and clays from Washington's Palouse". 
Over the last several thousand years a series of large landslides have occurred on the north side of the Columbia River Gorge, sending massive amounts of debris south from Table Mountain and Greenleaf Peak into the gorge near the present site of Bonneville Dam. The most recent and significant is known as the Bonneville Slide, which formed a massive earthen dam, filling of the river's length. Various studies have placed the date of the Bonneville Slide anywhere between 1060 and 1760 AD; the idea that the landslide debris present today was formed by more than one slide is relatively recent and may explain the large range of estimates. It has been suggested that if the later dates are accurate there may be a link with the 1700 Cascadia earthquake. The pile of debris resulting from the Bonneville Slide blocked the river until rising water finally washed away the sediment. It is not known how long it took the river to break through the barrier; estimates range from several months to several years. Much of the landslide's debris remained, forcing the river about south of its previous channel and forming the Cascade Rapids. In 1938, the construction of Bonneville Dam inundated the rapids as well as the remaining trees that could be used to refine the estimated date of the landslide. In 1980, the eruption of Mount St. Helens deposited large amounts of sediment in the lower Columbia, temporarily reducing the depth of the shipping channel by . Humans have inhabited the Columbia's watershed for more than 15,000 years, with a transition to a sedentary lifestyle based mainly on salmon starting about 3,500 years ago. In 1962, archaeologists found evidence of human activity dating back 11,230 years at the Marmes Rockshelter, near the confluence of the Palouse and Snake rivers in eastern Washington. In 1996 the skeletal remains of a 9,000-year-old prehistoric man (dubbed Kennewick Man) were found near Kennewick, Washington. 
The discovery rekindled debate in the scientific community over the origins of human habitation in North America and sparked a protracted controversy over whether the scientific or Native American community was entitled to possess and/or study the remains. Many different Native Americans and First Nations peoples have a historical and continuing presence on the Columbia. South of the Canada–US border, the Colville, Spokane, Coeur d'Alene, Yakama, Nez Perce, Cayuse, Palus, Umatilla, Cowlitz, and the Confederated Tribes of Warm Springs live along the US stretch. Along the upper Snake River and Salmon River, the Shoshone Bannock tribes are present. The Sinixt or Lakes people lived on the lower stretch of the Canadian portion, while above that the Shuswap people (Secwepemc in their own language) reckon the whole of the upper Columbia east to the Rockies as part of their territory. The Canadian portion of the Columbia Basin outlines the traditional homelands of the Canadian Kootenay–Ktunaxa. The Chinook tribe, which is not federally recognized, who live near the lower Columbia River, call it ' or ' in the Upper Chinook (Kiksht) language, and it is "Nch’i-Wàna" or "Nchi wana" to the Sahaptin (Ichishkíin Sɨ́nwit)-speaking peoples of its middle course in present-day Washington. The river is known as "" by the Sinixt people, who live in the area of the Arrow Lakes in the river's upper reaches in Canada. All three terms essentially mean "the big river". Oral histories describe the formation and destruction of the Bridge of the Gods, a land bridge that connected the Oregon and Washington sides of the river in the Columbia River Gorge. The bridge, which aligns with geological records of the Bonneville Slide, was described in some stories as the result of a battle between gods, represented by Mount Adams and Mount Hood, in their competition for the affection of a goddess, represented by Mount St. Helens. 
Native American stories about the bridge differ in their details but agree in general that the bridge permitted increased interaction between tribes on the north and south sides of the river. Horses, originally acquired from Spanish New Mexico, spread widely via native trade networks, reaching the Shoshone of the Snake River Plain by 1700. The Nez Perce, Cayuse, and Flathead people acquired their first horses around 1730. Along with horses came aspects of the emerging plains culture, such as equestrian and horse training skills, greatly increased mobility, hunting efficiency, trade over long distances, intensified warfare, the linking of wealth and prestige to horses and war, and the rise of large and powerful tribal confederacies. The Nez Perce and Cayuse kept large herds and made annual long-distance trips to the Great Plains for bison hunting, adopted the plains culture to a significant degree, and became the main conduit through which horses and the plains culture diffused into the Columbia River region. Other peoples acquired horses and aspects of the plains culture unevenly. The Yakama, Umatilla, Palus, Spokane, and Coeur d'Alene maintained sizable herds of horses and adopted some of the plains cultural characteristics, but fishing and fish-related economies remained important. Less affected groups included the Molala, Klickitat, Wenatchi, Okanagan, and Sinkiuse-Columbia peoples, who owned small numbers of horses and adopted few plains culture features. Some groups remained essentially unaffected, such as the Sanpoil and Nespelem people, whose culture remained centered on fishing. Natives of the region encountered foreigners at several times and places during the 18th and 19th centuries. European and American vessels explored the coastal area around the mouth of the river in the late 18th century, trading with local natives. The contact would prove devastating to the Indian tribes; a large portion of their population was wiped out by a smallpox epidemic. 
Canadian explorer Alexander Mackenzie crossed what is now interior British Columbia in 1793. From 1805 to 1807, the Lewis and Clark Expedition entered the Oregon Country along the Clearwater and Snake rivers, and encountered numerous small settlements of natives. Their records recount tales of hospitable traders who were not above stealing small items from the visitors. They also noted brass teakettles, a British musket, and other artifacts that had been obtained in trade with coastal tribes. From the earliest contact with westerners, the natives of the mid- and lower Columbia were not tribal, but instead congregated in social units no larger than a village, and more often at a family level; these units would shift with the season as people moved about, following the salmon catch up and down the river's tributaries. Sparked by the 1847 Whitman Massacre, a number of violent battles were fought between American settlers and the region's natives. The subsequent Indian Wars, especially the Yakima War, decimated the native population and removed much land from native control. As years progressed, the right of natives to fish along the Columbia became the central issue of contention with the states, commercial fishers, and private property owners. The US Supreme Court upheld fishing rights in landmark cases in 1905 and 1918, as well as the 1974 case "United States v. Washington", commonly called the Boldt Decision. Fish were central to the culture of the region's natives, both as sustenance and as part of their religious beliefs. Natives drew fish from the Columbia at several major sites, which also served as trading posts. Celilo Falls, located east of the modern city of The Dalles, was a vital hub for trade and the interaction of different cultural groups, being used for fishing and trading for 11,000 years. Prior to contact with westerners, villages along this stretch may have at times had a population as great as 10,000. 
The site drew traders from as far away as the Great Plains. The Cascades Rapids of the Columbia River Gorge, and Kettle Falls and Priest Rapids in eastern Washington, were also major fishing and trading sites. In prehistoric times the Columbia's salmon and steelhead runs numbered an estimated annual average of 10 to 16 million fish. In comparison, the largest run since 1938 was in 1986, with 3.2 million fish entering the Columbia. The annual catch by natives has been estimated at . The most important and productive native fishing site was located at Celilo Falls, which was perhaps the most productive inland fishing site in North America. The falls were located at the border between Chinookan- and Sahaptian-speaking peoples and served as the center of an extensive trading network across the Pacific Plateau. Celilo was the oldest continuously inhabited community on the North American continent. Salmon canneries established by white settlers beginning in 1866 had a strong negative impact on the salmon population, and in 1908 US President Theodore Roosevelt observed that the salmon runs were but a fraction of what they had been 25 years prior. As river development continued in the 20th century, each of these major fishing sites was flooded by a dam, beginning with Cascades Rapids in 1938. The development was accompanied by extensive negotiations between natives and US government agencies. The Confederated Tribes of Warm Springs, a coalition of various tribes, adopted a constitution and incorporated after the 1938 completion of the Bonneville Dam flooded Cascades Rapids; the Yakama were slower to do so, organizing a formal government in 1944. Still, in the 1930s, there were natives who lived along the river and fished year round, moving along with the fish's migration patterns throughout the seasons. In the 21st century, the Yakama, Nez Perce, Umatilla, and Warm Springs tribes all have treaty fishing rights along the Columbia and its tributaries. 
In 1957 Celilo Falls was submerged by the construction of The Dalles Dam, and the native fishing community was displaced. The affected tribes received a $26.8 million settlement for the loss of Celilo and other fishing sites submerged by The Dalles Dam. The Confederated Tribes of Warm Springs used part of its $4 million settlement to establish the Kah-Nee-Ta resort south of Mount Hood. Some historians believe that Japanese or Chinese vessels blown off course reached the Northwest Coast long before Europeans—possibly as early as 219 BCE. Historian Derek Hayes claims that "It is a near certainty that Japanese or Chinese people arrived on the northwest coast long before any European." It is unknown whether they landed near the Columbia. Evidence exists that Spanish castaways reached the shore in 1679 and traded with the Clatsop; if these were the first Europeans to see the Columbia, they failed to send word home to Spain. In the 18th century, there was strong interest in discovering a Northwest Passage that would permit navigation between the Atlantic (or inland North America) and the Pacific Ocean. Many ships in the area, especially those under Spanish and British command, searched the northwest coast for a large river that might connect to Hudson Bay or the Missouri River. The first documented European discovery of the Columbia River was that of Bruno de Heceta, who in 1775 sighted the river's mouth. On the advice of his officers, he did not explore it, as he was short-staffed and the current was strong. He considered it a bay, and called it "Ensenada de Asunción". Later Spanish maps based on his discovery showed a river, labeled "Rio de San Roque", or an entrance, called "Entrada de Hezeta". Following Heceta's reports, British maritime fur trader Captain John Meares searched for the river in 1788 but concluded that it did not exist. He named Cape Disappointment for the non-existent river, not realizing the cape marks the northern edge of the river's mouth. 
What happened next would form the basis for decades of both cooperation and dispute between British and American exploration of, and ownership claim to, the region. Royal Navy commander George Vancouver sailed past the mouth in April 1792 and observed a change in the water's color, but he accepted Meares' report and continued on his journey northward. Later that month, Vancouver encountered the American captain Robert Gray at the Strait of Juan de Fuca. Gray reported that he had seen the entrance to the Columbia and had spent nine days trying but failing to enter. On May 12, 1792, Gray returned south and crossed the Columbia Bar, becoming the first known explorer of European descent to enter the river. Gray's fur trading mission had been financed by Boston merchants, who outfitted him with a private vessel named "Columbia Rediviva"; he named the river after the ship on May 18. Gray spent nine days trading near the mouth of the Columbia, then left without having gone beyond upstream. The farthest point reached was Grays Bay at the mouth of Grays River. Gray's discovery of the Columbia River was later used by the United States to support its claim to the Oregon Country, which was also claimed by Russia, Great Britain, Spain and other nations. In October 1792, Vancouver sent Lieutenant William Robert Broughton, his second-in-command, up the river. Broughton got as far as the Sandy River at the western end of the Columbia River Gorge, about upstream, sighting and naming Mount Hood. Broughton formally claimed the river, its drainage basin, and the nearby coast for Britain. In contrast, Gray had not made any formal claims on behalf of the United States. Because the Columbia was at the same latitude as the headwaters of the Missouri River, there was some speculation that Gray and Vancouver had discovered the long-sought Northwest Passage. A 1798 British map showed a dotted line connecting the Columbia with the Missouri. 
When the American explorers Meriwether Lewis and William Clark charted the vast, unmapped lands of the American West in their overland expedition (1803–05), they found no passage between the rivers. After crossing the Rocky Mountains, Lewis and Clark built dugout canoes and paddled down the Snake River, reaching the Columbia near the present-day Tri-Cities, Washington. They explored a few miles upriver, as far as Bateman Island, before heading down the Columbia, concluding their journey at the river's mouth and establishing Fort Clatsop, a short-lived encampment occupied for less than three months. Canadian explorer David Thompson, of the North West Company, spent the winter of 1807–08 at Kootanae House near the source of the Columbia at present-day Invermere, British Columbia. Over the next few years he explored much of the river and its northern tributaries. In 1811 he traveled down the Columbia to the Pacific Ocean, arriving at the mouth just after John Jacob Astor's Pacific Fur Company had founded Astoria. On his return to the north, Thompson explored the one remaining part of the river he had not yet seen, becoming the first Euro-descended person to travel the entire length of the river. In 1825, the Hudson's Bay Company (HBC) established Fort Vancouver on the bank of the Columbia, in what is now Vancouver, Washington, as the headquarters of the company's Columbia District, which encompassed everything west of the Rocky Mountains. Chief Factor John McLoughlin, a physician who had been in the fur trade since 1804, was appointed superintendent of the Columbia District. The HBC reoriented its Columbia District operations toward the Pacific Ocean via the Columbia, which became the region's main trunk route. In the early 1840s Americans began to colonize the Oregon country in large numbers via the Oregon Trail, despite the HBC's efforts to discourage American settlement in the region. 
For many the final leg of the journey involved travel down the lower Columbia River to Fort Vancouver. This part of the Oregon Trail, the treacherous stretch from The Dalles to below the Cascades, could not be traversed by horses or wagons (only watercraft, at great risk). This prompted the 1846 construction of the Barlow Road. In the Treaty of 1818 the United States and Britain agreed that both nations were to enjoy equal rights in Oregon Country for 10 years. By 1828, when the so-called "joint occupation" was renewed for an indefinite period, it seemed probable that the lower Columbia River would in time become the border between the two nations. For years the Hudson's Bay Company successfully maintained control of the Columbia River and American attempts to gain a foothold were fended off. In the 1830s, American religious missions were established at several locations in the lower Columbia River region. In the 1840s a mass migration of American settlers undermined British control. The Hudson's Bay Company tried to maintain dominance by shifting from the fur trade, which was in decline, to exporting other goods such as salmon and lumber. Colonization schemes were attempted, but failed to match the scale of American settlement. Americans generally settled south of the Columbia, mainly in the Willamette Valley. The Hudson's Bay Company tried to establish settlements north of the river, but nearly all the British colonists moved south to the Willamette Valley. The hope that the British colonists might dilute the American presence in the valley failed in the face of the overwhelming number of American settlers. These developments rekindled the issue of "joint occupation" and the boundary dispute. While some British interests, especially the Hudson's Bay Company, fought for a boundary along the Columbia River, the Oregon Treaty of 1846 set the boundary at the 49th parallel. As part of the treaty, the British retained all areas north of the line while the U.S. 
acquired the south. The Columbia River became much of the border between the U.S. territories of Oregon and Washington. Oregon became a U.S. state in 1859; Washington followed in 1889. By the turn of the 20th century, the difficulty of navigating the Columbia was seen as an impediment to the economic development of the Inland Empire region east of the Cascades. The dredging and dam building that followed would permanently alter the river, disrupting its natural flow but also providing electricity, irrigation, navigability and other benefits to the region. American captain Robert Gray and British captain George Vancouver, who explored the river in 1792, proved that it was possible to cross the Columbia Bar. Many of the challenges associated with that feat remain today; even with modern engineering alterations to the mouth of the river, the strong currents and shifting sandbar make it dangerous to pass between the river and the Pacific Ocean. The use of steamboats along the river, beginning with the British "Beaver" in 1836 and followed by American vessels in 1850, contributed to the rapid settlement and economic development of the region. Steamboats operated in several distinct stretches of the river: on its lower reaches, from the Pacific Ocean to Cascades Rapids; from the Cascades to Celilo Falls; from Celilo to the confluence with the Snake River; on the Wenatchee Reach of eastern Washington; on British Columbia's Arrow Lakes; and on tributaries like the Willamette, the Snake and Kootenay Lake. The boats, initially powered by burning wood, carried passengers and freight throughout the region for many years. Early railroads served to connect steamboat lines interrupted by waterfalls on the river's lower reaches. In the 1880s, railroads maintained by companies such as the Oregon Railroad and Navigation Company began to supplement steamboat operations as the major transportation links along the river. 
As early as 1881, industrialists proposed altering the natural channel of the Columbia to improve navigation. Changes to the river over the years have included the construction of jetties at the river's mouth, dredging, and the construction of canals and navigation locks. Today, ocean freighters can travel upriver as far as Portland and Vancouver, and barges can reach as far inland as Lewiston, Idaho. The shifting Columbia Bar makes passage between the river and the Pacific Ocean difficult and dangerous, and numerous rapids along the river hinder navigation. "Pacific Graveyard," a 1964 book by James A. Gibbs, describes the many shipwrecks near the mouth of the Columbia. Jetties, first constructed in 1886, extend the river's channel into the ocean. Strong currents and the shifting sandbar remain a threat to ships entering the river and necessitate continuous maintenance of the jetties. In 1891 the Columbia was dredged to enhance shipping. The channel between the ocean and Portland and Vancouver was deepened from to . "The Columbian" called for the channel to be deepened to as early as 1905, but that depth was not attained until 1976. Cascade Locks and Canal were first constructed in 1896 around the Cascades Rapids, enabling boats to travel safely through the Columbia River Gorge. The Celilo Canal, bypassing Celilo Falls, opened to river traffic in 1915. In the mid-20th century, the construction of dams along the length of the river submerged the rapids beneath a series of reservoirs. An extensive system of locks allowed ships and barges to pass easily from one reservoir to the next. A navigation channel reaching to Lewiston, Idaho, along the Columbia and Snake rivers, was completed in 1975. Among the main commodities are wheat and other grains, mainly for export. As of 2016, the Columbia ranked third, behind the Mississippi and Paraná rivers, among the world's largest export corridors for grain. The 1980 eruption of Mount St. 
Helens caused mudslides in the area, which reduced the Columbia's depth by for a stretch, disrupting Portland's economy. Efforts to maintain and improve the navigation channel have continued to the present day. In 1990 a new round of studies examined the possibility of further dredging on the lower Columbia. The plans were controversial from the start because of economic and environmental concerns. In 1999, Congress authorized deepening the channel between Portland and Astoria from , which would make it possible for large container and grain ships to reach Portland and Vancouver. The project met opposition because of concerns about stirring up toxic sediment on the riverbed. Portland-based Northwest Environmental Advocates brought a lawsuit against the Army Corps of Engineers, but it was rejected by the Ninth U.S. Circuit Court of Appeals in August 2006. The project includes measures to mitigate environmental damage; for instance, the US Army Corps of Engineers must restore 12 times the area of wetland damaged by the project. In early 2006, the Corps spilled of hydraulic oil into the Columbia, drawing further criticism from environmental organizations. Work on the project began in 2005 and concluded in 2010. The project's cost was estimated at $150 million. The federal government paid 65 percent; Oregon and Washington paid $27 million each, and six local ports also contributed to the cost. In 1902, the United States Bureau of Reclamation was established to aid in the economic development of arid western states. One of its major undertakings was building Grand Coulee Dam to provide irrigation for the of the Columbia Basin Project in central Washington. With the onset of World War II, the focus of dam construction shifted to production of hydroelectricity. Irrigation efforts resumed after the war. River development occurred within the structure of the 1909 International Boundary Waters Treaty between the US and Canada. 
The United States Congress passed the Rivers and Harbors Act of 1925, which directed the Army Corps of Engineers and the Federal Power Commission to explore the development of the nation's rivers. This prompted agencies to conduct the first formal financial analysis of hydroelectric development; the reports produced by various agencies were presented in House Document 308. Those reports, and subsequent related reports, are referred to as 308 Reports. In the late 1920s, political forces in the Northwestern United States generally favored private development of hydroelectric dams along the Columbia. But the overwhelming victories of gubernatorial candidate George W. Joseph in the 1930 Republican primary, and later his law partner Julius Meier, were understood to demonstrate strong public support for public ownership of dams. In 1933, President Franklin D. Roosevelt signed a bill that enabled the construction of the Bonneville and Grand Coulee dams as public works projects. The legislation was attributed to the efforts of Oregon Senator Charles McNary, Washington Senator Clarence Dill, and Oregon Congressman Charles Martin, among others. In 1948 floods swept through the Columbia watershed, destroying Vanport, then the second largest city in Oregon, and impacting cities as far north as Trail, British Columbia. The flooding prompted the United States Congress to pass the Flood Control Act of 1950, authorizing the federal development of additional dams and other flood control mechanisms. By that time local communities had become wary of federal hydroelectric projects, and sought local control of new developments; a public utility district in Grant County, Washington, ultimately began construction of the dam at Priest Rapids. In the 1960s, the United States and Canada signed the Columbia River Treaty, which focused on flood control and the maximization of downstream power generation. 
Canada agreed to build dams and provide reservoir storage, and the United States agreed to deliver to Canada one-half of the increase in US downstream power benefits as estimated five years in advance. Canada's obligation was met by building three dams (two on the Columbia, and one on the Duncan River), the last of which was completed in 1973. Today the main stem of the Columbia River has 14 dams, of which three are in Canada and 11 in the US. Four mainstem dams and four lower Snake River dams contain navigation locks to allow ship and barge passage from the ocean as far as Lewiston, Idaho. The river system as a whole has more than 400 dams for hydroelectricity and irrigation. The dams address a variety of demands, including flood control, navigation, stream flow regulation, storage and delivery of stored waters, reclamation of public lands and Indian reservations, and the generation of hydroelectric power. The larger US dams are owned and operated by the federal government (some by the Army Corps of Engineers and some by the Bureau of Reclamation), while the smaller dams are operated by public utility districts, and private power companies. The federally operated system is known as the Federal Columbia River Power System, which includes 31 dams on the Columbia and its tributaries. The system has altered the seasonal flow of the river in order to meet higher electricity demands during the winter. At the beginning of the 20th century, roughly 75 percent of the Columbia's flow occurred in the summer, between April and September. By 1980, the summer proportion had been lowered to about 50 percent, essentially eliminating the seasonal pattern. The installation of dams dramatically altered the landscape and ecosystem of the river. At one time, the Columbia was one of the top salmon-producing river systems in the world. 
Previously active fishing sites, such as Celilo Falls in the eastern Columbia River Gorge, have exhibited a sharp decline in fishing along the Columbia in the last century, and salmon populations have been dramatically reduced. Fish ladders have been installed at some dam sites to help the fish journey to spawning waters. Chief Joseph Dam has no fish ladders and completely blocks fish migration to the upper half of the Columbia River system. The Bureau of Reclamation's Columbia Basin Project focused on the generally dry region of central Washington known as the Columbia Basin, which features rich loess soil. Several groups developed competing proposals, and in 1933, President Franklin D. Roosevelt authorized the Columbia Basin Project. The Grand Coulee Dam was the project's central component; upon completion, it pumped water up from the Columbia to fill the formerly dry Grand Coulee, forming Banks Lake. By 1935, the intended height of the dam was increased from a range between to , a height that would extend the lake impounded by the dam all the way to the Canada–US border; the project had grown from a local New Deal relief measure to a major national project. The project's initial purpose was irrigation, but the onset of World War II created a high demand for electricity, mainly for aluminum production and for the development of nuclear weapons at the Hanford Site. Irrigation began in 1951. The project provides water to more than of fertile but arid land in central Washington, transforming the region into a major agricultural center. Important crops include orchard fruit, potatoes, alfalfa, mint, beans, beets, and wine grapes. Since 1750, the Columbia has experienced six multi-year droughts. The longest, lasting 12 years in the mid‑19th century, reduced the river's flow to 20 percent below average. Scientists have expressed concern that a similar drought would have grave consequences in a region so dependent on the Columbia. 
In 1992–1993, a lesser drought affected farmers, hydroelectric power producers, shippers, and wildlife managers. Many farmers in central Washington build dams on their property for irrigation and to control frost on their crops. The Washington Department of Ecology, using new techniques involving aerial photographs, estimated there may be as many as a hundred such dams in the area, most of which are illegal. Six such dams have failed in recent years, causing hundreds of thousands of dollars of damage to crops and public roads. Fourteen farms in the area have gone through the permitting process to build such dams legally. The Columbia's heavy flow and large elevation drop over a short distance, , give it tremendous capacity for hydroelectricity generation. In comparison, the Mississippi drops less than . The Columbia alone possesses one-third of the United States's hydroelectric potential. In 2012, the river and its tributaries accounted for 29 GW of hydroelectric generating capacity, contributing 44 percent of the total hydroelectric generation in the nation. The two largest of the basin's 150 hydroelectric projects, the Grand Coulee Dam and the Chief Joseph Dam, are also the largest hydroelectric plants in the United States. As of 2017, Grand Coulee is the fifth largest hydroelectric plant in the world. Inexpensive hydropower supported the location of a large aluminum industry in the region, because smelting aluminum from bauxite-derived alumina requires large amounts of electricity. Until 2000, the Northwestern United States produced up to 17 percent of the world's aluminum and 40 percent of the aluminum produced in the United States. The commoditization of power in the early 21st century, coupled with drought that reduced the generation capacity of the river, damaged the industry, and by 2001 Columbia River aluminum producers had idled 80 percent of their production capacity. 
By 2003, the entire United States produced only 15 percent of the world's aluminum, and many smelters along the Columbia had gone dormant or out of business. Power remains relatively inexpensive along the Columbia, and since the mid-2000s several global enterprises have moved server farm operations into the area to avail themselves of cheap power. Downriver of Grand Coulee, each dam's reservoir is closely regulated by the Bonneville Power Administration (BPA), the U.S. Army Corps of Engineers, and various Washington public utility districts to ensure flow, flood control, and power generation objectives are met. Increasingly, hydropower operations are required to meet standards under the US Endangered Species Act and other agreements to manage operations to minimize impacts on salmon and other fish, and some conservation and fishing groups support removing four dams on the lower Snake River, the largest tributary of the Columbia. In 1941, the BPA hired Oklahoma folksinger Woody Guthrie to write songs for a documentary film promoting the benefits of hydropower. In the month he spent traveling the region, Guthrie wrote 26 songs, which have become an important part of the cultural history of the region. The Columbia supports several species of anadromous fish that migrate between the Pacific Ocean and freshwater tributaries of the river. Sockeye salmon, coho and Chinook (also known as "king") salmon, and steelhead, all of the genus "Oncorhynchus", are ocean fish that migrate up the rivers at the end of their life cycles to spawn. White sturgeon, which take 15 to 25 years to mature, typically migrate between the ocean and the upstream habitat several times during their lives. Salmon populations declined dramatically after the establishment of canneries in 1867. In 1879 it was reported that 545,450 salmon had been caught in a recent season, mainly to be canned for export to England; a can could be sold for 8d or 9d.
By 1908, there was widespread concern about the decline of salmon and sturgeon. In that year, the people of Oregon passed two laws under their newly instituted program of citizens' initiatives limiting fishing on the Columbia and other rivers. Then in 1948, another initiative banned the use of seine nets (devices already used by Native Americans, and refined by later settlers) altogether. Dams interrupt the migration of anadromous fish. Salmon and steelhead return to the streams in which they were born to spawn; where dams prevent their return, entire populations of salmon die. Some of the Columbia and Snake River dams employ fish ladders, which are effective to varying degrees at allowing these fish to travel upstream. Another problem exists for the juvenile salmon headed downstream to the ocean. Previously, this journey would have taken two to three weeks. With river currents slowed by the dams, and the Columbia converted from wild river to a series of slackwater pools, the journey can take several months, which increases the mortality rate. In some cases, the Army Corps of Engineers transports juvenile fish downstream by truck or river barge. The Chief Joseph Dam and several dams on the Columbia's tributaries entirely block migration, and there are no migrating fish on the river above these dams. Sturgeon have different migration habits and can survive without ever visiting the ocean. In many upstream areas cut off from the ocean by dams, sturgeon simply live upstream of the dam. Not all fish have suffered from the modifications to the river; the northern pikeminnow (formerly known as the "squawfish") thrives in the warmer, slower water created by the dams. Research in the mid-1980s found that juvenile salmon were suffering substantially from the predatory pikeminnow, and in 1990, in the interest of protecting salmon, a "bounty" program was established to reward anglers for catching pikeminnow. 
In 1994, the salmon catch was smaller than usual in the rivers of Oregon, Washington, and British Columbia, causing concern among commercial fishermen, government agencies, and tribal leaders. US government intervention, to which the states of Alaska, Idaho, and Oregon objected, included an 11-day closure of an Alaska fishery. In April 1994 the Pacific Fisheries Management Council unanimously approved the strictest regulations in 18 years, banning all commercial salmon fishing for that year from Cape Falcon north to the Canada–US border. In the winter of 1994, the return of coho salmon far exceeded expectations, which was attributed in part to the fishing ban. Also in 1994, United States Secretary of the Interior Bruce Babbitt first proposed the removal of several Pacific Northwest dams because of their impact on salmon spawning. The Northwest Power Planning Council approved a plan that provided more water for fish and less for electricity, irrigation, and transportation. Environmental advocates have called for the removal of certain dams in the Columbia system in the years since. Of the 227 major dams in the Columbia River drainage basin, the four Washington dams on the lower Snake River are often identified for removal, for example in an ongoing lawsuit concerning a Bush administration plan for salmon recovery. These dams and reservoirs limit the recovery of upriver salmon runs to Idaho's Salmon and Clearwater rivers. Historically, the Snake produced over 1.5 million spring and summer Chinook salmon, a number that has dwindled to several thousand in recent years. Idaho Power Company's Hells Canyon dams have no fish ladders (and do not pass juvenile salmon downstream), and thus allow no steelhead or salmon to migrate above Hells Canyon. In 2007, the removal of the Marmot Dam on the Sandy River was the first dam removal in the system.
Other Columbia Basin dams that have been removed include Condit Dam on Washington's White Salmon River, and the Milltown Dam on the Clark Fork in Montana. In southeastern Washington, a stretch of the river passes through the Hanford Site, established in 1943 as part of the Manhattan Project. The site served as a plutonium production complex, with nine nuclear reactors and related facilities along the banks of the river. From 1944 to 1971, pump systems drew cooling water from the river and, after treating this water for use by the reactors, returned it to the river. Before being released back into the river, the used water was held in large tanks known as retention basins for up to six hours. Longer-lived isotopes were not affected by this retention, and several terabecquerels entered the river every day. By 1957, the eight plutonium production reactors at Hanford dumped a daily average of 50,000 curies of radioactive material into the Columbia. These releases were kept secret by the federal government until the release of declassified documents in the late 1980s. Radiation was measured downstream as far west as the Washington and Oregon coasts. The nuclear reactors were decommissioned at the end of the Cold War, and the Hanford site is now the focus of one of the world's largest environmental cleanups, managed by the Department of Energy under the oversight of the Washington Department of Ecology and the Environmental Protection Agency. Nearby aquifers contain an estimated 270 billion US gallons (1 billion m3) of groundwater contaminated by high-level nuclear waste that has leaked out of Hanford's underground storage tanks. An estimated 1 million US gallons (3,785 m3) of highly radioactive waste is traveling through groundwater toward the Columbia River. This waste is expected to reach the river in 12 to 50 years if cleanup does not proceed on schedule. In addition to concerns about nuclear waste, numerous other pollutants are found in the river.
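The retention basins worked because radioactive decay is exponential: holding the water for six hours lets short-lived activation products decay substantially, while longer-lived isotopes pass through almost unchanged, as noted above. The sketch below illustrates this with a few approximate half-lives of real neutron-activation products, used purely as illustrative values, not as a record of what Hanford actually released:

```python
def fraction_remaining(half_life_hours: float, retention_hours: float = 6.0) -> float:
    """Exponential radioactive decay: N/N0 = 2 ** (-t / t_half)."""
    return 2.0 ** (-retention_hours / half_life_hours)

# Approximate half-lives of some neutron-activation products (illustrative):
half_lives_h = {
    "Mn-56": 2.6,         # hours -- mostly decays during retention
    "Na-24": 15.0,        # hours -- partially decays
    "P-32": 14.3 * 24.0,  # 14.3 days, in hours -- essentially unaffected
}

for isotope, t_half in half_lives_h.items():
    frac = fraction_remaining(t_half)
    print(f"{isotope}: {frac:.0%} of activity survives 6 h retention")
```

The pattern matches the text: a six-hour hold removes most of the shortest-lived activity but does almost nothing to isotopes with half-lives of days or longer.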
These include chemical pesticides, bacteria, arsenic, dioxins, and polychlorinated biphenyls (PCBs). Studies have also found significant levels of toxins in fish and the waters they inhabit within the basin. Accumulation of toxins in fish threatens the survival of fish species, and human consumption of these fish can lead to health problems. Water quality is also an important factor in the survival of other wildlife and plants that grow in the Columbia River drainage basin. The states, Indian tribes, and federal government are all engaged in efforts to restore and improve the water, land, and air quality of the Columbia River drainage basin and have committed to work together to enhance and accomplish critical ecosystem restoration efforts. A number of cleanup efforts are currently underway, including Superfund projects at Portland Harbor, Hanford, and Lake Roosevelt. Timber industry activity further contaminates river water, for example through the increased sediment runoff that results from clearcuts. The Northwest Forest Plan, a federal policy adopted in 1994, mandated that timber companies consider the environmental impacts of their practices on rivers like the Columbia. On July 1, 2003, Christopher Swain of Portland, Oregon, became the first person to swim the Columbia River's entire length, in an effort to raise public awareness about the river's environmental health. Both natural and anthropogenic processes are involved in the cycling of nutrients in the Columbia River basin. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Niño Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange.
Major anthropogenic impacts on nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams. Nutrient dynamics vary in the river basin from the headwaters to the main river and dams, and finally to the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing the residence time of nutrients and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of nutrients into the Pacific Ocean, with the exception of nitrogen, which is delivered into the estuary by ocean upwelling sources. Most of the Columbia's drainage basin (which is about the size of France) lies roughly between the Rocky Mountains on the east and the Cascade Mountains on the west. In the United States and Canada the term watershed is often used to mean drainage basin. The term "Columbia Basin" is used to refer not only to the entire drainage basin but also to subsets of the river's full watershed, such as the relatively flat and unforested area in eastern Washington bounded by the Cascades, the Rocky Mountains, and the Blue Mountains. Within the watershed are diverse landforms including mountains, arid plateaus, river valleys, rolling uplands, and deep gorges. Grand Teton National Park lies in the watershed, as do parts of Yellowstone National Park, Glacier National Park, Mount Rainier National Park, and North Cascades National Park. Canadian national parks in the watershed include Kootenay National Park, Yoho National Park, Glacier National Park, and Mount Revelstoke National Park. Hells Canyon, the deepest gorge in North America, and the Columbia Gorge are in the watershed.
Vegetation varies widely, ranging from western hemlock and western redcedar in the moist regions to sagebrush in the arid regions. The watershed provides habitat for 609 known fish and wildlife species, including the bull trout, bald eagle, gray wolf, grizzly bear, and Canada lynx. The World Wide Fund for Nature (WWF) divides the waters of the Columbia and its tributaries into three freshwater ecoregions, naming them Columbia Glaciated, Columbia Unglaciated, and Upper Snake. The Columbia Glaciated ecoregion, making up about a third of the total watershed, lies in the north and was covered with ice sheets during the Pleistocene. The ecoregion includes the mainstem Columbia north of the Snake River and tributaries such as the Yakima, Okanagan, Pend Oreille, Clark Fork, and Kootenay rivers. The effects of glaciation include a number of large lakes and a relatively low diversity of freshwater fish. The Upper Snake ecoregion is defined as the Snake River watershed above Shoshone Falls, which totally blocks fish migration. This region has 14 species of fish, many of which are endemic. The Columbia Unglaciated ecoregion makes up the rest of the watershed. It includes the mainstem Columbia below the Snake River and tributaries such as the Salmon, John Day, Deschutes, and lower Snake rivers. Of the three ecoregions it is the richest in terms of freshwater species diversity. There are 35 species of fish, of which four are endemic. There are also high levels of mollusk endemism. In 2016, over eight million people lived within the Columbia's drainage basin. Of this total, about 3.5 million people lived in Oregon, 2.1 million in Washington, 1.7 million in Idaho, half a million in British Columbia, and 0.4 million in Montana. Population in the watershed has been rising for many decades and is projected to rise to about 10 million by 2030.
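The population figures above (roughly 8 million in 2016, projected to reach about 10 million by 2030) imply a modest compound growth rate. The sketch below back-computes it, assuming steady exponential growth over the whole period, which is a simplification of how the underlying projections are built:

```python
# Implied average annual growth rate for the basin's population,
# assuming steady compound growth (a simplification).
pop_2016 = 8.0e6      # "over eight million" in 2016
pop_2030 = 10.0e6     # projected "about 10 million" by 2030
years = 2030 - 2016

annual_growth = (pop_2030 / pop_2016) ** (1.0 / years) - 1.0
print(f"Implied average annual growth: {annual_growth:.1%}")  # about 1.6% per year
```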
The highest population densities are found west of the Cascade Mountains along the I-5 corridor, especially in the Portland-Vancouver urban area. High densities are also found around Spokane, Washington, and Boise, Idaho. Although much of the watershed is rural and sparsely populated, areas with recreational and scenic values are growing rapidly. The central Oregon county of Deschutes is the fastest-growing in the state. Populations have also been growing just east of the Cascades in central Washington around the city of Yakima and the Tri-Cities area. Projections for the coming decades assume growth throughout the watershed, including the interior. The Canadian part of the Okanagan subbasin is also growing rapidly. Climate varies greatly from place to place within the watershed. Elevation ranges from sea level at the river mouth to high mountain summits, and temperatures vary with elevation; the highest peak in the watershed is Mount Rainier. High elevations have cold winters and short cool summers; interior regions are subject to great temperature variability and severe droughts. Over some of the watershed, especially west of the Cascade Mountains, precipitation maximums occur in winter, when Pacific storms come ashore. Atmospheric conditions block the flow of moisture in summer, which is generally dry except for occasional thunderstorms in the interior. In some of the eastern parts of the watershed, especially shrub-steppe regions with continental climate patterns, precipitation maximums occur in early summer. Annual precipitation is far greater in the Cascades than in the interior, and much of the watershed is semiarid. Several major North American drainage basins and many minor ones share a common border with the Columbia River's drainage basin. To the east, in northern Wyoming and Montana, the Continental Divide separates the Columbia watershed from the Mississippi-Missouri watershed, which empties into the Gulf of Mexico.
To the northeast, mostly along the southern border between British Columbia and Alberta, the Continental Divide separates the Columbia watershed from the Nelson-Lake Winnipeg-Saskatchewan watershed, which empties into Hudson Bay. The Mississippi and Nelson watersheds are separated by the Laurentian Divide, which meets the Continental Divide at Triple Divide Peak near the headwaters of the Columbia's Flathead River tributary. This point marks the meeting of three of North America's main drainage patterns, to the Pacific Ocean, to Hudson Bay, and to the Atlantic Ocean via the Gulf of Mexico. Further north along the Continental Divide, a short portion of the combined Continental and Laurentian divides separates the Columbia watershed from the Mackenzie-Slave-Athabasca watershed, which empties into the Arctic Ocean. The Nelson and Mackenzie watersheds are separated by a divide between streams flowing to the Arctic Ocean and those of the Hudson Bay watershed. This divide meets the Continental Divide at Snow Dome (also known as Dome), near the northernmost bend of the Columbia River. To the southeast, in western Wyoming, another divide separates the Columbia watershed from the Colorado–Green watershed, which empties into the Gulf of California. The Columbia, Colorado, and Mississippi watersheds meet at Three Waters Mountain in the Wind River Range. To the south, in Oregon, Nevada, Utah, Idaho, and Wyoming, the Columbia watershed is divided from the Great Basin, whose several watersheds are endorheic, not emptying into any ocean but rather drying up or sinking into sumps. Great Basin watersheds that share a border with the Columbia watershed include Harney Basin, Humboldt River, and Great Salt Lake. The associated triple divide points are Commissary Ridge North, Wyoming, and Sproats Meadow Northwest, Oregon. To the north, mostly in British Columbia, the Columbia watershed borders the Fraser River watershed.
To the west and southwest the Columbia watershed borders a number of smaller watersheds that drain to the Pacific Ocean, such as the Klamath River in Oregon and California and the Puget Sound Basin in Washington. The Columbia receives more than 60 significant tributaries. The four largest that empty directly into the Columbia (measured either by discharge or by size of watershed) are the Snake River (mostly in Idaho), the Willamette River (in northwest Oregon), the Kootenay River (mostly in British Columbia), and the Pend Oreille River (mostly in northern Washington and Idaho, also known as the lower part of the Clark Fork). Each of these four has a large average discharge and drains an extensive area. The Snake is by far the largest tributary. Its watershed is larger than the state of Idaho. Its discharge is roughly a third of the Columbia's at the rivers' confluence, but compared to the Columbia upstream of the confluence the Snake is longer (113%) and has a larger drainage basin (104%). The Pend Oreille River system (including its main tributaries, the Clark Fork and Flathead rivers) is also similar in size to the Columbia at their confluence. Compared to the Columbia River above the two rivers' confluence, the Pend Oreille-Clark-Flathead is nearly as long (about 86%), its basin about three-fourths as large (76%), and its discharge over a third (37%).
https://en.wikipedia.org/wiki?curid=5408
Commelinales Commelinales is the botanical name of an order of flowering plants. It comprises five families: Commelinaceae, Haemodoraceae, Hanguanaceae, Philydraceae, and Pontederiaceae. All the families combined contain over 885 species in about 70 genera; the majority of species are in the Commelinaceae. Plants in the order share a number of synapomorphies that tie them together, such as a lack of mycorrhizal associations and tapetal raphides. Estimates differ as to when the Commelinales evolved, but most suggest an origin and diversification sometime during the mid- to late Cretaceous. Depending on the methods used, studies suggest an origin between 123 and 73 million years ago, with diversification occurring within the group 110 to 66 million years ago. The order's closest relatives are in the Zingiberales, which includes ginger, bananas, cardamom, and others. According to the most recent classification scheme, the APG IV of 2016, the order includes the five families listed above. This is unchanged from the APG III of 2009 and the APG II of 2003, but differs from the older APG system of 1998, which did not include Hanguanaceae. The older Cronquist system of 1981, which was based purely on morphological data, placed the order in subclass Commelinidae of class Liliopsida and included the families Commelinaceae, Mayacaceae, Rapateaceae and Xyridaceae. These families are now known to be only distantly related. In the classification system of Dahlgren, the Commelinales were one of four orders in the superorder Commeliniflorae (also called Commelinanae), and contained five families, of which only Commelinaceae has been retained by the Angiosperm Phylogeny Group (APG).
https://en.wikipedia.org/wiki?curid=5409
Cucurbitales The Cucurbitales are an order of flowering plants, included in the rosid group of dicotyledons. The order occurs mostly in tropical areas, with a limited presence in subtropical and temperate regions. It includes shrubs and trees, together with many herbs and climbers. One major characteristic of the Cucurbitales is the presence of unisexual flowers, mostly pentacyclic, with thick pointed petals (whenever present). Pollination is usually performed by insects, but wind pollination is also present (in Coriariaceae and Datiscaceae). The order consists of roughly 2600 species in eight families. The largest families are Begoniaceae (begonia family) with around 1500 species and Cucurbitaceae (gourd family) with around 900 species. These two families contain the order's only economically important plants. Specifically, the Cucurbitaceae include some food species, such as squash, pumpkin (both from "Cucurbita"), watermelon ("Citrullus vulgaris"), and cucumber and melons ("Cucumis"). The Begoniaceae are known for their horticultural species, of which there are over 130, with many more varieties. The Cucurbitales have a cosmopolitan distribution and are particularly diverse in the tropics. Most are herbs, climbing herbs, woody lianas or shrubs, but some genera include canopy-forming evergreen lauroid trees. Members of the Cucurbitales form an important component of low to montane tropical forest, where they are especially well represented in terms of species numbers. Although the total number of species in the order is not known with certainty, conservative estimates indicate about 2600 species worldwide, distributed among 109 genera. Compared to other flowering plant orders, the taxonomy of the Cucurbitales is poorly understood owing to their great diversity, difficulty in identification, and limited study.
The order Cucurbitales in the eurosid I clade comprises almost 2600 species in 109 or 110 genera in eight families of very different sizes, morphologies, and ecologies, occurring in both tropical and temperate zones; the order is a case of divergent evolution. In contrast, some members show convergent evolution with unrelated groups, in which ecological or physical pressures have driven similar solutions, including analogous structures; some species are trees whose foliage resembles that of the true laurels through such convergence. Speciation in the Cucurbitales has produced a large number of species. They have a pantropical distribution with centers of diversity in Africa, South America, and Southeast Asia. They most likely originated in West Gondwana 67–107 million years ago, so the oldest splits may relate to the break-up of Gondwana; divergence within the group continued from the middle Eocene to the late Oligocene, 45–24 million years ago. The group reached its current distribution by multiple intercontinental dispersal events. One factor was aridification; other groups responded to favorable climatic periods and expanded across the available habitat, occurring as opportunistic species across wide distributions, while still others diverged over long periods within isolated areas. The Cucurbitales comprise the families Apodanthaceae, Anisophylleaceae, Begoniaceae, Coriariaceae, Corynocarpaceae, Cucurbitaceae, Tetramelaceae, and Datiscaceae. Synapomorphies of the order include spirally arranged leaves with palmate secondary venation, a valvate calyx or perianth, and elevated stomata, with the calyx/perianth bearing separate styles. The two whorls are similar in texture. "Tetrameles nudiflora" is a tree of immense height and girth; Tetramelaceae, Anisophylleaceae, and Corynocarpaceae are tall canopy trees in temperate and tropical forests. The genus "Dendrosicyos", whose only species is the cucumber tree, is adapted to the arid semidesert island of Socotra.
Deciduous perennial Cucurbitales lose all of their leaves for part of the year, depending on variations in rainfall. The leaf loss coincides with the dry season in tropical, subtropical and arid regions. In temperate or polar climates, the dry season is due to the inability of the plant to absorb water that is available only in the form of ice. Apodanthaceae are obligate endoparasites that emerge only once a year in the form of small flowers that develop into small berries; however, taxonomists have not agreed on the exact placement of this family within the Cucurbitales. Over half of the known members of this order belong to the greatly diverse begonia family Begoniaceae, with around 1500 species in two genera. Before modern DNA-based molecular classifications, some Cucurbitales species were assigned to orders as diverse as Ranunculales, Malpighiales, Violales, and Rafflesiales. Early molecular studies revealed several surprises, such as the nonmonophyly of the traditional Datiscaceae, including "Tetrameles" and "Octomeles", but the exact relationships among the families remain unclear. The lack of knowledge about the order in general is due to many species being found in countries with limited economic means or unstable political environments, factors unsuitable for plant collection and detailed study. Thus the vast majority of species remain poorly determined, and a future increase in the number of species is expected. Under the Cronquist system, the families Begoniaceae, Cucurbitaceae, and Datiscaceae were placed in the order Violales, within the subclass Dilleniidae, with the Tetramelaceae subsumed into the Datiscaceae. Corynocarpaceae was placed in order Celastrales, and Anisophylleaceae in order Rosales, both under subclass Rosidae. Coriariaceae was placed in order Ranunculales, subclass Magnoliidae. Apodanthaceae was not recognised as a family, its genera being assigned to another parasitic plant family, the Rafflesiaceae. The present classification is due to APG III (2009).
Modern molecular phylogenetics suggests the following relationships:
https://en.wikipedia.org/wiki?curid=5411